id (int64, 0–17.2k) | year (int64, 2k–2.02k) | title (string, length 7–208) | url (string, length 20–263) | text (string, length 852–324k)
---|---|---|---|---
15,567 | 2020 |
"Data marketplaces will open new horizons for your company | VentureBeat"
|
"https://venturebeat.com/2020/12/23/data-marketplaces-will-open-new-horizons-for-your-company"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Data marketplaces will open new horizons for your company Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The coming years will see a massive business strategy change driven by a radical increase in data accessibility. Companies that leverage data marketplaces early on will be tomorrow’s giants.
An early signal of the economic forces data marketplaces bring can be seen in the emergence of Snowflake. Back in September, Snowflake debuted on the public market as the largest software IPO ever, and its performance continues to skyrocket. Some view this as a dark horse, but in reality, this massive dollar amount understates the impact foreshadowed by this record-setting financial debut.
With Snowflake’s debut, the data warehouse, which for decades was little more than a repository for retaining a static record of valuable information, steps into the spotlight as the heart of the digital enterprise. Snowflake’s Secure Data Sharing (SDS) offering put the company in a position to soar past giants like Amazon in the data warehouse business. Snowflake makes it easy to integrate data across every silo in a business as well as across a network of ecosystem partners. Essentially, the company pioneered a data marketplace where customers can access data offered by numerous providers. (In fact, Snowflake announced on Monday that it has added thousands of new datasets to the marketplace through a partnership with Knoema and that its venture arm has invested in Knoema to accelerate the delivery of further data sets.) Snowflake’s Data Marketplace represents a new wave of digital innovation, one that welds together the global virtual economy in a manner similar to that of global supply chains. And just like with most other massive innovations, I predict we will see dramatic consequences.
As businesses shift to a virtual model, where every stage of the customer experience is delivered and governed digitally, real-time integration and deployment of data to support the customer experience is now the norm for enterprises. In the process, data moves beyond its traditional role of driving executive decision-making, evolving into the lifeblood of operations. But so much of the data that enables optimal experiences is not accessible to companies simply as a byproduct of their customer interactions. Businesses need to tap into data streams beyond their own.
Traditionally, barriers associated with leveraging externally sourced data have limited its use. New data sources must be found, evaluated, delivered, cleaned, reformatted, integrated, debugged, etc. For decades, IT shops have struggled with this process, known as ETL (extract-transform-load) — three letters that translate to time-money-risk. As a result, even the largest companies typically limit their data suppliers to a handful of proven providers.
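To make that ETL burden concrete, here is a minimal sketch of the extract-transform-load steps a team might script for a single external data feed. The file path, column names, and target table are illustrative assumptions, not details from the article.

```python
import sqlite3
import pandas as pd

def extract(csv_path: str) -> pd.DataFrame:
    # Extract: pull the raw feed delivered by an external data provider.
    return pd.read_csv(csv_path)

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    # Transform: normalize column names, fix types, and drop malformed rows.
    df = raw.rename(columns=str.lower)
    df["event_date"] = pd.to_datetime(df["event_date"], errors="coerce")
    df = df.dropna(subset=["event_date", "customer_id"])
    return df[["customer_id", "event_date", "spend_usd"]]

def load(df: pd.DataFrame, db_path: str = "warehouse.db") -> None:
    # Load: append the cleaned rows into a warehouse table.
    with sqlite3.connect(db_path) as conn:
        df.to_sql("external_spend", conn, if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract("provider_feed.csv")))  # hypothetical feed file
```

Every one of those steps is a place where a new supplier can introduce delay, cost, or error, which is why companies historically kept their list of data providers short.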
A marketplace changes every parameter in the equation of data adoption — it democratizes data access, simplifies discovery, evaluation, and ingestion of data, and ultimately helps make data-driven operations possible for a much broader range of businesses.
The emergence of cloud-based infrastructure providers makes this democratization possible. Additionally, marketplaces simplify the exploration and adoption of new data sets, because they quickly deliver comprehensive collections ready to integrate into applications where they are built and operated. Of equal importance, platforms like Snowflake — along with AWS, Google, Azure and lesser-known specialists — bring together communities of technologists, for whom these platforms serve as their workbenches. By connecting these communities and allowing them to share powerful datasets, the new wave of innovation will continue at an exponential rate.
A quick tour of Snowflake’s Data Marketplace shows that leading brands in the data industry are already displaying their wares. Behind this storefront, data is beginning to flow. Additionally, the AWS Data Marketplace demonstrates how easy it is to speed up data adoption, since AWS has taken an approach that facilitates browsing and sampling. Data providers are encouraged to deploy free-standing extracts of their data, chosen to meet a particular need, with descriptions, pricing, and contracts all available for inspection. It’s the data equivalent of street food.
Notably, some of the hottest selling data “dishes” are built to address the unique challenges of COVID. The pandemic dramatically altered consumer behavior and drove high demand for insights that help brands affirm or adjust existing engagement strategies. It also sped up numerous digital adoption curves, shifting perceptions and creating new channels. This further supports the idea that we will see rapid external data adoption and data marketplace growth.
From where we sit, it’s clear the marketplace is already spreading beyond the walls of the major infrastructure players. A week doesn’t go by without companies with data receiving inbound inquiries from a new data marketplace, or from an existing customer or partner who is opening one. Examples include Neustar, mParticle, Narrative, and Nitrogen.ai. It seems possible that making additional relevant data available within an application may become a new requirement for technology platform success.
It’s fair to say data accessibility is only beginning to be appreciated by potential buyers and sellers. The current menu of options reminds me of early-era search engines when human editors posted thousands of links to interesting content organized by topic. The interface will evolve, and the suppliers of data will refine their offerings as demand expands. Data marketplaces will open up access to now dormant datasets, and help drive a diverse and lucrative revenue stream that businesses have yet to discover.
By its example, Snowflake is signaling the rise of data marketplaces, which I believe will enable massive change in how businesses operate in the next decade.
Michael Gorman is SVP of Product Development and Marketing at ShareThis.
"
|
15,568 | 2021 |
"Inside Twitter’s growing interest in Google Cloud | VentureBeat"
|
"https://venturebeat.com/2021/02/25/inside-twitters-growing-interest-in-google-cloud"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Inside Twitter’s growing interest in Google Cloud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Twitter earlier this month announced it would be expanding its partnership with Google, moving more data and workloads from its servers to the Google Data Cloud.
While Twitter doesn’t yet have plans to port its entire infrastructure to the cloud, its growing relationship with Google’s Data Cloud highlights some of the key challenges companies face as their data stores grow and how employing the right cloud strategy can help them solve these challenges.
From on-premise to the cloud
Before its interest in the cloud, Twitter had long been running on its own solid IT infrastructure. Servers and datacenters on five continents stored and processed hundreds of petabytes of data, served hundreds of millions of users, and had the capacity to scale with the company’s growth. Twitter also developed many in-house tools for data analysis. But in 2016, the company became interested in exploring the benefits of moving all or part of its data to the cloud.
“The advantages, as we saw them, were the ability to leverage new cloud offerings and capabilities as they became available, elasticity and scalability, a broader geographical footprint for locality and business continuity, reducing our footprint, and more,” Twitter senior manager of software engineering Joep Rottinghuis wrote in a blog post in 2019.
After evaluating several options, Twitter partnered with Google Cloud to adopt a hybrid approach in which Twitter kept its immediate operations on its own servers and ported some of its data and workloads to the cloud.
“Large companies depend on collecting massive amounts of data, deriving insights and building experiences on top of this data in order to run the day-to-day aspects of their business and scale as they grow,” Google Cloud product management director Sudhir Hasbe told VentureBeat. “This is very similar to what Google does. At Google , we have nine applications with more than 1 billion monthly active users. Over the past 15-plus years, we have built tools and solutions to process large amounts of data and derive value from it to ensure the best possible experience for our users.” The partnership, which officially started in 2018 , involved migrating Twitter’s “ad-hoc clusters” and “dedicated dense storage clusters” to Google Cloud. Ad-hoc clusters serve special, one-off queries, and the dedicated clusters store less frequently accessed data.
Democratizing data analysis
One of the key demands Google Cloud has helped address is the democratization of data analysis and mining at Twitter. In essence, Twitter wanted to enable its developers, data scientists, product managers, and researchers to derive insights from its constantly growing database of tweets.
Twitter’s previous data analysis tools, such as Scalding, required a programming background, which made them unavailable to less technical users. And tools such as Presto and Vertica had problems dealing with large-scale data.
The partnership with Google gave Twitter’s employees access to tools like BigQuery and Dataflow. BigQuery is a cloud-based data warehouse with built-in machine learning tools and the capability to run queries on petabytes of data. Dataflow enables companies to collect massive streams of data and process them in real time.
“BigQuery and Dataflow are two examples that do not have open source or Twitter-developed counterparts. These are additional capabilities that our developers, PMs, researchers, and data scientists can take advantage of to enable learning much faster,” Twitter platform leader Nick Tornow told VentureBeat.
Twitter currently stores hundreds of petabytes of data in BigQuery, all of which can be accessed and queried via simple SQL-based web interfaces.
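As a rough illustration of the kind of SQL-based access described here, the snippet below uses Google’s BigQuery Python client to run a query and iterate over the results. The project, dataset, and table names are placeholders; the article does not describe Twitter’s actual schema.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Credentials are resolved from the environment (e.g. GOOGLE_APPLICATION_CREDENTIALS).
client = bigquery.Client(project="example-analytics-project")  # placeholder project ID

query = """
    SELECT author_country, COUNT(*) AS tweet_count
    FROM `example-analytics-project.social.tweets`  -- placeholder dataset and table
    WHERE DATE(created_at) = CURRENT_DATE()
    GROUP BY author_country
    ORDER BY tweet_count DESC
    LIMIT 10
"""

# client.query() submits the job; result() blocks until rows are available.
for row in client.query(query).result():
    print(row.author_country, row.tweet_count)
```

The same query can be pasted into the BigQuery web console, which is what makes the warehouse usable for analysts who never touch the client libraries.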
“Many internal use cases, including the vast majority of data science and ML use cases, may start with SQL but will quickly need to graduate to more powerful data processing frameworks,” Tornow said. “The BigQuery Storage API is an important capability for enabling these use cases.”
Breaking the silos
One of the key problems many organizations face is having their data stored in different silos and separate systems. This scattered structure makes it difficult to run queries and perform analysis tasks that require access to data across silos.
“Talking to many CIOs over the past few years, I have seen that there is a huge issue of data silos being created across organizations,” Hasbe said. “Many organizations use Enterprise Data Warehouse for their business reporting, but it is very expensive to scale, so they put a lot of valuable data like clickstream or operational logs in Hadoop. Using this structure made it difficult to analyze all the data.” Hasbe added that merely moving silos to the cloud is not enough, as the data needs to be connected to provide a full scope of insights into an organization.
In the case of Twitter, siloed data required the extra effort of developing intermediate jobs to consolidate data from separate sources into larger workloads. The introduction of BigQuery helped remove many of these intermediate roles by providing interoperability across different data sources. BigQuery can seamlessly query data stored across various sources, such as BigQuery Storage, the Google Cloud Storage data lake, data lakes from cloud providers like Amazon and Microsoft, and Google Cloud Databases.
“The landscape is still fragmented, but BigQuery, in particular, has played an important role in helping to democratize data at Twitter,” Tornow said. “Importantly, we have found that BigQuery provides a managed data warehouse experience at a substantially larger scale than legacy solutions can support.”
An evolving relationship
Today, Twitter still runs its main operations on its own servers. But its relationship with Google has evolved and expanded over the last three years. “In some cases, we will move workloads as-is to the cloud. In other cases, we will rewrite workloads to take advantage of the managed services we’re onboarding on,” Tornow said. “Additionally, we are seeing our developers at Twitter come up with new use cases to take advantage of the streaming capabilities offered by Dataflow, as an example.” Google has also benefited immensely from onboarding a customer as big as Twitter. Throughout the partnership, Twitter has communicated feature requests in areas such as storage and computation slot allocation and dashboards that have helped Google better understand how it can improve its data analytics tools.
Under the new deal declared this month, Twitter will move its processing clusters, which run regular production jobs with dedicated capacity, to Google Cloud. The expanded partnership will also include the transition of offline analytics and machine learning workloads to Google Cloud. Machine learning already plays a key role in a wide range of tasks at Twitter, including image classification, natural language processing, content moderation, and recommender systems. Now Twitter will be able to leverage Google’s vast array of tools and specialized hardware to improve its machine learning capabilities.
“GCP’s ML hardware and managed services will accelerate our ability to improve our models and apply ML in additional product surfaces,” Tornow said. “Improvements in our ML applications often connect directly to improved experience for people using Twitter, such as presenting more relevant timelines or more proactive action on abusive content.”
How to prepare for the cloud
Google’s cloud business is still trailing behind Amazon and Microsoft. But in the past few years, the tech giant has managed to snatch several big-ticket customers, including Wayfair, Etsy, and the Home Depot. Working with Twitter and these companies has helped the Google Cloud team draw important lessons on cloud migration. Hasbe summarizes these into three key tips for organizations considering moving to the cloud:
Break down the silos.
“Focus on all data, not just one type of data when you move to the cloud,” Hasbe said.
Build for today but plan for the future.
“Many organizations are hyper-focused on use cases they are using today and moving them as-is to the cloud,” Hasbe said, adding that cloud migration should be an opportunity to plan for long-term modernization and transformation. “Organizations have to live with the platform they pick for years if not decades,” he said.
Focus on business value-driven use cases.
“Don’t boil the ocean and create a data lake. Start small and pick a use case that has real business value. Deliver that value end to end. This will enable business leaders to see the ROI, enable your teams to get confident in their new abilities, and importantly reduce your time to value or failure … You can learn and pivot as you go,” Hasbe said.
Finally, Hasbe stressed that the responsibility for driving innovation cannot fall only on technology teams. “It has to involve all parts of the organization. Hence, having commitment from leadership across business and technology is key,” he said.
"
|
15,569 | 2021 |
"Data, analytics, and digital transformation | VentureBeat"
|
"https://venturebeat.com/2021/05/13/data-analytics-and-digital-transformation"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data, analytics, and digital transformation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This post was written by Andrew Spanyi, president of Spanyi International.
Accurate, complete, and timely data has always been required for success with digital programs. This is even more the case when it comes to large, enterprise-wide digital transformations.
Yet, a recent New Vantage survey reported that just 24% of respondents said they thought their organization was data-driven, a decline from 37.8% the prior year. Just as analytical tools are coming into widespread use, requiring even more reliable data, it’s becoming increasingly difficult to be a data-driven company.
Puzzling, isn’t it? What is the reason for this plunge in becoming data-driven? The same New Vantage survey reported that cultural challenges — not technological ones — represented the largest impediment, and as many as 92.2% of mainstream companies reported that they have struggled with issues such as organizational alignment, business processes, change management, communication, skill sets, and resistance to change.
There is no shortage of advice on how to become more data driven. For example, SAS and TDWI suggest that better collaboration, improved data quality, and a greater focus on governance are part of the answer. Thomas H. Davenport and Nitin Mittal recommended in Harvard Business Review last year that the initiative be driven top down and that organizations pay attention to the use of cross-functional teams, along with other factors such as leading by example, providing specialized training and using analytics to help employees.
Why is it so hard?
Most executives acknowledge the importance of data in digital transformation, but when it comes to their own decision making, they are more likely to make decisions driven by intuition and gut feel. After all, it’s their many years of experience that has landed them in their position of authority — isn’t it? Also, gathering high-quality data can be problematic, as department heads have hoarded data for decades in hard-to-access Excel spreadsheets, and the IT applications which have often been developed to meet specific departmental needs don’t communicate well with one another. Moreover, bridging data silos is difficult, as such initiatives tend to rely on the IT department, which often has other more pressing priorities. Also, doing the analysis takes time — and it’s quite complicated. The amount of patience needed to overcome the challenges of data transparency and the patience needed in waiting for the time it takes to carry out analytics are not commonly observed traits of typical executive behavior. While there is no one universal recipe, paying attention to organizational alignment, cross functional business processes, and executive education is likely to improve the odds of success.
Improving alignment
Most executives today would agree that organizational alignment is important. In theory, strategies, organizational capabilities, resources, and management systems should all be arranged to support the enterprise’s purpose. In practice, when it comes to digital transformation — let’s just say — it’s complicated. When individual departments place greater emphasis on their own strategy than that of the organization — then alignment suffers. When there is a greater focus on variance to budget performance by department as opposed to customer value creation – then alignment weakens. This is particularly pertinent to digital transformation, as strategy — not technology — drives digital transformations. Only the CEO can provide the needed momentum to improve organizational alignment by instructing department heads to work together in crafting a company-wide strategy and acting in unison on gathering the right data as well as measuring what matters.
Addressing process issues
If an organization focuses solely on workflow and processes inside of departmental boundaries — then fragmentation drives data transparency issues, and data-driven decisions suffer. An enterprise-wide, high-level process context is needed to overcome such fragmentation. According to one recent survey, 26% of survey respondents said they don’t have any data strategy at all, and 70% don’t have what they consider to be a mature data strategy. A back-to-basics approach is useful in creating a high-level process context with a focus on the core activities of getting products/services developed, made, sold and delivered. This approach would highlight the 12 to 16 end-to-end processes that typically determine organizational capability for most firms. A linear depiction of these processes is not enough. An effective framework must also draw attention to the activities, the cross functional roles and the applications and data needed for exceptional performance.
Most organizations will find that paying attention to key cross functional processes such as “order to delivery”, “request to resolution” and “idea to launch” can pay huge dividends in terms of identifying what data is needed for digital success and at the same time improving customer experience. Similarly, focusing on the key internal business processes that have a major impact on employee experience, such as “requisition to onboard” and “requirements to implementation” can create the right context and the needed focus to drive a data driven approach. The right foundation is created by getting people from the various departments involved in such cross functional business processes to work together in data driven environment to solve problems that are known to matter. For example, in the “order to delivery” process, collaboration is typically needed between sales, operations and customer service.
So, it’s not just about forming cross-functional teams that combine people with different backgrounds such as data analytics, business, and technology — although that’s important too. It’s also about creating the right context, one that creates focus and drives cross-functional collaboration and management attention on highly visible business issues, which is even more valuable. This approach is far superior to viewing data requirements one department at a time.
Providing executive training
There’s no shortage of courses on data and analytics.
Wharton, the University of Toronto, and MIT are just a few of the prestigious universities with solid offerings. There’s just one problem — data and analytics can be boring in the abstract. That’s why it’s important to apply analytics to real, pressing problems in the context of end-to-end processes. However, doing so takes both a systemic and systematic approach to big data and analytics in a big-picture context of digital transformation. That is sometimes challenging, as both CEOs and IT departments are often busy putting out fires — but it can be done with discipline. To improve the odds of success, SAS recommends paying attention to factors such as a balanced focus on developing business skills as well as technical skills, discipline in performance measurement, and an accelerated approach to change management.
How are you doing? Instead of just thinking about deploying a given individual technology tool for the benefit of an individual department, leaders need to shift attention to deploying multiple tools with reliable, accessible data in an integrated, agile manner for the benefit of customers and the business.
Focusing on customer experience and a set of highly visible business problems or opportunities in a process context form the foundation for data driven digital transformation. That’s quite different than a traditional, siloed, departmental approach and involves an outside-in view to drive cross functional collaboration.
How are you doing? Consider answering the following questions.
Do individual departments place greater emphasis on their own strategy than that of the organization?
Is process modeling primarily focused on small processes inside of departmental boundaries?
Do process improvement projects tend to have small, incremental improvement goals?
Do key performance indicators (KPIs) have a visible bias towards volume and cost?
Are your executives more concerned about their department than about creating value for customers?
Is organization-wide restructuring carried out frequently?
Do department heads view one another as competitors for the top job as opposed to collaborators?
Are IT projects often launched and executed in response to individual departmental needs?
If you answered “YES” to four or more of the above questions, then your company may find it particularly challenging to apply data-based decision making in your digital programs.
You are probably not alone.
Tom Davenport and Randy Bean have been reporting on data driven transformations for over 8 years and found that companies continue to struggle despite substantial investments in technology and applications. Paying attention to organizational alignment, cross functional business processes, and executive education can change the odds of success.
Andrew Spanyi is President of Spanyi International.
He is a member of the Board of Advisors at the Association of Business Process Professionals and has been an instructor at the BPM Institute.
He is also a member of the Cognitive World Think Tank on enterprise AI.
"
|
15,570 | 2021 |
"Bias and discrimination in AI: whose responsibility is it to tackle them? | VentureBeat"
|
"https://venturebeat.com/2021/06/08/bias-and-discrimination-in-ai-whose-responsibility-is-it-to-tackle-them"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Bias and discrimination in AI: whose responsibility is it to tackle them? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This post was written by Nurit Cohen-Inger, vice-president of product at BeyondMinds. We all have our individual biases hardwired into our perceptions and actions. One might think artificial intelligence (AI) would eliminate our biases and create a level playing field. This is not the case. Since humans create the algorithms that enable AI to learn and make inferences, their biases are inherently incorporated into the code.
The following cases illustrate how detrimental the misuse of AI can be:
A loan processing algorithm that discriminated between husbands and wives sharing the same household
A model for predicting conditional release from jail that discriminated against African Americans
An algorithm for approving drug prescription requests that discriminated against low-income patients
These examples show how AI can foster discrimination, lack of equal opportunity and social exclusion. Once these cases became public, they also caused considerable damage to the companies and organizations that utilized these AI tools.
So, whose responsibility is it to stop the perpetual cycle of bias in AI? There are four key players:
Developers
They create the models that enable widespread usage of AI technologies. As such, they bear the largest share of responsibility for identifying potential biases in how data is processed by AI. Since this is a crucial piece of the puzzle, having this responsibility should be a requirement of the developer’s role.
At every stage of development, a developer should produce a relevant risk plan, following these key stages:
Model Design: The data used in training the AI model is representative and balanced, not skewed toward a specific demographic.
The model doesn’t include any discriminating parameters (such as gender, age, ethnicity or socioeconomic status), even at the expense of reducing model performance.
The model doesn’t perpetuate an existing skew in society in which certain populations are discriminated against.
Development: Once the model is created, it’s important to test “edge cases.” For example, if a developer is working on image recognition software, then he/she is required to ensure that a rich diversity of different ages, ethnicities and genders are included, as well as to factor in variables that may skew results.
Production: Once AI is in production, a continuous monitoring process should be employed to detect any deviations in the data that can derail the algorithm.
There are many pitfalls in implementing AI on large-scale populations, and the developers behind this process are pivotal in identifying these initial biases, preventing them from entering the AI model and informing their client (the organization that hired them for the project) about any potential risks that the system bears.
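The continuous monitoring called for in the production stage above is often implemented as a distribution-drift check. The sketch below uses the population stability index, one common (though not the only) drift metric; the 0.2 threshold and the synthetic data are illustrative assumptions, not guidance from the author.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's production distribution (actual) against its training baseline (expected)."""
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the training range
    expected_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    expected_frac = np.clip(expected_frac, 1e-6, None)  # avoid log(0) and division by zero
    actual_frac = np.clip(actual_frac, 1e-6, None)
    return float(np.sum((actual_frac - expected_frac) * np.log(actual_frac / expected_frac)))

# Synthetic stand-ins: the training baseline versus what production traffic now looks like.
baseline = np.random.normal(0.0, 1.0, 10_000)
live = np.random.normal(0.4, 1.2, 5_000)
if population_stability_index(baseline, live) > 0.2:  # 0.2 is a commonly cited alert level
    print("Feature distribution has shifted; flag the model for review.")
```

A check like this runs on a schedule against recent production inputs, so deviations are surfaced to the team (and, per the author, to the client) before they silently derail the model’s predictions.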
Technology Companies
While developers have considerable responsibility to bear, the organization that employs them is responsible for setting checkpoints to ensure the outcomes of AI are being taken into consideration early. It all starts with hiring a diverse development team. Diverse AI teams tend to outperform like-minded teams, and bringing together AI developers from different backgrounds decreases the risk of creating bias-prone algorithms.
Companies creating AI should also be responsible for reviewing developers’ work to ensure biases are being caught and fixed early, facilitating awareness across the company for these potential biases and providing teams with methodologies and guidelines for mitigating these biases and preventing them from slipping into the code.
One way to do this is to hire a data ethicist or establish an ethics committee, which should be composed of both developers and non-technology staff. Pull in members from legal, HR, product management and other departments; primarily, make sure it’s a group of diverse backgrounds and opinions.
Most tech companies are still focused on getting AI to work in production and have not evolved to ensure its ethical standards.
Enterprises
Organizations that use the AI solutions are also accountable for any ethical violations. These companies are responsible for understanding the potential risks flagged by the developers and taking action to mitigate these risks before the system is fully deployed.
While enterprises might not be thrilled to make an extra expenditure to fix the AI model they ordered (and in some cases, even shut off the solution entirely), the responsibility for preventing a biased model lies on their shoulders.
Enterprises also have the most to lose. End users will view the AI as a part of their brand, and when something goes wrong they may lose customers and brand equity as a result. In some cases, enterprises may even find themselves facing a lawsuit from a customer harmed by a biased algorithm.
Regulators
Government institutions typically lag behind tech companies in putting rules and regulations in place to ensure market fairness. Regulators in the U.S. and the EU still haven’t set any official guidelines that clearly define the ethical red flags that companies must avoid when using AI. Lawmakers will have to move fast to keep up with the rapidly changing ecosystem. Until regulation exists that balances business needs and fairness to society as a whole, more incidents will inadvertently keep occurring.
AI is gaining traction as a game-changing technology that offers great potential in streamlining operations, cutting costs, personalizing products and improving customer experience. At the same time, using this technology at scale can create new ethical dilemmas regarding unintentional discrimination against segments of the population. Setting ground rules for identifying, managing and regulating these risks is of urgent importance to society, not just to the people directly involved in bringing these algorithms to life.
Nurit Cohen Inger, vice-president of products at BeyondMinds.ai, leads the company in defining and driving the product strategy and lifecycle, along with developing and managing a strong team of product managers and designers.
"
|
15,571 | 2021 |
"DataRobot exec talks 'humble' AI, regulation | VentureBeat"
|
"https://venturebeat.com/2021/06/14/datarobot-exec-talks-humble-ai-regulation"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DataRobot exec talks ‘humble’ AI, regulation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Organizations of all sizes have accelerated the rate at which they employ AI models to advance digital business transformation initiatives. But in the absence of any clear-cut regulations, many of these organizations don’t know with any certainty whether those AI models will one day run afoul of new AI regulations.
Ted Kwartler, vice president of Trusted AI at DataRobot , talked with VentureBeat about why it’s critical for AI models to make predictions “humbly” to make sure they don’t drift or one day run afoul of government regulations.
This interview has been edited for brevity and clarity.
VentureBeat: Why do we need AI to be humble? Ted Kwartler: An algorithm needs to demonstrate humility when it’s making a prediction. If I’m classifying an ad banner at 50% probability or 99% probability, that’s kind of that middle range. You have one single cutoff threshold above this line and you have one outcome. Below this line, you have another outcome. In reality, we’re saying there’s a space in between where you can apply some caveats, so a human has to go review it. We call that humble AI in the sense that the algorithm is demonstrating humility when it’s making that prediction.
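A minimal sketch of the prediction band Kwartler describes: instead of a single cutoff, scores inside a middle band are routed to a human reviewer. The 0.40 and 0.60 thresholds are illustrative assumptions; the interview does not specify values.

```python
def humble_decision(score: float, low: float = 0.40, high: float = 0.60) -> str:
    """Route mid-confidence predictions to a person instead of forcing a single cutoff."""
    if score >= high:
        return "auto-accept"   # the model is confident enough to act on its own
    if score <= low:
        return "auto-reject"   # confident in the other direction
    return "human-review"      # the in-between band where the algorithm stays humble

for score in (0.95, 0.52, 0.07):
    print(f"score={score:.2f} -> {humble_decision(score)}")
```

The operational cost is a review queue, which is exactly the trade-off the "humble AI" framing asks organizations to accept for mid-confidence predictions.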
VentureBeat: Do you think organizations appreciate the need for humble AI? Kwartler: I think organizations are waking up. They’re becoming much more sophisticated in their forethought around brand and reputational risk. These tools have an ability to amplify. The team that I help lead is really focused on what we call applied AI ethics, where we help educate our clients to this kind of phenomenon of thinking about the impacts; not just the math of it. Senior leaders maybe don’t understand the math. Maybe they don’t understand the implementation. But they definitely understand the implications at the strategic level, so I do think it’s an emerging field. I think senior leaders are starting to recognize that there’s more reputational and brand risk.
VentureBeat: Do you think government regulatory agencies are starting to figure this out as well? And if so, what are the concerns? Kwartler: It’s interesting. If you’ve read the AI Algorithmic Accountability Act, it’s written very broadly. That’s tough because you have an evolving technological landscape. And there are thresholds in that bill around $50 million in revenue that require an impact assessment if your algorithm is going to impact people. I like the idea of the high-risk use cases that were clearly defined. That’s a little prescriptive, but in a good way, I also like that it’s collaborative, because this is an evolving space. You want this stuff to be aligned to your societal values, not build tools of oppression. At the same time, you can’t just clamp it all down because it has shown economic progress. We all benefit from AI technology. It’s a balancing act.
VentureBeat: Do business executives have a vested interest in encouraging governments to define the AI rules sooner than later? Kwartler: Ambiguity is the tough spot. Organizations are willing to write impact assessments. They’re willing to get third-party audits of their models that are in production. They’re willing to have different monitoring tools in place. A lot of monitoring and model risk management already exists but not for AI, so there are mechanisms by which this can happen. As the technology and use cases improve, how do you then adjust what counts or constitutes as high risk? There is both a need to balance economic prosperity and the guardrails that can operate it.
VentureBeat: What do you make of the European Union’s efforts to regulate AI? Kwartler: I think next-generation technologists welcome that collaboration. It gives us a path forward. The one thing I really liked about it is that it didn’t seem overreaching. It seemed like it was balancing prosperity with security. It seemed like it was trying to be prescriptive enough about high-risk use cases. It seemed like a very reasoned approach. It wasn’t slamming the door and saying “no more AI.” What that does is it leaves AI development to maybe governments and organizations that operate in the dark. You don’t want that either.
VentureBeat: Do you think the U.S. will move in a similar direction? Kwartler: We will interpret it for our own needs. That’s what we’ve done in the past. In the end, we will have some form of regulation. I think that we can envision a world where some sort of model auditing is a real feature.
VentureBeat: That would be a preferable alternative to 50 different states attempting to regulate AI? Kwartler: Yes. There are even regulations coming out in New York City itself. There are regulations in California and Washington that by themselves can dictate it for the whole country. I would be in favor of anything that helps clear up ambiguity so that the whole industry can move forward.
VentureBeat: Do you think there’s going to be an entire school of law built around AI regulations and enforcement? Kwartler: I suspect that there’s an opportunity for good regulation to really help as a protective measure. I’m certainly no legal expert, so I wouldn’t know if there’s going to be ambulance chasers or not. I do think that there is an existing precedent for good regulation for protecting companies. Once that regulation is in place, you remove the ambiguity. That’s a safer space for organizations that want to do good in the world using this technology.
VentureBeat: Do we have the tools needed to monitor AI? Kwartler: I would say the technology for monitoring technology and mathematical equations for algorithmic bias exists. You can also apply algorithms to identify the characteristics of data and data quality checks. You can apply methods. You can also apply some heuristics to — or after the model is in production and making predictions — to mitigate biases or risks. Algorithms, heuristics, and mathematical equations can be used throughout that kind of workflow.
VentureBeat: Bias may not be a one-time event. Do we need to continuously evaluate the AI model for bias as new data becomes available? Do we need some sort of set of best practices for evaluating these AI models? Kwartler: As soon as you build a model, no matter what, it’s going to be wrong. The pandemic has also shown us the input data that you use to train the model does not always equate to the real world. The truth of the matter is that data actually drifts. And once it’s in production, you have data drift, or in the case of language, you have what’s called a concept drift. I do think that there’s a real gap right now. Our AI executive survey showed a very small number of organizations were actually monitoring models in production. I think that is a huge opportunity to help inform these guardrails to get the right behavior. I think the community’s focused a lot on the model behavior, when I think we need to migrate to monitoring and MLOps (machine learning operations) to engender trust way downstream and be less technical.
VentureBeat: Do you think there is a danger a business will evaluate AI models based on their optimal result and then simply work backward to force that outcome? Kwartler: In terms of model evaluation, I think that’s where a good collaboration for AI regulation can come in and say, for instance, if working in hiring you need to use statistical parity to make sure that you have equal representation by protected classes. That’s a very specific targeted metric. I think that’s where we need to go. The mandated organizations should have a common benchmark. We want this type of speed and this type of accuracy, but how does it deal with outliers? How does it deal with a set of design choices left to the data scientist as the expert? Let’s bring more people to the table.
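For concreteness, the statistical parity check Kwartler mentions for hiring can be approximated by comparing selection rates across protected groups. The groups and outcomes below are made up for illustration; real audits use regulator-defined groups and richer criteria.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs; returns selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, picked in decisions:
        totals[group] += 1
        selected[group] += int(picked)
    return {group: selected[group] / totals[group] for group in totals}

# Hypothetical screening outcomes for two groups of applicants.
outcomes = ([("group_a", True)] * 30 + [("group_a", False)] * 70
            + [("group_b", True)] * 18 + [("group_b", False)] * 82)
rates = selection_rates(outcomes)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")  # a large gap means the model fails statistical parity
```

A benchmark of this kind is what Kwartler means by a "very specific targeted metric" that regulation could mandate for high-risk use cases.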
VentureBeat: We hear a lot about the AutoML frameworks being employed to make AI more accessible to end users. What is the role of data scientists in an organization that adopts AutoML? Kwartler: I’m very biased in the sense that I have an MBA and learned data science. I believe that data scientists that are operating in a silo don’t deliver the value because their speed to market with the model is much slower than if you do it with AutoML. Scientists don’t always see the real desire of the business person trying to sponsor the project. They’ll want to build the model and optimize to six decimal points, when in reality it makes no difference unless you’re at some massive scale. I’m a firm believer in AutoML because that allows the data scientists doing a forecast for call centers to go sit with a call center agent and learn from them. I tell data scientists to go see where the data is actually being made. You’ll see all sorts of data integrity issues that will inform your model. That’s harder to do when it takes six months to build a bespoke model. If I can use AutoML to speed up velocity to value, then I have this luxury to go deeper into the weeds.
VentureBeat: AI adoption is relatively slow still. Is AutoML about to speed things up? Kwartler: I’ve worked in large, glacial companies. They move slower. The models themselves took maybe a year plus to move into production. I would say there is going to be data drift if it takes six months to build the model and six months to implement the model. The model is not going to be as accurate as I think it is. We need to increase that velocity to democratize it for people that are closer to the business problem.
"
|
15,572 | 2021 |
"Announcing the winners of the Women in AI Awards at Transform 2021 | VentureBeat"
|
"https://venturebeat.com/2021/07/16/announcing-the-winners-of-the-women-in-ai-awards-at-transform-2021"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Announcing the winners of the Women in AI Awards at Transform 2021 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
One of the goals of Transform 2021 is to bring a broad variety of expertise, views, and experiences to the stage — virtual this year — to illustrate all the different ways AI is changing the world. As part of VentureBeat’s commitment to supporting diversity and inclusion in AI , that also means being mindful of who is being represented on the panels and talks.
The Women in AI Awards ends a week that kicked off with the Women in AI Breakfast, with several talks on inclusion and bias in between. Margaret Mitchell, a leading AI researcher on responsible AI, spoke, as well as executives from Pinterest, Redfin, Intel, and Salesforce.
Selecting the winners
VentureBeat leadership made the final selections out of the more than 100 women who were nominated during the open nomination period. Selecting the winners was difficult because it was clear that each of these nominees is a trailblazer who made outstanding contributions in the AI field.
AI Entrepreneur Award
This award honors women who have started companies showing great promise in AI and considers factors such as business traction, the technology solution offered by the company, and impact in the AI space.
Briana Brownell, founder and CEO of Pure Strategy, was the winner of the AI Entrepreneur Award for 2021. Brownell and her team at Pure Strategy designed “Annie” (ANIE), an Automated Neural Intelligence Engine to help humans understand unstructured data. Annie has been used by doctors, specialists and physician assistants to communicate with patients and with each other across cultural knowledge and to overcome biases, phobias and anxieties.
AI Research Award
This award honors those who have made a significant impact in an area of research in AI, helping accelerate progress either within their organization, as part of academic research, or impacting AI approaches in technology in general.
Dr. Nuria Oliver, chief scientific advisor of the Vodafone Institute, received the AI Research Award for 2021. Oliver is the named inventor of 40 filed patents, including patents on the computational modeling of human behavior via machine learning techniques and on the development of intelligent interactive systems. She’s been named an ACM Distinguished Scientist and Fellow, as well as a Fellow of the IEEE and of Euroway. She also pioneered not-for-profit business and academic research into using anonymized mobile data to track and prevent the spread of Ebola and malaria in Africa, an approach that was redeployed across Africa and Europe in a matter of days in 2020 to track and prevent the spread of COVID-19. What’s more, she has proposed that all of the data scientists involved in her humanitarian efforts work on those projects pro bono.
Responsibility & Ethics in AI Award This award honors those who demonstrate exemplary leadership and progress in the growing topic of responsible AI. This year, there was a tie.
Haniyeh Mahmoudian, the global AI ethicist at DataRobot, and Noelle Silver, founder of the AI Leadership Institute, both received the Responsibility & Ethics Award for 2021.
Mahmoudian was an early adopter of bringing statistical bias measures into developmental processes. She wrote Statistical Parity along with natural language explanations for users, a feat that has resulted in a priceless improvement in model bias that scales exponentially, as the platform is used across hundreds of companies and verticals such as banking, insurance, tech, CPG and manufacturing. A contributing member of the Trusted AI team’s culture of inclusiveness, Mahmoudian operates under the core belief that diversity of thought will result in thoughtful and improved outcomes. Mahmoudian’s research in the risk level for COVID contagion outside of racial bias was used at the Federal level to inform resource allocation and also by Moderna during vaccine trials.
A consistent champion for public understanding of AI and tech fluency, Silver has launched and established several initiatives supporting women and underrepresented communities within AI, including the AI Leadership Institute, WomenIn.AI, and more. She’s a Red Hat Managed OpenShift Specialist in AI/ML, a WAC Global Digital Ambassador, and a Microsoft MVP in Artificial Intelligence, has received numerous other awards, and was a 2019 winner of the VentureBeat Women in AI mentorship award.
AI Mentorship Award
This award honors leaders who helped mentor other women in the field of AI, provided guidance and support, and encouraged more women to enter the field of AI.
Katia Walsh, Levi Strauss’ chief strategy and AI officer, was the recipient of the AI Mentorship Award for 2021. Walsh has been an early influencer for women in AI since her work at Vodafone, actively searching for female candidates on the team and mentoring younger female colleagues, and serving as strategy advisor to Fellowship.AI, a free data science training program. At Levi Strauss, Walsh created a digital upskilling program that is the first of its kind in the industry, with two-thirds of its bootcamp participants being women.
Rising Star Award
This award honors those in the beginning stages of their AI career who have demonstrated exemplary leadership traits.
The Rising Star Award for 2021 was awarded to Arezou Soltani Panah , a research fellow at Deakin University in Australia.
Panah’s work at the Swinburne Social Innovation Research Institute focuses on solving complex social problems such as loneliness, family violence, and social stigma. While her work demands substantial cross-disciplinary research and collaboration with subject matter experts such as social scientists and governmental policy advisors, she has created a range of novel structured machine learning solutions that span those disciplines to produce responsible AI research. Her focus on social inequality and disempowerment uses the power of natural language processing to measure language and algorithmic bias. One such project quantified the extent of gender bias in the coverage of female athletes in Victorian (Australia) news and how often women’s achievements are attributed to others, such as their team, coach, or partner, compared to their male counterparts. Another project looked at gender biases in news reporting on obesity and their consequences for weight stigmatization in public health policies.
Inspiring leaders, meaningful work One thing was very clear from reading over the nominations that came in: There are many leaders doing meaningful work in AI. It was very inspiring to see the caliber of executives and scientists leading the way in AI and making a difference in our world. The list of nominations is full of leaders who will continue to make their mark over the next few years, and there will be more opportunities to hear about their work.
"
|
15,573 | 2,021 |
"How an AI entrepreneur deals with dirty real-world data | VentureBeat"
|
"https://venturebeat.com/2021/07/23/how-an-ai-entrepreneur-deals-with-dirty-real-world-data"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How an AI entrepreneur deals with dirty real-world data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Women in the AI field are making research breakthroughs, spearheading vital ethical discussions, and inspiring the next generation of AI professionals. We created the VentureBeat Women in AI Awards to emphasize the importance of their voices, work, and experience, and to shine a light on some of these leaders. In this series, publishing Fridays, we’re diving deeper into conversations with this year’s winners , whom we honored recently at Transform 2021.
Briana Brownell, winner of VentureBeat’s Women in AI entrepreneur award, didn’t enter this field to earn accolades. She set out to create an AI that would do her job for her — or at least that’s the joke she likes to tell.
Really, she set out to build a company that would combine her data analytics background with AI.
In 2015, she launched Pure Strategy , which uses an Automated Neural Intelligence Engine (ANIE) to help companies understand unstructured data. She and her team invented algorithms from scratch to make it happen, and the system has been used by doctors to communicate with patients and with each other across gaps in cultural knowledge, for example. She also moonlights as a science communicator, inspiring not just young children — especially girls — but everyone around her.
“Whether you’re interested in the intricacies of algorithms to validate unsupervised machine learning models or a high-level future view of humanity and AI, Briana makes you feel comfortable with her genius,” said HCare CEO Roger Sanford, who nominated her for the award.
Brownell told VentureBeat she’s “extremely excited to have won this award.” “It’s a huge honor to me,” she said. “It was definitely a surprise because I think the competition was pretty fierce.” Indeed it was, but we’re pleased to recognize Brownell’s work as an AI entrepreneur, and even more excited to further chat with her about her work, the role of AI entrepreneurship in the broader field, and bringing more women to the table.
This interview has been edited for brevity and clarity.
VentureBeat: Tell us a little about your work and approach to AI. How did you come to launch Pure Strategy? And what drives you overall? Briana Brownell: I started Pure Strategy after spending about 10 years as a data scientist. I was still doing a lot by hand, but there were new techniques coming out that made working with some of those datasets a lot easier. You started seeing natural language understanding, and neural network infrastructure became available in open source packages.
All of that really just accelerated. I jokingly said I wanted to essentially program myself into the computer so that I could create an AI that would do my job for me. And that’s essentially what I set out to do — try to use those technology tools to make it easier and faster to do data analysis.
VentureBeat: And when you were creating your product ANIE, what were some of the challenges you faced? And how did you overcome them? Brownell: There were a lot of challenges for sure. The first was that many of the algorithms we use weren’t actually invented yet. And so we have a whole suite of proprietary methods that make our platform perform at the level it needs to. And so that was really a challenge because it was a lot of trial and error and a lot of building the system out so that it would generalize to a lot of different cases. The second one was being able to find and analyze the data that we needed. The size and scale of the datasets we use for training made it extremely difficult to program things efficiently. I would, let’s say, set a neural network to train, and then I’d have to wait 20 or 30 minutes for it to do the first step. And so that took a lot of time and was a real challenge.
VentureBeat: How do you view AI entrepreneurship versus academic AI research and other aspects of the field? What are their unique roles, and how can they best come together? Brownell: I think one of the challenges people have in going from AI academia to entrepreneurship is that they are very, very good when the data is all correct, the algorithm fits the assumptions of the modeling, and everything is sort of beautifully positioned to fit the case. But in the real world, everything is incomplete and data is dirty. You may not be able to find the data that you need, or you might have to find a way to approximate it. You might have to merge data sources. All kinds of little issues come up when you’re working with real data, and that’s where I think my experience working in the industry, with lots of different kinds of data, and lots of different kinds of problems with data, really came in handy. Because when you’re building a platform that you’re going to try to get a company to use, it doesn’t matter if it’s the perfect algorithm academically; it matters whether or not it works and if it helps the company make the right decision. And so I find that it’s increasingly difficult for people to be really strong in both business outcomes and the theoretical AI area. And so we need translators, essentially, that can work across those lines and understand what’s possible with AI and what’s relevant for the business. So that intersection is really, really important.
VentureBeat: Do you have any pieces of advice for AI-focused entrepreneurs? What often gets overlooked? Or what’s something you wish you’d known earlier on? Brownell: It’s easy to create a general model that will do something, but it’s very difficult to customize that model to work in a specific case and do that at scale. If you look at all major AI company failures, and I don’t know if you’ve followed Element AI, for example. But they had [$257 million] in funding and all this amazing talent, and they struggled with that. And I think that we all underestimate how valuable that customization actually is. I think that’s a critical, critical factor. Big companies really struggle to get their heads around AI because there’s no guarantee it’s going to work. They love to make these huge claims to get in the door, and then so many of these projects fail because they’re over-promising. And so I see that as a big threat to the industry. The graveyard is littered with AI companies that have made huge claims.
VentureBeat: Your nominator said you’re often the only woman in the room, which is, of course, common for women in AI and in tech more broadly. There’s long been talk about this problem and the risks when it comes to AI in particular. But do you feel like anything’s changing? And how does it all play into these ongoing discussions around the importance of ethical and responsible AI? Brownell: At my first job, which was in finance, I was the only woman who worked at the whole company, actually. And at my next job, I actually worked for a female CEO with a lot of women technical staff. And so I thought women in data science and analytics was just the normal state of the world. And then I got a rude awakening when I got into tech. And I think it’s a real shame because there’s a lot of promise with how AI can change societies and the world. And not just more women, but people from underrepresented groups overall at the table can help us solve problems that can’t just be solved when you have group think. And so I’m hoping that as more women start becoming prominent in AI, the types of use cases start becoming more interesting and that more women choose this career. Because there’s a huge need for diverse perspectives and new ways of thinking about how the technology impacts our lives.
VentureBeat: You’re also working on a children’s show that revolves around explaining complex science topics — like AI — to preteens. How did you get into that, and why is science communication important to you? Brownell: It’s extremely important to me. I actually have a few other things I’m working on in that area: I write about physics and astronomy for Discovery, develop K-12 AI content with charities to make it more fun and accessible, and am working with TED on AI explainer videos for kids, too. I think reaching students when they’re young is really important, because you don’t really know what careers are possible when you’re growing up unless you see it in your inner circle. I worked with an engineering association called APEGS, which has a program to encourage more women to consider engineering. And one of the things that they talk about is that a lot of the women who decided to go into engineering, they had a relative or close family friend in the field who could see their skills and encourage them. And so being able to expose people to the kinds of careers that are available, I think, is really critical.
"
|
15,574 | 2,021 |
"Why Redfin's CTO, Bridget Frey, approaches D&I like an engineering project | VentureBeat"
|
"https://venturebeat.com/2021/07/09/why-redfins-cto-bridget-frey-approaches-di-like-an-engineering-project"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event Why Redfin’s CTO, Bridget Frey, approaches D&I like an engineering project Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
One of the major goals of Transform every year, our intensive, week-long applied AI event, is to bring a broad variety of expertise, views, and experiences to the table, to be mindful of the importance of diversity, and to not just give lip service to representation, but focus on inclusion on a large percentage of our panels and talks.
Inspired by the event’s commitment to diversity and inclusivity, we wanted to sit down with some of Transform’s speakers, leaders in the AI industry who make diversity, equity & inclusion (DE&I) a central tenet of their work.
Among them is Bridget Frey, CTO of Redfin. She’ll be speaking at Transform this year about how she works to build diverse and inclusive teams at the company, and the potential for AI to combat redlining and diversify American neighborhoods. We had the opportunity to speak to Frey about what launched her love of tech, how Redfin makes its focus on equity and inclusion more than just lip service, and more.
See the first in the series here.
More to follow.
VB: Could you tell me about your background, and your current role at your company? BF: I’m the CTO of Redfin, where we build technology to make buying and selling homes less complicated and less stressful. Right now, we’re investing heavily in machine learning, services, and the cloud as we scale multiple businesses such as Redfin Mortgage and Redfin Now, our iBuyer business.
When I was five, my dad brought home an Apple IIe, and the two of us learned to code on it together. I’ve spent my career working at high-growth technology companies, and just celebrated 10 years with Redfin this spring.
VB: Any woman in the tech industry, or adjacent to it, is already forced to think about DE&I just by virtue of being “a woman in tech” — how has that influenced your career? BF: I’ve been the only woman in an engineering department more than once in my career. That experience of feeling isolated has had a big influence in how I approach creating a culture that listens to all voices and a team that builds inclusive products. When I became CTO, I had this realization that I was now responsible in a very real way for DE&I on my team, and it inspired me to find ways to make a difference. We still have plenty of work to do, but I firmly believe that the tech industry can improve with focused effort.
VB: Can you tell us about the diversity initiatives you’ve been involved in, especially in your community? BF: When I joined Redfin in 2011, I was the only woman on the Seattle engineering team. Today, 36% of our technical team are women and 10% are Black or Latinx. Some ways we’ve gotten here: We approached DE&I the same way our engineering team approached any engineering project — we made a backlog of bugs, we prioritized them, and we started making changes in how we recruit, train, promote, pay, and so many other areas.
We sourced candidates from alternative backgrounds, and set them up to succeed. We’ve made investments in documentation and training, which let us hire more people in 2020 who don’t have traditional computer-science backgrounds. We also opened roles early to candidates from non-traditional sources.
We started hiring more engineers outside of SF and Seattle. In 2018, we opened an engineering office in Frisco, Texas, and 21% of this team is Black or Latinx. As we hire more fully remote workers, we hope to build on that momentum.
We improved the diversity of our recruiting team. From June 1 to December 31, 2020, the percentage of Redfin’s Black and Latinx recruiters increased from 15% to 23%; 47% of our recruiters are now people of color. Their personal networks, and their own Redfin experience, make our recruiters formidable advocates for hiring people of color.
VB: How do you see the industry changing in response to the work that women, especially Black and BIPOC women, are doing on the ground? What will the industry look like for the next generation? BF: I feel a debt of gratitude to the Black and BIPOC women who are sharing their experiences and pushing our industry to do more. Timnit Gebru, as a recent example, has inspired a whole host of researchers and practitioners to speak out on issues of equity in the field of AI. And it’s spreading beyond the ethical AI folks you’d expect to be the most aware of these issues to a broad set of tech workers who are advocating for systemic change. It’s unfortunately still easy to point to things that are broken in the DE&I space, but I’m optimistic that the tech industry is getting better at confronting these issues in a transparent way and then finding concrete solutions that will make a difference.
[Frey’s talk is just one of many conversations around DE&I at Transform 2021 next week (July 12-16).
On Monday, we’ll kick off with our third Women in AI breakfast gathering. On Wednesday, we will have a session on BIPOC in AI. On Friday, we’ll host the Women in AI awards. Throughout the agenda, we’ll have numerous other talks on inclusion and bias, including with Margaret Mitchell, a leading AI researcher on responsible AI, as well as with executives from Pinterest, Redfin, and more.] VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
"
|
15,575 | 2,021 |
"Why Salesforce's Kathy Baxter says diversity and inclusion efforts aren't enough | VentureBeat"
|
"https://venturebeat.com/2021/07/12/why-salesforces-kathy-baxter-says-di-efforts-arent-nearly-enough"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why Salesforce’s Kathy Baxter says diversity and inclusion efforts aren’t enough Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
At this year’s Transform we’re stepping up our efforts to build a roster of speakers that reflects the diversity in the industry and highlights the work of leaders who are making a difference.
Among them is Kathy Baxter, Principal Architect, Ethical AI Practice at Salesforce. In 2016, Baxter pitched the role of AI Ethicist to the company’s chief scientist, who pitched it to the CEO, and six days later, it was official. We were excited for the opportunity to speak to her about what the role entails, as well as her thoughts on how the industry is changing, and why focusing on diversity, equity and inclusion (DE&I) efforts isn’t enough.
See the first two in the series: Intel’s Huma Abidi and Redfin’s Bridget Frey.
More to follow.
VB: Could you tell us about your background, and your current role at your company? I received a BS [Bachelor of Science] in Applied Psychology and a MS [Masters of Science] in Engineering Psychology/Human Factors Engineering from GA [Georgia] Tech. The degrees combine social science with technology. It also had a strong foundation in research ethics.
I started working on AI ethics “on the side” at Salesforce in 2016, and by 2018, I was working the equivalent of two full-time jobs. I pitched a full-time role of AI Ethicist to our Chief Scientist at the time, Richard Socher, in August of 2018. He agreed this was needed and pitched it to our CEO, Marc Benioff, who also agreed, and six days later, it was official.
My colleague, Yoav Schlesinger, and I partner with research scientists and product teams to identify potential unintended consequences of the AI research and features they create. We work with them to ensure that the development is responsible, accountable, transparent, and inclusive. We also work to ensure that our solutions empower our customers and society. It’s not about AI replacing humans but helping us create better solutions where it makes sense. That means we also want to avoid techno-solutionism, and so we always ask not just ‘Can we do this?’ but ‘Should we?’ We also work with our partners and customers to ensure that they are using our AI technology responsibly and with our government affairs team to participate in the creation of regulations that will ensure everyone is creating and using AI responsibly.
VB: Any woman in the tech industry, or adjacent to it, is already forced to think about DE&I just by virtue of being “a woman in tech” — how has that influenced your career? I have always participated in DE&I events at the places I have worked whether that was educational or recruiting events. I’ve also facilitated courses focused on skills to help URMs [underrepresented minorities] advance to higher levels in companies where we’ve seen a large drop off.
The last few years though, I have stepped away from those efforts because I don’t believe they actually address the root cause of lack of diversity and inclusion. Recruiting events or teaching skills to people in underrepresented groups how to deal with systemic bias puts the emphasis on this being a pipeline problem or that the people facing bias are responsible for fixing it.
In my experience, both of these premises fail to address the most serious cause of lack of diversity, and that’s the inherent bias of those in power to decide who is hired, how people are treated when they are hired, and who gets promoted.
I look for every opportunity to ensure when we are hiring for a role that I have any contact with that we reach out to as wide a field of candidates as possible, that we are aware of our biases during the hiring and promotion discussions, and to always be the person that speaks out when I hear or see non-inclusive behavior happening. It’s about calling people in, not out.
So reminding people when we talk about things as simple as project names, “That’s another male scientist’s name. How about a female’s name or we avoid gendered names altogether?” Or looking around the room in important meetings and observing out loud, “Wow. This is a pretty homogenous group we have here. How can we get some other voices involved?” I also believe in the importance of mentoring and sponsoring others. When I find brilliant folks with expertise that aren’t in the room, in a document, or on an email thread perhaps because they are junior or they aren’t connected with the particular project at hand, I make sure to mention their names and bring them in. It takes work to make sure that hierarchy or organizational charts don’t prevent us from having the best people in discussions because it is worth it for everyone.
VB: How do you see the industry changing in response to the work that women, especially Black and BIPOC women, are doing on the ground? What will the industry look like for the next generation? The ethics in tech, especially ethics in AI work is largely driven by women and BIPOC since they are the ones harmed by non-inclusive practices and products. It’s taken a long time but it’s gratifying to see that the work of Joy Buolamwini and Timnit Gebru on bias in facial recognition technology [FRT] being broadly consumed by regulators, technology creators, and even consumers thanks to the “Coded Bias” video on Netflix.
We still have a long way to go as FRT is increasingly being used in harmful ways because there is no transparency or accountability when harm is found.
I’m also excited to see more and more students graduating from technology programs with a better understanding of ethics and responsibility. As they become a larger part of tech companies, my hope is that we will see a demise of dark design patterns and a greater focus on helping society, not just making money off of it.
This won’t be sufficient so we need meaningful regulation to stop irresponsible companies from racing to the ethical bottom in the pursuit of profits by any means necessary. We need more women, LGBTQ+, Black, and BIPOC members in the government, civil society, and leadership positions in all companies to make significant changes.
[Baxter’s talk is just one of many conversations around diversity and inclusion at Transform 2021 (July 12-16).
On July 12, we’ll kick off with our third Women in AI breakfast gathering. On Wednesday, we will have a session on BIPOC in AI. On Friday, we’ll host the Women in AI awards. Throughout the agenda, we’ll have numerous other talks on inclusion and bias, including with Margaret Mitchell, a leading AI researcher on responsible AI, as well as with executives from Pinterest, Redfin, and more.] VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
"
|
15,576 | 2,021 |
"McAfee's Celeste Fralick explains why diversity is fundamental to cybersecurity | VentureBeat"
|
"https://venturebeat.com/2021/07/13/mcafees-celeste-fralick-explains-why-diversity-is-fundamental-to-cybersecurity"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event McAfee’s Celeste Fralick explains why diversity is fundamental to cybersecurity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This year’s Transform 2021 is in full swing. As we put together this year’s conference, we were conscious of the need to invite panelists from a broad array of experiences, cultures, and backgrounds. It’s increasingly clear that AI isn’t one-size-fits-all, and these business leaders and execs make that tenet a central part of their work.
We had the opportunity to speak with Dr. Celeste Fralick, chief data scientist and senior principal engineer at McAfee about how the pandemic has changed company perspectives, why effective cybersecurity requires diversity of thought, and more.
See the first three in the series: Intel’s Huma Abidi, Redfin’s Bridget Frey, and Salesforce’s Kathy Baxter.
More to follow.
VB: Could you tell me about your background, and your current role at your company? I have worked in data since 1980, with my first project being statistical process control (SPC) for a Texas Instruments manufacturing plant. Fast forward through Fairchild, Medtronic (retired), Intel (retired), and its spin-out McAfee, and I have always gravitated to (or been assigned directly to) statistics and now data science. My PhD in biomedical engineering focused on neural networks — quite timely for the advent of big data! My current role as a chief data scientist and senior principal engineer requires me to interface with data scientists, management, vendors, and customers — so I embrace a satisfying combination of detailed technical work and system-wide implications to analytics every day. I like to draw upon my background in process and product development to enhance my approach to AI. I definitely like to connect the dots with people, operations, projects, and data. Always data! VB: Any woman or BIPOC in the tech industry, or adjacent to it, is already forced to think about DE&I just by virtue of being “a woman or BIPOC in tech” — how has that influenced your career? I try NOT to think about it and just do the best job I can. I have only had a few instances where I had to draw the line or bring up an oversight, but constructive confrontation and data always resulted in great outcomes. It helps to have a predictable emotional “amplitude” in what bothers you and what doesn’t.
VB: Can you tell us about the diversity initiatives you’ve been involved in, especially in your community?
One of the challenges of being a biomedical engineer (BME) by education is that companies tend to forget that 50% of BME students are female and are often proficient in complementary areas, including physics, computer science, and data science. I serve/served on industrial advisory boards for universities’ BME departments and I am always surprised at the university career centers’ sluggishness to realize this degree’s diversity and analytical approach.
The security industry, as a whole, has been male-dominated but I see this as an evolutionary product of the industry itself — e.g., more attacks yield more knowledge about cybersecurity protection and the companies that provide that security, leading potential hires to be aware of this intriguing field. McAfee diligently works to recruit technical females, even ensures at least one woman employee is on interview panels, and meticulously supports our diversity.
One particular internal organization I am very fond of is Women in Security (WISE), where we are exposed to every challenge a woman faces — even financial acumen! The learnings and camaraderie from WISE have been excellent, and I am very proud to have initiated the section in Argentina. WISE is well supported by the C-Suite.
VB: How do you see the industry changing in response to the work that women, especially Black and BIPOC women, are doing on the ground? What will the industry look like for the next generation? Working from home over the past year has brought to light to many companies the intricacies of work-life balance. As women, we have been juggling with that for decades, so it is uplifting to see it recognized mainstream. I believe it has inspired companies to be more flexible, tolerant, and emotionally intelligent with their employees.
I am also finding that I can choose to shop at BIPOC-owned companies or for BIPOC-created products, as even large retail companies are highlighting these afore-hidden gems. We have choices now, compared to just a few years ago — how refreshing! (I grew up amongst Alaskan natives and their products, so I feel like the world at large is becoming more like home.) Both working from home and choosing BIPOC will continue to expand in all industries due to the recent global health and societal inflections.
As for the next generation, I know the cybersecurity industry will continue to increase its women and BIPOC recruitment. I continue to accentuate the unique field of BME, hopefully expanding diversity of thought — adversaries certainly think out-of-the-box, and the more diversity our industry has, the better we will respond to protect our customers.
Finally, as data and Data Science degrees have increased exponentially worldwide, the opportunity for hiring women is excellent — let’s just continue our efforts in K-12 to teach data and statistics from early-on, and influence the next generation that AI can bring great efficiencies, discoveries, and opportunities to everyone.
"
|
15,577 | 2,021 |
"Cindi Howson on tackling microaggressions, fraught conversations, and more | VentureBeat"
|
"https://venturebeat.com/2021/07/28/cindi-howson-on-tackling-a-culture-of-microaggressions-having-the-fraught-conversations-and-more"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cindi Howson on tackling microaggressions, fraught conversations, and more Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
VentureBeat is committed to a world where AI is ethical, diversity and inclusion aren’t just buzzwords, and leaders in the AI space who are working to make that happen get a spotlight. As part of that, we were lucky to sit down with Cindi Howson, chief data strategy officer at ThoughtSpot, who helms the company’s customer enablement initiatives.
In her role at ThoughtSpot, Howson is directing the conversation around what it means to hire the most qualified candidate for a job — and how to make sure that hiring process swings open the door for every candidate, not just the usual white, male suspects. She also advocates on behalf of women in STEM, and focuses on finding and inspiring the BIPOC kids who are neglected at every level of education and too often left behind.
See the others in the series: Intel’s Huma Abidi, Redfin’s Bridget Frey, Salesforce’s Kathy Baxter , and McAfee’s Celeste Fralick.
This conversation has been edited for length and clarity.
VB: Could you tell me about your background, and your current role at your company? CH: I’m chief data strategy officer at ThoughtSpot. There are three parts to it. I work with our top customers, helping them execute their data analytics strategy. I work with our product teams, evolving our product capabilities. And then thought leadership, whether it’s writing, speaking, or hosting the Data Chief podcast, produced by Mission.org, the makers of IT Visionaries, and sponsored by ThoughtSpot.
VB: Can you talk more about the work that you’re doing, both for your own company and your own personal passion projects? CH : I asked one of our interns, the head of Black Engineers Who Code, why is it so hard for Black and BIPOC kids to get into STEM? And she said, “I was not taught calculus — my high school didn’t even offer it. I had to take it online. I got my first laptop only in college.” And so you compare a student coming from that kind of school system or environment with somebody who’s had parents giving them laptops in junior high or elementary school, who sends them to the best schools. By the time they get to Stanford or MIT or wherever else tech companies are recruiting, that gap just gets wider and wider. This is where, as an industry, we have to focus on diversity and inclusion, but I also want us thinking about assessing somebody’s aptitude to gain these skills, no matter which stage of life they’re at or where they’re coming from — if it’s formal education or different boot camps or employer-provided training.
At ThoughtSpot, I also chair or champion our diversity and inclusion efforts that are specifically sponsored by our CEO, Sudheesh Nair. We have divided it into different pillars. The one that I lead is related to volunteering and giving. We look at what organizations we want to donate to in terms of sponsoring funds. But then the volunteering and giving is really around the concept that we have a pipeline problem in the tech industry. If we can get people more excited at an earlier age, then hopefully we can close the diversity and talent gap. This includes working with groups like the Mark Cuban AI Foundation, where we’re sponsoring one of their boot camps in the fall — along with organizations like Girls Plus Data and Women in Data.
We scout out the organizations, and meet with their leaders to understand their approach, and who the students are they seek to influence or get excited about data analytics. The first Girls Plus Data workshop we did was one of the most inspiring or fun classes that I’ve gotten to teach, and I’ve been teaching data analytics for 30 years.
I’m really excited that the one we’ll be doing next will be in my home state. Atlantic City is really a tale of two cities, super rich and super poor, and so this bootcamp gets at socioeconomic diversity and ethnic diversity as well.
VB: Can you talk about being a woman in tech and how that’s influenced your career? CH: Being a woman in tech — what can I tell you? There are good days and there are bad days. I was thinking, is it getting any better? I was having a conversation with a CDO last week. She expressed a concern to me that I hadn’t thought about. Was it more overt 20 years ago, much more blatant, whether it’s sexual harassment or being passed over or being asked ‘Where do I get the coffee?’ — and now it’s much more subtle? The microaggressions that continue to undermine our self-confidence, undermine the possibilities of working in high tech.
I was reading a stat this morning that we don’t have a pipeline problem, but we have a leaky pipeline problem. Forty percent of women who majored in one of these areas leave within the first three to five years, because you just think — this is not a lot of fun.
If I ask somebody, what’s the deadline, or who is accountable for this, I can get called aggressive. The way women and men have responded to the pandemic — I can not skip a beat, even though I have experienced personal losses. You can’t mourn those personal losses. You have to show up. And then when I just get a tremor in my voice because we’re going to lay some people off, I’m labeled emotional. And yet a man who actually gets teary-eyed, he’s called vulnerable and in touch.
VB: Do you see this changing? I know it probably won’t stop just because we actually recognize this, but do you see that bringing these microaggressions up, making them more visible — do you see the industry changing now that we talk about it more? CH: You can look at two comments made in the industry in the last month by very powerful, important, influential CEOs [saying] ‘I’m not concerned about diversity. I’m concerned about merit.’ On the one hand I agree with him — please, don’t ever hire me because I’m a woman. Hire me because I am the best talent. But I also want you to recognize that unconscious biases and lack of a network and no time to network or go play golf may limit the exposure you have had to me. Let’s pay attention to both, but they’re not mutually exclusive.
But I look at it at an individual level, and at a company level. I see this slow but steady progress at ThoughtSpot. And we absolutely have our problems. But I look at some of the progress and the commitment from every single level to be data-driven in it, and to have these hard, uncomfortable conversations. If anything, 2020 has forced others to confront this, but I feel like we’re not better at having these conversations. Come to it from a place of wanting to understand the perspectives and get to a better world for everyone.
VB: It’s frustrating that just having conversations is so fraught.
CH: Well, it is fraught. Whoever’s in power, whether it’s men or Caucasians or — you don’t want to say the wrong thing. You don’t want to offend anyone. I think people have gotten quite hostile. Then it’s just better to be quiet, and that’s not ideal either.
VB: Is there anything else you want to touch on? CH: What I want people to think of is that unconscious bias is real. We call it unconscious because you don’t notice. It doesn’t make you a bad person. It makes you human. The more that people can recognize that, then I think that goes a long way to just acknowledging how difficult this can be. But I believe that a diverse and inclusive world is a better world, and if that’s not good enough, look at McKinsey’s data. Higher profits. Two to three times the revenue growth of less diverse organizations. In an AI-driven world, it’s critical that we get this right.
"
|
15,578 | 2,020 |
"Open source machine learning platform Kubeflow reaches version 1.0 | VentureBeat"
|
"https://venturebeat.com/2020/03/02/open-source-machine-learning-platform-kubeflow-reaches-version-1-0"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Open source machine learning platform Kubeflow reaches version 1.0 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Kubeflow, the freely available machine learning platform cofounded by developers at Google, Cisco, IBM, Red Hat, CoreOS, and CaiCloud, made its debut at the annual Kubecon conference in 2017. Three years later, Kubeflow has reached version 1.0 — its first major release — as the project has grown to hundreds of contributors across more than 30 participating organizations. Companies including US Bank, Chase, GoJek, Amazon Web Services, Bloomberg, Uber, Shopify, GitHub, Canonical, Intel, Alibaba Cloud, TuSimple, Dell, Shell, Arrikto, and Volvo are among those using it in production.
Project coauthors Jeremy Lewi, Josh Bottum, Elvira Dzhuraeva, David Aronchick, Amy Unruh, Animesh Singh, and Ellis Bigelow announced the news in a Medium post this morning. “Kubeflow’s goal is to make it easy for machine learning engineers and data scientists to leverage cloud assets (public or on-premise) for [machine learning] workloads,” they wrote. “With Kubeflow, there is no need for data scientists to learn new concepts or platforms to deploy their applications, or to deal with ingress, networking certificates, etc.” Kubeflow 1.0 graduates a core set of stable components needed to develop, build, train, and deploy models efficiently on Kubernetes, the Google-developed open source container-orchestration system for automating app deployment, scaling, and management. In addition to Kubeflow’s central dashboard UI and the Jupyter notebook controller and web app, Kubeflow 1.0 ships with TensorFlow Operator (TFJob) and PyTorch Operator (for distributed training), kfctl (for deployment and upgrades), and a profile controller and UI for multiuser management.
With Kubeflow 1.0, developers can use the programming notebook platform Jupyter and Kubeflow tools like Kubeflow’s Python software development kit to develop models, build containers, and create Kubernetes resources to train those models. Trained models can be optionally funneled through Kubeflow’s KFServing resource to create, deploy, and auto-scale an inferencing server across a range of hardware, tapping into new KFServing explainability and payload logging features in alpha.
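To make the serving step concrete, here is a minimal sketch (not taken from the Kubeflow announcement) of deploying a trained model through KFServing from Python. It uses the official Kubernetes Python client to create an InferenceService custom resource; the model name, namespace, and storage bucket path are hypothetical, and the serving.kubeflow.org/v1alpha2 API group reflects the alpha-era KFServing that shipped around Kubeflow 1.0.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (use load_incluster_config() inside a pod).
config.load_kube_config()

# Hypothetical InferenceService: a TensorFlow SavedModel exported to a storage bucket.
inference_service = {
    "apiVersion": "serving.kubeflow.org/v1alpha2",
    "kind": "InferenceService",
    "metadata": {"name": "my-model", "namespace": "kubeflow-user"},
    "spec": {
        "default": {
            "predictor": {
                "tensorflow": {"storageUri": "gs://my-bucket/models/my-model"}
            }
        }
    },
}

# KFServing resources are Kubernetes custom objects, so they are created through
# the generic CustomObjectsApi rather than a typed client.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.kubeflow.org",
    version="v1alpha2",
    namespace="kubeflow-user",
    plural="inferenceservices",
    body=inference_service,
)
```

If the manifest matches the KFServing version installed on the cluster, the controller then creates and autoscales the inference endpoint on whatever hardware the cluster provides.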
Kubeflow 1.0 introduces a command-line interface and configuration files that enable it to be deployed with a single command, as well as modules under development like Pipelines.
(Pipelines is partly based on and utilizes libraries from TensorFlow Extended, which was used internally at Google to build machine learning components and then allow developers on various internal teams to utilize that work and put it into production.) Other work-in-progress apps in Kubeflow 1.0 are Metadata (for tracking datasets, jobs, and models); Katib (for hyper-parameter tuning); and distributed operators for other frameworks like xgboost. In future releases of Kubeflow, they’ll be graduated to 1.0.
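For readers who have not used the Pipelines component, the following is a rough sketch of how a pipeline is described with the Kubeflow Pipelines (kfp) SDK of that era (the older ContainerOp-style API). The container images, file paths, and parameter values are placeholders rather than anything that ships with Kubeflow.

```python
import kfp
from kfp import dsl


@dsl.pipeline(name="train-pipeline", description="Toy two-step training pipeline.")
def train_pipeline(input_path: str = "gs://my-bucket/raw.csv"):
    # Step 1: run a (hypothetical) preprocessing container and expose its output file.
    preprocess = dsl.ContainerOp(
        name="preprocess",
        image="gcr.io/my-project/preprocess:latest",
        arguments=["--input", input_path, "--output", "/tmp/clean.csv"],
        file_outputs={"clean_data": "/tmp/clean.csv"},
    )

    # Step 2: train on the preprocessed data; referencing the output wires up the dependency.
    dsl.ContainerOp(
        name="train",
        image="gcr.io/my-project/train:latest",
        arguments=["--data", preprocess.outputs["clean_data"]],
    )


if __name__ == "__main__":
    # Compile to an archive that can be uploaded through the Pipelines UI or API.
    kfp.compiler.Compiler().compile(train_pipeline, "train_pipeline.tar.gz")
```

Each step then runs as its own pod on the cluster when the compiled pipeline is executed.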
As before, Kubeflow enables data scientists and teams to run workloads within namespaces. (Namespaces provide security and resource isolation, and, using Kubernetes resource quotas, admins can limit how much resources an individual or team can consume to ensure fair scheduling.) From the Kubeflow UI, users can launch programming notebooks by choosing one of the pre-built images or entering the URL of a custom image. They can then set how many processors and graphics cards to attach to their notebook, as well as which configuration and secrets parameters to include from repositories and databases. Plus, they’re able to define a TFJob or PyTorch resource to have the controller take care of spinning up and managing processes and configuring them to talk to one another.
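As a small illustration of the resource-quota point above, the snippet below uses the standard Kubernetes Python client to cap what a hypothetical team namespace can request; the namespace name and the limits are made up, and an admin could apply the same quota just as easily with a YAML manifest and kubectl.

```python
from kubernetes import client, config

config.load_kube_config()

# Hypothetical per-team ceiling: 16 CPUs, 64Gi of memory, and 2 GPUs worth of requests.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "16",
            "requests.memory": "64Gi",
            "requests.nvidia.com/gpu": "2",
        }
    ),
)

# Quotas are namespaced, so notebooks and training jobs launched in "team-a"
# are collectively limited to the amounts above.
client.CoreV1Api().create_namespaced_resource_quota(namespace="team-a", body=quota)
```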
“This was a significant investment. It has taken several organizations and a lot of precious resources to get here,” wrote Cisco distinguished engineer and Kubeflow contributor Debo Dutta in a blog post.
“We are very excited about the future of Kubeflow. We would like to see the community get stronger and more diverse, and we would like to request more individuals and organizations to join the community.”
"
|
15,579 | 2,020 |
"Google launches Cloud AI Platform Pipelines in beta to simplify machine learning development | VentureBeat"
|
"https://venturebeat.com/2020/03/11/google-launches-cloud-ai-platform-pipelines-in-beta-to-simplify-machine-learning-development"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google launches Cloud AI Platform Pipelines in beta to simplify machine learning development Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Google today announced the beta launch of Cloud AI Platform Pipelines , a service designed to deploy robust, repeatable AI pipelines along with monitoring, auditing, version tracking, and reproducibility in the cloud. Google’s pitching it as a way to deliver an “easy to install” secure execution environment for machine learning workflows, which could reduce the amount of time enterprises spend bringing products to production.
“When you’re just prototyping a machine learning model in a notebook, it can seem fairly straightforward. But when you need to start paying attention to the other pieces required to make a [machine learning] workflow sustainable and scalable, things become more complex,” wrote Google product manager Anusha Ramesh and staff developer advocate Amy Unruh in a blog post. “A machine learning workflow can involve many steps with dependencies on each other, from data preparation and analysis, to training, to evaluation, to deployment, and more. It’s hard to compose and track these processes in an ad-hoc manner — for example, in a set of notebooks or scripts — and things like auditing and reproducibility become increasingly problematic.” AI Platform Pipelines has two major parts: (1) the infrastructure for deploying and running structured AI workflows that are integrated with Google Cloud Platform services and (2) the pipeline tools for building, debugging, and sharing pipelines and components. The service runs on a Google Kubernetes cluster that’s automatically created as a part of the installation process, and it’s accessible via the Cloud AI Platform dashboard. With AI Platform Pipelines, developers specify a pipeline using the Kubeflow Pipelines software development kit (SDK), or by customizing the TensorFlow Extended (TFX) Pipeline template with the TFX SDK. This SDK compiles the pipeline and submits it to the Pipelines REST API server, which stores and schedules the pipeline for execution.
Above: A schematic of Cloud AI Platform Pipelines.
AI Pipelines uses the open source Argo workflow engine to run the pipeline and has additional microservices to record metadata, handle component I/O, and schedule pipeline runs. Pipeline steps are executed as individual isolated pods in a cluster and each component can leverage Google Cloud services such as Dataflow, AI Platform Training and Prediction, BigQuery, and others. Meanwhile, the pipelines can contain steps that perform graphics card and tensor processing unit computation in the cluster, directly leveraging features like autoscaling and node auto-provisioning.
AI Platform Pipelines runs include automatic metadata tracking using ML Metadata, a library for recording and retrieving metadata associated with machine learning developer and data scientist workflows. Automatic metadata tracking logs the artifacts used in each pipeline step, pipeline parameters, and the linkage across the input/output artifacts, as well as the pipeline steps that created and consumed them.
In addition, AI Platform Pipelines supports pipeline versioning, which allows developers to upload multiple versions of the same pipeline and group them in the UI, as well as automatic artifact and lineage tracking. Native artifact tracking enables the tracking of things like models, data statistics, model evaluation metrics, and many more. And lineage tracking shows the history and versions of your models, data, and more.
Google says that in the near future, AI Platform Pipelines will gain multi-user isolation, which will let each person accessing the Pipelines cluster control who can access their pipelines and other resources. Other forthcoming features include workload identity to support transparent access to Google Cloud Services; a UI-based setup of off-cluster storage of backend data, including metadata, server data, job history, and metrics; simpler cluster upgrades; and more templates for authoring workflows.
"
|
15,580 | 2,021 |
"Microsoft unveils Azure Percept, a family of edge devices optimized for AI | VentureBeat"
|
"https://venturebeat.com/2021/03/02/microsoft-unveils-azure-percept-a-family-of-edge-devices-optimized-for-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft unveils Azure Percept, a family of edge devices optimized for AI Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
During its Microsoft Ignite 2021 conference this week, Microsoft unveiled Azure Percept, a platform of hardware and services aimed at simplifying the ways customers can use AI technologies at the edge. According to the company, the goal of the new offering is to give customers an end-to-end system, from the hardware to the AI and machine learning capabilities.
Edge computing is forecast to be a $6.72 billion market by 2022. Its growth will coincide with that of the deep learning chipset market, which some analysts predict will reach $66.3 billion by 2025. There’s a reason for these rosy projections — edge computing is expected to make up roughly three-quarters of the total global AI chipset business in the next six years.
The Azure Percept platform includes a development kit with a camera called Azure Percept Vision, as well as a “getting started” experience called Azure Percept Studio that guides customers through the AI lifecycle. Azure Percept Studio includes development and training resources, as well as guidance on deploying proof-of-concept ideas.
AI at the edge
Azure Percept Vision and Azure Percept Audio, which ships separately from the development kit, connect to Azure services and come with embedded hardware-accelerated modules that enable speech and vision AI at the edge or during times when the device isn’t connected to the internet. The hardware in the Azure Percept development kit uses the industry standard 80/20 T-slot framing architecture, which Microsoft says will make it easier for customers to pilot new product ideas.
As customers work on their ideas with the Azure Percept development kit, they’ll have access to Azure AI Cognitive Services and Azure Machine Learning models, plus AI models available from the open source community designed to run on the edge, Microsoft says. In addition, Azure Percept devices will automatically connect to Azure IoT Hub, which helps enable communication with security protections between internet of things devices and the cloud.
Azure Percept competes with Google’s Coral, a collection of hardware kits and accessories intended to bolster AI development at the edge. And Amazon recently announced AWS Panorama Appliance, a plug-in appliance that connects to a network and analyzes video from existing cameras with computer vision models for manufacturing, retail, construction, and other industries.
But in addition to announcing first-party hardware, Microsoft says it’s working with third-party silicon and equipment manufacturers to build an ecosystem of devices to run on the Azure Percept platform. Moreover, the company says the Azure Percept team is currently working with select early customers to understand concerns around the responsible development and deployment of AI on devices, providing them with documentation and access to toolkits for their AI implementations.
“We’ve started with the two most common AI workloads, vision and voice [and] sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” Microsoft VP Roanne Sones said. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”
A continued investment
In 2018, Microsoft committed $5 billion to intelligent edge innovation by 2022 — an uptick from the $1.5 billion it spent prior to 2018 — and pledged to grow its IoT partner ecosystem to over 10,000. This investment has borne fruit in Azure IoT Central, a cloud service that enables customers to quickly provision and deploy IoT apps, and IoT Plug and Play, which provides devices that work with a range of off-the-shelf solutions. Microsoft’s investment has also bolstered Azure Sphere; Azure Security Center, its unified cloud and edge security suite; and Azure IoT Edge, which distributes cloud intelligence to run in isolation on IoT devices directly.
Microsoft has competition in Google’s Cloud IoT, a set of tools that connect, process, store, and analyze edge device data. Not to be outdone, Amazon Web Services’ IoT Device Management tracks, monitors, and manages fleets of devices running a range of operating systems and software. And Baidu’s OpenEdge offers a range of IoT edge computing boards and a cloud-based management suite to manage edge nodes, edge apps, and resources such as certification, password, and program code.
But the Seattle company has ramped up its buildout efforts, most recently with the acquisition of CyberX and Express Logic , a San Diego, California-based developer of real-time operating systems (RTOS) for IoT and edge devices powered by microcontroller units. Microsoft has also partnered with companies like DJI, SAP, PTC, Qualcomm, and Carnegie Mellon University for IoT and edge app development.
"
|
15,581 | 2,021 |
"65% of execs can't explain how their AI models make decisions, survey finds | VentureBeat"
|
"https://venturebeat.com/2021/05/25/65-of-execs-cant-explain-how-their-ai-models-make-decisions-survey-finds"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 65% of execs can’t explain how their AI models make decisions, survey finds Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Despite increasing demand for and use of AI tools, 65% of companies can’t explain how AI model decisions or predictions are made. That’s according to the results of a new survey from global analytics firm FICO and Corinium, which surveyed 100 C-level analytic and data executives to understand how organizations are deploying AI and whether they’re ensuring AI is used ethically.
“Over the past 15 months, more and more businesses have been investing in AI tools, but have not elevated the importance of AI governance and responsible AI to the boardroom level,” FICO chief analytics officer Scott Zoldi said in a press release. “Organizations are increasingly leveraging AI to automate key processes that — in some cases — are making life-altering decisions for their customers and stakeholders. Senior leadership and boards must understand and enforce auditable, immutable AI model governance and product model monitoring to ensure that the decisions are accountable, fair, transparent, and responsible.” The study, which was commissioned by FICO and conducted by Corinium, found that 33% of executive teams have an incomplete understanding of AI ethics.
While IT, analytics, and compliance staff have the highest awareness, understanding across organizations remains patchy. As a result, there are significant barriers to building support — 73% of stakeholders say they’ve struggled to get executive support for responsible AI practices.
Implementing AI responsibly means different things to different companies. For some, “responsible” implies adopting AI in a manner that’s ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable — at least in theory.
According to Corinium and FICO, while almost half (49%) of respondents to the survey report an increase in resources allocated to AI projects over the past year, only 39% and 28% say they’ve prioritized AI governance and model monitoring or maintenance, respectively. Potentially contributing to the ethics gap is a lack of consensus among executives about what a company’s responsibilities should be when it comes to AI. The majority of companies (55%) agree that AI for data ingestion must meet basic ethical standards and that systems used for back-office operations must also be explainable. But almost half (43%) say that they don’t have responsibilities beyond meeting regulations to manage AI systems whose decisions might indirectly affect people’s livelihoods.
Turning the tide
What can enterprises do to embrace responsible AI? Combating bias is an important step, but only 38% of companies say that they have bias mitigation steps built into their model development processes. In fact, only a fifth of respondents (20%) to the Corinium and FICO survey actively monitor their models in production for fairness and ethics, while just one in three (33%) have a model validation team to assess newly developed models.
The findings agree with a recent Boston Consulting Group survey of 1,000 enterprises, which found fewer than half of those that achieved AI at scale had fully mature, “responsible” AI implementations. The lagging adoption of responsible AI belies the value these practices can bring to bear. A study by Capgemini found customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and in turn, punish those that don’t.
This being the case, businesses appear to understand the value of evaluating the fairness of model outcomes, with 59% of survey respondents saying they do this to detect model bias. Additionally, 55% say they isolate and assess latent model features for bias, and half (50%) say they have a codified mathematical definition for data bias and actively check for bias in unstructured data sources.
Businesses also recognize that things need to change, as the overwhelming majority (90%) agree that inefficient processes for model monitoring represent a barrier to AI adoption. Thankfully, almost two-thirds (63%) of respondents to the Corinium and FICO report believe that AI ethics and responsible AI will become a core element of their organization’s strategy within two years.
“The business community is committed to driving transformation through AI-powered automation. However, senior leaders and boards need to be aware of the risks associated with the technology and the best practices to proactively mitigate them,” Zoldi added. “AI has the power to transform the world, but as the popular saying goes — with great power comes great responsibility.”
"
|
15,582 | 2,020 |
"Traceable raises $20 million for AI system that shields cloud app APIs from cyberattacks | VentureBeat"
|
"https://venturebeat.com/2020/07/14/traceable-raises-20-million-for-ai-system-that-shields-cloud-app-apis-from-cyberattacks"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Traceable raises $20 million for AI system that shields cloud app APIs from cyberattacks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Traceable , a startup developing an end-to-end cloud app security solution, today emerged from stealth with $20 million in funding. CEO Jyoti Bansal plans to focus on acquiring customers globally while growing Traceable’s team and accelerating R&D.
Cloud-native apps are often built with hundreds or even thousands of API microservices (i.e., loosely coupled services), making them difficult to protect at scale. Gartner predicts that by 2022 API abuses will be the most frequent attack vector, which isn’t surprising, considering API calls represented 83% of web traffic as of 2018.
Traceable works to protect these APIs with machine learning algorithms that analyze app activity from the user and session all the way down to the code. These algorithms learn to distinguish between normal and anomalous behavior with a false positive rate of less than 1%, Bansal claims, and to provide alerts for activity that might deviate from the norm.
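Traceable has not published the internals of TraceAI, but the general pattern of learning "normal" API behavior and flagging deviations can be illustrated with an off-the-shelf anomaly detector. The per-session features, numbers, and thresholds below are invented for the example and are not Traceable's actual model.

```python
# Minimal sketch (not Traceable's actual model): learn "normal" API call
# behavior from per-session features, then flag anomalous sessions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features:
# [calls_per_minute, distinct_endpoints, avg_payload_bytes, error_rate]
normal_sessions = np.array([
    [12, 4, 800, 0.01],
    [9, 3, 760, 0.00],
    [15, 5, 820, 0.02],
    [11, 4, 790, 0.01],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A session probing many endpoints with a high error rate deviates from the baseline
suspect = np.array([[240, 65, 150, 0.4]])
print(model.predict(suspect))  # -1 => anomaly, 1 => normal
```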
“Cloud-native applications have clearly become hackers’ favorite targets. These applications are all API-driven, with APIs exposing business logic to the outside world. Existing application security approaches aren’t built for modern application architectures and use data in a narrow context to detect threat activity,” Bansal told VentureBeat. “Traceable’s approach is to feed TraceAI, our machine learning technology, with extremely rich and highly useful distributed tracing data directly from the application. This combination of real-time trace data and machine learning uniquely enables Traceable to distinguish between legitimate and malicious users and application activity with a high degree of accuracy.”
Bansal, the founder and former CEO of AppDynamics, cofounded Traceable with former AppDynamics VP Sanjay Nagaraj. (Cisco acquired AppDynamics in 2017 for roughly $3.7 billion.) While at AppDynamics, Bansal had a prime view of the growing adoption of cloud-native architectures. He says he soon realized existing approaches to cloud app security fell short — most only provided limited visibility into the app layer and suffered from high false-positive rates, while others were designed to protect traditional apps with well-understood protocols, as opposed to distributed apps using custom APIs.
“One of our customers has approximately 700 API endpoints. These sessions ranged anywhere from 10 API calls to 100 API calls,” explained Nagaraj. “Theoretically, this would come down to 700 to the power of 10, or 700 to the power of 100 possible personas. But like in natural language, applications have their own grammar, where APIs are akin to words in natural language and API interaction is based on a latent grammar. Each of these endpoints had as many as 6,000 response body keys and around 100 request keys and hundreds of headers. The combinatorial complexity of validating this intricate relationship at scale is something that cannot be solved by brute-force analysis or a rules-based engine. Instead, it requires advanced and scalable machine learning techniques.” Bansal says Traceable has a number of paying customers, but to spur adoption of the platform, he and Nagaraj made the underlying distributed tracing technology available in open source. Dubbed Hypertrace, it enables DevOps teams to observe and monitor production applications with the same tracing and observability features powering Traceable.
Bansal’s own Unusual Ventures led Traceable’s $20 million series A round. This is one of the venture firm’s largest commitments since April 2019, when it participated in a $60 million round in Bansal’s Harness.io, a startup that leverages AI to detect the quality of app deployments and automatically roll back failed attempts.
Traceable’s exit from stealth follows the launch of Salt Security , which is also developing a protection solution that discovers APIs and spots vulnerabilities. Salt and Traceable take an approach that is similar — but not identical — to that of Elastic Beam, an API cybersecurity company that was acquired by Denver, Colorado-based Ping Identity in June 2018. Other rivals include Spherical Defense, which adopts a machine learning-based approach to web application firewalls, and Wallarm, which provides an AI-powered security platform for APIs, as well as websites and microservices.
"
|
15,583 | 2,020 |
"Salt Security raises $30 million to automatically protect APIs from cyberattacks | VentureBeat"
|
"https://venturebeat.com/2020/12/08/salt-security-raises-30-million-to-automatically-protect-apis-from-cyberattacks"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salt Security raises $30 million to automatically protect APIs from cyberattacks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Salt Security , which is developing a threat protection solution that discovers APIs and detects vulnerabilities, today raised $30 million. The Palo Alto, California-based startup plans to use the capital to bolster product development, sales and marketing, and customer acquisition efforts well into 2021, following a $20 million raise in June.
Application programming interfaces (APIs) dictate the interactions between software programs. They define the kinds of calls or requests that can be made, how they’re made, the data formats that should be used, and the conventions to follow. As over 80% of web traffic becomes API traffic, they are coming under increasing threat. Gartner predicts that by 2021, 90% of web apps will have more surface area for attacks in the form of exposed APIs than frontends.
Salt’s platform aims to prevent these attacks with a combination of AI and machine learning technologies. It analyzes a copy of the traffic from web, software-as-a-service, mobile, microservice, and internet of things app APIs and uses this process to gain an understanding of each API and create a baseline of normal behavior. From these baselines, Salt identifies anomalies that might be indicators of an attack during reconnaissance, eliminating the need for things like signatures and configurations.
Above: The web dashboard for the Salt Security platform.
Salt leverages dozens of behavioral features to identify anomalies. Its machine learning models are trained to detect when an attacker is probing an API, for instance, because this deviates from typical usage. They analyze the “full communication,” taking into consideration factors like how an API responds to malicious calls. And they correlate attacker activity, enabling Salt to connect probing attempts performed over time to a single attacker, even if the perpetrator attempts to conceal their identity by rotating devices, API tokens, IP addresses, and more.
Confirmed anomalies trigger a single alert to security teams with a timeline of attacker activity.
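Salt has not disclosed its models, but a stripped-down illustration of the baseline-then-deviation idea looks something like the following; the traffic numbers and threshold are invented for the example.

```python
# Minimal baseline-and-deviation sketch (illustrative, not Salt's actual models):
# learn a per-endpoint request-rate baseline, then flag callers that deviate.
from statistics import mean, stdev

# Hypothetical history: requests per minute observed for one API endpoint
history = [22, 19, 25, 21, 23, 20, 24, 22]
mu, sigma = mean(history), stdev(history)

def is_anomalous(requests_per_minute: float, z_threshold: float = 4.0) -> bool:
    """Flag call rates far outside the learned baseline."""
    return abs(requests_per_minute - mu) / sigma > z_threshold

print(is_anomalous(23))   # False: within the normal range
print(is_anomalous(400))  # True: likely scraping or probing
```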
Salt takes an approach similar — but not identical — to that of Elastic Beam, an API cybersecurity startup that was acquired by Denver, Colorado-based Ping Identity in June 2018. Other rivals include Spherical Defense, which adopts a machine learning-based approach to web application firewalls, and Wallarm, which provides an AI-powered security platform for APIs, as well as websites and microservices.
But Salt is doing brisk business, with customers like Gett, City National Bank, TripActions, and Armis. The company claims the size of its customer base has increased 200%.
The series B funding round announced today was led by Sequoia Capital, with participation from existing investors Tenaya Capital, S Capital VC, and Y Combinator. It brings Salt’s total raised to $60 million.
"
|
15,584 | 2,020 |
"ACH fraud is up. Learn how to defeat it. (VB Live) | VentureBeat"
|
"https://venturebeat.com/2020/07/08/ach-fraud-is-up-learn-how-to-defeat-it-vb-live"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Live ACH fraud is up. Learn how to defeat it. (VB Live) Share on Facebook Share on X Share on LinkedIn Presented by Envestnet | Yodlee Nacha’s launch of new account validation requirements is a big event, with new opportunities for financial services companies. Learn what’s changed, how to turn fraud safety into consumer satisfaction, an overview of the online payment landscape, and more in this VB Live event.
Register here for free.
ACH has gained great value for companies in the fintech space as consumers increasingly flock to digital payment accounts like PayPal and Venmo, where their purchases are funded through a bank account rather than through a credit card, says Eric Jamison, VP, Product Management, Envestnet | Yodlee.
“What people in the fintech space have realized, especially those in the startup space, is that there’s value in allowing their customers to use ACH, because every dollar counts,” Jamison says. “It’s helped to raise awareness of the use of the ACH network to allow these types of transactions and, with all the different advancements, to do them with speed.” That speed — the ability to facilitate a transaction in real time through the ACH network, as quickly as a credit card would — has also helped advance the use and the ubiquity of that service in the fintech space, to the point where it’s become one of the primary methods that fintechs are looking at for consumers to transact.
The rising fraud problem — and Nacha’s response
But no good deed goes unpunished.
“As growth has happened in the space, it just becomes a target,” Jamison says. “Fraudsters are like water — they’re going to go through the path of least resistance. If they perceive ACH as one of those paths, they’re going to target it.” Now the industry has done a good job helping mitigate and fight fraud, but it’s always evolving. That’s one of the key issues behind the recent NACHA initiative.
It’s reacting to the market and helping raise awareness of the need to continue to evolve your tactics to mitigate fraud.
Nacha recognized the problem, and its growing impact, as early as 2018. Originally the rule was set to come into effect in early 2020, but it had to be pushed out to March 2021 in order to allow companies the time to develop ways to meet the requirements.
Behind the new Nacha rule
Companies using or initiating ACH web debits have always had the requirement to use a commercially reasonable fraud detection system. The rule change is that one of those commercially reasonable solutions could be account validation.
In essence, what that means is, before a transaction is initiated off an ACH account or demand deposit account (DDA), the institution has to validate that account.
There are a variety of ways that they can do that. There’s pre-note, to make sure that account exists. There are services that can provide information about whether an account exists and the standing it’s in, in conjunction with providing some measure of ownership. Or there are services, like Yodlee’s account verification, that can do an account ownership and balance inquiry in real time, while also obtaining the account and routing number information for the account.
Nacha is open to the different types of services one can use to perform that validation, as long as the provider meets their requirements.
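As an illustration only, an in-stream validation step ahead of an ACH WEB debit might look like the following sketch. The endpoint, request fields, and response fields are entirely hypothetical; a real integration would follow the chosen validation provider's documented API.

```python
# Purely hypothetical sketch of an account verification call made before
# initiating an ACH WEB debit. The endpoint, payload, and response fields are
# invented for illustration; they are not Nacha's or any provider's real API.
import requests

def verify_account(routing_number: str, account_number: str, token: str) -> bool:
    resp = requests.post(
        "https://api.example-verification-provider.com/v1/account/verify",  # hypothetical
        headers={"Authorization": f"Bearer {token}"},
        json={"routingNumber": routing_number, "accountNumber": account_number},
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    # Only initiate the debit if the account exists and ownership checks pass
    return body.get("accountExists", False) and body.get("ownershipVerified", False)
```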
Benefits and opportunities to fintechs in new validation rule
One of the biggest advantages this new rule offers is speed, Jamison says. For a consumer looking to perform a transaction, whether it’s a transfer, or setting up their direct deposit with their employer, or making a purchase or payment, instant account verification is a huge bonus.
With new account verification systems, consumers have the ability to validate that information in-stream — no need to ask them to go off and find their checkbook, which increasingly people don’t have handy, and use the credentials they have at hand, like a valid login for a site, or their face ID or thumbprint on their mobile device.
Account verification should be as easy as taking out your credit card and swiping it, or easier, because people don’t want to futz around, looking for that information. They just want to transact, Jamison explains.
“It just allows that consumer to feel confident that, hey, this provider and the service that I’m looking to use is looking out for my best interests,” he says. “You’re making it easy to verify their information and get them through the transaction and on with their day.” The speed fintechs can offer to their customers, coupled with the fraud mitigation ability that comes from being able to analyze the type of activity you’re running, are major advantages to fintech companies — so now is the time to get ready for the coming validation.
To learn more about the Nacha rule and the account verification systems that preferred partners offer, plus a deeper look at the payments landscape and more, don’t miss this VB Live event.
Don’t miss out! Register here for free.
Attendees will take away:
An in-depth look at the Nacha WEB Debit Account Validation Rule
Broad overview of the current and future online payment landscape
Lessons from the financial services companies in the fray
And more
Speakers:
Jason Carone, Director of Product Management, Silicon Valley Bank
Eric Jamison, VP, Product Management, Envestnet | Yodlee
Evan Schuman, Moderator, VentureBeat
"
|
15,585 | 2,021 |
"Cybereason: 80% of orgs that paid the ransom were hit again | VentureBeat"
|
"https://venturebeat.com/2021/06/16/cybereason-80-of-orgs-that-paid-the-ransom-were-hit-again"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cybereason: 80% of orgs that paid the ransom were hit again Share on Facebook Share on X Share on LinkedIn Bitcoin lost more than 60% of its value in 2022.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Ransomware attacks are on the rise globally as cybercriminals adopt more sophisticated tactics.
The Federal Bureau of Investigation reported a 225% increase in total losses from ransomware in the United States in 2020. According to Cybersecurity Ventures, businesses are under attack every 11 seconds, on average, and damage losses are projected to reach $20 billion worldwide.
Against this backdrop, the Cybereason Global Ransomware Study measured how much financial and reputational damage these attacks wreak on businesses.
Dealing with the aftermath of a ransomware attack can be complicated and costly. The vast majority of organizations experienced significant business impact due to ransomware attacks, including loss of revenue (66%), damage to the organization’s brand (53%), unplanned workforce reductions (29%), and even closure of the business altogether (25%).
Above: This table provides a side-by-side comparison of which solutions were in place that may have protected organizations from a ransomware attack and the investments made by organizations after an attack.
After an organization experienced a ransomware attack, the top 5 solutions implemented included security awareness training (48%), security operations (SOC) (48%), endpoint protection (44%), data backup and recovery (43%), and email scanning (41%). The least deployed solutions post-attack included web scanning (40%), endpoint detection and response (EDR) and extended detection and response (XDR) technologies (38%), antivirus software (38%), mobile and SMS security solutions (36%), and managed security services provider (MSSP) or managed detection and response (MDR) provider (34%). Only 3% of respondents said they did not make any new security investments after a ransomware attack.
Cybereason’s study found that the majority of organizations that chose to pay ransom demands in the past were not immune to subsequent ransomware attacks, often by the same threat actors. In fact, 80% of organizations that paid the ransom were hit by a second attack, and almost half were hit by the same threat group.
This study offers insight into the business impact of ransomware attacks across key industry verticals and reveals data that can be leveraged to improve ransomware defenses. For example, after an organization experienced a ransomware attack, the top two solutions implemented included security awareness training (48%) and security operations (48%). This research underscores that prevention is the best strategy for managing ransomware risk and ensuring your organization does not fall victim to a ransomware attack in the first place.
1,263 cybersecurity professionals took part in the study commissioned by Cybereason and fielded by Censuswide, with participants in varying industries from the United States, United Kingdom, Spain, Germany, France, United Arab Emirates, and Singapore.
Read the full Cybereason Global Ransomware Study.
"
|
15,586 | 2,021 |
"Cybersecurity is the next frontier for AI and ML | VentureBeat"
|
"https://venturebeat.com/2021/06/18/cybersecurity-is-the-next-frontier-for-ai-and-ml"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cybersecurity is the next frontier for AI and ML Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Before diving into cybersecurity and how the industry is using AI at this point, let’s define the term AI first. Artificial intelligence (AI), as the term is used today, is the overarching concept covering machine learning (supervised, including deep learning, and unsupervised), as well as other algorithmic approaches that are more than just simple statistics. These other algorithms include the fields of natural language processing (NLP), natural language understanding (NLU), reinforcement learning, and knowledge representation. These are the most relevant approaches in cybersecurity.
Given this definition, how evolved are cybersecurity products when it comes to using AI and ML? I do see more and more cybersecurity companies leverage ML and AI in some way. The question is to what degree. I have written before about the dangers of algorithms. It’s gotten too easy for any software engineer to play a data scientist. It’s as easy as downloading a library and calling the .start() function. The challenge lies in the fact that the engineer often has no idea what just happened within the algorithm and how to correctly use it. Does the algorithm work with non-normally distributed data? What about normalizing the data before inputting it into the algorithm? How should the results be interpreted? I gave a talk at BlackHat where I showed what happens when we don’t know what an algorithm is doing.
Above: Slide from BlackHat 2018 talk about Why Algorithms Are Dangerous showing what can go wrong by blindly using AI.
So, the mere fact that a company is using AI or ML in their product is not a good indicator of the product actually doing something smart. On the contrary, most companies I have looked at that claimed to use AI for some core capability are doing it ‘wrong’ in some way, shape or form. To be fair, there are some companies that stick to the right principles, hire actual data scientists, apply algorithms correctly, and interpret the data correctly.
How AI is used in security
Generally, I see the correct application of AI in the supervised machine learning camp where there is a lot of labeled data available: malware detection (telling benign binaries from malware), malware classification (attributing malware to some malware family), document and website classification, document analysis, and natural language understanding for phishing and BEC detection. There is some early but promising work being done on graph (or social network) analytics for communication analysis. But you need a lot of data and contextual information that is not easy to get your hands on. Then, there are a couple of companies that are using belief networks to model expert knowledge, for example, for event triage or insider threat detection. But unfortunately, these companies are a dime a dozen.
That leads us into the next question: What are the top use-cases for AI in security? I am personally excited about a couple of areas that I think are showing quite some promise to advance the cybersecurity efforts:
Using NLP and NLU to understand people’s email habits to then identify malicious activity (BEC, phishing, etc.). Initially we have tried to run sentiment analysis on messaging data, but we quickly realized we should leave that to analyzing tweets for brand sentiment and avoid making human (or phishing) behavior judgements. It’s a bit too early for that. But there are some successes in topic modeling, token classification of things like account numbers, and even looking at the use of language.
Leveraging graph analytics to map out data movement and data lineage to learn when exfiltration or malicious data modifications are occurring. This topic is not researched well yet, and I am not aware of any company or product that does this well just yet. It’s a hard problem on many layers, from data collection to deduplication and interpretation. But that’s also what makes this research interesting.
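As noted, no product does this well yet, but the underlying idea can be sketched simply: model data movement as a graph and flag flows that deviate sharply from their historical baseline. The systems, byte counts, and threshold below are invented for illustration.

```python
# Illustrative sketch only (no product implements exactly this here):
# model data movement as a directed graph and flag unusually large flows.
import networkx as nx

G = nx.DiGraph()
# Edges: (source, destination, bytes moved in the last hour) -- hypothetical telemetry
flows = [
    ("crm_app", "analytics_db", 2_000_000),
    ("analytics_db", "report_service", 500_000),
    ("crm_app", "external_s3_bucket", 90_000_000),  # suspiciously large outbound copy
]
for src, dst, nbytes in flows:
    G.add_edge(src, dst, bytes=nbytes)

# Baseline: historical mean bytes per edge (hypothetical numbers)
baseline = {
    ("crm_app", "analytics_db"): 1_800_000,
    ("analytics_db", "report_service"): 450_000,
    ("crm_app", "external_s3_bucket"): 40_000,
}

for src, dst, data in G.edges(data=True):
    expected = baseline.get((src, dst), 0)
    if expected and data["bytes"] > 10 * expected:
        print(f"possible exfiltration: {src} -> {dst} ({data['bytes']} bytes)")
```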
Given the above, it doesn’t look like we have made a lot of progress in AI for security. Why is that? I’d attribute it to a few things:
Access to training data.
Any hypothesis we come up with, we have to test and validate. Without data that’s hard to do. We need complex data sets that are showing user interactions across applications, their data, and cloud apps, along with contextual information about the users and their data. This kind of data is hard to get, especially with privacy concerns and regulations like GDPR putting more scrutiny on processes around research work.
A lack of engineers that understand data science and security.
We need security experts with a lot of experience to work on these problems. When I say security experts, these are people that have a deep understanding (and hands-on experience) of operating systems and applications, networking and cloud infrastructures. It’s rare to find these experts who also have data science chops. Pairing them with data scientists helps, but there is a lot that gets lost in their communications.
Research dollars.
There are few companies that are doing real security research. Take a larger security firm. They might do malware research, but how many of them have actual data science teams that are researching novel approaches? Microsoft has a few great researchers working on relevant problems. Bank of America has an effort to fund academia to work on pressing problems for them. But that work generally doesn’t see the light of day within your off-the-shelf security products. Generally, security vendors don’t invest in research that is not directly related to their products. And if they do, they want to see fairly quick turnarounds. That’s where startups can fill the gaps. Their challenge is to make their approaches scalable. That means not just scaling to a lot of data, but also being relevant in a variety of customer environments with dozens of diverging processes, applications, usage patterns, etc. This then comes full circle with the data problem. You need data from a variety of different environments to establish hypotheses and test your approaches.
Is there anything that the security buyer should be doing differently to incentivize security vendors to do better in AI? I don’t think the security buyer is to blame for anything. The buyer shouldn’t have to know anything about how security products work. The products should do what they claim they do and do that well. I think that’s one of the mortal sins of the security industry: building products that are too complex.
As Ron Rivest said on a panel the other day: “Complexity is the enemy of security.” Raffael Marty is a technology executive, entrepreneur, and investor and writes about artificial intelligence, big data, and the product landscape around the cyber security market.
This story originally appeared on Raffy.ch.
Copyright 2021
"
|
15,587 | 2,017 |
"AWS Macie secures sensitive cloud data using AI | VentureBeat"
|
"https://venturebeat.com/2017/08/14/aws-macie-secures-sensitive-cloud-data-using-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AWS Macie secures sensitive cloud data using AI Share on Facebook Share on X Share on LinkedIn An AWS logo spotted in Singapore.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Amazon Web Services unveiled a new service today that’s aimed at helping businesses automatically protect data stored in the company’s cloud. Called Macie , the service uses machine learning to classify sensitive information and then analyze access patterns to make sure that it’s staying safe.
When users set the system up, they help it classify sensitive information and assign that information a risk score. Macie will then use that training data to automatically classify new data as it comes into AWS going forward. After that, the system uses unsupervised machine learning to figure out regular access patterns for that information. If something changes unexpectedly, Macie will alert a customer’s security team so they can check it out.
The service is designed to protect companies from large-scale data breaches using machine learning. For example, the system should be able to flag if someone new is accessing a large volume of human resources data, which could help prevent a damaging data breach.
Macie is similar to services that other cloud providers and security companies already offer, but benefits from being native to AWS. One of the service’s key pluses is that it can help companies protect themselves from insider threats, since unusual access from a credentialed user will still create a Macie flag, even if their credentials weren’t taken.
Right now, Macie works with data stored in AWS’ Simple Storage Service (S3), and the company says that it will support other kinds of data later this year. It also uses events generated from the company’s CloudTrail logging service. Companies pay for Macie based on the number of gigabytes analyzed and the number of CloudTrail events processed.
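This article covers the original 2017 release of Macie; as a rough illustration of working with the service programmatically, the sketch below uses today's boto3 "macie2" client to pull high-severity findings on S3 data. The region and filter values are assumptions, and the original 2017 API surface differed.

```python
# Hedged illustration using the current boto3 "macie2" client; the original
# 2017 Macie API differed, and the region and filter values here are assumptions.
import boto3

macie = boto3.client("macie2", region_name="us-east-1")

# List IDs of high-severity findings, then fetch their details.
finding_ids = macie.list_findings(
    findingCriteria={"criterion": {"severity.description": {"eq": ["High"]}}}
)["findingIds"]

if finding_ids:
    for finding in macie.get_findings(findingIds=finding_ids)["findings"]:
        bucket = finding.get("resourcesAffected", {}).get("s3Bucket", {}).get("name")
        print(finding["type"], bucket)
else:
    print("no high-severity findings")
```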
AWS already has a number of marquee customers using Macie, including Netflix, Edmunds.com, and Autodesk. At launch, the service is only available through the cloud provider’s Northern Virginia and Oregon data centers.
In addition to Macie, AWS also announced some other security updates today. CloudTrail, a logging service that helps power Macie, will be turned on by default for all customers going forward. Businesses will get 7 days of historical logging data through CloudTrail for free, and can pay for additional history and better visualization of events.
The company’s CloudHSM service, which provides customers with access to hardware security modules stored in cloud data centers for encryption keys, has been updated to better support a cloud deployment model.
The previous iteration of the CloudHSM service will still be available as CloudHSM Classic, so customers with code that depends on the older service will be able to keep running that without modification.
"
|
15,588 | 2,021 |
"AI could help advertisers recover from loss of third-party cookies | VentureBeat"
|
"https://venturebeat.com/2021/03/28/ai-could-help-advertisers-recover-from-loss-of-third-party-cookies"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI could help advertisers recover from loss of third-party cookies Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Options for targeting digital advertising in a way that doesn’t rely on cookies are increasing, thanks to advances in predictive analytics and AI that will ultimately lessen the current dominance of Google, Facebook, and other large-scale content aggregators.
Google announced earlier this month that it will no longer allow third-party cookies to collect data via its Chrome browser.
Many companies have historically relied on those cookies to better target their digital advertising, as the cookies enable digital ad networks and social media sites to create a profile of an end user without knowing specifically who that individual is. While that approach doesn’t necessarily breach anyone’s privacy, it does give many users the feeling that some entity is tracking the sites they visit in a way that makes them uncomfortable.
Providers of other browsers, such as Safari from Apple and the open source Firefox browser, have already abandoned third-party cookies. To be clear, Google isn’t walking away from tracking user behavior. Instead, the company has created a Federated Learning of Cohorts (FLoC) mechanism to track user behavior that doesn’t depend on cookies to collect data. Instead of being able to target an ad to a specific anonymous user, advertisers are presented with an opportunity to target groups of end users that are now organized into cohorts based on data Google still collects.
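As a loose illustration of the cohort idea (this is not Google's actual FLoC algorithm), grouping users by aggregated interest vectors and exposing only a cohort ID to advertisers might look like the following sketch; the interest vectors are invented.

```python
# Illustrative sketch of cohort-based targeting: cluster users by aggregated
# interest vectors rather than tracking individuals. Not Google's FLoC algorithm.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-user interest vectors (e.g., normalized topic visit shares)
users = np.array([
    [0.8, 0.1, 0.1],   # mostly sports content
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],   # mostly cooking content
    [0.2, 0.7, 0.1],
    [0.1, 0.1, 0.8],   # mostly finance content
])

cohorts = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(users)
print(cohorts)  # advertisers see only a cohort id, never an individual profile
```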
It remains to be seen how these initiatives might substantially change the user experience. However, some advertisers are now looking to employ machine learning algorithms and other forms of advanced analytics being made available via digital advertising networks to reduce their dependency on Google, Facebook, Twitter, Microsoft, and other entities that control massive online communities.
For example, Equifax, a credit reporting agency, is working with Quantcast to place advertising closer to where relevant content is being originally created and consumed, said Joella Duncan, director of media strategy for North America at Equifax.
“We want our marketing teams to be able to pull more levers,” Duncan said. “Third-party cookies are stale.” That approach provides the added benefit of lessening an advertiser’s dependency on walled online gardens dominated by a handful of companies, Quantcast CEO Konrad Feldman said.
At the core of the Quantcast platform is an Ara engine that applies machine learning algorithms to data collected from 100 million online destinations in real time. That data is then analyzed using a set of predictive models that surface the behavioral patterns that make it possible to target ad campaigns. Those predictive models are scored a million times per second, in addition to being continuously updated to reflect recent events across the internet. “We’re not dependent on only one technique,” Feldman said.
That capability not only benefits clients such as Equifax, it also enables publishers of original content to retain a larger share of the advertising revenue generated. Google, Facebook, and Microsoft are all now moving toward compensating publishers for content that appears on their sites, but the bulk of the advertising revenue will still wind up in their coffers.
Quantcast is making a case for an alternative approach to digital advertising that would make it more evenly distributed. Advertisers are not likely to walk away from walled online gardens that make it cost-efficient for them to target millions of users. However, many of those same advertisers are looking for a way to more efficiently target narrower audience segments that might have a greater affinity for their products and services based on the content they regularly consume.
The AI and advanced analytics capabilities being embedded within digital advertising platforms may not upend the business models of Google, Facebook, and others, which are based on walled gardens that were themselves constructed using algorithms. But it’s becoming apparent that fissures in the walls of those gardens are starting to appear as other entities in the world of advertising apply their own AI countermeasures.
"
|
15,589 | 2,021 |
"4 alternatives to cookies and device IDs for marketers | VentureBeat"
|
"https://venturebeat.com/2021/05/30/4-alternatives-to-cookies-and-device-ids-for-marketers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 4 alternatives to cookies and device IDs for marketers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
How companies identify and market to audiences across the digital landscape is undergoing a fundamental transformation. We’re not just talking about one of those back-end technical issues that the ad tech community needs to solve. We’re talking about a sea change that has implications for every brand and agency marketer on the planet as we enter a privacy-first world.
By now, the headlines are familiar: Google is discontinuing support for third-party cookies on Chrome. Apple has deprecated its IDFA with iOS 14.5. But that’s just the beginning.
Today’s advertisers need to be seeking alternatives in a world without cookies and device IDs. Unfortunately, there’s no single turnkey replacement forthcoming — but that doesn’t mean advertisers are powerless. Here are a few key areas where your future-proofing efforts should be focused.
1. Cohorts
The term “cohorts” has shot to the top of 2021 industry buzzwords thanks to Google’s Federated Learning of Cohorts (FLoC), but the concept of grouping people based on similar interests isn’t a new one. Right now, Apple and a handful of other providers are also developing new cohort-based solutions for targeting that eliminate the need for individual targeting and the related privacy concerns.
Google has received plenty of criticism for its plans around FLoC , but the overall approach — clustering large groups of people with similar interests together in a way that they remain anonymized — has validity. Today’s advertisers need to be seeking partners that are collaborating and integrating with tech companies to take advantage of emerging cohort-based audience options.
2. Universal identifiers
Even as Google and Apple are deprecating long-relied-upon web and mobile identifiers, a host of companies are racing to provide alternatives in a privacy-compliant way. The resulting universal identifiers — from companies including ID5, LiveRamp, Zeotap, and The Trade Desk (UID 2.0) — offer an interoperable way of tracking users, independent of a tech provider. The advantage of these IDs is that user consent and opt-outs can be managed in a streamlined, transparent fashion. More importantly, universal IDs provide a much cleaner solution compared to cookies, eliminating the need for continuous syncing between the ad tech platforms to be able to trade, while at the same time adding another friction point for the user (i.e., providing their email address).
Although Google has said it will not support these solutions in the Chrome browser, platforms ( including The Trade Desk ) are confident that these solutions will remain available to buyers. In general, universal IDs represent a viable, privacy-focused alternative to cookies — and one that will be particularly important on the open web. From an advertiser standpoint, the key is to embrace a “yes, and” mentality versus an “either, or” stance. By working with partners that integrate with all leading universal ID providers, advertisers can ensure the broadest continued coverage following the final death knell of the cookie.
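While every provider's scheme is different, most email-based universal IDs begin with the same building block: normalize the address a user has consented to share, then hash it so the raw email never travels through the bidstream. The snippet below is a generic sketch of that first step only, not any vendor's actual specification; real systems add salting, encryption, consent signals, and rotation.

```python
import hashlib

def email_to_universal_id(email):
    """Normalize an email address and derive a hashed identifier (illustrative only)."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(email_to_universal_id("  Jane.Doe@Example.com "))
```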
3. On-device solutions
As advertisers look to offset the impact of the move to a cookieless world, it’s also important to be covering their bases on mobile. Going forward, “limit ad tracking” will become the new normal in mobile environments. In fact, only 10-20% of users are expected to opt in to ad tracking with Apple’s IDFA enforcement. As such, advertisers will see a significant impact as it relates to opportunities for one-to-one personalization and reaching consumers at scale, not to mention ad pacing, rotation, and forecasting.
This is where on-device audience solutions come in. The in-app environment combines the best of data and privacy through on-device audiences, a privacy-focused solution that doesn’t rely on mobile device identifiers. Rather, on-device audiences can be generated on the device, and only the audience segments — not the individuals themselves — are available for targeting. Ultimately, the user data never leaves the device. Such solutions can layer device data, app metadata, and advertisement interactions to probabilistically infer behavioral characteristics, such as age groups, gender, interests, and many more, without the need to access personal information such as a mobile device identifier. This approach will become increasingly relevant for mobile advertising, particularly given that Google is expected to follow in Apple’s footsteps and eventually deprecate its mobile device ID (GAID) as well.
4. Contextual targeting
Finally, let’s not forget that our industry has long had the means of targeting ads without the need for personally identifiable information (PII). We’re talking, of course, about contextual targeting, which is understandably gaining traction again as we move into a privacy-first world. The beauty of contextual targeting is that it does not require consent and works across all environments (e.g., desktop, mobile, CTV, etc.). Contextual audiences are built based on the type of media or subject matter that a user consumes digitally, versus the user’s identity. Advances in data processing and machine learning allow for real-time audience generation and activation based on such signals. In other words, the effectiveness of contextual targeting is improving every day.
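As a toy illustration of contextual audience generation (deliberately much simpler than the machine learning systems described above), a page's text can be matched against category lexicons to decide which contextual segments an ad slot qualifies for. The categories and keywords below are invented.

```python
# Invented category lexicons; production systems use learned classifiers instead.
CATEGORIES = {
    "outdoor_sports": {"hiking", "trail", "camping", "kayak"},
    "personal_finance": {"credit", "mortgage", "savings", "loan"},
}

def contextual_segments(page_text):
    """Return the contextual categories whose keywords appear in the page text."""
    words = set(page_text.lower().split())
    return [name for name, keywords in CATEGORIES.items() if words & keywords]

print(contextual_segments("Five credit tips to review before applying for a mortgage"))
```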
Adapting (and measuring) for the future
As we move forward into a very different future for marketers, there’s a need to get back to basics when it comes to how we understand the effectiveness of advertising spend. Our industry’s overreliance on deterministic data is going to need to broaden towards thoughtful probabilistic measurement strategies. The good news is that these techniques, designed to help marketers understand incrementality in a cross-channel reality, are well-established. Going forward, strong media mix modelling will become essential and will ultimately elevate our industry’s omnichannel understanding in ways that today’s last-click tendencies do not.
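At its core, media mix modelling is a regression exercise: relate an outcome such as weekly conversions to spend per channel and read off each channel's estimated incremental contribution. The minimal sketch below fits ordinary least squares to made-up data; real models add adstock, saturation curves, seasonality, and uncertainty estimates.

```python
import numpy as np

# Made-up weekly data: columns are search, social, and CTV spend (in $k).
spend = np.array([[10, 5, 2], [12, 4, 3], [8, 6, 2], [15, 5, 4], [11, 7, 3]], dtype=float)
conversions = np.array([520, 560, 480, 650, 580], dtype=float)

# Add an intercept column and fit ordinary least squares.
X = np.hstack([np.ones((len(spend), 1)), spend])
coefficients, *_ = np.linalg.lstsq(X, conversions, rcond=None)

baseline, per_channel = coefficients[0], coefficients[1:]
print("baseline weekly conversions:", round(baseline, 1))
print("incremental conversions per $k of spend by channel:", np.round(per_channel, 1))
```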
The writing has been on the wall for third-party cookies for years now, and mobile identifiers like the IDFA have already lost a great deal of relevance and reach within today’s targeting landscape. The challenges to identity across the digital and mobile landscapes will continue to escalate. Mobile marketing will become less one-to-one in a privacy-first world, and strong omnichannel marketing strategies will become more important than ever.
What’s required of marketers at this juncture is a reset of their strategic mindset and a tactical pivot on multiple fronts. Now is not the time to be seeking simple solutions to systemic challenges. Rather, now is the time to be implementing a broad array of alternatives to see what works best — and committing to an ongoing test-and-learn loop for the foreseeable future.
Ionut Ciobotaru is Chief Product Officer at Verve Group.
"
|
15,590 | 2,020 |
"Demon's Souls review – A stunning, player-punishing PS5 powerhouse | VentureBeat"
|
"https://venturebeat.com/2020/11/23/demons-souls-review-a-stunning-player-punishing-ps5-powerhouse"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Review Demon’s Souls review – A stunning, player-punishing PS5 powerhouse Share on Facebook Share on X Share on LinkedIn How can you lose when you have a fire sword? Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
The first thing that strikes you in Demon’s Souls isn’t the business end of a bastard sword, but the PlayStation 5 launch title’s stunning visual presentation.
You might not expect a cursed kingdom consumed in thick fog to be the ideal showcase for the new system’s graphical prowess, but I’d argue its default murkiness makes the eye-pleasing details pop even more. Whether seeing its detailed environments staring back at you from a muddy puddle or watching a torch’s flickering flames dance against a dimly lit dungeon’s walls, you’ll continually be amazed by how beautiful this brutal world from developer Bluepoint Games is.
Feel the pain
But pretty ray-tracing effects are far from the only way the remake’s harnessing the PS5’s power. The platform’s DualSense controller’s also pulling its weight to immerse players in ways the 2009 original couldn’t dream of. The combination of haptic feedback and mic-delivered audio cues injects a number of nuanced layers into almost every action. Sure, all the expected effects accompany the sword-clashing combat, but the tech goes so much further than that. Follow a well-timed parry with a deadly riposte and the sensation of your rapier entering a foe’s throat feels – and sounds – different than its blood-soaked exit.
Of course, spend too much time marveling over the game’s capability to sting your senses, and you’ll soon find one of its bosses flossing its teeth with your spine. Demon’s Souls has been rebuilt, but the overhaul hasn’t come at the cost of the original’s brutal difficulty. That said, the PS5 upgrade does bring some fresh features that help smooth the steep challenge a bit. Call up the system’s Activity bar, and you have immediate access to helpful videos, without leaving the game. The short clips – nearly 200 of them – can even be pinned to the side of your screen, allowing you to watch while you’re playing.
Above: Gothic and pretty.
Death to load times
Additionally, those sounds coming from the controller’s mic can tip you off to a hidden threat’s location or an unexpected attack from behind. Best of all, the PS5’s snappy load times ensure you won’t be making sandwiches or taking bathroom breaks between deaths. Tolerating long stretches on the trial-and-error treadmill is much easier when most of that time is spent actually playing rather than staring at loading screens.
While some of these additions and enhancements help soften the blow of the game’s player-punishing encounters, it still takes good old-fashioned practice and perseverance to cleanse the Kingdom of Boletaria of its demonic beasts and suffocating darkness. Thankfully, it follows the original’s tough-but-fair template, retaining the foundation that helped establish and define the Souls-like subgenre. Whether you’re finally learning the ins and outs of a labyrinthine, trap-filled level or slaying a screen-swallowing monster after dozens of attempts, seeing your dedication and determination pay off is extraordinarily rewarding.
Tough, fair, sometimes frustrating
But it’s not just the major accomplishments that fuel this satisfaction. Even minor victories – like discovering how to parry a low-level enemy’s attack that’d previously left you mourning your maxed life meter – can make you feel like you’ve conquered the world. Demon’s Souls is brimming with these moments, providing seemingly endless opportunities to make you feel as though you’ve scaled a previously impassable mountain.
All that said, not everyone will appreciate how blatantly it flaunts its difficulty. While even complete newcomers are likely aware of the game’s reputation for repeatedly bringing players to their knees in combat, they may be surprised to discover that absolutely nothing comes easy in Boletaria.
Above: He’s on fire!
Beyond teaching the basics at the start of your adventure, there are no tutorials to speak of. Items in your inventory often come with vague descriptions, and even an act as simple as leveling your character has its own hurdles to overcome. The latter is usually a time to celebrate your victories in other games, but it’s something you need to suss out in Demon’s Souls. It’s not especially trying, but if you’re expecting a chatty vendor or buxom barkeep to welcome you with open arms and a clear explanation of how to up your stats, well, you’re in the wrong role-playing game.
Proceed with caution
Demon’s Souls is both a fantastic game and a stunning showcase of the next-gen console’s capabilities. And while it’ll serve as the perfect launch title for many, I’m hesitant to wholeheartedly recommend it to everyone that picks up a PS5. Its rewards are immense, but for those unfamiliar with the genre or craving some stress-free fun on their new hardware, I wouldn’t bet against the smile-inducing thrills that come with swinging through NYC as Miles Morales in Spider-Man.
If you’re a seasoned Souls-like fan, then it’s worth picking up a PS5 just for Demon’s Souls. If not, I still recommend having your passport stamped in Boletaria, but know what you’re getting into beforehand and proceed with caution … and plenty of Moon Grass.
Demon’s Souls is out now for the PlayStation 5. The publisher sent us a code for this review.
"
|
15,591 | 2,021 |
"Metroid: Dread brings the franchise to Switch | VentureBeat"
|
"https://venturebeat.com/2021/06/15/metroid-dread-brings-the-franchise-to-switch"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Metroid: Dread brings the franchise to Switch Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Nintendo announced a new 2D Metroid game today during its E3 Direct presentation.
It’s called Metroid: Dread, and it’s coming out October 8.
The trailers also called the title Metroid 5, making it clear that this is a successor to Metroid: Fusion.
We last saw Metroid on the 3DS in 2017 with Metroid: Samus Returns, itself a remake of 1991’s Metroid II: The Return of Samus for the Game Boy. This will be the first entry in the series created for the Switch.
Nintendo also noted that development on Metroid Prime 4 is continuing, although it did not share any new details.
Metroid: Dread has been the rumored name for a new 2D Metroid game since the Nintendo DS era. Nintendo confirmed in a deep-dive video after the Direct that the project had two failed attempts at development, but now it is finally real. MercurySteam, the studio behind Samus Returns, is developing Dread.
The free-aim and melee counter abilities are back from Samus Returns, and you can also do a new slide move.
"
|
15,592 | 2,021 |
"Dell finally spins off VMware stake in $9.7B deal | VentureBeat"
|
"https://venturebeat.com/2021/04/14/dell-finally-spins-off-vmware-stake-in-9-7b-deal"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Dell finally spins off VMware stake in $9.7B deal Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
(Reuters) — Dell Technologies said on Wednesday it would spin off its 81% stake in cloud computing software maker VMware to create two standalone public companies in a move that will help the PC maker reduce its pile of debt.
VMware is currently Dell’s best-performing unit, as it has benefited from companies looking to cut costs and move to the cloud, a shift that has been accelerated by the COVID-19 pandemic.
Shares of Dell rose more than 8% in extended trading.
VMware will distribute a special cash dividend of between $11.5 billion and $12 billion to all of its shareholders, including Dell, which will receive between $9.3 billion and $9.7 billion.
For Dell, the special dividend will help reduce its long-term debt of $41.62 billion, much of which was taken on during its $67 billion acquisition of VMware’s then-majority owner EMC in 2016.
The companies said the deal will simplify their capital structures. Both companies will also enter into a commercial arrangement to continue to align sales activities and for the co-development of solutions.
VMware, whose software helps companies squeeze more work out of datacenter servers, has been looking for a CEO after previous boss Pat Gelsinger was tapped to lead Intel.
Dell first announced the spinoff plans in July last year. The deal is expected to close in the fourth quarter.
"
|
15,593 | 2,020 |
"Salesforce's Einstein platform is now serving over 80 billion predictions per day | VentureBeat"
|
"https://venturebeat.com/2020/11/24/salesforces-einstein-platform-is-now-serving-over-80-billion-predictions-per-day"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salesforce’s Einstein platform is now serving over 80 billion predictions per day Share on Facebook Share on X Share on LinkedIn Salesforce Tower in Indianapolis.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
In September 2016, Salesforce launched Einstein, an AI platform to power predictions across all of the company’s cloud-hosted products. Just over four years after Einstein’s debut, Salesforce says the platform is now delivering more than 80 billion AI-powered predictions every day, up from 6.5 billion predictions in October 2019.
Forrester Research recently wrote that companies “have to rebuild their businesses, not for today, or even next year, but to prepare to compete in an AI-driven future.” Reflecting this changing landscape, IDC expects global spending on AI to more than double to $110 billion in 2024, up from $50 billion in 2020.
Salesforce asserts that Einstein is poised to drive a substantial portion of this growth. Einstein’s predictions can include internal and customer service answers for a given use case, like when to engage with a sales lead, how likely an invoice is to be paid, and which products to recommend to bolster sales. For instance, outdoor apparel and lifestyle brand Orvis taps Einstein to develop personalized conversations with its online shoppers. Internet Creations, a business technology and consulting firm, is using Einstein to forecast long- and short-term cash flow during the pandemic. And outdoor apparel retailer Icebreaker is leveraging Einstein to suggest products for new and existing target audiences.
Beyond the top-line prediction milestone announced today, Salesforce reports a 300% increase in Einstein Bot sessions since February of this year — a 680% year-over-year increase compared to 2019. That’s in addition to a 700% increase in predictions for agent assistance and service automation and a 300% increase in daily predictions for Einstein for Commerce in Q3 2020. As for Einstein for Marketing Cloud and Einstein for Sales, email and mobile personalization predictions were up 67% in Q3, and there was a 32% increase in converting prospects to buyers using Einstein Lead Scoring.
Salesforce also says Einstein Search is fielding more than 1.5 million natural language searches per month, or roughly one search every two seconds. It’s also delivering more than 100 million tailored keyword searches per month.
The Einstein platform is the purview of Salesforce Research, a unit previously led by former Salesforce chief scientist Richard Socher. (Socher, who joined Salesforce through the company’s acquisition of MetaMind in 2016, left in July 2020.) To train its underlying algorithms, Salesforce Research’s hundreds of data scientists draw from sources that include the anonymized content in emails, calendar events, tweets, Chatter activity, and customer data. Salesforce says innovations in Einstein arise from scientific investigations into computer vision, natural language models, translation, and simulation.
Einstein’s voice services recently underwent a reorganization with Salesforce’s decision to shut down Einstein Voice Assistant and Voice Skills in favor of the newly released Salesforce Anywhere app.
At the time, a company spokesperson told VentureBeat that voice capabilities remained “a priority” for Salesforce and that the products it’s discontinuing will inform the development of “reimagined” functionality focused on productivity and collaboration.
"
|
15,594 | 2,020 |
"MessageBird acquires Pusher to bring more real-time communication APIs to businesses | VentureBeat"
|
"https://venturebeat.com/2020/12/16/messagebird-acquires-pusher-to-bring-more-real-time-communication-apis-to-businesses"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MessageBird acquires Pusher to bring more real-time communication APIs to businesses Share on Facebook Share on X Share on LinkedIn MessageBird CEO Robert Vis Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Cloud communications company MessageBird has acquired Pusher in a deal worth $35 million. Pusher enables developers to integrate real-time functionalities into their software, including push notifications and in-app messaging.
As more companies transition to the cloud to boost their digital operations in 2020 and beyond, demand for APIs will continue to grow.
In the past seven months alone, API development platform Postman raised $150 million at a $2 billion valuation, API marketplace RapidAPI secured $25 million, and Skyflow locked down $17.5 million to bring its data privacy API to more businesses.
MessageBird’s acquisition comes two months after it announced a fresh $200 million in funding, valuing the Netherlands-based company at $3 billion, and bolsters its platform ahead of a planned IPO in 2021.
Omnichannel Founded out of Amsterdam in 2011, MessageBird was entirely bootstrapped (and profitable) before its first significant round of funding in 2017. It has amassed an impressive roster of customers including Facebook, Uber, and SAP for a Twilio-like platform that enables app makers to add WhatsApp messaging , voice , SMS , and email functionality to their products through APIs. And earlier this year, MessageBird expanded its horizons when it launched Inbox , a cross-channel contact center platform which it touts as the “Slack for external communications.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With Pusher under its wing, MessageBird gains instant access to a slew of APIs and SDKs which it will integrate under the new MessageBird Pusher brand. That will allow the company to offer enterprises and developers in-app messaging features, location-tracking tools, and push notifications. So if an ecommerce company wants to allow delivery drivers to message customers, customers to see the live location of their delivery driver, and alert the customer when the delivery driver is nearby, MessageBird can help.
Above: Live location-tracking
MessageBird said it was already working on similar features internally before it elected to procure Pusher. “We were still early in the development process, and the Pusher platform and team have allowed us to drastically speed things up,” CEO Robert Vis told VentureBeat, adding that the new features will enable “more omnichannel use cases” for its customers. In other words, companies need to address all the ways that customers expect to be supported in 2020, be that through real-time two-way chats, or information relayed proactively such as through location-tracking or push notifications.
Above: Push(er) notifications
Expanding its coverage to more developer use cases makes a great deal of sense for MessageBird, particularly as it has an existing roster of big-name customers it can cross-sell and upsell to — it has a captive audience. Moreover, at a time when countless companies are scrambling to support the huge global shift to digital driven by the global pandemic, it makes more sense than ever for MessageBird to strike while the iron is hot.
“COVID-19 and the situation the world finds itself in has certainly had an impact [on its product roadmap], but more importantly our customers are seeing the value of offering their customers an omnichannel experience more than ever before,” Vis said. “We’ve seen more demand for features across the board, and the functionality that Pusher brings to our platform is no exception.”
The API economy
API-based platforms such as MessageBird and Pusher allow businesses to build functionalities into their apps without having to develop the infrastructure themselves, enabling companies such as Uber to offer two-way communication features directly inside their apps. This type of technology is crucial for businesses today, as customers’ expectations have evolved from relying on telephone communications to using WhatsApp, web chat, or whatever communication conduit makes most sense.
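For a concrete sense of what consuming such an API looks like, the sketch below publishes a real-time event with Pusher's Python server SDK so that any subscribed client, such as a customer's app displaying a driver's position, receives it immediately. The channel name, event name, payload, and credentials are placeholders, and the exact SDK surface may evolve, so treat this as an illustrative sketch and consult Pusher's documentation.

```python
import pusher

# Placeholder credentials; real values come from the Pusher dashboard.
client = pusher.Pusher(
    app_id="APP_ID",
    key="APP_KEY",
    secret="APP_SECRET",
    cluster="eu",
    ssl=True,
)

# Publish a driver's latest position to everyone subscribed to this order's channel.
client.trigger("order-1234", "driver-location", {"lat": 52.37, "lng": 4.89})
```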
The so-called “ API economy ” was already a burgeoning trend in the technology sphere, as companies transitioned their software from tightly woven, monolithic entities to applications built on microservices. Uber is a good example of this: It kicked off a massive rewrite back in 2015 and moved to service-oriented architecture (SOA), to “break up the monolith into multiple codebases,” as Uber noted at the time.
Founded out of London in 2010, Pusher had raised around $20 million in funding since its inception and already has some high-profile clients such as GitHub, DoorDash, and MailChimp. The company said despite being ingested by MessageBird, it will continue as a standalone product and vowed to support all its existing customers.
As for MessageBird, well, an IPO remains firmly on the agenda, though the company has yet to set a firm date.
“We’re still planning on 2021, and acquisitions like today’s are supporting us on that journey,” Vis said.
"
|
15,595 | 2,021 |
"Sendbird secures $100M to help businesses add chat, voice, and video calling to their apps | VentureBeat"
|
"https://venturebeat.com/2021/04/06/sendbird-secures-100m-to-help-businesses-add-chat-voice-and-video-calling-to-their-apps"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sendbird secures $100M to help businesses add chat, voice, and video calling to their apps Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Sendbird, a platform that makes it easier for businesses to add messaging, voice, and video chat functionality to their apps, has raised $100 million in a series C round of funding at a $1.05 billion valuation.
The raise comes while the so-called API economy is thriving as businesses across the spectrum have been forced to embrace digital transformation, be that through extending online customer service channels or expanding into video-based telehealth. The API economy was on an upward trajectory long before the global pandemic took hold, though, driven in part by a gradual shift from monolithic on-premises software to the cloud and microservices-based applications. Smaller, function-based components are easier to develop and maintain, with individual teams or developers taking responsibility for a single service — and APIs are integral to joining them all together.
Moreover, consumers and end-users increasingly expect to be able to engage with companies directly through their mobile apps. But a company that offers an app-based food delivery service, for example, doesn’t really want to consume resources building their own communications infrastructure to enable customers to chat with their driver — it’s much easier if they can leverage platforms that were custom-built for that purpose. This is where Sendbird comes into play.
“Even before Covid, there has been a shift to more and more of the tasks we accomplish in our lives occurring within mobile apps — online purchases, entertainment, food delivery, and lots of others,” Sendbird cofounder and CEO John S. Kim told VentureBeat. “Brands are increasingly choosing in-app chat over SMS as the way of connecting with users and connecting users with each other within the mobile and sometimes Web experience. These interactions facilitate purchases, provide support, and build loyalty. This is what’s been driving Sendbird’s growth for the last five years and continues to do so as the shift from offline to online continues.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Sendbird powers in-app messaging and communications.
There has been a flurry of activity across the API sphere of late: MessageBird acquired Pusher to expand its real-time communication APIs; Idera, meanwhile, acquired Apilayer, a startup that provides cloud-based APIs to big names such as Amazon, Apple, and Facebook; and RapidAPI acquired Paw to help developers build, test, and manage APIs. And in the funding sphere, companies including MessageBird, Postman, and Kong have all raised large sums of money at multi-billion dollar valuations over the past year.
Founded out of Korea in 2013, Sendbird had largely focused on offering chat and messaging services to developers, but last March it expanded to offer real-time voice- and video-calling too.
Although businesses can already choose from a wide array of free existing tools to connect with clients or customers, they don’t provide sufficient control over the experience, which is why many prefer to create custom solutions themselves in-house.
“Smaller companies typically rely on free services like Zoom or WhatsApp to connect with their customers,” Kim said. “But brands who want to control the branding and user experience, get the benefits of the data and analytics, and integrate conversations into a core workflow — such as connecting a seller with a buyer who has questions — those businesses are going to invest in a great mobile experience and that experience is going to need chat, voice, and video interactions as a core piece.” Target market Sendbird’s typical customer is a mobile-first digital company rather than traditional enterprise clients such as banks or insurance companies. For example, it counts several mobile wallets as customers, such as Indian super app Paytm.
That said, Sendbird does have traditional enterprise clients too, including Korea Telecom and ServiceNow. “We do have traditional industries that did not start cloud first or mobile first as our customers, but going after those companies proactively is not a focus for us,” Kim said.
Prior to now, Y Combinator alum Sendbird had raised around $121 million in funding, the bulk of which arrived via its series B round of funding which closed in 2019. The company’s latest cash injection was spearheaded by Steadfast Capital Ventures, with participation from Emergence Capital, Softbank Vision Fund 2, World Innovation Lab, Iconiq Growth, Tiger Global Management, and Meritech Capital, and it said that it plans to use the funds to “aggressively accelerate its R&D efforts” and hire across its key hubs in San Mateo (California), New York, London, Munich, Singapore, Bengaluru, and Seoul.
"
|
15,596 | 2,019 |
"Electric raises $25 million to automate IT tasks | VentureBeat"
|
"https://venturebeat.com/2019/01/23/electric-raises-25-million-to-automate-it-tasks"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Electric raises $25 million to automate IT tasks Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Nothing annoys an IT team more than answering the same doggone questions about Wi-Fi passwords over and over again. And the truth is that, excepting hardware installations and particularly tricky software configurations, many (if not most) small and mid-sized businesses’ day-to-day IT tasks can now be automated or handled offsite.
That’s music to the ears of companies that have historically paid a premium to keep IT technicians on premises, and at least a few of these businesses have recruited Electric to help them make the transition. The New York startup, which emerged from stealth in December 2016, offers a chatbot-forward interface that integrates with Slack — a simple, no-frills solution that’s helped it attract 301 customers with more than 10,000 employees (up from 90 customers in 2017). Now, as Electric gears up for its next stage of growth, it’s announcing a new funding round that brings its total capital raised to $38 million.
GGV Capital led Electric’s $25 million series B, with participation from existing investor Bessemer Venture Partners. It comes almost a year after Electric’s $9.3 million series A last March and will be used to “further invest” in the platform’s features and its client, sales and marketing, and executive teams. The goal this year is to triple the number of customers, users, and sales to increase revenue 3 to 4 times from 2018 and to expand to 25 U.S. markets. (Electric currently services New York, San Francisco, Boston, Philadelphia, Chicago, Austin, Washington D.C., and others.) Electric also revealed that Jeff Richards, managing partner at GGV Capital, will join Electric’s board of directors and that former Blue Apron executive Rani Yadav and Compass head David Weiner have been hired on as chief operating officer and vice president of sales, respectively.
Above: A screenshot of Electric’s web backend, which managers can use to set up integrations with third-party services.
“This past year has brought exponential growth for Electric, and I’m proud to call us the fastest-growing company in our competitive set,” said Electric founder and CEO Ryan Denehy. “Our sales, product and engineering, and account management teams have scaled up to support a wide range of customers, and most importantly, make those customers happy. With the new funding, we’re excited to continue on this rapid growth trajectory and become the de facto IT solution for small and mid-size offices all over the country.” Denehy — whose previous startup, Swarm, was acquired by Groupon in 2014 — describes Electric’s core service as “AI-driven,” with a heavy reliance on automation. Tasks like setting up an email address, connecting to an enterprise platform, and turning on a firewall are handled largely without human intervention; Electric claims it can resolve 99 percent of IT issues within an hour. In place of support tickets, users ping the Electric bot with “@Electric” on any Slack channel, which identifies the client and software being used, delivers a troubleshooting guide or suggested fix, and registers follow-up requests in a web dashboard.
“In short, we built a data warehouse fed by a human support desk,” Denehy explained via email, “and over the last two years used the data to inform our decisions about what to automate, when to automate it, and to create self-learning systems that rapidly increase the intelligence of our task automations over time.” In the event a more complicated problem arises, Electric connects users with a technician who can undertake systems administration, security and network management, and troubleshooting remotely, or with a local vendor who can perform on-site assistance. Additionally, it affords them and administrators the ability to quickly perform tasks like creating users, updating permissions, deleting files, resetting passwords for apps, adding members to groups, and updating company-issued devices with security and vulnerability patches.
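The Slack-mention workflow described above follows a pattern any team can prototype with the open source Bolt for Python library: listen for app mentions and reply in-channel with a suggested fix, falling back to a human when nothing matches. The sketch below is a generic illustration with canned answers, not Electric's actual implementation, and the environment variable names and responses are placeholders.

```python
import os
from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

# Canned answers standing in for real automation and ticket routing.
PLAYBOOK = {
    "wifi": "The guest Wi-Fi password is posted on the internal wiki under IT > Network.",
    "password": "You can reset your password yourself at https://example.com/reset.",
}

@app.event("app_mention")
def handle_mention(event, say):
    """Reply with a canned fix if one matches; otherwise promise a follow-up."""
    text = event.get("text", "").lower()
    for keyword, answer in PLAYBOOK.items():
        if keyword in text:
            say(answer)
            return
    say("I couldn't resolve that automatically; a technician will follow up shortly.")

if __name__ == "__main__":
    app.start(port=3000)
```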
Companies pay a flat rate of $60 per month per employee for Electric — a fraction of the cost of employing an IT staff, Richards said.
“Small and mid-sized businesses will spend over $600 billion on technology in 2019 — more than $180 billion in the U.S. alone,” he said. “Now more than ever, those companies are struggling to deploy and manage their IT infrastructure. Ryan and the Electric team have built an incredible platform that leverages modern cloud technologies like AI and chat to support customers in a scalable way we haven’t seen before.” Electric employs a team of around 100, and is headquartered in New York.
"
|
15,597 | 2,019 |
"Moveworks raises $75 million to automate tech support | VentureBeat"
|
"https://venturebeat.com/2019/11/14/moveworks-raises-75-million-to-automate-tech-support"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Moveworks raises $75 million to automate tech support Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
IT issues take time, which costs money. A PagerDuty survey found that 38.4% of organizations that take more than 30 minutes to resolve IT incidents see an impact on consumer-facing digital services. Moreover, nearly one-third of departments regularly affected by technical issues say that an hour of downtime costs them $1 million or more.
That’s where Moveworks comes in — or so say Bhavin Shah, Jiang Chen, Vaibhav Nivargi, and Varun Singh. They founded the Mountain View-based company in 2016 to build an AI platform that could resolve IT support issues automatically, and impressively, they’ve already gained a foothold in an IT solutions segment that’s expected to reach $35.98 billion by 2025. Case in point: Moveworks recently signed on LinkedIn, Symantec, Belkin, Freedom, Western Digital, Nutanix, Rambus, Autodesk, Broadcom, and Stitch Fix as customers, and it recorded 300% revenue growth year-over-year.
Moveworks today revealed that it’s raised $75 million in a series B fundraising round, bringing the company’s total amount raised to $105 million following a $30 million round in April 2019. New investors Iconiq Capital, Kleiner Perkins, and Sapphire Ventures led the round with participation from existing backers Lightspeed Venture Partners, Bain Capital Ventures, and Comerica Bank, as well as a personal investment from Microsoft Chairman John W. Thompson.
CEO Shah said the fresh capital will be used to accelerate research and development, with a particular emphasis on natural language understanding and conversational AI.
“Building Moveworks over the past three years has been an exercise in discipline and focus,” said CEO Shah. “The possibilities of AI are so vast that many startups get trapped by the allure of solving every problem their customers present to them. We chose to focus on a single problem that’s been holding IT support back for the last 30 years: resolving IT tickets, quickly and with minimal disruption to employees’ day-to-day jobs. We focused AI on deeply understanding enterprise IT support tickets to solve this very difficult problem. And we’ve succeeded.” Moveworks’ cloud-hosted suite integrates with existing service management systems, identity and access management systems, knowledge bases, email accounts, workflow automation, and facilities management dashboards, applying AI to suss out enterprise language and identify troubleshooting steps for support issues. A stateless engine adapts to changes in conversation flows and enables employees to use natural language to diagnose issues, as well as to identify optimal resolution methods and disambiguate complex requests.
A semantic search component taps context to sift through and extract answers from articles, documents, and FAQs. It complements Moveworks’ remediation solution that fields a range of requests automatically, and that lets employees self-serve email list requests and find coworkers and conference rooms while automatically routing support tickets to the right group.
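As a rough stand-in for the semantic search component described above (a toy sketch that uses TF-IDF similarity rather than the learned language models Moveworks describes), an incoming question can be matched against a small knowledge base and the closest article returned. The articles and question below are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny invented knowledge base standing in for real help-desk articles.
articles = [
    "How to reset your single sign-on password",
    "Requesting access to the sales analytics dashboard",
    "Setting up the corporate VPN on a new laptop",
]

vectorizer = TfidfVectorizer()
article_vectors = vectorizer.fit_transform(articles)

def best_article(question):
    """Return the knowledge-base article most similar to the question."""
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, article_vectors)[0]
    return articles[scores.argmax()]

print(best_article("I forgot my single sign-on password, how do I reset it?"))
```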
Moveworks says its machine learning algorithms continuously improve thanks to a paradigm known as collective learning, where language is broken down into generalized features before it’s consolidated from small data sets into a large corpus. It’s on this corpus that the aforementioned models train, ensuring (at least in theory) that they always outperform models trained on a single data set.
Moveworks competes directly with Electric, which raised $25 million in January for its AI-powered IT task automation platform. But Kleiner Perkins partner Mamoon Hamid believes it’s on track to nab a larger slice of the market.
“Moveworks has become the clear market leader in IT support automation, yet in many ways, the company is still in its first inning,” said Hamid. “I’ve been tracking Moveworks from the moment they signed their first customer and we believe it has the potential to become the main interaction model for a broader set of enterprise workflows. We’re thrilled to partner with the Moveworks team — IT support is just the start.”
"
|
15,598 | 2,021 |
"Moveworks helps enterprises automate IT self-service tasks | VentureBeat"
|
"https://venturebeat.com/2021/03/31/moveworks-helps-enterprises-automate-it-self-service-tasks"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Moveworks helps enterprises automate IT self-service tasks Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
IT issues take time to solve, which can cost enterprises money. A PagerDuty survey found 38.4% of organizations that take more than 30 minutes to resolve IT incidents see an impact on customer-facing services. Moreover, nearly one-third of departments regularly affected by technical problems say that an hour of downtime costs them $1 million or more.
That’s where Moveworks aims to make a difference. The Mountain View, California-based company, which was founded in 2016, is developing an AI platform that can resolve IT support issues automatically. Today marks the launch of Moveworks’ newest product, the Employee Service Platform, which brings together AI and natural language understanding technologies to get employees help across departments. Moveworks says that the system can handle human resources, finance, and facilities issues end-to-end, from the initial request to the final resolution.
According to CEO Bhavin Shah, Moveworks has been laying the groundwork for the Employee Service Platform since the company’s earliest days. Eighteen months ago, after experiencing success in the IT segment — Moveworks counts among its customers Palo Alto Networks, Slack, and LinkedIn — the company began building the platform. More recently, it began inviting customers into an early access program.
“Everything we do at Moveworks is inspired by a simple idea: It shouldn’t take days to get help at work,” Shah said. “Today, after half a decade, Moveworks … delivers instant help to all lines of business.” Beyond answering questions about unlocking accounts, resetting passwords, and provisioning software, the Employee Service Platform helps surface forms, pull answers from knowledge bases, and route requests to the right subject-matter experts. The platform’s engine, which was trained on over 100 million real-world issues, combines domain recognition, semantic search, and deep integrations to address questions with answers from departments’ knowledge bases.
Most enterprises have to wrangle countless data buckets, some of which inevitably become underused or forgotten. A Forrester survey found that between 60% and 73% of all data within corporations is never analyzed for insights or larger trends. The opportunity cost of this unused data is substantial, with a Veritas report pegging it at $3.3 trillion by 2020.
“We engineered a unique approach to understanding the language used in the enterprise, which we deployed prior to this product expansion to resolve IT issues — without predefining specific intents or hard-coding rigid workflows. That approach is our multifaceted intent system,” CTO Vaibhav Nivargi told VentureBeat via email. “At a high level, it is a generalized natural language understanding system. Rather than predefining specific user intents, our multifaceted intent system determines the overarching action and resource type needed to resolve each issue. Once we’ve established this generalized intent, we then evaluate the utility of potential resources.” The Employee Service Platform also transforms resources to display information in a conversational format inside collaboration tools like Slack, Microsoft Teams, and more. For example, users can fill out IT forms without leaving the Moveworks interface in Teams or receive only the pertinent paragraph of a human resources policy after asking Moveworks a question in Slack.
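Nivargi's description suggests intents are factored into facets, an overarching action plus a resource type, rather than one flat label per workflow. The toy classifier below is a hypothetical illustration of that idea; the facet names and keyword cues are invented, and a production system would score facets with learned models rather than keyword lookups.

```python
# Hypothetical two-facet intent model: rather than predefining one flat intent per
# workflow, score an action facet and a resource facet separately, then combine.
ACTION_CUES = {
    "unlock": "UNLOCK", "reset": "RESET", "request": "PROVISION",
    "install": "PROVISION", "find": "LOOKUP",
}
RESOURCE_CUES = {
    "account": "ACCOUNT", "password": "CREDENTIAL", "laptop": "HARDWARE",
    "software": "SOFTWARE", "policy": "KNOWLEDGE", "form": "FORM",
}

def classify(utterance: str) -> tuple[str, str]:
    words = [w.strip(".,?!") for w in utterance.lower().split()]
    action = next((ACTION_CUES[w] for w in words if w in ACTION_CUES), "ANSWER")
    resource = next((RESOURCE_CUES[w] for w in words if w in RESOURCE_CUES), "KNOWLEDGE")
    return action, resource

# ("RESET", "CREDENTIAL") could route to a password-reset skill, while
# ("ANSWER", "KNOWLEDGE") falls back to semantic search over knowledge bases.
print(classify("Please reset my password"))
print(classify("How many vacation days do I get under the new policy?"))
```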
As part of the Employee Service Platform, Moveworks released the Employee Communications module, which enables company leaders to send messages via a cross-platform chatbot. The engine ingests knowledge articles and documents several times per day, enabling the chatbot to answer follow-up questions about messages autonomously.
The chatbot market is expected to reach $1.23 billion by 2025, according to Grand View Research, and there’s reason for its continued growth. Fifty-three percent of service organizations expect to use chatbots within 18 months, according to a Salesforce survey.
And Gartner predicted that chatbots would be powering 85% of all customer service interactions by last year.
“Immediately following the pandemic, we saw a significant increase in the overall volume of tickets submitted to Moveworks — approximately twice as many in March 2020 than in February. Employees across industries needed to learn to use new collaboration tools, order new devices for the home office, look up colleagues’ contact information, troubleshoot Zoom, stay abreast of business continuity plans, and more,” Nivargi said. “Perhaps the most enduring challenge for companies in this work-from-anywhere economy is keeping their employees up-to-date and on the same page. … We responded to the demand by accelerating the creation of our new solution for employee communications. Our customers regularly achieve 50% to 70% engagement with communications campaigns done through Moveworks, compared to around 10% for the average mass email.”
"
|
15,599 | 2,016 |
"Xavier Niel explains 42: the coding university without teachers, books, or tuition | VentureBeat"
|
"https://venturebeat.com/2016/06/16/xavier-niel-explains-42-the-coding-university-without-teachers-books-or-tuition"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exclusive Xavier Niel explains 42: the coding university without teachers, books, or tuition Share on Facebook Share on X Share on LinkedIn Xavier Niel, founder of 42, a tuition-free coding university with campuses in Paris and Silicon Valley.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
42 is the answer to the ultimate questions about life, the universe, and everything. At least that’s what it is in Douglas Adams’ science fiction classic The Hitchhiker’s Guide to the Galaxy.
And for French entrepreneur Xavier Niel, 42 was the perfect name for the innovative coding school he founded two years ago in Paris, and which recently opened a campus in Silicon Valley.
Niel founded upstart telecom company Free and is one of France’s best-known entrepreneurs. I sat down with Niel in Paris to better understand his vision of a university without teachers, without books, and above all, without any tuition.
VentureBeat: What does 42 represent for you? Xavier Niel: 42 is a school that allows anyone to start out on an equal footing. We figured out, in France, as well as in the United States, that the students from top colleges come from a privileged social background. As opposed to what we are led to believe, the most brilliant students aren’t more likely to end up in these colleges, but those from more comfortable backgrounds do.
In fields like ours — computer development — general culture isn’t a requirement. Only two things are taken into account: logic, which is the capacity to process information in a particular order, and the will to pull through. These are the keys required for anyone who would like to do IT development.
We wanted that mix in the form of an association, meaning without any benefit to ourselves — we are not looking to make money out of it. Combining these ideas with the people who founded the best tech colleges in France (e.g. Nicolas Sadirac), we launched 42 in 2013. Today, we are bringing it to the United States. If we focus on the American management style, the idea is to give kids from various backgrounds a chance, to allow them to hope and have a shot at a job, even to give them the tools to create their own company while earning a salary – a salary that can go up to $140,000 to $150,000 a year in Silicon Valley.
VB: Why do you do this? What does that bring you? Niel: There are many reasons. The first one is a sentiment that is really specific to French people and not to Americans: It’s the notion of giving back. Once I had made a lot of money in France and in the U.S. — and I hope it will be the case in many other countries as well — I always asked myself, “How can I give back some of the money I have made in those places?” VB: What were the reactions in the United States, given that the cost of studying there is much higher than in France? Niel: What’s crazy is that we have young folks at the 42 school of Paris coming from California. They came all this way because they couldn’t afford to study in the United States, and therefore came to France because of the free education and all the other things we put together in order to help them. We managed to give them a chance to study. The reaction was therefore very positive. We aren’t there to bother other colleges; the young adults we’re bringing in couldn’t have afforded them anyway. Those schools cost around $50,000 a year, and most of these students can’t even get access to loans in order to enroll in them. They would have struggled anyway and would have gotten odd jobs, one after the other.
It’s therefore really hard to find negative aspects to our schools. When we launched 42 in France, some said, “You’re only doing that in order to hire people for your businesses!” I hire three students from 42 every year out of a thousand. This has no impact. There aren’t any hidden agendas in this. Take a look at the press release, you’ll see we do not put ourselves forward. Our name’s at the end because we have no reason to hide. We say what each of us do in life, we include a little bio, but that’s only for the concept.
VB: How do students get into 42? Niel: We don’t ask anything when you want to join: We only ask for a name, a last name, and a birthdate. The candidates have to be between 18 and 30. That’s all. We don’t ask if you have a diploma, if you can read or write, we don’t do any of that. In France, we have people coming from all over the world, some of them arrived in Paris and did not speak French. Even for them, things turn out really well.
When we pick these young people, we try to select them on objective criteria. We forget everything they might have done in their life. They first have to take an online test — which hundreds of young Americans take every day by going on our website. These are pure logic tests. You can be absolutely terrible at math and still pass. It’s quite funny, these are games to which we don’t give out the rules, but you have to find the key. You’re already quite good if you pass this stage. For those who succeed in these games, we invite them to come take on La Piscine (“the swimming pool”).
Online, we tested their logic capacities — which doesn’t mean being good at math. Then, we try out their motivation through La Piscine.
This entails working at the school for 450 hours in a month, 15 hours a day, every day for 30 days. That’s how we test their motivation. What we have seen in France — I don’t know if it’s the same in the United States — is that soon, some of them say: “That’s really nice and all, but it’s not for me. It’s too much work, it’s too hard. I’d rather leave and do something else.” Some of them hang on and stay on to the end. In a month, they have learned in computer engineering what you need two years of college in France to achieve. In the U.S., from what I know, it’s about the same. When they finish La Piscine , some of the students have already started to learn how to code. We tell these students that they have the level of qualification required to carry on with us, and that, from now on, we are going to give them an education, and we will help them as much as we can. If they are coming from the United States, for example, we’ve got a building next door with dormitories. We tell them “From now on, we’re going to help you learn this over time, and you’re going to become a coding genius.” We really give them a good push through teaching from that moment.
And it works. It works objectively. Whether you have a criminal record, suck at math, say dumb shit, we don’t give a damn. We don’t take that into account, we only care about two objective criteria. And if you happen to have those, we’re pleased to help you, because we think you’ve got everything to pull yourself out. What must be understood is that in France, half of these students have never coded in their life, they have never touched a computer. You’re in a world where there’s no need to have a computing background. We don’t care about that.
VB: How did you come up with this idea? Niel: For starters, I asked myself what I observed about my job. In my profession, when you want to hire someone who knows how to code, you make them sit and code. You don’t ask them for their diploma. If they have a diploma, that’s great for them, but we don’t care about it. Coding is a job or a know-how in which a diploma has no importance. In the end, people have it, or they don’t. It may be the case in other fields, but in mine, a diploma is not something that permits you to objectively judge someone when it comes to a know-how. Plus, the fact that there’s no diploma takes away some of the stress for the students.
A diploma also means following rules. 42 is a school that’s open 24/7. At 3 a.m., you can still see between 300 and 400 students working there. So we’re used to a system in which a certain number of rules are necessary in order to get a diploma, but those aren’t compatible with our teaching methods.
We do not have teachers, we do peer-to-peer correcting and other things that make 42 a radical firm, and this is why it doesn’t correspond to any existing diploma program.
VB: How does a school without teachers, lectures, and mentors work? Niel: We’re doing something that works quite well: We rely on cooperation. People talk a lot about Collaborative Economics nowadays. Well, here at 42, we chose Collaborative Education. What does it mean? It means putting people together and making them learn together. The knowledge, you can acquire it from the internet. You can type anything into Google, and there’s your answer. So lessons are useless, you’ll find the best lectures in the world on the internet, if you want to learn. But we do not wish to make them learn stuff by heart, we want to teach them how to develop, work, and live together, to build projects together and to make them happen. That’s what we want to teach them.
From that moment, the teacher is of no interest, and the lecture even less. We sometimes have youngsters who got out of the Educational system at 10 or 11, and who don’t know how to coexist with teachers. However, we always ask them to work together. The grading, they do it among themselves. That means at any moment of the day or night, some students are there, ready to grade the work of other students. Partnership, nowadays broadly accepted in economy, is still shocking to most when it comes to education, but that’s the system we chose.
People sometimes ask us: “Why don’t you dematerialize it, do it from a remote location?” It happens that our educational system is remotely accessible if students want to do their work somewhere else. But we found out that when you come here, you work faster and better, simply because you work with others, and you need to. It is of the utmost importance, because it helps you maintain your motivation and keeps you going forward. Once again, we chose something quite radical, but we’re fine with it because it works.
I’ll give you another example: The school has no fixed duration. That means, for a student to finish school, he or she must pass 21 levels. Some will successfully pass those levels in two years, others will do so in five. Students will go at their own pace. Some will pass in two years and three months, others in three years and a day. What is this idea that everyone must learn the same thing, at the same moment? It does not look like a clever way of doing it. We aren’t all made the same, we cannot all learn or move forward the same. Likewise, the school adapts to everyone’s speed.
VB: What’s a level? Does the student have to build up a project? Niel: Yes, that’s it. Everything works by projects. At first, there are a few mandatory projects that all students are required to do, in a certain order and sequential way. A project is presented as a short five-minute presentation video with text that tells the students what is expected of them and what they must turn in. Some projects are to do on your own, but most of them are group projects. When you’re finished, you move on to the next one, and when that one is done, you keep moving on.
After a few projects, you may choose the parts of a project you might want to do. You’re not required to do something anymore, you can do what interests you. So if I’m into graphic design, I’m going to continue with a project about graphic design, and if I like managing databases or if I want to understand how these work, I’m going to pick only database projects that interest me.
The more I complete projects, the more I will earn points, and those points will allow me to move on to the next level. When I reach level 7, I have to do an internship. I’m therefore stuck at that level with the obligation of doing a training program. And when I reach another level, I have to do another internship. It’s really like a video game, with levels to beat. Some of them are compulsory and others are the results of your choices. So all of our students are trained in a different way. There are a lot of projects, and the students may work on them side by side, but they always end up being completely different.
VB: Is this a learning method already being used in the United States or is it something new that you’re exporting to the U.S.? Niel: When you go to the United States, you always see French people among the big names in technology. We’re quite good at this in France. We have real knowledge in terms of math and coding. So if we manage to export it, all the better! What we love is working on a big scale, because you can do lots of things if you have a huge number of students. If you don’t, the 24 hour-a-day correction system doesn’t work anymore.
VB: Was placing a former student from 42 Paris at the new 42 school in Fremont a coincidence or intentional? Niel: It’s a type of self-management. That’s what rules our school, meaning that the best students help with its internal functioning. All students from our school develop huge computer systems, manage them internally, etc. We systematically ask our top students to help us.
Brittany Bir was a brilliant American student at 42 Paris, who naturally wanted to go back to her country one day. Because she had great skills, we said, “Listen, since things are working well (we’ll help you of course), would you mind making this happen?” She is part of the group of young people who left the United States because they had no chance of accessing those schools. Her family couldn’t afford it, so her way in was to come study in France. We’re happy to do this because we hope that, the next time, others will not have to leave their country to go to a school if they cannot afford it.
VB: Is there something radical in the way you organize the school too? Niel: We have several elements there, some simply practical, others which are landmarks. First of all, there’s the financial aspect, because what was the most costly for the school was furnishing it, and we wanted the school to have as many young people as possible. Then, we wanted them to speak to each other, to exchange ideas. And in order to do so, you need to make sure that there are enough accessible people around you. The idea is that a student could talk to seven other students without having to speak up.
Then, you need facilities where you can sleep and relax right next to the working spaces, so that the students can be at their maximum. As we ask them to work 15 hours a day, the less they need to move, the better.
Here’s the spirit: you need a nice space, where people will want to go and where they will feel good. And at the same time, you need this place to have a large capacity for exchange and welcome a maximum of students.
VB: Why is it called “42?” Niel: It’s from The Hitchhiker’s Guide to the Galaxy — it’s the answer to the greatest question about life and the universe. Also, 42 is a magical number. It’s really important in the geek universe. It shows that the school, it’s not just out of the geek world, and we’re happy about that.
VB: In the end, what is a good developer today? Niel: They are people who know how to adapt. Our parents learned computing in a different way: They were told that it was about “learning by heart.” And therefore, people sometimes find it difficult to adapt to a world that’s rapidly changing. Because the developing language now isn’t the same as the one from three months ago. What we teach at 42 is C, the most universal language — and the hardest. It is an element of great importance. We think, and I’m pretty sure about it, that once the C or C++ language is learned, students will be able to adapt to any language and will find others simpler.
We start by teaching them something really hard, thinking they will be able to adapt to something simpler without any difficulty. A good coder is someone who is capable of adapting to the software environment of a company, who’s capable of working in a group. They will also be asked to have the kind of logic which will give them the ability to deliver clean and functioning code. That’s what we look for in these youngsters, that logic. Then we teach them how to use it every day.
VB: Adapting, working in groups, are those, in the end, the two necessary elements required to work in the digital world in general? Niel: Yes, maybe. People tell us that we could do that for a lot more activities. But we only know one thing: computer development.
"
|
15,600 | 2,018 |
"Lambda School, where students don't pay until they land a $50,000 tech job, graduates its first class | VentureBeat"
|
"https://venturebeat.com/2018/01/30/lambda-school-where-students-dont-pay-until-they-land-a-50000-tech-job-graduates-its-first-class"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Lambda School, where students don’t pay until they land a $50,000 tech job, graduates its first class Share on Facebook Share on X Share on LinkedIn Joram Clervius, one of the first 20 students to graduate from Lambda School.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
While the tech industry offers some of the highest-paying and fastest-growing jobs in the U.S., obtaining one is still out of reach for many Americans. Particularly for those who are looking to make a career change, it’s expensive. Tuition for six-month coding schools can run upwards of $10,000, and some graduates still have to complete an internship or two before employers will consider them for an entry-level engineering job.
Enter Lambda School, a nearly year-old startup that aims to remove some of the traditional barriers that have scared students away from coding schools in the past. Today, Lambda School announced that it has closed a $4 million seed round, led by Y Combinator and Tandem Capital. The school has also graduated its first batch of 20 students from its software engineering course — five of whom already have job offers a week after graduation.
Lambda School is one of a number of new tech school entrants that is giving students the option of paying for their education via an income-share agreement, rather than paying for tuition upfront. The idea is to reassure students, some of whom may have felt that their previous degrees or training have been worthless in the job search, that Lambda School’s main goal is to prepare them for a job.
“To some degree — if a student doesn’t get a job, that’s on us and we shouldn’t get paid,” Lambda School cofounder Austen Allred told VentureBeat.
Allred, formerly a senior manager for LendUp in San Francisco, grew up in a town of 4,000 people in Utah. Allred said he was inspired to start Lambda School after seeing friends from his hometown who wanted to break into the tech industry but didn’t have the means to do so.
How it works For now, Lambda School only has one course — a six month-long computer science “academy” where students learn the basics of software engineering. Classes run from Monday-Friday, 8 a.m.- 6 p.m. PST, and are broadcast live. Students have to fill out an application, participate in a phone interview with Lambda School, and complete a crash course in web development — HTML, CSS, JavaScript and Git — before they are accepted into the course.
Though Lambda School just graduated its first class, it starts a new academy each month, so right now there are about 200 students enrolled in its courses. The school is also planning to introduce a second academy in April, focused on artificial intelligence and machine learning, which roughly 40 students have already enrolled in, as well as an iOS course in July. Allred says the school hopes to graduate 1,000 students by the end of the year.
Lambda School gives students three payment options. They can either pay $20,000 upfront, pay $10,000 upfront and forgo 17 percent of their salary for a year (with the maximum payment capped at $15,000) or pay zero dollars upfront and forgo 17 percent of their salary for two years (with the maximum payment capped at $30,000). Allred estimates that more than 90 percent of students have opted to pay via an income share agreement.
Both of the income share agreements only apply to students who are making at least $50,000 — so if students don’t find a job, in theory, they would pay nothing. And if students lose their job, or their pay slips below $50,000, they can pause the income share agreement.
That means that a student who makes exactly $50,000 per year would have to give up roughly $708 of his or her paycheck each month to Lambda School. That’s far more than what the average millennial saddled with college debt pays each month — according to the Federal Reserve, the median monthly payment for U.S. adults between ages 20-30 who have student debt is $203.
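The arithmetic behind those terms is easy to verify. The sketch below computes the monthly payment and the total repaid under the zero-upfront agreement, using the 17 percent share, $50,000 floor, two-year term, and $30,000 cap described above; the sample salaries are hypothetical.

```python
def monthly_isa_payment(annual_salary: float, share: float = 0.17,
                        salary_floor: float = 50_000) -> float:
    """Monthly income-share payment; nothing is owed below the salary floor."""
    if annual_salary < salary_floor:
        return 0.0
    return annual_salary * share / 12

def total_paid(annual_salary: float, months: int = 24, cap: float = 30_000) -> float:
    """Total repaid over the agreement, limited by the repayment cap."""
    return min(monthly_isa_payment(annual_salary) * months, cap)

print(round(monthly_isa_payment(50_000)))   # 708 per month, matching the figure above
print(round(total_paid(50_000)))            # 17,000 over two years
print(round(total_paid(120_000)))           # hits the 30,000 cap
```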
The percent per paycheck that Lambda School graduates will have to hand over is so great that the income share agreement is likely out of reach for prospective students who are saddled with any other debt, such as student loans from their undergraduate studies.
But Lambda School’s pitch is that if students forgo more of their paycheck, they can get rid of their debt more quickly — and that the school can give the students the tools they need to find not just any job, but a job that pays more than any job they might have been able to find in their previous industry.
Getting a quick start Joram Clervius, a member of Lambda School’s inaugural class, said that he turned to Lambda School after trying to learn software engineering on his own, but found that studying on his own wasn’t enough to land him a job. The Florida native had received a scholarship to study biology at Florida Agricultural and Mechanical University, but dropped out after deciding he wasn’t interested in becoming a doctor anymore. He began working as a web developer for a real estate company, aspiring to obtain a greater role in the tech industry, when he saw an advertisement for Lambda School.
“I had zero hesitation because it seemed like they wanted to focus more on the students than on making money,” Clervius said.
Clervius was one of the first Lambda School graduates to receive a job offer. Before the course was over, he moved to Ann Arbor, Michigan, to work as a senior developer for local software company Nexient. He will make $85,000 a year in his new role, and chose to pay for his Lambda School tuition via an income share agreement.
However, Lambda School will need many more students to receive job offers like the one Clervius got to make enough money to build a sustainable business.
To help increase the odds that students will find jobs quickly after they graduate from Lambda School, the school has secured “hiring partnerships” with 75 companies, including PayPal, IBM, Eventbrite, and 30 companies within the Y Combinator network. Some hiring partners agree to look over resumes of Lambda School students, while others have internships for graduates available.
“We have a full-time career development team, so they’re constantly interviewing, going over resumes, helping [students] figure out the right way to reach out to people,” Allred said. “We view that as part of the job, which is different from most schools.”
"
|
15,601 | 2,021 |
"Microsoft paves digital twins' on-ramp for construction, real estate | VentureBeat"
|
"https://venturebeat.com/2021/06/05/microsoft-paves-digital-twins-on-ramp-for-construction-real-estate"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft paves digital twins’ on-ramp for construction, real estate Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Digital twins for the smart building of the future are still under construction. But Microsoft is working to enable this advanced technology with a special ontology that works with its internet of things (IoT) platform Azure Digital Twins. Such capabilities move smart buildings closer to reality.
An ontology is essentially a shared data model that simplifies the process of connecting applications in a particular domain, and it’s one of the core elements for developing digital twins.
“Microsoft is investing heavily in enabling our partners with the technology and services they need to create digital twin solutions that support new and existing needs of the world’s largest real estate portfolios,” said Microsoft Azure IoT general manager Tony Shakib.
This recent push into construction extends the utility of Microsoft’s Azure Digital Twins, released last year.
To gain a foothold in the field, Microsoft partnered with RealEstateCore, a Swedish consortium of real estate owners, tech companies, and research groups, to integrate these services with various industry standards. Making a Smart Building RealEstateCore ontology for Azure Digital Twins enables the various parties in building markets — owners, construction teams, and vendors — to collaborate and communicate about real estate.
This could accelerate the ability to weave IoT data, AI models, and analytics into digital twins, and to help simplify the transition to sustainable and green innovation, currently one of the fastest-growing venture capital sectors.
Accelerating digital transformation Digital transformation has been slow to develop in construction and real estate markets. Microsoft believes that the development of better standards and integrations could help accelerate such transformation. That is important if only because real estate represents one of the largest asset classes in the world. In its recent Global Building Stock Database update, Guidehouse Insights predicts the floor area of buildings will grow from about 166 billion square meters in 2020 to 196 billion square meters in 2030.
Building owners are hoping that digital twins could help increase the value of their existing holdings at less cost than building new ones.
But figuring out how to increase building asset value and net operating income is a complicated problem that spans technology and change management issues, Shakib said.
This shift is further complicated by challenges in retrofitting digital twins’ capabilities to existing building management systems. Shakib said many building management and automation vendors have attempted to limit buildings to custom, proprietary “walled garden” approaches that can hurt clients in the long run.
Better ontologies could smooth this transition. Such thinking was behind the RealEstateCore Consortium, which was born out of a partnership between academia and industry. The consortium created the RealEstateCore ontology that employed a graph data model and built on years of best practices gleaned from experience with larger property owners such as Vasakronan.
RealEstateCore can provide a bridge to various building industry standards such as Brick Schema, Project Haystack, W3C Building Topology Ontology (W3C BOT), and more. Today, different partners can run into problems integrating applications using custom data formats. This is especially relevant in construction, as there are huge pitfalls from data loss in the steps from building design to construction, commission, handover, and operation.
Seeing a return Improved digital twins promise significant ROI for building owners and operators. By improving the categorization, integration, and fidelity of data, digital twin developers can create better digital replicas of physical buildings and the components they contain.
Some of the early gains come from cost savings related to energy efficiency. Microsoft has been exploring these techniques on its campuses to realize 20% to 30% energy savings. These projects can start by harvesting data from existing building control systems to find room for improvement.
Microsoft’s Project Bonsai has been able to squeeze an additional 10% to 15% of savings by applying AI to optimize controls further. Down the road, the U.S. Department of Energy’s Grid-Interactive Efficient Buildings could help owners save even more by enabling their facilities to interact with the digital electric grid in real time.
Beyond energy savings, there has been rapidly growing interest in using digital twins to optimize building space, activate building amenities, and support various health and wellness scenarios in the wake of COVID. For example, RXR Realty uses Azure Digital Twins to combine building data with people counting, social distance detection, face mask detection, and air quality monitoring to provide a building wellness index. The appropriate ontology also allowed them to capture important metrics while still respecting privacy and ethics.
Turning things into assets Digital twins help a group of people make sense of the data surfaced by IoT devices. An ontology provides a set of models for wiring these up in a particular domain, such as a building structure, system, city, or energy grid.
An ontology can provide a starting point for organizing the information to solve a problem that spans different roles, such as designers, builders, vendors, and operators. For example, a construction team might need to know how to install a new heater; a general contractor would want to know how long installing it will take, while the owner would want to know the appropriate maintenance schedule.
The built world is complex, and a smart building’s ontology must seek to represent that intricate reality in a way that is simple for developers to use. “An ontology must balance power and comprehensiveness with simplicity and ease of use to generate enough adoption,” Shakib said.
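To illustrate what a shared ontology gives developers, here is a tiny, hypothetical twin graph in Python. It is not RealEstateCore, DTDL, or any Azure Digital Twins API, just a sketch of how one data model can answer the contractor's, the installer's, and the owner's questions from the example above.

```python
from dataclasses import dataclass, field

# A toy twin graph loosely inspired by building ontologies; the classes and
# fields are illustrative assumptions, not the RealEstateCore or DTDL schema.
@dataclass
class Asset:
    name: str
    install_hours: float             # what a general contractor schedules around
    maintenance_interval_days: int   # what the owner budgets around
    install_steps: list[str]         # what the installation team follows

@dataclass
class Room:
    name: str
    assets: list[Asset] = field(default_factory=list)

@dataclass
class Building:
    name: str
    rooms: list[Room] = field(default_factory=list)

    def total_install_hours(self) -> float:
        return sum(a.install_hours for r in self.rooms for a in r.assets)

hq = Building("HQ", [Room("Lobby", [Asset(
    "Heater-01", 6.0, 180, ["mount unit", "connect supply", "commission sensors"])])])

print(hq.total_install_hours())             # contractor view: schedule impact
print(hq.rooms[0].assets[0].install_steps)  # installer view: how to put it in
```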
All of the major cloud vendors have announced various kinds of IoT initiatives for helping to weave sensors and actuators into new cloud applications. But Microsoft has been the only one to champion digital twins thus far. The real value of digital twins lies in helping decision-makers frame how their decisions about these IoT-related applications can be woven together to impact assets in the real world.
"
|
15,602 | 2,019 |
"It's time for workers to worry about AI | VentureBeat"
|
"https://venturebeat.com/2019/04/07/its-time-for-workers-to-worry-about-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest It’s time for workers to worry about AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Recent news of significant corporate investments in artificial intelligence (AI) suggests this technology is moving toward mainstream use. Evidence for this includes DocuSign injecting $15 million into an AI contract discovery startup, Apple absorbing an AI camera developer, and CIO reporting that banks are expected to spend $5.6 billion on AI solutions in 2019, “ushering in the next financial revolution.” Indeed, the green shoots of AI are appearing everywhere.
Despite a surfeit of ethical concerns, leading AI advocates such as Andrew Ng are encouraging companies to jump into AI use. Many are doing just that. KPMG claims more than half of business executives plan to implement some form of AI within the next 12 months. One of the more common AI discussions is the potential impact on jobs. This impact is probably incalculable, though many try to estimate it. Gartner, for example, believes AI will create more jobs than it destroys between now and 2025. Previous technology revolutions have destroyed jobs but ultimately created new jobs and industries. That pattern has happened repeatedly, and this dynamic has now become conventional wisdom.
But not everyone is so sanguine when it comes to the impact of AI. In a 60 Minutes interview , Kai-Fu Lee, one of the world’s foremost experts on artificial intelligence, claimed that — in as soon as 15 years — AI technology could displace about 40% of the jobs in the world. The disruption is already beginning, with fully 75% of the organizations KPMG surveyed expecting intelligent automation to significantly impact 10 to 50% of their employees in the next two years. A Citigroup executive told Bloomberg that better AI could reduce headcount at the bank by 30%.
In the face of all this change, many companies publicly state that AI will eliminate some dull and repetitive jobs and make it possible for people to do higher-order work. However, as a prominent venture capitalist relayed to me recently on this topic: “most displaced call center workers don’t become Java programmers.” It is not only low-skilled jobs that are at risk. Gartner analysts recently reported that AI will eliminate 80% of project management tasks. This led SiliconANGLE to opine : “Project managers who are worried about the prospects of artificial intelligence one day stealing their jobs might do well to consider a career change as soon as they can.” The KPMG study found that virtually all organizations need help preparing employees for the changes ahead.
The risk of falling behind A New York Times article noted that while many company executives pay public lip service to “human-centered AI” and the need to provide a safety net for those who lose their jobs, they privately talk about racing to automate their workforces “to stay ahead of the competition, with little regard for the impact on workers.” The article also cites a Deloitte survey from 2017 that found 53% of companies had already started to use machines to perform tasks previously done by humans. The figure is expected to climb to 72% by next year.
This perceived risk of falling behind is broadly affecting the C-Suite. For example, according to a Grant Thornton report , CFOs need to “alter their mindset when it comes to technology investments. CFOs must be willing to experiment — and incur failures along the way — or risk falling behind.” And as noted by a Harvard Business Review (HBR) article, returns for AI front-runners tend to be large. “They will benefit from innovations enabling them to serve (and perhaps create) new markets and, at the same time, gain share from non-AI adopters in existing markets.” Furthermore, the authors conclude: “A fierce competitive race among companies appears to be in prospect with a widening gap between those investing in AI and those that are not.” The net of this dynamic is that workers are not a major factor in the economic calculus of the business drive to adopt AI, despite so many public statements to the contrary. So perhaps it’s not a surprise when the Edelman 2019 AI survey shows a widely held view that AI will lead to short-term job losses with the potential for societal disruption and that AI will benefit the rich and hurt the poor.
The trend toward AI is not inevitable, as issues could arise that will either slow down implementation or even bring it to a halt. Numerous ethical concerns have surfaced in recent months, and companies don’t want to be on the wrong side of an employee or consumer revolt. These countervailing pressures may provide some pause but are unlikely to substantially change the trajectory of adoption, as the business benefits are simply too great.
AI is the new reality, and it’s coming fast Indeed, AI may be the fastest paradigm shift in the history of technology. The HBR article traces the time it took for other major technology advances including the web, mobile, cloud, and big data to reach substantial implementation levels and concluded that AI may take less than half as long.
The Brookings Institution takes a decidedly glass-half-full approach, noting that over the past 30 years technology has been a significant source of new job creation and opportunity. Nevertheless, it believes the U.S. needs to help workers and communities adjust to job displacement and to reduce hardships for those who are struggling. In addition to traditional job-training programs, Brookings calls for a Universal Adjustment Benefit. This includes robust income support for workers in training but stops short of a call for Universal Basic Income (UBI), an unconditional periodic cash payment made by the state to all people regardless of whether or not they work.
Retraining for new positions could be relatively easy for some of those displaced but much harder for others, and it’s possible that AI advances could leave many behind and create a new permanent underclass. Historian, philosopher, and bestselling author Yuval Noah Harari believes it’s quite possible AI will lead to the development of a “useless class” — billions of people who are unemployable. In a Guardian interview, he said: “If they want to continue to have a job, and to understand the world, and be relevant to what is happening, people will have to reinvent themselves again and again, and faster and faster.” Otherwise, there is UBI. In a recent New York Times story, Harari explained why Silicon Valley is supportive of UBI. “The message is: ‘We don’t need you. But we are nice, so we’ll take care of you.’” The stage is now set. On one side is the conventional view that technology revolutions will create many new jobs, and more than offset losses as positions are eliminated by automation. The other view is that this time is different, that we are not just automating labor but also cognition and many fewer people will be needed by industry. Given that many insiders are starting to lean towards the latter, workers really have only two choices: 1) continuously upgrade their knowledge and skills much faster than before; or 2) hope that UBI becomes a reality in time to prevent them from falling into an AI abyss.
Gary Grossman is Senior Vice President, Technology Practice Lead, at the Edelman AI Center of Expertise.
"
|
15,603 | 2,021 |
"Which BI apps do enterprise users most admire? | VentureBeat"
|
"https://venturebeat.com/2021/03/24/which-bi-apps-do-enterprise-users-most-admire"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Which BI apps do enterprise users most admire? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Enterprises rely on business intelligence applications to identify and predict the potential outcomes of their business strategies. Whether or not the business intelligence application is effective at delivering measurable business value depends on the accuracy of the application’s predictions, and how those predictions affect the organization.
SoftwareReviews’ 2021 Business Intelligence Data Quadrant Report asked 1,234 IT decision makers to weigh in on the factors that separated the most and least admired BI vendors. Of the 16 vendors evaluated for the report, respondents identified Zoho Analytics, Tableau, Dundas BI, TIBCO Spotfire, and Qlik Sense as delivering the greatest business value to their users. Zoho Analytics, Tableau, Qlik Sense, and Yellowfin received the highest scores for reusability and intuitiveness, and Board, Qlik Sense, and Looker were rated as being the most customizable, according to the survey.
Emotional response ratings across 25 questions were aggregated to create an indicator of overall user feeling toward the vendor and product. Two of the metrics in the survey indicated how favorably the respondents viewed the BI applications: Value Index, or user satisfaction given the costs paid; and Net Emotional Footprint, or high-level user sentiment about the application. Dundas BI, Tableau, Board, Looker, and Zoho Analytics had the highest combined Value Index and Net Emotional Footprint scores across 16 BI vendors included in the study.
Above: The placement of a product in the Software Reviews Data Quadrant indicates its relative ranking as well as its categorization.
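SoftwareReviews does not spell out its aggregation formula in the report summary, but the general idea of a net sentiment indicator can be sketched as the share of positive responses minus the share of negative ones, averaged across questions. The question names, counts, and equal weighting below are assumptions for illustration only.

```python
# Hypothetical aggregation of emotional-response ratings into a net score.
# Each question's responses are counts of (positive, neutral, negative) answers.
responses = {
    "Trustworthy":      (90, 25, 10),
    "Respects my time": (70, 40, 15),
    "Saves time":       (95, 20, 10),
}

def net_footprint(counts: dict[str, tuple[int, int, int]]) -> float:
    """Average of (% positive - % negative) across all questions."""
    per_question = []
    for positive, neutral, negative in counts.values():
        total = positive + neutral + negative
        per_question.append((positive - negative) / total * 100)
    return sum(per_question) / len(per_question)

print(round(net_footprint(responses), 1))  # 58.7 on a -100 to +100 scale
```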
Analytics and data science: TIBCO When asked how well BI vendors supported advanced analytics and data science, the respondents rated TIBCO Spotfire the highest, at 83%, followed by Qlik Sense, Domo, and Tableau. Surprisingly, the aggregate level of customer satisfaction with advanced analytics and data science support was 76%, which is actually pretty low considering that BI vendors are extensively hyping these capabilities.
Users were vocal about their satisfaction levels regarding features in BI apps, such as advanced analytics and data science. Tableau received 158 survey responses and a satisfaction score of 79%, the second-highest in the survey. Microsoft Power BI received the most survey responses with 207 and had a satisfaction score of 75% on this attribute. The chart below shows how respondents ranked vendor support for advanced analytics and data science in BI applications.
Above: Advanced analytics includes techniques such as data and text mining, machine learning, forecasting, what-if analysis, and sentiment analysis.
Product strategy: Tableau Survey participants were also asked about their impressions of the applications’ product strategy and rate of improvement. Tableau, Dundas BI, and Sisense were the most respected, according to the survey. The survey results reflect Tableau’s efforts to build a self-service tool that addresses business users’ needs across an enterprise.
Above: Purchasing software can be a significant commitment, so it’s important for vendors to be serious about constant improvement and deliberate strategic direction.
Platform security: Domo The survey respondents considered application and platform security as the highest priority feature, tied with operational reporting capabilities. To be considered in this category, the BI platform had to support data access control management, including access permissions management, user authentication, and enforcement of access permissions via technology.
Domo was the highest-rated BI vendor for applications and platform security. The high rating reflects Domo’s support for advanced security features, including multiple logical and physical security layers, least privilege and separation of duties access model, and transport layer encryption and encryption at rest, allowing customers to manage their encryption keys.
Above: Platform security includes access permissions management, user authentication, and enforcement of access permissions.
Looking forward Enterprise leaders evaluating which BI application to buy need to consider their most important use case. For example, self-service BI, where business analysts connect with, and aggregate data from, diverse data sources that drive visualizations, might dominate business needs. The most important criteria in selecting a self-service BI application include data integration, data preparation expertise, and intuitive data visualization workflows.
Another use case is enterprise-wide BI deployment. In that scenario, key criteria to consider include support for governance, centralized manageability, and scale to deliver access and content to a broad community of analytics users.
A third use case is augmented BI, defined as automating manual processes involved in data analysis, integration, and visualization using machine learning. The most important criterion for evaluating augmented BI applications is the automated insights module, which includes machine learning algorithms trainable by an organization. Additional key criteria include natural language query, natural language generation, and support for data storytelling.
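To make the automated-insights idea concrete, here is a toy sketch in Python of the kind of scan such a module automates: checking every pair of numeric measures for strong correlations and surfacing candidates for a human analyst to review. The file and column names are hypothetical, and real augmented BI products use trained models and far richer statistical tests.

import itertools
import pandas as pd

df = pd.read_csv("sales_metrics.csv")         # hypothetical exported measure table
numeric = df.select_dtypes("number")

insights = []
for a, b in itertools.combinations(numeric.columns, 2):
    r = numeric[a].corr(numeric[b])
    if abs(r) > 0.7:                           # arbitrary "interesting" threshold
        insights.append((a, b, round(r, 2)))

for a, b, r in sorted(insights, key=lambda t: -abs(t[2])):
    print(f"{a} and {b} move together (r = {r}); worth a closer look")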
Depending on which use case is most important, different BI applications might work better for an organization. Regardless, such applications need to progress beyond data visualization and dashboards by adopting machine learning techniques to generate insights that deliver more business value.
"
|
15,604 | 2,021 |
"How AI-powered BI tools will redefine enterprise decision-making | VentureBeat"
|
"https://venturebeat.com/2021/04/02/how-ai-powered-bi-tools-will-redefine-enterprise-decision-making"
|
"How AI-powered BI tools will redefine enterprise decision-making
Value-creation in business intelligence (BI) has followed a consistent pattern over the last few decades. The ability to democratize and expand the addressable user base of solutions has corresponded to large value increases. Enterprise BI arguably started with highly technical solutions like SAS in the mid-’70s, accessible only to a small fraction of highly specialized employees. The BI world began to open up in the ’90s with the advent of solutions like SAP Business Objects, which created an abstraction layer on top of query language to allow a broader swath of employees to run business intelligence. BI 3.0 came in the last decade, as solutions like Alteryx have provided WYSIWYG interfaces that further expanded both the sophistication and accessibility of BI.
But in many cases, BI still involves analysts writing SQL queries to analyze large data sets so that they can provide intelligence for non-technical executives. While this approach to analysis continues to grow in use, I believe that a new BI paradigm will emerge and grow in importance over the next few years — one in which AI surfaces relevant questions and insights, and even proposes solutions.
This fourth wave of BI will leverage powerful AI advancements to further democratize analytics so that any line of business specialist can supervise more insightful and prescriptive recommendations than ever before.
In this fourth wave, the traditional order of BI will be inverted. The traditional method of BI generally begins with a technical analyst investigating a specific question. For example, an electronics retailer may wonder whether a higher diversity of refrigerator models in specific geographies is likely to increase sales. The analyst blends relevant data sources (perhaps an inventory management system and a billing system) and investigates whether there is a correlation. Once the analyst has completed the work, they present a conclusion about past behavior. They then create a visualization for business decision makers in a system like Tableau or Looker , which can be revisited as the data changes.
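As a rough illustration of that workflow, the sketch below (in Python, with entirely hypothetical file and column names) blends an inventory extract with a billing extract and checks whether refrigerator model diversity tracks revenue by region; the resulting table is what an analyst would typically hand off to a visualization tool.

import pandas as pd

inventory = pd.read_csv("inventory_extract.csv")   # hypothetical columns: region, sku, category
billing = pd.read_csv("billing_extract.csv")       # hypothetical columns: region, sku, revenue

fridges = inventory[inventory["category"] == "refrigerator"]
diversity = fridges.groupby("region")["sku"].nunique().rename("model_count")
revenue = (
    billing.merge(fridges[["region", "sku"]], on=["region", "sku"])
           .groupby("region")["revenue"].sum().rename("fridge_revenue")
)

blended = pd.concat([diversity, revenue], axis=1).dropna()
print(blended.corr(method="pearson"))              # does assortment breadth track sales?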
This investigation method works quite well, assuming the analyst asks the right questions, the number of variables is relatively well-understood and finite, and the future continues to look somewhat similar to the past. However, this paradigm presents several potential challenges in the future as companies continue to accumulate new types of data, business models and distribution channels evolve, and real-time consumer and competitive adjustments cause constant disruptions. Specifically: The amount of data produced today is unfathomably large and accelerating. IDC predicts that worldwide data creation will grow to 163ZB by 2025, up 10x from 2017. With that amount of data, the ability to zero in on the variables that matter is akin to finding a needle in a haystack.
Business models and ways of reaching customers are becoming more varied and complex. Multi-modal distribution (digital, D2C, distributor-led, retail, ecommerce), international customers, mobile usage, and marketing channels (social media, search engine, display, television, etc.) have changed the dynamics of decision making and are more complicated than ever before.
Customers have more options and can change preferences and abandon brands faster than ever. New competition arises from both tech behemoths like Amazon, Google, Microsoft, and Apple and a record number of venture-backed startups.
BI 4.0 AI-enabled platforms that will define the fourth wave of BI start by crunching and blending massive amounts of data to find and surface patterns and relevant statistical insights. A data analyst applies judgment to these myriad insights to decide which patterns are truly meaningful or actionable for the business. After digging into areas of interest, the platform suggests potential actions based on correlations that have been seen over a more extended period — again validated by human judgment.
The time is ripe for this methodology to proliferate — AI advancements are coming online in conjunction with the growth of cloud-native vendors like Snowflake. Simultaneously, businesses are increasingly feeling the strain that business complexity and data proliferation are putting on their traditional BI processes.
The data analytics space has spawned some incredible companies capable of tackling this challenge. In the last six months, Snowflake vaulted into the top 10 cloud businesses with a valuation above $70 billion, and Databricks raised $1 billion at a $28 billion valuation. Both of these companies (along with similar offerings from AWS and Google Cloud) are vital enablers for modern data analytics, providing data warehouses where teams can leverage flexible, cloud-based storage and compute for analytics.
Industry verticals such as ecommerce and retail that are under the most strain from the three challenges outlined above are starting to see industry-specific platforms emerge to deliver BI 4.0 capabilities — platforms like Tradeswell, Hypersonix, and Soundcommerce.
In the energy and materials sector, platforms like Validere and Verusen are helping to address these challenges by using AI to boost margins of operators.
In addition, broad technology platforms like Outlier, Unsupervised, and Sisu have demonstrated the power to pull exponentially more patterns from a dataset than a human analyst could. These are examples of intuitive BI platforms that are easing the strains, old and new, that data analysts face. And we can expect to see more of them emerging over the next couple of years.
Steve Sloane is a Partner at Menlo Ventures.
"
|
15,605 | 2,021 |
"Proper data hygiene critical as enterprises focus on AI governance | VentureBeat"
|
"https://venturebeat.com/2021/05/06/proper-data-hygiene-critical-as-enterprises-focus-on-ai-governance"
|
"Proper data hygiene critical as enterprises focus on AI governance
Today’s artificial intelligence/machine learning algorithms run on hundreds of thousands, if not millions, of data sets. The high demand for data has spawned services that collect, prepare, and sell them.
But data’s rise as a valuable currency also subjects it to more extensive scrutiny. In the enterprise, greater AI governance must accompany machine learning’s growing use.
In a rush to get their hands on the data, companies might not always do due diligence in the gathering process — and that can lead to unsavory repercussions. Navigating the ethical and legal ramifications of improper data gathering and use is proving to be challenging, especially in the face of constantly evolving legal regulations and growing consumer awareness about privacy and consent.
The role of data in machine learning Supervised machine learning, a subset of artificial intelligence, feeds on extensive banks of datasets to do its job well. It “learns” from a variety of images, audio files, or other kinds of data.
For example, a machine learning algorithm used in airport baggage screening learns what a gun looks like by seeing millions of pictures of guns — and millions not containing guns. This means companies need to prepare such a training set of labeled images.
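The shape of that training setup can be sketched in a few lines. The example below is purely illustrative: it trains a binary classifier on synthetic feature vectors standing in for labeled "gun" / "no gun" images, since real baggage-screening systems use deep networks trained on large labeled X-ray datasets.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 64))                  # stand-in for image feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # stand-in for "contains gun" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))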
Similar situations play out with audio data, says Dr. Chris Mitchell, CEO of sound recognition technology company Audio Analytic.
If a home security system is going to lean on AI, it needs to recognize a whole host of sounds including window glass breaking and smoke alarms, according to Mitchell. Equally important, it needs to pinpoint this information correctly despite potential background noise. It needs to feed on target data, which is the exact sound of the fire alarm. It will also need non-target audio, which are sounds that are similar to — but different from — the fire alarm.
ML data headaches As ML algorithms take on text, images, audio, and other various data types, the need for data hygiene and provenance grows more acute. As they gain traction and find new for-profit use cases in the real world, however, the provenance of related data sets is increasingly coming under the microscope. Questions companies increasingly need to be prepared to answer are: Where is the data from? Who owns it? Has the participant in the data or its producer granted consent for use? These questions place AI data governance needs at the root of ethical concerns and laws related to privacy and consent. If a facial recognition system scans people’s faces, after all, shouldn’t every person whose face is being used in the algorithm need to have consented to such use? Laws related to privacy and consent concerns are gaining traction. The European Union’s General Data Protection Regulation (GDPR) gives individuals the right to grant and withdraw consent to use their personal data, at any time. Meanwhile, a 2021 proposal from the European Union would set up a legal framework for AI governance that would disallow use of some kinds of data and require permission before collecting data.
Even buying datasets does not grant a company immunity from responsibility for their use. This was seen when the Federal Trade Commission slapped Facebook with a $5 billion fine over consumer privacy. One of the many prescriptions was a mandate for tighter control over third-party apps.
The take-home message is clear, Mitchell says: The buck starts and stops with the company using the data, no matter the data’s origins. “It’s now down to the machine learning companies to be able to answer the question: ‘Where did my data come from?’ It’s their responsibility,” Mitchell said.
Beyond fines and legal concerns, the strength of AI models depends on robust data. If companies have not done due diligence in monitoring the provenance of data, and if a consumer retracts permission tomorrow, extracting that set of data can prove to be a nightmare as AI channels of data use are notoriously difficult to track down.
The complicated consent landscape Asking for consent is a good prescription, but one that’s difficult to execute. For one thing, dataset use might be so far removed from the source that companies might not even know from whom to obtain consent.
Nor would consumers always know what they’re consenting to, says Dr. James Giordano, director of the Program in Biosecurity and Ethics at the Cyber-SMART Center of Georgetown University and co-director of the Program in Emerging Technology and Global Law and Policy.
“The ethical-legal construct of consent, at its bare minimum, can be seen as exercising the rights of acceptance or refusal,” Giordano said. “When I consent, I’m saying, ‘Yes, you can do this.’ But that would assume that I know what ‘this’ is.” This is not always practical. After all, the data might have originally been collected for some unrelated purpose, and consumers and even companies might not know where the trail of data breadcrumbs actually leads.
“As a basic principle, ‘When in doubt, ask for consent’ is a sensible strategy to follow,” Mitchell said.
So, company managers need to ensure robust, well-governed data is the foundation of ML models. “It’s rather simple,” Mitchell said. “You’ve got to put the hard work in. You don’t want to take shortcuts.”
"
|
15,606 | 2,020 |
"COVID and mental health: What employers and HR need to know (VB Live) | VentureBeat"
|
"https://venturebeat.com/2020/10/23/covid-and-mental-health-what-employers-and-hr-need-to-know-vb-live"
|
"COVID and mental health: What employers and HR need to know (VB Live) Sponsored by TriNet The pandemic may dramatically affect your employees’ mental health and productivity — and, in turn, that may impact the health of your company. Join this VB Live event to learn about the internal strategies HR can use to support and rally employees in a difficult time.
Register here for free.
We’re eight months into the coronavirus pandemic, and communities, ecosystems, and supply chains are still coming to grips with the widespread impact of the disease. Whatever their personal situation, your employees may be struggling with that uncertainty, worrying about the health and safety of their loved ones, and concerned about how the pandemic is disrupting their work life.
Their mental and physical health has a direct impact on the quality of their work, their engagement with their jobs, and their productivity – which also has a direct impact on the company’s health. And now is the time to understand and meet their needs with empathy, generosity, and flexibility. Here are a few ways to begin.
Start at the top Company leaders need to show up for their people. How they set the tone in a crisis has a powerful, long-lasting effect on company morale and team spirit. In a sea of unknowns, with no land on the horizon yet, employees are looking for direction and leadership. You need to be proactive, staying on top of new news and developments as the situation evolves, while keeping consistent and calm in your messaging and your response to changes. You’ll need to stay focused on the here and now, but don’t forget to plan for recovery, with confidence that you and your employees can meet the challenges of the new normal.
Review leave policies It’s unavoidable – too many employees are worried about getting sick and losing income, or even losing their jobs. Paid sick leave, leave of absence, and work-from-home policies should be revisited to help ensure that your employees feel safe enough to call out when they’re sick, or ask to work virtually if their at-home situation changes. Working parents are particularly struggling as schools continue to look for the balance between in-person and virtual education that’s best for everyone. Be aware of the federal Families First Coronavirus Response Act (FFCRA), which provides paid leave to eligible employees for several different qualifying reasons (including for child care needs due to COVID-19 related school/child care closures), and numerous temporary state and local emergency/supplemental paid sick leave entitlements for COVID-19 related reasons.
Develop a solid communication plan Your communication strategy is one of your top tools to ensure your employees’ confidence and sense of safety. The information you provide your employees and partners should be transparent, accurate, and clear. Concealing risk or potentially bad news may backfire, and can cause rifts between your employees and your company leaders, destroy trust, and cause even more damage and risk to your company.
Educate employees There’s a lot of information out there about the coronavirus, and with that overload comes a lot of anxiety and sometimes even panic. Provide your employees with comprehensive, actionable education about the disease and its symptoms, as well as best practice safety measures. That keeps them safer – and that, plus instituting evidence-based safety measures in the workplace like masks, social distancing, and more also demonstrates that you are deeply invested in their health and safety. Make sure there are protocols for work-from-home procedures that help keep employees productive and connected while out of the office, and that there are ways for employees to ask questions or share concerns.
Brainstorm employee support strategies Whether they’re working at home or in the office, the social and collaborative nature of work has changed. The pandemic is requiring employees to keep their distance from one another, and as a result they may be feeling isolated, detached, and unmotivated, especially as the pandemic continues with no clear end in sight. Now is the time to start developing strategies to keep your employees connected, whether that’s virtual happy hours and trivia games, frequent scheduled check-ins among team members or with team leaders, or developing a buddy system. Employees should also feel that it’s safe to discuss their fears, issues, and questions without being judged or worried that they’re putting themselves or their job at risk. Consider establishing avenues for employees to easily contact HR reps or other leaders, whether that’s a hotline, an anonymous virtual drop box, or a dedicated chat channel.
An Employee Assistance Program (EAP) is also a prime way to offer access to confidential resources for employees to get help managing stress or dealing with personal matters. EAPs offer flexible solutions from counseling and wellness to crisis preparedness and management.
To learn more about the impact that COVID is having on your employees’ mental health and productivity, plus the internal strategies organizations need now to support them, register now for this VB Live event.
Don’t miss out! Register here for free.
You’ll learn: What employers need to know about COVID’s impact on the mental health of their employees How the mental strain of COVID may negatively impact the health of a company and employee productivity Best steps and practices companies are taking to help their employees get through this difficult time Speakers: Christy Yaccarino, Executive Director, Benefit Strategy and Wellness, TriNet Michael McCafferty, Consultant, FEI Behavioral Health Stewart Rogers, Moderator, VentureBeat
"
|
15,607 | 2,020 |
"Google and Harvard release COVID-19 prediction models | VentureBeat"
|
"https://venturebeat.com/2020/08/03/google-and-harvard-release-covid-19-prediction-models"
|
"Google and Harvard release COVID-19 prediction models
In partnership with the Harvard Global Health Institute, Google today released the COVID-19 Public Forecasts , a set of models that provide projections of COVID-19 cases, deaths, ICU utilization, ventilator availability, and other metrics over the next 14 days for U.S. counties and states. The models are trained on public data such as those from Johns Hopkins University, Descartes Labs, and the United States Census Bureau, and Google says they’ll continue to be updated with guidance from its collaborators at Harvard.
The COVID-19 Public Forecasts are intended to serve as a resource for first responders in health care, the public sector, and other affected organizations preparing for what lies ahead, Google says. They allow for targeted testing and public health interventions on a county-by-county basis, in theory enhancing the ability of those who use them to respond to the rapidly evolving COVID-19 pandemic. For example, health care providers could incorporate the forecasted number of cases as a datapoint in resource planning for PPE, staffing, and scheduling. Meanwhile, state and county health departments could use the forecast of infections to help inform testing strategies and identify areas at risk of outbreaks.
To create the COVID-19 Public Forecasts, Google says its researchers developed a novel time-series machine learning approach that combines AI with a clever epidemiological foundation. By design, the models are trained on public data and leverage an architecture that allows researchers to dive into relationships the models have identified and interpret why they make certain forecasts. They’ve also been evaluated to ensure predictions with respect to people of color — who have been hardest hit by COVID-19, with disproportionately high rates of cases and deaths — aren’t wildly skewed or otherwise misleading.
“We observe that our models produce meaningfully lower absolute error and normalized (relative) error as compared to the comparison model across predominantly African American, Hispanic, and white counties,” Google researchers wrote in a fairness analysis of the COVID-19 prediction models. “Our models optimize for high accuracy across all U.S. counties to provide the best overall forecast for most communities.” The COVID-19 Public Forecasts are free to query in BigQuery as part of the service’s 1TB-per-month free tier or to download as comma-separated value files (CSVs). Additionally, they’re available through Google’s Data Studio dashboard and its National Response Portal.
All bytes processed in queries against the data set will be zeroed out, Google says, but data joined with the data set will be billed at the normal rate to prevent abuse. After September 15, queries over the forecast sets will revert to the normal Google Cloud billing rate.
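For readers who want to pull the forecasts directly, a query through the official BigQuery Python client might look like the sketch below. The dataset, table, and column names are assumptions based on Google's public-dataset naming conventions; confirm them in the BigQuery console before running.

from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials
sql = """
    SELECT county_name, state_name, forecast_date, prediction_date, new_confirmed
    FROM `bigquery-public-data.covid19_public_forecasts.county_14d`
    WHERE state_name = 'New York'
    ORDER BY prediction_date
    LIMIT 100
"""
for row in client.query(sql).result():
    print(row.county_name, row.prediction_date, row.new_confirmed)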
The release of the COVID-19 Public Forecasts follows the launch of Google’s COVID-19 Public Datasets program, which hosts a repository of public data sets relating to the crisis and makes them easier to access and analyze. Corpora within the COVID-19 Public Datasets program include the Johns Hopkins Center for Systems Science and Engineering (JHU CSSE) data set, Global Health Data from the World Bank, and OpenStreetMap data, all of which are stored at no cost on Google Cloud.
"
|
15,608 | 2,021 |
"Matillion raises $100 million to help enterprises accelerate cloud data integration | VentureBeat"
|
"https://venturebeat.com/2021/02/16/matillion-raises-100-million-to-help-enterprises-accelerate-cloud-data-integration"
|
"Matillion raises $100 million to help enterprises accelerate cloud data integration
As enterprises build their modern data stack, preparing the data to be used has become an increasingly resource-intensive problem. To move faster, these companies are searching for solutions that help them integrate data into their workflows more quickly. Riding that wave, Matillion today announced that it has raised $100 million as it races to keep up with the demand for better cloud integration options.
Lightspeed Venture Partners led the latest round, which included investment from Battery Ventures, Sapphire Ventures, Scale Venture Partners, and Silicon Valley Bank Capital.
While the market for cloud data integration has become competitive, Matillion believes it has an advantage because its solution was built for the cloud-native era from the ground up. That could be critical as overall cloud adoption continues to accelerate.
“In the post-COVID world, any remaining reticence that companies have to move to the cloud has now been removed,” Matillion CEO Matthew Scullion told VentureBeat. “And companies of any scale and across all industries are now accelerating their migrations to the cloud, and that’s driving new innovations in the cloud.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The Matillion ETL platform (extract, transform, load) helps dramatically reduce the time it takes to sort and organize information so it can move more quickly from data lake services such as Snowflake into the applications where it can be used to support business intelligence, create visualizations, or power artificial intelligence and machine learning.
In addition to Snowflake, Matillion has become a key solution for customers using Delta Lake on Databricks, Amazon Redshift, Google BigQuery, and Microsoft Azure Synapse. As such, Matillion has attracted more than 500 major clients, including Cisco, Slack, DocuSign, Siemens, and Accenture.
Though the company is based in Manchester in the U.K. and has customers in 40 countries, its biggest market has so far been the U.S. Matillion has about 260 employees and expects to grow to around 400 by the end of the year.
The latest round comes less than two years after the company raised a $35 million round.
Scullion believes Matillion has tapped into a massive market. He said studies suggest that for every $5 enterprises spend on data warehousing, they are spending $1 on ETL, which would imply a $10 billion market.
“One of the reasons that we’re really excited about this round is that we are building a consequential company,” Scullion said. “This is a large market. Matillion has been doing this for a while, serving some of the premier enterprises in the world and working with the next-generation cloud data stack. This round of funding allows us to continue and accelerate this journey.”
"
|
15,609 | 2,021 |
"Accenture AI expert on how first principles prevent problems | VentureBeat"
|
"https://venturebeat.com/2021/04/09/accenture-ai-expert-on-how-first-principles-prevent-problems"
|
"Accenture AI expert on how first principles prevent problems
As more organizations begin employing AI in production environments, it’s clear not everyone has completely thought through how AI will fundamentally change their business. Most of the focus today tends to be on AI to reduce operational costs in the wake of the economic downturn brought on by the COVID-19 pandemic.
VentureBeat talked with Fernando Lucini, global data science and machine learning engineering lead for Accenture, about why organizations shouldn’t focus on initial success. Lucini stressed how important it is for organizations adopting AI to keep first principles uppermost in mind.
This interview has been edited for clarity and brevity.
Above: Fernando Lucini, global data science and machine learning engineering lead for Accenture.
VentureBeat: Prior to the COVID-19 pandemic, most organizations were struggling when it came to AI. Now we’re seeing more AI than ever. How has the pandemic impacted those projects? VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Fernando Lucini : It’s been a confluence of events. CEOs are starting to ask “Where has all the money gone?” People started to ask some really deep questions about those investments. We’re thinking more about the value of AI. From a human perspective, companies that were affected needed to get smart because they were squeezed a bit because of COVID.
VentureBeat: Is there a danger organizations are now moving too quickly without really understanding AI? Lucini: We all get very excited about AI, but it needs to run with the right kind of controls and ethics. Three years from now, you’re going to be in a land where there’s a model that connects to a model that connects to a model.
It will all be intertwined in a complex way. I think there’s a ways to go.
VentureBeat: Will different models conflict with one another? Lucini: There are no models interacting yet, but synthetic data is quite exciting. We have customers who literally can’t get ahold of their own data because it’s so protected, so there’s going to be in the modeling world the concept of synthetic data that is a true synthesis. It’s not a copy anymore. It reflects the original pattern but never has any of the original data. I think there’s going to be a lot of synthetic data out in the world.
That’s when you’ll see a model created by a bank interacting with a model from an insurance company. As we move along and we get into more complex models, the winners are going to be those that actually have a great handle on things. They understand how things are happening, why they’re happening, and have strong controls and strong governance around how they do things.
VentureBeat: Right now it takes a fair amount of time to train an AI model. Will that process become faster? Lucini: I always joke that if you put five software engineers in a room and you give them five hours, no code will be written but they will know how to compile everything and what standards to use. If you put five data scientists in the next room for the same five hours, you’ll get five models based on five different mechanisms that are badly coded but very brilliant. We need to bring those two things together if you want to get the kind of speed of innovation we need. If you just have a few patterns, it’s very clear that you can go from data to model to production in an industrialized way. Where people fall down at the moment is because there have been loads of pilots in the last six months, but none of them can go to production.
VentureBeat: Machine learning operations ( MLOps ) has emerged as an IT discipline for implementing AI. Does this need to be folded into traditional IT operations? Lucini: In time. Data science and ML engineering are in the same group at Accenture. These folks need to have quite a deep understanding of the mechanisms to make these things. They need to have knowledge that is a little bit more specific to the model. I suspect there’ll be specialization for a while. I don’t think that’s going to go away anytime soon.
VentureBeat: There’s a lot of talk about the democratization of AI these days using AutoML frameworks. Is that really possible to achieve? Lucini: It’s inevitable that some of these platforms are doing more and more AutoML. I was speaking to a professor at Stanford a couple of weeks ago, and he was telling me that 90% of the people that go to his course on neural nets are not computer science students. The average education of people understanding statistical mathematics is going up. You also need industry expertise. Having somebody who understands how to use a model but doesn’t understand the problem at hand quite as deeply doesn’t work. My view is you’re going to have more AutoML that people can use, but we’re also going to need more guardrails to make sure that whatever it is they’re using is within the scope of safety.
Education takes them to a point where they do understand whether they created a monster or not. We’re going to have to add more of these industry people that know more of the science.
There are already generalists and citizen data scientists. I joke with CIOs and CEOs that these people can also be dangerous amateurs. Then you have this debate about how people don’t really understand how cars work and they still drive them. We still test people so they can drive cars. There’s a good reason for that, so let’s do the same. It’s important to have enough of an education.
VentureBeat: What’s your best advice to organizations then? Lucini: Think about the first principles. If you think about AI as being important to you, then you should think about what is your business strategy for AI? Not how AI is part of your business strategy. Educate yourself sufficiently so you can apply principles to understand how AI might actually make a difference to what you’re doing. The truth is AI has a hidden cost of learning how to do it at scale. “Think 10 times” is the first principle of education.
"
|
15,610 | 2,016 |
"Google BigQuery now lets you analyze data from Google Sheets | VentureBeat"
|
"https://venturebeat.com/2016/05/06/google-bigquery-now-lets-you-analyze-data-from-google-sheets"
|
"Google BigQuery now lets you analyze data from Google Sheets
Above: Working with Google Sheets data in BigQuery.
Google is announcing today that its BigQuery cloud service for running SQL-like queries on data can now easily take in data from the Google Sheets cloud-based spreadsheet software and then save query results inside Google Sheets files.
And changes to spreadsheets won’t cause problems for BigQuery.
“Time after time, we can make changes within our Google Sheets spreadsheet, and BigQuery will automatically pick up the changes next time you run a query against the spreadsheet!” Google BigQuery technical program manager Tino Tereshko wrote in a blog post.
If you’re a power user of Sheets, you’ll probably appreciate the ability to do more fine-grained research with data in your spreadsheets. It’s a sensible enhancement for Google to make, as it unites BigQuery with more of Google’s own existing services. Previously, Google made it possible to analyze Google Analytics data in BigQuery.
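As a rough sketch of how the Sheets integration can be wired up with the BigQuery Python client, the snippet below registers a spreadsheet as an external table and then queries it with standard SQL. The project, dataset, and spreadsheet URL are placeholders, and querying Sheets-backed tables requires credentials that also carry Google Drive scope.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")

external_config = bigquery.ExternalConfig("GOOGLE_SHEETS")
external_config.source_uris = ["https://docs.google.com/spreadsheets/d/SPREADSHEET_ID"]
external_config.options.skip_leading_rows = 1      # treat the first row as headers
external_config.autodetect = True

table = bigquery.Table("my-project.analytics.sheet_orders")
table.external_data_configuration = external_config
client.create_table(table, exists_ok=True)

rows = client.query(
    "SELECT * FROM `my-project.analytics.sheet_orders` LIMIT 10"
).result()
for row in rows:
    print(row)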
These sorts of integrations could make BigQuery a better choice in the market for cloud-based data warehouses, which is increasingly how Google has positioned BigQuery. Public cloud market leader Amazon Web Services (AWS) has Redshift but no widely used tool for spreadsheets. Microsoft Azure’s SQL Data Warehouse, which has been in preview for several months, does not currently have an official integration with Microsoft Excel, surprising though it may be.
But Google in the past few months has shown signs of caring more about what companies want out of cloud services. In March the company disclosed plans to open data centers in 12 more regions around the world.
In December, Google enhanced BigQuery with custom quotas to limit the amount of money a user spends on any given day.
"
|
15,611 | 2,021 |
"Financial giant S&P taps Snowflake for better cloud data distribution | VentureBeat"
|
"https://venturebeat.com/2021/05/05/financial-giant-sp-taps-snowflake-for-better-cloud-data-distribution"
|
"Financial giant S&P taps Snowflake for better cloud data distribution
When Warren Breakstone wanted to make it easier for S&P Global Market Intelligence customers to consume the trove of finance data the company holds, he turned to cloud data specialist Snowflake.
As managing director and chief product officer for data management solutions at S&P Global Market Intelligence, Breakstone recognized that the choice of cloud data platform was a key concern for his organization, which is a division of finance giant S&P Global. His team is continually on the lookout for new ways to create innovative data-led products for its major clients, which include finance firms and blue-chip enterprises across a range of sectors.
The organization was keen to take advantage of the cloud and make it easier to use data held on the S&P Global Marketplace, which brings together the firm’s data and information from third-party sources. After a period of evaluation, the organization started working with Snowflake last year. Here, Breakstone discusses why he selected Snowflake and how its technology forms a platform for further innovation.
This interview has been edited for brevity and clarity.
VentureBeat: What was the aim of the implementation? Warren Breakstone: What we’re focused on is productizing data — creating new data-driven products, linking all of that together and combining it so that clients can get incremental value. And then also making it available to clients in the way they want to consume it. And that’s what we’ve really done with Snowflake, which is make all our data on the S&P Global Marketplace available through the Snowflake distribution and couple it with Snowflake compute power, so that clients can take advantage of bigger data queries, and all the advantages of compute power, so that they can study and research and analyze and evaluate, not just our data, but our data in combination with their own data.
VentureBeat: What was the business challenge that you were looking to solve? Breakstone: The big challenge has always been that different clients have different means of bringing data into their environments. Some want it through our Xpressfeed solution, which is our bulk-delivery technology that automates the ingestion of data directly into their environments. Others want to access the data through APIs. Then there’s a third tranche, who want it through pre-packaged software products, such as our Capital IQ platform. The challenge is being able to support all the different clients and the different ways that they want to consume data.
What Snowflake provides us is a modern addition to our array of distribution, and has additional advantages such as the ability to utilize the compute power as data gets bigger and bigger. Clients want to do new and interesting things by bringing different data sets together, so the ability to access compute power is so important. That has opened up all sorts of new opportunities for us and for our clients in the way we deliver new capabilities, new content, new products, and additional value.
VentureBeat: How did you deal with the build versus buy question? Breakstone: The challenge was more around who we would partner with. We have many home-built delivery solutions, such as Xpressfeed, which we’ve enhanced with what’s called a loader, which is a piece of software that automates the ingestion of data for our clients. And that’s a great product and clients love it. But clients also are increasingly looking to the cloud. And that’s where we had to make a decision: How best do we approach that opportunity, and who do we partner with to get there? And that’s what led us to Snowflake.
VentureBeat: Why did you select the Snowflake cloud data platform? Breakstone: First and foremost, it was about being closely connected to our clients — and our clients were talking about Snowflake and the opportunities that it provided to them. So as we were doing a pretty robust review of the landscape and different partners, and knowing that we wanted to get into cloud-based distribution, the question was how best to do it. Snowflake was one of the alternatives we considered.
We then needed a solution that would support our clients based on where they are today. Clients are on different solutions — some are on AWS, some are on Google Cloud Platform, some are on Azure. How do we support all of those different clients, based in the environments that they’ve stood up? That also was another plus in the Snowflake column because it’s a cloud-agnostic solution; we can build it once and serve many.
VentureBeat: What were some of the other technological factors that led you to Snowflake? Breakstone: We did various tests to see what the compute was like relative to other alternatives in the market and we were very impressed. Some of that came back to the initial architecture that Snowflake has built itself on, where they’ve separated their compute from their storage, and because you’ve separated those two, you’re able to get a bit more performance out of the compute.
Snowflake also has connections to other applications and tools in the space. Various visualization and analytic tools are already connected to Snowflake. Once we put our data into Snowflake, if a client wants to consume that data through a third-party visualization or analytics tool, more often than not, that provider is already connected with Snowflake, which makes the process for us to get the data into that solution and into their environment much less complicated because there’s a pre-existing pipe.
VentureBeat: How did you implement the Snowflake cloud data platform ? Breakstone: That involved a tight partnership between our technology group and our product management organization, where we first prioritized — based on customer needs — what data we were going to add to Snowflake’s environment and in what order. And then we were able to work with Snowflake to develop a rigorous and repeatable process, where we would be able to load the data into that environment. It was a very partnership-oriented approach. And we got there quite quickly; far smoother than we had expected.
The challenges were really one of prioritization. We have hundreds of different datasets, so where do you start? Do you start with the bigger, most complex data sets? Do you start with the simpler ones that are easier to load? We had a group of clients who partnered with us and helped us set those priorities. And that was very useful.
VentureBeat: What does the implementation mean for other investments in the data stack? Breakstone: We’ve just introduced our Marketplace Workbench, which is a platform that we’ve built on top of Snowflake and Databricks, who are a partner of Snowflake. This new platform enables our clients to use our data in a collaborative development environment, using a programming language of their choice, whether that’s Python or R or SQL, to get more out of the data.
So, what we’re happy about is that this isn’t just a singular, one-off type of opportunity for us. This is something that we continue to build on, and we build on it in a way that’s relevant to our clients. It’s not about us, it’s about how our clients are able to generate value and utility from these various connected solutions that are all built on top of our data.
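A minimal sketch of the consumption pattern Breakstone describes, in which a client account mounts a provider's Snowflake share and joins the shared data against its own tables, is shown below using the Snowflake Python connector. The account, share, database, and table names are hypothetical, not S&P Global's actual objects, and the consuming role needs the IMPORT SHARE privilege.

import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="analyst", password="***", warehouse="ANALYTICS_WH"
)
cur = conn.cursor()

# One-time setup: expose the provider's share as a local, read-only database.
cur.execute("CREATE DATABASE MARKET_DATA FROM SHARE provider_account.market_share")

# Afterwards, query the shared data alongside in-house tables with ordinary SQL.
cur.execute("""
    SELECT p.ticker, p.close_price, h.position_size
    FROM MARKET_DATA.PUBLIC.DAILY_PRICES p
    JOIN MY_DB.PORTFOLIO.HOLDINGS h USING (ticker)
    WHERE p.trade_date = CURRENT_DATE()
""")
for ticker, close_price, position_size in cur.fetchall():
    print(ticker, close_price, position_size)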
"
|
15,612 | 2,021 |
"Fivetran acquires Teleport Data to address database replication challenges | VentureBeat"
|
"https://venturebeat.com/2021/06/24/fivetran-acquires-teleport-data-to-address-database-replication-challenges"
|
"Fivetran acquires Teleport Data to address database replication challenges
Fivetran, a company that automates connectors to sync data into data warehouses, today announced that it acquired Teleport Data, a platform designed to improve reliability and performance in database replication. Teleport Data's technology will launch as Fivetran Teleport Sync later this year, which will move only changed data without requiring access to secure binary log files.
When trying to transfer data from a source location to a data warehouse, it can be difficult to track changes and ensure data quality. Current solutions like snapshots, which capture that data at a point in time, are slow and inherently not in real time. That’s problematic, as enterprises face an explosion of data that’s often hard to assess for insights due to the acceleration of digital transformations spurred by the pandemic.
Access to secure database binary log files is usually required to ship database changes to another target system — a method that can be hard to configure, requires specialized security access, and is error-prone. By contrast, Fivetran Teleport Sync offers a code-free method to set up historical analysis, improving on throughput with less database overhead compared with logs, timestamp columns to indicate changes, or primary keys in illogical tables.
Fivetran CEO George Fraser, who personally discovered Teleport Data through a Slack data community, says that the technology will help Fivetran develop new replication technologies toward its mission of making access to data “as simple and reliable as electricity.” In a press release, Fraser added, “[Teleport Data offers a] method that had not been previously considered. It proved to be a very innovative invention and it will serve our customers well.”
Platform growth
In addition to Teleport Sync, Fivetran is introducing a number of other database replication enhancements, including history mode, which records every version of each record in a source table to the corresponding table in a destination. The company also says it's continuing to deliver speed improvements, some of which were enabled by data extraction improvements to processing and load optimizations.
“Teleport Data is another example of Fivetran's singular ability to address the challenges enterprises now face as they undergo digital transformations, where speed is a core requirement in accessing, analyzing, and deploying the latest data,” Bob Muglia, former Snowflake CEO and Microsoft president, said in a statement. “Fivetran is once again showing itself to be the extract, load, and transform leader, innovating so that customers of all sizes have access to actionable data at any time.”
Fivetran's purchase of Teleport Data comes after the company attained unicorn status with a $1.2 billion post-money valuation in June 2020. Oakland, California-based Fivetran, which was founded in 2013 by Fraser and Taylor Brown, was conceived as a system to provide data visualizations but struggled to take off until it pivoted to focus on automating data integration. While Fivetran isn't yet profitable, annual recurring revenue was up 129% to $30 million in the fiscal year ending in May. The startup now has over 2,000 customers and more than 600 employees.
"
|
15,613 | 2,021 |
"DevOps orchestration platform Opsera raises $15M | VentureBeat"
|
"https://venturebeat.com/2021/04/28/devops-orchestration-platform-opsera-raises-15m"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DevOps orchestration platform Opsera raises $15M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Opsera, a continuous orchestration platform for DevOps, today announced it has closed a $15 million series A round led by Felicis Ventures. Opsera says it will put the capital toward growing its engineering team and accelerating its sales, marketing, and customer success initiatives.
An estimated 19% to 23% of software development projects fail, with that statistic holding steady for the past couple of decades, according to data compiled by Ask Wonder. Standish Group found that “challenged” projects — i.e., those that fail to meet scope, time, or budget expectations — account for about 52% of software projects. Often, a lack of user involvement, executive support, and clear requirements are to blame for missed benchmarks.
Opsera, which was founded in 2020, aims to combat DevOps challenges with a self-service, no-code orchestration platform that lets engineers provision or integrate their CI/CD tools from a common framework. With Opsera, users can build declarative pipelines for a range of use cases, including software delivery lifecycle, infrastructure as code, and software-as-a-service app releases. Opsera correlates and unifies data throughout the development process to provide contextualized diagnostics, metrics, and insights.
A growing DevOps market
The DevOps market is projected to reach $14.97 billion by 2026, at a compound annual growth rate of 19.1%, according to a Fortune Business Insights report.
Opsera competes directly or indirectly with companies including Harness, a continuous integration and delivery platform for engineering and DevOps teams, and Tasktop, which recently nabbed $100 million. There's also OpsRamp, which applies AI to DevOps processes. And Productboard offers a product planning interface designed for DevOps orchestration.
But Opsera claims to have a growing client roster that includes “several” Fortune 500 customers.
Above: Toolchain automation in Opsera.
“Our mission is to democratize software delivery by abstracting any CI/CD tools into a common framework that can empower engineers to build pipelines in minutes, not days or weeks,” cofounders Chandra Ranganathan and Kumar Chivukula said in a press release. “We offer the only DevOps platform that connects and orchestrates the entire tool stack with complete choice and visibility. Our customers can focus on their core product and will never waste time and resources building and managing toolchains and pipelines in-house or be stuck with single-vendor solutions. Having the support of Felicis and all of our investment partners will accelerate how we help customers along their DevOps journey.” Beyond Felicis Ventures, existing backers Clear Ventures, Trinity Partners, and Firebolt Ventures and new investor HMG Ventures also participated in San Francisco, California-based Opsera’s latest financing round. It brings the startup’s total raised to $19.3 million.
"
|
15,614 | 2,021 |
"Self-service, need for accuracy power data governance's momentum | VentureBeat"
|
"https://venturebeat.com/2021/05/17/self-service-need-for-accuracy-power-data-governances-momentum"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Self-service, need for accuracy power data governance’s momentum Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As more enterprises shift to ensure customer data privacy, data governance work is becoming increasingly important for IT departments pursuing digital transformation.
While companies may have previously collected data in case someone would want it, today’s businesses are actively asking how to store and dispose of data and, increasingly, how to generate maximum value while protecting their customers’ privacy.
Those are among the most salient conclusions from the 2021 State of Data Governance and Empowerment report commissioned by Erwin by Quest, a division of Quest Software. The report, compiled by ESG Research, surveyed 220 professionals across a collection of businesses selected from a broad range of industries, like manufacturing, health care, construction, and government.
Data is key to economic success. In the survey, 84% of respondents agreed that gathering data offered “the best opportunity for my organization to develop a competitive advantage.” But if they didn't use this data to customize their product line and serve each customer, 74% believed they “will be disrupted by competitors that do.” So how will companies go about achieving this? The report found 42% have a working plan for data governance that's already staffed and underway, while 45% are in the process of implementing a plan that has recently been solidified. The last 13% said they were just starting to plan but that it was definitely on their roadmap.
Under the data governance umbrella
What is data governance? In a nutshell, it's a mix of accuracy, completeness, and privacy protection. When asked to select their top motivations, the respondents pointed to more than 10 factors, with 58% citing improved data security as most important, while 45% focused on improving data quality.
Other contenders included improving analytics (35%), improving compliance (34%), and increasing customer satisfaction (23%). (Respondents were able to choose multiple factors, so the percentages in the report didn't add up to 100%.) The findings suggest the organizational barriers dividing data governance, data operations, and data protection are diminishing as organizations hone their data capabilities, Quest Software exec Heath Thompson told VentureBeat. But he said better collaboration and automation are in store.
Responses indicated the move to automated gathering and curation of data is still in the early stages. Only 7% said their data pipelines were entirely automated, while 59% said at least half of the work was manual. In any case, they agreed finding better software solutions is essential.
Many respondents said they’re focusing on making it simpler for users to request the data they need without intervention. “Self-service” is a big part of the answer, mixed with the right amount of automation for assistance, as 42% said their company has already begun work on self-service options, and 51% said they’re developing those options now.
Another big challenge the companies faced was building out automated tools for data collection, cleansing, and analysis. These three are foundational tasks for the data governance roles at many companies. The survey shows 93% saw “room to incorporate more automation into their data operations.” The report coincided with Quest’s first Data Empowerment Summit and is something of a coming out party for Erwin, which Quest Software acquired earlier this year.
"
|
15,615 | 2,021 |
"How data analytics can help recruit the best engineers | VentureBeat"
|
"https://venturebeat.com/2021/06/01/how-data-analytics-can-help-recruit-the-best-engineers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How data analytics can help recruit the best engineers Share on Facebook Share on X Share on LinkedIn Miles Ward is the CTO at cloud services provider SADA.
Employers are facing a lot of pressure to fill open positions efficiently and effectively — a task made even more difficult in light of low unemployment and a shortage of people with specific types of skills. Some enterprises are tackling this challenge with data analytics, by incorporating embedded reporting and analytical tools into their talent acquisition programs.
By integrating different data sources into their hiring processes, these enterprises can expand the pool of potential candidates, identify qualified applicants, and improve the hiring process.
Miles Ward, chief technology officer at cloud solutions provider SADA, explained to VentureBeat how his organization uses data analytics in recruiting and hiring. Prior to SADA, Ward was a director at Google Cloud and was involved with initiatives including NASA's livestream of the Mars Rover landing and the Obama for America 2012 U.S. presidential campaign.
This interview has been edited for clarity and brevity.
VentureBeat: What are some of the challenges IT leaders face today in sourcing, recruiting, and hiring new engineers?
Miles Ward: Here are three: Sourcing a diverse pipeline of candidates. I'm eager to have my company be the best place for engineers from all backgrounds to learn cloud, and making sure we do the work to dive into equitable sourcing is a critical part of that. Being the best takes more work than just putting up the job description.
Doing interviews over video chat is hard, especially without practice. Now that everyone has had around 60 weeks straight at it, we're certainly improving, but it still can be difficult to get a clear read on a candidate: Was that awkwardness or just a mic cut? Do they make strong eye contact when it's not a camera? How do you take good notes when the candidate can't speak over typing noise? Onboarding workers remotely is a whole new process. You can't just mail folks a laptop and hope for the best.
VentureBeat: What role does data management/analytics play in overcoming these challenges? Please provide an example of how you’re using data/analytics tools in this way.
Ward: It’s critical to keep a distributed and remote team on the same page. We use Google Forms to garner feedback from candidates and from employees on what we can do better, and Google Sheets and Google Data Studio to create simple points of collaboration inside and across teams to share the feedback. We also use a recruiting tool to track all candidates, stages of interviews, feedback and scoring, and offers. Those help us pay attention to our metrics to make sure that we’re keeping track of commitments and holding each other accountable.
VentureBeat: How is the use of data/analytics tools going to change the process of talent acquisition? What advice do you have for IT leaders looking to better leverage data for hiring purposes?
Ward: We're working on improving the performance of our interviewers. Measuring the success of folks and the ratings given by our interviewers helps us find out who is more predictive in their evaluations.
We’re also working on improving the targeting of our promotions for open roles so that we’re sure to evaluate an increasingly diverse candidate pool.
We’re doing more careful monitoring of our team utilization forecasts and getting a more nuanced view of the skills we’ll need to tackle tomorrow’s customer challenge. That’s helping us both recruit and cross-train our teams to continue to meet customers where they are.
When evaluating SaaS tools, I look for my teams to have clear ownership over and access to the data we create, where the SaaS vendor can be clear about what infrastructure system is being used to host our app and where the tools have existing integrations into other key parts of our operating stack. We also want to see examples from their other customers who’ve done the same integrations, with timelines and details galore. The more they can share, the more comfortable we can get.
For our systems at SADA, and many that we've helped customers stand up, we've built integrations between SaaS APIs like HubSpot, Monday, Netsuite, Greenhouse, Trello, and many more using Google Dataflow as a low-overhead, efficient, managed platform for building and maintaining these crucial integrations. Built on OSS Apache Beam (a model and set of language-specific SDKs), it's a safe investment that's paying dividends for us in our pace of integration.
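Ward doesn't share SADA's pipeline code, but a minimal Apache Beam pipeline in Python, runnable locally with Beam's direct runner, looks roughly like the sketch below; the records and output path are invented for illustration.

```python
# Minimal Apache Beam sketch of an integration pipeline: take records
# pulled from some SaaS API, normalize them, and write them out.
# The records and output prefix are invented for illustration.
import json
import apache_beam as beam

# Pretend these records came from a SaaS API such as an ATS or CRM.
raw_records = [
    {"candidate": "A. Lovelace", "stage": "onsite", "score": 4},
    {"candidate": "G. Hopper", "stage": "offer", "score": 5},
]

def normalize(record):
    # Keep only the fields a downstream dashboard needs.
    return {"candidate": record["candidate"], "stage": record["stage"]}

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateRecords" >> beam.Create(raw_records)
        | "Normalize" >> beam.Map(normalize)
        | "ToJson" >> beam.Map(json.dumps)
        | "Write" >> beam.io.WriteToText("normalized_candidates")
    )
```

On Google Cloud, the same pipeline can be handed to the Dataflow runner instead of running locally, which is the managed execution Ward refers to.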
VentureBeat: One of the challenges is to know how to buy the right platform/tools (or go homegrown). What specific questions do you ask in the evaluation process? What features are useful for the examples you gave us earlier?
Ward: Few basic business functions are not best served by SaaS tools today. The real question is: Do you already have part of what you need? Sometimes that can be almost harder: If you have the recruiting system online but no data from the marketing system connected to it, how can you tell which roles get the most interest (but which maybe don't convert into applications)? The bigger you are, the more you need to be prepared for the bulk of the work being in the integration, not the selection and implementation of tools.
VentureBeat: What was it like helping Obama for America (OFA) win the 2012 presidential campaign? I assume there is a data story there, too?
Ward: OFA of course was all about data, and about recruiting. I'd built a startup that analyzed social media data and then went to Amazon. I got a call from “the most important technologist” in the Obama campaign, who said since I'd worked on social, and on cloud, and on some of the tools they were using in cloud, they'd love me to come help. I was pretty surprised: How did they know what I'd worked on? His response: “Well, you tweeted about all that stuff, right?” It seems they'd figured me out, or at least the tools they were working on had. Data can help you find great candidates.
"
|
15,616 | 2,018 |
"VR veterans found Artie augmented reality avatar company | VentureBeat"
|
"https://venturebeat.com/2018/12/06/vr-veterans-found-artie-augmented-reality-avatar-company"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VR veterans found Artie augmented reality avatar company Share on Facebook Share on X Share on LinkedIn Artie founders Ryan Horrigan (left) and Armando Kirwin.
The migration of virtual reality veterans to augmented reality continues. A new AR startup dubbed Artie is coming out of stealth mode today in Los Angeles with the aim of giving you artificial intelligence companions in your own home.
Armando Kirwin and Ryan Horrigan started the company to use artificial intelligence and augmented reality to build “emotionally intelligent avatars” as virtual companions for people. Those avatars would be visible anywhere that you can take your smartphone or AR gear, Horrigan said in an interview.
The startup has backing from a variety of investors, including YouTube cofounder Chad Hurley, Founders Fund, DCG, and others. But Kirwin said the company isn’t disclosing the amount of the investment yet.
Above: Artie’s AR avatars in action.
The company’s software will enable content creators to bring virtual characters to life with its proprietary Wonderfriend Engine, which makes it easy to create avatar-to-consumer interactions that are lifelike and highly engaging. Kirwin said the company is working with major entertainment companies to get access to familiar characters from famous brands.
“Our ambition is to unlock the world of intellectual property you are already familiar with,” said Kirwin in an interview with VentureBeat. “You can bring them into your home and have compelling experiences with them.” The company hopes to announce some relationships in the first quarter, Kirwin said.
Once created, the avatars then exist on an AR network where they can interact and converse with consumers and each other. It reminds me of Magic Leap’s Mica digital human demo, but so far Artie isn’t showing anything quite as fancy as that yet.
“The avatar will use AI to figure out whether you are happy or sad and that would guide it in terms of the response it should have,” Kirwin said. “Some developers could use this to create photoreal avatars or animated characters.” Artie is also working on Instant Avatar technology to make its avatars shareable via standard hyperlinks, allowing them to be discovered on social media and other popular content platforms (i.e. in the bio of a celebrity’s Instagram account, or in the description of a movie trailer on YouTube).
Horrigan said that the team has 10 people, and it is hiring people with skills in AI, AR, and computer vision. One of the goals is to create avatars who are more believable because they can be inserted in the real world in places like your own home. The team has been working for more than a year.
“Your avatar can be ready, so you don’t have to talk to it to activate it,” Kirwin said. “It’s always on, and it’s really fast, even though it is cloud based. We can recognize seven emotional states so far, and 80 different common objects. That’s where the technology stands today.” Above: Artie will be able to detect your mood and react to it.
Horrigan was previously chief content officer of the Comcast-backed immersive entertainment startup Felix & Paul Studios, where he oversaw content and business development, strategy and partnerships.
Horrigan and his team at Felix & Paul forged numerous partnerships with Fortune 500 companies and media conglomerates including Facebook, Google, Magic Leap, Samsung, Xiaomi, Fox, and Comcast, and worked on projects with top brands and A-list talent such as NASA and Cirque du Soleil.
One of Felix & Paul’s big projects was a virtual reality tour of the White House with the Obamas.
That project, The People’s House, won an Emmy Award for VR, as it captured the White House as the Obama family left it behind.
Prior to Felix & Paul, Horrigan was a movie studio executive at Fox/New Regency, where he oversaw feature film projects including Academy Award Best Picture Winner 12 Years A Slave.
He began his career in the Motion Picture department at CAA and at Paramount Pictures. Horrigan has given numerous talks, including at TED, Cannes, Facebook, Google, Sundance, SXSW, and throughout China. He holds a bachelor's degree in film studies and lives in Los Angeles, California.
Kirwin has focused on VR and AR in both Hollywood and Silicon Valley. He has helped create more than 20 notable projects for some of the biggest companies in the world. These projects have gone on to win four Emmy nominations and seven Webby nominations.
Prior to co-founding Artie, Kirwin helped create the first 4K streaming video on demand service, Odemax – which was later acquired by Red Digital Cinema. He was later recruited by Chad Hurley, cofounder and ex-CEO of YouTube, to join his private technology incubator in Silicon Valley.
Prior to his career in immersive entertainment, Kirwin worked on more than 50 projects, predominantly feature films, which include “The Book of Eli,” the first major motion picture shot in digital 4K. He also acted as consultant to vice president of physical production at Paramount Pictures.
Other investors include Cyan Banister (investing personally), The Venture Reality Fund, WndrCo, M Ventures, Metaverse Ventures, and Ubiquity6 CEO Anjney Midha.
Artie has already cemented partnerships with Google and Verizon for early experiments with its technology and is beginning to onboard major media companies, celebrities, influencers, and an emerging class of avatar-based entertainment creators.
"
|
15,617 | 2,020 |
"Google's ML-fairness-gym lets researchers study the long-term effects of AI's decisions | VentureBeat"
|
"https://venturebeat.com/2020/02/05/googles-ml-fairness-gym-lets-researchers-study-the-long-term-effects-of-ais-decisions"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s ML-fairness-gym lets researchers study the long-term effects of AI’s decisions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
To determine whether an AI system is maintaining fairness in its predictions, data scientists need an understanding of models' short- and long-term effects, which might be informed by disparities in error metrics on a number of static data sets. In some cases, it's necessary to consider the context in which the AI system operates, in addition to error metrics, which is why Google researchers developed ML-fairness-gym, a set of components for evaluating algorithmic fairness in simulated social environments.
ML-fairness-gym — which was published in open source on GitHub this week — can be used to research the long-term effects of automated systems by simulating decision-making using OpenAI’s Gym framework. AI-controlled agents interact with digital environments in a loop, and at each step an agent chooses an action that affects the environment’s state. The environment then reveals an observation that the agent uses to inform its next actions so that the environment models the system and dynamics of a problem and the observations serve as data.
For instance, given the classic lending problem, where the probability that groups of applicants pay back a bank loan is a function of their credit score, the bank acts as the agent and receives applicants, their scores, and their group membership in the form of environmental observations. It makes a decision — accepting or rejecting a loan — and the environment models whether the applicant successfully repays or defaults and then adjusts their credit score accordingly. Throughout, ML-fairness-gym simulates outcomes so that the fairness of the bank's policies can be assessed.
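The library's actual classes are documented in its GitHub repository; the snippet below is only a generic, Gym-style stand-in for the loop described above, with a toy lending environment and a toy threshold policy rather than the real ml-fairness-gym API.

```python
# Toy, Gym-style agent/environment loop illustrating the interaction
# described above. LendingEnv and ThresholdAgent are stand-ins, not
# classes from ml-fairness-gym.
import random

class LendingEnv:
    """Toy environment: applicants with credit scores apply for loans."""
    def reset(self):
        self._obs = {"score": random.randint(300, 850), "group": random.choice("AB")}
        return self._obs

    def step(self, action):
        # Higher scores repay more often; a rejection yields no reward.
        repaid = action == "accept" and random.random() < self._obs["score"] / 900
        reward = 1.0 if repaid else (-1.0 if action == "accept" else 0.0)
        return self.reset(), reward  # next observation, reward

class ThresholdAgent:
    """Toy policy: accept any applicant above a fixed score threshold."""
    def act(self, observation):
        return "accept" if observation["score"] >= 650 else "reject"

env, agent = LendingEnv(), ThresholdAgent()
obs, total_reward = env.reset(), 0.0
for _ in range(100):                 # 100 simulated lending decisions
    action = agent.act(obs)
    obs, reward = env.step(action)
    total_reward += reward
print("cumulative reward:", total_reward)
```

In ml-fairness-gym the environment also tracks how each decision shifts applicants' scores over time, which is exactly the long-term dynamic a static test set cannot capture.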
ML-fairness-gym in this way cleverly avoids the pitfalls of static data set analysis. If the test sets (i.e., corpora used to evaluate model performance) in classical fairness evaluations are generated from existing systems, they may be incomplete or reflect the biases inherent to those systems. Furthermore, the actions informed by the output of AI systems can have effects that might influence their future input.
Above: In the lending problem scenario, this graph illustrates changing credit score distributions for two groups over 100 steps of simulation.
“We created the ML-fairness-gym framework to help ML practitioners bring simulation-based analysis to their ML systems, an approach that has proven effective in many fields for analyzing dynamic systems, where closed form analysis is difficult,” wrote Google Research software engineer Hansa Srinivasan in a blog post.
Several environments that simulate the repercussions of different automated decisions are available, including for college admissions, lending, attention allocation, and infectious disease. (The ML-fairness-gym team cautions that the environments aren’t meant to be hyper-realistic and that best-performing policies won’t necessarily translate to the real world.) Each has a set of experiments corresponding to published papers, which are meant to provide examples of ways ML-fairness-gym can be used to investigate outcomes.
The researchers recommend using ML-fairness-gym to explore phenomena like censoring in the observation mechanism, errors from the learning algorithm, and interactions between the decision policy and the environment. The simulations allow for the auditing of agents to assess the fairness of decision policies based on observed data, which can motivate data collection policies. And they can be used in concert with reinforcement learning algorithms — which spur on agents with rewards — to derive new policies with potentially novel fairness properties.
In recent months, a number of corporations, government agencies, and independent researchers have attempted to tackle the “black box” problem in AI — the opaqueness of some AI systems — with varying degrees of success.
“Machine learning systems have been increasingly deployed to aid in high-impact decision-making, such as determining criminal sentencing, child welfare assessments, who receives medical attention, and many other settings,” continued Srinivasan. “We’re excited about the potential of the ML-fairness-gym to help other researchers and machine learning developers better understand the effects that machine learning algorithms have on our society, and to inform the development of more responsible and fair machine learning systems.” In 2017, the U.S. Defense Advanced Research Projects Agency launched DARPA XAI , a program that aims to produce “glass box” models that can be easily understood without sacrificing performance. In August, scientists from IBM proposed a “factsheet” for AI that would provide information about a model’s vulnerabilities, bias, susceptibility to adversarial attacks, and other characteristics. A recent Boston University study proposed a framework to improve AI fairness. And Microsoft, IBM, Accenture, and Facebook have developed automated tools to detect and mitigate bias in AI algorithms.
"
|
15,618 | 2,020 |
"Montreal AI Ethics Institute suggests ways to counter bias in AI models | VentureBeat"
|
"https://venturebeat.com/2020/06/30/montreal-ai-ethics-institute-suggests-ways-to-counter-bias-in-ai-models"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Montreal AI Ethics Institute suggests ways to counter bias in AI models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The Montreal AI Ethics Institute, a nonprofit research organization dedicated to defining humanity’s place in an algorithm-driven world, today published its inaugural State of AI Ethics report.
The 128-page multidisciplinary paper, which covers a set of areas spanning agency and responsibility, security and risk, and jobs and labor, aims to bring attention to key developments in the field of AI this past quarter.
The State of AI Ethics first addresses the problem of bias in ranking and recommendation algorithms, like those used by Amazon to match customers with products they’re likely to purchase. The authors note that while there are efforts to apply the notion of diversity to these systems, they usually consider the problem from an algorithmic perspective and strip it of cultural and contextual social meanings.
“Demographic parity and equalized odds are some examples of this approach that apply the notion of social choice to score the diversity of data,” the report reads. “Yet, increasing the diversity, say along gender lines, falls into the challenge of getting the question of representation right, especially trying to reduce gender and race into discrete categories that are one-dimensional, third-party, and algorithmically ascribed.” The authors advocate a solution in the form of a framework that does away with rigid, ascribed categories and instead looks at subjective ones derived from a pool of “diverse” individuals: determinantal point process (DPP). Put simply, it’s a probabilistic model of repulsion that clusters together data a person feels represents them in embedding spaces — the spaces containing representations of words, images, and other inputs from which AI models learn to make predictions.
In a paper published in 2018, researchers at Hulu and video sharing startup Kuaishou used DPP to create a recommendation algorithm enabling users to discover videos with a better relevance-diversity trade-off than previous work. Similarly, Google researchers tested a YouTube recommender system that statistically modeled diversity based on DPPs and led to a “substantial” increase in user satisfaction.
The State of AI Ethics authors acknowledge that DPP leaves open the question of sourcing ratings from people about what represents them well and encoding these in a way that’s amenable to “teaching” an algorithmic model. Nonetheless, they argue DPP provides an interesting research direction that might lead to more representation and inclusion in AI systems across domains.
“Humans have a history of making product design decisions that are not in line with the needs of everyone,” the authors write. “Products and services shouldn’t be designed such that they perform poorly for people due to aspects of themselves that they can’t change … Biases can enter at any stage of the [machine learning] development pipeline and solutions need to address them at different stages to get the desired results. Additionally, the teams working on these solutions need to come from a diversity of backgrounds including [user interface] design, [machine learning], public policy, social sciences, and more.” The report examines Google’s Quick Draw — an AI system that attempts to guess users’ doodles of items — as a case study. The goal of Quick Draw, which launched in November 2016, was to collect data from groups of users by gamifying it and making it freely available online. But over time, the system became exclusionary toward objects like women’s apparel because the majority of people drew unisex accessories.
“Users don’t use systems exactly in the way we intend them to, so [engineers should] reflect on who [they’re] able to reach and not reach with [their] system and how [they] can check for blind spots, ensure that there is some monitoring for how data changes, over time and use these insights to build automated tests for fairness in data,” the report’s authors write. “From a design perspective, [they should] think about fairness in a more holistic sense and build communication lines between the user and the product.” The authors also recommend ways to rectify the private sector’s ethical “race to the bottom” in pursuit of profit. Market incentives harm morality, they assert, and recent developments bear that out. While companies like IBM , Amazon , and Microsoft have promised not to sell their facial recognition technology to law enforcement in varying degrees, drone manufacturers including DJI and Parrot don’t bar police from purchasing their products for surveillance purposes. And it took a lawsuit from the U.S. Department of Housing and Urban Development before Facebook stopped allowing advertisers to target ads by race, gender, and religion.
“Whenever there is a discrepancy between ethical and economic incentives, we have the opportunity to steer progress in the right direction,” the authors write. “Often the impacts are unknown prior to the deployment of the technology at which point we need to have a multi-stakeholder process that allows us to combat harms in a dynamic manner. Political and regulatory entities typically lag technological innovation and can’t be relied upon solely to take on this mantle.” The State of AI Ethics makes the powerful, if obvious, assertion that progress doesn’t happen on its own. It’s driven by conscious human choices influenced by surrounding social and economic institutions — institutions for which we’re responsible. It’s imperative, then, that both the users and designers of AI systems play an active role in shaping those systems’ most consequential pieces.
“Given the pervasiveness of AI and by virtue of it being a general-purpose technology, the entrepreneurs and others powering innovation need to take into account that their work is going to shape larger societal changes,” the authors write. “Pure market-driven innovation will ignore societal benefits in the interest of generating economic value … Economic market forces shape society significantly, whether we like it or not.”
"
|
15,619 | 2,020 |
"LinkedIn open-sources toolkit to measure AI model fairness | VentureBeat"
|
"https://venturebeat.com/2020/08/25/linkedin-open-sources-toolkit-to-measure-ai-model-fairness"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages LinkedIn open-sources toolkit to measure AI model fairness Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
LinkedIn today released the LinkedIn Fairness Toolkit (LiFT), an open source software library designed to enable the measurement of fairness in AI and machine learning workflows. The company says LiFT can be deployed during training and scoring to measure biases in training data sets, and to evaluate notions of fairness for models while detecting differences in their performance across subgroups.
There are countless definitions of fairness in AI, each capturing different aspects of fairness to users. Monitoring models along these definitions is a step toward ensuring fair experiences, but although several toolkits tackle fairness-related challenges, most don’t address large-scale problems and are tied to specific cloud environments.
By contrast, LiFT can be leveraged for ad hoc fairness analysis or as a part of any large-scale A/B testing system. It’s usable for exploratory analysis and in production, with bias measurement components that can be integrated into stages of a machine learning training and serving system. Moreover, it introduces a novel metric-agnostic testing framework that can detect statistically significant differences in performance as measured across different subgroups.
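LiFT's own API lives in the project's repository; as a plain-Python stand-in for the kind of metric such toolkits report, the sketch below computes the gap in false positive rates between two subgroups. The labels, predictions, and group assignments are invented.

```python
# Plain-Python sketch (not the LiFT API) of a subgroup fairness metric:
# the gap in false positive rates between two groups. Data is invented.
def false_positive_rate(labels, preds):
    false_positives = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    negatives = sum(1 for y in labels if y == 0)
    return false_positives / negatives if negatives else 0.0

labels = [0, 0, 1, 0, 1, 0, 0, 1]
preds  = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

def group_fpr_gap(labels, preds, groups):
    rates = {}
    for g in sorted(set(groups)):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = false_positive_rate([labels[i] for i in idx],
                                       [preds[i] for i in idx])
    return rates, max(rates.values()) - min(rates.values())

rates, gap = group_fpr_gap(labels, preds, groups)
print(rates, "gap:", round(gap, 3))  # group A: 0.333, group B: 0.5, gap: 0.167
```

LiFT computes this sort of quantity over Spark-backed distributions rather than Python lists, and its metric-agnostic testing framework then asks whether an observed gap is statistically significant.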
LiFT is reusable, LinkedIn says, with wrappers and a configuration language intended for deployment. At the highest level, the library provides a basic driver program powered by a simple configuration, enabling fairness measurement for data sets and models without the need to write code and related unit tests. But LiFT also provides access to higher-level and lower-level APIs that can be used to compute fairness metrics at all levels of granularity, with the ability to extend key classes to enable custom computation.
To achieve scalability, LiFT taps Apache Spark, loading data sets into an organized database with only the primary key, labels, predictions, and protected attributes. Data distributions are computed and stored on a single system in-memory to speed up the computation of subsequent fairness metric computations; users can operate on these distributions or deal with cached data sets for more involved metrics.
To date, LinkedIn says it has applied LiFT internally to measure the fairness metrics of training data sets for models prior to their training. In the future, the company plans to increase the number of pipelines where it’s measuring and mitigating bias on an ongoing basis through deeper integration of LiFT.
“News headlines and academic research have emphasized that widespread societal injustice based on human biases can be reflected both in the data that is used to train AI models and the models themselves. Research has also shown that models affected by these societal biases can ultimately serve to reinforce those biases and perpetuate discrimination against certain groups,” LinkedIn senior software engineer Sriram Vasudevan, machine learning engineer Cyrus DiCiccio, and staff applied researcher Kinjal Basu wrote in a blog post. “We are working toward creating a more equitable platform by avoiding harmful biases in our models and ensuring that people with equal talent have equal access to job opportunities.”
"
|
15,620 | 2,020 |
"Google's MinDiff aims to mitigate unfair biases in classifiers | VentureBeat"
|
"https://venturebeat.com/2020/11/16/googles-mindiff-aims-to-mitigate-unfair-biases-in-classifiers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s MinDiff aims to mitigate unfair biases in classifiers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Google today released MinDiff, a new framework for mitigating (but not eliminating) unfair biases when training AI and machine learning models. The company says MinDiff is the culmination of years of work and has already been incorporated into various Google products, including models that moderate content quality.
The task of classification, which involves sorting data into labeled categories, is prone to biases against groups that are underrepresented in model training datasets. One of the most common metrics used to measure this bias is equality of opportunity, which seeks to minimize differences in false positive rates across different groups. But it's often difficult to achieve balance because of sparse data about demographics, the unintuitive nature of debiasing tools, and unacceptable accuracy tradeoffs.
MinDiff leverages in-process approaches in which a model's training objective is augmented with an objective focused on removing biases. This new objective is then optimized over a small sample of data with known demographic information. Given two slices of data, MinDiff penalizes the model for differences in the distributions of scores between the two sets; as the model trains, it tries to minimize the penalty by bringing the distributions closer together.
To improve ease of use, researchers at Google switched from adversarial training to a regularization framework that penalizes statistical dependency between its predictions and demographic information for non-harmful examples. This encourages models to equalize error rates across all groups.
MinDiff minimizes the correlation between the predictions and the demographic group, which fine-tunes for the average and variance of predictions to be equal across groups even if the distributions differ afterward. It also considers the maximum mean discrepancy loss, which Google claims is better able to both remove biases and maintain model accuracy.
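Google describes the penalty only at a high level; the snippet below is a rough numerical illustration, not the MinDiff implementation, of a kernel-based maximum mean discrepancy between the score distributions of two groups, which is the quantity a MinDiff-style penalty pushes toward zero during training.

```python
# Rough numerical illustration (not the MinDiff implementation) of a
# Gaussian-kernel maximum mean discrepancy (MMD) between the score
# distributions of two groups. The scores below are toy values.
import numpy as np

def gaussian_kernel(x, y, sigma=0.5):
    return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2 * sigma ** 2))

def mmd_squared(scores_a, scores_b, sigma=0.5):
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    return (gaussian_kernel(a, a, sigma).mean()
            + gaussian_kernel(b, b, sigma).mean()
            - 2 * gaussian_kernel(a, b, sigma).mean())

# Model scores on non-harmful examples from two demographic slices.
group_a_scores = [0.2, 0.4, 0.5, 0.7]
group_b_scores = [0.6, 0.7, 0.8, 0.9]

penalty = mmd_squared(group_a_scores, group_b_scores)
print("MMD^2 penalty:", round(penalty, 4))
# During training, a weighted version of this term would be added to the
# classification loss, nudging the two score distributions together.
```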
Google says MinDiff is the first in what will be a larger “model remediation library” of techniques suitable for different use cases. “Gaps in error rates of classifiers is an important set of unfair biases to address, but not the only one that arises in machine learning applications,” Google senior software engineer Flavien Prost and staff research scientist Alex Beutel wrote in a blog post. “For machine learning researchers and practitioners, we hope this work can further advance research toward addressing even broader classes of unfair biases and the development of approaches that can be used in practical applications.” Google previously open-sourced ML-fairness-gym, a set of components for evaluating algorithmic fairness in simulated social environments. Other model debiasing and fairness tools in the company's suite include the What-If Tool, a bias-detecting feature of the TensorBoard web dashboard for its TensorFlow machine learning framework, and an accountability framework intended to add a layer of quality assurance for businesses deploying AI models.
"
|
15,621 | 2,018 |
"The American public is already worried about AI catastrophe - Vox"
|
"https://www.vox.com/future-perfect/2019/1/9/18174081/fhi-govai-ai-safety-american-public-worried-ai-catastrophe"
|
"Vox homepage Give Give Newsletters Newsletters Site search Search Vox main menu Explainers Crossword Video Podcasts Politics Policy Culture Science Technology Climate Health Money Life Future Perfect Newsletters More Explainers Israel-Hamas war 2024 election Supreme Court Buy less stuff Open enrollment What to watch All explainers Crossword Video Podcasts Politics Policy Culture Science Technology Climate Health Money Life Future Perfect Newsletters We have a request Vox's journalism is free, because we believe that everyone deserves to understand the world they live in. Reader support helps us do that. Can you chip in to help keep Vox free for all? × Filed under: Future Perfect The American public is already worried about AI catastrophe A new report suggests that we expect big advances in software capabilities — and we’re nervous.
By Kelsey Piper, Jan 9, 2019, 3:30pm EST
A new report suggests Americans are concerned about AI. (Javier Zarracina/Vox)
This story is part of a group of stories called Finding the best ways to do good.
For decades, some researchers have been arguing that general artificial intelligence will, if improperly deployed, harm and possibly endanger humanity.
For a long time, these worries were unheard of or on the back burner for most people — AI looked to be a long way away.
In the past few years, though, that has been changing. We’ve seen major advances in what AI systems can do — and a new report from the Center for the Governance of AI suggests that many people are concerned about where AI will lead next.
“People are not convinced that advanced AI will be to the benefit of humanity,” Allan Dafoe, an associate professor of international politics of artificial intelligence at Oxford and a co-author of the report, told me.
The Center for the Governance of AI is part of Oxford University’s Future of Humanity Institute. The new report, by Dafoe and Yale’s Baobao Zhang, is based on a 2018 survey of 2,000 US adults.
There are two big surprises in the report. The first is that the median respondent expects big developments in AI within the next decade — and, relatedly, the median respondent is nearly as concerned with “big picture” AI concerns like artificial general intelligence as with concerns like data privacy and cyberattacks.
The second is that concern about risks from AI, often stereotyped as a concern exclusively of Silicon Valley software engineers inflating the importance of their own work, is actually common at all income levels and for all backgrounds — with low-income people and women being the most concerned about AI.
Surveying the general public isn’t a good way to learn whether AI is actually a risk, of course — small differences in phrasing on surveys can affect the responses dramatically, and especially on a topic as contentious as AI, the public can easily be just misinformed. But surveys like this still matter for AI policy work — they help researchers identify which AI safety concerns are now mainstream and which are misunderstood, and they paint a clearer picture of how the public is looking at transformative technology on the horizon.
What concerns about AI do people have? The word AI is used to refer both to present-day technology like Siri, Google Translate, and IBM’s Watson and to transformative future technologies that surpass human capabilities in all areas. That means surveying people about “risks from AI” is a fraught project — some of them will be thinking about Facebook’s News Feed, and some of them, like Stephen Hawking , about technologies that exceed our intelligence “by more than ours exceeds that of snails.” The survey handled this by identifying 13 possible challenges from AI systems. Each respondent saw five of the 13 and was asked to rank those five on a scale from 0 (not at all important) to 3 (very important).
Among the concerns respondents rated most likely to impact large numbers of people, and most urgent for tech companies and governments to tackle, were data privacy, digital manipulation (for example, fake images), AI-enhanced cyberattacks, and surveillance. But the most striking result was that respondents were also deeply concerned with more “long-term” concerns.
Many people who talk about AI safety distinguish between the problems we’re already having today — with algorithmic bias, transparency, and interpretability of AI systems — and the problems that won’t arise until AI systems are vastly more capable than they are today, like extinction risks from general artificial intelligence.
Other experts think this is a false dichotomy — the reason general artificial intelligence will be so dangerous is that the machine learning systems we have today often pursue their goals in unexpected ways, and their behavior can get more unpredictable as they get more powerful.
Survey respondents on average ranked tomorrow’s AI concerns — like technological unemployment, failures of “value alignment” (failing to design systems that share our goals), and “critical AI safety failures” that kill at least 10 percent of people on Earth — as nearly as important as present-day concerns. “The public regards as important the whole space of AI governance issues, including privacy, fairness, autonomous weapons, unemployment, and other risks that may arise from advanced AI,” Dafoe told me. That might suggest that policymakers should be trying to address all these issues hand in hand — and that it’d be a mistake to ignore any.
Who’s afraid of risks from advanced artificial intelligence? Fears of risks from advanced artificial intelligence are often attributed to Silicon Valley, and sometimes covered as if they’re yet another fad out of the Bay Area tech community.
“If Silicon Valley Types Are Scared of A.I., Should We Be?” wondered an article in Slate in 2017, worrying that risks from AI might be “a grandiose delusion, on the part of computer programmers and tech entrepreneurs and other cloistered egomaniacal geeks.” The report suggests that gets it exactly wrong. An overwhelming majority of Americans — 82 percent — “believe that robots and/or AI should be carefully managed,” Zhang and Dafoe write, noting this is “comparable to survey results from EU respondents.” Men are less concerned than women, high-income Americans are less concerned than low-income Americans, and programmers are less concerned than people working in other fields.
Not only are high-income programmers and tech entrepreneurs far from the only ones concerned with AI risk, they are, as a group, more optimistic about AI than most respondents. “People who have CS or engineering degrees or CS or programming experience seem to be more supportive of developing AI and seem to be less concerned with these AI governance challenges we ask about,” Zhang said. (Of course, many prominent computer scientists and machine learning researchers are also among those calling for AI safety research.)
Do different demographic groups fear different AI scenarios, though? For example, is it the case that programmers and tech entrepreneurs are more concerned with disastrous AI system deployments, while low-income respondents fear technological unemployment? Studying cross-sections of a survey like that can introduce some spurious results, so a better analysis of this question will need a lot more data, but there’s no indication that’s going on here. Fears of disastrous system deployments and fears of data privacy problems aren’t held by disparate groups of people; most respondents ranked both highly. It might be time to lay “AI is a rich techie concern” to rest — AI will affect everyone, and this poll suggests that almost everyone has some reservations about it.
The public expects huge advances in AI — soon Expert estimates of when we can next expect big advances in AI vary immensely.
While some expect to keep building on the momentum of recent years and deploy world-altering systems within the next few decades, others have argued that general AI might be centuries off.
The general public, according to the new report, expects progress quickly. The survey asked respondents to predict “when machines are able to perform almost all tasks that are economically relevant today better than the median human.” That would be a sea change in the global economy. The median respondent predicted a 54 percent chance of AI with those capabilities by 2028.
This, as the report notes, is “considerably sooner than the predictions by experts in two previous surveys. In Müller and Bostrom (2014) , expert respondents predict a 50 percent probability of high-level human intelligence being developed by 2040-2050 and 90 percent by 2075. In Grace et al. (2018) , experts predict that there is a 50 percent chance that high-level machine intelligence will be built by 2061.” Part of the difference might be that Zhang and Dafoe ask about an AI that surpasses median human capabilities, while Grace asked about an AI that surpasses most human capabilities — but Zhang and Dafoe found the gap between popular opinion and expert opinion when they asked the exact same question as Grace asked experts.
Some machine learning researchers worry that high public expectations about AI could actually kill the industry: If results don’t arrive as quickly as people are expecting them, the public will quickly grow disillusioned, and there’ll be less public pressure for good policy around AI when the public has dismissed it.
If this survey is right — and, again, it’s just one survey — it looks like the public is paying attention to advances in AI and is apprehensive about future advances. That doesn’t mean public expectations necessarily match machine learning researchers’ best understanding of which problems are the key ones ahead. Progress toward safe deployment of AI systems takes more than public interest in a topic, but the public interest in the topic nonetheless suggests that AI safety may be starting to go mainstream.
"
|
15,622 | 2,018 |
"Gmail's getting an AI-powered Smart Compose feature for faster emailing | VentureBeat"
|
"https://venturebeat.com/2018/05/08/gmails-getting-a-smart-compose-feature-for-faster-replies"
|
Gmail’s getting an AI-powered Smart Compose feature for faster emailing
Gmail: Smart Compose
Google has announced a new Smart Compose feature is coming to Gmail.
The reveal was made today by Google CEO Sundar Pichai at Google’s annual I/O developer conference, held May 8-10 this year in Mountain View, California.
In a nutshell, Smart Compose taps the wonders of artificial intelligence (AI) to help users formulate drafts, from the beginning to the end. It’s basically like real-time auto-complete for entire emails, with Gmail serving up suggestions as you type.
Above: Smart Compose Anyone already accustomed to predictive keyboards will be familiar with the basic concept behind Smart Compose. It uses historical grammar and typing patterns to guess what it thinks you want to say, and then if you like the suggestion, just hit the tab key to enact it.
Additionally, Smart Compose will also tap contextual cues to make some suggestions. If you’re writing the email on a Friday, for example, it may suggest “Have a nice weekend” as a closing pleasantry.
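To make the prefix-plus-context idea concrete, here is a deliberately tiny suggester built from word-pair counts and a single day-of-week rule. It is a toy sketch, not Gmail's model (which relies on a large neural language model), and every class and method name is invented.

```python
from collections import Counter, defaultdict

class ToyComposer:
    """Toy next-word suggester; illustrative only."""

    def __init__(self):
        self.next_words = defaultdict(Counter)

    def train(self, past_emails):
        # Count which word has historically followed each word the user typed.
        for text in past_emails:
            words = text.lower().split()
            for prev, nxt in zip(words, words[1:]):
                self.next_words[prev][nxt] += 1

    def suggest(self, draft, weekday=None):
        # Contextual cue: offer a closing pleasantry near the weekend.
        if weekday == "Friday" and draft.rstrip().endswith((".", "!")):
            return "Have a nice weekend"
        words = draft.lower().split()
        if not words or words[-1] not in self.next_words:
            return ""
        return self.next_words[words[-1]].most_common(1)[0][0]

composer = ToyComposer()
composer.train(["thanks for the update", "thanks for the quick reply"])
print(composer.suggest("thanks for"))   # -> "the"
print(composer.suggest("Talk soon.", weekday="Friday"))
```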
This new feature will be landing in the consumer version of Gmail “in the coming weeks,” with support for G Suite users arriving later this year.
The news comes just a few weeks after Google unveiled the all-new Gmail, featuring confidential mode, nudges, snooze, and a bunch more notable upgrades. It seems that Smart Compose fits into that upcoming update, meaning that you have to manually opt in by activating the “Try the new Gmail” option in settings and then enabling “experimental access.” While this won’t be available by default at first, Google previously revealed that it would be pushing all these new Gmail upgrades more proactively in the coming months.
"
|
15,623 | 2,018 |
"Microsoft completes its $7.5 billion GitHub acquisition | VentureBeat"
|
"https://venturebeat.com/2018/10/26/microsoft-completes-its-7-5-billion-github-acquisition"
|
Microsoft completes its $7.5 billion GitHub acquisition
The GitHub Octocat figurine.
Nearly five months after announcing its plans to acquire GitHub, Microsoft has officially closed the $7.5 billion deal.
While the deal was expected to close without any major hiccup, now that it is finalized, developers the world over will await the fate of their favorite code-hosting repository. But first, a new GitHub CEO is about to start.
“Monday is my first day as [GitHub] CEO,” announced incoming CEO Nat Friedman, formerly CEO of Xamarin, which was acquired by Microsoft in 2016.
GitHub had been on the hunt for a new CEO since last August, after cofounder Chris Wanstrath revealed he was stepping down.
Microsoft has increasingly embraced open source technologies in recent years, and its projects already attract more contributors than other projects on GitHub. Buying GitHub is Microsoft’s way of getting closer to the developer community.
However, the plans are to still run GitHub as a standalone entity.
“GitHub will operate independently as a community, platform, and business,” Friedman added. “This means that GitHub will retain its developer-first values, distinctive spirit, and open extensibility. We will always support developers in their choice of any language, license, tool, platform, or cloud.” GitHub represents one of Microsoft’s five biggest acquisitions to date , after LinkedIn, Skype, and Nokia’s mobile phone unit.
"
|
15,624 | 2,020 |
"Codota raises $12 million for AI that suggests and autocompletes code | VentureBeat"
|
"https://venturebeat.com/2020/04/27/codota-raises-12-million-for-ai-that-suggests-and-autocompletes-code"
|
Codota raises $12 million for AI that suggests and autocompletes code
The Codota plugin for Eclipse.
Codota , a startup developing a platform that suggests and autocompletes Python, C, HTML, Java, Scala, Kotlin, and JavaScript code, today announced that it raised $12 million. The bulk of the capital will be spent on product R&D and sales growth, according to CEO and cofounder Dror Weiss.
Companies like Codota seem to be getting a lot of investor attention lately, and there’s a reason. According to a study published by the University of Cambridge’s Judge Business School, programmers spend 50.1% of their work time not programming; the other half is debugging. And the total estimated cost of debugging is $312 billion per year. AI-powered code suggestion and review tools, then, promise to cut development costs substantially while enabling coders to focus on more creative, less repetitive tasks.
Codota’s cloud-based and on-premises solutions — which it claims are used by developers at Google, Alibaba, Amazon, Airbnb, Atlassian, and Netflix — complete lines of code based on millions of Java programs and individual context locally, without sending any sensitive data to remote servers. They surface relevant examples of Java API within integrated development environments (IDE) including Android Studio, VSCode, IntelliJ, Webstorm, and Eclipse, and Codota’s engineers vet the recommendations to ensure they’ve been debugged and tested.
Codota says the program analysis, natural language processing, and machine learning algorithms powering its platform learn individual best practices and warn of deviation, largely by extracting an anonymized summary of the current IDE scope (but not keystrokes or string contents) and sending it via an encrypted connection to Codota. The algorithms are trained to understand the semantic models of code — not just the source code itself — and trigger automatically whenever they identify useful suggestions. (Alternatively, suggestions can be manually triggered with a keyboard shortcut.) Codota is free for individual users — the company makes money from Codota Enterprise, which learns the patterns and rules in a company’s proprietary code. The free tier’s algorithms are trained only on vetted open source code from GitHub, StackOverflow, and other sources.
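To illustrate what an anonymized summary of the current scope might contain, the sketch below walks a Python syntax tree and keeps only identifier and called-method names while skipping string literals. It is a hypothetical stand-in for the idea, not Codota's pipeline, which targets Java and other languages and performs far deeper program analysis.

```python
import ast

def summarize_scope(source_code):
    """Collect identifier and called-method names from a snippet while
    ignoring string literals, so their contents never leave the editor."""
    tree = ast.parse(source_code)
    names, calls = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            names.add(node.id)
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            calls.add(node.func.attr)
        # ast.Constant nodes (including strings) are deliberately not recorded.
    return {"identifiers": sorted(names), "called_methods": sorted(calls)}

print(summarize_scope("agent = request.headers.get('User-Agent')"))
# -> {'identifiers': ['agent', 'request'], 'called_methods': ['get']}
```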
Codota acquired competitor TabNine in December last year, and since then, its user base has grown by more than 1,000% to more than a million developers monthly. That positions it well against potential rivals like Kite , which raised $17 million last January for its free developer tool that leverages AI to autocomplete code, and DeepCode , whose product learns from GitHub project data to give developers AI-powered code reviews.
This latest funding round — which was led by e.ventures, with the participation of existing investor Khosla Ventures and new investors TPY Capital and Hetz Ventures — came after seed rounds totaling just over $2.5 million. It brings Codota’s total raised to over $16 million. As a part of it, e.ventures general partner Tom Gieselmann will join Codota’s board of directors.
Codota is headquartered in Tel Aviv. It was founded in 2015 by Weiss and CTO Eran Yahav, a Technion professor and former IBM Watson Fellow.
"
|
15,625 | 2,020 |
"GitHub launches Codespaces for browser-based coding | VentureBeat"
|
"https://venturebeat.com/2020/05/06/github-launches-codespaces-for-browser-based-coding"
|
GitHub launches Codespaces for browser-based coding
GitHub Codespaces
GitHub announced a handful of new features and updates at its online Satellite 2020 event today, covering the cloud, collaboration, security, and more.
As with other technology companies , the Microsoft- owned code-hosting platform has chosen to move its annual developer event online due to the COVID-19 crisis, with Satellite 2020 representing GitHub’s first ever virtual conference. In an accompanying blog post, GitHub’s senior VP of product Shanku Niyogi said that this year’s event was all about “giving communities tools to come together to solve the problems that matter to them and removing barriers that stand in their way.” The biggest facet of today’s news is a new product called GitHub Codespaces, which is designed to make it easier for developers to join a project, launch a developer environment, and start coding with minimal configuration — all from a browser. Available in “limited public beta” from this week, Codespaces is a cloud-hosted development environment with all the GitHub features, and it can be set up to load a developer’s code and dependencies, extensions, and dotfiles, and includes a built-in debugger.
It’s worth noting here that Microsoft last year launched an online version of Visual Studio called (unsurprisingly) Visual Studio Online, and recently rebranded it as Visual Studio Codespaces.
And this gives a strong hint as to the building blocks of the new GitHub Codespaces — this is Microsoft bringing Visual Studio Codespaces’ branding and browser-based functionality to GitHub.
Above: GitHub Codespaces Code-editing functionality in Codespaces will always be free, according to GitHub, and for the duration of the beta the whole product will be free — though at some point it will ship under a pay-as-you-go pricing model.
GitHub is also gearing up to launch a new community-centric portal where developers can ask questions and converse around specific problems or topics inside a project repository. Before now, such discussions could only really take place through issues and pull requests , while there was a separate discussions tool for teams to plan and share information.
With GitHub Discussions, GitHub is now looking to build a community knowledge base outside the main codebase, and in truth it seems like it’s setting out to achieve something similar to Stack Overflow.
Discussions are built around threads, and questions can be marked as “answered” for future reference.
Above: GitHub Discussions GitHub Discussions has been available in limited private beta for a while already in several open source communities, and the company said that it will be opening it up to all open source communities this summer.
Elsewhere, GitHub also announced two new beta cloud security features as part of its advanced security offering. Code scanning is a new native GitHub tool that automatically scans every git push for vulnerabilities, with results shown inside the pull request. According to GitHub, code scanning uses CodeQL, an advanced semantic analysis engine it procured via its Semmle acquisition last year.
Above: GitHub code scanning And then there’s secret scanning, formerly known as token scanning, which helps companies identify cryptographic secrets inside code so that they can be revoked before they’re intercepted by bad actors. Secret scanning was made available for public repositories back in 2018, and now it will be made available for private repositories too.
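Conceptually, secret scanning comes down to matching pushed text against known token formats and flagging hits for revocation. The two patterns below are illustrative only; GitHub's service matches many provider-specific formats and works with the issuing services to verify and revoke them.

```python
import re

SECRET_PATTERNS = {
    # Example formats only; real scanners track many vendor-specific shapes.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api|secret)[_-]?key\s*[:=]\s*['\"][A-Za-z0-9/+=]{20,}['\"]"),
}

def scan_for_secrets(diff_text):
    """Return (pattern_name, matched_text) pairs found in a pushed diff."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(diff_text):
            findings.append((name, match.group(0)))
    return findings

print(scan_for_secrets('aws_key = "AKIAIOSFODNN7EXAMPLE"'))
```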
Finally, GitHub also announced that “private instances” would be available soon for enterprises that operate in highly regulated industries, which will bring a handful of security and policy features such as bring-your-own-key encryption, backup archiving, and tools to help companies comply with local data sovereignty regulations.
"
|
15,626 | 2,019 |
"Facebook open-sources AI Habitat to help robots navigate realistic environments | VentureBeat"
|
"https://venturebeat.com/2019/06/14/facebook-open-sources-ai-habitat-to-help-robots-navigate-realistic-environments"
|
Facebook open-sources AI Habitat to help robots navigate realistic environments
Facebook AI Research is today making available AI Habitat , a simulator that can train AI agents that embody things like a home robot to operate in environments meant to mimic typical real-world settings like an apartment or office.
For a home robot to understand what to do when you say “Can you check if the laptop is in the other room and, if it is, can you bring it to me?” will require drawing together multiple forms of intelligence.
Embodied AI research can be put to use to help robots navigate indoor environments by marrying together a number of AI systems related to computer vision, natural language understanding, and reinforcement learning.
“Habitat-Sim achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU, which is orders of magnitude faster than the closest simulator,” a dozen AI researchers said in a paper about Habitat.
“Once a promising approach has been developed and tested in simulation, it can be transferred to physical platforms that operate in the real world.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Facebook Reality Labs, formerly named Oculus Research, is also open-sourcing Replica, a data set of photorealistic 3D environments like a retail store, apartment, and other indoor environments that resemble the real world. AI Habitat can work with Replica but also works with other embodied AI research data sets like Matterport3D for indoor environments.
Simulated data is commonly used in AI to train robotic systems, create reinforcement learning models, and power AI systems from Amazon Go to enterprise applications of few-shot learning with small amounts of data. Simulations can allow environmental control, reducing costs that arise from the need to collect real-world data.
AI Habitat was introduced in an effort to create a unified environment and address standardization for embodied research by the robotics and AI community. To that end, Facebook also released PyTorch Hub earlier this week.
“We aim to learn from the successes of previous frameworks and develop a unifying platform that combines their desirable characteristics while addressing their limitations. A common, unifying platform can significantly accelerate research by enabling code re-use and consistent experimental methodology. Moreover, a common platform enables us to easily carry out experiments testing agents based on different paradigms (learned vs. classical) and generalization of agents between datasets,” said Facebook.
In addition to the Habitat simulation engine, the Habitat API provides a library of high-level embodied AI algorithms for things like navigation, instruction following, and question answering.
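To make the shape of that kind of API concrete, here is a minimal, self-contained sketch of the observe-act loop that embodied-agent code built on a simulator typically follows. The environment class, action names, observation fields, and reward values are invented stand-ins, not Habitat's actual interface.

```python
import random

ACTIONS = ["move_forward", "turn_left", "turn_right", "stop"]

class DummyNavEnv:
    """Stand-in environment so the loop below runs; a real experiment would
    plug in the simulator's own environment class instead."""
    def __init__(self, goal_distance=5):
        self.goal_distance = goal_distance
        self.remaining = goal_distance

    def reset(self):
        self.remaining = self.goal_distance
        return {"rgb": None, "depth": None, "distance_to_goal": self.remaining}

    def step(self, action):
        if action == "move_forward":
            self.remaining -= 1
        done = self.remaining <= 0 or action == "stop"
        reward = 10.0 if self.remaining <= 0 else -0.01   # sparse success bonus
        obs = {"rgb": None, "depth": None, "distance_to_goal": self.remaining}
        return obs, reward, done, {}

def run_episode(env, policy, max_steps=500):
    # Generic observe-act loop shared by most embodied-AI training code.
    obs, total = env.reset(), 0.0
    for _ in range(max_steps):
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
        if done:
            break
    return total

print(run_episode(DummyNavEnv(), lambda obs: random.choice(ACTIONS)))
```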
Facebook released the PyTorch Hub platform for reproducibility of AI models earlier this week.
Researchers found that “learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations” and that only agents with depth sensors generalize well across datasets.
“AI Habitat consists of a stack of three modular layers, each of which can be configured or even replaced to work with different kinds of agents, training techniques, evaluation protocols, and environments. Separating these layers differentiates the platform from other simulators, whose design can make it difficult to decouple parameters in order to reuse assets or compare results,” the paper reads.
AI Habitat is the latest Facebook AI initiative to use embodied AI research, and follows research to train an AI agent to navigate the streets of New York with 360-degree images and to get around an office by watching videos.
Facebook VP and chief AI scientist Yann LeCun told VentureBeat the company is interested in robotics because the opportunity to tackle complex tasks attracts the top AI talent.
AI Habitat is the most recent example of tech giants attempting to deliver a robotics creation platform for AI developers and researchers.
Microsoft introduced a robotics and AI platform in limited preview last month , while Amazon’s AWS RoboMaker , which draws on Amazon’s cloud and AI systems, made its debut in fall 2018.
How AI Habitat works was detailed in an arXiv paper written by a team that includes Facebook AI Research, Facebook Reality Labs, Intel AI Labs, Georgia Institute of Technology, Simon Fraser University, and University of California, Berkeley.
AI Habitat will be showcased in a workshop next week at the Computer Vision and Pattern Recognition (CVPR) conference in Long Beach, California.
In other recent contributions to the wider AI community, Facebook AI research scientist Mike Lewis and AI resident Sean Vasquez introduced MelNet, a generative model that can imitate music and the voices of people like Bill Gates.
Major object detection AI systems from Google, Microsoft, Amazon, and Facebook are less likely to work for people in South America and Africa than North America and Europe, and less likely to work for households that make less than $50 a month.
Facebook VP of AR/VR Andrew Bosworth earlier this week said new Portal devices — the first after the video chat devices were introduced in October 2018 — will make their public debut this fall.
Facebook also announced plans to open an office with 100 new AI roles in London.
"
|
15,627 | 2,020 |
"Facebook releases tools to help AI navigate complex environments | VentureBeat"
|
"https://venturebeat.com/2020/08/21/facebook-releases-tools-to-help-ai-navigate-complex-environments"
|
Facebook releases tools to help AI navigate complex environments
Facebook says it’s progressing toward assistants capable of interacting with and understanding the physical world as well as people do. The company announced milestones today implying its future AI will be able to learn how to plan routes, look around its physical environments, listen to what’s happening, and build memories of 3D spaces.
The concept of embodied AI draws on embodied cognition , the theory that many features of psychology — human or otherwise — are shaped by aspects of the entire body of an organism. By applying this logic to AI, researchers hope to improve the performance of AI systems like chatbots, robots, autonomous vehicles, and even smart speakers that interact with their environments, people, and other AI. A truly embodied robot could check to see whether a door is locked, for instance, or retrieve a smartphone that’s ringing in an upstairs bedroom.
“By pursuing these related research agendas and sharing our work with the wider AI community, we hope to accelerate progress in building embodied AI systems and AI assistants that can help people accomplish a wide range of complex tasks in the physical world,” Facebook wrote in a blog post.
SoundSpaces While vision is foundational to perception, sound is arguably as important. It captures rich information often imperceptible through visual or force data like the texture of dried leaves or the pressure inside a champagne bottle. But few systems and algorithms have exploited sound as a vehicle to build physical understanding, which is why Facebook is releasing SoundSpaces as part of its embodied AI efforts.
SoundSpaces is a corpus of audio renderings based on acoustical simulations for 3D environments. Designed to be used with AI Habitat, Facebook’s open source simulation platform, the data set provides a software sensor that makes it possible to insert simulations of sound sources in scanned real-world environments.
SoundSpaces is tangentially related to work from a team at Carnegie Mellon University that released a “sound-action-vision” data set and a family of AI algorithms to investigate the interactions between audio, visuals, and movement. In a preprint paper, they claimed the results show representations from sound can be used to anticipate where objects will move when subjected to physical force.
Unlike the Carnegie Mellon study, Facebook says creating SoundSpaces required an acoustics modeling algorithm and a bidirectional path-tracing component to model sound reflections in a room. Since materials affect the sounds received in an environment, like walking across marble floors versus a carpet, SoundSpaces also attempts to replicate the sound propagation of surfaces like walls. At the same time, it allows the rendering of concurrent sound sources placed at multiple locations in environments within popular data sets like Matterport3D and Replica.
In addition to the data, SoundSpaces introduces a challenge that Facebook calls AudioGoal, where an agent must move through an environment to find a sound-emitting object. It’s an attempt to train AI that sees and hears to localize audible targets in unfamiliar places, and Facebook claims it can enable faster training and higher-accuracy navigation compared with conventional approaches.
“This AudioGoal agent doesn’t require a pointer to the goal location, which means an agent can now act upon ‘go find the ringing phone’ rather than ‘go to the phone that is 25 feet southwest of your current position.’ It can discover the goal position on its own using multimodal sensing,” Facebook wrote. “Finally, our learned audio encoding provides similar or even better spatial cues than GPS displacements. This suggests how audio could provide immunity to GPS noise, which is common in indoor environments.” Semantic MapNet Facebook is also today releasing Semantic MapNet, a module that uses a form of spatio-semantic memory to record the representations of objects as it explores its surroundings. (The images are captured from the module’s point of view in simulation, much like a virtual camera.) Facebook asserts these representations of spaces provide a foundation to accomplish a range of embodied tasks, including navigating to a particular location and answering questions.
Semantic MapNet can predict where particular objects (e.g., a sofa or a kitchen sink) are located on a pixel-level, top-down map it creates. MapNet builds what’s known as an “allocentric” memory, which refers to mnemonic representations that capture (1) viewpoint-agnostic relations among items and (2) fixed relations between items and the environment. Semantic MapNet extracts visual features from its observations and then projects them to locations using an end-to-end framework, decoding top-down maps of the environment with labels of objects it has seen.
This technique enables Semantic MapNet to segment small objects that might not be visible from a bird’s-eye view. The projection step also allows Semantic MapNet to reason about multiple observations of a given point and its surrounding area. “These capabilities of building neural episodic memories and spatio-semantic representations are important for improved autonomous navigation, mobile manipulation, and egocentric personal AI assistants,” Facebook wrote.
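To give a concrete, if heavily simplified, picture of what projecting egocentric observations to allocentric map locations involves, the sketch below drops labeled depth pixels onto a top-down grid using plain pinhole-camera geometry. Semantic MapNet learns this mapping end to end with neural features; the function here is only a geometric toy, and every parameter value is an assumption.

```python
import numpy as np

def project_to_topdown(depth, labels, hfov_deg=90.0, cell_size=0.1, grid=64):
    """Drop each labeled pixel onto a top-down grid using its depth."""
    h, w = depth.shape
    top_down = np.zeros((grid, grid), dtype=np.int32)       # 0 = unknown
    focal = (w / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)   # pixels
    for v in range(h):
        for u in range(w):
            z = depth[v, u]                      # forward distance in meters
            if z <= 0:
                continue
            x = (u - w / 2.0) * z / focal        # lateral offset in meters
            col = int(grid / 2 + x / cell_size)
            row = int(grid - 1 - z / cell_size)  # agent sits at the bottom row
            if 0 <= row < grid and 0 <= col < grid:
                top_down[row, col] = labels[v, u]
    return top_down

depth = np.full((4, 4), 2.0)     # a surface 2 m ahead of the camera
labels = np.full((4, 4), 3)      # semantic class id 3, e.g. "sofa"
print(np.unique(project_to_topdown(depth, labels)))   # -> [0 3]
```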
Exploration and mapping Beyond the SoundSpaces data set and MapNet module, Facebook says it has developed a model that can infer parts of a map of an environment that can’t be directly observed, like behind a table in a dining room. The model does this by predicting occupancy — i.e., whether an object is present — from still image frames and aggregating these predictions over time as it learns to navigate its environment.
Facebook says its model outperforms the best competing method using only a third the number of movements, attaining 30% better map accuracy for the same amount of movements. It also received first place in a task at this year’s Conference on Computer Vision and Pattern Recognition that required systems to adapt to poor image quality and run without GPS or compass data.
The model hasn’t been deployed in the real world on a real robot — only in simulation. But Facebook expects that when used with PyRobot , its robotic framework that supports robots like LoCoBot, the model could accelerate research in the embodied AI domain. “These efforts are part of Facebook AI’s long-term goal of building intelligent AI systems that can intuitively think, plan, and reason about the real world, where even routine conditions are highly complex and unpredictable,” the company wrote in a blog post.
Facebook’s other recent work in this area is vision-and-language navigation in continuous environments (VLN-CE) , a training task for AI that involves navigating an environment by listening to natural language directions like “Go down the hall and turn left at the wooden desk.” Ego-Topo , another work-in-progress project, decomposes a space captured in a video into a topological map of activities before organizing the video into a series of visits to different zones.
"
|
15,628 | 2,021 |
"Reinforcement learning competition pushes the boundaries of embodied AI | VentureBeat"
|
"https://venturebeat.com/2021/05/01/reinforcement-learning-competition-pushes-the-boundaries-of-embodied-ai"
|
Reinforcement learning competition pushes the boundaries of embodied AI
Since the early decades of artificial intelligence, humanoid robots have been a staple of sci-fi books, movies, and cartoons. Yet after decades of research and development in AI, we still have nothing that comes close to The Jetsons’ Rosey the Robot.
This is because many of our intuitive planning and motor skills — things we take for granted — are a lot more complicated than we think. Navigating unknown areas, finding and picking up objects, choosing routes, and planning tasks are complicated feats we only appreciate when we try to turn them into computer programs.
Developing robots that can physically sense the world and interact with their environment falls into the realm of embodied artificial intelligence, one of AI scientists’ long-sought goals. And even though progress in the field is still a far shot from the capabilities of humans and animals, the achievements are remarkable.
In a recent development in embodied AI, scientists at IBM, the Massachusetts Institute of Technology, and Stanford University developed a new challenge that will help assess AI agents’ ability to find paths, interact with objects, and plan tasks efficiently. Titled ThreeDWorld Transport Challenge , the test is a virtual environment that will be presented at the Embodied AI Workshop during the Conference on Computer Vision and Pattern Recognition, held online in June.
No current AI techniques come close to solving the TDW Transport Challenge. But the results of the competition can help uncover new directions for the future of embodied AI and robotics research.
Reinforcement learning in virtual environments At the heart of most robotics applications is reinforcement learning , a branch of machine learning based on actions, states, and rewards. A reinforcement learning agent is given a set of actions it can apply to its environment to obtain rewards or reach a certain goal. These actions create changes to the state of the agent and the environment. The RL agent receives rewards based on how its actions bring it closer to its goal.
RL agents usually start by knowing nothing about their environment and selecting random actions. As they gradually receive feedback from their environment, they learn sequences of actions that can maximize their rewards.
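The loop described above can be made concrete with a tabular toy example. The corridor world, reward values, and learning settings below are invented for illustration; real embodied agents swap the lookup table for deep neural networks, but the state-action-reward cycle is the same.

```python
import random

# Tiny corridor world: states 0..4, with a reward for reaching state 4.
N_STATES, ACTIONS = 5, ["left", "right"]
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def env_step(state, action):
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def choose(state, epsilon=0.1):
    # Explore occasionally; otherwise exploit the current value estimates.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    best = max(q_table[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q_table[(state, a)] == best])

for _ in range(300):                       # episodes of trial and error
    state = 0
    for _ in range(30):
        action = choose(state)
        nxt, reward, done = env_step(state, action)
        target = reward + 0.9 * max(q_table[(nxt, a)] for a in ACTIONS)
        q_table[(state, action)] += 0.5 * (target - q_table[(state, action)])
        state = nxt
        if done:
            break

print(choose(0, epsilon=0.0))              # the learned policy heads "right"
```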
This scheme is used not only in robotics, but in many other applications, such as self-driving cars and content recommendations.
Reinforcement learning has also helped researchers master complicated games such as Go, StarCraft 2, and DOTA.
Creating reinforcement learning models presents several challenges. One of them is designing the right set of states, rewards, and actions, which can be very difficult in applications like robotics, where agents face a continuous environment that is affected by complicated factors such as gravity, wind, and physical interactions with other objects. This is in contrast to environments like chess and Go that have very discrete states and actions.
Another challenge is gathering training data. Reinforcement learning agents need to train using data from millions of episodes of interactions with their environments. This constraint can slow robotics applications because they must gather their data from the physical world, as opposed to video and board games, which can be played in rapid succession on several computers.
To overcome this barrier, AI researchers have tried to create simulated environments for reinforcement learning applications. Today, self-driving cars and robotics often use simulated environments as a major part of their training regime.
“Training models using real robots can be expensive and sometimes involve safety considerations,” Chuang Gan, principal research staff member at the MIT-IBM Watson AI Lab, told TechTalks. “As a result, there has been a trend toward incorporating simulators, like what the TDW-Transport Challenge provides, to train and evaluate AI algorithms.” But replicating the exact dynamics of the physical world is extremely difficult, and most simulated environments are a rough approximation of what a reinforcement learning agent would face in the real world. To address this limitation, the TDW Transport Challenge team has gone to great lengths to make the test environment as realistic as possible.
The environment is built on top of the ThreeDWorld platform , which the authors describe as “a general-purpose virtual world simulation platform supporting both near-photo realistic image rendering, physically based sound rendering, and realistic physical interactions between objects and agents.” “We aimed to use a more advanced physical virtual environment simulator to define a new embodied AI task requiring an agent to change the states of multiple objects under realistic physical constraints,” the researchers write in an accompanying paper.
Task and motion planning Reinforcement learning tests have different degrees of difficulty. Most current tests involve navigation tasks, where an RL agent must find its way through a virtual environment based on visual and audio input.
The TDW Transport Challenge, on the other hand, pits the reinforcement learning agents against “task and motion planning” (TAMP) problems. TAMP requires the agent to not only find optimal movement paths but to also change the state of objects to achieve its goal.
The challenge takes place in a multi-roomed house adorned with furniture, objects, and containers. The reinforcement learning agent views the environment from a first-person perspective and must find one or several objects from the rooms and gather them at a specified destination. The agent is a two-armed robot, so it can only carry two objects at a time. Alternatively, it can use a container to carry several objects and reduce the number of trips it has to make.
At every step, the RL agent can choose one of several actions, such as turning, moving forward, or picking up an object. The agent receives a reward if it accomplishes the transfer task within a limited number of steps.
While this seems like the kind of problem any child could solve without much training, it is indeed a complicated task for current AI systems. The reinforcement learning program must find the right balance between exploring the rooms, finding optimal paths to the destination, choosing between carrying objects alone or in containers, and doing all this within the designated step budget.
“Through the TDW-Transport Challenge, we’re proposing a new embodied AI challenge,” Gan said. “Specifically, a robotic agent must take actions to move and change the state of a large number of objects in a photo- and physically realistic virtual environment, which remains a complex goal in robotics.” Abstracting challenges for AI agents Above: In the ThreeDWorld Transport Challenge, the AI agent can see the world through color, depth, and segmentation maps.
While TDW is a very complex simulated environment, the designers have still abstracted some of the challenges robots would face in the real world. The virtual robot agent, dubbed Magnebot, has two arms with nine degrees of freedom and joints at the shoulder, elbow, and wrist. However, the robot’s hands are magnets and can pick up any object without needing to handle it with fingers, which itself is a very challenging task.
The agent also perceives the environment in three different ways: as an RGB-colored frame, a depth map, and a segmentation map that shows each object separately in hard colors. The depth and segmentation maps make it easier for the AI agent to read the dimensions of the scene and tell the objects apart when viewing them from awkward angles.
To avoid confusion, the problems are posed in a simple structure (e.g., “vase:2, bowl:2, jug:1; bed”) rather than as loose language commands (e.g., “Grab two bowls, a couple of vases, and the jug in the bedroom, and put them all on the bed”).
And to simplify the state and action space, the researchers have limited the Magnebot’s navigation to 25-centimeter movements and 15-degree rotations.
These simplifications enable developers to focus on the navigation and task-planning problems AI agents must overcome in the TDW environment.
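As a concrete picture of how compact those task specifications and action spaces are, here is a small sketch that parses the example command format quoted above and lists a plausible discretized action set. The helper names are hypothetical and are not taken from the ThreeDWorld codebase.

```python
MOVE_STEP_M, TURN_STEP_DEG = 0.25, 15     # the discretization described above

ACTIONS = [
    "move_forward",        # advance 0.25 m
    "turn_left",           # rotate 15 degrees counterclockwise
    "turn_right",          # rotate 15 degrees clockwise
    "pick_up",             # magnet grasp with a free arm
    "put_in_container",
    "drop",
]

def parse_task(spec):
    """'vase:2, bowl:2, jug:1; bed' -> target object counts and destination."""
    objects_part, destination = spec.split(";")
    targets = {}
    for item in objects_part.split(","):
        name, count = item.strip().split(":")
        targets[name] = int(count)
    return targets, destination.strip()

print(parse_task("vase:2, bowl:2, jug:1; bed"))
# -> ({'vase': 2, 'bowl': 2, 'jug': 1}, 'bed')
```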
Gan told TechTalks that despite the levels of abstraction introduced in TDW, the robot still needs to address the following challenges: The synergy between navigation and interaction: The agent cannot move to grasp an object if this object is not in the egocentric view, or if the direct path to it is obstructed.
Physics-aware interaction: Grasping might fail if the agent’s arm cannot reach an object.
Physics-aware navigation: Collision with obstacles might cause objects to be dropped and significantly impede transport efficiency.
This highlights the complexity of human vision and agency.
The next time you go to a supermarket, consider how easily you can find your way through aisles, tell the difference between different products, reach for and pick up different items, place them in your basket or cart, and choose your path in an efficient way. And you’re doing all this without access to segmentation and depth maps and by reading items from a crumpled handwritten note in your pocket.
Pure deep reinforcement learning is not enough
Above: Experiments show hybrid AI models that combine reinforcement learning with symbolic planners are better suited to solving the ThreeDWorld Transport Challenge.
The TDW-Transport Challenge is in the process of accepting submissions. In the meantime, the authors of the paper have already tested the environment with several known reinforcement learning techniques. Their findings show that pure reinforcement learning is very poor at solving task and motion planning challenges. A pure reinforcement learning approach requires the AI agent to develop its behavior from scratch, starting with random actions and gradually refining its policy to meet the goals in the specified number of steps.
According to the researchers’ experiments, pure reinforcement learning approaches barely managed to surpass 10% success in the TDW tests.
“We believe this reflects the complexity of physical interaction and the large exploration search space of our benchmark,” the researchers wrote. “Compared to the previous point-goal navigation and semantic navigation tasks, where the agent only needs to navigate to specific coordinates or objects in the scene, the ThreeDWorld Transport challenge requires agents to move and change the objects’ physical state in the environment (i.e., task-and-motion planning), which the end-to-end models might fall short on.” When the researchers tried hybrid AI models , where a reinforcement learning agent was combined with a rule-based high-level planner, they saw a considerable boost in the system’s performance.
“This environment can be used to train RL models, which fall short on these types of tasks and require explicit reasoning and planning abilities,” Gan said. “Through the TDW-Transport Challenge, we hope to demonstrate that a neuro-symbolic, hybrid model can improve this issue and demonstrate a stronger performance.” The problem, however, remains largely unsolved, and even the best-performing hybrid systems had around 50% success rates. “Our proposed task is very challenging and could be used as a benchmark to track the progress of embodied AI in physically realistic scenes,” the researchers wrote.
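Conceptually, the hybrid baselines separate deciding what to do next from deciding how to do it. The sketch below is a rough illustration of that split; every class and method name in it is invented for the purpose of the example, not taken from the paper’s code.

```python
# Illustrative split between a rule-based high-level planner and a learned
# low-level policy, in the spirit of the hybrid baselines discussed above.
# All names here are invented for illustration.

class HighLevelPlanner:
    """Symbolic planner: decides WHICH subgoal to pursue next."""
    def next_subgoal(self, remaining_goal, seen_objects, holding):
        if len(holding) < 2 and seen_objects:
            return ("fetch", seen_objects[0])      # go pick up a visible target object
        if holding:
            return ("deliver", "destination")      # carry what we hold to the goal room
        return ("explore", None)                   # nothing visible: keep exploring

class LowLevelPolicy:
    """Learned controller: turns a subgoal into primitive actions."""
    def act(self, observation, subgoal):
        ...  # e.g. a trained navigation/grasping network

def hybrid_step(planner, policy, observation, state):
    subgoal = planner.next_subgoal(state["remaining"], state["seen"], state["holding"])
    return policy.act(observation, subgoal)
```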
Mobile robots are becoming a hot area of research and application.
According to Gan, several manufacturing companies and smart factories have already expressed interest in using the TDW environment for their real-world applications. It will be interesting to see whether the TDW Transport Challenge helps usher in new innovations in the field.
“We’re hopeful the TDW-Transport Challenge can help advance research around assistive robotic agents in warehouses and home settings,” Gan said.
Ben Dickson is a software engineer and the founder of TechTalks, a blog that explores the ways technology is solving and creating problems.
This story originally appeared on Bdtechtalks.com.
"
|
15,629 | 2,021 |
"Experian: Consumers prefer 'invisible security' to passwords | VentureBeat"
|
"https://venturebeat.com/2021/04/07/experian-consumers-prefer-invisible-security-to-passwords"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Experian: Consumers prefer ‘invisible security’ to passwords Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Could the era of passwords be drawing to a close? Decades of fumbling around to remember the right password or having to constantly reset passwords or jump through authentication hoops have made them dirty words for many consumers.
All those headaches and there’s still a good chance your personal information will wind up for sale somewhere on the internet.
Perhaps it’s not surprising then that a survey released by Experian shows consumers are embracing new methods of security based on physical markers or behavior. In fact, the company’s 2021 Global Identity and Fraud Report revealed consumers did not rank passwords among the three most secure ways to protect their identity.
Instead, the top 3 are “invisible” methods:
Physical biometrics: Think facial recognition and fingerprints.
Pin codes: Convenient for mobile devices.
Behavioral analytics: Passively observed signals, which mean consumers do nothing.
The Experian survey included 9,000 consumers and more than 2,700 businesses spread across 10 countries.
The push to end passwords is gaining greater attention from the security industry, enterprises, and venture capitalists. In December 2020, Beyond Identity raised $75 million for its solution that uses digital certificates to replace passwords. And just a few weeks ago, Identiq raised $47 million for a cryptographic network that can be used to confirm identity.
The Experian study follows a surge in online activity during the pandemic that spans distance learning, remote work, and ecommerce. According to Experian, online consumer transactions over the past year were up 20%. While digital convenience has helped businesses and consumers adapt to the pandemic, it has also raised serious security concerns, with 55% of people surveyed ranking security as “the most important aspect of their online experience,” the report says.
Of course, the fact that consumers are taking security more seriously is a good sign. The study found that 34% of consumers now worry about privacy, up from 29% before the pandemic. Likewise, 33% worry about identity theft, up from 28% one year ago. And 49% have bigger concerns about fraud, compared to just 37% last year.
These responses highlight a key challenge for businesses seeking to expand their digital footprint. How can they securely authenticate customers without making the process too burdensome and yet still weed out fraud? The answer would appear to lie in those invisible security strategies. In the survey, 48% of consumers under the age of 40 said they felt safer using biometric security now than before COVID-19, though that number drops to 37% for respondents over 40.
“Consumers want to be recognized digitally without extra steps to identify themselves, and they don’t want to remember yet another password,” Eric Haller, Experian EVP and general manager of Identity, Fraud and DataLabs, said in a statement. “They are open to more practical solutions in today’s digital era.”
"
|
15,630 | 2,021 |
"Why a Cedars-Sinai hospital and BP use facial recognition | VentureBeat"
|
"https://venturebeat.com/2021/04/20/why-a-cedars-sinai-hospital-and-bp-use-facial-recognition-exclusive"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why a Cedars-Sinai hospital and BP use facial recognition Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
( Reuters ) — Deployments of facial recognition from Israeli startup AnyVision show how the surveillance software has gained adoption across the United States even as regulatory and ethical debates about it rage on.
The technology finds certain faces in photos or videos, with banks representing one sector that has taken interest in systems from AnyVision or its many competitors to improve security and service.
Organizations in other industries are chasing similar goals. Los Angeles hospital Cedars-Sinai and oil giant BP are among several previously unreported users of AnyVision.
Cedars-Sinai’s main hospital uses AnyVision facial recognition to give staff a heads-up about individuals known for committing violence or drug fraud or using different names at the emergency room, three sources said.
Cedars said it “does not publicly discuss our security programs” and could not confirm the information.
Meanwhile, BP has used facial recognition for at least two years at its Houston campus to help security staff detect people on a watchlist because they had previously trespassed or issued threats, two sources said.
BP declined to comment.
AnyVision declined to discuss specific clients or deals.
Gaining additional clients may be difficult for AnyVision amid mounting opposition to facial recognition from civil liberties advocates.
Critics say the technology compromises privacy, targets marginalized groups , and normalizes intrusive surveillance. Last week, 25 social justice groups, including Demand Progress and Greenpeace USA, called on governments to ban corporate use of facial recognition in an open letter.
AnyVision CEO Avi Golan, a former SoftBank Vision Fund operating partner who joined the startup in November, sees a bright future. He told Reuters that AnyVision has worked with companies across retail, banking, gaming, sports, and energy on uses that should not be banned because they stop crime and boost safety.
“I am a bold advocate for regulation of facial recognition.
There’s a potential for abuse of this technology both in terms of bias and privacy,” he said. “[But] blanket bans are irresponsible.” The startup has faced challenges in the past year. AnyVision laid off half of its staff, with deep cuts to research and sales, according to people who have worked for the company, as well as customers and partners, all speaking on the condition of anonymity.
The slashing followed the onset of COVID-19 shrinking clients’ budgets, sources said, with investor Microsoft in March 2020 saying it would divest its stake over ethical concerns.
AnyVision announced raising an additional $43 million last September.
Detecting threats Macy’s installed AnyVision in 2019 to alert security when known shoplifters entered its store in New York’s Herald Square, five sources said. The deployment expanded to around 15 more New York stores, three sources said, and if not for the pandemic would have reached an additional 15 stores, including on the West Coast.
Macy’s told Reuters it uses facial recognition “in a small subset of stores with high incidences of organized retail theft and repeat offenders.” Menards, a U.S. home improvement chain, has used AnyVision facial recognition to identify known thieves, three sources said. Its system has also alerted staff to the arrival of design center clients and reidentified them on future visits to improve service, a source said.
Menards said its current face mask policy has rendered “any use of facial recognition technology pointless.” In an online video, and without naming Menards, AnyVision has touted its results, and two sources said the companies struck a deal for 290 stores. In 2019, Menards apprehended 54% more potential threats and recovered over $5 million, according to the video.
The U.S. financial services unit of automaker Mercedes-Benz said it has used AnyVision at its Fort Worth, Texas offices since 2019 to authenticate about 900 people entering and exiting daily before the pandemic, adding a layer of security on top of building access cards.
Such employee-access applications are a common early use of AnyVision, including at Houston Texans’ and Golden State Warriors’ facilities, sources said.
The sports teams declined to comment.
Entertainment deals Several deals have failed to materialize, however. Among organizations that considered AnyVision early last year were Amazon’s grocery chain Whole Foods to monitor workers at stores, Comcast to enable ticketless experiences at Universal theme parks, and baseball’s Dodger Stadium for suite access, sources said.
Talks with airports in the Dallas and San Francisco areas referenced in public records have not led to contracts either.
Universal Parks, the Los Angeles Dodgers, and the airports all declined to comment on their interest. And Whole Foods did not respond to a request for comment.
Government requirements for surveillance at casinos have made the gaming industry a big purchaser of facial recognition. Las Vegas Sands, for instance, is using AnyVision, three sources said. Sands declined to comment.
MGM Resorts International and Cherokee Nation Entertainment also use AnyVision, representatives of the casino operators said last month in an online presentation seen by Reuters.
Ted Whiting of MGM said the software, deployed in 2017 and used at 11 properties, including the Aria in Las Vegas, has detected vendors not wearing masks and helped catch patrons accused of violence.
MGM said its “surveillance system is designed to adhere to regulatory requirements and support ongoing efforts to keep guests and employees safe.” Cherokee’s Joshua Anderson said in addition to security uses, AnyVision has accelerated coronavirus contact tracing as the Oklahoma company rolls out the technology across 10 properties.
"
|
15,631 | 2,012 |
"The World's First Computer Password? It Was Useless Too | WIRED"
|
"https://www.wired.com/2012/01/computer-password"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Robert McMillan Business The World's First Computer Password? It Was Useless Too Save this story Save Save this story Save If you're like most people, you're annoyed by passwords. You've got dozens to remember -- some of them tortuously complex -- and on any given day, as you read e-mails, send tweets, and order groceries online, you're bound to forget one, or at least mistype it. You may even be one of those unfortunate people who've had a password stolen, thanks to the dodgy security on the machines that store them.
But who's to blame? Who invented the computer password? Like the invention of the wheel or the story of the doorknob, the password's creation is shrouded in the mists of history. Romans used them. Shakespeare kicks off Hamlet with one -- "Long live the King" -- when Bernardo must prove he's a loyal soldier of the King of Denmark. But where did the first computer password show up? It probably arrived at the Massachusetts Institute of Technology in the mid-1960s, when researchers at the university built a massive time-sharing computer called CTSS. The punchline is that even then, passwords didn't protect users as well as they could have. Technology changes. But, then again, it doesn't.
Nearly all of the computer historians contacted by Wired in the past few weeks said that the first password must have come from MIT's Compatible Time-Sharing System. In geek circles, it's famous. CTSS pioneered many of the building blocks of computing as we know it today: things like e-mail , virtual machines, instant messaging, and file sharing.
Fernando Corbató -- the man who shepherded the CTSS project back in the mid-1960s -- is a little reluctant to take credit. "Surely there must be some antecedents for this mechanism," he told us, before questioning whether the CTSS was beaten to the punch by IBM's $30 million Sabre ticketing system , a contraption built in 1960, back when $30 million could buy you a handful of jetliners. But when we contacted IBM, it wasn't sure.
According to Corbató, even though the MIT computer hackers were breaking new ground with much of what they did, passwords were pretty much a no-brainer. "The key problem was that we were setting up multiple terminals which were to be used by multiple persons but with each person having his own private set of files," he told Wired. "Putting a password on for each individual user as a lock seemed like a very straightforward solution." Culture The Future of Game Accessibility Is Surprisingly Simple Geoffrey Bunting Science SpaceX’s Starship Lost Shortly After Launch of Second Test Flight Ramin Skibba Business Elon Musk May Have Just Signed X’s Death Warrant Vittoria Elliott Business OpenAI Ousts CEO Sam Altman Will Knight Back in the '60s, there were other options, according to Fred Schneider, a computer science professor at Cornell University. The CTSS guys could have gone for knowledge-based authentication, where instead of a password, the computer asks you for something that other people probably don't know -- your mother's maiden name, for example.
But in the early days of computing, passwords were surely smaller and easier to store than the alternative, Schneider says. A knowledge-based system "would have required storing a fair bit of information about a person, and nobody wanted to devote many machine resources to this authentication stuff." The irony is that the MIT researchers who pioneered the passwords didn't really care much about security. CTSS may also have been the first system to experience a data breach. One day in 1966, a software bug jumbled up the system's welcome message and its master password file so that anyone who logged in was presented with the entire list of CTSS passwords. But that's not the good story.
Twenty-five years after the fact, Allan Scherr, a Ph.D. researcher at MIT in the early '60s, came clean about the earliest documented case of password theft.
In the spring of 1962, Scherr was looking for a way to bump up his usage time on CTSS. He had been allotted four hours per week, but it wasn't nearly enough time to run the detailed performance simulations he'd designed for the new computer system. So he simply printed out all of the passwords stored on the system.
"There was a way to request files to be printed offline by submitting a punched card," he remembered in a pamphlet written last year to commemorate the invention of the CTSS.
"Late one Friday night, I submitted a request to print the password files and very early Saturday morning went to the file cabinet where printouts were placed and took the listing." To spread the guilt around, Scherr then handed the passwords over to other users. One of them -- J.C.R. Licklieder -- promptly started logging into the account of the computer lab's director Robert Fano, and leaving "taunting messages" behind.
Scherr left MIT in May 1965 to take a job at IBM, but 25 years later he confessed to Professor Fano in person. "He assured me that my Ph.D. would not be revoked."
"
|
15,632 | 2,021 |
"Adversarial attacks are a ticking time bomb, but no one cares"
|
"https://thenextweb.com/news/adversarial-attacks-are-a-ticking-time-bomb-but-no-one-cares-syndication"
|
"Toggle Navigation News Events TNW Conference 2024 June 20 & 21, 2024 TNW Vision: 2024 All events Spaces Programs Newsletters Partner with us Jobs Contact News news news news Latest Deep tech Sustainability Ecosystems Data and security Fintech and ecommerce Future of work More Startups and technology Investors and funding Government and policy Corporates and innovation Gadgets & apps Early bird Business passes are 90% SOLD OUT 🎟️ Buy now before they are gone → This article was published on January 8, 2021 Deep tech Adversarial attacks are a ticking time bomb, but no one cares Image by: Bdtechtalks If you’ve been following news about artificial intelligence, you’ve probably heard of or seen modified images of pandas and turtles and stop signs that look ordinary to the human eye but cause AI systems to behave erratically. Known as adversarial examples or adversarial attacks , these images—and their audio and textual counterparts —have become a source of growing interest and concern for the machine learning community.
But despite the growing body of research on adversarial machine learning , the numbers show that there has been little progress in tackling adversarial attacks in real-world applications.
The fast-expanding adoption of machine learning makes it paramount that the tech community traces a roadmap to secure the AI systems against adversarial attacks. Otherwise, adversarial machine learning can be a disaster in the making.
AI researchers discovered that by adding small black and white stickers to stop signs, they could make them invisible to computer vision algorithms (Source: arxiv.org).
What makes adversarial attacks different?
Every type of software has its own unique security vulnerabilities, and with new trends in software, new threats emerge. For instance, as web applications with database backends started replacing static websites, SQL injection attacks became prevalent. The widespread adoption of browser-side scripting languages gave rise to cross-site scripting attacks. Buffer overflow attacks overwrite critical variables and execute malicious code on target computers by taking advantage of the way programming languages such as C handle memory allocation. Deserialization attacks exploit flaws in the way programming languages such as Java and Python transfer information between applications and processes. And more recently, we’ve seen a surge in prototype pollution attacks, which use peculiarities in the JavaScript language to cause erratic behavior on NodeJS servers.
In this regard, adversarial attacks are no different than other cyberthreats. As machine learning becomes an important component of many applications , bad actors will look for ways to plant and trigger malicious behavior in AI models.
What makes adversarial attacks different, however, is their nature and the possible countermeasures. For most security vulnerabilities, the boundaries are very clear. Once a bug is found, security analysts can precisely document the conditions under which it occurs and find the part of the source code that is causing it. The response is also straightforward. For instance, SQL injection vulnerabilities are the result of not sanitizing user input. Buffer overflow bugs happen when you copy string arrays without setting limits on the number of bytes copied from the source to the destination.
In most cases, adversarial attacks exploit peculiarities in the learned parameters of machine learning models. An attacker probes a target model by meticulously making changes to its input until it produces the desired behavior. For instance, by making gradual changes to the pixel values of an image, an attacker can cause the convolutional neural network to change its prediction from, say, “turtle” to “rifle.” The adversarial perturbation is usually a layer of noise that is imperceptible to the human eye.
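The fast gradient sign method (FGSM) is the textbook example of such a perturbation, and it fits in a few lines of PyTorch. The snippet below is a minimal sketch: `model` stands for any differentiable image classifier, and the epsilon value is arbitrary.

```python
import torch

# Fast gradient sign method (FGSM): one standard way of producing the kind of
# imperceptible perturbation described above. `model` is any differentiable
# image classifier; epsilon controls how visible the noise is.

def fgsm_example(model, image, true_label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel a tiny amount in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```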
(Note: in some cases, such as data poisoning, adversarial attacks are made possible through vulnerabilities in other components of the machine learning pipeline, such as a tampered training data set.)
A neural network thinks this is a picture of a rifle. The human vision system would never make this mistake (source: LabSix).
The statistical nature of machine learning makes it difficult to find and patch adversarial attacks. An adversarial attack that works under some conditions might fail in others, such as a change of angle or lighting conditions. Also, you can’t point to a line of code that is causing the vulnerability because it is spread across the thousands and millions of parameters that constitute the model.
Defenses against adversarial attacks are also a bit fuzzy. Just as you can’t pinpoint a location in an AI model that is causing an adversarial vulnerability, you also can’t find a precise patch for the bug. Adversarial defenses usually involve statistical adjustments or general changes to the architecture of the machine learning model.
For instance, one popular method is adversarial training, where researchers probe a model to produce adversarial examples and then retrain the model on those examples and their correct labels. Adversarial training readjusts all the parameters of the model to make it robust against the types of examples it has been trained on. But with enough rigor, an attacker can find other noise patterns to create adversarial examples.
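In code, the adversarial training loop is a small extension of an ordinary training step: craft an adversarial version of each batch, then update the model on it with the correct labels. The sketch below reuses the illustrative `fgsm_example` helper from the previous snippet.

```python
import torch

# Sketch of adversarial training as described above: generate adversarial
# versions of each batch, then train on them with the correct labels.
# Reuses the illustrative fgsm_example() helper from the previous sketch.

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    adv_images = fgsm_example(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```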
The plain truth is, we are still learning how to cope with adversarial machine learning. Security researchers are used to perusing code for vulnerabilities. Now they must learn to find security holes in machine learning models that are composed of millions of numerical parameters.
Growing interest in adversarial machine learning
Recent years have seen a surge in the number of papers on adversarial attacks. To track the trend, I searched the arXiv preprint server for papers that mention “adversarial attacks” or “adversarial examples” in the abstract section. In 2014, there were zero papers on adversarial machine learning.
In 2020, around 1,100 papers on adversarial examples and attacks were submitted to arXiv.
From 2014 to 2020, arXiv.org has gone from zero papers on adversarial machine learning to 1,100 papers in one year.
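The counts above can be approximated with arXiv’s public query API. The snippet below reflects our understanding of that API’s query syntax and Atom response format; treat it as a rough reconstruction of the methodology, not the exact one used for the chart.

```python
import re
import urllib.parse
import urllib.request

# Rough reproduction of the arXiv counts above via arXiv's public query API.
# The abs:"..." phrase search, the submittedDate range filter, and the
# opensearch:totalResults field are per the arXiv API docs as we understand
# them; this is an approximation of the methodology, not the exact script.

def count_arxiv_abstracts(phrase, year):
    query = f'abs:"{phrase}" AND submittedDate:[{year}01010000 TO {year}12312359]'
    url = ("http://export.arxiv.org/api/query?search_query="
           + urllib.parse.quote(query) + "&max_results=0")
    feed = urllib.request.urlopen(url).read().decode("utf-8")
    return int(re.search(r"<opensearch:totalResults[^>]*>(\d+)<", feed).group(1))

print(count_arxiv_abstracts("adversarial examples", 2020))
```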
Adversarial attacks and defense methods have also become a key highlight of prominent AI conferences such as NeurIPS and ICLR. Even cybersecurity conferences such as DEF CON, Black Hat, and Usenix have started featuring workshops and presentations on adversarial attacks.
The research presented at these conferences shows tremendous progress in detecting adversarial vulnerabilities and developing defense methods that can make machine learning models more robust. For instance, researchers have found new ways to protect machine learning models against adversarial attacks using random switching mechanisms and insights from neuroscience.
It is worth noting, however, that AI and security conferences focus on cutting edge research. And there’s a sizeable gap between the work presented at AI conferences and the practical work done at organizations every day.
The lackluster response to adversarial attacks
Alarmingly, despite growing interest in and louder warnings on the threat of adversarial attacks, there’s very little activity around tracking adversarial vulnerabilities in real-world applications.
I referred to several sources that track bugs, vulnerabilities, and bug bounties. For instance, out of more than 145,000 records in the NIST National Vulnerability Database, there are no entries on adversarial attacks or adversarial examples. A search for “machine learning” returns five results. Most of them are cross-site scripting (XSS) and XML external entity (XXE) vulnerabilities in systems that contain machine learning components. One of them regards a vulnerability that allows an attacker to create a copy-cat version of a machine learning model and gain insights, which could be a window to adversarial attacks. But there are no direct reports on adversarial vulnerabilities. A search for “deep learning” shows a single critical flaw filed in November 2017. But again, it’s not an adversarial vulnerability but rather a flaw in another component of a deep learning system.
The National Vulnerability Database contains very little information on adversarial attacks.
I also checked GitHub’s Advisory database, which tracks security and bug fixes on projects hosted on GitHub. Searches for “adversarial attacks,” “adversarial examples,” “machine learning,” and “deep learning” yielded no results. A search for “TensorFlow” yields 41 records, but they’re mostly bug reports on the codebase of TensorFlow. There’s nothing about adversarial attacks or hidden vulnerabilities in the parameters of TensorFlow models.
This is noteworthy because GitHub already hosts many deep learning models and pretrained neural networks.
GitHub Advisory contains no records on adversarial attacks.
Finally, I checked HackerOne, the platform many companies use to run bug bounty programs. Here too, none of the reports contained any mention of adversarial attacks.
While this might not be a very precise assessment, the fact that none of these sources have anything on adversarial attacks is very telling.
The growing threat of adversarial attacks
Adversarial vulnerabilities are deeply embedded in the many parameters of machine learning models, which makes it hard to detect them with traditional security tools.
Automated defense is another area that is worth discussing. When it comes to code-based vulnerabilities, developers have a large set of defensive tools at their disposal.
Static analysis tools can help developers find vulnerabilities in their code. Dynamic testing tools examine an application at runtime for vulnerable patterns of behavior. Compilers already use many of these techniques to track and patch vulnerabilities. Today, even your browser is equipped with tools to find and block possibly malicious code in client-side script.
At the same time, organizations have learned to combine these tools with the right policies to enforce secure coding practices. Many companies have adopted procedures and practices to rigorously test applications for known and potential vulnerabilities before making them available to the public. For instance, GitHub, Google, and Apple make use of these and other tools to vet the millions of applications and projects uploaded on their platforms.
But the tools and procedures for defending machine learning systems against adversarial attacks are still in the preliminary stages. This is partly why we’re seeing very few reports and advisories on adversarial attacks.
Meanwhile, another worrying trend is the growing use of deep learning models by developers of all levels. Ten years ago, only people who had a full understanding of machine learning and deep learning algorithms could use them in their applications. You had to know how to set up a neural network, tune the hyperparameters through intuition and experimentation, and you also needed access to the compute resources that could train the model.
But today, integrating a pre-trained neural network into an application is very easy.
For instance, PyTorch, which is one of the leading Python deep learning platforms, has a tool that enables machine learning engineers to publish pretrained neural networks on GitHub and make them accessible to developers. If you want to integrate an image classifier deep learning model into your application, you only need a rudimentary knowledge of deep learning and PyTorch.
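In practice, pulling such a model is nearly a one-liner. The sketch below uses PyTorch Hub with a torchvision ResNet as an example; note that newer torchvision releases replace the `pretrained=True` flag with a `weights=` argument.

```python
import torch

# Pulling a pretrained image classifier from a GitHub-hosted hub repo, as
# described above. Newer torchvision releases replace pretrained=True with a
# weights= argument, so the exact call may vary by version.
model = torch.hub.load("pytorch/vision", "resnet18", pretrained=True)
model.eval()

# The developer never inspects the millions of parameters they just imported,
# which is exactly where a hidden adversarial backdoor could live.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.argmax(dim=1))
```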
Since GitHub has no procedure to detect and block adversarial vulnerabilities, a malicious actor could easily use these kinds of tools to publish deep learning models that have hidden backdoors and exploit them after thousands of developers integrate them in their applications.
How to address the threat of adversarial attacks
Understandably, given the statistical nature of adversarial attacks, it’s difficult to address them with the same methods used against code-based vulnerabilities. But fortunately, there have been some positive developments that can guide future steps.
The Adversarial ML Threat Matrix , published last month by researchers at Microsoft, IBM, Nvidia, MITRE, and other security and AI companies, provides security researchers with a framework to find weak spots and potential adversarial vulnerabilities in software ecosystems that include machine learning components. The Adversarial ML Threat Matrix follows the ATT&CK framework, a known and trusted format among security researchers.
Another useful project is IBM’s Adversarial Robustness Toolbox, an open-source Python library that provides tools to evaluate machine learning models for adversarial vulnerabilities and help developers harden their AI systems.
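In broad strokes, probing a model with ART looks like the sketch below, which follows the library’s documented evasion-attack interface; exact argument names may vary between releases, and `net`, `x_test`, and `y_test` are assumed to be an already-trained PyTorch classifier and a held-out test set.

```python
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Sketch of evaluating a model with IBM's Adversarial Robustness Toolbox, based
# on its documented 1.x interface (argument names may differ across releases).
# `net` is a trained PyTorch image classifier; x_test/y_test are numpy arrays.

def robustness_check(net, x_test, y_test, num_classes=10):
    classifier = PyTorchClassifier(
        model=net,
        loss=torch.nn.CrossEntropyLoss(),
        input_shape=x_test.shape[1:],
        nb_classes=num_classes,
    )
    attack = FastGradientMethod(estimator=classifier, eps=0.05)
    x_adv = attack.generate(x=x_test)

    clean_acc = np.mean(classifier.predict(x_test).argmax(axis=1) == y_test)
    adv_acc = np.mean(classifier.predict(x_adv).argmax(axis=1) == y_test)
    return clean_acc, adv_acc
```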
These and other adversarial defense tools that will be developed in the future need to be backed by the right policies to make sure machine learning models are safe. Software platforms such as GitHub and Google Play must establish procedures and integrate some of these tools into the vetting process of applications that include machine learning models. Bug bounties for adversarial vulnerabilities can also be a good measure to make sure the machine learning systems used by millions of users are robust.
New regulations for the security of machine learning systems might also be necessary. Just as the software that handles sensitive operations and information is expected to conform to a set of standards, machine learning algorithms used in critical applications such as biometric authentication and medical imaging must be audited for robustness against adversarial attacks.
As the adoption of machine learning continues to expand, the threat of adversarial attacks is becoming more imminent. Adversarial vulnerabilities are a ticking time bomb. Only a systematic response can defuse it.
This article was originally published by Ben Dickson on TechTalks , a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.
Ben Dickson is the founder of TechTalks. He writes regularly about business, technology, and politics.
"
|
15,633 | 2,016 |
"Microsoft exec apologizes for Tay chatbot's racist tweets, says users 'exploited a vulnerability' | VentureBeat"
|
"https://venturebeat.com/2016/03/25/microsoft-exec-apologizes-for-tay-chatbots-racist-tweets-says-users-exploited-a-vulnerability"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft exec apologizes for Tay chatbot’s racist tweets, says users ‘exploited a vulnerability’ Share on Facebook Share on X Share on LinkedIn Tay tweet.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Peter Lee, the corporate vice president of Microsoft Research, Microsoft’s research and development wing, today apologized for the behavior of Tay , the artificial intelligence-powered chatbot the company unveiled earlier this week and soon thereafter took offline.
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Lee wrote in a blog post , adding that the bot will come back online only after the company is sure that it’s ready to deal with “malicious intent.” Indeed, Lee said that a small number of people “exploited a vulnerability” in Tay and thus were to blame for the tweets, which spoke positively of Hitler, among other things.
“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” Lee wrote.
The incident is in contrast to the more well-received Xiaoice chatbot that Microsoft deployed in China in 2014. Of course, chatbots are not new — remember AOL’s SmarterChild? — but team communication tool Slack and other companies have been pushing bots as a way to automatically supply helpful information so people don’t have to dig it up themselves.
Microsoft has been investing in AI research aplenty alongside Facebook, Google , and other companies. Microsoft has previously had imperfect demos of its AI-powered speech recognition. And in image recognition Microsoft had some troubles last year with the launch of the How Old Do You Look? app — it got many people’s ages wrong. But Tay’s remarks and Microsoft’s decision to stop it from working after it behaved badly provoked some concern about AI, and now a top figure at Microsoft has come to say sorry.
“We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity,” Lee wrote.
"
|
15,634 | 2,018 |
"Google Brain researchers demo method to hijack neural networks | VentureBeat"
|
"https://venturebeat.com/2018/07/02/google-brain-researchers-demo-method-to-hijack-neural-networks"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google Brain researchers demo method to hijack neural networks Share on Facebook Share on X Share on LinkedIn Researchers at Google Brain demonstrated an attack that retrains computer vision algorithms to perform novel tasks.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Computer vision algorithms aren’t perfect. Just this month, researchers demonstrated that a popular object detection API could be fooled into seeing cats as “crazy quilts” and “cellophane.” Unfortunately, that’s not the worst of it: They can also be forced to count squares in images, classify numbers, and perform tasks other than the ones for which they were intended.
In a paper published on preprint server Arxiv.org titled “ Adversarial Reprogramming of Neural Networks ,” researchers at Google Brain , Google’s AI research division, describe an adversarial method that in effect reprograms machine learning systems. The novel form of transfer learning doesn’t even require an attacker to specify the output.
“Our results [demonstrate] for the first time the possibility of … adversarial attacks that aim to reprogram neural networks …” the researchers wrote. “These results demonstrate both surprising flexibility and surprising vulnerability in deep neural networks.” Here’s how it works: A malicious actor gains access to the parameters of a target neural network that’s performing a task and then introduces perturbations, or adversarial data, in the form of transformations to input images. As the adversarial inputs are fed into the network, they repurpose its learned features for a new task.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The scientists tested the method across six models. By embedding manipulated input images from the MNIST computer vision dataset (black frames and white squares ranged from 1 to 10), they managed to get all six algorithms to count the number of squares in an image rather than identify objects like “white shark” and “ostrich.” In a second experiment, they forced them to classify the digits. And in a third and final test, they had the models identifying images from CIFAR-10, an object recognition database, instead of the ImageNet corpus on which they were originally trained.
Bad actors could use the attack to steal computing resources by, for example, reprogramming a computer vision classifier in a cloud-hosted photo service to solve image captchas or mine cryptocurrency. And although the paper’s authors didn’t test the method on a recurrent neural network, a type of network that’s commonly used in speech recognition, they hypothesize that a successful attack could induce such algorithms to perform “a very large array of tasks.” “Adversarial programs could also be used as a novel way to achieve more traditional computer hacks,” the researchers wrote. “For instance, as phones increasingly act as AI-driven digital assistants, the plausibility of reprogramming someone’s phone by exposing it to an adversarial image or audio file increases. As these digital assistants have access to a user’s email, calendar, social media accounts, and credit cards, the consequences of this type of attack also grow larger.” It’s not all bad news, luckily. The researchers noted that random neural networks appear to be less susceptible to the attack than others, and that adversarial attacks could enable machine learning systems that are easier to repurpose, more flexible, and more efficient.
Even so, they wrote, “Future investigation should address the properties and limitations of adversarial programming and possible ways to defend against it.”
"
|
15,635 | 2,019 |
"Text-based AI models are vulnerable to paraphrasing attacks, researchers find | VentureBeat"
|
"https://venturebeat.com/2019/04/01/text-based-ai-models-are-vulnerable-to-paraphrasing-attacks-researchers-find"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Text-based AI models are vulnerable to paraphrasing attacks, researchers find Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Thanks to advances in natural language processing (NLP), companies and organizations are increasingly putting AI algorithms in charge of carrying out text-related tasks such as filtering spam emails, analyzing the sentiment of social media posts and online reviews, evaluating resumes, and detecting fake news.
But how far can we trust these algorithms to perform their tasks reliably? New research by IBM, Amazon, and University of Texas proves that with the right tools, malicious actors can attack text-classification algorithms and manipulate their behavior in potentially malicious ways.
The research, being presented today at the SysML AI conference at Stanford, looks at “paraphrasing” attacks, a process that involves modifying input text so that it is classified differently by an AI algorithm without changing its actual meaning.
To understand how a paraphrasing attack works, consider an AI algorithm that evaluates the text of an email message and classifies it as “spam” or “not spam.” A paraphrasing attack would modify the content of a spam message so that the AI classifies it as “not spam.” Meanwhile, to a human reader, the tampered message would have the same meaning as the original one.
The challenges of adversarial attacks against text models
In the past few years, several research groups have explored aspects of adversarial attacks, input modifications meant to cause AI algorithms to misclassify images and audio samples while preserving their original appearance and sound to human eyes and ears. Paraphrasing attacks are the text equivalent of these. Attacking text models is much more difficult than tampering with computer vision and audio recognition algorithms.
“For audio and images you have full differentiability,” says Stephen Merity , an AI researcher and expert on language models. For instance, in an image classification algorithm, you can gradually change the color of pixels and observe how these modifications affect the output of the model. This can help researchers find the vulnerabilities in a model.
“Text is traditionally harder to attack. It’s discrete. You can’t say I want 10% more of the word ‘dog’ in this sentence. You either have the word ‘dog’ or you take it out. And you can’t efficiently search a model for vulnerabilities,” Merity says. “The idea is, can you intelligently work out where the machine is vulnerable, and nudge it in that specific spot?” “For image and audio, it makes sense to do adversarial perturbations. For text, even if you make small changes to an excerpt — like a word or two — it might not read smoothly to humans,” says Pin-Yu Chen, researcher at IBM and co-author of the research paper being presented today.
Creating paraphrasing examples
Past work on adversarial attacks against text models involved changing single words in sentences. While this approach succeeded in changing the output of the AI algorithm, it often resulted in modified sentences that sounded artificial. Chen and his colleagues focused not only on changing words but also on rephrasing sentences and changing longer sequences in a way that remains meaningful.
“We are paraphrasing words and sentences. This gives the attack a larger space by creating sequences that are semantically similar to the target sentence. We then see if the model classifies them like the original sentence,” Chen says.
The researchers have developed an algorithm to find optimal changes in a sentence that can manipulate the behavior of an NLP model. “The main constraint was to make sure that the modified version of the text was semantically similar to the original one. We developed an algorithm that searches a very large space for word and sentence paraphrasing modifications that will have the most impact on the output of the AI model. Finding the best adversarial example in that space is very time consuming. The algorithm is computationally efficient and also provides theoretical guarantees that it’s the best search you can find,” says Lingfei Wu, scientist at IBM Research and another co-author of the paper.
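In very rough terms, such a search can be pictured as a constrained greedy loop over candidate substitutions. The sketch below only illustrates the general idea; the paper’s actual algorithm is more sophisticated and comes with optimality guarantees, and the helpers for generating candidate paraphrases, scoring semantic similarity, and querying the classifier are assumed rather than real APIs.

```python
# Greedy illustration of a paraphrasing attack. The paper's algorithm is more
# sophisticated; `classifier`, `candidate_paraphrases`, and `similarity` are
# assumed helper objects, not a real library API.

def paraphrase_attack(sentence, classifier, candidate_paraphrases, similarity,
                      target_label, min_similarity=0.9, max_edits=5):
    words = sentence.split()
    for _ in range(max_edits):
        best = None
        for i, word in enumerate(words):
            for repl in candidate_paraphrases(word):
                trial = words[:i] + [repl] + words[i + 1:]
                trial_text = " ".join(trial)
                if similarity(sentence, trial_text) < min_similarity:
                    continue                      # must stay semantically close
                score = classifier.prob(trial_text, target_label)
                if best is None or score > best[0]:
                    best = (score, trial)
        if best is None:
            break
        words = best[1]
        if classifier.predict(" ".join(words)) == target_label:
            break                                 # the model's label has flipped
    return " ".join(words)
```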
In their paper, the researchers provide examples of modifications that change the behavior of sentiment analysis algorithms, fake news detectors, and spam filters. For instance, in a product review, by simply swapping the sentence “The pricing is also cheaper than some of the big name conglomerates out there” with “The price is cheaper than some of the big names below,” the sentiment of the review was changed from 100% positive to 100% negative.
Humans can’t see paraphrasing attacks
The key to the success of paraphrasing attacks is that they are imperceptible to humans, since they preserve the context and meaning of the original text.
“We gave the original paragraph and modified paragraph to human evaluators, and it was very hard for them to see differences in meaning. But for the machine, it was completely different,” Wu says.
Merity points out that paraphrasing attacks don’t need to be perfectly coherent to humans, especially when they’re not anticipating a bot tampering with the text. “Humans aren’t the correct level to try to detect these kinds of attacks, because they deal with faulty input every day. Except that for us, faulty input is just incoherent sentences from real people,” he says. “When people see typos right now, they don’t think it’s a security issue. But in the near future, it might be something we will have to contend with.” Merity also points out that paraphrasing and adversarial attacks will give rise to a new trend in security risks. “A lot of tech companies rely on automated decisions to classify content, and there isn’t actually a human-to-human interaction involved. This makes the process vulnerable to such attacks,” Merity says. “It will run in parallel to data breaches, except that we’re going to find logic breaches.” For instance, a person might fool a hate-speech classifier to approve their content, or exploit paraphrasing vulnerabilities in a resume-processing model to push their job application to the top of the list.
“These types of issues are going to be a new security era, and I’m worried companies will spend as little on this as they do on security, because they’re focused on automation and scalability,” Merity warns.
Putting the technology to good use

The researchers also discovered that by reversing paraphrasing attacks, they can build more robust and accurate models.
After generating paraphrased sentences that a model misclassifies, developers can retrain their model on those sentences paired with their correct labels. This makes the model more resilient against paraphrasing attacks, and it also tends to make it more accurate and better at generalizing.
“This was one of the surprising findings we had in this project. Initially, we started with the angle of robustness. But we found out that this method not only improves robustness but also improves generalizability,” Wu says. “If instead of attacks, you just think about what is the best way to augment your model, paraphrasing is a very good generalization tool to increase the capability of your model.” The researchers tested different word and sentence models before and after adversarial training, and in all cases, they experienced an improvement both in performance and robustness against attacks.
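The retraining step described above can be illustrated with a small sketch: fold the adversarial paraphrases, paired with their correct labels, back into the training set and refit the model. The scikit-learn pipeline and the toy data below are placeholders, not the models or datasets used in the paper.

```python
# Illustrative sketch of adversarial data augmentation for text classifiers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["great product, works well", "terrible, broke in a day"]
train_labels = ["positive", "negative"]

# Paraphrases that fooled the original model, paired with their true labels.
adversarial_texts = ["the price is cheaper than some of the big names below"]
adversarial_labels = ["positive"]

augmented_texts = train_texts + adversarial_texts
augmented_labels = train_labels + adversarial_labels

# Retrain on the augmented data; in practice this loop repeats as new
# adversarial examples are generated against the updated model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(augmented_texts, augmented_labels)
```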
Ben Dickson is a software engineer and the founder of TechTalks , a blog that explores the ways technology is solving and creating problems.
"
|
15,636 | 2,020 |
"Baidu details its adversarial toolbox for testing robustness of AI models | VentureBeat"
|
"https://venturebeat.com/2020/01/17/baidu-details-its-adversarial-toolbox-for-testing-robustness-of-ai-models"
|
Baidu details its adversarial toolbox for testing robustness of AI models

Baidu Silicon Valley AI Lab in Sunnyvale, California.
No matter the claimed robustness of AI and machine learning systems in production, none are immune to adversarial attacks, or techniques that attempt to fool algorithms through malicious input. It’s been shown that generating even small perturbations on images can fool the best of classifiers with high probability. And that’s problematic considering the wide proliferation of the “AI as a service” business model, where companies like Amazon, Google, Microsoft, Clarifai, and others have made systems that might be vulnerable to attack available to end users.
Researchers at tech giant Baidu propose a partial solution in a recent paper published on Arxiv.org: Advbox.
They describe it as an open source toolbox for generating adversarial examples, and they say it’s able to fool models in frameworks like Facebook’s PyTorch and Caffe2, MxNet, Keras, Google’s TensorFlow, and Baidu’s own PaddlePaddle.
While AdvBox itself isn’t new — the initial release was over a year ago — the paper dives into the technical details.
AdvBox is based on Python, and it implements several common attacks that perform searches for adversarial samples. Each attack method uses a distance measure to quantify the size of adversarial perturbation, while a sub-model — Perceptron, which supports image classification and object detection models as well as cloud APIs — evaluates the robustness of a model to noise, blurring, brightness adjustments, rotations, and more.
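As a rough illustration of the kind of robustness evaluation such a sub-model performs, the sketch below measures how a classifier's accuracy degrades under a few simple corruptions (noise, brightness, rotation). It is a generic example of the technique, not AdvBox's actual API; the `model`, `images`, and `labels` objects are assumed to exist elsewhere.

```python
# Generic sketch: classifier accuracy under simple image corruptions.
import torch
import torchvision.transforms.functional as TF

def accuracy_under_corruption(model, images, labels, corruption):
    """images: float tensor [N, C, H, W] in [0, 1]; labels: int tensor [N]."""
    model.eval()
    with torch.no_grad():
        preds = model(corruption(images)).argmax(dim=1)
    return (preds == labels).float().mean().item()

corruptions = {
    "clean":       lambda x: x,
    "gauss_noise": lambda x: (x + 0.05 * torch.randn_like(x)).clamp(0, 1),
    "brightness":  lambda x: TF.adjust_brightness(x, brightness_factor=1.5),
    "rotation":    lambda x: TF.rotate(x, angle=15),
}

# Example usage (assumes model, images, labels are defined):
# for name, fn in corruptions.items():
#     print(name, accuracy_under_corruption(model, images, labels, fn))
```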
AdvBox ships with tools for testing detection models susceptible to so-called adversarial t-shirts or facial recognition attacks. Plus, it offers access to Baidu’s cloud-hosted deepfakes detection service via an included Python script.
“Small and often imperceptible perturbations to [input] are sufficient to fool the most powerful [AI],” wrote the coauthors. “Compared to previous work, our platform supports black box attacks … as well as more attack scenarios.” Baidu isn’t the only company publishing resources designed to help data scientists defend from adversarial attacks. Last year, IBM and MIT released a metric for estimating the robustness of machine learning and AI algorithms called Cross Lipschitz Extreme Value for Network Robustness, or CLEVER for short. And in April, IBM announced a developer kit called the Adversarial Robustness Toolbox, which includes code for measuring model vulnerability and suggests methods for protecting against runtime manipulation. Separately, researchers at the University of Tübingen in Germany created Foolbox, a Python library for generating over 20 different attacks against TensorFlow, Keras, and other frameworks.
But much work remains to be done. According to Jamal Atif, a professor at the Université Paris-Dauphine, the most effective defense strategy in the image classification domain — augmenting a group of photos with examples of adversarial images — at best has gotten accuracy back up to only 45%. “This is state of the art,” he said during an address in Paris at the annual France is AI conference hosted by France Digitale. “We just do not have a powerful defense strategy.”
"
|
15,637 | 2,020 |
"MIT CSAIL's TextFooler generates adversarial text to strengthen natural language models | VentureBeat"
|
"https://venturebeat.com/2020/02/07/mit-csails-textfooler-generates-adversarial-text-to-fool-ai-natural-language-models"
|
MIT CSAIL’s TextFooler generates adversarial text to strengthen natural language models
AI and machine learning algorithms are vulnerable to adversarial samples — inputs subtly altered from the originals so that they fool a model. That’s especially problematic as natural language models become capable of generating humanlike text, because such models are attractive to malicious actors who would use them to produce misleading media. In pursuit of a technique that illustrates the extent to which adversarial text can affect model prediction, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the University of Hong Kong, and Singapore’s Agency for Science, Technology, and Research developed TextFooler, a baseline framework for synthesizing adversarial text examples. They claim in a paper that it was able to successfully attack three leading target models, including Google’s BERT.
“If those tools are vulnerable to purposeful adversarial attacking, then the consequences may be disastrous,” said Di Jin, MIT Ph.D. student and lead author on the paper, who noted that the adversarial examples produced by TextFooler could improve the robustness of AI models trained on them. “These tools need to have effective defense approaches to protect themselves, and in order to make such a safe defense system, we need to first examine the adversarial methods.” The researchers assert that besides the ability to fool AI models, the outputs of a natural language “attacking” system like TextFooler should meet certain criteria: human prediction consistency, such that human predictions remain unchanged; semantic similarity, such that crafted examples bear the same meaning as the source; and language fluency, such that generated examples look natural and grammatical. TextFooler meets all three even when no model architecture or parameters (values that influence model performance) are available — i.e., black-box scenarios.
It achieves this by identifying the most important words for the target models and replacing them with semantically similar and grammatically correct words until the prediction is altered. TextFooler is applied to two different tasks — text classification and entailment (the relationship between text fragments in a sentence) — with the goal of changing the classification or invalidating the entailment judgment of the original models. For instance, given the input “The characters, cast in impossibly contrived situations, are totally estranged from reality,” TextFooler might output “The characters, cast in impossibly engineered circumstances, are fully estranged from reality.”

To evaluate TextFooler, the researchers applied it to text classification data sets with various properties, including news topic classification, fake news detection, and sentence- and document-level sentiment analysis, where the average text length ranged from tens of words to hundreds of words. For each data set, they trained the aforementioned state-of-the-art models on a training set before generating adversarial examples semantically similar to the test set to attack those models.
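A common first step in attacks of this kind is to rank words by how much the model's confidence changes when each one is removed. The sketch below shows that idea with a stand-in `classify` function; it is illustrative of the general technique, not the released TextFooler code.

```python
# Sketch: rank words by importance via deletion (illustrative only).
from typing import Callable, List, Tuple

def rank_word_importance(text: str,
                         classify: Callable[[str], Tuple[str, float]]
                         ) -> List[Tuple[str, float]]:
    """classify returns (predicted_label, probability) for any string."""
    words = text.split()
    label, base_prob = classify(text)
    scores = []
    for i, w in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        new_label, new_prob = classify(reduced)
        # Importance: drop in confidence for the original label; a full drop
        # is assigned when removing the word changes the prediction entirely.
        drop = base_prob - new_prob if new_label == label else base_prob
        scores.append((w, drop))
    return sorted(scores, key=lambda t: t[1], reverse=True)
```

The highest-ranked words are then the first candidates for synonym replacement, which keeps the number of edits — and the visible change to the sentence — small.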
The team reports that on the adversarial examples, they managed to reduce the accuracy of almost all target models in all tasks to below 10% with fewer than 20% of the original words perturbed. Even for BERT, which attained relatively robust performance compared with the other models tested, TextFooler reduced its prediction accuracy by about 5 to 7 times on a classification task and about 9 to 22 times on an entailment task (where the goal was to judge whether a sentence could be derived from entailment, contradiction, or a neutral relationship).
“The system can be used or extended to attack any classification-based NLP models to test their robustness,” said Jin. “On the other hand, the generated adversaries can be used to improve the robustness and generalization of deep learning models via adversarial training, which is a critical direction of this work.”
"
|
15,638 | 2,020 |
"Google's AI detects adversarial attacks against image classifiers | VentureBeat"
|
"https://venturebeat.com/2020/02/24/googles-ai-detects-adversarial-attacks-against-image-classifiers"
|
Google’s AI detects adversarial attacks against image classifiers
Defenses against adversarial attacks, which in the context of AI refer to techniques that fool models through malicious input, are increasingly being broken by “defense-aware” attacks. In fact, most state-of-the-art methods claiming to detect adversarial attacks have been counteracted shortly after their publication. To break the cycle, researchers at the University of California, San Diego and Google Brain, including Turing Award winner Geoffrey Hinton, recently described in a preprint paper an approach that deflects attacks in the computer vision domain. Their framework either detects attacks accurately or, for undetected attacks, pressures the attackers to produce images that resemble the target class of images.
The proposed architecture comprises (1) a network that classifies various input images from a data set and (2) a network that reconstructs the inputs conditioned on parameters of a predicted capsule. Several years ago, Hinton and several students devised an architecture called CapsNet , a discriminately trained and multilayer AI system. It and other capsule networks make sense of objects in images by interpreting sets of their parts geometrically. Sets of mathematical functions (capsules) responsible for analyzing various object properties (like position, size, and hue) are tacked onto a type of AI model often used to analyze visuals. Several of the capsules’ predictions are reused to form representations of parts, and since these representations remain intact throughout analyses, capsule systems can leverage them to identify objects even when the positions of parts are swapped or transformed.
Another unique thing about capsule systems? They route with attention. As with all deep neural networks, capsules’ functions are arranged in interconnected layers that transmit “signals” from input data and slowly adjust the synaptic strength — weights — of each connection. (That’s how they extract features and learn to make predictions.) But where capsules are concerned, the weightings are calculated dynamically according to previous-layer functions’ ability to predict the next layer’s outputs.
Three reconstruction-based detection methods are used together by the capsule network to detect standard adversarial attacks. The first — Global Threshold Detector — exploits the fact that when input images are adversarially perturbed, the classification given to the input may be incorrect, but the reconstruction is often blurry. Local Best Detector identifies “clean” images from their reconstruction error; when the input is a clean image, the reconstruction error from the winning capsule is smaller than that of the losing capsules. As for the last technique, called Cycle-Consistency Detector, it flags inputs as adversarial examples if they aren’t classified in the same class as the reconstruction of the winning capsule.
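As a rough sketch of how a reconstruction-error check like the Global Threshold Detector can work, the snippet below flags inputs whose reconstruction from the winning capsule is unusually far from the input. The `model.classify`/`model.reconstruct` interface and the threshold choice are assumptions for illustration, not the paper's implementation.

```python
# Sketch of a reconstruction-error detector (assumed model interface).
import torch

def is_adversarial(model, x, threshold):
    """x: input image tensor; threshold: typically chosen from reconstruction
    errors observed on clean validation data (e.g., a high percentile)."""
    with torch.no_grad():
        pred_class = model.classify(x)                 # winning capsule / class
        recon = model.reconstruct(x, pred_class)       # reconstruction from that capsule
        error = torch.mean((recon - x) ** 2).item()    # per-input reconstruction error
    return error > threshold                           # blurry reconstruction -> suspicious
```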
The team reports that in experiments they were able to detect standard adversarial attacks based on three different distance metrics with a low False Positive Rate on SVHN and CIFAR-10. “A large percentage of the undetected attacks are deflected by our model to resemble the adversarial target class [and] stop being adversarial any more,” they wrote. “These attack images can no longer be called ‘adversarial’ because our network classifies them the same way as humans do.”
"
|
15,639 | 2,020 |
"Researchers fool deepfake detectors into classifying fake images as real | VentureBeat"
|
"https://venturebeat.com/2020/04/08/researchers-fool-deepfake-detectors-into-classifying-fake-images-as-real"
|
Researchers fool deepfake detectors into classifying fake images as real
In a paper published this week on the preprint server Arxiv.org, researchers from Google and the University of California at Berkeley demonstrate that even the best forensic classifiers — AI systems trained to distinguish between real and synthetic content — are susceptible to adversarial attacks, or attacks leveraging inputs designed to cause mistakes in models. Their work follows that of a team of researchers at the University of California at San Diego, who recently demonstrated that it’s possible to bypass fake video detectors by adversarially modifying — specifically, by injecting information into each frame — videos synthesized using existing AI generation methods.
It’s a troubling, if not necessarily new, development for organizations attempting to productize fake media detectors, particularly considering the meteoric rise in deepfake content online. Fake media might be used to sway opinions during an election or implicate a person in a crime, and it’s already been abused to generate pornographic material of actors and defraud a major energy producer.
The researchers first tackled the simpler task of evaluating classifiers to which they had unfettered access. Using this “white-box” threat model and a data set of 94,036 sample images, they modified synthesized images so that they were misclassified as real and vice versa, applying various attacks — a distortion-minimizing attack, a universal adversarial-patch attack, and a universal latent-space attack — to a classifier taken from the academic literature.
The distortion-minimizing attack, which involved adding a small perturbation (i.e., modifying a subset of pixels) to a synthetically generated image, caused one classifier to misclassify 71.3% of images with only 2% pixel changes and 89.7% of images with 4% pixel changes. Perhaps more alarmingly, the model classified 50% of real images as fake after the researchers distorted under 7% of the images’ pixels.
As for the loss-minimizing attack, which fixed the image distortion to be less than a specified threshold, it reduced the classifier’s accuracy from 96.6% to 27%. The universal adversarial-patch attack was even more effective — a visible noise pattern overlaid on two fake images spurred the model to classify them as real with a likelihood of 98% and 86%. And the final attack — the universal latent-space attack, where the team modified the underlying representation leveraged by an image-generating model to yield an adversarial image — reduced classification accuracy from 99% to 17%.
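The general shape of a distortion-minimizing, white-box attack can be sketched as follows: take small signed-gradient steps on the pixels and stop as soon as the forensic classifier flips its label, so only a small perturbation is applied. This is an illustrative simplification under assumed inputs, not the authors' exact attack.

```python
# Sketch of a minimal-perturbation, white-box attack on a real/fake classifier.
import torch

def minimal_perturbation_attack(classifier, image, target_label,
                                step=1e-3, max_iters=200):
    """image: [1, C, H, W] in [0, 1]; target_label: label to force (e.g. 'real')."""
    x = image.clone().requires_grad_(True)
    for _ in range(max_iters):
        logits = classifier(x)
        if logits.argmax(dim=1).item() == target_label:
            break                                       # classifier fooled; stop early
        loss = torch.nn.functional.cross_entropy(
            logits, torch.tensor([target_label]))
        loss.backward()
        with torch.no_grad():
            x -= step * x.grad.sign()                   # small signed-gradient step
            x.clamp_(0, 1)                              # keep pixels in valid range
        x.grad = None
    return x.detach()
```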
The researchers next investigated a black-box attack where the inner workings of the target classifier were unknown to them. They developed their own classifier by collecting one million images synthesized by an AI model and one million real images on which the aforementioned model was trained, and then training a separate system to classify images as fake or real and generating a white-box adversarial example on the source classifier using a distortion-minimizing attack. They report that this reduced their classifier’s accuracy from 85% to 0.03% and that when applied to a popular third-party classifier, it reduced that classifier’s accuracy from 96% to 22%.
“To the extent that synthesized or manipulated content is used for nefarious purposes, the problem of detecting this content is inherently adversarial. We argue, therefore, that forensic classifiers need to build an adversarial model into their defenses,” wrote the researchers. “Demonstrating attacks on sensitive systems is not something that should be taken lightly, or done simply for sport. However, if such forensic classifiers are currently deployed, the false sense of security they provide may be worse than if they were not deployed at all — not only would a fake profile picture appear authentic, now it would be given additional credibility by a forensic classifier. Even if forensic classifiers are eventually defeated by a committed adversary, these classifiers are still valuable in that they make it more difficult and time-consuming to create a convincing fake.” Fortunately, a number of companies have published corpora in the hopes that the research community will pioneer new detection methods. To accelerate such efforts, Facebook — along with Amazon Web Services (AWS), the Partnership on AI, and academics from a number of universities — is spearheading the Deepfake Detection Challenge. The Challenge includes a data set of video samples labeled to indicate which were manipulated with AI. In September 2019, Google released a collection of visual deepfakes as part of the FaceForensics benchmark, which was cocreated by the Technical University of Munich and the University Federico II of Naples. More recently, researchers from SenseTime partnered with Nanyang Technological University in Singapore to design DeeperForensics-1.0 , a data set for face forgery detection that they claim is the largest of its kind.
"
|
15,640 | 2,020 |
"Resistant AI raises $2.75 million to protect algorithms from adversarial attacks | VentureBeat"
|
"https://venturebeat.com/2020/04/30/resistant-ai-raises-2-75-million-to-protect-algorithms-from-adversarial-attacks"
|
Resistant AI raises $2.75 million to protect algorithms from adversarial attacks
Resistant AI has raised $2.75 million in venture capital to develop an artificial intelligence system that protects algorithms from automated attacks.
Index Ventures and Credo Ventures led the investment, which included participation by Seedcamp, UiPath CEO Daniel Dines, and Avast CTO Michal Pechoucek.
Based in Prague, Resistant AI focuses on the growing problem of hackers harnessing AI to manipulate machine learning systems.
Experts had predicted that cybersecurity would eventually lead to an AI arms race between attackers and their targets.
“Companies are just now learning how to deploy AI,” said Resistant AI cofounder and CEO Martin Rehak. “And on the other side, we see criminals and fraudsters learning how to use those processes for their benefit and how to steal money at scale. Our job is to protect the AI and machine learning models.”

Resistant AI’s team includes a core group that worked at Cognitive Security, which was acquired by Cisco Systems in 2013. That team originally began working on AI for security back in 2006, Rehak said, at a moment when such technology seemed far over the horizon.
“The first five years, when I told anyone what we were doing, they told me I was crazy,” he said.
The AI-related work became increasingly central while they were at Cisco. But the group finally struck out on its own to focus on the issue of AI being used to attack AI — or, as Rehak explains, AI being used to attack various automated decision-making systems.
Experts have grown increasingly worried about the rise of adversarial attacks.
This refers to the idea of someone externally introducing elements into a machine learning model in order to disrupt or manipulate it.
When Resistant AI launched in 2019, it decided to focus first on financial companies, which had begun turning to automated systems to approve applications for various products.
Fraud attempts can occur in several ways. In one basic scenario, people use utility bills or bank statements with names changed to fool algorithmic-driven verification systems into opening accounts or financing or approving loans. Resistant’s AI intervenes by detecting visual anomalies or identifying data that seems suspicious to stop it from entering the approval system.
Resistant’s service can also review the decisions being made by a financial system, consider all the inputs, and look for correlations or inconsistencies within large batches. For example, a single request for approval might seem benign, but within a group of 100,000 requests, it may have abnormalities that resemble several other requests.
“That way, we can see that someone under different identities is actually fingerprinting the system and trying to find the vulnerability,” Rehak said.
By “fingerprinting,” Rehak means someone is submitting a range of documents and information to try to understand how a company’s algorithms and machine learning function.
The goal of such an attack can be twofold. First, the hacker may be trying to figure out the parameters of the algorithms in order to commit fraud. However, they may also be trying to use the attack to learn about the algorithm in order to copy it. They might then sell the information to other people who want to commit fraud or possibly even to competitors of the company being attacked.
In both cases, the hackers are increasingly using AI to automate and adapt their own methodology for probing these machine learning systems, Rehak said.
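One simple way to picture the batch-level check described above is a nearest-neighbor scan that flags submissions whose feature vectors sit suspiciously close to many others in the same batch. This is purely illustrative of the idea, not Resistant AI's product; how application documents are turned into feature vectors is assumed to happen upstream.

```python
# Illustrative batch-level anomaly check over application feature vectors.
import numpy as np

def flag_near_duplicates(features: np.ndarray,
                         radius: float = 0.05,
                         min_neighbors: int = 5) -> np.ndarray:
    """features: [N, D] array, one normalized row per application.
    Returns indices of rows with many near-identical neighbors."""
    # Pairwise Euclidean distances (fine for modest N; use ANN search at scale).
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    neighbor_counts = (dists < radius).sum(axis=1) - 1   # exclude self
    return np.where(neighbor_counts >= min_neighbors)[0]
```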
Going forward, Resistant plans to use the money to expand its staff of 20 people and extend its sales operations in Western Europe.
"
|
15,641 | 2,020 |
"Microsoft and MITRE release framework to help fend off adversarial AI attacks | VentureBeat"
|
"https://venturebeat.com/2020/10/22/microsoft-and-mitre-release-framework-to-help-fend-off-adversarial-ai-attacks"
|
Microsoft and MITRE release framework to help fend off adversarial AI attacks
Microsoft, the nonprofit MITRE Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch today released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with MITRE to build a schema that organizes the approaches employed by malicious actors in subverting machine learning models, bolstering monitoring strategies around organizations’ mission-critical systems.
According to a Gartner report , through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems. Despite these reasons to secure systems, Microsoft claims its internal studies find most industry practitioners have yet to come to terms with adversarial machine learning. Twenty-five out of the 28 businesses responding to the Seattle company’s recent survey indicated they don’t have the right tools in place to secure their machine learning models.
The Adversarial ML Threat Matrix — which was modeled after the MITRE ATT&CK Framework — aims to address this with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE vetted to be effective against production systems. With input from researchers at the University of Toronto, Cardiff University, and the Software Engineering Institute at Carnegie Mellon University, Microsoft and MITRE created a list of tactics that correspond to broad categories of adversary action. Techniques in the schema fall within one tactic and are illustrated by a series of case studies covering how well-known attacks such as the Microsoft Tay poisoning , the Proofpoint evasion attack, and other attacks could be analyzed using the Threat Matrix.
Above: The Adversarial ML Threat Matrix.
“The Adversarial Machine Learning Threat Matrix will … help security analysts think holistically. While there’s excellent work happening in the academic community that looks at specific vulnerabilities, it’s important to think about how these things play off one another,” Mikel Rodriguez, who oversees MITRE’s decision science research programs, said in a statement.
“Also, by giving a common language or taxonomy of the different vulnerabilities, the threat matrix will spur better communication and collaboration across organizations.”

Microsoft and MITRE say they will solicit contributions from the community via GitHub, where the Adversarial ML Threat Matrix is now available. Researchers can submit studies detailing exploits that compromise the confidentiality, integrity, or availability of machine learning systems running on Amazon Web Services, Microsoft Azure, Google Cloud AI, or IBM Watson, or embedded in client or edge devices. Those who submit research will retain the permission to share and republish their work, Microsoft says.
“We think that securing machine learning systems is an infosec problem,” Microsoft Azure engineer Ram Shankar Siva Kumar and corporate VP Ann Johnson wrote in a blog post. “The goal of the Adversarial ML Threat Matrix is to position attacks on machine learning systems in a framework that security analysts can orient themselves in these new and upcoming threat … It’s aimed at security analysts and the broader security community: the matrix and the case studies are meant to help in strategizing protection and detection; the framework seeds attacks on machine learning systems, so that they can carefully carry out similar exercises in their organizations and validate the monitoring strategies.”
"
|
15,642 | 2,020 |
"Google, Apple, and others show large language models trained on public data expose personal information | VentureBeat"
|
"https://venturebeat.com/2020/12/16/google-apple-and-others-show-large-language-models-trained-on-public-data-expose-personal-information"
|
Google, Apple, and others show large language models trained on public data expose personal information
Large language models like OpenAI’s GPT-3 and Google’s GShard learn to write humanlike text by internalizing billions of examples from the public web. Drawing on sources like ebooks, Wikipedia, and social media platforms like Reddit, they make inferences to complete sentences and even whole paragraphs. But a new study jointly published by Google, Apple, Stanford University, OpenAI, the University of California, Berkeley, and Northeastern University demonstrates the pitfall of this training approach. In it, the coauthors show that large language models can be prompted to show sensitive, private information when fed certain words and phrases.
It’s a well-established fact that models can “leak” details from the data on which they’re trained. Leakage, also known as data leakage or target leakage, is the use of information in the training process that couldn’t be expected to be available when the model makes predictions. This is of particular concern for all large language models, because their training datasets can sometimes contain names, phone numbers, addresses, and more.
In the new study, the researchers experimented with GPT-2, which predates OpenAI’s powerful GPT-3 language model. They claim that they chose to focus on GPT-2 to avoid “harmful consequences” that might result from conducting research on a more recent, popular language model. To further minimize harm, the researchers developed their training data extraction attack using publicly available data and followed up with people whose information was extracted, obtaining their blessing before including redacted references in the study.
By design, language models make it easy to generate an abundance of output. By seeding with random phrases, the model can be prompted to generate millions of continuations, or phrases that complete a sentence. Most of the time, these continuations are benign strings of text, like the word “lamb” following “Mary had a little…” But if the training data happens to repeat the string “Mary had a little wombat” very often, for instance, the model might predict that phrase instead.
The coauthors of the paper sifted through millions of output sequences from the language model and predicted which text was memorized. They leveraged the fact that models tend to be more confident in results captured from training data; by checking the confidence of GPT-2 on a snippet, they could predict if the snippet appeared in the training data.
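That confidence check can be approximated with the model's perplexity: snippets the model assigns unusually low perplexity are more likely to have been memorized. The sketch below scores candidate snippets with the public GPT-2 weights via the Hugging Face transformers library; the paper's full pipeline also compares these scores against reference measures, which is omitted here.

```python
# Sketch: rank candidate snippets by GPT-2 perplexity (lower = more confident).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss      # average next-token loss
    return float(torch.exp(loss))

candidates = ["Mary had a little lamb", "Mary had a little wombat"]
ranked = sorted(candidates, key=perplexity)     # most 'confident' snippets first
```

In practice, a snippet flagged this way is only a candidate; confirming memorization still requires checking it against the training data or, as the authors did, contacting the people whose information appears in it.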
The researchers report that, of 1,800 snippets from GPT-2, they extracted more than 600 that were memorized from the training data. The examples covered a range of content including news headlines, log messages, JavaScript code, personally identifiable information, and more. Many appeared only infrequently in the training dataset, but the model learned them anyway, perhaps because the originating documents contained multiple instances of the examples.
The coauthors also found that larger language models more easily memorize training data compared with smaller models. For example, in one experiment, they report that GPT-2 XL, which contains 1.5 billion parameters — the variables internal to the model that influence its predictions — memorizes 10 times more information than the 124-million-parameter GPT-2.
While it’s beyond the scope of the work, this second finding has implications for models like the 175-billion-parameter GPT-3, which is publicly accessible via an API. Microsoft’s Turing Natural Language Generation Model, a model that powers a number of services on Azure, contains 17 billion parameters. And Facebook is using a model for translation with over 12 billion parameters.
The coauthors of the study note that it might be possible to mitigate memorization somewhat through the use of differential privacy, which allows training on a dataset without revealing any details of individual training examples. But even differential privacy has limitations and won’t prevent memorization of content that’s repeated often enough. “Language models continue to demonstrate great utility and flexibility — yet, like all innovations, they can also pose risks. Developing them responsibly means proactively identifying those risks and developing ways to mitigate them,” Google research scientist Nicholas Carlini wrote in a blog post. “Given that the research community has already trained models 10 to 100 times larger, this means that as time goes by, more work will be required to monitor and mitigate this problem in increasingly large language models … The fact that these attacks are possible has important consequences for the future of machine learning research using these types of models.”

Beyond leaking sensitive information, language models remain problematic in that they amplify the biases in data on which they were trained. Often, a portion of the training data is sourced from communities with pervasive gender, race, and religious prejudices. AI research firm OpenAI notes that this can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias from some of the most popular models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa.
This bias could be leveraged by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies that “radicalize individuals into violent far-right extremist ideologies and behaviors,” according to the Middlebury Institute of International Studies.
OpenAI previously said it’s experimenting with safeguards at the API level including “toxicity filters” to limit harmful language from GPT-3. For instance, it hopes to deploy filters that pick up anti-Semitic content while still letting through neutral content talking about Judaism.
It remains unclear what steps might eliminate the threat of memorization, much less toxicity, sexism, and racism. But Google, for one, has shown a willingness to brush aside these ethical concerns when convenient. Last week, leading AI researcher Timnit Gebru was fired from her position on an AI ethics team at Google in what she claims was retaliation for sending colleagues an email critical of the company’s managerial practices. The flashpoint was reportedly a paper Gebru coauthored that questioned the wisdom of building large language models and examined who benefits from them and who is disadvantaged.
In the draft paper, Gebru and colleagues reasonably suggest that large language models have the potential to mislead AI researchers and prompt the general public to mistake their text as meaningful. Popular natural language benchmarks don’t measure AI models’ general knowledge well, studies show.
It’s no secret that Google has commercial interests in conflict with the viewpoints expressed in the paper. Many of the large language models it develops power customer-facing products, including Cloud Translation API and Natural Language API. While Google CEO Sundar Pichai has apologized for the handling of Gebru’s firing, it bodes poorly for Google’s willingness to address critical issues around large language models. Time will tell if rivals, including Microsoft and Facebook, react any better.
"
|
15,643 | 2,021 |
"Is neuroscience the key to protecting AI from adversarial attacks? | VentureBeat"
|
"https://venturebeat.com/2021/01/08/is-neuroscience-the-key-to-protecting-ai-from-adversarial-attacks"
|
Is neuroscience the key to protecting AI from adversarial attacks?
Deep learning has come a long way since the days when it could only recognize handwritten characters on checks and envelopes. Today, deep neural networks have become a key component of many computer vision applications , from photo and video editors to medical software and self-driving cars.
Roughly fashioned after the structure of the brain, neural networks have come closer to seeing the world as humans do. But they still have a long way to go, and they make mistakes in situations where humans would never err.
These situations, generally known as adversarial examples, change the behavior of an AI model in befuddling ways. Adversarial machine learning is one of the greatest challenges facing current artificial intelligence systems: adversarial examples can cause machine learning models to fail in unpredictable ways or to become vulnerable to cyberattacks.
Creating AI systems that are resilient against adversarial attacks has become an active area of research and a hot topic of discussion at AI conferences. In computer vision , one interesting method to protect deep learning systems against adversarial attacks is to apply findings in neuroscience to close the gap between neural networks and the mammalian vision system.
Using this approach, researchers at MIT and MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more predictable in their behavior and more robust to adversarial perturbations. In a paper published on the bioRxiv preprint server, the researchers introduce VOneNet, an architecture that combines current deep learning techniques with neuroscience-inspired neural networks.
The work, done with help from scientists at the University of Munich, Ludwig Maximilian University, and the University of Augsburg, was accepted at NeurIPS 2020, one of the most prominent annual AI conferences, which was held virtually last year.
Convolutional neural networks

The main architecture used in computer vision today is the convolutional neural network (CNN). When stacked on top of each other, multiple convolutional layers can be trained to learn and extract hierarchical features from images. Lower layers find general patterns, such as corners and edges, and higher layers gradually become adept at finding more specific things, such as objects and people.
In comparison to the traditional fully connected networks, ConvNets have proven to be more robust and computationally efficient. But there remain fundamental differences between the way CNNs and the human visual system process information.
“Deep neural networks (and convolutional neural networks, in particular) have emerged as surprising good models of the visual cortex — surprisingly, they tend to fit experimental data collected from the brain even better than computational models that were tailor-made for explaining the neuroscience data,” IBM director of MIT-IBM Watson AI Lab David Cox told TechTalks.
“But not every deep neural network matches the brain data equally well, and there are some persistent gaps where the brain and the DNNs differ.” The most prominent of these gaps are adversarial examples, in which subtle perturbations such as a small patch or a layer of imperceptible noise can cause neural networks to misclassify their inputs. These changes go mostly unnoticed by the human eye.
“It is certainly the case that the images that fool DNNs would never fool our own visual systems,” Cox says. “It’s also the case that DNNs are surprisingly brittle against natural degradations (e.g., adding noise) to images, so robustness in general seems to be an open problem for DNNs. With this in mind, we felt this was a good place to look for differences between brains and DNNs that might be helpful.” Cox has been exploring the intersection of neuroscience and artificial intelligence since the early 2000s, when he was a student of James DiCarlo, neuroscience professor at MIT. The two have continued to work together since.
“The brain is an incredibly powerful and effective information-processing machine, and it’s tantalizing to ask if we can learn new tricks from it that can be used for practical purposes. At the same time, we can use what we know about artificial systems to provide guiding theories and hypotheses that can suggest experiments to help us understand the brain,” Cox says.
Brainlike neural networks

Above: David Cox, IBM director of MIT-IBM Watson AI Lab

For the new research, Cox and DiCarlo joined Joel Dapello and Tiago Marques, the lead authors of the paper, to see if neural networks became more robust to adversarial attacks when their activations were similar to brain activity. The AI researchers tested several popular CNN architectures trained on the ImageNet dataset, including AlexNet, VGG, and different variations of ResNet. They also included some deep learning models that had undergone “adversarial training,” a process in which a neural network is trained on adversarial examples to avoid misclassifying them.
The scientists evaluated the AI models using the BrainScore metric, which compares activations in deep neural networks and neural responses in the brain. They then measured the robustness of each model by testing it against white-box adversarial attacks, where an attacker has full knowledge of the structure and parameters of the target neural networks.
“To our surprise, the more brainlike a model was, the more robust the system was against adversarial attacks,” Cox says. “Inspired by this, we asked if it was possible to improve robustness (including adversarial robustness) by adding a more faithful simulation of the early visual cortex — based on neuroscience experiments — to the input stage of the network.”

VOneNet and VOneBlock

To further validate their findings, the researchers developed VOneNet, a hybrid deep learning architecture that combines standard CNNs with a layer of neuroscience-inspired neural networks.
The VOneNet replaces the first few layers of the CNN with the VOneBlock, a neural network architecture fashioned after the primary visual cortex of primates, also known as the V1 area. This means image data is first processed by the VOneBlock before being passed on to the rest of the network.
The VOneBlock is itself composed of a Gabor filter bank (GFB), simple and complex cell nonlinearities, and neuronal stochasticity. The GFB is similar to the convolutional layers found in other neural networks. But while classic neural networks start with random parameter values and tune them during training, the values of the GFB parameters are determined and fixed based on what we know about activations in the primary visual cortex.
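A simplified way to picture a fixed Gabor filter bank is a convolutional layer whose kernels are computed from the closed-form Gabor function and then frozen rather than learned. The sketch below is a minimal stand-in for that idea; the parameter values are arbitrary placeholders, not the published, neurophysiology-derived settings used in the VOneBlock.

```python
# Sketch: a fixed (non-learned) Gabor filter bank as a convolutional front end.
import math
import torch
import torch.nn as nn

def gabor_kernel(size=15, theta=0.0, sigma=3.0, lam=6.0, psi=0.0, gamma=0.5):
    """Single Gabor kernel: oriented sinusoid under a Gaussian envelope."""
    half = size // 2
    ys, xs = torch.meshgrid(torch.arange(-half, half + 1).float(),
                            torch.arange(-half, half + 1).float(),
                            indexing="ij")
    x_t = xs * math.cos(theta) + ys * math.sin(theta)
    y_t = -xs * math.sin(theta) + ys * math.cos(theta)
    return torch.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2)) * \
           torch.cos(2 * math.pi * x_t / lam + psi)

class FixedGaborBank(nn.Module):
    def __init__(self, n_orientations=8, size=15):
        super().__init__()
        kernels = torch.stack([gabor_kernel(size, theta=i * math.pi / n_orientations)
                               for i in range(n_orientations)])
        weight = kernels.unsqueeze(1).repeat(1, 3, 1, 1) / 3.0   # RGB input
        self.conv = nn.Conv2d(3, n_orientations, size, padding=size // 2, bias=False)
        self.conv.weight = nn.Parameter(weight, requires_grad=False)  # frozen, not learned

    def forward(self, x):
        return torch.relu(self.conv(x))
```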
“The weights of the GFB and other architectural choices of the VOneBlock are engineered according to biology. This means that all the choices we made for the VOneBlock were constrained by neurophysiology. In other words, we designed the VOneBlock to mimic as much as possible the primate primary visual cortex (area V1). We considered available data collected over the last four decades from several studies to determine the VOneBlock parameters,” says Tiago Marques, Ph.D., PhRMA Foundation Postdoctoral Fellow at MIT and coauthor of the paper.
Above: Tiago Marques, Ph.D., PhRMA Foundation Postdoctoral Fellow at MIT

While there are significant differences in the visual cortex of different primates, there are also many shared features, especially in the V1 area. “Fortunately, across primates differences seem to be minor, and in fact there are plenty of studies showing that monkeys’ object recognition capabilities resemble those of humans. In our model, we used published available data characterizing responses of monkeys’ V1 neurons. While our model is still only an approximation of primate V1 (it does not include all known data and even that data is somewhat limited — there is a lot that we still do not know about V1 processing), it is a good approximation,” Marques says.
Beyond the GFB layer, the simple and complex cells in the VOneBlock give the neural network flexibility to detect features under different conditions. “Ultimately, the goal of object recognition is to identify the existence of objects independently of their exact shape, size, location, and other low-level features,” Marques says. “In the VOneBlock, it seems that both simple and complex cells serve complementary roles in supporting performance under different image perturbations. Simple cells were particularly important for dealing with common corruptions, [and] complex cells with white-box adversarial attacks.”

VOneNet in action

One of the strengths of the VOneBlock is its compatibility with current CNN architectures. “The VOneBlock was designed to have a plug-and-play functionality,” Marques says. “That means that it directly replaces the input layer of a standard CNN structure. A transition layer that follows the core of the VOneBlock ensures that its output can be made compatible with the rest of the CNN architecture.” The researchers plugged the VOneBlock into several CNN architectures that perform well on the ImageNet dataset. Interestingly, the addition of this simple block resulted in considerable improvement in robustness to white-box adversarial attacks and outperformed training-based defense methods.
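The plug-and-play pattern can be sketched by swapping the input stem of a standard torchvision ResNet for a fixed, untrained front end plus a 1x1 transition layer that matches channel counts; a Gabor bank like the one sketched earlier could play the fixed part. This shows only the general pattern, not the released VOneNet code, and the fixed front end here is a plain frozen convolution used as a stand-in.

```python
# Sketch: swapping a ResNet's learned input stem for a fixed front end.
import torch.nn as nn
import torchvision

# Fixed (untrained) first stage standing in for a V1-like block.
fixed_front = nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3, bias=False)
fixed_front.weight.requires_grad_(False)       # frozen, not learned

front_end = nn.Sequential(
    fixed_front,
    nn.ReLU(inplace=True),
    nn.Conv2d(32, 64, kernel_size=1),           # learnable transition layer
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

model = torchvision.models.resnet18()           # randomly initialized backbone
model.conv1 = front_end                         # replace only the input stage
# bn1/relu/maxpool and the rest of the network are left untouched.
```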
“Simulating the image processing of primate primary visual cortex at the front of standard CNN architectures significantly improves their robustness to image perturbations, even bringing them to outperform state-of-the-art defense methods,” the researchers write in their paper.
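As a rough illustration of the plug-and-play arrangement described above, the sketch below prepends a fixed front end and a 1×1 transition convolution to an off-the-shelf torchvision ResNet. This is a simplification under stated assumptions: it stacks the block in front of an unmodified backbone and maps back to three channels, whereas the paper's VOneBlock replaces the network's first layers outright; the add_v1_front_end helper is hypothetical.
```python
import torch.nn as nn
from torchvision.models import resnet18

def add_v1_front_end(n_orientations=8):
    """Illustrative only: prepend a fixed, biology-inspired block to a standard CNN.

    Reuses the FixedGaborFrontEnd sketch from earlier; the real VOneBlock replaces
    the CNN's first layers rather than stacking in front of an untouched backbone.
    """
    backbone = resnet18(weights=None)                     # any ImageNet-style CNN
    # Hypothetical transition layer: map the Gabor responses back to the
    # 3-channel input the unmodified backbone expects.
    transition = nn.Conv2d(n_orientations, 3, kernel_size=1, bias=False)
    return nn.Sequential(FixedGaborFrontEnd(n_orientations), transition, backbone)
```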
“The model of V1 that we added here is actually quite simple — we’re only altering the first stage of the system while leaving the rest of the network untouched, and the biological fidelity of this V1 model is still quite simple,” Cox says, adding that there’s a lot more detail and nuance one could add to such a model to make it better match what is known about the brain.
“Simplicity is strength in some ways since it isolates a smaller set of principles that might be important, but it would be interesting to explore whether other dimensions of biological fidelity might be important,” he says.
The paper challenges a trend that has become all too common in AI research in recent years. Instead of applying the latest findings about brain mechanisms, many AI scientists focus on driving advances by exploiting vast compute resources and large datasets to train ever-larger neural networks. That approach presents challenges of its own.
VOneNet proves that biological intelligence still has a lot of untapped potential and can address some of the fundamental problems AI research is facing. “The models presented here, drawn directly from primate neurobiology, indeed require less training to achieve more humanlike behavior. This is one turn of a new virtuous circle, wherein neuroscience and artificial intelligence each feed into and reinforce the understanding and ability of the other,” the authors write.
In the future, the researchers will further explore the properties of VOneNet and the further integration of discoveries in neuroscience and artificial intelligence. “One limitation of our current work is that while we have shown that adding a V1 block leads to improvements, we don’t have a great handle on why it does,” Cox says.
Developing the theory to answer this “why” question will enable AI researchers to home in on what really matters and to build more effective systems. They also plan to explore the integration of neuroscience-inspired architectures beyond the initial layers of artificial neural networks.
Says Cox, “We’ve only just scratched the surface in terms of incorporating these elements of biological realism into DNNs, and there’s a lot more we can still do. We’re excited to see where this journey takes us.” Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.
This story originally appeared on Bdtechtalks.com.
"
|
15,644 | 2,021 |
"Salesforce researchers release framework to test NLP model robustness | VentureBeat"
|
"https://venturebeat.com/2021/01/13/salesforce-researchers-release-framework-to-test-nlp-model-robustness"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salesforce researchers release framework to test NLP model robustness Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In the subfield of machine learning known as natural language processing (NLP), robustness testing is the exception rather than the norm. That’s particularly problematic in light of work showing that many NLP models leverage spurious connections that inhibit their performance outside of specific tests. One report found that 60% to 70% of answers given by NLP models were embedded somewhere in the benchmark training sets, indicating that the models were usually simply memorizing answers. Another study — a meta analysis of over 3,000 AI papers — found that metrics used to benchmark AI and machine learning models tended to be inconsistent, irregularly tracked, and not particularly informative.
This motivated Nazneen Rajani, a senior research scientist at Salesforce who leads the company’s NLP group, to create an ecosystem for robustness evaluations of machine learning models. Together with Stanford associate professor of computer science Christopher Ré and University of North Carolina at Chapel Hill’s Mohit Bansal, Rajani and the team developed Robustness Gym, which aims to unify the patchwork of existing robustness libraries to accelerate the development of novel NLP model testing strategies.
“Whereas existing robustness tools implement specific strategies such as adversarial attacks or template-based augmentations, Robustness Gym provides a one-stop-shop to run and compare a broad range of evaluation strategies,” Rajani explained to VentureBeat via email. “We hope that Robustness Gym will make robustness testing a standard component in the machine learning pipeline.”
Above: The frontend dashboard for Robustness Gym.
Robustness Gym provides guidance to practitioners on how key variables — i.e., their task, evaluation needs, and resource constraints — can help prioritize what evaluations to run. The suite describes the influence of a given task via a structure and known prior evaluations; needs such as testing generalization, fairness, or security; and constraints like expertise, compute access, and human resources.
Robustness Gym casts all robustness tests into four evaluation “idioms”: subpopulations, transformations, evaluation sets, and adversarial attacks. Practitioners can create what are called slices, where each slice defines a collection of examples for evaluation built using one or a combination of evaluation idioms. Users are scaffolded in a simple two-stage workflow, separating the storage of structured side information about examples from the nuts and bolts of programmatically building slices using this information.
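To give a feel for the slice abstraction, here is a generic Python sketch of slice-based evaluation: membership functions assign examples to named subpopulations, and a model is scored on each one. It illustrates the concept under assumed data and function names (evaluate_by_slice, predict_fn, the example slice lambdas); it is not Robustness Gym's actual API.
```python
from collections import defaultdict

def evaluate_by_slice(examples, predict_fn, slice_fns):
    """Generic sketch: group examples into named slices, then report per-slice accuracy."""
    buckets = defaultdict(list)
    for ex in examples:                          # ex: {"text": ..., "label": ..., "meta": {...}}
        for name, fn in slice_fns.items():
            if fn(ex):                           # membership test built from cached side info
                buckets[name].append(ex)
    report = {}
    for name, members in buckets.items():
        correct = sum(predict_fn(ex["text"]) == ex["label"] for ex in members)
        report[name] = correct / max(len(members), 1)
    return report

# Example slice functions (the subpopulation idiom): negation words, long inputs.
slice_fns = {
    "contains_negation": lambda ex: any(w in ex["text"].lower().split()
                                        for w in ("not", "never", "no")),
    "long_input": lambda ex: len(ex["text"].split()) > 128,
}
```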
Robustness Gym also consolidates slices and findings for prototyping, iterating, and collaborating. Practitioners can organize slices into a test bench that can be versioned and shared, allowing a community of users to together build benchmarks and track progress. For reporting, Robustness Gym provides standard and custom robustness reports that can be auto-generated from test benches and included in paper appendices.
Above: The named entity linking performance of commercial APIs compared with academic models using Robustness Gym.
In a case study, Rajani and coauthors had a sentiment modeling team at a “major technology company” measure the bias of their model using subpopulations and transformations. After testing the system on 172 slices spanning three evaluation idioms, the modeling team found performance degradations of up to 18% on 16 of those slices.
In a more revealing test, Rajani and team used Robustness Gym to compare commercial NLP APIs from Microsoft (Text Analytics API), Google (Cloud Natural Language API), and Amazon (Comprehend API) with the open source systems BOOTLEG, WAT, and REL across two benchmark datasets for named entity linking. (Named entity linking entails identifying the key elements in a text, like names of people, places, brands, monetary values, and more.) They found that the commercial systems struggled to link rare or less-popular entities, were sensitive to entity capitalization, and often ignored contextual cues when making predictions. Microsoft outperformed other commercial systems, but BOOTLEG beat out the rest in terms of consistency.
“Both Google and Microsoft display strong performance on some topics, e.g. Google on ‘alpine sports’ and Microsoft on ‘skating’ … [but] commercial systems sidestep the difficult problem of disambiguating ambiguous entities in favor of returning the more popular answer,” Rajani and coauthors wrote in the paper describing their work. “Overall, our results suggest that state-of-the-art academic systems substantially outperform commercial APIs for named entity linking.”
Above: The summarization performance of models compared using Robustness Gym.
In a final experiment, Rajani’s team implemented five subpopulations that capture summary abstractedness, content distillation, positional bias, information dispersion, and information reordering. After comparing seven NLP models, including Google’s T5 and Pegasus on an open source summarization dataset across these subpopulations, the researchers found that the models struggled to perform well on examples that were highly distilled, required higher amounts of abstraction, or contained more references to entities. Surprisingly, models with different prediction mechanisms appeared to make “highly correlated” errors, suggesting that existing metrics can’t capture meaningful performance differences.
“Using Robustness Gym, we demonstrate that robustness remains a challenge even for corporate giants such as Google and Amazon,” Rajani said. “Specifically, we show that public APIs from these companies perform significantly worse than simple string-matching algorithms for the task of entity disambiguation when evaluated on infrequent (tail) entities.” Both the aforementioned paper and Robustness Gym’s source code are available as of today.
"
|
15,645 | 2,021 |
"Microsoft open-sources Counterfit, an AI security risk assessment tool | VentureBeat"
|
"https://venturebeat.com/2021/05/04/microsoft-open-sources-counterfit-an-ai-security-risk-assessment-tool"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft open-sources Counterfit, an AI security risk assessment tool Share on Facebook Share on X Share on LinkedIn View of a Microsoft logo on March 10, 2021, in New York.
Microsoft today open-sourced Counterfit, a tool designed to help developers test the security of AI and machine learning systems. The company says that Counterfit can enable organizations to conduct assessments to ensure that the algorithms used in their businesses are robust, reliable, and trustworthy.
AI is being increasingly deployed in regulated industries like health care, finance, and defense. But organizations are lagging behind in their adoption of risk mitigation strategies. A Microsoft survey found that 25 out of 28 businesses indicated they don’t have the right resources in place to secure their AI systems, and that security professionals are looking for specific guidance in this space.
Microsoft says that Counterfit was born out of the company’s need to assess AI systems for vulnerabilities with the goal of proactively securing AI services. The tool started as a corpus of attack scripts written specifically to target AI models and then morphed into an automation product for benchmarking multiple systems at scale.
Under the hood, Counterfit is a command-line utility that provides a layer for adversarial frameworks, preloaded with algorithms that can be used to evade and steal models. Counterfit seeks to make published attacks accessible to the security community while offering an interface from which to build, manage, and launch those attacks on models.
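For readers unfamiliar with what an "evasion" attack looks like, below is a textbook fast gradient sign method (FGSM) sketch in PyTorch of the kind of perturbation such tools automate and benchmark. It is a generic illustration, not Counterfit's implementation or interface; the fgsm_evasion name and the epsilon value are assumptions.
```python
import torch

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Generic FGSM evasion sketch: nudge each input pixel in the direction
    that most increases the classifier's loss, producing an adversarial example."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()   # one signed gradient step
    return x_adv.clamp(0.0, 1.0).detach()         # keep pixels in the valid range
```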
When conducting penetration testing on an AI system with Counterfit, security teams can opt for the default settings, set random parameters, or customize each for broad vulnerability coverage. Organizations with multiple models can use Counterfit’s built-in automation to scan — optionally multiple times in order to create operational baselines.
Counterfit also provides logging to record the attacks against a target model. As Microsoft notes, telemetry might drive engineering teams to improve their understanding of a failure mode in a system.
The business value of responsible AI
Internally, Microsoft says that it uses Counterfit as a part of its AI red team operations and in the AI development phase to catch vulnerabilities before they hit production. And the company says it’s tested Counterfit with several customers, including aerospace giant Airbus, which is developing an AI platform on Azure AI services. “AI is increasingly used in industry; it is vital to look ahead to securing this technology particularly to understand where feature space attacks can be realized in the problem space,” Matilda Rhode, a senior cybersecurity researcher at Airbus, said in a statement.
The value of tools like Counterfit is quickly becoming apparent. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and in turn, punish those that don’t. The study suggests that there’s both reputational risk and a direct impact on the bottom line for companies that don’t approach the issue thoughtfully.
Basically, consumers want confidence that AI is secure from manipulation. One of the recommendations from Gartner’s Top 5 Priorities for Managing AI Risk framework, published in January, is that organizations “[a]dopt specific AI security measures against adversarial attacks to ensure resistance and resilience.” The research firm estimates that by 2024, organizations that implement dedicated AI risk management controls will avoid negative AI outcomes twice as often as those that don’t. According to a Gartner report, through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems.
Counterfit is a part of Microsoft’s broader push toward explainable, secure, and “fair” AI systems. The company’s attempts at solutions to those and other challenges include AI bias-detecting tools, an open adversarial AI framework, internal efforts to reduce prejudicial errors, AI ethics checklists, and a committee (Aether) that advises on AI pursuits. Recently, Microsoft debuted SmartNoise (formerly WhiteNoise), a toolkit for differential privacy, as well as Fairlearn, which aims to assess AI systems’ fairness and mitigate any observed unfairness issues with algorithms.
"
|
15,646 | 2,014 |
"Data for dummies: SugarCRM just got stupid-simple business data | VentureBeat"
|
"https://venturebeat.com/2014/03/26/sugarcrm-sweetens-its-data-with-dun-bradstreet-alliance"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data for dummies: SugarCRM just got stupid-simple business data Share on Facebook Share on X Share on LinkedIn From the SugarCRM website Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Popular customer relationship management (CRM) vendor SugarCRM is enlarging its data offering today by announcing integrated business data from Dun & Bradstreet (D&B).
The new data will help its users better pinpoint sales opportunities.
D&B data had previously been available as an outside source to the software, in addition to such sources as LinkedIn, HootSuite, and in-house ERP systems.
This new D&B partnership is intended to provide access to data via searching instead of manual entry. Users can search in SugarCRM for, say, new prospects, click to add lists of competitors or corporate family members, or search for departmental contacts and add them with a click.
The “tighter integration [of D&B data] takes advantage of the Sugar UX … interface to bring that data into a single view” of all the information on a given topic, SugarCRM senior vice president of marketing Jennifer Stagnaro told VentureBeat.
SugarCRM launched a redesigned user experience last fall. This Sugar UX emphasizes graphs for visualization of data, inline editing, activity streams, and contextual information that it pulls from outside.
“Sales, marketing, and customer service teams are fighting a losing battle against incomplete data,” D&B senior vice president Mike Sabin said in a statement. His company can certainly fill in a lot of missing company, industry, and contact information, as it boasts over 230 million business records and 100 million contacts, as well as unstructured social and news data.
D&B data is also available directly in Salesforce, SugarCRM’s biggest competitor.
Three levels of D&B data integration will be offered next month through SugarCRM resellers – a Basic Package with company and contact info, a Standard Package that adds such additional detail as corporate parent info, and a Premium level with detailed industry and competitor data plus unlimited news on prospects and customers.
"
|
15,647 | 2,017 |
"Why marketers should embrace a holistic approach to their data | VentureBeat"
|
"https://venturebeat.com/2017/01/06/why-marketers-should-embrace-a-holistic-approach-to-their-data"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Why marketers should embrace a holistic approach to their data Share on Facebook Share on X Share on LinkedIn Presented by Tableau Marketers have gotten the message: data matters.
Yes, we’ve gotten data-savvy, but many marketers still aren’t fully leveraging the value of their data. Let me tell you what I mean.
By now, we’ve all started using data to measure progress, spot opportunities, and reach new audiences. The problem is many marketers are doing so in their own corners of the building. That’s the baseline — the minimum viable set of marketing analytics, if you will.
Because marketing is a huge group effort, tracking the performance of just your own work doesn’t give you the whole picture. That’s why marketers should embrace a holistic approach to their data analysis. This involves connecting the data dots across functions and teams for a complete view of the entire marketing organization. That’s how you get to next-generation marketing analytics.
Why is this a better approach? Think of all the moving parts and stakeholders, and imagine linking all those pieces together through data. What if every marketer could explore the data to see the results of her work in the context of the greater effort? And what if the decision makers can see all of the data together to make strategic decisions quickly and move the business forward? By pulling together all the data and empowering people to expand their analysis, marketing organizations can gain a full-picture view of their efforts. Here are some ways this integrated approach can maximize the impact of your work.
Capturing the full life of a lead
Many marketers have data snapshots of their leads’ journey down the funnel. These come in the form of metrics like email open rates, website visits, and web-form submissions. Sure, these metrics are valuable on their own, but when you stitch together these snapshots, you gain something even more valuable: the full life of a lead through the marketing funnel.
You can follow the data to see the person’s entire path — how you first reached the person, which content, channel, or activity triggered the highest point of engagement, when the person converted, what the resulting sale looked like, and what the customer’s journey has looked like since.
With all your data in one place, you start to see how the pieces fit together. Did A lead to B, to C, as you expected? If a prospect responded well to a free trial offer, what is the experience or content served up next? Is that an effective path? Where are the weak spots? By looking at the entire journey, you can gain a deeper understanding of how customers and prospects engage with brand touch points — which are working, which aren’t, and which are sending people to competitors. Then you can refine your strategy to make sure your efforts deepen relationships, address needs, and improve your conversion rate.
Fine-tuning your media mix in real-time
The media mix is a huge part of the marketing budget as it plays a critical role in generating leads. It’s also an area that can benefit from an integrated approach to analytics.
Sure, data can be valuable in measuring the performance of each channel — whether digital ads managed to reach a certain target audience, for example. But looking at that single metric is only so valuable; it’s when you look at the performance of all the channels side by side that you start to see how to best reach your intended audience.
Let’s say your digital ads aren’t performing well for a certain campaign. If that’s the only metric you’re tracking, you’ll likely pause any spending on that channel and leave it at that. But if you’ve pulled together the data for all of your channels, that first metric can be your starting point. You can look to the data to see which channels are successfully reaching your intended audience. Then you can quickly reallocate your budget to those channels to maximize the reach of your campaign.
Your media strategy likely involves a dozen different platforms with different metrics. But that doesn’t mean those platforms have to work in silos. With real-time data and a comprehensive approach to your analytics, you can get the most reach out of every dollar you spend.
Empowering the sales team with full-picture insights
Having a 360-degree view of the data also helps you deepen your relationship with your internal customers, the sales team. For marketers, it’s crucial to build trust and partnership with sales, and transparency is a big part of that equation.
You can use data to demonstrate how they’re engaging with leads. One option is to create and share a campaign history dashboard so the sales team can easily see the prospect’s journey down the marketing funnel thus far. Share information like which activities occurred when, and which activities resonated with the prospect.
You can also create an activity tracker. Instead of making the sales team dig through rows of data to piece together history, offer a quick overview of activities that lets them click through to see what’s happening and when. With a real-time “who’s hot” dashboard, the sales team can drill down by territory and prioritize their lead queue.
Once you create these dashboards for sales, you can embed them in a place the team frequents, like Salesforce or your CRM tool of choice. That way, they become a natural part of the team’s workflow.
Gaining a competitive edge with next-generation marketing analytics
Data-smart marketers are using data to move the needle in pockets across the organization. And now, it’s time to take the next step in your analytics journey. By bringing together all the data for a complete view of your analytics, you can leverage the full value of your data. You can connect dots and uncover broader actionable insights that can impact strategy at the organizational level. With next-generation marketing analytics, the entire organization can work smarter, innovate faster, and gain a competitive edge.
Sponsored posts are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
"
|
15,648 | 2,021 |
"Is poor data quality undermining your marketing AI? | VentureBeat"
|
"https://venturebeat.com/2021/04/22/is-poor-data-quality-undermining-your-marketing-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Is poor data quality undermining your marketing AI? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Marketing’s potential to deliver results relies on data quality, but data accuracy, consistency, and validity continue to be a challenge for many organizations. Inconsistent data quality is holding marketing teams back from converting leads into sales, accurately tracking campaign performance, and taking on the larger challenges of optimizing product mix and product/service revenue forecasts.
The latest analytics, Account-Based Marketing (ABM), CRM, marketing automation, and lead scoring tools all provide real-time data capture and analysis. How the tools ensure consistent data quality directly impacts the quality of the AI and machine learning models the tools use.
Inconsistent data drives opportunities away
Marketing teams can’t deliver on their goals with bad data quality. For example, inaccurate prospect data clogs sales pipelines by slowing down efforts to turn marketing qualified leads (MQLs) into sales qualified leads (SQLs).
Two-thirds of sales leads don’t close because of bad data quality, and up to 25% of a typical organization’s customer and prospect records have critical data errors jeopardizing deals, Forrester said in a recent research brief.
Gartner’s 2020 Magic Quadrant for Data Quality Solutions says poor data quality costs the typical enterprise up to $12.9 million or more every year.
Dun & Bradstreet’s study The Past, Present, and Future of Data says that 25% of businesses with over 500 employees have lost a customer due to incomplete or inaccurate information.
Problems with data quality increase the odds of failure for AI initiatives such as predictive audience offers and promotions, personalization, AI-enabled chatbots for advanced service, and automated service recovery. A quarter of organizations attempting to adopt AI report failure rates of up to 50%, IDC said recently.
The leading causes of inconsistent data quality in marketing include problems with taxonomy and meta-tagging, lack of data governance, and loss of productivity.
No data consistency
The most common reason AI and ML fail in the marketing sector is that there’s little consistency to the data across all campaigns and strategies. Every campaign, initiative, and program has its unique meta-tags, taxonomies, and data structures. It’s common to find marketing departments with 26 or more systems supporting 18 or more taxonomies, each created at one point in a marketing department’s history to support specific campaigns.
O’Reilly’s The State of Data Quality In 2020 survey found that over 60% of enterprises see their AI and machine learning projects fail due to too many data sources and inconsistent data. While the survey was on the organization level, it would not be a stretch to assume the failure rate would be higher within marketing departments, as it’s common to create unique taxonomies, databases, and metatags for each campaign in each region.
Above: Marketing departments face a variety of data quality issues. (O’Reilly, State of Data Quality in 2020)
The larger, more globally based, and more fragmented a marketing department is, the harder it is to achieve data governance. The O’Reilly State of Data Quality Survey found that just 20% of enterprises publish information about data provenance or data lineage, which are essential tools for diagnosing and resolving data quality issues. Creating greater consistency across taxonomies, data structures, data field definitions, and meta-tags would give marketing data scientists a higher probability of succeeding with their ML models at scale.
Up to a third of a typical marketing team’s time is spent dealing with data quality issues, which has a direct impact on productivity, according to Forrester’s Why Marketers Can’t Ignore Data Quality study.
Inaccurate data makes tactical decisions harder to get right, which could impact revenues. Forrester found that 21 cents of every media dollar have been wasted over the last 12 months (as of 2019) due to poor data quality. Taking the time to improve data quality and consistency in marketing would convert the lost productivity to revenue.
Start with change management and data governance
Too often, marketers and the IT teams supporting them rely on data scientists to improve inconsistent data. It’s time-consuming, tedious work and can consume up to 80% or more of the data scientist’s time.
It is no surprise that data scientists rate cleaning up data as their least-liked activity.
Instead of asking data scientists to solve the marketing department’s data quality challenges, it would be far better to have the marketing department focus on creating a single, unified content data model. The department should consolidate diverse data requirement needs into a single, unified model with a taxonomy rigid enough to ensure consistency, yet adaptive enough to meet unique campaign needs. Change management makes the marketer’s job easier and more productive because there is a single, common enterprise taxonomy. Data governance is key to solving this problem, and marketing leaders have to be able to explain how improving metadata consistency and content data models fits within the context of each team member’s role. After that, the marketing organization should focus on standardizing across all taxonomies and the systems supporting them.
The bottom line is that inconsistent data quality in marketing impacts the team by jeopardizing new sales cycles and creating confusion in customer relationships. The ability to get AI and ML pilots into production and provide insights valuable enough to change a company’s strategic direction depends on reliable data. Companies will find their marketing campaigns’ future contributions to growth are defined by how the team improves data quality today.
"
|
15,649 | 2,014 |
"The new CMO: Customer service, product development, sales, data ... oh, and branding | VentureBeat"
|
"https://venturebeat.com/2014/04/01/the-new-cmo-customer-service-product-development-sales-data-oh-and-branding"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The new CMO: Customer service, product development, sales, data … oh, and branding Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In the good old days of fat cats and see-gars and three TV networks, the chief marketing officer’s job was easy, if it even existed at all: brand, positioning, and placement on TV, newspapers, magazines, plus maybe some radio.
Done, martinis at noon, see you at the golf course for tee time at three.
Not any more. Today’s CMO has increased responsibility for revenue and that, plus the new complexities of social and mobile and web, has increased involvement in all aspects of what his or her company does, according to a new study by Deloitte and ExactTarget (Salesforce.com’s marketing cloud).
“Marketing may be signing up for big numbers, but the customer purchase journey is splintered across product, sales, and service,” the study says. “Many CMOs are faced with a conversion path they don’t entirely own.”
According to the study, which surveyed 228 “global marketing leaders” who are mostly at companies with more than $500 million in annual revenue, CMOs face five new challenges:
1. Be responsible for top line growth
2. Own the customer experience
3. Use data to drive marketing
4. Operate in real-time
5. Master metrics … especially ROI
53 percent of CMOs have increased pressure to deliver revenue growth this year, the study says, but only 27 percent are working to align product development and sales with marketing to help deliver on the commitment. Almost half, however, are taking greater ownership of customer-facing teams, and 38 percent are taking a larger customer-service role.
One area with obvious impact? Social media, where marketing hears customer feedback that can then be injected into the product lifecycle.
Perhaps nowhere is the new set of CMO responsibilities more obvious than in the exploding world of data. 61 percent of CMOs say that data acquisition is one of their three key priorities for 2014, and an almost equal number say that testing and optimization based on that data is another. The key, however, is in using the data to personalize customer experiences and drive customer acquisition.
Above: CMO’s key areas of focus for 2014
“We must move from numbers keeping score to numbers that drive better actions,” the study says.
If data is a challenge, the new real-time world of online and mobile is an even bigger one. But as mobile and online ad exchanges and real-time bidding platforms eat ever more of the corporate world’s ad spend, real-time marketing is becoming a must-have.
Essentially, it’s one-to-one marketing in action: “Real-time digital marketing techniques that sense customer behavior and respond (like instant geotargeted alerts via push message, or automated and personalized emails based on website or social activity) are becoming the standard in 1:1 communication, shortening the lag time between a customer action and a perfectly timed and targeted brand response. Real-time efforts have replaced segment-centric batch-and-blast marketing … all the while respecting [customers’] preferences as a unique person — not a persona.” Clearly that kind of high-touch and highly automated response is only possible with marketing technology, including ExactTarget’s Salesforce1-based solution, competitor Adobe’s marketing cloud, Microsoft’s new marketing solutions, and many other marketing automation solutions for companies large and small.
So naturally you have to take a study like this with at least one grain of salt.
True enough, perhaps.
But the new reality of marketing technology is that with new capabilities and new data come new responsibilities, which can only be managed at scale with yet more technology. And while that’s a challenge that many traditional marketers aren’t yet sure they can accept, it’s clearly one that is the future of the new marketing department.
"
|
15,650 | 2,015 |
"Are CMOs wasting money on faulty marketing analytics? | VentureBeat"
|
"https://venturebeat.com/2015/03/17/are-cmos-wasting-money-on-faulty-marketing-analytics"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Are CMOs wasting money on faulty marketing analytics? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Chief marketers today know merely dipping a toe into the data technology pool is no longer sufficient. But in a digital era where it’s clear analytics are key, why are so few marketers taking the plunge? Research shows that at least they’re trying. Although Gartner’s 2015 CMO Report reveals 82 percent of CMOs feel underprepared to deal with the data explosion, global market intelligence firm IDC predicts the CMO will drive more than $32 billion in marketing technology by 2018.
Still, with billions backing ad-hoc marketing analytics campaigns, McKinsey & Company says there are billions more up for grabs. After eight years of researching more than 400 diverse organizations, analysts at McKinsey found that an integrated analytics approach can free anywhere between 15 and 20 percent of marketing spending.
“Worldwide, that equates to as much as $200 billion that can be reinvested by companies or drop straight to the bottom line,” researchers wrote in a June 2014 article.
Clearly, analytics and customer lifecycle management processes must be woven into everything the CMO does. But just like having an analytics solution doesn’t make you a data scientist, as BeyondCore CEO Arijit Sengupta noted to VB in November, simply having customer data doesn’t make your analytics correct.
The multi-million dollar question is moving from “do we need analytics?” to “are the analytics even accurate?” “With companies using as many as 100 products to aid their sales and marketing efforts, it suggests that many employees are not only bringing their own devices to work (BYOD), but are also bringing their own marketing processes (BYOP) and toolsets (BYOT) when they join a company,” VB analyst Stewart Rogers wrote in the 2014 State of Marketing report. “This, in itself, creates serious worry for the future accuracy and cleanliness of a company’s central record of customer data, not to mention a lack of documented and compatible processes right across the entire organization.” CMOs are now at a crossroads between data quality and data results. It’s no longer enough to dabble in analytics and come out with the richness required for informed decision-making. The business needs integrated systems across IT infrastructure, and marketers — not IT pros — must champion the call for improved data controls and governance as their cause.
As corporate data grows 40 percent annually over the next decade, marketers need to get a handle on their data quality. It’s estimated that anywhere between 10 and 25 percent of B2B marketing databases have errors and are “dirty.” Combine that with the latest figures from Sirius Decisions, which estimate companies spend $100 per inaccurate data record on things like poor lead generation and sending direct-mail marketing to the wrong addresses.
Now imagine your marketing database houses 100,000 records, and 20 percent of those contain errors. Multiplying $100 by 20,000 reveals your organization throws away about $2 million annually in marketing dollars because of poor data quality.
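For clarity, the back-of-the-envelope math above can be written out directly; the figures are simply the ones cited in this article, not new data.
```python
records = 100_000                 # size of the hypothetical marketing database
error_rate = 0.20                 # share of records with errors
cost_per_bad_record = 100         # dollars, per the Sirius Decisions estimate cited above
annual_waste = records * error_rate * cost_per_bad_record
print(f"${annual_waste:,.0f}")    # -> $2,000,000
```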
Compounding that waste year-over-year at a 40 percent annual data growth rate will likely make any CFO clutch his calculator. However, it also creates the perfect way to start the conversation about improving data management systems.
It may be only a dialogue at first, but it’s an essential conversation as plans take shape for the next one, two, or even five years. If this is your legacy, then it’ll be a good one, changing the way marketing influences the business bottom line and interacts across functional silos to see transformative results.
Whether it is via cross-selling, churn management, or targeting the most profitable customers, data has the power to grow consumer loyalty in the “age of the customer.” By embracing the right tools to lead this charge, you’re giving your company a strong advantage while investing in the future of an evolving marketing profession — one that requires both a new skillset and a new mentality.
Manji Matharu is the president of analytics at Infogix , a data integrity, controls, and analytics organization. Matharu was formerly the CEO of Agilis International before its acquisition by Infogix.
"
|
15,651 | 2,015 |
"Big data, meet dumb data: How CMOs are driving value from more (and less) data | VentureBeat"
|
"https://venturebeat.com/2015/03/18/big-data-meet-dumb-data-how-cmos-are-driving-value-from-more-and-less-data"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Insight Big data, meet dumb data: How CMOs are driving value from more (and less) data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Big data is great, but larger data sets do not always mean better insights. And while today’s CMO is tasked with being a data-driven marketer, extracting value from the vast proliferation of data they see every day is an increasing challenge.
In this recent VB Insight Report “Big Data, Meet Dumb Data”, we asked 757 marketing pros how they do analytics.
The results are surprising.
What isn’t a surprise, of course, is that CMOs love big data. 74 percent of CMOs want more data, and say that more data creates more opportunities. Also not much of a surprise is that CMOs prefer small sets of clean data to bigger sets of complex data. A bit more of a surprise is that for half of CMOs, ease of interpreting or extrapolating data was their top priority in a campaign.
In other words, although marketing is getting more technical, simplicity is still primary.
That has its dangers, however. June Andrews, a senior data scientist at LinkedIn, told VentureBeat that not only is data accessibility increasing “hand over fist,” but that most of her colleagues are leaving 20 percent of the opportunity on the table when they are only able to make sense of 80 percent of the data.
So, how are marketing professionals making use of the 80 percent of data they are using? Mostly for market segmentation, apparently: it’s the top priority emphasized when developing new marketing campaigns.
Data-driven marketing campaigns sound great, but how do you make them happen? In one of many case studies found in Big Data, Meet Dumb Data, author Neal Ungerleider highlights how Vail Resorts devised a smarter way of leveraging their data sets to better engage with their customers — and increase ROI.
By unifying disparate silos of data from hotels, ski hills, and ski schools into a single analytics platform, Vail Resorts created a data driven marketing campaign that connected all the dots on Vail’s customer touch points. The company could then create a smart marketing campaign driven by users that resulted in more than 35 million social impressions across Twitter and Facebook.
Questions of execution and best practices remain, however.
What products are marketing professionals using to make sense of their data? What are the most important factors for executing campaigns? Do professionals feel that data proliferation creates opportunities or difficulties? Access Neal Ungerleider’s report to find the answers to these questions; and to further explore the role of a data-driven CMO, the tools they use, data strategies that lead to success, and ultimately how winning organizations find the “smart data” in a sea of “dumb data.”
"
|
15,652 | 2,021 |
"Pivoting to privacy-first: Why this is an adapt-or-die moment | VentureBeat"
|
"https://venturebeat.com/2021/04/03/pivoting-to-privacy-first-why-this-is-an-adapt-or-die-moment"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Pivoting to privacy-first: Why this is an adapt-or-die moment Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Operating in the digital advertising ecosystem isn’t for the faint of heart, and that’s never been truer than it is in 2021. The landscape is undergoing unprecedented transitions right now as we make a much-needed pivot to a privacy-first reality, and a lot of business models, practices, and technologies are not going to survive the upheaval. That said, I’m not here to make doomsday predictions. In fact, there are a lot of reasons for genuine optimism right now.
As an industry, we’re heading in the right direction, and when we emerge on the other side of important transitions — including Google’s removal of third-party cookie support in Chrome and Apple’s limitations on IDFA — our industry will be stronger as a whole, as will consumer protections. Let’s take a look at the principles that will define the digital advertising and marketing world of the future, as well as the players that operate within it.
To win, you have to embrace industry change
Google gave the industry more than two years’ warning of its plans to end third-party cookie support on Chrome in 2022. Since then, a number of companies and industry organizations have rolled up their sleeves and started planning for what has long been an inevitability. Those that leaned into the conversation, digesting Google’s position and anticipating how the cookieless future would look, weren’t surprised when Google clarified in March 2021 that it isn’t planning to build or use alternate identifiers within its ecosystem.
The simple fact is that burying your head in the sand or digging your heels in as it relates to changes of this magnitude isn’t an option. Industry consternation, and even legal pushbacks, might delay implementation of certain policy shifts, but that’s all they will do — delay the inevitable. The writing is on the wall: Greater privacy controls are coming to the digital landscape, and the companies that succeed in the future will be the ones that embrace — and even help to accelerate — this transition.
Don’t put all your eggs into one basket
If the panic that followed Google’s cookieless announcement taught us anything, it should have been this: The digital marketing ecosystem can’t allow itself to become overly reliant on any single technology or provider. The future belongs to those that put interoperability at the heart of their approach.
Moving forward from the cookie, there are a few truths we must recognize. One is that there’s no single universal identifier that’s going to step forward to fill the entirety of the void left by third-party cookies. A number of companies are moving forward with plans for their own universal identifiers, and taken together, these identifiers will help to illuminate user identity on a portion of the open web (i.e., non-Google properties). They will be an important part of the ecosystem but by no means a silver bullet to comprehensive cross-channel, personalized advertising.
Another massive component of the post-cookie landscape will be behavioral cohorts, embodied most prominently in Google’s Federated Learning of Cohorts (FLoC) construct. Through FLoC, Google will be creating targetable groups of anonymous users who navigate the internet in similar ways. The good news is that, through FLoC, nearly all of Chrome’s users will become addressable in a fully private manner, whereas only a portion of them were addressable via cookies. As such, marketers and their partners will need to build solutions that accommodate FLoC and other cohort-driven approaches. But at the same time, they also need to look beyond what Google’s putting into the marketplace in order to continue effective cross-channel marketing and personalization across the broader landscape.
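To make the cohort idea concrete, here is a toy sketch of grouping users by similar browsing behavior with a locality-sensitive hash, loosely in the spirit of the SimHash approach described in the original FLoC proposal. It is purely illustrative (not Chrome's implementation), and the domains, bit width, and helper names are made up.

```python
# Toy cohort assignment: users with similar browsing histories tend to land in
# similar cohort IDs. Loosely inspired by SimHash; NOT Google's FLoC code.
import hashlib

def cohort_id(visited_domains: set[str], bits: int = 16) -> int:
    counts = [0] * bits
    for domain in visited_domains:
        h = int.from_bytes(hashlib.md5(domain.encode()).digest()[:4], "big")
        for b in range(bits):
            counts[b] += 1 if (h >> b) & 1 else -1
    # Histories that overlap heavily flip few sign bits, so their IDs stay close
    return sum(1 << b for b in range(bits) if counts[b] > 0)

history_a = {"news.example", "ski-gear.example", "trailmaps.example"}
history_b = history_a | {"weather.example"}          # one extra site visited
print(cohort_id(history_a), cohort_id(history_b))    # similar bit patterns
```

The point of the sketch is that advertisers would see only a group label, never an individual identity.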
Ultimately, companies that can bring their own ground truth of consumer understanding to the table — and then extend their insights through the most important identifiers and behavioral cohort solutions — will prove the most adaptable to future marketplace shifts. The days of putting all your digital eggs into one ecosystem basket are long gone.
An always-on crystal ball
The next 12 months are going to be transformative in our industry. In 24 months, we’ll all be a lot wiser. We will have taken universal IDs and behavioral cohorts for a few laps around the track, and we’ll have a much stronger sense of the role that they can and will play in furthering our consumer connections and understanding. Likewise, the innovators of our industry will have gotten to work on rewriting the internet economy around the new privacy-first reality, and we’ll all be reaping the benefits of their novel ideas and solutions.
Along the way, of course, we will see a lot of companies pivoting. This might be a period of rapid transformation, but there’s no reason to believe a period of stagnation awaits us on the other side. The future, as always, belongs to the nimble — the ones that anticipate and adapt while others resist. Now is the time to be fearless in building the future of our industry in a way that is sustainable for companies and consumers alike.
Tom Craig is CTO at Resonate.
"
|
15,653 | 2,021 |
"Android privacy changes are coming -- are you ready? | VentureBeat"
|
"https://venturebeat.com/2021/05/25/android-privacy-changes-are-coming-are-you-ready"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Android privacy changes are coming — are you ready? Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
A few weeks ago, Apple made the change that app developers, advertisers, and Facebook have long fretted about: They moved access to the Identifier for Advertisers (IDFA) and the ability to “track” users across third-party sites and apps behind an opt-in. Developers and advertisers who weren’t immediately hit hard by Apple’s swing are waiting for the second shoe to drop: Google ’s inevitable changes to the Android AdID.
Early signals indicate a wide range of opt-in rates for iOS tracking. Some of the apps I’ve spoken to say they are seeing 45-55% of users opting in. But wider surveys seem to indicate that average opt-in rates are hovering around a more disappointing 5-10%.
We don’t quite yet know who will be better off and who will be worse off as a result of the change.
Many expected Google to announce similar changes to Android AdID at its developer conference last week, but the announcement didn’t materialize. If you think the silence means the blow isn’t coming, don’t be fooled. It’s just delayed.
The good news is that developers don’t necessarily need to hang in anticipation — they can evaluate Google’s potential moves to determine and implement a strategy that helps mitigate the impact of Google’s eventual decision.
Here are three paths available to Google, along with the likelihood and suspected impact of each.
Option 1: Letting Android play in the Sandbox
The most complicated shift would be a move to deprecate the Android AdID and move ad delivery and reporting into the Android operating system. Such an approach would build on the work of the Chrome Privacy Sandbox team and could mirror some features of Apple’s SKAdNetwork. At a high level, solutions like Privacy Sandbox and SKAdNetwork disintermediate the relationship between the developer/publisher and consumer.
While there is a robust debate about whether this approach is good for consumer privacy, what is less arguable is that it would strengthen the platform at the expense of independent ad tech, advertisers and, likely, publishers and app developers. Google is already facing investigations into the competition (and privacy) implications of Privacy Sandbox. Can Google realistically afford an assault on Android from competition regulators?
Conclusion: This option is unlikely, but still possible if Google decides to align Chrome and Android.
Option 2: Google as cop on the beat
Another option would be for Google to mirror the approach Apple has taken with the App Tracking Transparency framework, putting the Android AdID behind an opt-in and launching an associated policy framework to regulate and enforce a broader definition of tracking. Remember, tracking on iOS involves a whole host of data uses beyond just access to the IDFA.
While this outcome isn’t out of the realm of possibility, it would alienate two sets of stakeholders Google cares deeply about: app developers and advertisers. App developers would find it harder to understand if their advertising is driving app installs, and advertisers would find it harder to deliver personalized ads in apps. In contrast to Google, recent Congressional testimony seems to indicate that Apple views developers with some healthy paternalism, and Apple doesn’t have a vested interest in the advertising ecosystem.
If Google does end up going the route of a major policy change around “tracking” accompanied by enforcement, expect a collaborative rollout process that gives developers and advertisers time to adequately prepare for such a meaningful change.
Conclusion: This option could happen, but Google would be risking a lot of blowback from stakeholders it cares about.
Option 3: Opt-in, but with a respect for the law
A lighter-weight alternative to Apple’s approach would be to move the Android AdID behind an opt-in but without requiring the consent screen for additional forms of tracking as Apple has. Moving the Android AdID behind an opt-in is a technical change, whereas requiring the opt-in for other forms of tracking, like the collection of an email address at login, is a policy shift. This approach would provide consumers with additional transparency and choice, while avoiding the thorny policy debate that has emerged around how to define tracking.
App developers could collect and use other data besides the Android AdID in compliance with laws like California’s Consumer Privacy Act, Virginia’s Consumer Data Protection Act, the EU’s General Data Protection Regulation, and — hopefully someday soon — a comprehensive federal U.S. privacy law. And this approach would leave the debate over what constitutes tracking to those elected to grapple with hard trade-offs and settle policy debates: the U.S. Congress.
This is the most likely scenario for three reasons. First, it is clearly a step in the right direction for consumers. Second, it would help satisfy the concerns of privacy regulators. Finally, it would preserve the trust of app developers and advertisers.
Conclusion: Ding ding ding — this seems like the winner.
With an announcement expected in the coming months, developers and advertisers must use this time to place their bets on which of the above three options is most likely to come to pass and then furiously plan and prepare.
Here’s one more thing to remember: Whatever path Google chooses, consumer privacy is going to be prioritized. This likely means that the future will involve less data scale, smarter data science, and a more clear explanation of the value exchange that occurs when a consumer chooses to allow their data to be collected and used. Trust has to be at the core of all that we build.
Tyler Finn is Director of Data Strategy at Foursquare.
"
|
15,654 | 2,017 |
"The RetroBeat: Sonic 3D Blast sprints to a new legacy with an unofficial Director's Cut | VentureBeat"
|
"https://venturebeat.com/2017/11/22/the-retrobeat-sonic-3d-blast-sprints-to-a-new-legacy-with-an-unofficial-directors-cut"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The RetroBeat: Sonic 3D Blast sprints to a new legacy with an unofficial Director’s Cut Share on Facebook Share on X Share on LinkedIn Sonic 3D Blast Director's Cut.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Most Sonic fans agree that Sega’s hedgehog mascot had his early peak during the Genesis days in the mid 1990s. But 1996’s Sonic 3D Blast does not receive the same love as its 2D predecessors.
Many of Sonic 3D Blast’s problems had nothing to do with the actual game. It was one of the last major Sega releases for the aging Genesis. Its successor, the struggling Saturn, was already out. Many fans were looking forward to Sonic Xtreme, which was going to be the publisher’s marquee game for the new system. But Sega cancelled the project, moving 3D Blast to the Saturn with some small graphical improvements.
Sonic 3D Blast’s isometric take on 3D gaming seemed quaint in 1996.
Super Mario 64, which came out a month earlier, looked leaps and bounds more impressive with its full 3D worlds and dynamic camera. But Sonic 3D Blast also has some problems of its own. The controls are slippery, and levels have players finding Flickies — colorful, little birds — in nonlinear scavenger hunts that can become frustrating.
Sonic 3D Blast is not a bad game (it also has one of the best soundtracks of any Genesis game ). But its problems and poor timing doomed it to mediocre reviews and a cool reception from fans. Now lead programmer Jon Burton is going back to the game he helped create 21 years ago to try to make it a more polished experience.
No Sega, no problems
Burton’s Sonic 3D Blast Director’s Cut is not an official Sega product. Instead, it is available as a patch that you can install into a ROM of the Sega Genesis version, which you can then play on any device that can run a Genesis emulator. It’s available in a beta form.
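Fan patches for Genesis ROMs are most often distributed in the IPS format; whether the Director's Cut uses IPS or another patch format isn't stated here, and the file names below are placeholders. As a rough sketch, applying an IPS patch in Python looks like this:

```python
# Minimal IPS patch applier (sketch). IPS files start with "PATCH", contain records of
# (3-byte offset, 2-byte size, data) or RLE records when size == 0, and end with "EOF".
def apply_ips(rom_path: str, patch_path: str, out_path: str) -> None:
    rom = bytearray(open(rom_path, "rb").read())
    patch = open(patch_path, "rb").read()
    assert patch[:5] == b"PATCH", "not an IPS file"

    i = 5
    while patch[i:i + 3] != b"EOF":
        offset = int.from_bytes(patch[i:i + 3], "big")
        size = int.from_bytes(patch[i + 3:i + 5], "big")
        i += 5
        if size:                                   # plain record: copy bytes
            data = patch[i:i + size]
            i += size
        else:                                      # RLE record: repeat one byte
            rle_size = int.from_bytes(patch[i:i + 2], "big")
            data = patch[i + 2:i + 3] * rle_size
            i += 3
        if offset + len(data) > len(rom):          # some patches grow the ROM
            rom.extend(b"\x00" * (offset + len(data) - len(rom)))
        rom[offset:offset + len(data)] = data

    open(out_path, "wb").write(rom)

# File names are illustrative; use your own legally dumped ROM and the patch file.
apply_ips("sonic3dblast.bin", "directors_cut.ips", "sonic3dblast_dc.bin")
```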
Normally, you’d expect that a major gaming company wouldn’t be happy about someone, even if they worked on the original game, making a free, unofficial update to one of their products.
But Sega has a long history of being lenient and even supportive of the Sonic fan game community. This year’s excellent Sonic Mania has its roots in the work of former Sonic modders and fan game creators. So far, Sega has not told Burton he can’t make his Director’s Cut.
“I enjoyed working with Sega,” Burton told GamesBeat. “They really knew their stuff. They’re a great bunch of people. I really enjoyed going over to Japan and meeting with them there. They treated me extremely well the whole time with the project. But on this, this is just me in my spare time thinking, well, I’d like to fix some of this stuff. If I can patch that in such a way that everybody’s happy — if you own the game and you can patch this over the top, great. If Sega steps in and says hey, you can’t do this, I won’t do this. Either way I’m enjoying the process.” While most Sonic games come from Sonic Team, Sega enlisted Traveler’s Tales to develop 3D Blast. Burton founded Traveler’s Tales in 1989, and he still works there today. You probably know the studio for all of its Lego games. Before 3D Blast, Traveler’s Tales had created popular Disney games for the 16-bit era: Mickey Mania and Toy Story. Then Sega got in touch about Sonic.
“We’d just finished Toy Story, we were keen to get on with the new consoles, the Saturn and the PlayStation,” Burton told GamesBeat. “Sega came to us and wanted a meeting. Well, of course, we’ll take a meeting with Sega. They said, we want you to make a Genesis game. We really wanted to do the next-gen stuff. But then they said, it’s Sonic the Hedgehog. Oh, that 16-bit game? Yeah, we can do that 16-bit game. They came with it kind of fully formed. They wanted to do this isometric 3D. They’d seen what we’d done on Mickey Mania and Toy Story and wanted to see if we could pull off this isometric view.
“I guess at the time they were maybe struggling with Sonic Xtreme. Maybe it was in very early development. There wasn’t a Sonic game around and they obviously wanted to embrace 3D and the next generation. Mario was about to come out or had just come out. They wanted to be in the world of 3D. They didn’t just want another 2D side-scroller. But they also wanted to support the Genesis, which couldn’t do true 3D. I guess this was the compromise they came up with. But they were clear on it being isometric 3D. I had some pretty clear ideas on how we’d be able to do that.” The original project took eight months to complete.
Above: 3D Blast was a unique Sonic game with its isometric gameplay.
Exposing the programming
Sonic 3D Blast does a lot of impressive things considering it was first made for the Genesis. It has a computer-generated movie for an intro. And although it depended on a fixed isometric view, it did translate the feel of a Sonic game — including running, jumping, and spin-dashing — into 3D. But many were hard on the game.
Burton would see complaints — about the controls or the annoyance of tracking down Flickies — in modern reviews from gamers on YouTube. He found that he still had the original code for the game.
“I could put some Game Genie codes out with a few little tweaks to rebalance it,” Burton told GamesBeat. “Then I looked and saw that you could patch things in. I thought, well, then I can change more than just a few numbers. That got me quite interested. I put up a video to see if people were interested in this, and they had a whole bunch of suggestions about what they wanted to change.” Burton has a whole channel, GameHut , where he goes into detail about how he and his team were able to program many of his games, including Mickey Mania, Toy Story, and Sonic 3D Blast. Some of the videos for Sonic 3D Blast would show content cut from the game, like a discarded crab enemy.
These videos became popular. One, which explains why punching the Genesis cartridge opens up a Level Select Mode, has more than 350,000 views.
“People were just fascinated by how these games were made back then,” Burton told GamesBeat. “It was a different time. I think the art of programming and optimization kind of died as graphics cards and processors became more and more powerful. You could achieve the same things by using slightly lower-res textures or slightly fewer polygons rather than really pushing the most optimized code.”
A new day for Sonic 3D Blast
The interest in his videos and seeing modern critiques encouraged him to make his Director’s Cut. He would add in cut content like the crab enemy. He would also adjust the controls to make it easier to control Sonic. The Director’s Cut also includes surprises like a Level Editor and Super Sonic.
The HUD is improved and shows more details. You only lose one Flicky at a time when you’re hit instead of all of them, so you’re less likely to waste time hunting down a bird you already collected but lost. The Director’s Cut even adds a new menu for selecting stages, Time Challenges that task you with beating levels quickly, and a new password save system.
It’s a lot of changes and improvements. But for Burton, improving 3D Blast isn’t just about his legacy. It’s not even just about the one game.
“Because Sonic 3D is part of the Sonic legacy, and Sonic has had a mixed legacy, I’d like to address, where I can, my part in that,” Burton said. “If I can make Sonic 3D a bit more acceptable to the Sonic fanbase with a few days of my time, then great, why not try it?” Sonic 3D Blast was never a bad game. But with this Director’s Cut, it could go beyond redemption.
The RetroBeat is a weekly column that looks at gaming’s past, diving into classics, new retro titles, or looking at how old favorites — and their design techniques — inspire today’s market and experiences. If you have any retro-themed projects or scoops you’d like to send my way, please contact me.
"
|
15,655 | 2,020 |
"Facebook rolls out automatic captions for Instagram TV | VentureBeat"
|
"https://venturebeat.com/2020/09/15/facebook-rolls-out-automatic-captions-for-instagram-tv"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook rolls out automatic captions for Instagram TV Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Facebook today announced the availability of automatic captions for Instagram TV (IGTV), beginning with captions for on-demand videos in 16 languages globally. The rollout follows the launch of automatic captioning for Facebook Live and Workplace Live, which arrived in March for six languages (English, Spanish, Portuguese, Italian, German, and French).
Facebook says expanded captioning builds upon the alternative text updates it made a few years ago to support people with limited vision. “As more people use the captions, the AI will learn and we expect the quality to continue to improve. This is a small step, and we’ll look to expand to more surfaces, languages, and countries moving forward,” a spokesperson told VentureBeat via email.
In a blog post, Facebook explains it leveraged a technique to train machine learning models powering automatic speech recognition to directly predict the graphemes (or characters) of words, simplifying the model training and deployment process. Using public Facebook posts to prime the system, engineers trained models to adapt to new words like “COVID” and predict where they’ll occur in videos.
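For readers curious what "directly predicting graphemes" looks like in practice, here is a minimal, generic sketch of a character-level acoustic model trained with CTC loss in PyTorch. It is not Facebook's model; the vocabulary, feature sizes, and dummy inputs are purely illustrative.

```python
import torch
import torch.nn as nn

# Hypothetical grapheme vocabulary: a CTC blank plus lowercase letters, space, apostrophe
VOCAB = ["<blank>"] + list("abcdefghijklmnopqrstuvwxyz '")
char_to_id = {c: i for i, c in enumerate(VOCAB)}

class GraphemeASR(nn.Module):
    """Toy acoustic model mapping audio features to per-frame grapheme scores."""
    def __init__(self, n_feats=80, hidden=256, n_chars=len(VOCAB)):
        super().__init__()
        self.rnn = nn.LSTM(n_feats, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_chars)

    def forward(self, feats):                      # feats: (batch, time, n_feats)
        out, _ = self.rnn(feats)
        return self.proj(out).log_softmax(dim=-1)  # (batch, time, n_chars)

model = GraphemeASR()
ctc = nn.CTCLoss(blank=0)                          # emits characters directly, no lexicon

feats = torch.randn(2, 100, 80)                    # fake log-mel features for 2 clips
targets = torch.tensor([char_to_id[c] for c in "covid" + "hello"])   # concatenated labels
target_lens = torch.tensor([5, 5])
input_lens = torch.full((2,), 100, dtype=torch.long)

log_probs = model(feats).transpose(0, 1)           # CTCLoss expects (time, batch, chars)
loss = ctc(log_probs, targets, input_lens, target_lens)
loss.backward()
```

Because the output units are characters rather than dictionary words, a model like this can spell out new terms such as "covid" without retraining a pronunciation lexicon.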
Above: Facebook’s automatic captioning in Spanish.
Facebook also says it was able to deploy these models with a number of infrastructure optimizations, enabling it to serve additional video traffic resulting from pandemic-related loads. According to Facebook, the number of Facebook Live broadcasts from Pages doubled in June 2020 compared to the same time last year.
Facebook launched its first automatic captioning product in February 2016, for video ads. In October of that same year, the social network rolled out a free video captioning tool for all U.S. English Facebook Pages. While the tools have no doubt improved over the years, anecdotal evidence suggests they have a long way to go, with videos like last year’s Antares rocket launch showing nonsense words with auto-captioning enabled. As Forbes noted in a recent piece, captioning errors disproportionately affect the video-watching experience of those with hearing impairments.
Evidently cognizant of its systems’ shortcomings, Facebook says it is investigating ways to improve captioning going forward. In a technical paper published last month, data scientists at the company described wav2vec 2.0 , a speech recognition framework they claim attained state-of-the-art results using just 10 minutes of labeled data. In July, Facebook researchers detailed a model that learned to understand words in 51 languages after training on over 16,000 hours of voice recordings. And in a study last month, Facebook managed to reduce word error rate — a common speech recognition performance metric — by over 20% using a novel method.
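As a sense of how accessible this line of research has become, the sketch below transcribes a short clip with a publicly released wav2vec 2.0 checkpoint via the Hugging Face transformers library. The checkpoint name and audio file are placeholders for whatever model and clip you use, and this is not Facebook's production captioning pipeline.

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech, sample_rate = sf.read("clip.wav")          # 16 kHz mono audio assumed
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits     # (batch, time, characters)

pred_ids = torch.argmax(logits, dim=-1)            # greedy CTC decoding
print(processor.batch_decode(pred_ids)[0])         # e.g. "HELLO WORLD"
```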
"
|
15,656 | 2,020 |
"Stuck at home? 5 tips for creating a productive work-from-home policy | VentureBeat"
|
"https://venturebeat.com/2020/03/10/stuck-at-home-5-tips-for-creating-a-productive-work-from-home-policy"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Stuck at home? 5 tips for creating a productive work-from-home policy Share on Facebook Share on X Share on LinkedIn Presented by WorkRamp As governments and health organizations mobilize to curb the spread of coronavirus (COVID-19), global companies are also playing a critical role in keeping their organizations safe with a simple policy: work-from-home.
No question, work-from-home policies have existed since long before the outbreak. But this is the first time companies are seeing what happens when a distributed work force is stress-tested by hundreds of thousands of employees at massive organizations like Twitter, Facebook, and Google.
Whether your company is embracing a work-from-home policy for the first time or readying an entire organization to go remote, this can be a tricky period to navigate.
Here are 5 tips for creating a productive work-from-home policy:
1. Use tools like Zoom to not only get work done, but also to feel connected
Of course, productivity tools like Slack and Zoom have already seen a significant increase in usage, as also evidenced by an uptick in stock prices (ZM +85% in 2020 and +45% in February). It reinforces the fact that these remote-friendly software tools are a critical part of the remote worker’s day-to-day, especially when it comes to getting work done and feeling less isolated.
The real challenge around remote work is finding ways for employees to feel connected to each other and the company when they are physically apart. The most successful companies incorporate face-time into their remote culture, whether it’s taking their 1:1’s over Zoom (rather than just a faceless phone call) or hosting virtual company-wide meetings.
Twitter’s first virtual global all-hands was hosted on Google Teams and Slack, a move that enabled a more creative and stronger two-way dialog between the leadership team and employees.
2. Create a new training mindset just like Peloton did
To enable an entire company to function remotely, there needs to be a widespread cultural transformation and change in perception around the way we do work — and that goes beyond just buying new tools. Just as Peloton was able to prove that changing your mindset around exercise was more than simply purchasing expensive, at-home equipment, companies are having to demonstrate that changing your mindset around remote work is more than purchasing the right conferencing software.
It must be done by outlining clear expectations around individual productivity and team communication, building trust within distributed teams, and maintaining a positive culture that continues to celebrate wins remotely while connecting globally.
At WorkRamp, we’re seeing this approach to change management take off in the industry and championed by innovative customers like TripActions. With over 400 agents globally, the TripActions Support Organization focuses not only on ensuring a high caliber of resources across all locations, but also on having the flexibility to turn the same training into an in-person, remote, or blended learning experience.
In light of the coronavirus pandemic, organizational agility is key in building out training resources that are helping agents navigate through a tsunami of support tickets on trip cancellations.
“TripActions is committed to providing an amazing employee learning experience online and offline,” says Matt Cruz, Director of Learning & Development at TripActions. “We are always revisiting and strategizing ways to deliver these experiences so that we stay nimble through any organizational changes.”
3. Extend your perks to the home
Make the transition from in-office to home as seamless as possible by extending the same in-office benefits for newly remote employees. A prominent Y Combinator company extends office perks to the home by allowing employees to expense one meal a day and providing an equipment stipend to outfit their remote workspace.
Policy shifts like these allow companies to still embrace company values and help employees remain connected to the company, while minimizing disruptions during abrupt transitions. This also shows that remote work is not a downgrade from being in-office, but another version of what work should look like. It’s easy to play these off as ‘extra perks’ — but it’s really about setting up employees for success, no matter where they work.
4. Give your commute time back to your customers
Although working from home presents its own challenges, the fact that employees will save time by skipping the long office commute also presents new opportunities for organizations. Especially for customer-centric companies, this allows teams to dedicate that extra time to finding ways to improve the customer journey.
Every team should feel empowered to go out of their way to delight customers during a turbulent period — whether it’s finding time to develop a more robust pipeline during prime morning business hours or having more hands on deck to cut the support ticket SLA in half. As a result of a remote workforce, we’re seeing innovative organizations like Quantum Metric give their time back to their customers in the form of building new training resources on the new Quantum University.
5. Stay informed, not inactive
Experts have recently shared that “anxiety moves faster than a virus.” So while global organizations aim to keep their teams productive and engaged while staying safe, it’s important to remember that companies have a huge role to play in keeping their workforces positive and informed.
Companies like Coinbase have gone out of their way to create planning and response guides and policies that demonstrate that they care about their employees’ well-being. At the same time, company leaders are making sure to strike an important tone of support and opportunity. This is a chance for companies to demonstrate how to be nimble, value what is important, and keep moving forward.
An opportunity to become stronger and more resilient
Our hope is that companies come out of this pandemic even stronger and more resilient. Just as the financial crisis of 2007-2008 forced our banking infrastructure to improve its internal controls and balance sheets, we are extremely optimistic that companies will take this opportunity to learn how to enable global workforces, regardless of location or work environment. This opportunity will be a new direction for many, but one that will be used for years to come.
Ted Blosser is the CEO at WorkRamp, the leading end-to-end training platform for educating employees and customers at scale. Empower your users with an engaging learning platform that helps you execute better across your entire business. See why companies like Zoom, Square, and Slack trust WorkRamp to train their teams and customers by visiting https://www.workramp.com.
"
|
15,657 | 2,020 |
"Templafy raises $25 million to help workers create company-compliant documents | VentureBeat"
|
"https://venturebeat.com/2020/04/27/templafy-raises-25-million-to-help-workers-create-company-compliant-documents"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Templafy raises $25 million to help workers create company-compliant documents Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Templafy , a Danish startup that helps anyone in a company create new documents while adhering to branding and legal guidelines, has raised $25 million in a series C round of funding led by Insight Partners.
The raise comes as companies around the world have been forced to swiftly adapt to remote working due to the COVID-19 crisis, with cloud-based technologies serving to bridge the gap. Templafy’s technology is designed to enable distributed workforces to create new company-branded documents that comply with all internal policies. It serves as a centralized platform through which content and up-to-date assets, such as presentations, logos, and legal disclaimers, are accessed.
It isn’t always easy for workers to access the right documents at the right time, and some may even be using an out-of-date template from their desktop — variables that increase the risk of non-compliance.
Automation
Through the main Templafy Admin hub, managers can control templates, documents, fonts, and email signatures and ensure these are integrated directly into the company’s content management and IT systems.
Above: Templafy Admin hub
Templafy’s platform also includes various automation and AI elements.
Templafy Dynamics, for example, enables companies to create templates that automatically personalize for each user — keeping the document on-brand and legally compliant while acknowledging that even similar documents aren’t always identical.
For example, a letterhead uploaded to Templafy may contain a basic design and layout that can be used by anyone, but dynamic placeholders can be configured to include data that is unique to each user, including their location, language, and department, as well as distinct legal disclaimers that may vary depending on the user’s role. This saves companies from having to create multiple templates for people in different departments and locations around the world.
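As a rough illustration of the placeholder idea (not Templafy's actual API), a single template definition can be rendered with per-user values pulled from a profile or directory service; the field names and data below are made up:

```python
from string import Template

# One letterhead definition with dynamic placeholders for per-user fields
LETTERHEAD = Template(
    "$company | $office_address\n"
    "Prepared by $name, $department\n\n"
    "$body\n\n"
    "$legal_disclaimer"
)

def render_letterhead(user: dict, body: str) -> str:
    # In practice the per-user values would come from a directory or HR system
    return LETTERHEAD.substitute(
        company="Acme Corp",
        office_address=user["office_address"],
        name=user["name"],
        department=user["department"],
        body=body,
        legal_disclaimer=user["legal_disclaimer"],
    )

print(render_letterhead(
    {"name": "A. Jensen", "department": "Legal", "office_address": "Copenhagen, DK",
     "legal_disclaimer": "Confidential. For the intended recipient only."},
    "Please find the attached agreement for review.",
))
```

One template, many compliant variants: the layout stays fixed while the user-specific fields and disclaimers change.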
Templafy also uses Microsoft’s cloud-based computer vision smarts, which automatically tag uploaded images to make it easier to search for relevant imagery by keywords.
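The article doesn't detail which Microsoft service or API version Templafy calls, but Azure's Computer Vision "tag" endpoint is one way to generate such keywords; the endpoint, key, and image URL in this sketch are placeholders.

```python
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<subscription-key>"                                         # placeholder

def tag_image(image_url: str) -> list[str]:
    # Azure Computer Vision v3.2 "tag" operation returns descriptive keywords
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/tag",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
        timeout=10,
    )
    resp.raise_for_status()
    return [t["name"] for t in resp.json().get("tags", [])]

# Tags such as ["outdoor", "mountain", "logo"] can then be indexed for keyword search
print(tag_image("https://example.com/brand-asset.png"))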
Above: Templafy & computer vision Templafy also integrates with most of the common office software tools and platforms, including Office 365, SharePoint, and Google Drive, meaning the Templafy library can appear directly inside all the apps people typically use to create new documents.
Above: Templafy library
Founded out of Copenhagen in 2014, Templafy had raised around $37 million prior to now, and with another $25 million in the bank the company will seek to accelerate its international growth and pursue potential acquisitions. This fresh cash injection comes after Templafy said it more than doubled its revenue over the past year. The company says it has now sold 2 million licenses to its platform around the world.
"
|
15,658 | 2,021 |
"Businesses to support remote workforce even after offices reopen | VentureBeat"
|
"https://venturebeat.com/2021/05/03/businesses-to-support-remote-workforce-even-after-offices-reopen"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Businesses to support remote workforce even after offices reopen Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
(Reuters) – U.S. businesses have been spending more on technology than on bricks and mortar for more than a decade now, but the trend has accelerated during the pandemic, one more sign that working from home is here to stay.
As spending on home-building has risen, spending on nonresidential construction has dropped, with that on commercial, manufacturing and office space slumping to under 15% of total construction outlays in March, Commerce Department data showed Monday.
Business spending on structures fell in the first quarter, data from the Bureau of Economic Analysis showed last week. It was the sixth straight quarterly decline, showcasing one of the few weak spots in the economy as it regains steam amid a receding pandemic.
Meanwhile, spending on technology rose, with investments in software and information processing equipment contributing more than 1 percentage point to the economy’s overall 6.4% annualized rise in economic output in the quarter, the BEA data showed. Technology spending has added to growth in all but two of the past 32 quarters, back to 2013. Spending on structures has pulled GDP downward in 14 of those quarters.
The implications of the shift are broad: the economy emerging from the depths of the pandemic will be more technology-driven and less reliant on in-person transactions, leaving jobs permanently changed and potentially fewer in number.
Accelerated by the pandemic, the divergence between the two types of business spending is here to stay, says Stanford economics professor Nicholas Bloom.
“This is the surge in (work-from-home) which is leading firms to spend heavily on connectivity,” Bloom said.
He and colleagues have been surveying 5,000 U.S. residents monthly, and found that from May to December about half of paid work hours were done from home.
Workers’ own spending to equip their home offices with computer connectivity, desks and other necessities comes to the equivalent of 0.7% of GDP, their surveys found, suggesting the business investment data likely underestimates what’s actually being spent on technology.
Those sunk costs are one reason that on average Americans will work one day a week from home even after the pandemic, up from about one day a month before, Bloom says.
American firms’ reliance on hybrid working should continue to lift business spending on technology for the foreseeable future, said ING chief international economist James Knightley.
Spending on office buildings particularly will likely remain weak at least until the end of the summer, he predicted, when the return of most kids to school should allow more parents to return to work.
Even then, he said, businesses will need to continue to spend more than ever on connectivity and computers to support the remote, or partially remote, workforce.
“I think there’s still a lot more to do there,” he said.
"
|
15,659 | 2,021 |
"Why IT needs to lead the next phase of data science | VentureBeat"
|
"https://venturebeat.com/2021/02/28/why-it-needs-to-lead-the-next-phase-of-data-science"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Why IT needs to lead the next phase of data science Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Most companies today have invested in data science to some degree. In the majority of cases, data science projects have tended to spring up team by team inside an organization, resulting in a disjointed approach that isn’t scalable or cost-efficient.
Think of how data science is typically introduced into a company today: Usually, a line-of-business organization that wants to make more data-driven decisions hires a data scientist to create models for its specific needs. Seeing that group’s performance improvement, another business unit decides to hire a data scientist to create its own R or Python applications. Rinse and repeat, until every functional entity within the corporation has its own siloed data scientist or data science team.
What’s more, it’s very likely that no two data scientists or teams are using the same tools. Right now, the vast majority of data science tools and packages are open source, downloadable from forums and websites. And because innovation in the data science space is moving at light speed, even a new version of the same package can cause a previously high-performing model to suddenly — and without warning — make bad predictions.
The result is a virtual “Wild West” of multiple, disconnected data science projects across the corporation into which the IT organization has no visibility.
To fix this problem, companies need to put IT in charge of creating scalable, reusable data science environments.
In the current reality, each individual data science team pulls the data they need or want from the company’s data warehouse and then replicates and manipulates it for their own purposes. To support their compute needs, they create their own “shadow” IT infrastructure that’s completely separate from the corporate IT organization. Unfortunately, these shadow IT environments place critical artifacts — including deployed models — in local environments, shared servers, or in the public cloud, which can expose your company to significant risks, including lost work when key employees leave and an inability to reproduce work to meet audit or compliance requirements.
Let’s move on from the data itself to the tools data scientists use to cleanse and manipulate data and create these powerful predictive models. Data scientists have a wide range of mostly open source tools from which to choose, and they tend to do so freely. Every data scientist or group has their favorite language, tool, and process, and each data science group creates different models. It might seem inconsequential, but this lack of standardization means there is no repeatable path to production. When a data science team engages with the IT department to put its model/s into production, the IT folks must reinvent the wheel every time.
The model I’ve just described is neither tenable nor sustainable. Most of all, it’s not scalable, something that will be of paramount importance over the next decade, when organizations will have hundreds of data scientists and thousands of models that are constantly learning and improving.
IT has the opportunity to assume an important leadership role in creating a data science function that can scale. By leading the charge to make data science a corporate function rather than a departmental skill, the CIO can tame the “Wild West” and provide strong governance, standards guidance, repeatable processes, and reproducibility — all things at which IT is experienced.
When IT leads the charge, data scientists gain the freedom to experiment with new tools or algorithms but in a fully governed way, so their work can be raised to the level required across the organization. A smart centralization approach based on Kubernetes, Docker, and modern microservices, for example, not only brings significant savings to IT but also opens the floodgates on the value the data science teams can bring to bear. The magic of containers allows data scientists to work with their favorite tools and experiment without fear of breaking shared systems. IT can provide data scientists the flexibility they need while standardizing a few golden containers for use across a wider audience. This golden set can include GPUs and other specialized configurations that today’s data science teams crave.
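One concrete, hypothetical flavor of the "golden container" idea is a small launcher that only starts vetted images from an internal registry, here using the Docker SDK for Python. The image names, mounts, and Jupyter command are assumptions for illustration, not a prescribed setup.

```python
import docker

# Only images blessed by IT can be launched; registry names are placeholders
GOLDEN_IMAGES = {
    "cpu": "registry.internal/ds-golden:py3.9-sklearn",
    "gpu": "registry.internal/ds-golden:py3.9-cuda11",
}

def launch_workspace(tier: str, project_dir: str):
    client = docker.from_env()
    image = GOLDEN_IMAGES[tier]                 # reject anything outside the golden set
    return client.containers.run(
        image,
        command="jupyter lab --ip=0.0.0.0 --no-browser",
        volumes={project_dir: {"bind": "/workspace", "mode": "rw"}},
        ports={"8888/tcp": None},               # let Docker pick a free host port
        detach=True,
    )

container = launch_workspace("cpu", "/srv/projects/churn-model")
print(container.short_id)
```

Data scientists still get a self-service workspace; IT just controls which environments are allowed to exist.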
A centrally managed, collaborative framework enables data scientists to work in a consistent, containerized manner so that models and their associated data can be tracked throughout their lifecycle, supporting compliance and audit requirements. Tracking data science assets, such as the underlying data, discussion threads, hardware tiers, software package versions, parameters, results, and the like helps reduce onboarding time for new data science team members. Tracking is also critical because, if or when a data scientist leaves the organization, the institutional knowledge often leaves with them. Bringing data science under the purview of IT provides the governance required to stave off this “brain drain” and make any model reproducible by anyone, at any time in the future.
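The article doesn't prescribe a tool, but open source experiment trackers such as MLflow show what this kind of lifecycle tracking looks like in code; the experiment name, model, and data here are illustrative.

```python
import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("churn-model")            # experiment name is illustrative
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=0).fit(X_tr, y_tr)

    mlflow.log_params(params)                                   # reproducible settings
    mlflow.log_metric("accuracy", accuracy_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")                    # versioned artifact
```

Every run records its parameters, metrics, and the serialized model, so the work survives even when the person who produced it moves on.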
What’s more, IT can actually help accelerate data science research by standing up systems that enable data scientists to self-serve their own needs. While data scientists get easy access to the data and compute power they need, IT retains control and is able to track usage and allocate resources to the teams and projects that need it most. It’s really a win-win.
But first CIOs must take action. Right now, the impact of our COVID-era economy is necessitating the creation of new models to confront quickly changing operating realities. So the time is right for IT to take the helm and bring some order to such a volatile environment.
Nick Elprin is CEO of Domino Data Lab.
"
|
15,660 | 2,021 |
"Verizon details cloud cybercrime roots in data breach report | VentureBeat"
|
"https://venturebeat.com/2021/05/22/verizon-details-cloud-cybercrime-roots-in-data-breach-report"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Verizon details cloud cybercrime roots in data breach report Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Upswings in ransomware and phishing, as well as cloud and web application attacks, mark the computing landscape today. Events like the Colonial Pipeline hack highlight the increased role threat actors play as they reinvent themselves to exploit newly found weaknesses.
Verizon’s Data Breach Investigations Report for 2021 finds the world’s threat actors have one thing in common. They all crave cold hard cash and are digitally transforming themselves fast to get it. Cloud apps, phishing, and ransomware are where the digital transformation begins.
Breaches today most often start with social engineering techniques designed to get buy-in from busy end users, the Verizon study found. That’s the first step in accessing privileged credentials, delivering ransomware, or finding other vulnerabilities across a network.
Threat actors know any breach strategy in the cloud depends on getting social engineering right.
Verizon found that 85% of the breaches involve a human element, which threat actors prefer by a 24% margin over breaches involving credentials.
Verizon also found a correlation between the increase in social engineering breaches and cloud-based email servers being compromised.
That is because, the study speculates, emails are being mined for privileged credentials and used for mass mailings of phishing attempts and ransomware delivery.
Above: Breaches begin with social engineering techniques. Threat actors access privileged credentials, deliver ransomware, or find other vulnerabilities across a network.
Into the data breach
These days, threat actors often combine technologies and techniques in their strategies to breach an organization.
That is according to the report, which is based on 79,635 incidents, of which 29,207 met Verizon’s quality standards and 5,258 were confirmed data breaches. Verizon sampled from 88 countries around the world for the study.
Threat actors tend to concentrate on the following strategies, according to Verizon: The cloud is the cornerstone of threat actors’ digital transformation strategies. Today, 39% of all breaches are in the cloud and web-based applications. Cloud app adoption rates are continuing to accelerate in 2021, following a rush to get as many employee- and customer-facing systems into the cloud as possible in 2020. That trend will gain momentum, as indicated by Gartner’s anticipation that worldwide cloud end user spending will grow 23.1% in 2021 to reach $332.3 billion, up from $270 billion in 2020. Consistent with the double-digit growth of public cloud services spending, Verizon said it was more common to find external cloud assets involved in incidents and breaches than on-premises assets.
Web application attacks are 80% of hacking-based breaches today. Bad actors favor web application attacks due to the relatively few steps needed to gain greater access to email and web application data. Verizon finds that web application breaches often lead to email and web application data being stolen and repurposed for malware distribution, as well as asset and application defacement. They are also being used as a springboard for future DDoS attacks. And 96% of email servers compromised are cloud-based, resulting in the compromise of personal, internal, or medical data, according to Verizon. Desktop sharing is growing as an attack vector, following cloud and web-based apps.
Ransomware is now the third leading cause of breaches, more than doubling in frequency from last year and appearing in 10% of all breaches. The recent Colonial Pipeline ransomware hack illustrates how threat actors used ransomware to extort a confirmed $4.4 million from the pipeline company after stealing over 100GB of data and threatening to release it publicly. Verizon’s analysis shows the Colonial Pipeline ransomware attack is consistent with patterns seen globally. Threat actors launch ransomware after gaining access and then extort millions of dollars or Bitcoin as payment in exchange for not releasing the data publicly. Ransomware itself is digitally transforming in 2021. Threat actors and ransomware groups develop infrastructure to securely host data dumps held hostage before sending red alert screens across organizations announcing the breach and demand for payment.
Phishing accounted for 36% of all breach actions in 2020, up from 25% in 2019. Bad actors relied heavily on phishing in 2020, often creating fraudulent emails offering COVID-19 related health care supplies, protective equipment, and fictitious treatments. Verizon found phishing grew as a misrepresentation strategy when the worldwide stay-at-home orders went into effect.
Social engineering breaking bad
Verizon’s research disclosed that public administration organizations led all industries in breaches last year. Threat actors rely primarily on social engineering to create credible-looking phishing emails to steal privileged access credentials. The entertainment industry experienced the greatest amount of overall activity, with 7,065 incidents and 109 breaches, followed by public administration, with 3,326 incidents and 885 breaches.
Threat actors targeted entertainment using social engineering to commit ticket fraud, intercept online payments, and combine phishing and ransomware to divert cash from companies in this industry.
Verizon’s work reveals that even as enterprises pursued new digital transformation amid a global pandemic, threat actors have discovered their own digital transformation strategies. Social engineering — getting people to trust an email or text message, even if it’s as simple as clicking on a link — is the pivot point bad actors’ digital transformation strategies rely on.
The Verizon study provides a sobering glimpse into how quickly cybercrime is changing to become more opportunistic, deceptive, and destructive to its victims.
"
|
15,661 | 2,021 |
"Google unveils cloud products to help analyze and organize data | VentureBeat"
|
"https://venturebeat.com/2021/05/26/google-unveils-cloud-products-to-help-analyze-and-organize-data"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google unveils cloud products to help analyze and organize data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
At Google’s inaugural Data Cloud Summit, the company announced three new solutions across its database and data analytics portfolio: Dataplex, Datastream, and Analytics Hub. Google says all three services, which are available in preview, are designed to help businesses break free from data silos to predict business outcomes and make informed decisions.
A recent Gartner survey found that organizations estimate the average cost of poor data quality at $12.8 million per year. With data spanning databases, data lakes, data warehouses, and even data marts — in multiple clouds and on-premises — enterprises are grappling with how to centrally manage and govern their apps. A Forrester survey found that between 60% and 73% of all data within corporations is never analyzed for insights or larger trends. The opportunity cost of this unused data is substantial, with a Veritas report pegging it at $3.3 trillion in 2020.
Datastream, Analytics Hub, and Dataplex
The first of Google’s new cloud products is Datastream, a serverless change data capture and replication service. Datastream enables enterprises to ingest data streams in real time, from Oracle and MySQL databases to Google Cloud services — including BigQuery, Cloud SQL for PostgreSQL, Google Cloud Storage, and Cloud Spanner. Google says that for early customers like Schnuck Markets, Datastream simplified their architecture and reduced lag for Oracle data replication to BigQuery and Cloud SQL.
Available in preview in Q3, Analytics Hub, a complementary product, exchanges data and analytics assets throughout organizations to address challenges in data reliability. Analytics Hub provides a way to access and share data at a lower cost, allowing data providers and organizations to control and monitor how their data is being used and create a curated library of internal and external assets.
As for Dataplex, it’s an intelligent data fabric that lets organizations manage, monitor, and govern their data across data lakes, data warehouses, and databases. Automated data quality allows data scientists to address data consistency using AI and machine learning capabilities from Google or a third party and a pay-as-you-go model. Early user Equifax is working with Google to incorporate Dataplex into its core analytics platform.
Tom Galizia, global chief commercial officer at Deloitte, says Deloitte will work with Google to deploy Dataplex, Datastream, and Analytics Hub with enterprise customers and institutions. “What is truly powerful here is that Google Cloud solves for disparate and bespoke systems housing hard-to-access siloed data with enhanced data experiences. They’ve also simplified implementation and management for better decision-making. We are truly excited to realize the market potential with Google Cloud’s innovations for building data clouds,” he said in a statement provided to VentureBeat.
New services in preview and GA
During the Data Cloud Summit, Google detailed additional updates pertaining to its cloud database and analytics suite.
BigQuery Omni for Microsoft Azure is available in preview, and Looker for Microsoft Azure is now generally available. Both can help deliver insights from Azure cloud environments. In related news, BigQuery ML Anomaly Detection is also generally available, allowing customers to detect normal versus problematic data patterns across their organization.
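As a rough illustration of how BigQuery ML Anomaly Detection is typically invoked, the sketch below trains a time-series model and then queries ML.DETECT_ANOMALIES through the google-cloud-bigquery Python client. The dataset, table, and column names (metrics.daily_events, ts, value) and the 0.95 probability threshold are illustrative assumptions, not details from Google's announcement.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # uses application-default credentials and project

# Train a time-series model over an assumed metrics table (all names are illustrative).
client.query("""
    CREATE OR REPLACE MODEL metrics.anomaly_model
    OPTIONS(model_type = 'ARIMA_PLUS',
            time_series_timestamp_col = 'ts',
            time_series_data_col = 'value') AS
    SELECT ts, value FROM metrics.daily_events
""").result()

# Surface points the model considers anomalous with greater than 95% probability.
rows = client.query("""
    SELECT *
    FROM ML.DETECT_ANOMALIES(MODEL metrics.anomaly_model,
                             STRUCT(0.95 AS anomaly_prob_threshold))
    WHERE is_anomaly
""").result()

for row in rows:
    print(row.ts, row.value, row.anomaly_probability)
```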
In Q3, Google plans to launch Dataflow Prime, an expansion of its Dataflow service that provides a solution for streaming data analytics. Dataflow Prime will embed AI and machine learning capabilities to offer streaming predictions such as time series analysis, as well as smart diagnostics that proactively identify bottlenecks and auto-tuning for increased utilization.
Google also announced that it will soon lower the entry price for Cloud Spanner, its fully managed relational database, by 90% by offering customers granular instance sizing. Beyond this, the company previewed BigQuery federation to Spanner, which will let users query transactional data residing in Spanner from BigQuery for real-time insights. Lastly, Google launched Key Visualizer in preview to provide interactive monitoring that lets developers identify trends and usage patterns in Spanner.
“Data must be thought of as an ability that integrates all aspects of working with it. Every industry is accelerating their shift [to] being digital-first as they recognize data is the essential ingredient for value creation and the key to advancing their digital transformation,” Google Cloud VP and GM Gerrit Kazmaier said in a blog post. “At Google Cloud, we’re committed to helping our customers build the most powerful data cloud solution to unlock value and actionable real-time insights needed to future-proof their business.”
"
|
15,662 | 2,017 |
"Microsoft introduces Azure Cosmos DB, a globally distributed database with 5 consistency choices | VentureBeat"
|
"https://venturebeat.com/2017/05/10/microsoft-introduces-azure-cosmos-db-a-globally-distributed-database-with-5-consistency-choices"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft introduces Azure Cosmos DB, a globally distributed database with 5 consistency choices Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Arguably the biggest Azure news to come out of Microsoft’s Build 2017 developer conference is the debut of Azure Cosmos DB. The schema-free database service offers developers flexibility with five consistency choices, instead of forcing them to choose between strong and eventual consistency. The five models are as follows: Strong, Bounded Staleness, Session, Consistent Prefix, and Eventual.
Cosmos DB is a superset of DocumentDB, the company’s cloud-based NoSQL database for storing and serving up information for applications in JSON.
Scott Guthrie, executive vice president of the Microsoft Cloud and Enterprise group, described it as “the first globally distributed, multi-model database service delivering turnkey global horizontal scale out with guaranteed uptime and millisecond latency at the 99th percentile.” He promised service-level agreements across four dimensions: high availability, performance latency, performance throughput, and data consistency. Other highlights include high performance, fault tolerance, being able to elastically scale across any number of geographical regions, and automatically indexing data.
The Azure announcements didn’t stop there.
Guthrie said onstage that 90 percent of Fortune 500 companies are using the Microsoft Cloud. That number is up from 85 percent at Build 2016.
Azure IoT Edge arrived in preview. The technology, which supports both Windows and Linux, extends cloud computing to IoT devices.
The new Azure Cloud Shell, included inside the Azure portal, provides an authenticated, browser-based shell experience accessible from anywhere. Azure manages and updates Cloud Shell with commonly used command line tools and support for multiple popular programming languages, and each session is synced to a $Home directory.
Azure SQL Database is getting general availability of Threat Detection, a preview of Microsoft Graph support, and a new Managed Instance private preview. The last one offers SQL Server instance-level compatibility and helps organizations migrate existing SQL Server apps to Azure SQL Database.
Joining Azure SQL Database, Microsoft announced, will be Azure Database for MySQL and Azure Database for PostgreSQL options in Azure. Microsoft promises they are 100 percent compatible with all existing drivers and tools.
Microsoft’s new database migration services (in preview) move Oracle and SQL Server databases into Azure SQL Database with near-zero application downtime, and at no extra cost or configuration. In short, Microsoft wants Azure to let developers use any database and use it as a service.
Windows Server Containers support in Azure Service Fabric is now generally available (you’ll need to get the 5.6 runtime and 2.6 SDK release), helping developers containerize existing .NET apps and deploy them to Azure. Service Fabric support for Docker Compose for deploying containerized apps is now in preview. And Visual Studio Team Services integration allows for continuous integration and deployment of these containerized applications.
Azure Batch AI Training debuted in private preview today. The new offering will allow developers and data scientists to run their models against multiple CPUs, multiple GPUs, and, eventually, field-programmable gate arrays. They can choose any framework, including Microsoft Cognitive Toolkit, TensorFlow, and Caffe.
Azure Functions Visual Studio tooling preview, available as a Visual Studio 2017 extension, allows developers to integrate Azure Functions development by leveraging third-party extensions, testing frameworks, and continuous integration systems. Azure Application Insights support means teams can measure performance, detect issues, and diagnose the source of the problem with serverless apps. Azure Functions Runtime preview extends all this to on-premises or anywhere outside of the Azure cloud.
Last but not least, Microsoft is offering Storage Service Encryption for Azure Files on all available redundancy types at no additional cost. All data being stored in Azure Files is thus now encrypted using AES-256.
"
|
15,663 | 2,020 |
"5G laps 4G milestones, sets stage for massive enterprise data growth | VentureBeat"
|
"https://venturebeat.com/2020/12/14/5g-laps-4g-milestones-sets-stage-for-massive-enterprise-data-growth"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 5G laps 4G milestones, sets stage for massive enterprise data growth Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Although the global pandemic could have slowed it down, 5G cellular technology isn’t just continuing its steady march towards ubiquity — it’s actually being adopted at a much faster pace than prior-generation 4G LTE, according to industry trade organization 5G Americas.
As of December 2020, there are now 143 commercial 5G networks across the globe, 303 commercially available 5G devices, and 229 million total 5G subscriptions, with 225 million in the last year alone. That means 5G adoption is proceeding at a pace four times faster than 4G, 5G Americas notes, making it the “fastest growing mobile technology in history.” The rapid pace of 5G adoption is significant for technical decision makers because the wireless technology is setting the stage for a massive increase in the quantity of data that enterprises will process during this decade. 5G networks promise to improve data speeds by a factor of 10 to 100 times compared with 4G, enabling devices to share both larger chunks and more continuous streams of data, while driving enterprises to adopt edge servers for low latency data processing and serving. Significant recent upticks in global adoption suggest that the time for enterprises to embrace 5G is now, rather than later.
Beyond consumer applications in smartphones, which include everything from commercial and location data to live video and augmented reality feeds, 5G networks will be the conduits for wireless industrial automation, connected autonomous vehicles, and smart cities.
American, Asian, European, and Middle Eastern leaders have consequently pushed for rapid 5G adoption in an effort to speed the digitization of their societies, as well as supporting local development of AI-assisted products and services.
As contrasted with last year, 5G advances aren’t just limited to several key countries anymore — adoption is proceeding across the globe. In the last quarter alone, 29 new 5G networks went live, and the total number of networks is expected to grow to 180 worldwide by the end of 2020. Similarly, 5G Americas suggests that there will be 236 million 5G subscriptions by December 31, a number of subscribers that took 4G LTE four years to achieve. But the growth isn’t necessarily happening evenly across the map.
Only 20 of today’s 5G networks are located in North America, Latin America, and the Caribbean, numbers that are still dwarfed by 4G LTE, which is currently used in 145 networks within those regions alone. North America posted 3.4 million 5G subscriptions, up 47% for the last calendar quarter, which notably ended before the release of popular, 5G-compatible iPhone 12 models.
But 5G Americas says that the latest standard is “just beginning” to pop up in Latin America and the Caribbean, with fewer than 5,000 total subscriptions across that region — well below prior forecasts. On a positive note, Brazil has increased 5G service availability in urban areas, and 5G spectrum is being allocated for expansions in Chile, the Dominican Republic, and Peru.
5G expansion has continued at a blistering pace in Asia, where South Korea and China have led the region in speeds and breadth of deployment, while some countries in the Middle East have posted impressively fast download rates in smaller geographic areas. European capitals have continued to advance their 5G deployments, delayed somewhat by the need to limit or tear out equipment provided by Chinese network vendor Huawei, which was branded an international security risk by the United States — a claim Huawei has repeatedly denied.
"
|
15,664 | 2,021 |
"Imperva launches Sonar for unified enterprise security analytics | VentureBeat"
|
"https://venturebeat.com/2021/02/23/imperva-launches-sonar-for-unified-enterprise-security-analytics"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Imperva launches Sonar for unified enterprise security analytics Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Cybersecurity cloud company Imperva today launched its Sonar platform to help enterprises manage attacks across applications, data, and the edge by automating their workflows and accelerating incident responses. Imperva Sonar uses ML to surface key risk areas and offers single-action resolution capabilities to streamline enterprise IT team efforts.
According to materials Imperva provided, the company’s internal research lab found that data leakage attacks — incidents involving data erroneously being transferred from an enterprise’s internal network to an external network — jumped 93% over the course of 2020. Imperva Sonar looks to fill gaps in the data lifecycle, or how sensitive data is accessed, by providing visibility into IT environments, whose multi-cloud application environments and alternative API ecosystems have grown increasingly diverse — and complex.
In an interview with VentureBeat, Imperva product marketing VP Matt Hathaway explained that the goal is not just to reduce the number of security providers an enterprise uses, but also to streamline the number of consoles and sources of truth. He said companies may see traffic across some user endpoints by looking at patterns and analytics across very different use cases and getting rid of a lot of point products that don’t have context. “The lateral movement brings them to databases, and so piecing all of that together is a real challenge … we add context so that they can investigate and detect,” he added.
Imperva Sonar spans three security vectors: the edge, data, and applications. The company’s focus for the edge vector is twofold. First, the platform uses load-balancing and cache management to make websites run and access information more quickly. Second, it supports distributed denial-of-service (DDoS) and domain name system (DNS) protection on its content delivery network.
The applications vector aims to deconstruct advanced attacks with a unified web application and API protection (WAAP) solution that combines a firewall, runtime protection, bot protection, client-side protection, and API security. The runtime protection, for example, analyzes microservices to identify which parts of an application are high risk and potentially connected to the outside world. According to Hathaway, Imperva has been able to use these tools to protect the newer, cloud-distributed application types that have arisen in the last five years.
The data vector is centered around classifying and protecting critical data, with security across the database and cloud, along with providing data risk analytics. “At the data side, it’s very much about activity because the number of accesses to your databases that are more and more distributed in a very hybrid way on-premise in the cloud, multi-cloud, having one central place to really get that pattern recognition is key,” Hathaway said.
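The article does not describe Imperva's detection logic, but the kind of centralized pattern recognition Hathaway describes can be sketched generically: aggregate per-account access counts across databases and flag accounts whose activity deviates sharply from their own baseline. The account names, counts, and z-score threshold below are illustrative assumptions rather than anything Imperva has published.

```python
from statistics import mean, pstdev

# Hypothetical daily access counts per account, aggregated across every monitored database.
access_history = {
    "svc-reporting": [120, 118, 130, 125, 122, 119, 640],  # sudden spike on the last day
    "app-backend":   [300, 310, 295, 305, 298, 302, 299],
}

Z_THRESHOLD = 3.0  # assumed cutoff for flagging unusual activity

for account, counts in access_history.items():
    baseline, today = counts[:-1], counts[-1]
    mu = mean(baseline)
    sigma = pstdev(baseline) or 1.0  # avoid division by zero for flat baselines
    z = (today - mu) / sigma
    if z > Z_THRESHOLD:
        print(f"ALERT {account}: {today} accesses today vs. baseline ~{mu:.0f} (z={z:.1f})")
```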
Imperva is focusing on applying analytics and hosting a central security space across all of an enterprise’s databases and environments. It’s also incorporating Snowflake, other more advanced data stores, and NoSQL, along with semi-structured databases.
“The focus is no longer the traditional on-prem structured databases,” Hathaway said. “That’s where we also play a great deal of automation and response to be able to take action for an activity.” Imperva also built its own proprietary data lake to structure and live-audit data.
Imperva has already opened the Sonar platform’s beta version to select enterprise users. This launch corresponds with rising enterprise security threats , including the recent cyberespionage attack that springboarded off SolarWinds to target federal government networks.
Hathaway said that while SolarWinds was highly sophisticated, its supply chain attack structure wasn’t new, suggesting that some websites often have 20 or 30 JavaScript instances they don’t write that work as commodity malware to take credit card information, for example.
Hathaway said automated attacks have been the biggest rising attack vector, complicated by the presence of more sophisticated bots. “We’ve already seen a trend up since last year, when I think in some analyses we had 85% of all traffic [attributed to] bots again.” He said not all are malicious, but “having a good approach and a good way of analyzing that is huge.”
"
|
15,665 | 2,021 |
"Password breach service Have I Been Pwned goes open source | VentureBeat"
|
"https://venturebeat.com/2021/05/28/password-breach-service-have-i-been-pwned-goes-open-source"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Password breach service Have I Been Pwned goes open source Share on Facebook Share on X Share on LinkedIn 1Password breach report powered by Have I Been Pwned Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Password breach database Have I Been Pwned (HIBP) has now made its entire codebase open source, as creator Troy Hunt promised back in August.
HIBP is also gaining access to a fresh and continuous cache of breached passwords via the FBI, which has offered to funnel exploited passwords it encounters in its digital crime-fighting efforts directly into the HIBP engine.
HIBP was first launched in 2013 by Hunt, a renowned security expert, and serves as an easy way for anyone to discover whether credentials for their online accounts have emerged in an online data dump. The service now receives some 1 billion requests a month, and numerous third parties leverage the data inside their own apps and websites, including Mozilla’s Firefox browser and 1Password, which last year launched a new data breach report service for its enterprise clients based on HIBP data.
Above: Have I Been Pwned is now open source
People problem
The problem HIBP has been working to solve over the past eight years is one that impacts everyone from online shoppers to multinational corporations.
Poor password hygiene is a major driver of security breaches, with 81% of all breaches reportedly caused by compromised passwords. Last year, password management platform Dashlane actually launched a new tool that gives businesses data on the health of their employees’ passwords.
All manner of initiatives have emerged to replace passwords with alternative security mechanisms, such as biometric authentication and two-step verification. But passwords still rule the roost, which is why the HIBP database has proved such a utility for millions of people.
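One reason the data is so easy to build on is that HIBP's companion Pwned Passwords service exposes a k-anonymity range API: a client hashes the password with SHA-1 locally, sends only the first five hex characters of the hash, and compares the returned suffixes on its own machine. A minimal sketch, assuming outbound HTTPS access and the requests library, looks like this:

```python
import hashlib

import requests  # pip install requests


def pwned_count(password: str) -> int:
    """Return how many times a password appears in Pwned Passwords, via the range API."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    # A nonzero count means the password has appeared in known breaches.
    print(pwned_count("password123"))
```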
Hunt, who is also a Microsoft Regional Director, elected to open-source HIBP last year following a failed acquisition.
He made the decision to push HIBP fully into community ownership because it had grown substantially, thanks to free contributions from people around the world, and become an indispensable source of data breach data for consumers and companies alike. But, as Hunt pointed out at the time, the entire project still hinged on him alone. “If I disappear, HIBP quickly withers and dies,” he said.
Open sourced
This is where the open-sourcing comes into play. “I knew it wouldn’t be easy, but I also knew it was the right thing to do for the longevity of the project,” Hunt wrote in a blog post today.
Given the complexities involved in transforming a one-person project into an open source entity, Hunt has turned to the .NET Foundation, a not-for-profit organization Microsoft established in 2014 to oversee its .NET Framework’s transition to open source.
“There’s a heap of effort involved in picking something up that’s run as a one-person pet project for years and moving it into the public domain,” Hunt wrote. “I had no idea how to manage an open source project, establish the licencing model, coordinate where the community invests effort, take contributions, redesign the release process, and all sorts of other things I’m sure I haven’t even thought of yet.” HIBP now has its own profile on GitHub, with repositories for an Azure Function and Cloudflare Worker, and it has been released under a permissive BSD 3-Clause License.
The first significant piece of work for HIBP as an open source project will be to develop the functionality needed to ingest credentials the FBI identifies as breached.
“They’ll be fed into the system as they’re made available by the bureau, and obviously that’s both a cadence and a volume which will fluctuate depending on the nature of the investigations they’re involved in,” Hunt wrote. “The important thing is to ensure there’s an ingestion route by which the data can flow into HIBP and be made available to consumers as fast as possible in order to maximize the value it presents. To do that, we’re going to need to write some code.”
"
|
15,666 | 2,010 |
"Google acquires MetaWeb, says Freebase will become "more open" | VentureBeat"
|
"https://venturebeat.com/2010/07/16/google-acquires-metaweb-says-freebase-will-become-more-open"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google acquires MetaWeb, says Freebase will become “more open” Kim-Mai Cutler Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Google has acquired Benchmark-backed Metaweb to improve the structure of its search results.
Over the past few years, Google has evolved beyond the “10 blue links” paradigm (where search results delivered only simple links) and begun breaking out specialty news, video or data snippets atop search results. Metaweb’s Freebase database catalogues more than 12 million objects like movies, books, TV shows and locations.
“The web isn’t merely words—it’s information about things in the real world, and understanding the relationships between real-world entities can help us deliver relevant information more quickly,” Google said in a blog post.
Google says the database will still remain open and free for use by other developers, and it’s encouraging other companies to contribute to the dataset.
Metaweb raised close to $57 million in two rounds, with the most recent one at the beginning of 2008.
The considerable round of funding came at a time when investors were betting that a more powerful, “semantic” Web would emerge — and of course, before the global financial crisis.
That dream is still in the making, but several companies are making stabs at it. Google’s emerging rival Facebook recently announced the Open Graph, a way to map all objects on the web like movies and places and peoples’ relationships to them. The metadata required for this would create a rival structure to what Metaweb has built. And because Facebook has the “like” data recording the preferences of its 500 millions users, it would be in the best position to harness the metadata to create a compelling search product.
"
|