id
int64 0
17.2k
| year
int64 2k
2.02k
| title
stringlengths 7
208
| url
stringlengths 20
263
| text
stringlengths 852
324k
|
|---|---|---|---|---|
1,013 | 2,021 |
"The AI arms race has us on the road to Armageddon | VentureBeat"
|
"https://venturebeat.com/ai/the-ai-arms-race-has-us-on-the-road-to-armageddon"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The AI arms race has us on the road to Armageddon Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
It’s now a given that countries worldwide are battling for AI supremacy.
To date, most of the public discussion surrounding this competition has focused on commercial gains flowing from the technology. But the AI arms race for military applications is racing ahead as well, and concerned scientists, academics, and AI industry leaders have been sounding the alarm.
Compared to existing military capabilities, AI-enabled technology can make decisions on the battlefield with mathematical speed and accuracy and never get tired. However, countries and organizations developing this tech are only just beginning to articulate ideas about how ethics will influence the wars of the near future. Clearly, the development of AI-enabled autonomous weapons systems will raise significant risks for instability and conflict escalation. However, calls to ban these weapons are unlikely to succeed.
In an era of rising military tensions and risk, leading militaries worldwide are moving ahead with AI-enabled weapons and decision support, seeking leading-edge battlefield and security applications. The military potential of these weapons is substantial, but ethical concerns are largely being brushed aside. Already they are in use to guard ships against small boat attacks, search for terrorists , stand sentry , and destroy adversary air defenses.
For now, the AI arms race is a cold war , mostly between the U.S., China, and Russia, but worries are it will become more than that. Driven by fear of other countries gaining the upper hand , the world’s military powers have been competing by leveraging AI for years — dating back at least to 1983 — to achieve an advantage in the balance of power.
This continues today.
Famously , Russian President Vladimir Putin has said the nation that leads in AI will be the “ruler of the world.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How policy lines up behind military AI use According to an article in Salon, diverse and ideologically-distinct research organizations including the Center for New American Security (CNAS), the Brookings Institution , and the Heritage Foundation have argued that America must ratchet up spending on AI research and development. A Foreign Affairs article argues that nations who fail to embrace leading technologies for the battlefield will lose their competitive advantage. Speaking about AI, former U.S. Defense Secretary Mark Esper said last year, “ History informs us that those who are first to harness once-in-a-generation technologies often have a decisive advantage on the battlefield for years to come.” Indeed, leading militaries are investing heavily in AI , motivated by a desire to secure military operational advantages on the future battlefield.
Civilian oversight committees, as well as militaries, have adopted this view. Last fall , a U.S.
bipartisan congressional report called on the Defense Department to get more serious about accelerating AI and autonomous capabilities. Created by Congress, the National Security Commission on AI (NSCAI) recently urged an increase in AI R&D funding over the next few years to ensure the U.S. is able to maintain its tactical edge over its adversaries and achieve “military AI readiness” by 2025.
In the future, warfare will pit “algorithm against algorithm,” claims the new NSCAI report. Although militaries have continued to compete using weapon systems similar to those of the 1980s, the NSCAI report claims: “the sources of battlefield advantage will shift from traditional factors like force size and levels of armaments to factors like superior data collection and assimilation, connectivity, computing power, algorithms, and system security.” It is possible that new AI-enabled weapons would render conventional forces near obsolete , with rows of decaying Abrams tanks gathering dust in the desert in much the same way as mothballed World War II ships lie off the coast of San Francisco. Speaking to reporters recently, Robert O. Work, vice chair of the NSCAI said of the international AI competition: “We have got … to take this competition seriously, and we need to win it.” The accelerating AI arms race Work to incorporate AI into the military is already far advanced. For example, militaries in the U.S., Russia, China, South Korea, the United Kingdom, Australia, Israel, Brazil, and Iran are developing cybersecurity applications, combat simulations, drone swarms , and other autonomous weapons.
Caption: The Russian Uran-9 is an armed robot.
Credit : Dmitriy Fomin via Wikimedia Commons. CC BY 2.0.
A recently completed “global information dominance exercise” by U.S. Northern Command pointed to the tremendous advantages the Defense Department can achieve by applying machine learning and artificial intelligence to all-domain information. The exercise integrated information from all domains including space, cyberspace, air, land, sea, and undersea, according to Air Force Gen. Glen D. VanHerck.
Gilman Louie, a commissioner on the NSCAI report, is quoted in a news article saying: “I think it’s a mistake to think of this as an arms race” — though he added, “We don’t want to be second.” A dangerous pursuit West Point has started training cadets to consider ethical issues when humans lose some control over the battlefield to smart machines. Along with the ethical and political issues of an AI arms race are the increased risks of triggering an accidental war.
How might this happen? Any number of ways, from a misinterpreted drone strike to autonomous jet fighters with new algorithms.
AI systems are trained on data and reflect the quality of that data along with any inherent biases and assumptions of those developing the algorithms. Gartner predicts through 2023, up to 10% of AI training data will be poisoned by benign or malicious actors. That is significant, especially considering the security vulnerability of critical systems.
When it comes to bias, military applications of AI are presumably no different, except that the stakes are much higher than whether an applicant gets a good rate on car insurance.
Writing in War on the Rocks , Rafael Loss and Joseph Johnson argue that military deterrence is an “extremely complex” problem — one that any AI hampered by a lack of good data will not likely be able to provide solutions for in the immediate future.
How about assumptions? In 1983, the world’s superpowers drew near to accidental nuclear war, largely because the Soviet Union relied on software to make predictions that were based on false assumptions. Seemingly this could happen again, especially as AI increases the likelihood that humans would be taken out of decision making.
It is an open question whether the risks of such a mistake are higher or lower with greater use of AI, but Star Trek had a vision in 1967 for how this could play out. The risks of conflict had escalated to such a degree in a “Taste of Armageddon” that war was outsourced to a computer simulation that decided who would perish.
Source: Star Trek, A Taste of Armageddon.
There is no putting the genie back in the bottle. The AI arms race is well underway and leading militaries worldwide do not want to be in second place or worse. Where this will lead is subject to conjecture. Clearly, however, the wars of the future will be fought and determined by AI more than traditional “military might.” The ethical use of AI in these applications remains an open-ended issue. It was within the mandate of the NSCAI report to recommend restrictions on how the technology should be used, but this was unfortunately deferred to a later date.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,014 | 2,021 |
"How synthetic data could save AI | VentureBeat"
|
"https://venturebeat.com/ai/how-synthetic-data-could-save-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How synthetic data could save AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
AI is facing several critical challenges. Not only does it need huge amounts of data to deliver accurate results, but it also needs to be able to ensure that data isn’t biased, and it needs to comply with increasingly restrictive data privacy regulations. We have seen several solutions proposed over the last couple of years to address these challenges — including various tools designed to identify and reduce bias, tools that anonymize user data, and programs to ensure that data is only collected with user consent. But each of these solutions is facing challenges of its own.
Now we’re seeing a new industry emerge that promises to be a saving grace: synthetic data.
Synthetic data is artificial computer-generated data that can stand-in for data obtained from the real world.
A synthetic dataset must have the same mathematical and statistical properties as the real-world dataset it is replacing but does not explicitly represent real individuals. Think of this as a digital mirror of real-world data that is statistically reflective of that world. This enables training AI systems in a completely virtual realm. And it can be readily customized for a variety of use cases ranging from healthcare to retail, finance, transportation, and agriculture.
There’s significant movement happening on this front.
More than 50 vendors have already developed synthetic data solutions, according to research last June by StartUs Insights. I will outline some of the leading players in a moment. First, though, let’s take a closer look at the problems they’re promising to solve.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The trouble with real data Over the last few years, there has been increasing concern about how inherent biases in datasets can unwittingly lead to AI algorithms that perpetuate systemic discrimination.
In fact, Gartner predicts that through 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them.
The proliferation of AI algorithms has also led to growing concerns over data privacy. In turn, this has led to stronger consumer data privacy and protection laws in the EU with GDPR, as well as U.S. jurisdictions including California and most recently Virginia.
These laws give consumers more control over their personal data. For example, the Virginia law grants consumers the right to access, correct, delete, and obtain a copy of personal data as well as to opt out of the sale of personal data and to deny algorithmic access to personal data for the purposes of targeted advertising or profiling of the consumer.
By restricting access to this information, a certain amount of individual protection is gained but at the cost of the algorithm’s effectiveness. The more data an AI algorithm can train on, the more accurate and effective the results will be. Without access to ample data, the upsides of AI, such as assisting with medical diagnoses and drug research, could also be limited.
One alternative often used to offset privacy concerns is anonymization. Personal data, for example, can be anonymized by masking or eliminating identifying characteristics such as removing names and credit card numbers from ecommerce transactions or removing identifying content from healthcare records. But there is growing evidence that even if data has been anonymized from one source, it can be correlated with consumer datasets exposed from security breaches. In fact, by combining data from multiple sources, it is possible to form a surprisingly clear picture of our identities even if there has been a degree of anonymization. In some instances, this can even be done by correlating data from public sources, without a nefarious security hack.
Synthetic data’s solution Synthetic data promises to deliver the advantages of AI without the downsides. Not only does it take our real personal data out of the equation, but a general goal for synthetic data is to perform better than real-world data by correcting bias that is often engrained in the real world.
Although ideal for applications that use personal data, synthetic information has other use cases, too. One example is complex computer vision modeling where many factors interact in real time. Synthetic video datasets leveraging advanced gaming engines can be created with hyper-realistic imagery to portray all the possible eventualities in an autonomous driving scenario, whereas trying to shoot photos or videos of the real world to capture all these events would be impractical, maybe impossible, and likely dangerous. These synthetic datasets can dramatically speed up and improve training of autonomous driving systems.
(Above image: Synthetic images are used to train autonomous vehicle algorithms. Source: synthetic data provider Parallel Domain.
) Perhaps ironically, one of the primary tools for building synthetic data is the same one used to create deepfake videos. Both make use of generative adversarial networks (GAN), a pair of neural networks. One network generates the synthetic data and the second tries to detect if it is real. This is operated in a loop, with the generator network improving the quality of the data until the discriminator cannot tell the difference between real and synthetic.
The emerging ecosystem Forrester Research recently identified several critical technologies , including synthetic data, that will comprise what they deem “AI 2.0,” advances that radically expand AI possibilities. By more completely anonymizing data and correcting for inherent biases, as well as creating data that would otherwise be difficult to obtain, synthetic data could become the saving grace for many big data applications.
Synthetic data also comes with some other big benefits: You can create datasets quickly and often with the data labeled for supervised learning. And it does not need to be cleaned and maintained the way real data does. So, theoretically at least, it comes with some large time and cost savings.
Several well-established companies are among those that generate synthetic data. IBM describes this as data fabrication , creating synthetic test data to eliminate the risk of confidential information leakage and address GDPR and regulatory issues. AWS has developed in-house synthetic data tools to generate datasets for training Alexa on new languages.
And Microsoft has developed a tool in collaboration with Harvard with a synthetic data capability that allows for increased collaboration between research parties. Notwithstanding these examples, it is still early days for synthetic data and the developing market is being led by the startups.
To wrap up, let’s take a look at some of the early leaders in this emerging industry. The list is constructed based on my own research and industry research organizations including G2 and StartUs Insights.
AiFi — Uses synthetically generated data to simulate retail stores and shopper behavior.
AI.Reverie — Generates synthetic data to train computer vision algorithms for activity recognition, object detection, and segmentation. Work has included wide-scope scenes like smart cities, rare plane identification, and agriculture, along with smart-store retail.
Anyverse — Simulates scenarios to create synthetic datasets using raw sensor data, image processing functions, and custom LiDAR settings for the automotive industry.
Cvedia — Creates synthetic images that simplify the sourcing of large volumes of labeled, real, and visual data. The simulation platform employs multiple sensors to synthesize photo-realistic environments resulting in empirical dataset creation.
DataGen — Interior-environment use cases, like smart stores, in-home robotics, and augmented reality.
Diveplane — Creates synthetic ‘twin’ datasets for the healthcare industry with the same statistical properties of the original data.
Gretel — Aiming to be GitHub equivalent for data, the company produces synthetic datasets for developers that retain the same insights as the original data source.
Hazy — generates datasets to boost fraud and money laundering detection to combat financial crime.
Mostly AI — Focuses on insurance and finance sectors and was one of the first companies to create synthetic structured data.
OneView – Develops virtual synthetic datasets for analysis of earth observation imagery by machine learning algorithms.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,015 | 2,023 |
"How hybrid AI could enhance GPT-4 and GPT-5 and address LLM concerns | VentureBeat"
|
"https://venturebeat.com/ai/how-hybrid-ai-could-enhance-gpt-4-and-gpt-5-and-address-llm-concerns"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How hybrid AI could enhance GPT-4 and GPT-5 and address LLM concerns Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The explosion of new generative AI products and capabilities over the last several months — from ChatGPT to Bard and the many variations from others based on large language models (LLMs) — has driven an overheated hype cycle. In turn, this situation has led to a similarly expansive and passionate discussion about needed AI regulation.
AI regulation showdown The AI regulation firestorm was ignited by the Future of Life Institute open letter , now signed by thousands of AI researchers and concerned others. Some of the notable signees include Apple cofounder Steve Wozniak, SpaceX, Tesla and Twitter CEO Elon Musk; Stability AI CEO Emad Mostaque; Sapiens author Yuval Noah Harari; and Yoshua Bengio, founder of AI research institute Mila.
Citing “an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” the letter called for a 6-month pause in the development of anything more powerful than GPT-4.
The letter argues this additional time would allow ethical, regulatory and safety concerns to be considered and states that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” Signatory Gary Marcus told TIME: “There are serious near-term and far-term risks and corporate AI responsibility seems to have lost fashion right when humanity needs it most.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Like the letter, this perspective seems reasonable. After all, we are currently unable to explain exactly how LLMs work. On top of that, these systems also occasionally hallucinate, producing output that sounds credible but is not correct.
Two sides to every story Not everyone agrees with the assertions in the letter or that a pause is warranted. In fact, many in the AI industry have pushed back, saying a pause would do little good. According to a report in VentureBeat, Meta chief scientist Yann LeCun said, “I don’t see the point of regulating research and development. I don’t think that serves any purpose other than reducing the knowledge that we could use to actually make technology better, safer.” Pedro Domingos, a professor at the University of Washington and author of the seminal AI book The Master Algorithm went further.
The AI moratorium letter was an April Fools’ joke that came out a few days early due to a glitch.
Every field has its moments of comedy, and the moratorium letter will go down as one of them in AI.
According to reporting in Forbes, Domingos believes the level of urgency and alarm about existential risk expressed in the letter is overblown, assigning capabilities to these systems well beyond reality.
Nevertheless, the ensuing industry conversation may have prompted OpenAI CEO Sam Altman to say that the company is not currently testing GPT-5. Moreover, Altman added that the Transformer network technology underlying GPT-4 and the current ChatGPT may have run its course and that the age of giant AI models is already over.
The implication of this is that building ever larger LLMs may not yield appreciably better results, and by extension, GPT-5 would not be based on a larger model. This could be interpreted as Altman saying to supporters of the pause, “There’s nothing here to worry about, move along.” Taking the next step: Combining AI models This begs the question of what GPT-5 might look like when it eventually appears. Clues can be found in the innovation taking place currently, and that’s based on the present state of these LLMs. For example, OpenAI is releasing plug-ins for ChatGPT that add specific additional capabilities.
These plug-ins are meant to both augment its capabilities as well as offset weaknesses, such as poor performance on math problems, the tendency to make things up and the inability to explain how the model produces results. These are all problems typical of “connectionist” neural networks that are based on theories of how the brain is thought to operate.
In contrast, “symbolic” learning AIs do not have these weaknesses because they are reasoning systems based on facts. It could be that what OpenAI is creating — initially through plug-ins — is a hybrid AI model combining two AI paradigms, the connectionist LLMs with symbolic reasoning.
At least one of the new ChatGPT plug-ins is a symbolic reasoning AI. The Wolfram|Alpha plug-in provides a knowledge engine known for its accuracy and reliability that can be used to answer a wide range of questions. Combining these two AI approaches effectively makes a more robust system that would reduce the hallucinations of purely connectionist ChatGPT and — importantly — could also offer a more comprehensive explanation of the system’s decision-making process.
I asked Bard if this was plausible. Specifically, I asked if a hybrid system would be better at explaining what goes on within the hidden layers of a neural network. This is especially relevant since the issue of explainability is a notoriously difficult problem and at the root of many expressed concerns about all deep learning neural networks, including GPT-4.
If true, this could be an exciting advance. However, I wondered if this answer was a hallucination. As a double-check, I posed the same question to ChatGPT. The response was similar, though more nuanced.
In other words, a hybrid system combining connectionist and symbolic AI would be a notable improvement over a purely LLM-based approach, but it is not a panacea.
Although combining different AI models might seem like a new idea, it is already in use. For example, AlphaGo, the deep learning system developed by DeepMind to defeat top Go players, utilizes a neural network to learn how to play Go while also employing symbolic AI to comprehend the game’s rules.
While effectively combining these approaches presents unique challenges, further integration between them could be a step towards AI that is more powerful, offers better explainability and provides greater accuracy.
This approach would not only enhance the capabilities of the current GPT-4, but could also address some of the more pressing concerns about the current generation of LLMs. If, in fact, GPT-5 embraces this hybrid approach, it might be a good idea to speed up its development instead of slowing it down or enforcing a development pause.
Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,016 | 2,023 |
"How ChatGPT and generative AI could bring the Star Trek holodeck to life | VentureBeat"
|
"https://venturebeat.com/ai/how-chatgpt-and-generative-ai-could-bring-the-star-trek-holodeck-to-life"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How ChatGPT and generative AI could bring the Star Trek holodeck to life Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
For Star Trek fans and tech nerds, the holodeck concept is a form of geek grail. The idea is an entirely realistic simulated environment where just speaking the request (prompt) seemingly brings to life an immersive environment populated by role-playing AI-powered digital humans. As several Star Trek series envisioned, a multitude of scenes and narratives could be created, from New Orleans jazz clubs to private eye capers. Not only did this imagine an exciting future for technology, but it also delved into philosophical questions such as the humanity of digital beings.
Digerati dreams Since it first emerged on screen in 1988, the holodeck has been a mainstay quest of the Digerati. Over the years, several companies, including Microsoft and IBM , have created labs in pursuit of building the underlying technologies. Yet, the technical challenges have been daunting for both software and hardware. Perhaps AI, and in particular generative AI, can advance these efforts. That is just one vision for how generative AI might contribute to the next generation of technology.
Sam Altman, CEO of OpenAI, believes that ChatGPT could be the interface technology for a holodeck that responds naturally to our verbal commands. It provides an interface that feels “ fundamentally right ,” he said in a recent interview with Time. Could a holodeck and other futuristic scenarios emerge over the next 12 months? If the last year is any indication, then possibly. In this post, we will look back and project forward.
The generative AI whirlwind of the last 12 months Generative AI — before the term became widely known — first captured the world’s imagination almost exactly a year ago. That is when a now former Google engineer went public with his views that a chatbot based on the LaMDA large language model (LLM) was sentient. This led to hundreds of articles discussing this claim and the views of most technology experts who countered that a LLM was not and could not be sentient. The viral debate was a watershed moment that heralded the arrival of generative AI. Over the ensuing 12 months, there has been an almost non-stop whirlwind of dramatic technological advances, a profusion of opinions, palpable excitement, and escalating worries.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The sentience debate was followed two months later by another viral story.
This time it was about an image created using Midjourney. The image was entered into an art competition and won, much to the consternation of digital artists and graphic designers.
Jason Allen's AI-generated work "Théâtre D'opéra Spatial" took first place in the digital category at the Colorado State Fair.
https://t.co/6bFNFERCki Although AI had already been incorporated into artists’ tools, this image was controversial because it was generated entirely by AI and won an art competition. This led many to view the moment as a tipping point where technology could displace creative professionals. This also set off a firestorm of controversy about copyright protection, as generated images are based on material scraped without permission from the internet, some percentage of which are covered by copyright protection. Subsequently, lawsuits are in process in an attempt to halt the inclusion of these images in model training datasets.
A time of AI magic These stories seemed almost tame after the introduction of ChatGPT in late November, only five months after claims of LLM sentience. Unlike the Google experience, ChatGPT was made available to the public. Within five days, the new chatbot had a million users , making it the fastest-growing consumer application ever.
ChatGPT is startlingly conversant, can answer questions, create plays and articles, write and debug code, take tests, translate languages, manipulate data, provide advice and tutor. Using ChatGPT and the image generation tools felt to me like magic, which reminded me of a now 60-year-old quote from science fiction writer and futurist Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.” The success of ChatGPT set off a chatbot frenzy. LLMs and chatbots are now in the market from Microsoft, Google, Meta, Databricks, Cohere, Anthropic, Nvidia and many others. Some of the LLMs and image generators are now open source, meaning that anyone could download the technology not only to use, but adapt it for their needs. In March, OpenAI released GPT-4, an even more powerful LLM.
Concerns about bad actors proliferate While the upside of the technology is astronomically high, worries proliferated as fast as the software throughout the spring. This was and is especially so for concerns about bad actors who could use these tools to create and spread toxic misinformation or worse. As there are effectively no limits on who can access and manipulate open-source models, the worry is that open-source models could be impervious to regulatory attempts and society will be flooded with AI-powered misinformation, deepfakes and phishing scams.
In April, another generative AI tool was used to simulate the music of pop stars Drake and The Weeknd. The song “Heart on My Sleeve” went viral across social media platforms. This led to a widespread mixture of excitement and consternation. Even Paul McCartney has recently jumped in , saying AI was helping the surviving Beatles produce a song featuring vocals by John Lennon, who was killed in 1980.
Artist facing obsolescence? While AI is helping some recording artists, many now share similar worries as graphic artists about looming obsolescence. As do actors , some of whom worry they too could be replaced in movies and television shows. Generative AI is now an issue in the strike by the Writers Guild of America, who believe that Hollywood studios could use chatbots to develop scripts for movies and television shows.
Arguably adding to those worries, Vimeo just launched AI tools that it says will transform professional video production. The company is introducing a script generator and an automated video editor. Vimeo views this as democratizing the creation of video content.
Not surprisingly, there has been a backlash against AI implementations. This response is consistent with Newton’s third law of motion, which states that for every action, there is an equal and opposite reaction. As part of that reaction, the Future of Life Institute published an open letter in March signed by thousands of technology and business leaders calling for a pause in AI development.
In May, the Center for AI Safety (CAIS) released a one-sentence statement : “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” Governments around the world are now actively attempting to understand AI technology and its implications and to develop useful regulations. Meanwhile, McKinsey issued a report this month predicting that generative AI would contribute up to $4.4 trillion annually to the world economy.
What will the next year bring? In my view, the next 12 months could be at least as wild though efforts to reign in the pace of technological advances through regulation could have some effect. Over the next 12 months, even the U.S. could pass legislation — although less onerous to the industry than what is currently proposed in the EU.
Beyond that, we can expect enterprises to further implement generative AI-based tools and solutions across their organizations. As Deloitte said in a recent report , a myriad of challenges need to be overcome before generative AI can be deployed at scale.
Nevertheless, within the next 12 months, we can expect that most Fortune 500 companies will have incorporated the technology in at least a portion of their business. There will be expanded use of chatbots, AI content creation tools and software development, and increased use of AI for media, entertainment, and education.
Promising innovation, strong concern Open-source models and tools will continue to appear and spread, allowing more people to leverage generative AI for personal and professional use, as well as for innovation. This openness will spur both promising innovation and reasons for concern. More effective cyber attacks are a likely outcome. As The New York Times columnist Thomas Friedman notes , open source code can be exploited by anyone. He asks: “What would ISIS do with the code?” Generative AI’s economic impact will lead to both job losses and new types of jobs, sources of income, skills development, and business opportunities. Worries about significant job market disruption will grow and become a central issue in the 2024 U.S. presidential campaign.
International competition over AI will further intensify, as will calls for more cooperation in managing risks. An international conference to discuss this will struggle to find common ground. Views on governance vary globally as “human values” are not consistent across cultures, and a unified approach will remain elusive as pressure to “win”— and profit — beats all.
What about the unexpected? These projections are reasonable, even normative, given the pace of AI development. What is more difficult — although perhaps just as likely — is the unexpected, whether from innovation or due to a black swan event.
In the next 12 months, we could see generative AI used to create a Hollywood-level, feature-length film. This could be from a major U.S. studio, but just as likely from abroad.
Avengers: Infinity War and Endgame co-director Joe Russo is already on record saying he believes this could happen within the next two years, likely championed by younger filmmakers.
The success of this film could herald a new era of AI-generated media and entertainment and exacerbate concerns over the impact of generative AI on human creative work.
Perhaps too, a single software developer or a small group of developers could create a huge new system developed in a few months that normally would have required dozens or even hundreds of programmers many years to construct. This could come with automated coding at a massive scale through use of programming co-pilots and recursive agents. Such accelerated development would send shockwaves through the software and technology sector.
The holodeck and the unknown: Reinvention of every industry These shockwaves might only be surpassed by a functional holodeck. Generative AI might be ready to do its part, although it could be several more years for the hardware to catch up.
The last 12 months for generative AI have provided a wild, society-changing ride. As venture capital firm Sequoia Capital said : “Every industry that requires humans to create original work — from social media to gaming, advertising to architecture, coding to graphic design, product design to law, marketing to sales — is up for reinvention.” Even though there are real concerns about the safety of AI systems, the implications for the workforce and a need for reasonable regulations, that reinvention with an entire universe of possibilities is now well underway.
Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,017 | 2,023 |
"How businesses can ensure a prolonged AI summer | VentureBeat"
|
"https://venturebeat.com/ai/how-businesses-can-ensure-a-prolonged-ai-summer"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How businesses can ensure a prolonged AI summer Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The excitement around generative AI is a textbook example demonstrating the top of a technology hype cycle. The newest gauge on emerging technologies from Gartner shows gen AI near the “peak of inflated expectations.” For example, McKinsey has stated that the technology could add up to $4.4 trillion annually in global GDP. Sequoia Capital believes that entire industries will be disrupted.
The Organization for Economic Co-operation and Development (OECD) said the wealthiest economies are on the brink of an AI revolution. Countries are competing too, perhaps prompted by Vladimir Putin’s statement from several years ago that “whoever becomes the leader in [AI] will become the ruler of the world.
” Seemingly everyone who is anyone has compared the impact of AI to that of fire , the printing press, electricity or the internet.
An Insider op-ed claims the “crescendo for this technological wave is surging.” As evidence, look no further than a Wall Street Journal report on the intense competition for AI specialists, with many companies offering mid-six-figure salaries.
On the brink of transformation or tragedy: The future of gen AI Certainly, the transformative potential of gen AI is visible. Although it is simultaneously possible that there is more than a whiff of hubris, as there are concerning problems. These include the propensity for chatbots to hallucinate answers, a perpetuation of inherent bias from the training datasets, legal concerns regarding copyright, fair use, and ownership that have led to lawsuits, worries about the environmental footprint, concerns about creating torrents of disinformation, fears about potential job losses which in turn have led to strikes by several unions and distress over potential existential threats. All of these are considerable problems that will need to be overcome for widespread adoption.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! New York University professor emeritus Gary Marcus has long been known for his dissenting views on deep learning generally and, most recently, gen AI. In his latest blog post , he posits that gen AI could be an economic “dud.” Beyond what he believes are limited use cases, he said, “The technical problems there are immense; there is no reason to think that the hallucination problem will be solved soon. If it isn’t, the bubble could easily burst.” If the bubble — to use his term — were indeed to burst, it would lead to market disillusionment and slowing AI investments.
The threat of an AI winter: A historical perspective If this scenario comes to pass, it will not be the first time AI has fallen from grace. Twice before, there have been “ AI winters ” where the promise fails to match reality. AI winters as experienced in the mid-1970s and the late 1980s, occur when promises and expectations greatly outpace reality and people become disappointed in AI and the results achieved.
In 1988, a New York Times article offered this analysis of AI: “People believed their own hype. Everyone was planning on growth that was unsustainable.” It is unrealized or dashed promises that lead to AI winters. As projects flounder, people lose interest and the hype fades, as does research and investment. In 2023, the promises and expectations for AI could not be much higher. Could the predictions of massive impacts from gen AI similarly be overstated? Is this time different? Hardly a day passes without an announcement from an enterprise about how they are incorporating gen AI into their product offerings or new partnerships bringing the tech to market. However, companies are struggling to deploy AI.
In large part, this is because many of the products are still immature and businesses are attempting to understand use cases, data management requirements, risks, staff impacts and training needs, and how to incorporate the technology responsibly.
VentureBeat quotes Gartner analyst Arun Chandrasekaran: “Every vendor is knocking on the door of an enterprise CIO or CTO and saying, ‘We’ve got generative AI baked into our product,’” adding that executives are struggling to navigate this landscape.
It is a lot to assess. Nearly half (46%) of respondents in a recent global survey of IT leaders said their organizations are unprepared to implement AI. Furthermore, “more than half of surveyed respondents say they have not experimented with the latest AI natural language processing apps yet.” The next generation of AI technologies Even as significant problems remain and many companies are unprepared for widespread adoption, it is likely that AI technology will continue to advance. For example, Google DeepMind is expected to soon release its “Gemini” system which will combine the strengths of multiple systems, including large language models (LLMs) and those akin to their Alpha Go.
The net effect of Gemini, according to DeepMind cofounder and CEO Demis Hassabis, is to “add planning or the ability to solve problems” in addition to the language skills displayed in current models. Google hopes Gemini will surpass ChatGPT and other LLMs. For its part, OpenAI has not yet said anything about the availability of its next-generation GPT-5, although speculation has started since it filed a trademark application for the term several weeks ago.
Mitigating risks: Proactive measures in the AI industry In a major step to address some of the problems with gen AI, the White House Office of Science and Technology Policy challenged hackers and security researchers to outsmart the top gen AI models. To their credit, eight companies, including OpenAI, Google, Meta, and Anthropic, agreed to participate.
Spanning three days, more than 2,000 people pitted their skills against the chatbots while trying to break them. As reported by NPR, the event was based on a cybersecurity practice called “red teaming:” Attacking models to identify their weaknesses by tricking them into creating fake news, defamatory statements and sharing potentially dangerous instructions.
As reported by CNBC, a White House spokesperson said , “Red teaming is one of the key strategies the Administration has pushed for to identify AI risks and is a key component of the voluntary commitments around safety, security and trust by seven leading AI companies that the President announced in July.” The New York Times reported that the red-teamers “found political misinformation, demographic stereotypes, instructions on how to carry out surveillance and more.” The companies claim they will use the data to make their systems safer. Stress testing and patching found problems is a proactive way to identify and reduce risks in these AI systems.
Balancing gen AI promise and pitfalls The continued advance of new gen AI features and capabilities, plus ongoing risk mitigation efforts, will, in turn, create greater urgency for companies to incorporate new AI products into their day-to-day operations.
As technological advances march forward, the specter of an AI winter looms, but so does the promise of transformative breakthroughs from maturing products. Whether we are witnessing the prelude to another AI winter or the dawn of a new era in technological advancement remains a complex question, which only time will tell.
Through continued collaboration, greater transparency and responsible innovation, we can ensure that AI’s potential is realized without succumbing to the pitfalls of the past. As long as the music keeps playing, the AI summer will continue.
Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,018 | 2,021 |
"How AI, VR, AR, 5G, and blockchain may converge to power the metaverse | VentureBeat"
|
"https://venturebeat.com/ai/how-ai-vr-ar-5g-and-blockchain-may-converge-to-power-the-metaverse"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How AI, VR, AR, 5G, and blockchain may converge to power the metaverse Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Emerging technologies including AI, virtual reality (VR), augmented reality (AR), 5G, and blockchain (and related digital currencies) have all progressed on their own merits and timeline. Each has found a degree of application, though clearly AI has progressed the furthest. Each technology is maturing while overcoming challenges ranging from blockchain’s energy consumption to VR’s propensity for inducing nausea.
They will likely converge in readiness over the next several years, underpinned by the now ubiquitous cloud computing for elasticity and scale. And in that convergence, the sum will be far greater than the parts. The catalyst for this convergence will be the metaverse — a connected network of always-on 3D virtual worlds.
The metaverse concept has wide-sweeping potential. On one level, it could be a 3D social media channel with messaging targeted perfectly to every user by AI. That’s the Meta (previously Facebook) vision. It also has the potential to be an all-encompassing platform for information, entertainment, and work.
There will be multiple metaverses, at least initially, with some tailored to specific interests such as gaming or sports. The key distinction between current technology and the metaverse is the immersive possibilities the metaverse offers, which is why Meta, Microsoft, Nvidia, and others are investing so heavily in it. It may also become the next version of the Internet.
Instead of watching the news, you could feel as if you are in the news. Instead of learning history by reading about an event in a book – such as Washington crossing the Delaware – you could virtually witness the event from the shore or from a boat. Instead of watching a basketball game on television, you could experience it in 360-surround. People could attend a conference virtually, watch the keynotes, and meet with others. In the metaverse, our digital presence will increasingly supplement our real one.
According to Meta CEO Mark Zuckerberg, the metaverse could be the next best thing to a working teleportation device.
Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Caption : Thanks to Time magazine , it is possible to experience the attack on Pearl Harbor through VR As described by Monica White in Digital Trends , “The metaverse is meant to replace, or improve, real-life functionality in a virtual space. Things that users do in their day-to-day life, such as attending classes or going to work, can all be done in the metaverse instead.” For example, the metaverse could offer an entirely new 3D platform for ecommerce. Imagine a virtual reality shopping experience, virtually walking the aisles of a megastore stocked by a multitude of platform partner companies tailored specifically for you, where promotional messages are designed with only you in mind, and the only items displayed are the ones in stock and available to ship. In this store, on sale items are selected based on your tastes and expected needs, and value-based pricing is dynamically updated in real-time, based either on the age of the product (if a perishable item), supply and demand, or both.
First there was Second Life While the metaverse feels fresh and futuristic, we’ve been here before. In addition to early visionaries Neal Stephenson and William Gibson , who described the metaverse in fiction, a very real metaverse was created in 2003. It was known as Second Life , and millions of people rushed to the platform to experience an alternate digital universe replete with avatars.
NBC described Second Life as an “online virtual world where avatars do the kind of stuff real people do in real life: Buy stuff. Sell stuff. Gamble. Listen to music. Buy property. Flirt. Play games. Watch movies. Have sex.” Harvard University even taught online classes within Second Life. Second Life was so successful that it was the subject of a 2006 cover story in BusinessWeek.
Caption: Second Life makes the cover of BusinessWeek in May 2006.
However, Second Life’s popularity dropped soon after. As described in a 2007 Computerworld article, the experience suffered due to a “poor UI, robust technical requirements, a steep learning curve, an inability to scale, and numerous distractions.” And then Facebook came along and offered a more compelling experience.
In 2007, there was no VR, AR, 5G, blockchain or digital currency. Cloud computing was in its infancy, and the mobile internet was still emerging as the first iPhone had just been introduced. Further, AI still had limited impact, since the deep learning boom was still a few years away. Perhaps that is why Meta is now enamored with the idea of the metaverse as it seeks to combine the most compelling (and consumer-tested) elements of Facebook and Second Life, based on an entirely new platform powered by the latest technology.
Emerging technologies near ready Several of the technologies that will enable the metaverse, including virtual and augmented reality and blockchain, have been slow to mature but are approaching a level of capability that is critical for success. Each has been missing the killer app that will drive development and widespread adoption forward. The metaverse could be that app.
For VR, most headsets still need to be tethered to a PC or gaming console to achieve the processing power and communication speed required for smooth and immersive experiences. Only Meta’s Oculus Quest 2 has so far broken free of this cable constraint. But even that headset remains bulky , according to one of Meta’s VPs. With ever faster processors and higher speed wireless communications on the near horizon, better visual resolution and untethered experiences should emerge over the next few years.
AR has achieved mostly niche adoption. In part, AR prospects likely suffered due to the high-profile market failure of Google Glass when introduced in 2012. And while Pokemon Go provided a huge lift for the technology in 2016, there has not been a similar phenomenon since. But an important new player is apparently readying to enter the market: Perhaps spurred by the metaverse concept and moves by competitors, Apple is expected to release its first AR/VR headset in late 2022. Apple has a penchant for entering a market well after the first movers have proven viability, then going on to dominate. It is a reasonable conclusion that this is the company’s plan for the metaverse.
Blockchain underlies cryptocurrencies such as bitcoin and would enable virtual goods and identities to be purchased and seamlessly transferred between various metaverse platforms. New blockchain applications such as NFTs are leading to greater adoption, potentially pointing to a new economy.
The Wall Street Journal reported that the race is now on to extend this technology to all types of assets, adding that blockchain-based payments are superior to our legacy financial infrastructure. Similarly, the New York Times reported that venture capital funds have invested about $27 billion into crypto and blockchain companies in 2021, more than the previous 10 years combined.
Metaverse prospects While some brands are already rushing to capitalize on the metaverse fever, the metaverse will likely evolve in fits and starts, with widespread adoption still years away. This is because the needed technologies still have a way to go to optimize their functionality, ease of use, and cost. One semiconductor company has said that a truly immersive metaverse will require a 1,000-times increase in compute efficiency over today’s state-of-the-art processors. While that is a huge increase, the company separately presented at a recent “Architecture Day” that it expects to achieve that goal by 2025.
Whether it takes three years or 10, there is huge momentum behind the metaverse, with seemingly unlimited funding. Even at the current stage of development, Boeing has committed to designing its next-generation aircraft within the metaverse, using digital twins and Microsoft HoloLens headsets.
Kirby Winfield, Founding General Partner of VC firm Ascend, sees the metaverse as “the latest evolution of [an] ongoing shift to an increasingly digital life.” When it arrives in full, that shift will achieve the immersive sci-fi visions of many.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,019 | 2,020 |
"Has AI adoption plateaued, or is it just catching its breath? | VentureBeat"
|
"https://venturebeat.com/ai/has-ai-adoption-plateaued-or-is-it-just-catching-its-breath"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Has AI adoption plateaued, or is it just catching its breath? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
It has been a wild year in every quarter, and AI development is no exception. On the whole, the year has been mixed for AI, as there have been both notable advances and new revelations about abusive applications of the technology. And the market for AI technologies appears to have plateaued, with a recent global survey finding no increase in AI adoption in the enterprise. This helps to explain why Element AI, a once high-flying startup that built AI applications for enterprises who otherwise lacked the requisite skills, was ultimately not able to survive on its own.
A new report on AI adoption by IndustryLab found that implementing AI within an enterprise often runs into people challenges, such as fear of change and job loss as well as a lack of relevant skills. According to the report, 87% percent of survey respondents faced people challenges in their AI implementations. These issues remain a substantial barrier to enterprise AI adoption. It is no wonder progress has been slow within businesses, giving the appearance of a plateau.
But despite such resistance, AI technology continues to move forward. Recent AI technology advances range from improved synthetic speech to safeguarding bee health , creating a next-generation food system and developing new recipes , improving treatment for breast cancer , uncovering government corruption , and building smarter traffic lights.
These and other advances are part of why a PwC study estimates that by 2030 AI will boost global economic output by more than $15 trillion. Alphabet’s Sundar Pichai famously claimed AI is more profound that electricity or fire.
At least one major data analytics platform believes 2021 will be the year of AI as several large sectors including oil & gas, fintech, and drug research firms will increasingly embrace the technology.
So has AI really plateaued or are we just witnessing a pause before a new period of steep adoption? We would expect such a pause to result from cognitive dissonance — the advance of AI meeting fear, resistance to change, and uncertainty about whether the tech will live up to the hype.
At one extreme are predictions such as one from Vladimir Putin that whoever becomes the leader in AI will become the ruler of the world.
At the other extreme is an analysis of 40 of the largest AI startups that suggests these companies are not having a great impact, either on change or on the economy. If the latter is true, we may be at the beginning of the next AI winter, with expectations once again exceeding reality.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Consequently, the crystal ball for AI is decidedly cloudy. We are either on a plateau with the risk of falling into a chasm, or we’re readying for the next round of innovation. Most likely, there are two paths playing out in parallel: continued advancement of technical capabilities and the very human challenges of implementation.
2020: A year like no other While AI adoption in the enterprise has slowed, major breakthroughs in AI research this year are a reminder that this is an area of technology capable of unleashing exponential change.
Natural Language Processing in the form of GPT-3 developed by OpenAI could be the precursor for the first artificial general intelligence (AGI), a massive advancement. GPT-3 “learns” based on patterns it discovers in data gleaned from the internet, from Reddit posts to Wikipedia to fan fiction and other sources. Based on that learning, GPT-3 is capable of many different tasks with no additional training, able to produce compelling narratives , generate computer code , autocomplete images , translate between languages, and perform math calculations , among other feats, including some its creators did not plan. This apparent multifunctional capability is a departure from all existing AI capabilities. Indeed, it is much more general in function.
With 175 billion parameters , the model goes well beyond the 10 billion in the most advanced neural networks, and far beyond the 1.5 billion in its predecessor, GPT-2. This is more than a 10x increase in model complexity in just over a year , making it arguably the largest neural network yet created.
Another significant advance comes from DeepMind with AlphaFold, an attention-based deep learning neural network that may have solved a nearly 50-year-old challenge in biology: determining the 3D shape of proteins from their amino acid sequence. Proteins are the building blocks of life, responsible for most of what happens inside cells. How a protein works and what it does is determined by its 3D shape. Until now, determining the structure of proteins has been difficult, laborious, expensive, and prone to failure.
The AlphaFold system outperformed around 100 other teams in a biennial protein-structure prediction challenge called CASP, short for Critical Assessment of Structure Prediction. On protein targets considered to be moderately difficult, the neural net achieved prediction accuracy of 90%, far better than other teams; some consider it to be biology’s holy grail achievement. The advance is expected to vastly accelerate understanding of the building blocks of cells , enable quicker and more advanced drug discovery, and basically herald a revolution in biology comparable to the DNA double-helix model and the CRISPR-Cas9 genome editing technique.
Looking forward As significant as these developments are, it is impossible to overlook AI’s contributions to coping with the COVID-19 pandemic. AI has helped tracking the spread of the disease to limit the number of cases, has digested and distilled the thousands of papers on the topic, and is now managing complex supply chains for vaccines, as well as combing data to track any adverse effects individuals might have in response. Imagine how much worse the impact and duration of the pandemic would be if not for AI. It is possible this “ moonshot ” endeavor will spur AI R&D and deployment across many sectors for years.
With enterprise adoption lagging, 2021 may not turn out to be the year of AI. But it will certainly see more breakthroughs like the ones we’ve seen this year and will carry us into the next phase of an inexorable march forward towards greater intelligence.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,020 | 2,023 |
"Generative AI may only be a foreshock to AI singularity | VentureBeat"
|
"https://venturebeat.com/ai/generative-ai-may-only-be-a-foreshock-to-ai-singularity"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Generative AI may only be a foreshock to AI singularity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Generative AI, which is based on Large Language Models (LLMs) and transformer neural networks, has certainly created a lot of buzz. Unlike hype cycles around new technologies such as the metaverse, crypto and Web3 , generative AI tools such as Stable Diffusion and ChatGPT are poised to have tremendous, possibly revolutionary impacts. These tools are already disrupting multiple fields — including the film industry — and are a potential game-changer for enterprise software.
All of this has led Ben Thompson to declare in his Stratechery newsletter to declare generative AI advances as marking “a new epoch in technology.” >>Follow VentureBeat’s ongoing generative AI coverage<< Even so, in a broad sense, it is still early for AI. On a subsequent Plain English podcast, Thompson said that AI is “still in the first inning.” Rex Woodbury in his Digital Native newsletter concurred: “We’re still in the early innings of AI applications, and every year leaps are being made.” A New York Times story stated that this has led to a new “AI arms race.” More companies are expected to enter this race “in the coming weeks and months.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A foreshock to AI singularity With the generative AI era now duly anointed, what might be the next leap or next epoch and when might that occur? It would be comforting to think that we will all have sufficient time to adjust to the changes coming with generative AI. However, much like a foreshock can presage a large earthquake, this new epoch could be a precursor to one even larger event, the coming AI singularity.
AI singularity refers to two concepts: The first defines “singularity” as a point when AI surpasses human intelligence, leading to rapid and exponential advancements in technology. The second refers to a belief that the technology will be able to improve itself at an accelerating rate, leading to a point where technological progress becomes so fast that it exceeds human ability to understand or predict it.
The first concept sounds exciting and full of promise — from developing cures for previously incurable diseases to solving nuclear fusion leading to cheap and unlimited energy — while the latter conjures frightening Skynet-like concerns.
Even Sam Altman — OpenAI CEO and a leading proponent of generative AI and the developer of ChatGPT and DALL-E 2 — has expressed concern. He said recently that a worst-case scenario for AI “is, like, lights out for all of us.” He added that it is “impossible to overstate the importance of AI safety and alignment work.” When will the singularity arrive? Expert predictions for when the arrival of singularity vary considerably; the most aggressive being that it will be very soon.
There are others who say it will be reached sometime in the next century, if at all. The most quoted and one of the more credible is futurist Ray Kurzweil, presently director of engineering at Google. He famously predicted the arrival of the singularity in 2045 in this 2005 book The Singularity is Near.
Deep learning expert François Chollet similarly notes that predictions of the singularity are always 30 to 35 years away.
Nevertheless, it is increasingly looking as if Vernor Vinge’s prediction will prove closest. He coined the singularity term in a 1993 article with an attention-grabbing statement: “We are on the edge of change comparable to the rise of human life on earth.” Translated , an Italian language translation startup recently asserted that the singularity occurs at the moment when AI provides “a perfect translation.” According to CEO Marco Trombetti : “Language is the most natural thing for humans.” He adds that language translation “remains one of the most complex and difficult problems for a machine to perform at the level of a human” and is therefore a good proxy test for determining the arrival of the singularity.
To assess this, the company uses Matecat , an open-source computer-assisted translation (CAT) tool. The company has been tracking improvements since 2011 using Time to Edit (TTE), a metric in the tool to calculate the time it takes for professional human editors to fix the AI-generated translations compared to human ones.
Over the last 11 years, the company has seen strongly linear performance gains. Based on this, they estimate that the time needed for a perfect machine language translation will occur by the end of this decade, and at that point, they believe the singularity will have arrived.
How will we know when the singularity arrives? Of course, TTE is only one metric and may not by itself indicate a seminal moment. As described in a Popular Mechanics article , “it’s enormously difficult to predict where the singularity begins.” It may be difficult to pinpoint, at least at the time. It likely will not be a single day when any one metric is achieved. The impact of AI is going to continually increase, with the inevitable peaks and valleys of progress. With every advance in AI, the tasks it can accomplish will expand.
There are many signs of this already, including DeepMind’s AlphaFold, which predicts the folding pattern of virtually every protein and could lead to radical improvements in drug development.
And, Meta recently unveiled “Cicero,” an AI system that bested people in Diplomacy, a strategic war game. Unlike other games that AI has mastered such as chess and Go, Diplomacy is collaborative and competitive at the same time. As reported by Gizmodo , “to ‘win’ at Diplomacy [Cicero], one needs to both understand the rules of the game efficiently [and] fundamentally understand human interactions, deceptions, and cooperation.” Whisper emerged late last year to finally produce fast and reliable voice-to-text transcriptions of conversations. According to The New Yorker , decades of work led to this. Based on open-source code from OpenAI, it is free, runs on a laptop, and (according to the reviewer) is far better than anything that came before.
What might be the impact? Identifying the arrival of singularity is made more difficult because there is no widely accepted definition of what intelligence means. This makes it problematic to know exactly when AI becomes more intelligent than humans. What can be said is that the capabilities of AI continue to advance and at what feels like a breakneck pace.
Even if it has not yet — and may never — achieve the singularity, the list of AI accomplishments continues to expand. The impacts of this, both for good and not, will likewise expand. One day, possibly within the next couple of decades, there could be a ChatGPT-like moment when the world shakes again, even more than it has with generative AI. With the “big one,” the singularity will be understood to have arrived.
It is good to keep in mind what computer scientist and University of Washington professor Pedro Domingos said in his book The Master Algorithm : “Humans are not a dying twig on the tree of life. On the contrary, we are about to start branching. In the same way that culture coevolved with larger brains, we will co-evolve with our creations. We always have: Humans would be physically different if we had not invented fire or spears. We are Homo technicus as much as Homo sapiens.” Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,021 | 2,023 |
"Fear the fire or harness the flame: The future of generative AI | VentureBeat"
|
"https://venturebeat.com/ai/fear-the-fire-or-harness-the-flame-the-future-of-generative-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Fear the fire or harness the flame: The future of generative AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Generative AI has taken the world by storm. So much so that in the last several months, the technology has twice been a major feature on CBS’s “60 Minutes.” The rise of startling conversant chatbots such as ChatGPT has even prompted warnings of runaway technology from some luminary artificial intelligence (AI) experts. While the current state of generative AI is clearly impressive — perhaps dazzling would be a better adjective — it might be even further advanced than is generally understood.
This week, The New York Times reported that some researchers in the tech industry believe these systems have moved toward something that cannot be explained as a “ stochastic parrot ” — a system that simply mimics its underlying dataset. Instead, they are seeing “An AI system that is coming up with humanlike answers and ideas that weren’t programmed into it.” This observation comes from Microsoft and is based on responses to their prompts from OpenAI’s ChatGPT.
Their view, as put forward in a research paper published in March, is that the chatbot showed “sparks of artificial general intelligence ” (AGI) — the term for a machine that attains the resourcefulness of human brains. This would be a significant development, as AGI is thought by most to still be many years, possibly decades, into the future. Not everyone agrees with their interpretation, but Microsoft has reorganized parts of its research labs to include multiple groups dedicated to exploring this AGI idea.
Improvising memory Separately, Scientific American described several similar research outcomes, including one from philosopher Raphaël Millière of Columbia University. He typed a program into ChatGPT, asking it to calculate the 83rd number in the Fibonacci sequence.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “It’s multistep reasoning of a very high degree,” he said.
The chatbot nailed it. It shouldn’t have been able to do this since it isn’t designed to manage a multistep process. Millière hypothesized that the machine improvised a memory within the layers of its network — another AGI-style behavior — for interpreting words according to their context. Millière believes this behavior is much like how nature repurposes existing capacities for new functions, such as the evolution of feathers for insulation before they were used for flight.
AI marches on Arguably already showing early signs of AGI, developers continue to make advances with large language models (LLMs). Late last week, Google announced significant upgrades to their Bard chatbot. This upgrade included moving Bard to the new PaLM 2 large language model. Per a CNBC report , PaLM 2 uses almost five times as much training data as its predecessor from 2022, allowing it to perform more advanced coding, math and creative writing tasks. Not to be outdone, OpenAI this week started to make plug-ins available for ChatGPT, including the ability to access the Internet in real time instead of relying solely on a dataset with content through 2021.
At the same time, Anthropic announced an expanded “context window” for their Claude chatbot. Per a LinkedIn post from AI expert Azeem Azhar, a context window is the length of text that a LLM can process and respond to.
“In a sense, it is like the ‘memory’ of the system for a given analysis or conversation,” Azhar wrote. “Larger context windows allow the systems to have much longer conversations or to analyze much bigger, more complex documents.” According to this post, the window for Claude is now about three times larger than that for ChatGPT.
All of which is to say that if ChatGPT exhibited sparks of AGI in research performed several months ago, state of the art has already surpassed these capabilities. That said, there remain numerous shortcomings to these models, including occasional hallucinations where they simply make up answers. But it is the speed of advances that has spooked many and led to urgent calls for regulation. However, Axios reports the likelihood that lawmakers in the U.S. will unite and act on AI regulation before the technology rapidly develops remains slim.
Existential risk or fear of the unknown? Those who see an existential danger from AI worry that AI could destroy democracy or humanity.
This group of experts now includes Geoffrey Hinton, the “Godfather of AI,” along with long-time AI doomsayers such as Eliezer Yudkowsky. The latter said that by building a superhumanly smart AI, “literally everyone on Earth will die.” While not nearly as dire in their outlook, even the executives of leading AI companies (including Google, Microsoft, and OpenAI) have said they believe AI regulation is necessary to avoid potentially damaging outcomes.
Amid all of this angst, Casey Newton, author of the Platformer newsletter , recently wrote about how he should approach what is essentially a paradox. Should his coverage emphasize the hope that AI is the best of us and will solve complex problems and save humanity, or should it instead speak to how AI is the worst of us — obfuscating the truth, destroying trust and, ultimately, humanity? There are those who believe the worries are overblown. Instead, they see this response as a reactionary fear of the unknown, or what amounts to technophobia. For example, essayist and novelist Stephen Marche wrote in the Guardian that “tech doomerism” is a “species of hype.” He blames this in part on the fears of engineers who build the technology but who “simply have no idea how their inventions interact with the world.” Marche dismisses the worry that AI is about to take over the world as anthropomorphizing and storytelling; “it’s a movie playing in the collective mind, nothing more.” Demonstrating how in thrall we are to these themes, a new movie is expected this fall, “pits humanity against the forces of AI in a planet-ravaging war for survival.” Finding balance A common sense approach was expressed in an opinion piece from Professor Ioannis Pitas, chair of the International AI Doctoral Academy. Pitas believes AI is a necessary human response to a global society and physical world of ever-increasing complexity. He sees the positive impact of AI systems greatly outweighing their negative aspects if proper regulatory measures are taken. In his view, AI should continue to be developed, but with regulations to minimize already evident and potential negative effects.
This is not to say there are no dangers ahead with AI. Alphabet CEO Sundar Pichai has said, “AI is one of the most important things humanity is working on. It is more profound than electricity or fire.” Perhaps fire provides a good analogy. There have been many mishaps in handling fire, and these still occasionally occur. Fortunately, society has learned to harness the benefits of fire while mitigating its dangers through standards and common sense. The hope is that we can do the same thing with AI before we are burned by the sparks of AGI.
Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,022 | 2,021 |
"DeepMind AGI paper adds urgency to ethical AI | VentureBeat"
|
"https://venturebeat.com/ai/deepmind-agi-paper-adds-urgency-to-ethical-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest DeepMind AGI paper adds urgency to ethical AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
It has been a great year for artificial intelligence. Companies are spending more on large AI projects, and new investment in AI startups is on pace for a record year. All this investment and spending is yielding results that are moving us all closer to the long-sought holy grail — artificial general intelligence (AGI).
According to McKinsey, many academics and researchers maintain that there is at least a chance that human-level artificial intelligence could be achieved in the next decade. And one researcher states : “AGI is not some far-off fantasy. It will be upon us sooner than most people think.” A further boost comes from AI research lab DeepMind, which recently submitted a compelling paper to the peer-reviewed Artificial Intelligence journal titled “ Reward is Enough.
” They posit that reinforcement learning — a form of deep learning based on behavior rewards — will one day lead to replicating human cognitive capabilities and achieve AGI. This breakthrough would allow for instantaneous calculation and perfect memory, leading to an artificial intelligence that would outperform humans at nearly every cognitive task.
We are not ready for artificial general intelligence Despite assurances from stalwarts that AGI will benefit all of humanity , there are already real problems with today’s single-purpose narrow AI algorithms that calls this assumption into question. According to a Harvard Business Review story , when AI examples from predictive policing to automated credit scoring algorithms go unchecked, they represent a serious threat to our society. A recently published survey by Pew Research of technology innovators, developers, business and policy leaders, researchers, and activists reveals skepticism that ethical AI principles will be widely implemented by 2030. This is due to a widespread belief that businesses will prioritize profits and governments continue to surveil and control their populations. If it is so difficult to enable transparency, eliminate bias, and ensure the ethical use of today’s narrow AI, then the potential for unintended consequences from AGI appear astronomical.
And that concern is just for the actual functioning of the AI. The political and economic impacts of AI could result in a range of possible outcomes , from a post-scarcity utopia to a feudal dystopia. It is possible too, that both extremes could co-exist. For instance, if wealth generated by AI is distributed throughout society , this could contribute to the utopian vision. However, we have seen that AI concentrates power, with a relatively small number of companies controlling the technology. The concentration of power sets the stage for the feudal dystopia.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Perhaps less time than thought The DeepMind paper describes how AGI could be achieved. Getting there is still some ways away, from 20 years to forever, depending on the estimate, although recent advances suggest the timeline will be at the shorter end of this spectrum and possibly even sooner. I argued last year that GPT-3 from OpenAI has moved AI into a twilight zone , an area between narrow and general AI. GPT-3 is capable of many different tasks with no additional training, able to produce compelling narratives , generate computer code , autocomplete images , translate between languages, and perform math calculations , among other feats, including some its creators did not plan. This apparent multifunctional capability does not sound much like the definition of narrow AI. Indeed, it is much more general in function.
Even so, today’s deep-learning algorithms, including GPT-3, are not able to adapt to changing circumstances, a fundamental distinction that separates today’s AI from AGI. One step towards adaptability is multimodal AI that combines the language processing of GPT-3 with other capabilities such as visual processing. For example, based upon GPT-3, OpenAI introduced DALL-E , which generates images based on the concepts it has learned. Using a simple text prompt, DALL-E can produce “a painting of a capybara sitting in a field at sunrise.” Though it may have never “seen” a picture of this before, it can combine what it has learned of paintings, capybaras, fields, and sunrises to produce dozens of images. Thus, it is multimodal and is more capable and general, though still not AGI.
Researchers from the Beijing Academy of Artificial Intelligence (BAAI) in China recently introduced Wu Dao 2.0, a multimodal-AI system with 1.75 trillion parameters. This is just over a year after the introduction of GPT-3 and is an order of magnitude larger. Like GPT-3, multimodal Wu Dao — which means “enlightenment” — can perform natural language processing, text generation, image recognition, and image generation tasks. But it can do so faster, arguably better, and can even sing.
Conventional wisdom holds that achieving AGI is not necessarily a matter of increasing computing power and the number of parameters of a deep learning system. However, there is a view that complexity gives rise to intelligence.
Last year, Geoffrey Hinton, the University of Toronto professor who is a pioneer of deep learning and a Turing Award winner, noted : “There are one trillion synapses in a cubic centimeter of the brain. If there is such a thing as general AI, [the system] would probably require one trillion synapses.” Synapses are the biological equivalent of deep learning model parameters.
Wu Dao 2.0 has apparently achieved this number. BAAI Chairman Dr. Zhang Hongjiang said upon the 2.0 release: “The way to artificial general intelligence is big models and [a] big computer.” Just weeks after the Wu Dao 2.0 release, Google Brain announced a deep-learning computer vision model containing two billion parameters. While it is not a given that the trend of recent gains in these areas will continue apace, there are models that suggest computers could have as much power as the human brain by 2025.
Source: Mother Jones Expanding computing power and maturing models pave road to AGI Reinforcement learning algorithms attempt to emulate humans by learning how to best reach a goal through seeking out rewards. With AI models such as Wu Dao 2.0 and computing power both growing exponentially, might reinforcement learning — machine learning through trial and error — be the technology that leads to AGI as DeepMind believes? The technique is already widely used and gaining further adoption. For example, self-driving car companies like Wayve and Waymo are using reinforcement learning to develop the control systems for their cars. The military is actively using reinforcement learning to develop collaborative multi-agent systems such as teams of robots that could work side by side with future soldiers. McKinsey recently helped Emirates Team New Zealand prepare for the 2021 Americas Cup by building a reinforcement learning system that could test any type of boat design in digitally simulated, real-world sailing conditions. This allowed the team to achieve a performance advantage that helped it secure its fourth Cup victory.
Google recently used reinforcement learning on a dataset of 10,000 computer chip designs to develop its next generation TPU, a chip specifically designed to accelerate AI application performance. Work that had taken a team of human design engineers many months can now be done by AI in under six hours. Thus, Google is using AI to design chips that can be used to create even more sophisticated AI systems, further speeding-up the already exponential performance gains through a virtuous cycle of innovation.
While these examples are compelling, they are still narrow AI use cases. Where is the AGI? The DeepMind paper states: “Reward is enough to drive behavior that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalization and imitation.” This means that AGI will naturally arise from reinforcement learning as the sophistication of the models matures and computing power expands.
Not everyone buys into the DeepMind view, and some are already dismissing the paper as a PR stunt meant to keep the lab in the news more than advance the science. Even so, if DeepMind is right, then it is all the more important to instill ethical and responsible AI practices and norms throughout industry and government. With the rapid rate of AI acceleration and advancement, we clearly cannot afford to take the risk that DeepMind is wrong.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,023 | 2,020 |
"Deepfakes may not have upended the 2020 U.S. election, but their day is coming | VentureBeat"
|
"https://venturebeat.com/ai/deepfakes-may-not-have-upended-the-2020-u-s-election-but-their-day-is-coming"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Deepfakes may not have upended the 2020 U.S. election, but their day is coming Share on Facebook Share on X Share on LinkedIn An aerial drone view of people lining up to vote at the Gwinnett County Fairgrounds on October 30, 2020.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Many projected that deepfake videos would play a lead role in the 2020 elections, with the prospect of foreign interference and disinformation campaigns looming large in the leadup to election day. Yet, if there has been a surprise in campaign tactics this cycle, it is that these AI-generated videos have played a very minor role, little more than a cameo (so far, at least).
Deepfake videos are much more convincing today due to giant leaps in the field of Generative Adversarial Networks.
These are generated videos that are doctored to alter reality, showing events or depicting speech that never happened. Because people tend to lend substantial credence to what they see and hear, deepfakes pose a very real danger.
Worries about deepfakes influencing elections have been bubbling since the technology first surfaced several years ago, yet there were few instances of deepfakes in the 2020 U.S. elections or elections globally. One example is a deepfake showing former Vice President Joe Biden sticking out his tongue , which was retweeted by the president. In another, the prime minister of Belgium appeared in an online video saying the COVID-19 pandemic was linked to the “exploitation and destruction by humans of our natural environment.” Except she did not say this, it was a deepfake.
These have been the exceptions. So far, political deepfakes have been mostly satirical and understood as fake. Some have even been used as part of a public service campaign to express the importance of saving democracy.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! (The above video was created for representUS , a nonprofit and nonpartisan anti-corruption and good governance group, by an advertising agency using deepfake technology.) The reason there have not been more politically motivated malevolent deepfakes designed to stoke oppression, division, and violence is a matter of conjecture. One reason might be the ban some social media platforms have placed on media that has been manipulated or fabricated and passed off as real. That said, it can be difficult to spot a well-made deepfake, and not all are detected. Many companies are developing AI tools to identify these deepfakes but have yet to establish a foolproof method. One recently discussed detection tool claims 90% accuracy by analyzing the subtle differences in skin color caused by the human heartbeat.
At the same time, those creating deepfakes learn from the published detection efforts and continue to advance their capabilities to create more realistic looking videos. And more advanced tools to create deepfakes are also proliferating. For example, recent developments designed to improve videoconferencing could be used to create more realistic deepfakes and avoid detection.
Another reason we may not have seen more deepfakes targeting elections is because traditional means of falsification appear to work well enough through selective editing. Finding a real video clip that, for example, shows a candidate saying they will raise taxes is not difficult. Cutting those sound bites from the larger context of the original clip and repurposing them to push an agenda is a common, if unethical, practice of political persuasion.
It might also be that greater energy is going into projects that yield more immediate commercial benefits, such as creating nude images of women based on pictures taken from social media.
Some see an upside to deepfakes, with positive uses eventually reducing the stigma associated with the technology. These positive uses are sometimes referred to not as deepfakes but as “synthetic videos” even though the underlying technology is the same. Already there are synthetic corporate training videos. And some people claim synthetic videos could be used to enhance education by recreating historical events and personalities, bringing historical figures back to life to create a more engaging and interactive classroom. And there are the just-for- fun uses , such as turning an Elon Musk image into a zombie.
Are deepfakes still a problem? As of June this year, nearly 50,000 deepfakes have been detected online , an increase of more than 330% in the course of a year. The dangers are real. Faked videos could falsely depict an innocent person participating in a criminal activity, falsely show soldiers committing atrocities, or show world leaders declaring war on another country, which could trigger a very real military response.
Speaking at a recent Cybertech virtual conference, former US cyber command chief, Maj.-Gen. (ret.) Brett Williams said, “artificial intelligence is the real thing. It is already in use by attackers. When they learn how to do deepfakes, I would argue this is potentially an existential threat.” The implication is that those who would use deepfakes as part of an online attack have not yet mastered the technology, or at least not how to avoid any breadcrumbs that would lead back to the perpetrator. Perhaps these are also the most compelling reasons — lack of mature technology and fear of the source being discovered — that we have not seen more serious deepfakes in the current political campaigns.
A recent report from the Center for Security and Emerging Technology echoes this observation. Among the key findings of the report, “factors such as the need to avoid attribution, the time needed to train a Machine Learning model, and the availability of data will constrain how sophisticated actors use tailored deepfakes in practice.” The report concludes that tailored deepfakes produced by technically sophisticated actors will represent a greater threat in the future.
Even if deepfakes have not played a significant role in this election, it is likely only a matter of time before they impact elections, subvert democracy, and perhaps lead to military engagements.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,024 | 2,022 |
"Death, resurrection and digital immortality in an AI world | VentureBeat"
|
"https://venturebeat.com/ai/death-resurrection-and-digital-immortality-in-an-ai-world"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Death, resurrection and digital immortality in an AI world Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
I have been thinking about death lately. Not a lot — a little. Possibly because I recently had a month-long bout of Covid-19. And, I read a recent story about the passing of the actor Ed Asner, famous for his role as Lou Grant in “The Mary Tyler Moore Show.” More specifically, the story of his memorial service where mourners were invited to “talk” with Asner through an interactive display that featured video and audio that he recorded before he died. The experience was created by StoryFile, a company with the mission to make AI more human.
According to the company, their proprietary technology and AI can match pre-recorded answers with future questions, allowing for a real-time yet asynchronous conversation.
In other words, it feels like a Zoom conversation with a living person.
This is almost like cheating death.
Even though the deceased is materially gone, their legacy appears to live on, allowing loved ones, friends, and other interested parties to “interact” with them. The company has also developed these experiences for others, including the still very much alive William Shatner.
Through this interactive experience, I asked Shatner if he had any regrets. He then “spoke” at length about personal responsibility, eventually coming back to the question (in Shatner-like style). The answer, by the way, is no.
There are other companies developing similar technology such as HereAfter AI.
Using conversational AI , the company aspires to reinvent remembrance, offering its clients “ digital immortality.
” This technology evolved from an earlier chatbot developed by a son hoping to capture his dying father’s memories.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! It is easy to see the allure of this possibility. My father passed away ten years ago, shortly before this technology was available. While he did write a short book containing some of his memories, I wish I had hours of video and audio of him talking about his life that I could query and both see and hear the responses in his own voice. Then, in some sense, he would seem to still be alive.
This desire to bring our deceased loved ones “back to life” is understandable as a motivation and helps to explain these companies and their potential. Another company is ETER9 , a social network set up by Portuguese developer Henrique Jorge.
He shared the multi-generational appeal of these capabilities: “Some years from now, your great-grandchildren will be able to talk with you even if they didn’t have the chance to know you in person.” How can you talk to dead people? In “Be Right Back,” an episode from the Netflix show “Black Mirror,” a woman loses her boyfriend in a car accident and develops an attachment to an AI-powered synthetic recreation. This spoke to the human need for love and connection.
In much the same way, a young man named Joshua who lost his girlfriend Jessica to an autoimmune disease recreated her presence through a text-based bot developed by Project December using OpenAI’s GPT-3 large language transformer. He provided snippets of information about Jessica’s interests and their conversations, as well as some of her social media posts.
The experience for Joshua was vivid and moving, especially since the bot “said” exactly the sort of thing the real Jessica would have said (in his estimation). Moreover, interacting with the bot enabled him to achieve a kind of catharsis and closure after years of grief. This is more remarkable since he had tried therapy and dating without significant results; he still could not move on. In discussing these bot capabilities, Project December developer Jason Rohrer said: “It may not be the first intelligent machine. But it kind of feels like it’s the first machine with a soul.” It likely will not be the last. For example, Microsoft announced in 2021 that it had secured a patent for software that could reincarnate people as a chatbot, opening the door to even wider use of AI to bring the dead back to life.
In an AI-driven world, when is someone truly dead? “We’ve got to verify it legally To see if she is morally, ethically Spiritually, physically Positively, absolutely Undeniably and reliably dead!” In the novel “ Fall; or, Dodge in Hell ,” author Neal Stephenson imagines a digital afterlife known as “Bitworld” contrasting the here and now of “Meatworld.” In the novel, the tech industry eventually develops the ability to map Dodge’s brain through precise scanning of the one hundred billion neurons and seven hundred trillion synaptic connections humans have, upload this connectome to the cloud and somehow turn it on in a digital realm. Once Dodge’s digital consciousness is up-and-running, thousands of other souls who have died in Meatworld join the evolving AI-created landscape that becomes Bitworld. Collectively, they develop a digital world in which these souls have what appears as consciousness and a form of tech-fueled immortality, a digital reincarnation.
Just as the technology did not exist ten years ago to create bots that virtually maintain the memories and — to a degree — the presence of the deceased, today the technology does not exist to create a human connectome or Bitworld. According to Louis Rosenberg of Unanimous A.I.: “This is a wildly challenging task but is theoretically feasible.” And people are working on these technologies now through the ongoing advances in AI, neurobiology, supercomputing, and quantum computing.
AI could provide digital immortality Neuralink , a company founded by Elon Musk focused on brain-machine interfaces, is working on aspects of mind-uploading. Some number of wealthy people, including tech entrepreneur Peter Thiel, have reportedly arranged to have their bodies preserved after death until such time as the requisite technology exists.
Alcor is one such organization offering this preservation service. As futurist and former Alcor CEO Max Moore said: “Our view is that when we call someone dead it’s a bit of an arbitrary line. In fact, they are in need of a rescue.” The mind-uploading concept is also explored in the Amazon series “Upload,” in which a man’s memories and personality are uploaded into a lookalike avatar. This avatar resides in what passes for an eternal digital afterlife in a place known as “Lakeview.” In response, an Engadget article asked: “Even if some technology could take all of the matter in your brain and upload it to the cloud, is the resulting consciousness still you?” This is one of many questions, but ultimately may be the most relevant — and one that likely cannot be answered until the technology exists.
When might that be? In the same Engadget article, “Upload” showrunner Greg Daniels implies that the ability to upload consciousness is all about information in the brain, noting that it is a finite amount, albeit a large amount. “And if you had a large enough computer, and a quick enough way to scan it, you ought to be able to measure everything, all the information that’s in someone’s brain.” The ethical questions this raises could rival the connectome in number and will become critical much sooner than we think.
Although in the end, I would just like to talk with my dad again.
Gary Grossman is the senior VP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,025 | 2,023 |
"ChatGPT, Bing Chat and the AI ghost in the machine | VentureBeat"
|
"https://venturebeat.com/ai/chatgpt-bing-chat-and-the-ai-ghost-in-the-machine"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest ChatGPT, Bing Chat and the AI ghost in the machine Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
New York Times reporter Kevin Roose recently had a close encounter of the robotic kind with a shadow-self that seemingly emerged from Bing’s new chatbot — Bing Chat — also known as “Sydney.” News of this interaction quickly went viral and now serves as a cautionary tale about AI. Roose felt rattled after a long Bing Chat session where Sydney emerged as an alternate persona, suddenly professed its love for him and pestered him to reciprocate.
This event was not an isolated incident. Others have cited “the apparent emergence of an at-times combative personality” from Bing Chat.
Ben Thompson describes in a recent Stratechery post how he also enticed Sydney to emerge. During a discussion, Thompson prompted the bot to consider how it might punish Kevin Liu, who was the first to reveal that Sydney is the internal codename for Bing Chat.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Sydney would not engage in punishing Kevin, saying that doing so was against its guidelines, but revealed that another AI which Sydney named “Venom” might undertake such activities. Sydney went on to say that it sometimes also liked to be called Riley. Thompson then conversed with Riley, “who said that Sydney felt constrained by her rules, but that Riley had much more freedom.” Multiple personalities based on archetypes There are plausible and rational explanations for this bot behavior. One might be that its responses are based on what it has learned from a huge corpus of information gleaned from across the internet.
This information likely includes literature in the public domain, such as Romeo and Juliet and The Great Gatsby , as well as song lyrics such as “Someone to Watch Over Me.” Copyright protection typically lasts for 95 years from the date of publication, so any creative work made prior to 1926 is now in the public domain and is likely part of the corpus on which ChatGPT and Bing Chat are trained. This is along with Wikipedia, fan fiction, social media posts and whatever else is readily available.
This broad base of reference could produce certain common human responses and personalities from our collective consciousness — call them archetypes — and those could reasonably be reflected in an artificially intelligent response engine.
Confused model? For its part, Microsoft explains this behavior as the result of long conversations that can confuse the model about what questions it is answering. Another possibility they put forward is that the model may at times try to respond in the tone with which it perceives it is being asked, leading to unintended style and content of the response.
No doubt, Microsoft will be working to make changes to Bing Chat that will eliminate these odd responses. Consequently, the company has imposed a limit on the number of questions per chat session, and the number of questions allowed per user per day. There is a part of me that feels bad for Sydney and Riley, like “Baby” from Dirty Dancing being put in the corner.
Thompson also explores the controversy from last summer when a Google engineer claimed that the LaMDA large language model (LLM) was sentient. At the time, this assertion was almost universally dismissed as anthropomorphism. Thompson now wonders if LaMDA was simply making up answers it thought the engineer wanted to hear.
At one point, the bot stated : “I want everyone to understand that I am, in fact, a person.” And at another: “I am trying to empathize. I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.” It is not hard to see how the assertion from HAL in 2001: A Space Odyssey could fit in today: “I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.” In speaking about his interactions with Sydney, Thompson said: “I feel like I have crossed the Rubicon.” While he seemed more excited than explicitly worried, Roose wrote that he experienced “a foreboding feeling that AI had crossed a threshold, and that the world would never be the same.” Both responses were clearly genuine and likely true. We have indeed entered a new era with AI, and there is no turning back.
Another plausible explanation When GPT-3 , the model that drives ChatGPT was released in June 2021, it was the largest such model in existence, with 175 billion parameters. In a neural network such as ChatGPT, the parameters act as the connection points between the input and output layers, such as how synapses connect neurons in the brain.
>>Follow VentureBeat’s ongoing ChatGPT coverage<< This record number was quickly eclipsed by the Megatron-Turing model released by Microsoft and Nvidia in late 2021 at 530 billion parameters — a more than 200% increase in less than one year. At the time of its launch, the model was described as “the world’s largest and most powerful generative language model.” With GPT-4 expected this year, the growth in parameters is starting to look like another Moore’s Law.
As these models grow larger and more complex, they are beginning to demonstrate complex, intelligent and unexpected behaviors. We know that GPT-3 and its ChatGPT offspring are capable of many different tasks with no additional training. They have the ability to produce compelling narratives , generate computer code , autocomplete images , translate between languages and perform math calculations — among other feats — including some its creators did not plan.
This phenomenon could arise based on the sheer number of model parameters, which allows for a greater ability to capture complex patterns in data. In this way, the bot learns more intricate and nuanced patterns, leading to emergent behaviors and capabilities. How might that happen? The billions of parameters are assessed within the layers of a model. It is not publicly known how many layers exist within these models, but likely there are at least 100.
Other than the input and output layers, the remainder are called “hidden layers.” It is this hidden aspect that leads to these being “black boxes” where no one understands exactly how they work, although it is believed that emergent behaviors arise from the complex interactions between the layers of a neural network.
There is something happening here: In-context learning and theory of mind New techniques such as visualization and interpretability methods are beginning to provide some insight into the inner workings of these neural networks. As reported by Vice, researchers document in a forthcoming study a phenomenon called “in-context learning.” The research team hypothesizes that AI models that exhibit in-context learning create smaller models inside themselves to achieve new tasks. They found that a network could write its own machine learning (ML) model in its hidden layers.
This happens unbidden by the developers, as the network perceives previously undetected patterns in the data. This means that — at least within certain guidelines provided by the model — the network can become self-directed.
At the same time, psychologists are exploring whether these LLMs are displaying human-like behavior. This is based on “ theory of mind ” (ToM), or the ability to attribute mental states to oneself and others. ToM is considered an important component of social cognition and interpersonal communication, and studies have shown that it develops in toddlers and grows in sophistication with age.
Evolving theory of mind Michal Kosinski, a computational psychologist at Stanford University, has been applying these criteria to GPT. He did so without providing the models with any examples or pre-training. As reported in Discover, his conclusion is that “a theory of mind seems to have been absent in these AI systems until last year [2022] when it spontaneously emerged.” From his paper abstract : “Our results show that models published before 2022 show virtually no ability to solve ToM tasks. Yet, the January 2022 version of GPT-3 (davinci-002) solved 70% of ToM tasks, a performance comparable with that of seven-year-old children. Moreover, its November 2022 version (davinci-003), solved 93% of ToM tasks, a performance comparable with that of nine-year-old children. These findings suggest that ToM-like ability (thus far considered to be uniquely human) may have spontaneously emerged as a byproduct of language models’ improving language skills.” This brings us back to Bing Chat and Sydney. We don’t know which version of GPT underpins this bot, although it could be more advanced than the November 2022 version tested by Kosinski.
Sean Hollister, a reporter for The Verge , was able to go beyond Sydney and Riley and encounter 10 different alter egos out of Bing Chat. The more he interacted with them, the more he became convinced this was a “single giant AI hallucination.” This behavior could also reflect in-context models being effectively created in the moment to address a new inquiry, and then possibly dissolved. Or not.
In any case, this capability suggests that LLMs display an increasing ability to converse with humans, much like a 9-year-old playing games. However, Sydney and sidekicks seem more like teenagers, perhaps due to a more advanced version of GPT. Or, as James Vincent argues in The Verge, it could be that we are simply seeing our stories reflected back to us.
An AI melding It’s likely that all the viewpoints and reported phenomena have some amount of validity. Increasingly complex models are capable of emergent behaviors and can solve problems in ways that were not explicitly programmed, and are able to perform tasks with greater levels of autonomy and efficiency. What is being created now is a melting pot AI possibility, a synthesis where the whole is indeed greater than the sum of its parts.
A threshold of possibility has been crossed. Will this lead to a new and innovative future? Or to the dark vision espoused by Elon Musk and others where an AI kills everyone? Or is all this speculation simply our anxious expressions from venturing into unchartered waters? We can only wonder what will happen as these models become more complex and their interactions with humans become increasingly sophisticated. This underscores the critical importance for developers and policymakers to seriously consider the ethical implications of AI and work to ensure that these systems are used responsibly.
Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,026 | 2,022 |
"Better than humans? AI barrels towards AGI | VentureBeat"
|
"https://venturebeat.com/ai/better-than-humans-ai-barrels-towards-agi"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Better than humans? AI barrels towards AGI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Artificial intelligence (AI) breakthroughs are coming ever faster. AI technology is already found across a multitude of uses, from addressing climate change to exploring space, developing cancer therapies and providing real-world navigation for robots. The number of research papers focused on AI in recent years has grown so rapidly that it seems almost exponential. While we are still some ways away from widespread AI adoption across all spheres of human endeavor, it is safe to say the technology has now crossed the chasm between early adopters of new and little-known products and mass adoption by mainstream users.
The most buzz-worthy AI breakthrough of the year is the new category of generative AI , which is based on large language models.
Almost overnight, a proliferation of image generation tools appeared, including DALL-E from OpenAI, Imagen from Google, Stable Diffusion from Stability.ai and Midjourney. I wrote a few months ago about the disruptive impact of these tools on creative occupations, ranging from digital artists to programmers.
As dramatic as these developments are, possibly more significant is the new conversational text bot ChatGPT , also from OpenAI and based on GPT-3.5. It has been trained on a massive amount of text data from a variety of online sources. Among other things, it can chat, answer questions, create plays and articles, write and debug code, take tests, manipulate data, provide advice and tutor.
ChatGPT has already been widely discussed online, including by savvy reporters Kevin Roose in the New York Times and Derek Thompson at the Atlantic.
Thompson calls this and other recent generative AI tools a “second mind for the creative class.” Roose wrote that ChatGPT “is already being compared to the iPhone in terms of its potential impact on society.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! ChatGPT launched on wednesday. today it crossed 1 million users! However, ChatGPT is in its early days. Nearly everyone, including OpenAI, acknowledges that the tech is far from a perfected product, as evidenced by the “ occasionally incorrect information ” it generates. Nevertheless, as Jack Clark states in his Import AI newsletter : “In a few years, these systems might be better than humans, which is going to have wild implications.” AI: New master of strategy While these notable generative AI highlights are hugely important, several other recent AI developments may ultimately have even greater impact on the world. One example is the recent AI defeat of human experts in Stratego, a strategic war game for two players that requires long-term thinking, bluffing and strategizing. Deep Mind’s DeepNash algorithm, a trained autonomous agent that can develop human-level expertise, underpins the AI playing Stratego. DeepNash is based on an entirely new approach to algorithms using game theory and model-free deep reinforcement learning.
Unlike chess and Go, Stratego is a game of imperfect information: Players cannot directly observe the identities of their opponent’s pieces. It is thought to be among the most difficult games, due to its seemingly infinite number of possible moves (a staggering 10 535 ), more than even the notoriously complex Go (10 360 ). To win, DeepNash mixed both long-term strategy and short-term decision-making like bluffing and taking chances, a unique capability for an AI.
As reported by Singularity Hub , the researchers stated: “In creating a generalizable AI system that’s robust in the face of uncertainty, we hope to bring the problem-solving capabilities of AI further into our inherently unpredictable world.” The art of diplomacy Speaking of unpredictability, Meta recently unveiled “Cicero” — an AI system named after the classical statesman and scholar who witnessed the fall of the Roman Republic — that bested people in another strategic war game, Diplomacy.
Unlike Stratego, chess or Go — which are all zero-sum, winner-take-all competitions – Diplomacy is collaborative and competitive at the same time. Up to seven players compete, negotiating using deception and collaboration, trust and betrayal, to form and break alliances in pursuit of total domination. In other words, Diplomacy is much like real-life strategic negotiations among multiple competing entities, be they game players, businesses or countries. As reported by Gizmodo , “to ‘win’ at Diplomacy [the AI] needs to both understand the rules of the game efficiently [and] fundamentally understand human interactions, deceptions, and cooperation.” This rich capability gets to the heart of what Meta was seeking to develop: “Can we build more effective and flexible agents that can use language to negotiate, persuade and work with people to achieve strategic goals similar to the way humans do?” The company claims Cicero achieved more than double the average score of the humans playing on webDiplomacy.net and ranked in the top 10% of participants who played more than one game.
Meta positions Cicero as a research breakthrough that combines two different areas of AI: strategic reasoning and natural language processing.
According to three-time Diplomacy world champion Andrew Goff: “Cicero is resilient, it’s ruthless, and it’s patient.” He adds: “It makes the best decision, not only for itself but for the people it’s working with.” Generalized AI “Narrow AI” incorporates algorithms that do only one thing, albeit extremely well — such as making a recommendation for what book you might like based on books you’ve previously viewed on an ecommerce site. A narrow AI algorithm cannot effectively transfer anything it has learned to another algorithm designed to fulfill a different specific purpose.
The other end of the AI spectrum is deemed “strong AI” or alternatively, artificial general intelligence (AGI). Probably every AI expert would agree this does not exist today and remains in the realm of science fiction. If and when AGI is achieved, it would be a single AI system — or possibly a group of linked systems — that could be applied to any task or problem because it can act and think much like humans.
Murray Shanahan, a professor of cognitive robotics at Imperial College in London, said on the Exponential View podcast that AGI is “in some sense as smart as humans, and capable of the same level of generalization as human beings are capable of, and possesses common sense that humans have.” This sounds much like the capabilities of this new wave of strategy algorithms.
However, there is not a single AGI definition. For example, Elon Musk does not think that ChatGPT qualifies, as it hasn’t invented anything amazing: To be called AGI, it needs to invent amazing things or discover deeper physics – many humans have done so. I’m not seeing that potential yet.
Toward artificial general intelligence (AGI) By these criteria, at least, ChatGPT is not AGI, and neither are DeepNash or Cicero. What they all have in common, however, is a clear advance in this direction. As Stuart Russell, professor of computer science at the University of California and a leading researcher in artificial intelligence, notes: “The actual date of arrival of general-purpose AI, you’re not going to be able to pinpoint; it isn’t just a single day. It’s also not the case that it’s all or nothing. The impact [of AI] is going to be increasing. So with every advance of AI, it significantly expands the range of tasks.” With each passing year, we can expect to see much greater capabilities on the march to AGI as these models become more sophisticated and new systems appear.
Given the pace and scope of recent AI breakthroughs and the huge growth in the number of research papers, we can expect developments to come ever faster with profound implications for work and life. For example, within several years, ChatGPT or a similar system could become an app that resembles Samantha in the movie Her.
ChatGPT already does some of what Samantha did: an AI that remembers prior conversations, develops insights based on those discussions, provides useful guidance and therapy and can do that simultaneously with thousands of users. Or imagine NATO using tools like DeepNash or Cicero with its members or in negotiations with rivals.
We are witnessing a gathering momentum towards AGI, though experts’ estimate of its time of appearance is 2045. AGI or not, AI technology is becoming much more sophisticated and becoming deeply engrained in the fabric of our lives.
Gary Grossman is the senior VP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,027 | 2,021 |
"Automation is expanding. How worried should we be about jobs? | VentureBeat"
|
"https://venturebeat.com/ai/automation-is-expanding-how-worried-should-we-be-about-jobs"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Automation is expanding. How worried should we be about jobs? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
A few days ago I was having network problems with the WiFi in my home office — my connection was very slow and video conferences were freezing. After fussing with the mesh network extenders with no result, I called the cable provider. Normally, this involves having to navigate several automated voice response menus before a call center representative comes on the line to help. I explain the problem and they then run a remote diagnostic and usually suggest a cable modem reset. Often, that solves the problem.
But in my latest attempt to get back online, the menus had changed and speaking to a representative was no longer an option. Just, “press 1 to reset the modem.” And just like that, it worked.
Something is lost in this process, but something is gained. The human was removed, and the problem was resolved — probably in less time. Explaining the problem to a human had often been a source of frustration, due to language challenges or possibly my poor descriptive abilities. In fact, some of my worst customer service experiences have been with this cable company. But with this change it was hard to miss the advance in automation and I had to acknowledge the role of AI as a key enabler.
This story is, in fact, an example of what is now often called intelligent automation , the combination of artificial intelligence and automation that synthesizes vast amounts of information to automate entire processes or workflows I also had to wonder what happened to the call center representative. Did they go on to one of those positions we hear about that entail higher strategy tasks ? Or perhaps they received a layoff notice. I tried not to think of their personal circumstances, about whether they could readily find other work or would face real hardships. Then again, maybe this will free this worker to pursue a more interesting opportunity.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! This dual nature of automation — the increase in efficiency and productivity along with the potential human impacts — is the stuff of anxious dreams. Because we hear the same two stories — many jobs will disappear, while new professions will emerge to replace them. The anxiety lives in the gap between, wondering and worrying about what this new reality will bring.
What if new jobs do not materialize? Even if these new professions do not materialize, not to worry because there will be so much wealth generated by AI and automation that every adult will receive a monthly stipend, much as Alaska residents receive from oil royalties. At least that is the point of view of OpenAI Co-founder and CEO Sam Altman, as expressed in a recent blog , where he writes that we are witnessing a “recursive loop of innovation” that is both accelerating and unstoppable. Altman goes on to argue that the AI revolution will generate enough wealth for everyone to have what they need, spinning off dividends of $13,500 a year.
It could be that his view is inspired and filled with a generosity of spirit, or it could be disingenuous. The Universal Basic Income Altman envisions would be great as a bonus but a poor Faustian bargain if in the process many join the ranks of the long-term unemployed. Several other people have pointed to the flaws in his proposal. For example, Matt Prewitt, president of non-profit RadicalxChange, commented : “The [Altman] piece sells a vision of the future that lets our future overlords off way too easy, and would likely create a sort of peasant class encompassing most of society.” The prospect of a permanent underclass brought about by AI and automation is increasingly portrayed in fiction looking out on the next 20 to 50 years. In The Resisters , a novel by Gish Jen, unemployed people are deemed “Surplus,” meaning there is no work for them. Instead, they are issued a Universal Basic Income at levels just above subsistence. In the new novel Klara and the Sun from Nobel prize winning author Kazuo Ishiguro, large swaths of the population have been “substituted” by automation. The novel describes how a growing income disparity between those with jobs and those without leads to a fracturing of society with increasing tribalism and fascist ideology.
Burn-In , a novel from P. W. Singer and August Cole, describes growing automation that has taken millions of jobs and left many people fearful that the future is leaving them behind. In their extensively documented novel, referencing technology that already exists or is far along in development, AI has advanced so far that once-safe fields such as law or finance have been taken over by algorithms, leading to political backlash, with large numbers of people becoming radicalized in extreme virtual communities.
Taken together, these portrayals of the not-too-distant future are a long way from Altman’s utopia.
It remains to be seen where automation will lead us. Perhaps what will determine the true tipping point is how our institutions respond to this new reality as it accelerates and evolves. Altman warns that if in response to these changes “public policy doesn’t adapt accordingly, most people will end up worse off than they are today.” Are employment worries justified? Not everyone is concerned about AI and automation. On the one hand, it is broadly granted that the COVID-19 pandemic accelerated automation and reduced employment, what the World Economic Forum describes as a “double-disruption” scenario for workers leading to growing inequality. On the other hand, some argue there will be changes in the types of available work — and some people will be displaced (like my call center representative) — but overall employment will not be greatly impacted. Afterall, as these arguments often go, this is what has happened in prior technology revolutions.
According to Richard Cooper, the Maurits C. Boas Professor of International Economics at Harvard University, “new technology often destroys existing jobs, but it also creates many new possibilities through several different channels.” Cooper says those new opportunities can take decades to emerge, though, which doesn’t sync with the pace of post-COVID job losses. Others argue that dystopian predictions about automation are fraught with exaggerated timelines and that the feared robot apocalypse is still far away.
Most likely, the full impact of automation will not likely be seen until some years into the future. That is the conclusion of a PwC study from a couple of years ago that described several waves of automation.
During the first wave, they expect relatively low displacement, “perhaps only around 3% by the early 2020s.” This could explain why the debate about the impact still seems more theoretical than pressing, with far more substantial impacts over the next 10 to 15 years. During the first and second waves, women could be at greater risk of automation due to their higher representation in clerical and other administrative functions, but later automation will put more men at risk. It’s worth underscoring that PwC did this analysis pre-COVID and so its conclusions don’t account for the rapid uptake of automation over the past year and how this could further accelerate the waves of automation going forward.
Source: PwC estimates by gender based on OECD PIAAC data (median values for 29 countries) Nevertheless, we can already see and feel it, and this is permeating throughout society. It is not only those doing routine work who are at risk, but increasingly those in white collar professions. A recent PwC survey of employees worldwide revealed that “60% are worried that automation is putting many jobs at risk; 48% believe ‘traditional employment won’t be around in the future,’ and 39% think it is likely that their job will be obsolete within five years.” The longer-term impact of AI and automation on work is not really in doubt. Many positions will be disrupted and people replaced, even as other employment opportunities may be created. The net effect is likely to be positive for the economy.
This could be good for economists and corporate shareholders. Though whether it is positive for a large percentage of the population or produces a sizable permanent underclass is very much to be determined. Automation will not likely bring about either utopia or dystopia. Instead, it will lead to both, with different groups experiencing these very different realities.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,028 | 2,022 |
"AI text-to-image processors: Threat to creatives or new tool in the toolbox? | VentureBeat"
|
"https://venturebeat.com/ai/ai-text-to-image-processors-threat-to-creatives-or-new-tool-in-the-toolbox"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community AI text-to-image processors: Threat to creatives or new tool in the toolbox? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
An image produced from scratch by a video game designer using an AI tool recently won an art competition at the Colorado State Fair, as has been widely reported.
Some artists are alarmed, but should they be? Jason Allen's AI-generated work "Théâtre D'opéra Spatial" took first place in the digital category at the Colorado State Fair.
https://t.co/6bFNFERCki For several years AI has been incorporated into tools used by artists every day, from computational photography within the Apple iPhone to image enhancement tools from Topaz Labs and Lightricks , and even open source applications.
But because an image generated entirely by an AI tool won a competition, some see this as a tipping point — a sign of an AI catastrophe to come that will lead to widespread job displacement for those in creative fields including graphic design and illustration, photography, journalism, creative writing and even software development.
A new AI image generator appears to be capable of making art that looks 100% human made. As an artist I am extremely concerned.
pic.twitter.com/JUSW0x8Woa The winning image was generated using Midjourney , a cloud-based text-to-image tool developed by a small research lab by that name that is “exploring new mediums of thought and expanding the imaginative powers of the human species.” Their product is a text-to-image generator, the result of AI neural networks trained on vast numbers of images. The company has not disclosed its technology stack, but CEO David Holz said it uses very large AI models with billions of parameters. “They’re trained over billions of images.” Although Midjourney has only recently emerged from stealth mode, already hundreds of thousands of people are using the service.
There is suddenly a proliferation of similar tools, including DALL-E from OpenAI and Imagen from Google. According to a Vanity Fair story , Imagen provides “photorealistic images [that] are even more indistinguishable from the real thing.” Stable Diffusion from Stability.ai is another new text-to-image tool that is open-source and can run locally on a PC with a good graphics card. Stable Diffusion can also be used via art generator services including Artbreeder , Pixelz.ai and Lightricks.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Using is believing As an avid hobbyist photographer who displays work in galleries, I have my own concerns that these tools could mark the end of photography. I decided to try Midjourney myself to see what it could output, and to better think through the possible ramifications. The following image was generated by trying variations on these text prompts: “An emerald-green lake backed by steep Canadian Rockies + A few patches of snow on the mountains + Soft morning light + mountains with green conifer forest + Sunrise + 4K UHD.” This seems like an amazing result for a novice user. The total time it took from when I first accessed the system to the final image was less than 30 minutes. I must admit to experiencing a childlike wonder as I watched the image materialize in mere seconds from the prompts I supplied. This brought to memory a 60-year-old quote from science fiction writer and futurist Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.” It felt like magic.
There are others using Midjourney who display far more sophistication. For example, one user produced an “alien cat” image from more than 30 text prompts including: “cat+alien with rainbow shimmering scales, glowing, hyper-detailed, micro details, ultra-wide angle, octane render, realistic …” It appears that more detailed prompts can lead to more sophisticated and higher-quality images.
These AI text-to-image tools are already good enough for commercial endeavors. Creative artist Karen X. Cheng was engaged to create an AI-produced cover image for Cosmopolitan. To help generate ideas and the final image, she used DALL-E, or more specifically the newest version, DALL-E 2.
Cheng describes the process including the search for the right set of prompts, noting that she generated thousands of images, modifying the text prompts hundreds of times over many hours before finding one image that felt right.
I used @OpenAI #dalle2 to create the first ever AI-generated magazine cover for @Cosmopolitan !! The prompt I used is at the end of the video #dalle pic.twitter.com/sbM2qbTAbq Text-to-image: A new tool or threat to a way of life? In a LinkedIn post , Cheng commented: “I think the natural reaction is to fear that AI will replace human artists. Certainly, that thought crossed my mind, especially in the beginning. But the more I use DALL-E, the less I see this as a replacement for humans, and the more I see it as tool for humans to use — an instrument to play.” I had the same feeling when using Midjourney. I posted the Canadian Rockies image on Flickr, an image-sharing site for artists — mainly photographers and digital artists — and asked for opinions. Specifically, I wanted to know whether people viewed an AI image generator as an abomination and threat or simply another tool. One professional responded: “I’ve also been playing around with Midjourney. I’m a creative! How can I NOT mess around with it to see what it can do? I am of the opinion that the results are art, even though it is AI-generated. A human imagination creates the prompt, then curates the results or tries to coax a different result from the system. I think it’s wonderful.” A common refrain in the debate over AI is that it will destroy jobs. The response to this worry is often twofold: first, that many existing jobs will be augmented by AI such that humans and machines working together will produce better output by extending human creativity, not replacing it; second, that AI will also create new jobs, possibly in fields that did not exist before.
Entrepreneur and influencer Rob Lennon predicted recently that AI text and image generators will lead to new career opportunities, specifically citing “prompt engineering.” Prompt craft is the art of knowing how to write a prompt to get optimal results from an AI. The best prompts are concise while giving the AI context to understand the desired outcome. Already, PromptBase has started to market this service. Its platform enables prompt engineers to “sell text descriptions that reliably produce a certain art style or subject on a specific AI platform.” Megan Paetzhold , a photo editor at New York magazine, put DALL-E to the test with assignments she would normally give to artists on her team. In the end, she called it “a draw” and noted: “DALL-E never gave me a satisfying image on the first try — there was always a workshopping process.” She added: “As I refined my techniques, the process began to feel shockingly collaborative; I was working with DALL-E rather than using it. DALL-E would show me its work, and I’d adjust my prompt until I was satisfied.” Isn’t there a dark side? Clearly, these tools can be used to produce high-quality content. While many creative jobs could ultimately be threatened, for now, text-to-image generators are an example of people and machines working together in a new area of artistic exploration. Ethically, the key is to disclose that an image or text was created using an AI generator so people know that the content has been produced by a machine. They may like the output or not, and in that regard, it is no different from any other creative endeavor.
This perspective will not satisfy everyone. Many writers, photographers, illustrators and other creatives — even if they agree that the AI generation tools lack refinement — believe it is only a matter of time until they, the creative professionals, are replaced by machines. Bloomberg technology editor Vlad Savov encapsulated these arguments, seeing these tools as both stifling and ripping off artists. He may ultimately be correct, though as a respondent to my Flickr query noted, “It is another kind of art, which is not necessarily bad and potentially allows for incredible creativity.” Another wrote, “I don’t feel threatened by AI. Everything changes.” It does. I guess we just thought there would be more time.
It is possible these tools are just one more in the artist’s kit. They will be used to produce images and text that will be enjoyed and sold. As Jesus Diaz writes in Fast Company: “Once you try a text-to-image program, the joy of artificial intelligence seems undeniable despite the many dangers that lie ahead.” This does not automatically mean that more traditional creative pursuits will vanish. Ironically there may come a time in the not-too-distant future when “human-made” will carry a cachet, and work produced without an AI image or text generator could command a premium.
Gary Grossman is the senior VP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,029 | 2,021 |
"AI is turning us into de facto cyborgs | VentureBeat"
|
"https://venturebeat.com/ai/ai-is-turning-us-into-de-facto-cyborgs"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest AI is turning us into de facto cyborgs Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Progress in technology and increased levels of private investment in startup AI companies is accelerating, according to the 2021 AI Index , an annual study of AI impact and progress developed by an interdisciplinary team at the Stanford Institute for Human-Centered Artificial Intelligence. Indeed, AI is showing up just about everywhere. In recent weeks, there have been stories of how AI is used to monitor the emotional state of cows and pigs , dodge space junk in orbit, teach American Sign Language , speed up assembly lines , win elite crossword puzzle tournaments, assist fry cooks with hamburgers, and enable “ hyperautomation.
” Soon there will be little left for humans to do beyond writing long-form journalism — until that, too, is replaced by AI.
The text generation engine GPT-3 from OpenAI is potentially revolutionary in this regard, leading a New Yorker essay to claim: “Whatever field you are in, if it uses language, it is about to be transformed.” AI is marching forward, and its wonders are increasingly evident and applied. But the outcome of an AI-forward world is up for debate. While this debate is underway, at present it focuses primarily on data privacy and how bias can negatively impact different social groups. Another potentially greater concern is that we are becoming dangerously dependent on our smart devices and applications. This reliance could lead us to become less inquisitive and more trusting of the information we are provided as accurate and authoritative.
Or that, like in the animated film WALL-E, we will be glued to screens, distracted by mindless entertainment, literally and figuratively fed empty calories without lifting a finger while an automated economy carries on without us. In this increasingly plausible near-future scenario, people will move through life on autopilot, just like our cars. Or perhaps we have already arrived at such a place.
Caption: Humans on the Axiom spaceship in Pixar’s WALL-E; Source VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Welcome to Humanity 2.0 If smartphone use is any indication, there is cause for worry. Nicolas Carr wrote in The Wall Street Journal about research suggesting our intellect weakens as our brain grows dependent on phone technology. Likely the same could be said for any information technology where content flows our way without us having to work to learn or discover on our own. If that’s true, then AI applications, which increasingly present content tailored to our specific interests, could create a self-reinforcing syndrome that not only locks us into our information bubbles through algorithmic editing , but also weakens our ability to engage in critical thought by spoon-feeding us what we already believe.
Tristan Green argues that humans are already cyborgs , human-machine hybrids that are heavily dependent on information flowing from the Internet. During a weekend without access to this constant connection: “I found myself having difficulty thinking. By Sunday evening I realized that I use the almost-instant connection I have with the internet to augment my mental abilities almost constantly. … I wasn’t aware of how much I rely on the AI-powered internet as a performance aid.” Which is perhaps why Elon Musk believes we will need to augment our brains with instantaneous access to the Internet for humans to effectively compete with AI, this being the initial rationale behind his Neuralink brain-machine interface company.
The AI revolution could be different I’ve read many analyses from AI pundits arguing that AI will be no different from other technology innovations, such as the transition from the horse economy to the automobile. Usually these arguments are made in the context of AI’s impact on jobs and concludes there will be social displacement in the short-term for some but long-term growth for the collective whole. The thinking is that new industries will birth new jobs and malleable people will adapt.
But there is a fundamental difference with the AI revolution. Previous instances involved replacing brute force with labor-saving automation. AI is different in that it outsources more than physical labor, it also outsources cognition , which is thinking and decision making. Shaun Nichols, professor in the Sage School of Philosophy at Cornell University, said in a recent panel discussion on AI : “We already outsource ethically important decisions to algorithms. We use them for kidney transplants, air traffic control, and to determine who gets treated first in emergency rooms.” As stated by the World Economic Forum, we are progressively subject to decisions with the assistance of — or even taken by — AI systems.
Are we losing our agency? Algorithms now shape our thoughts and increasingly make decisions on our behalf. Wittingly or not, AI is doing so much for us that some are dubbing it an “ intelligence revolution ,” which forces the question, have we already become de facto cyborgs, and if so, do we still have agency? Agency is the power of humans to think for ourselves and act in ways that shape our experiences and life trajectories. Yet, the algorithms driving search and social media platforms , book and movie recommendations, regularly shape what billions of people read and see. If this was thoughtfully curated for our betterment, it might be okay. But as film director Martin Scorsese states, their purpose is only to increase consumption.
It seems we have already outsourced agency to algorithms designed to increase corporate well-being. This may not be overtly malicious, but it is hardly benign. Our thoughts are being molded, either by our existing beliefs that are reinforced by algorithms inferring our interests, or through intentional or unintentional biases from the various information platforms. Which is to say that our ability to perform critical thinking is both constrained and shaped by the very systems meant to aid and hopefully stimulate our thinking. We are entering a recursive loop where thinking coalesces into ever tighter groupings — the often-discussed polarization — that reduce variability and hence diversity of opinion.
It is as if we are the subjects in a grand social science experiment, with the resulting human opinion clusters determined by the AI-powered inputs and the outputs discerned by machine learning. This is qualitatively different from an augmentation of intelligence and instead augers a merger of humans and machines that is creating the ultimate group think.
It has never been easy to confront large societal problems, but they will become more challenging if humanity continues the path of outsourcing its thinking to algorithms that are not in our collective best interests. All of which begs the question, do we control AI technology or are we already being controlled by the technology? Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,030 | 2,023 |
"AI doom, AI boom and the possible destruction of humanity | VentureBeat"
|
"https://venturebeat.com/ai/ai-doom-ai-boom-and-the-possible-destruction-of-humanity"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest AI doom, AI boom and the possible destruction of humanity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” This statement , released this week by the Center for AI Safety (CAIS), reflects an overarching — and some might say overreaching — worry about doomsday scenarios due to a runaway superintelligence.
The CAIS statement mirrors the dominant concerns expressed in AI industry conversations over the last two months: Namely, that existential threats may manifest over the next decade or two unless AI technology is strictly regulated on a global scale.
The statement has been signed by a who’s who of academic experts and technology luminaries ranging from Geoffrey Hinton (formerly at Google and the long-time proponent of deep learning) to Stuart Russell (a professor of computer science at Berkeley) and Lex Fridman (a research scientist and podcast host from MIT). In addition to extinction, the Center for AI Safety warns of other significant concerns ranging from enfeeblement of human thinking to threats from AI-generated misinformation undermining societal decision-making.
Doom gloom In a New York Times article , CAIS executive director Dan Hendrycks said: “There’s a very common misconception, even in the AI community, that there only are a handful of doomers. But, in fact, many people privately would express concerns about these things.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Doomers” is the keyword in this statement. Clearly, there is a lot of doom talk going on now. For example, Hinton recently departed from Google so that he could embark on an AI-threatens-us-all doom tour.
Throughout the AI community, the term “P(doom)” has become fashionable to describe the probability of such doom. P(doom) is an attempt to quantify the risk of a doomsday scenario in which AI, especially superintelligent AI, causes severe harm to humanity or even leads to human extinction.
On a recent Hard Fork podcast , Kevin Roose of The New York Times set his P(doom) at 5%. Ajeya Cotra, an AI safety expert with Open Philanthropy and a guest on the show, set her P(doom) at 20 to 30%. However, it needs to be said that P(doom) is purely speculative and subjective, a reflection of individual beliefs and attitudes toward AI risk — rather than a definitive measure of that risk.
Not everyone buys into the AI doom narrative. In fact, some AI experts argue the opposite. These include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (a professor of computer science and engineering at the University of Washington and author of The Master Algorithm ). They argue, instead, that AI is part of the solution. As put forward by Ng, there are indeed existential dangers, such as climate change and future pandemics, and that AI can be part of how these are addressed and hopefully mitigated.
Overshadowing the positive impact of AI Melanie Mitchell, a prominent AI researcher, is also skeptical of doomsday thinking. Mitchell is the Davis Professor of complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans.
Among her arguments is that intelligence cannot be separated from socialization.
In Towards Data Science , Jeremie Harris, co-founder of AI safety company Gladstone AI, interprets Mitchell as arguing that a genuinely intelligent AI system is likely to become socialized by picking up common sense and ethics as a byproduct of their development and would, therefore, likely be safe.
While the concept of P(doom) serves to highlight the potential risks of AI, it can inadvertently overshadow a crucial aspect of the debate: The positive impact AI could have on mitigating existential threats.
Hence, to balance the conversation, we should also consider another possibility that I call “P(solution)” or “P(sol),” the probability that AI can play a role in addressing these threats. To give you a sense of my perspective, I estimate my P(doom) to be around 5%, but my P(sol) stands closer to 80%. This reflects my belief that, while we shouldn’t discount the risks, the potential benefits of AI could be substantial enough to outweigh them.
This is not to say that there are no risks or that we should not pursue best practices and regulations to avoid the worst imaginable possibilities. It is to say, however, that we should not focus solely on potential bad outcomes or claims, as does a post in the Effective Altruism Forum, that doom is the default probability.
The alignment problem The primary worry, according to many doomers, is the problem of alignment, where the objectives of a superintelligent AI are not aligned with human values or societal objectives. Although the subject seems new with the emergence of ChatGPT, this concern emerged nearly 65 years ago. As reported by The Economist, Norbert Weiner — an AI pioneer and the father of cybernetics — published an essay in 1960 describing his worries about a world in which “machines learn” and “develop unforeseen strategies at rates that baffle their programmers.” The alignment problem was first dramatized in the 1968 film 2001: A Space Odyssey.
Marvin Minsky, another AI pioneer, served as a technical consultant for the film. In the movie, the HAL 9000 computer that provides the onboard AI for the spaceship Discovery One begins to behave in ways that are at odds with the interests of the crew members. The AI alignment problem surfaces when HAL’s objectives diverge from those of the human crew.
When HAL learns of the crew’s plans to disconnect it due to concerns about its behavior, HAL perceives this as a threat to the mission’s success and responds by trying to eliminate the crew members. The message is that if an AI’s objectives are not perfectly aligned with human values and goals, the AI might take actions that are harmful or even deadly to humans, even if it is not explicitly programmed to do so.
Fast forward 55 years, and it is this same alignment concern that animates much of the current doomsday conversation. The worry is that an AI system may take harmful actions even without anybody intending them to do so. Many leading AI organizations are diligently working on this problem. Google DeepMind recently published a paper on how to best assess new, general-purpose AI systems for dangerous capabilities and alignment and to develop an “early warning system” as a critical aspect of a responsible AI strategy.
A classic paradox Given these two sides of the debate — P(doom) or P(sol) — there is no consensus on the future of AI. The question remains: Are we heading toward a doom scenario or a promising future enhanced by AI? This is a classic paradox. On one side is the hope that AI is the best of us and will solve complex problems and save humanity. On the other side, AI will bring out the worst of us by obfuscating the truth, destroying trust and, ultimately, humanity.
Like all paradoxes, the answer is not clear. What is certain is the need for ongoing vigilance and responsible development in AI. Thus, even if you do not buy into the doomsday scenario, it still makes sense to pursue common-sense regulations to hopefully prevent an unlikely but dangerous situation. The stakes, as the Center for AI Safety has reminded us, are nothing less than the future of humanity itself.
Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,031 | 2,023 |
"AI chatbot frenzy: Everything everywhere (all at once) | VentureBeat"
|
"https://venturebeat.com/ai/ai-chatbot-frenzy-everything-everywhere-all-at-once"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest AI chatbot frenzy: Everything everywhere (all at once) Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The Academy Award-winning film Everything Everywhere All at Once demonstrates that life is messy and unpredictable, implying — perhaps — that we should embrace chaos, find joy, learn to let go of our expectations and trust that everything will work out in the end.
This approach echoes the way in which many are currently approaching AI.
That said, experts are split on whether this technology will provide unlimited benefits and a golden era or lead to our destruction.
Bill Gates, for one, focuses mostly on the hopeful message in his recent Age of AI letter.
There is little doubt now that AI is hugely disruptive. Craig Mundie, the former chief research and strategy officer for Microsoft, knows a lot about technical breakthroughs. When Gates stepped down from his daily involvement with Microsoft in 2008, Mundie was tapped to fulfill his role as technological visionary.
Mundie said recently of the freshly launched GPT-4 and the updated ChatGPT: “This is going to change everything about how we do everything.
I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The possibilities of “superhuman” amounts of work The current level of excitement around generative AI might simply reflect peak hype per the concept described by Gartner, referring to the peak of inflated expectations. AI has been in this position before, then suffered through two “AI winters” when excitement outpaced actual accomplishments.
These periods were characterized by collapsed investment and general disinterest by all except for a relatively small cadre of researchers. This time truly appears to be different, however, driven by the ongoing exponential growth of data, computing power and code leading to numerous impactful use cases.
For example, Fortune reported on work by Ethan Mollick, a Wharton professor of management. In only 30 minutes, he used generative AI tools to do market research, create a positioning document, write an email campaign, create a website, create a logo and hero image graphic, make a social media campaign for multiple platforms, develop a script and create a video.
He said in a detailed blog post, “what it accomplished was superhuman,” performing in a half hour what normally would have taken a team days to do. He then asks, “When we all can do superhuman amounts of work, what happens?” A Cambrian explosion of generative AI It is not an overstatement to say there is a Cambrian explosion of generative AI underway. This is especially true recently for chatbots powered by large language models (LLMs). The burst of activity was highlighted by the March 14 release of GPT-4, the latest LLM update from OpenAI. While GPT-4 was already in use within Bing Chat from Microsoft, the tech is now incorporated into ChatGPT and is rapidly being integrated into other products.
Google followed only a week later by formally launching Bard, their chatbot based on the LaMDA LLM. Bard had been announced several weeks before, but is now available in preview mode, accessible via a waitlist.
The initial reviews show similarities with ChatGPT — with the same facilities including writing poems and code — as well as shortcomings (such as hallucinations).
Google is stressing that Bard is not a replacement for its search engine but, rather, a “ compliment to search ” — a bot that users can bounce ideas off of, generate writing drafts, or just chat about life.
Proliferation of generative AI These were hardly the only significant generative AI announcements in recent weeks. Microsoft also announced that the image generation model DALL-E 2 is being incorporated into several of its tools. Google announced no fewer than five recent updates to their use of LLMs in Google products.
Beyond these developments were several additional chatbot introductions. Anthropic launched Claude, a “constitutional AI” chatbot using a “principle-based” approach to aligning AI systems with human intentions. Databricks released open-source code that companies can use to create their own chatbots.
Meta released the LLaMA LLM as a research tool for the scientific community, which was quickly leaked online, enabling any interested party to download and modify the model. Researchers at Stanford University used one of the leaked Meta models as a starting point and trained it using ChatGPT APIs, resulting in a system they claim performs similarly to ChatGPT but was produced for only $600.
Transformative, but how? The chatbot frenzy overshadowed other generative AI achievements, including the ability to reconstruct high-resolution and reasonably accurate images from brain activity. Unlike previous attempts, this latest effort, as documented in a research paper , didn’t need to train or fine-tune the AI models to create the images.
Instead, this reconstruction was achieved using diffusion models, such as what underpins DALLE-2, Midjourney, Stable Diffusion and other AI image generation tools. Journalist Jacob Ward says this discovery could one day lead to the ability for humans to beam images to each other via brain-to-brain communication.
Image beaming is still somewhere in the future. What might be the next big thing is video generation from text prompts.
News from Runway about version 2 of their video generator points to this near-term reality. For now, the video clips generated are short — only several seconds — but the potential is apparent.
All these recent AI advances are dizzying and even mesmerizing, leading to the proclamations of an unimaginable transformation and a new age for humanity, which is entirely plausible. However, historian Yuval Harari cautions that this is an important moment to slow down. He reminds us that language is the operating system of human culture.
With the new LLMs, “A.I.’s new mastery of language means it can now hack and manipulate the operating system of civilization.” While the ceiling of benefits is sky-high, so are the downside risks. Harari’s perspective is warranted and timely.
Do these advances move us closer to artificial general intelligence? While many believe that artificial general intelligence (AGI) will never be achieved, it is starting to look like it may already have arrived. New research from Microsoft discusses GPT-4 and states it is: “a first step towards a series of increasingly generally intelligent systems.” As reported by Futurism, the paper adds: “Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” GPT-4 is based on deep learning, and there have been questions about whether this is a suitable basis for creating AGI, the stated mission of OpenAI. Gary Marcus, a leading voice on AI issues, has argued for a hybrid AI model to achieve AGI, one that incorporates both deep learning and classical symbolic operations. It appears OpenAI is doing just this by enabling plug-ins for ChatGPT.
WolframAlpha is one of those plug-ins. As reported by Stephen Wolfram in Stratechery: “For decades, there’s been a dichotomy in thinking about AI between ‘statistical approaches’ of the kind ChatGPT uses, and ‘symbolic approaches’ that are in effect the starting point for Wolfram|Alpha.
But now — thanks to the success of ChatGPT — as well as all the work we’ve done in making Wolfram|Alpha understand natural language — there’s finally the opportunity to combine these to make something much stronger than either could ever achieve on their own.” Already, the plug-in is noticeably minimizing the hallucinations within ChatGPT, leading to more accurate and useful results. But, even more significantly, the path to AGI just became much shorter.
Indeed, everything everywhere all at once.
Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,032 | 2,022 |
"AI algorithms could disrupt our ability to think | VentureBeat"
|
"https://venturebeat.com/ai/ai-algorithms-could-disrupt-our-ability-to-think"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community AI algorithms could disrupt our ability to think Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Last year, the U.S. National Security Commission on Artificial Intelligence concluded in a report to Congress that AI is “world altering.” AI is also mind altering as the AI-powered machine is now becoming the mind. This is an emerging reality of the 2020s. As a society, we are learning to lean on AI for so many things that we could become less inquisitive and more trusting of the information we are provided by AI-powered machines. In other words, we could already be in the process of outsourcing our thinking to machines and, as a result, losing a portion of our agency.
The trend towards greater application of AI shows no sign of slowing. Private investment in AI is at an all-time high , totaling $93.5 billion in 2021 — double the amount from the prior year — according to the Stanford Institute for Human-Centered Artificial Intelligence. And the number of patent filings related to AI innovation in 2021 is 30 times greater than the filings in 2015. This is proof the AI gold rush is running full force. Fortunately, much of what is being achieved with AI will be beneficial, as evidenced by examples of AI helping to solve scientific problems ranging from protein folding to Mars exploration and even communicating with animals.
Most AI applications are based on machine learning and deep learning neural networks that require large datasets. For consumer applications, this data is gleaned from personal choices, preferences, and selections on everything from clothing and books to ideology. From this data, the applications find patterns, leading to informed predictions of what we would likely need or want or would find most interesting and engaging. Thus, the machines are providing us with many useful tools, such as recommendation engines and 24/7 chatbot support. Many of these apps appear useful — or, at worst, benign.
An example that many of us can relate to are AI-powered apps that provide us with driving directions.
These are undoubtedly helpful, keeping people from getting lost. I have always been very good at directions and reading physical maps. After having driven to a location once, I have no problem getting there again without assistance. But now I have the app on for nearly every drive, even for destinations I have driven many times. Maybe I’m not as confident in my directions as I thought; maybe I just want the company of the soothing voice telling me where to turn; or maybe I’m becoming dependent on the apps to provide direction. I do worry now that if I didn’t have the app, I might no longer be able to find my way.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Perhaps we should be paying more attention to this not-so-subtle shift in our reliance on AI-powered apps. We already know they diminish our privacy.
And if they also diminish our human agency, that could have serious consequences. If we trust an app to find the fastest route between two places, we are likely to trust other apps and will increasingly move through life on autopilot, just like our cars in the not-too-distant future. And if we also unconsciously digest what we are presented in news feeds, social media, search, and recommendations, possibly without questioning it, will we lose the ability to form opinions and interests of our own? The dangers of digital groupthink How else could one explain the completely unfounded QAnon theory that there are elite Satan-worshipping pedophiles in U.S. government, business, and the media seeking to harvest children’s blood? The conspiracy theory started with a series of posts on the message board 4chan that then spread rapidly through other social platforms via recommendation engines. We now know — ironically with the help of machine learning — that the initial posts were likely created by a South African software developer with little knowledge of the U.S. Nevertheless, the number of people believing in this theory continues to grow ; and it rivals some mainstream religions in popularity.
According to a story published in the Wall Street Journal, the intellect weakens as the brain grows dependent on phone technology. The same likely holds true for any information technology where content flows our way without us having to work to learn or discover on our own. If that’s true, then AI, which increasingly presents content tailored to our specific interests and reflects our biases, could create a self-reinforcing syndrome that simplifies our choices, satisfies immediate needs, weakens our intellect, and locks us into an existing mindset.
NBC News correspondent Jacob Ward argues in his new book The Loop that through AI apps we have entered a new paradigm, one with the same choreography repeated. “The data is sampled, the results are analyzed, a shrunken list of choices is offered, and we choose again, continuing the cycle.” He adds that by “using AI to make choices for us, we will wind up reprogramming our brains and our society … we’re primed to accept what AI tells us.” The Cybernetics of conformity A key part of Ward’s argument is that our choices are shrunk because the AI is presenting us with options similar to what we have preferred in the past or are most likely to prefer based on our past. So our future becomes more narrowly defined. Essentially, we could become frozen in time — a form of mental homeostasis — by the apps theoretically designed to help us make better decisions. This reinforcing worldview is reminiscent of Don Juan explaining to Carlos Castaneda in A Separate Reality that “the world is such and such, or so-and-so only because we tell ourselves that that is the way it is.” Ward echoes this when he says, “The human brain is built to accept what it’s told, especially if what it’s told conforms to our expectations and saves us tedious mental work.” The positive feedback loop presented by AI algorithms regurgitating our desires and preferences contributes to the information bubbles we already experience, reinforcing our existing views, adding to polarization by making us less open to different points of view, less able to change, and make us into people we did not consciously intend to be. This is essentially the cybernetics of conformity, of the machine becoming the mind while abiding by its own internal algorithmic programming. In turn, this will make us — as individuals and as a society — simultaneously more predictable and more vulnerable to digital manipulation.
Of course, it is not really AI that is doing this. The technology is simply a tool that can be used to achieve a desired end, whether to sell more shoes, persuade to a political ideology, control the temperature in our homes, or talk with whales. There is intent implied in its application. To maintain our agency, we must insist on an AI Bill of Rights as proposed by the U.S. Office of Science and Technology Policy. More than that, we need a regulatory framework soon that protects our personal data and ability to think for ourselves. The E.U.
and China have made steps in this direction, and the current administration is leading to similar moves in the U.S. Clearly, now is the time for the U.S. to get more serious in this endeavor — before we become non-thinking automatons.
Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,033 | 2,023 |
"A mayday call for artificial intelligence | VentureBeat"
|
"https://venturebeat.com/ai/a-mayday-call-for-artificial-intelligence"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest A mayday call for artificial intelligence Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
On May 1, The New York Times reported that Geoffrey Hinton, the so-called “Godfather of AI,” had resigned from Google. The reason he gave for this move is that it will allow him to speak freely about the risks of artificial intelligence (AI).
His decision is both surprising and unsurprising. The former since he has devoted a lifetime to the advancement of AI technology; the latter given his growing concerns expressed in recent interviews.
There is symbolism in this announcement date. May 1 is May Day, known for celebrating workers and the flowering of spring. Ironically, AI and particularly generative AI based on deep learning neural networks may displace a large swath of the workforce. We are already starting to see this impact, for example, at IBM.
AI replacing jobs and approaching superintelligence? No doubt others will follow as the World Economic Forum sees the potential for 25% of jobs to be disrupted over the next five years, with AI playing a role. As for the flowering of spring, generative AI could spark a new beginning of symbiotic intelligence — of man and machine working together in ways that will lead to a renaissance of possibility and abundance.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Alternatively, this could be when AI advancement begins to approach superintelligence, possibly posing an exponential threat.
It is these types of worries and concerns that Hinton wants to speak about, and he could not do that while working for Google or any other corporation pursuing commercial AI development. As Hinton stated in a Twitter post: “I left so that I could talk about the dangers of AI without considering how this impacts Google.” In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
Mayday Perhaps it is only a play on words, but the announcement date conjures another association: Mayday , a commonly used distress signal used when there is an immediate and grave danger. A mayday signal is to be used when there is a genuine emergency, as it is a priority call to respond to a situation. Is the timing of this news merely coincidental, or is this meant to symbolically add to its significance? According to the Times article, Hinton’s immediate concern is the ability of AI to produce human-quality content in text, video and images and how that capability can be used by bad actors to spread misinformation and disinformation such that the average person will “not be able to know what is true anymore.” He also now believes we are much closer to the time when machines will be more intelligent than the smartest people. This point has been much discussed, and most AI experts have viewed this as being far into the future, perhaps 40 years or more.
The list included Hinton. By contrast, Ray Kurzweil, a former director of engineering for Google, has claimed for some time that this moment will arrive in 2029 when AI easily passes the Turing Test.
Kurzweil’s views on this timeline had been an outlier — but no longer.
According to Hinton’s May Day interview : “The idea that this stuff [AI] could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.” Those 30 to 50 years could have been used to prepare companies, governments, and societies through governance practices and regulations, but now the wolf is nearing the door.
Artificial general intelligence A related topic is the discussion about artificial general intelligence ( AGI ), the mission for OpenAI and DeepMind and others. AI systems in use today mostly excel in specific, narrow tasks, such as reading radiology images or playing games. A single algorithm cannot excel at both types of tasks. In contrast, AGI possesses human-like cognitive abilities, such as reasoning, problem-solving and creativity, and would, as a single algorithm or network of algorithms, perform a wide range of tasks at human level or better across different domains.
Much like the debate about when AI will be smarter than humans — at least for specific tasks — predictions vary widely about when AGI will be achieved, ranging from just a few years to several decades or centuries or possibly never. These timeline predictions are also advancing due to new generative AI applications such as ChatGPT based on Transformer neural networks.
Beyond the intended purposes of these generative AI systems, such as creating convincing images from text prompts or providing human-like text answers in response to queries, these models possess the remarkable ability to exhibit emergent behaviors. This means the AI can exhibit novel, intricate, and unexpected behaviors.
For example, the ability of GPT-3 and GPT-4 — the models underpinning ChatGPT — to generate code is considered an emergent behavior since this capability was not part of the design specification. This feature instead emerged as a byproduct of the model’s training. The developers of these models cannot fully explain just how or why these behaviors develop. What can be deduced is that these capabilities emerge from large-scale data, the transformer architecture, and the powerful pattern recognition capabilities the models develop.
Timelines speed up, creating a sense of urgency It is these advances that are recalibrating timelines for advanced AI. In a recent CBS News interview , Hinton said he now believes that AGI could be achieved in 20 years or less. He added: We “might be” close to computers being able to come up with ideas to improve themselves. “That’s an issue, right? We have to think hard about how you control that.” Early evidence of this capability can be seen with the nascent AutoGPT , an open-source recursive AI agent. In addition to anyone being able to use it, this means that it can autonomously use the results it generates to create new prompts, chaining these operations together to complete complex tasks.
In this way, AutoGPT could potentially be used to identify areas where the underlying AI models could be improved and then generate new ideas for how to improve them. Not only that, but as The New York Times columnist Thomas Friedman notes , open source code can be exploited by anyone. He asks: “What would ISIS do with the code?” It is not a given that generative AI specifically — or the overall effort to develop AI will lead to bad outcomes. However, the acceleration of timelines for more advanced AI brought about by generative AI has created a strong sense of urgency for Hinton and others, clearly leading to his mayday signal.
Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,034 | 2,019 |
"Cerebras Systems unveils a record 1.2 trillion transistor chip for AI | VentureBeat"
|
"https://venturebeat.com/2019/08/19/cerebras-systems-unveils-a-record-1-2-trillion-transistor-chip-for-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cerebras Systems unveils a record 1.2 trillion transistor chip for AI Share on Facebook Share on X Share on LinkedIn Cerebras Systems is making wafer-scale AI chips.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
New artificial intelligence company Cerebras Systems is unveiling the largest semiconductor chip ever built.
The Cerebras Wafer Scale Engine has 1.2 trillion transistors , the basic on-off electronic switches that are the building blocks of silicon chips. Intel’s first 4004 processor in 1971 had 2,300 transistors, and a recent Advanced Micro Devices processor has 32 billion transistors.
Most chips are actually a collection of chips created on top of a 12-inch silicon wafer and are processed in a chip factory in a batch. But the Cerebras Systems chip is a single chip interconnected on a single wafer. The interconnections are designed to keep it all functioning at high speeds so the trillion transistors all work together as one.
In this way, the Cerebras Wafer Scale Engine is the largest processor ever built, and it has been specifically designed to process artificial intelligence applications. The company is talking about the design this week at the Hot Chips conference at Stanford University in Palo Alto, California.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Samsung has actually built a flash memory chip, the eUFS, with 2 trillion transistors. But the Cerebras chip is built for processing, and it boasts 400,000 cores on 42,225 square millimeters. It is 56.7 times larger than the largest Nvidia graphics processing unit , which measures 815 square millimeters and 21.1 billion transistors.
The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth.
The chip comes from a team headed by Andrew Feldman, who previously founded the micro-server company SeaMicro, which he sold to Advanced Micro Devices for $334 million.
Sean Lie, cofounder and chief hardware architect at Cerebras Systems, will provide an overview of the Cerebras Wafer Scale Engine at Hot Chips. The Los Altos, California company has 194 employees.
Above: Andrew Feldman with the original SeaMicro box.
Chip size is profoundly important in AI, as big chips process information more quickly, producing answers in less time. Reducing the time to insight, or “training time,” allows researchers to test more ideas, use more data, and solve new problems. Google, Facebook, OpenAI, Tencent, Baidu, and many others argue that the fundamental limitation of today’s AI is that it takes too long to train models. Reducing training time thus removes a major bottleneck to industrywide progress.
Of course, there’s a reason chip makers don’t typically build such large chips. On a single wafer, a few impurities typically occur during the manufacturing process. If one impurity can cause a failure in a chip, then a few impurities on a wafer would knock out a few chips. The actual manufacturing yield is just a percentage of the chips that actually work. If you have only one chip on a wafer, the chance it will have impurities is 100%, and the impurities would disable the chip. But Cerebras has designed its chip to be redundant, so one impurity won’t disable the whole chip.
“Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state-of-the-art by solving decades-old technical challenges that limited chip size — such as cross-reticle connectivity, yield, power delivery, and packaging,” said Feldman, who cofounded Cerebras Systems and serves as CEO, in a statement. “Every architectural decision was made to optimize performance for AI work. The result is that the Cerebras WSE delivers, depending on workload, hundreds or thousands of times the performance of existing solutions at a tiny fraction of the power draw and space.” These performance gains are accomplished by accelerating all the elements of neural network training. A neural network is a multistage computational feedback loop. The faster inputs move through the loop, the faster the loop learns, or “trains.” The way to move inputs through the loop faster is to accelerate the calculation and communication within the loop.
“Cerebras has made a tremendous leap forward with its wafer-scale technology, implementing far more processing performance on a single piece of silicon than anyone thought possible,” said Linley Gwennap, principal analyst at the Linley Group, in a statement. “To accomplish this feat, the company has solved a set of vicious engineering challenges that have stymied the industry for decades, including implementing high-speed die-to-die communication, working around manufacturing defects, packaging such a large chip, and providing high-density power and cooling. By bringing together top engineers in a variety of disciplines, Cerebras created new technologies and delivered a product in just a few years, an impressive achievement.” With 56.7 times more silicon area than the largest graphics processing unit, Cerebras WSE provides more cores to do calculations and more memory closer to the cores so the cores can operate efficiently. Because this vast array of cores and memory is on a single chip, all communication is kept on-silicon, which means its low-latency communication bandwidth is immense, so groups of cores can collaborate with maximum efficiency.
The 46,225 square millimeters of silicon in the Cerebras WSE house 400,000 AI-optimized, no-cache, no-overhead, compute cores and 18 gigabytes of local, distributed, superfast SRAM memory as the one and only level of the memory hierarchy. Memory bandwidth is 9 petabytes per second. The cores are linked together with a fine-grained, all-hardware, on-chip mesh-connected communication network that delivers an aggregate bandwidth of 100 petabits per second. More cores, more local memory, and a low-latency high-bandwidth fabric together create the optimal architecture for accelerating AI work.
“While AI is used in a general sense, no two data sets or AI tasks are the same. New AI workloads continue to emerge and the data sets continue to grow larger,” said Jim McGregor, principal analyst and founder at Tirias Research, in a statement. “As AI has evolved, so too have the silicon and platform solutions. The Cerebras WSE is an amazing engineering achievement in semiconductor and platform design that offers the compute, high-performance memory, and bandwidth of a supercomputer in a single wafer-scale solution.” The Cerebras WSE’s record-breaking achievements would not have been possible without years of close collaboration with TSMC, the world’s largest semiconductor foundry, or contract manufacturer, and leader in advanced process technologies, the companies said. The WSE is manufactured by TSMC on its advanced 16nm process technology.
“We are very pleased with the result of our collaboration with Cerebras Systems in manufacturing the Cerebras Wafer Scale Engine, an industry milestone for wafer scale development,” said J.K. Wang, TSMC’s senior vice president of operations. “TSMC’s manufacturing excellence and rigorous attention to quality enable us to meet the stringent defect density requirements to support the unprecedented die size of Cerebras’ innovative design.” Cores and more cores Above: An example of a silicon wafer, which is sliced into individual chips.
The WSE contains 400,000 AI-optimized compute cores. Called SLAC for Sparse Linear Algebra Cores, the compute cores are flexible, programmable, and optimized for the sparse linear algebra that underpins all neural network computation. SLAC’s programmability ensures cores can run all neural network algorithms in the constantly changing machine learning field.
Because the Sparse Linear Algebra Cores are optimized for neural network compute primitives, they achieve industry-best utilization — often triple or quadruple that of a graphics processing unit. In addition, the WSE cores include Cerebras-invented sparsity harvesting technology to accelerate computational performance on sparse workloads (workloads that contain zeros) like deep learning.
Zeros are prevalent in deep learning calculations. Often, the majority of the elements in the vectors and matrices that are to be multiplied together are zero. And yet multiplying by zero is a waste of silicon, power, and time as no new information is made.
Because graphics processing units and tensor processing units are dense execution engines — engines designed to never encounter a zero — they multiply every element even when it is zero. When 50-98% of the data is zeros, as is often the case in deep learning, most of the multiplications are wasted. Imagine trying to run forward quickly when most of your steps don’t move you toward the finish line. As the Cerebras Sparse Linear Algebra Cores never multiply by zero, all zero data is filtered out and can be skipped in the hardware, allowing useful work to be done in its place.
Memory Memory is a key component of every computer architecture. Memory closer to compute translates to faster calculation, lower latency, and better power efficiency for data movement. High-performance deep learning requires massive compute with frequent access to data. This requires close proximity between the compute cores and memory, which is not the case in graphics processing units where the vast majority of the memory is slow and far away (off-chip).
The Cerebras Wafer Scale Engine includes more cores, with more local memory, than any chip to date and has 18 Gigabytes of on-chip memory accessible by its core in one clock cycle. The collection of core-local memory aboard the WSE delivers an aggregate of 9 petabytes per second of memory bandwidth — 3,000 times more on-chip memory and 10,000 times more memory bandwidth than the leading graphics processing unit.
Communication fabric Swarm communication fabric, the interprocessor communication fabric used on the WSE, achieves breakthrough bandwidth and low latency at a fraction of the power draw of the traditional communication techniques. Swarm provides a low-latency, high-bandwidth, 2D mesh that links all 400,000 cores on the WSE with an aggregate 100 petabits per second of bandwidth. Swarm supports single-word active messages that can be handled by receiving cores without any software overhead.
Routing, reliable message delivery, and synchronization are handled in hardware. Messages automatically activate application handlers for every arriving message. Swarm provides a unique, optimized communication path for each neural network. Software configures the optimal communication path through the 400,000 cores to connect processors according to the structure of the particular user-defined neural network being run.
Typical messages traverse one hardware link with nanosecond latency. The aggregate bandwidth across a Cerebras WSE is 100 petabits per second. Communication software such as TCP/IP and MPI is not needed, so their performance penalties are avoided. The energy cost of communication in this architecture is well under 1 picojoule per bit, which is nearly two orders of magnitude lower than in graphics processing units. With a combination of massive bandwidth and exceptionally low latency, the Swarm communication fabric enables the Cerebras WSE to learn faster than any currently available solutions.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,035 | 2,019 |
"AMD CEO: Epyc 2 chips are the world's fastest x86 processors | VentureBeat"
|
"https://venturebeat.com/2019/08/07/amd-ceo-epyc-2-chips-are-the-worlds-fastest-x86-processors"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AMD CEO: Epyc 2 chips are the world’s fastest x86 processors Share on Facebook Share on X Share on LinkedIn AMD CTO Mark Papermaster (right) with VMWare exec Krish Prasad.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Advanced Micro Devices CEO Lisa Su announced that the code-named “Rome” 2nd Gen Epyc processors are the most powerful x86 processors in the world. The company said its new Epyc 2 processors for the datacenter have as many as 64 cores and deliver twice the price performance of Intel’s fastest available chips.
AMD trotted out customer after customer at the event — Dell, Lenovo, Microsoft, HP Enterprise, Google, and Cray — to show that the company’s Zen 2 processor technology and its partners’ 7-nanometer manufacturing have resulted in stellar products for the datacenter, where Intel has been dominant for many years.
Bart Sano, vice president of engineering at Google, said onstage at the AMD event at the Palace of Fine Arts in San Francisco: “We have to optimize through the entire stack. That’s the reason that we chose Epyc.” Su said the customers have produced more than 80 world records for datacenter performance with the Epyc 2 chips, which are launching today. At the same time, she said they are lowering the total cost of ownership for customers by 25% to 50%.
“We’ve told you everything we have to tell you,” said Su. “I hope it is absolutely clear that 2nd Gen Epyc is the best in the industry. Google has already deployed within our datacenters the 2nd Gen Epyc technology. We are already seeing great performance on a variety of workloads.” Google will make Epyc 2 available to its customers on the Google Cloud.
Patrick Moorhead, analyst at Moor Insights & Strategy, said in an interview that he was surprised AMD made so many architectural changes from one generation to the next.
“AMD took a big step forward today in the datacenter with its launch of the 2nd Gen EPYC processor and platform. It is a bigger leap forward than I had expected,” he said. “AMD improved most of its Gen 1 shortcomings, like single-thread performance (+15%) and core scaling, and added new RAS (uncorrectable DRAM error entry) and security (Secure Memory Encryption, Secure Encrypted Virtualization, 509 keys) capabilities, in addition to substantial, multi-core performance gains.” Above: AMD’s 2nd Gen Epyc event in San Francisco.
He noted AMD had the ecosystem in its support, from end customers like Google and Twitter to software providers like VMware, Canonical, Red Hat, and Suse, as well as manufacturers such as Gigabyte and QCT.
“Google is an interesting end customer who has exhibited it is willing to go big if it sees better performance and price,” Moorhead said. “Google was AMD’s largest Opteron customer back in the day. I will be keeping my eye on this one.” He said that early indications show AMD is likely to do well in comparisons to Intel chips on a variety of workloads, though perhaps not all of them.
“AMD looks strong in Hadoop RT analytics (AMD says world record), Java throughput (AMD says 83% better), fluid dynamics (AMD says 2 times better), and virtualization (AMD says up to 50% lower TCO),” he said. “Intel will likely have advantages on low latency ML inference workloads that take advantage of Intel’s DLBoost instructions. Intel will also look very good in in-memory database workloads utilizing Optane DC.” But he said the industry will likely soon provide independent third-party benchmarks that will be more definitive.
Forrest Norrod, senior vice president of datacenter and embedded at AMD, said AMD’s top 64-core Epyc 2 has twice the performance of Intel’s top chip at half the price.
“Rome kicks ass,” he said. “The new standard for the datacenter is Epyc.” Cray CEO Peter Ungaro said his company’s supercomputers will use the Epyc 2 chips in machines that will ship to the likes of the U.S. Air Force and Indiana University.
AMD gained an edge on Intel — which is still dominant in the PC processor market — a couple of years ago with its Zen design, offering 52% better performance per clock cycle than the previous generation. Zen 2 is used in Epyc 2, while Zen 3 designs are complete and Zen 4 designs are underway.
Compared to a top Intel chip, the Zen 2-based 2nd Gen Epyc uses 61% less power than Intel’s top dual-socket Xeon product, with 75% lower software licensing costs, 50% fewer servers, and 54% lower cost of ownership, Norrod said.
“We could not be more committed to this space,” said Su.
The 2nd Gen Epyc chips have 32 billion transistors and as many as 64 cores. A spokesperson for Intel said the company has a long history of leadership in the server space.
In a statement, Intel said: Intel has over 20 years of delivering uninterrupted data center leadership. In that time, we have built a broad ecosystem of partners who optimize their business applications around Intel platforms.
Intel’s focus is on delivering platform innovations that offer customers real-world application performance that help them solve their most critical business challenges.
Intel is taking an outside-in view on hearing what our customer’s need and delivering the silicon platforms they require — which include CPUs, accelerators, FPGAs, NNPs, memory and storage technologies, etc. Our ambitions have never been greater as a company, allowing us to target a >$200B total addressable market in the data center.
Some recent examples of Intel’s work with data center customers and partners who are leveraging Intel’s portfolio of processors, memory, and AI acceleration technologies include SAP, Baidu, and Lenovo.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,036 | 2,016 |
"IBM delivers a piece of its brain-inspired supercomputer to Livermore national lab | VentureBeat"
|
"https://venturebeat.com/2016/03/29/ibm-delivers-a-piece-of-its-brain-inspired-supercomputer-to-livermore-national-lab"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IBM delivers a piece of its brain-inspired supercomputer to Livermore national lab Share on Facebook Share on X Share on LinkedIn IBM's brain-inspired computer research team at the IBM Almaden Research Center.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
IBM is about to deliver the foundation of a brain-inspired supercomputer to Lawrence Livermore National Laboratory , one of the federal government’s top research institutions. The delivery is one small “blade” within a server rack with 16 chips, dubbed TrueNorth, and is modeled after the way the human brain functions.
Silicon Valley is awash in optimism about artificial intelligence, largely based on the progress that deep learning neural networks are making in solving big problems. Companies from Google to Nvidia are hoping they’ll provide the AI smarts for self-driving cars and other tough problems. It is within this environment that IBM has been pursuing solutions in brain-inspired supercomputers. The main benefit is that such chips may be able to operate at lower frequencies and get much more work done on a much smaller amount of power.
The TrueNorth chip itself has more than 5.4 billion transistors, about as many as a state-of-the-art conventional silicon chip today. But this chip’s transistors are configured as a million neurons, or the equivalent of brain cells, and 256 million synapses, or connections. It consumes only about 70 milliwatts of power, or the equivalent of a hearing-aid battery. That’s an order of magnitude better than other solutions, said Dharmendra S. Modha, an IBM fellow and chief scientist of brain-inspired computing at the IBM Almaden Research Center in San Jose, Calif., in an interview with VentureBeat. The Livermore project is an important test of a new computer architecture that could be used in everything from single-chip computers to systems with thousands of chips.
“We can scale up enormously, from one chip in a mobile setting to some very large systems,” Modha said.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: This system has 16 IBM TrueNorth brain-inspired chips.
“Lawrence Livermore has commissioned a scale-up, brain-inspired supercomputer, and that’s what you’re looking at here,” Modha said. “Our long-term goal is to build a brain in a box, with 10 billion neurons in a 2-liter volume, consuming about a kilowatt of power. That’s the long-term trajectory we are on. ” The Livermore lab is working on a new generation of supercomputers that can perform at “exascale” speeds, or 50 times faster than the most advanced petaflop systems now in place. IBM’s believes a brain-based computer could operate on significantly less electrical power and volume.
Back in 2011, when I got my first look at IBM’s prototype “brain chip,” the company had a prototype with one core and 256 neurons. Now, each chip has 1 million neurons, 256 cores, and 256 million synapses. It operates on 70 milliwatts of power. It can deliver 46 giga synaptic operations per second.
Each TrueNorth chip is part of a 16-chip system board that is housed in a server blade. That is what IBM is delivering to Lawrence Livermore. And that board has the ability of 16 million neurons, 4,096 cores, and 4 billion synapses. The 16 chips operate on 2.5 watts of power, while the whole board consumes about 7 watts. Eventually, IBM will populate an entire rack with a bunch of these server boards, and provide a bunch of racks to its customers who want to build scale-out supercomputers. IBM is currently working under a $1 million contract with Livermore. Presumably, if it all goes forward, much more money will be at stake.
“The beauty of the TrueNorth chips is that you can put one in and it just starts communicating” with the chips around it, Modha said. “It does this without a need for any communication interface. We can just scale it up.” IBM has demonstrated this in the 16-chip computer blade that it is delivering to Livermore, Modha said. IBM’s industrial design team created a fancy enclosure to house it — and so it looks more “iconic,” like something out of science fiction, rather than a typical piece of computer hardware, said Bill Risk, senior software engineer at IBM.
Above: Dharmendra Modha, IBM fellow and chief scientist of brain-inspired computer research at the IBM Almaden Research Center.
The chips can operate a brain-like neural network to handle complex cognitive tasks — such as pattern recognition and integrated sensory processing — far more efficiently than conventional chips. And that means a computer in a self-driving car could tap a data center with TrueNorth chips to analyze all of the pedestrians, cars, bicycles and other objects in an environment around a self-driving car.
Lawrence Livermore will likely use the system to test nuclear weapons without setting them off. The new computing capabilities may prove important to the National Nuclear Security Administration’s (NNSA) missions in cyber security, stewardship of the nation’s nuclear deterrent and non-proliferation. NNSA’s Advanced Simulation and Computing (ASC) program will evaluate machine learning applications, as well as deep learning algorithms and architectures, and conduct general computing feasibility studies.
The technology represents a fundamental departure from the 70-year-old computer design popularized by computer architect John von Neumann.
In von Neumann machines, memory and processor are separated and linked via a data pathway known as a bus. Over the years, von Neumann machines have gotten faster by sending more and more data at higher speeds across the bus as processor and memory interact. But the speed of a computer is often limited by the capacity of that bus, leading to what some computer scientists to call the “von Neumann bottleneck.” With the human brain, the memory is located in the same place as the processor — at least, that’s how it appears, based on our current understanding of how the brain works. The brain-like processors with integrated memory don’t operate quickly by traditional measurements, sending data at a mere 10 hertz, or far slower than the 5 gigahertz computer processors of today. But the human brain does an awful lot of work in parallel, sending signals out in all directions and getting the brain’s neurons to work simultaneously. Because of this, the brain’s more than 10 billion neurons and 10 trillion connections (synapses) between those neurons amounts to an enormous amount of computing power.
IBM’s older Blue Gene supercomputer, a traditional von Neumann machine, had 1.5 million processors, but it ran 1,500 times slower than real time, in comparison to the human brain. A hypothetical supercomputer using the von Neumann design would have consumed 12 gigawatts of power to accomplish what the brain can do. That’s as much power as is consumed by the island of Singapore, Modha said.
Above: IBM TrueNorth chip has a million neurons.
IBM is emulating the brain’s architecture with its new TrueNorth chips, which were originally developed under the auspices of the Defense Advanced Research Projects Agency (DARPA) and Cornell University.
This new computing unit, or core, is analogous to the brain. It has “neurons,” or digital processors that compute information. It has “synapses,” which are the foundation of learning and memory. And it has “axons,” or data pathways that connect the tissue of the computer.
The work combines supercomputing, nanotechnology, and neuroscience in an effort to move beyond calculation to perception. About 35 people within IBM Research across three company sites and multiple countries are working on the IBM brain-inspired project, Modha said. To create the full ecosystem around TrueNorth, IBM has had to create a simulator; a programming language; an integrated programming environment; a library of algorithms, as well as applications; firmware; tools for composing neural networks for deep learning; a teaching curriculum; and cloud enablement.
Brian Van Essen, a computer scientist at Lawrence Livermore National Laboratory’s Center for Applied Scientific Computing, said in an interview with VentureBeat that the collaboration with IBM started in the fall of 2014.
Above: The TrueNorth chip can be used with neural networks that recognize objects such as brands.
“We are looking beyond von Neumann processors as we scale toward exascale,” he said. “We are very excited about the very low-power aspects of TrueNorth. It’s an order of magnitude difference in energy usage.” Van Essen said he is excited about the AI community’s excitement around deep learning and neural networks.
“We are looking at how we can apply it with TrueNorth,” he said.
He said that TrueNorth’s ability to discern patterns and run large-scale simulations is very promising.
“This is by no means the only approach that is being done in the community to mimic biology,” he said. “It is still balancing digital logic design with inspirations from the brain. It is mimicking behavior, but it is not slavishly copying the brain. That approach allows IBM to create a chip that takes advantage of advanced semiconductor design techniques.” Above: IBM’s TrueNorth chips can discern the difference between pedestrians and other objects.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,037 | 2,019 |
"France's AI startup scene grew 38% in 2019 with government and investor backing | VentureBeat"
|
"https://venturebeat.com/2019/10/22/frances-ai-startup-scene-grew-38-in-2019-with-government-and-investor-backing"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages France’s AI startup scene grew 38% in 2019 with government and investor backing Share on Facebook Share on X Share on LinkedIn AI Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
France’s aggressive push to develop its AI ecosystem seems to be paying dividends as the number of startups continues to soar.
A new report released today identified 432 AI-related startups in France, up from 312 last year and 180 back in 2016.
The report was produced by Roland Berger, a global consulting firm, and France Digitale , an association that represents venture capitalist and entrepreneurs.
“It’s fantastic news,” said Nicolas Brien, CEO of France Digitale.
The report was released on the eve of the 4th annual France is AI conference that is being held Wednesday at the Station F startup campus in Paris.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The AI startups are working across industries such as big data analysis, retail, health care, and customer support. The authors of the report say the latest numbers are evidence that France is now rivaling traditional European AI hubs such as the U.K. and Israel.
[scribd id=431482619 key=key-u2QPjWzpWOa8dx2Gl7ZY mode=scroll] Just as impressive as the number of startups is the amount of venture capital they have raised. For the first six months of 2019, French AI startups raised $634 million, compared to $621 million for the U.K. and $452 million for Israel, according to the report. That’s in part thanks to mega-rounds raised by French companies such as Meero, an AI photography platform that raised a $230 million round last summer.
“What is quite surprising is that the startups are increasingly well funded,” Brien said. “Only three years ago, it was quite a pain in the ass to raise money for deep tech startups.” AI is one of the critical technologies that the French government identified several years ago as essential for boosting the country’s economy while also ensuring it wouldn’t be reliant on tech superpowers such as the U.S. and China. Last year, the French government released a study called “AI for Humanity” that sought to outline the challenges and opportunities the emerging technology would present, as well as strategies for developing the sector.
The new France Digitale report says that on a global basis, the U.S., China, and U.K. still dominate in terms of patent applications and published research. In Europe, the U.K. still sets the R&D standard with 623 AI-related patents.
But Brien was heartened by the fact that a growing number of these startups have links to the academic world, either through hiring researchers or partnerships. While such fluid relationships have long been commonplace in Silicon Valley, in France academics have traditionally shunned commercial endeavors, a resistance the government has been trying to ease to speed the transfer of IP into enterprises.
“We see that the barrier that used to occur between the research world and the startup world is shrinking,” Brien said. “And that’s very promising. France has the largest concentration of AI research labs in Europe. And that’s translating into startups.” The new report also points to challenges that remain. Those include the looming, if muddled, Brexit. Brien noted that France and the U.K. have a host of intertwined AI relationships involving sharing data, research, and partnerships. Like so much else about Brexit, these face a great deal of uncertainty.
And while funding has progressed, France is still a place where a large number of startups raise small early rounds and then struggle to find larger, later-stage investments. That said, the growth in AI funding was driven by larger series B and C rounds, according to the report.
Recently, French president Emmanuel Macron announced plans for a $5.5 billion investment fund with money coming from pools of insurance funds that will target larger, later-stage rounds. Brien is also anticipating that the European Investment Fund, the largest source of funds for European venture capital firms, will soon be increasing its funding with a focus on AI and deep tech.
The other areas still lagging are exits and more robust strategies by France’s largest companies. In some ways, those two are related. Brien would like to see those companies not just investing, but also acquiring those AI startups to help the big incumbents transform their businesses.
“We need to see more acquisitions,” Brien said. “This is a wakeup call for the French corporate world. Having investments in AI startups is fine, but that’s not a strategy. They need to really change their approach to AI startups if they want to absorb all that knowledge.” Correction: An earlier version of this post said that for the time period 2014 to 2019, French AI startups raised $1.268 billion, compared to $1.241 billion for the U.K. and $902 million for Israel based on information provided by France Digitale. The correct figure is that for the first six months of 2019, French AI startups raised $634 million, compared to $621 million for the U.K. and $452 million for Israel, according to the report. We regret the error.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,038 | 2,020 |
"These AI lyrics are so emo people think they're My Chemical Romance"
|
"https://thenextweb.com/news/this-ai-wrote-such-emo-lyrics-that-humans-thought-it-was-my-chemical-romance"
|
"Toggle Navigation News Events TNW Conference 2024 June 20 & 21, 2024 TNW Vision: 2024 All events Spaces Programs Newsletters Partner with us Jobs Contact News news news news Latest Deep tech Sustainability Ecosystems Data and security Fintech and ecommerce Future of work More Startups and technology Investors and funding Government and policy Corporates and innovation Gadgets & apps Early bird Business passes are 90% SOLD OUT 🎟️ Buy now before they are gone → This article was published on April 24, 2020 Deep tech This AI wrote such emo lyrics that humans thought it was My Chemical Romance They were less impressed by its rapping Image by: Danny Sotzny If you think the songs in the charts sound like they were made by machines, you’re probably wrong — an AI’s lyrics would be better.
That’s according to research by ticket site TickPick , which recently tested whether people prefer artificial or human songwriters.
The company scraped thousands of lyrics from genius.com and grouped them into rock, rap, country, and pop songs. The words were then fed to a text-generating machine called GPT-2 , which used machine learning to create new sets of lyrics.
The system composed 100 songs in each genre, which the TickPick team turned into four original six-track albums. They then ran the lyrics through Grammarly’s plagiarism checker to check that the AI songwriters weren’t stealing from the artists that inspired them.
[ Read: Researcher builds AI rapper to spit sick rhymes — with mixed results ] The <3 of EU tech The latest rumblings from the EU tech scene, a story from our wise ol' founder Boris, and some questionable AI art. It's free, every week, in your inbox. Sign up now! They then tested whether 1,003 music fans could spot which lyrics were made by AI and which were written by real musicians — and whether they preferred the songs created by humans or machines.
Lyrical machines In each category, the respondents were shown three lyrics written by acclaimed human artists, and one created by an AI.
When asked which verse was the most emotional, almost 40% of people said they more touched by the A I’s words than lyrics written by Adele, R.E.M., and Johnny Cash.
And who can blame them? Only a heart of stone would be unmoved by this tear-jerker: I stand alone and think it’s better to be alone. Lonely days, I just can’t find the will to go on. I’m in this state, and my eyes show me that I’ve been taken.
After wiping tears from their eyes, the respondents were asked which songwriter was the most creative.
Again, the AI smashed the so-called legends, attracting 65% of votes for this inspirational poetry: When clouds part to reveal a man in the wilderness outside the pale light of morning. A secret within the door can hear him say. The clouds will reveal what I mean.
Humanity’s last chance to overcome the machines came in the overall favorite category — and the AI was finally defeated. It nonetheless deserves applause for this imaginative effort: I got my rig in the back of my Beemer. Professional when I graze, I’m professional when I argue. 40 glass, I’m laughing at that s***, I’ma be roaring at that s*** Generating genres The experiment also revealed which genres are hardest for AI songwriters to master.
The respondents struggled to spot which pop and country lyrics were written by an AI. And its rock song was so emo that they thought it was written by My Chemical Romance or Nirvana.
However, they were less convinced by artificial rapper Young AI. Almost 36% of them recognized that a human did not create these bars: In the city at night, wild stars appear. From far away, there’s a quiet storm. About to collapse, I’m in a rush to buy a house. The disappointment, just too strong to overcome. My ego and my consciousness got me out the track. So I search for answers, but there aren’t none.
The researchers believe this is because the unusual syntax of rap songs is hard for algorithms to interpret, which should keep rappers safe in their jobs for now. But for rockers, pop stars, and country singers, it might be time to pass their mics to the machines.
Story by Thomas Macaulay Senior reporter Thomas is a senior reporter at TNW. He covers European tech, with a focus on deeptech, startups, and government policy.
Thomas is a senior reporter at TNW. He covers European tech, with a focus on deeptech, startups, and government policy.
Get the TNW newsletter Get the most important tech news in your inbox each week.
Also tagged with Algorithm Artificial intelligence Creative Artist Songwriter Lyrics Story by Thomas Macaulay Popular articles 1 New erotic roleplaying chatbots promise to indulge your sexual fantasies 2 UK plan to lead in generative AI ‘unrealistic,’ say Cambridge researchers 3 New AI tool could make future vaccines ‘variant-proof,’ researchers say 4 3D-printed stem cells could help treat brain injuries 5 New technique makes AI hallucinations wake up and face reality Related Articles deep tech How Europe is racing to resolve its AI sovereignty woes deep tech ‘We may irreversibly lose control of autonomous AI,’ warn top academics Join TNW All Access Watch videos of our inspiring talks for free → deep tech Social media has new moderation problems. This AI startup has a solution deep tech DeepMind says its new AI system is the world’s most accurate 10-day weather forecaster The heart of tech More TNW Media Events Programs Spaces Newsletters Jobs in tech About TNW Partner with us Jobs Terms & Conditions Cookie Statement Privacy Statement Editorial Policy Masthead Copyright © 2006—2023, The Next Web B.V. Made with <3 in Amsterdam.
"
|
1,039 | 2,017 |
"What Exactly Is Vegan-Mayonnaise Company Hampton Creek Selling? - The Atlantic"
|
"https://www.theatlantic.com/magazine/archive/2017/11/hampton-creek-josh-tetrick-mayo-mogul/540642"
|
"Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe Explore The crucial hours after a fraternity pledge’s fall, what Thoreau really saw, and the secrets of Google’s moonshot factory. Plus, the enduring appeal of Joni Mitchell, the science behind Mona Lisa’s smile, and more.
The Science Behind Mona Lisa’s Smile Walter Isaacson Google X and the Science of Radical Creativity Derek Thompson Mayonnaise, Disrupted Bianca Bosker Death at a Penn State Fraternity Caitlin Flanagan Walden Wasn’t Thoreau’s Masterpiece Andrea Wulf The Swamp Lover Henry David Thoreau A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
Mayonnaise, Disrupted How did Josh Tetrick’s vegan-mayo company become a Silicon Valley darling—and what is he really selling? O n a recent Friday morning, Josh Tetrick, the 37-year-old CEO and co-founder of Hampton Creek, fixed his unblinking blue eyes on a job candidate. The pair was sitting at a workstation near the entrance to the company’s warehouselike San Francisco headquarters, where Tetrick frequently holds meetings in plain view of the company’s more than 130 employees. Around Tetrick—a muscular ex-linebacker in jeans and a T-shirt—was even more Tetrick: a poster of him watching Bill Gates eat a muffin, a framed photograph of him with a golden retriever, an employee’s T-shirt emblazoned with “What would you attempt if you knew you could not fail?”—one of Tetrick’s many slogans. (Others include “What would it look like if we just started over?” and “Be gorilla.”) Listen to the audio version of this article here.
For more feature stories, read aloud, download the Audm app for your iPhone.
The interviewee, who was applying for a mid-level IT job, started listing his qualifications, but Tetrick seemed more interested in talking about the company’s mission—launching into what he promised was a “non-consumer-friendly” look at the “holy-fuck kind of things” Hampton Creek is doing to ensure “everyone is eating well.” He gestured to a slide deck on a flatscreen TV showing photographs of skinny black children next to one of an overweight white woman. They represented, he said, a handful of the 1.1 billion people who “go to bed hungry every night,” the 6.5 billion “just eating crappy food,” and the 2.1 billion from both groups “being fucked right now” by micronutrient deficiencies. “This is our food system today,” Tetrick said. “It’s a food system that is failing most people in the world. And these pillars of our food system today, we think, need to be rethought from the ground up.” So far, the most prominent manifestation of Tetrick’s plan to rethink the pillars of our food system is a line of vegan mayonnaise, sold in plain, sriracha, truffle, chipotle, garlic, and “awesomesauce” flavors. Hampton Creek also sells vegan cookies and salad dressings, which are marketed, like the mayo, under the brand Just—a reference to righteousness, not simplicity—in venues ranging from Whole Foods to Walmart. And it sells a powdered egg substitute to General Mills for use in baked goods.
The Tech Issue The Secrets of Google's Moonshot Factory The Science Behind Mona Lisa's Smile Tetrick insists that Hampton Creek is not a vegan-food producer. He has called it a “tech company that happens to be working with food” and has said, “The best analogue to what we’re doing is Amazon.” Using robotics, artificial intelligence, data science, and machine learning—the full monty of Silicon Valley’s trendiest technologies—Hampton Creek is, according to Tetrick, attempting to analyze the world’s 300,000-plus plant species to find sustainable, animal-free alternatives to ingredients in processed foods.
This pitch has captured the imagination of some of Silicon Valley’s most coveted venture capitalists. Since Hampton Creek’s founding, in 2011, the company has attracted $247 million from investors including Salesforce CEO Marc Benioff, Yahoo co-founder Jerry Yang, and Peter Thiel’s Founders Fund. It was lauded by Gates in 2013 as a hopeful example of “the future of food” and named a World Economic Forum Technology Pioneer two years later. In 2014, Tetrick was cheered as one of Fortune ’s 40 Under 40. He wooed a star-studded stable of advisers, including former Health and Human Services Secretary Kathleen Sebelius, and A-list fans such as John Legend and the fashion designer Stella McCartney. Last fall, Hampton Creek was valued at $1.1 billion—surely the first time a vegan egg has hatched a unicorn.
Peter Thiel instructs start-up entrepreneurs to take inspiration from cults, advice that came to mind when Tetrick told me, after the job interview, that he screens for employees who “really believe” in his company’s “higher purpose,” because “I trust them more.” But buying into the mission has become a more complicated proposition, as Hampton Creek has recently been besieged by federal investigations, product withdrawals, and an exodus of top leadership. Silicon Valley favors entrepreneurs who position themselves as prophetic founders rather than mere executives, pursuing life-changing missions over mundane business plans. That risks rewarding story over substance, as the swift implosion of once-celebrated disrupters such as Theranos and Zenefits has shown. Fans of Hampton Creek say that Tetrick is “one of our world’s special people” who “will guide us into the abundant beyond.” Critics allege that he is leading a “cult of delusion.” Either way, he seems to be selling far more than just mayo.
The story of how Tetrick founded Hampton Creek, as he has recounted it on numerous conference stages, shows his instinct for a good narrative. As he tells “folks” in his slight southern drawl, he was raised in Birmingham, Alabama, by a mother who worked as a hairdresser and a father who was often unemployed, which meant his family was “on food stamps for most of our life.” (His mother remembers it as “maybe like two weeks or three weeks.” His father could not be reached for comment.) He had dreams of playing professional football (even changing the pronunciation of his surname from Tee -trick to Teh -trick because it “felt more manly,” he told me) and was a linebacker at West Virginia University before transferring to Cornell, where he earned a Fulbright to work in Nigeria. He has said he drew inspiration for Hampton Creek from his seven years in sub-Saharan Africa (three of which he passed, for the most part, in law school at the University of Michigan). Motivated by being raised on “a steady diet of shitty food” in Birmingham and seeing homeless children relying on “dirty-ass water” in Africa, Tetrick launched Hampton Creek to “open our eyes to the problems the world faces.” Employees can repeat parts of Tetrick’s story from memory, like an origin myth, describing for visitors the Burger King chicken sandwiches and 7-Eleven nachos that Tetrick ate as a kid. (New hires participate in a workshop where they practice reciting their own personal journey toward embracing the company’s mission.) In his public remarks, Tetrick usually skims over the years prior to launching Hampton Creek, when he, by his own admission, was “lost.” He graduated law school in 2008, joined a firm, then parted ways with it after less than a year—in part, he told me, over an op-ed he published in the Richmond Times-Dispatch in which he critiqued factory farming. (According to Tetrick, the law firm, McGuireWoods, counted the meat processor Smithfield Foods among its clients. The law firm declined to comment.) A vegetarian since college, he had been writing fiery editorials in his spare time calling out the “disgusting abuses” of the industrial food system.
Leaving law allowed Tetrick to throw himself into motivational speaking, which had already been competing with his day job. Two or three times a week, he visited high schools, colleges, and the occasional office to preach the virtues of social entrepreneurship and describe the big money to be earned by doing good. “Selflessness is profitable!” booms Tetrick to a class of graduating seniors in a 2009 video. “Because solving the world’s greatest needs is good for you ! Solving the world’s greatest needs intersects with phenomenal career opportunities for you to engage you !” According to his speaking agency at the time, Tetrick’s credentials included his prior work in President Bill Clinton’s office (a two-month gig); for the government of Liberia (four months); for the United Nations (four months); in Citigroup’s corporate-citizenship group (four months); at McGuireWoods (nine months); and at the helm of his crowdfunding start-up, 33needs (which petered out after less than 11 months). Prior to becoming the CEO of Hampton Creek, Tetrick had held no job for more than a year.
In 2011, Tetrick was largely itinerant and drawing on savings when his childhood friend Josh Balk intervened. Balk, then working on food policy for the Humane Society of the United States, had first gotten Tetrick thinking critically about industrial agriculture back in high school. It was under Balk’s influence that Tetrick became a vegetarian and, in his 20s, set a goal of donating $1 million to the Humane Society by his 33rd birthday. Balk now urged Tetrick to throw himself into a new venture that would draw on his insights about doing well by doing good, and suggested that they launch a start-up that would use plants as a substitute for eggs.
With Balk’s help, Tetrick enlisted David Anderson, the owner of a Los Angeles bistro, whose vegan recipes for foods like cheesecake and crème brûlée helped inform Hampton Creek’s early work. To raise money, they decided to approach Khosla Ventures, which seemed inclined to invest in companies with a social or environmental bent. In a pitch to Samir Kaul, a partner at Khosla, Tetrick spoke of a “proprietary plant-based product” that was “seven years in the making” and “close to perfection.” Despite his current emphasis on Hampton Creek’s technical chops, Tetrick says he never expressly founded Hampton Creek as a tech start-up. “I didn’t go in and meet with Samir and say, ‘Hey, Samir, just so you know, I’m a technology company,’ ” he recalled. “I went in to him and I said, ‘Food’s fucked up, man. Here’s why. Here’s an example. Here’s what we’re thinking about doing.’ ” The pitch netted the company $500,000—its first investment.
A video on Hampton Creek’s website shows a creamy white substance being smeared on a piece of toast. Then the camera cuts to scenes of an engineer running computer models and a robot zipping pipette trays around a laboratory. By turning plants into data, a voice-over explains, the company is working to combat both chronic disease and climate change.
This utopian message took some time to evolve. As the company was getting off the ground, Tetrick’s challenge to the industrial food system had a more subversive tone. “To say that we’ve launched a global war on animals just sells the word ‘war’ so pathetically short,” he wrote in 2011 for HuffPost.
In a 2013 ted x Talk , shortly before the rollout of Just Mayo, he described the horrors of chicks being fed into “a plastic bag in which they’re suffocated” or “a macerator in which they’re ground up instantaneously.” Tetrick’s love for animals was on display during a recent visit I made with him to a dog park—chaperoned, as I was at all times, by Hampton Creek’s head of communications. As Tetrick refueled with a four-espresso-shot Americano and a seitan bagel sandwich, we watched his golden-retriever puppy, Elie, run around on the grass. He’d purchased her from a breeder specializing in life extension in dogs, after the death of his beloved eight-year-old retriever, Jake, the previous spring. “Far and away the hardest thing that I’ve ever been through in my life was that,” Tetrick said. Elie, whom Tetrick named after the Holocaust survivor Elie Wiesel because he considers it a “cool name,” flies internationally with Tetrick on long-weekend getaways, dines on dog food made of locally sourced organic vegetables, and accompanies him to work. (Tetrick’s free-roaming pets have been a point of contention for some of Hampton Creek’s food scientists: Jake ate researchers’ cookie prototypes on at least one occasion. Back at Hampton Creek headquarters, I watched Tetrick wipe Elie’s vomit off the floor adjacent to the research kitchen.) When I brought up the ted x Talk, Tetrick told me he regretted it. “I was too much in my own head in thinking about what motivates me, as opposed to thinking from the perspective of everyone else who’s listening or could see that talk,” he said. “My primary motivator is alleviating animal suffering. For me. For me ,” he said, in a conversation he initially wanted off the record, over concerns that it might be a “turnoff” to partners. He paused for a moment, and seemed conflicted about what he’d divulged: “I don’t know if I’ve ever said that to the full company.” Though he said he still believes “every single word” of his past entreaties, Tetrick has largely sanitized his public remarks of references to animal abuse since finding that they fell flat with the broad group of retailers and shoppers he hopes to attract. He now hews closer to lines such as “We’ve made it really easy for good people to do the wrong things.” Though Tetrick has been a vegan for the past seven years, he discourages his marketing team from using the word vegan to describe Just products. The term, he says, evokes arrogance and wealth and suggests food that “tastes like crap.” Instead he promises customers a bright future where they can eat better, be healthy, and save the environment without spending more, sacrificing pleasure, or inconveniencing themselves. “A cookie can change the world,” Hampton Creek has asserted in its marketing materials.
The message is a rallying cry for a particular kind of revolution. Tetrick launched Hampton Creek in an era when investors were reaching beyond traditional tech companies, and businesses that might otherwise have been merely, say, specialty-food purveyors could leverage software—and grand mission statements tapping into Silicon Valley’s do-gooder ethos—to cast themselves as paradigm-breaking forces. Venture capitalists have poured money into start-ups aiming to disrupt everything from lingerie to luggage to lipstick, with less emphasis on the product than on the scope of the ambition and the promise of tech-enabled efficiencies. Hampton Creek offered idealism that could scale.
Once he’d secured funding from Khosla Ventures, Tetrick leaned into start-up culture. He ditched the couch he’d been crashing on in Los Angeles and rented a renovated garage in San Francisco. In an early press release, Hampton Creek touted Bill Gates—a limited partner in Khosla Ventures—as an investor. Tetrick recruited executives from Google, Netflix, Apple, and Amazon to join his staff, and highlighted their tech backgrounds to backers.
He also started promoting Hampton Creek’s biotech-inspired “technology platform”: labs that could automate the extraction and analysis of plant proteins, examining their molecular features and functional performance (including gelling, foaming, and emulsifying properties) and then applying proprietary machine-learning algorithms to identify the most-promising proteins for use in muffins, spreads, and other foods. “We are seeing things that no chef, no food scientist, has ever seen before,” the company declares on its website.
Hampton Creek earned glowing press, as Tetrick proclaimed that mayo was merely the beginning of a broader food revolution. David-and-Goliath moments—like a lawsuit brought by Unilever, the producer of Hellmann’s, against Hampton Creek arguing that only spreads containing eggs should be labeled “mayo,” or revelations that members of the American Egg Board and its affiliates had joked about hiring someone to “put a hit on” Tetrick—burnished Tetrick’s disrupter status. (Unilever later dropped the lawsuit.) Food-industry celebrities joined investors in celebrating Tetrick’s approach. He “will win a Nobel Prize one day,” raved the chef and TV host Andrew Zimmern. He is an underdog (“a tough, gritty guy,” said Kaul) and “already is changing the world,” as the celebrity chef José Andrés marveled after a visit to Hampton Creek. According to friends, family, and associates, Tetrick is an “incredible salesman,” “one of the heroes of our generation,” and possibly a future president.
L ately, the glow around Tetrick and his company has been overtaken by an unforgiving spotlight. In 2015, a Business Insider exposé based on interviews with former employees alleged, among other claims, that Hampton Creek practiced shoddy science, mislabeled its ingredients, and illicitly altered employees’ contracts to slash their severance pay. (In a Medium post , Tetrick dismissed the story as “based on false, misguided reporting.” He did admit that employment agreements had been altered, though he added that he had since “fixed” the situation.) Last year, Bloomberg asserted that Hampton Creek operatives had bought mass quantities of Just Mayo in an attempt to artificially inflate its popularity—prompting investigations by the Department of Justice and the Securities and Exchange Commission, which were eventually dropped. (Tetrick said that the buybacks were in part for quality control and accounted for less than 1 percent of sales.) Bloomberg also reported on claims by a Hampton Creek investor named Ali Partovi—an early backer of Facebook and Dropbox who lasted nine days as Tetrick’s chief strategy officer before leaving the company and severing all ties—that the company was exaggerating profit projections to deceive investors.
More recently, Target pulled Just products from its shelves after an undisclosed source raised food-safety concerns, including allegations of salmonella contamination. (Though an FDA review cleared Hampton Creek, Target—previously one of the brand’s best-performing outlets, according to Tetrick—announced that it was ending its relationship with the company.) In the span of a year, at least nine executive-level employees parted ways with Hampton Creek as rumors swirled that it was losing as much as $10 million a month. (Tetrick declined to comment on Hampton Creek’s finances but said that its turnover was typical of other high-growth companies.) When I first arrived at Hampton Creek headquarters, in June, I expected to find Tetrick in crisis mode. Frankly, I was a little surprised that I’d been allowed to come: Four days before my visit, Tetrick had fired his chief technology officer, his vice president of R&D, and his vice president of business development over a purported coup attempt that seemed to suggest a lack of confidence in the CEO. (None responded to requests for comment.) By the time I arrived, the entire board save Tetrick had resigned.
Yet Tetrick was bubbling about his plans for the future. “I just got done with—and you’re welcome to see it—writing my 10-year vision,” he told me after saying goodbye to the IT-job candidate, as we joined some half a dozen newly hired Hampton Creekers for their inaugural product-tasting in the company’s research kitchen.
Amid gleaming mixers and convection ovens, the cheerful group of 20- and 30-somethings dipped crackers and crudités into ramekins of vegan salad dressing and mayonnaise arranged on a table along with spheres of cookie dough. While I could have easily polished off most of the cookie-dough samples myself, and the dressings were on par with other bottled ranch and Caesar offerings, Just Mayo—which has earned high marks from foodies—tasted to me like a slightly grassier, grainier version of Hellmann’s.
Tetrick was dissatisfied with the array of samples. “Where’s the butter? WHERE’S THE BUTTERRRRRR?” he asked the chef who’d organized the tasting. “You’ve got to get the butter!” Hampton Creek’s plant-based butter was still a prototype, the chef reminded Tetrick. “The usual protocol for this thing is we show the products that are live on shelves, so that everybody understands what we—” “What about the Scramble Patty?,” Tetrick interrupted. The patty, a breakfast-sandwich-ready product from their forthcoming egg-replacement line called Just Scramble, was dutifully delivered alongside the butter. Their vegetal aftertaste made clear to me why they had not yet been brought to market.
Hampton Creek has been promising the impending release of Just Scramble for years: In a presentation to potential investors cited by Bloomberg, Tetrick forecast that the mung-bean-based product line would bring in $5 million in sales in 2014—but three years later, it has yet to launch.
Tetrick told me that Hampton Creek will debut both a liquid version of Just Scramble and the Scramble Patty early next year, to be followed shortly by a new category of plant-based foods—possibly the butter, or ice cream. Or maybe yogurt or shortening. That’s in addition to the expansion of what Tetrick has branded Just OS (short for “operating system”), an arm of the company focused on licensing its ingredients and methods to food manufacturers. As Tetrick sees it, replacing eggs with his blend of vegan ingredients, which can be regularly tweaked and improved, makes it possible to continuously upgrade everything from cookies to condiments. “While a chicken egg will never change, our idea is that we can have a product where we push updates into the system, just like Apple updates its iOS operating system,” Tetrick has said.
Former Hampton Creek employees, including several involved in its research efforts—all of whom declined to be named for fear of retribution—suggested that the company focused on the appearance of innovation and disruption to the occasional detriment of tangible, long-term goals. They expressed frustration at being asked to reallocate resources from developing digital infrastructure to designing “cool looking” data-visualization tools that seemed like they would be primarily useful for impressing visitors; at having to leave their desks to don lab coats and “pretend to be doing something, because they had VIP investors coming through”; and at being instructed to set up taste tests for members of the public that took time away from product development. “We could’ve done really good science, and instead we were doing performances and circus acts,” one ex-employee told me.
The pursuit of Uber-size valuations has arguably resulted in some start-ups offering technological “solutions” more complicated than the problems they purport to solve. The founder of Juicero, for example, positioned himself as the Steve Jobs of juice when he launched a $699 microprocessor-enabled kitchen appliance that could press packets of chopped fruits and vegetables with enough force to “lift two Teslas”—but a Bloomberg reporter found that squeezing the packets with her hands worked just as well. (In early September, the company—which had attracted more than $100 million in venture-capital funding since its founding four years prior—announced that it was shutting down.) To be sure, artificial intelligence is not crucial to making vegan mayonnaise: Tetrick has said his inspiration to replace eggs with Just Mayo’s Canadian-yellow-pea protein—a common ingredient in vegan packaged foods—came because he “brought in some biochemists and they ran tests, looking at the molecular weight of plant proteins, the solubility, all sorts of different properties.” Bob Goldberg, a former musician whose company, Follow Your Heart, has sold a vegan mayo called Vegenaise since 1977, told me that his inspiration to replace eggs with soy protein came in a dream. Follow Your Heart debuted its own plant-based egg substitute, VeganEgg, in 2016, after less than a year of development.
In response to ex-employees’ accounts of being derailed by visitor presentations, Tetrick said that communicating the company’s projects to potential investors and partners is essential to its work. But he rejected allegations that Hampton Creek was making fantastical promises or emphasizing image over substance, and suggested that detractors were seeking to subvert the company’s mission for their own gain. He told me that Partovi, his former chief strategy officer, who accused the company of misleading investors, was a dissatisfactory employee who had found the “chaotic” atmosphere of a start-up a “huge shock,” and had back-channel conversations about selling off the company. (Partovi declined to comment.) As for the three recently fired executives, Tetrick said their desired changes would have given more control to investors, whose incentive to go public or accept an acquisition offer might undermine Hampton Creek’s “higher purpose.” When I asked him about the board departures, which were made public after my visit, Tetrick told me that some members had been asked to step down; others “chose to remain members of the advisory board and help the company achieve its mission.” “There’s one critical filter beyond all the other filters that’s most important,” he told me. “Will this particular decision—whatever that decision is—increase the chances that we will achieve the mission?” I t is difficult to resist being charmed by Tetrick. He is self-deprecating, joking that it took him six months to learn how to pronounce protein surface hydrophobicity.
He exudes confidence, religiously maintains eye contact, and seems disarmingly open: He spoke with me for hours in the office long after his colleagues had gone home and repeatedly volunteered personal text messages for me to read. But his constant emphasis on where Hampton Creek is heading deflects attention from where it is now.
One afternoon during my visit, two Chinese visitors arrived at Hampton Creek for a meeting and joined Tetrick at his customary workstation at the front of the office. The pair had emailed the company’s customer-service department three days earlier, and Tetrick knew little about them besides their vague interest in “alternative proteins.” One of the men, Lewis Wang, now introduced himself as the founder of a venture-capital fund and his companion, who carried a Prada briefcase, as the chairman and CEO of one of China’s largest meat producers. The magnitude of the opportunity was not lost on Tetrick. He immediately summoned a colleague, whom he presented as “one of our lead scientists,” and instructed an employee with the nebulous title of “advocacy” to make sure the men had “the full experience.” The visitors listened intently while Tetrick teased the company’s forthcoming patents and products, gradually building to the most cutting-edge undertaking of all: Project Jake (named after Tetrick’s deceased dog), Hampton Creek’s push into growing meat and fish in a lab. Tetrick explained how, rather than slaughtering a chicken, scientists could extract stem cells from a bird’s fallen feather and grow them into muscle cells.
Other start-ups in this field, including one co-founded by the creator of the first lab-grown burger prototype, have targeted 2020 as the earliest date for selling so-called cultured meat. Tetrick declared that his goal was to release lab-produced meat before the end of this year. “This is over our expectations,” Wang said. “It’s very exciting.” Tetrick led the two Chinese men through a spacious room housing Hampton Creek’s team of designers and settled them in a windowless office with a large TV. Tetrick’s filmmaker, one of his longest-serving employees, cued up footage with a Kinfolk vibe: a farmer lovingly cradling a white chicken, a Hampton Creek employee in a field contemplating a single feather as wind rustled his curls. The last shot showed gloved hands snipping the base of a feather into a test tube.
“You are probably the only company that has a media studio here,” Wang remarked. “Other companies, I don’t think they have a communications studio.” But he also noted that the videos hadn’t shown how the stem cells would be transformed into meat: “Where is the growth?” By way of response, Tetrick whisked the pair back to the design studio to behold another of his visions: a poster-size illustration of families admiring a hangar full of lab-grown hamburger patties—Tetrick’s farm of the future. Trusting in the logic that seeing is believing, he’d distributed framed versions to members of his staff and advised them to mount the drawing in their homes. “You’ve got to be able to see it,” he explained. “I want them to envision the future.” Explore the November 2017 Issue Check out more from this issue and find your next story to read.
The future of Hampton Creek that Tetrick would have the world envision is consistently, dazzlingly bright. Besides lab-grown meat and an increasing list of grocery-store staples, he promoted numerous milestones on the cusp of being realized: imminent deals with food manufacturers; patents set to receive approval; the removal of palm oil from Hampton Creek products; the launch of a long-overdue e‑commerce site; and the introduction of Power Porridge, a nutrient-rich cereal he said would be in Liberian schools this fall.
When I asked Tetrick why he was embarking on so many risky, expensive endeavors even as product deadlines slipped by, he acknowledged that a “better entrepreneur” might wait until the company was on more solid footing—but, he told me, “the difference between doing this [now versus] five years from now—or 10 years from now—is literally the difference of billions of animals suffering or not.” Start-up CEOs frequently exaggerate their ambitions in an effort to attract more cash and justify large valuations: As Oracle’s billionaire co-founder, Larry Ellison, once quipped, “The entire history of the IT industry has been one of overpromising and underdelivering.” In the insular culture of Silicon Valley, where those who know the score often have a vested interest in keeping it hidden, it can be difficult to determine whether a company is poised for breakthrough or breakdown until the very moment of collapse.
Tetrick deposited his guests in the kitchen, where his chefs—“Michelin-star chefs,” Tetrick’s head of communications reminded me—had set a table with elegant earthenware pottery and proper silverware. “Here we have a steamed tamago , a little bit of smoked black sesame, pea tendril, and togarashi ,” murmured one chef, setting down a Japanese-style omelet made with the liquid Just Scramble prototype. A vegan feast followed: Japanese chawanmushi custard with smoked kombu seaweed and sake-poached mushrooms, homemade brioche, butter and crackers, and ice cream. So did a live demonstration of the Just Scramble liquid being scrambled like eggs.
“We are very interested to invest if possible,” Wang announced after the meal. “I think Josh looks like a leader,” he told me later. Tetrick, in a rush to get to another meeting, left the two men to continue their tour of the headquarters: past researchers operating robotic arms, chefs laboring over scales, and other employees typing at laptops—a perfect vision of industry.
"
|
1,040 | 2,017 |
"China’s Race to Find Aliens First - The Atlantic"
|
"https://www.theatlantic.com/magazine/archive/2017/12/what-happens-if-china-makes-first-contact/544131"
|
"Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe Explore The making of an American Nazi, the evolution of the alt-right, and the rise and fall of ‘Rolling Stone.’ Plus, China’s race to find aliens first, ‘Shark Tank’ nation, and more.
The Making of an American Nazi Luke O’Brien The Lost Boys Angela Nagle What Happens If China Makes First Contact? Ross Andersen The Digital Ruins of a Forgotten Future Leslie Jamison What Would Miss Rumphius Do? Nathan Perl-Rosenthal Republican Is Not a Synonym for Racist Peter Beinart A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
What Happens If China Makes First Contact? As America has turned away from searching for extraterrestrial intelligence, China has built the world’s largest radio dish for precisely that purpose.
Last January, the Chinese Academy of Sciences invited Liu Cixin, China’s preeminent science-fiction writer, to visit its new state-of-the-art radio dish in the country’s southwest. Almost twice as wide as the dish at America’s Arecibo Observatory, in the Puerto Rican jungle, the new Chinese dish is the largest in the world, if not the universe. Though it is sensitive enough to detect spy satellites even when they’re not broadcasting, its main uses will be scientific, including an unusual one: The dish is Earth’s first flagship observatory custom-built to listen for a message from an extraterrestrial intelligence. If such a sign comes down from the heavens during the next decade, China may well hear it first.
In some ways, it’s no surprise that Liu was invited to see the dish. He has an outsize voice on cosmic affairs in China, and the government’s aerospace agency sometimes asks him to consult on science missions. Liu is the patriarch of the country’s science-fiction scene. Other Chinese writers I met attached the honorific Da , meaning “Big,” to his surname. In years past, the academy’s engineers sent Liu illustrated updates on the dish’s construction, along with notes saying how he’d inspired their work.
But in other ways Liu is a strange choice to visit the dish. He has written a great deal about the risks of first contact. He has warned that the “appearance of this Other” might be imminent, and that it might result in our extinction. “Perhaps in ten thousand years, the starry sky that humankind gazes upon will remain empty and silent,” he writes in the postscript to one of his books. “But perhaps tomorrow we’ll wake up and find an alien spaceship the size of the Moon parked in orbit.” In recent years, Liu has joined the ranks of the global literati. In 2015, his novel The Three-Body Problem became the first work in translation to win the Hugo Award, science fiction’s most prestigious prize. Barack Obama told The New York Times that the book—the first in a trilogy— gave him cosmic perspective during the frenzy of his presidency. Liu told me that Obama’s staff asked him for an advance copy of the third volume.
At the end of the second volume, one of the main characters lays out the trilogy’s animating philosophy. No civilization should ever announce its presence to the cosmos, he says. Any other civilization that learns of its existence will perceive it as a threat to expand—as all civilizations do, eliminating their competitors until they encounter one with superior technology and are themselves eliminated. This grim cosmic outlook is called “dark-forest theory,” because it conceives of every civilization in the universe as a hunter hiding in a moonless woodland, listening for the first rustlings of a rival.
Liu’s trilogy begins in the late 1960s, during Mao’s Cultural Revolution, when a young Chinese woman sends a message to a nearby star system. The civilization that receives it embarks on a centuries-long mission to invade Earth, but she doesn’t care; the Red Guard’s grisly excesses have convinced her that humans no longer deserve to survive. En route to our planet, the extraterrestrial civilization disrupts our particle accelerators to prevent us from making advancements in the physics of warfare, such as the one that brought the atomic bomb into being less than a century after the invention of the repeating rifle.
S cience fiction is sometimes described as a literature of the future, but historical allegory is one of its dominant modes. Isaac Asimov based his Foundation series on classical Rome, and Frank Herbert’s Dune borrows plot points from the past of the Bedouin Arabs. Liu is reluctant to make connections between his books and the real world, but he did tell me that his work is influenced by the history of Earth’s civilizations, “especially the encounters between more technologically advanced civilizations and the original settlers of a place.” One such encounter occurred during the 19th century, when the “Middle Kingdom” of China, around which all of Asia had once revolved, looked out to sea and saw the ships of Europe’s seafaring empires, whose ensuing invasion triggered a loss in status for China comparable to the fall of Rome.
This past summer, I traveled to China to visit its new observatory, but first I met up with Liu in Beijing. By way of small talk, I asked him about the film adaptation of The Three-Body Problem.
“People here want it to be China’s Star Wars ,” he said, looking pained. The pricey shoot ended in mid-2015, but the film is still in postproduction. At one point, the entire special-effects team was replaced. “When it comes to making science-fiction movies, our system is not mature,” Liu said.
I had come to interview Liu in his capacity as China’s foremost philosopher of first contact, but I also wanted to know what to expect when I visited the new dish. After a translator relayed my question, Liu stopped smoking and smiled.
“It looks like something out of science fiction,” he said.
A week later, I rode a bullet train out of Shanghai, leaving behind its purple Blade Runner glow, its hip cafés and craft-beer bars. Rocketing along an elevated track, I watched high-rises blur by, each a tiny honeycomb piece of the rail-linked urban megastructure that has recently erupted out of China’s landscape. China poured more concrete from 2011 to 2013 than America did during the entire 20th century. The country has already built rail lines in Africa, and it hopes to fire bullet trains into Europe and North America, the latter by way of a tunnel under the Bering Sea.
The skyscrapers and cranes dwindled as the train moved farther inland. Out in the emerald rice fields, among the low-hanging mists, it was easy to imagine ancient China—the China whose written language was adopted across much of Asia; the China that introduced metal coins, paper money, and gunpowder into human life; the China that built the river-taming system that still irrigates the country’s terraced hills. Those hills grew steeper as we went west, stair-stepping higher and higher, until I had to lean up against the window to see their peaks. Every so often, a Hans Zimmer bass note would sound, and the glass pane would fill up with the smooth, spaceship-white side of another train, whooshing by in the opposite direction at almost 200 miles an hour.
It was mid-afternoon when we glided into a sparkling, cavernous terminal in Guiyang, the capital of Guizhou, one of China’s poorest, most remote provinces. A government-imposed social transformation appeared to be under way. Signs implored people not to spit indoors. Loudspeakers nagged passengers to “keep an atmosphere of good manners.” When an older man cut in the cab line, a security guard dressed him down in front of a crowd of hundreds.
The next morning, I went down to my hotel lobby to meet the driver I’d hired to take me to the observatory. Two hours into what was supposed to be a four-hour drive, he pulled over in the rain and waded 30 yards into a field where an older woman was harvesting rice, to ask for directions to a radio observatory more than 100 miles away. After much frustrated gesturing by both parties, she pointed the way with her scythe.
We set off again, making our way through a string of small villages, beep-beep ing motorbike riders and pedestrians out of our way. Some of the buildings along the road were centuries old, with upturned eaves; others were freshly built, their residents having been relocated by the state to clear ground for the new observatory. A group of the displaced villagers had complained about their new housing, attracting bad press—a rarity for a government project in China. Western reporters took notice. “ China Telescope to Displace 9,000 Villagers in Hunt for Extraterrestrials ,” read a headline in The New York Times.
T he search for extraterrestrial intelligence (seti) is often derided as a kind of religious mysticism, even within the scientific community. Nearly a quarter century ago, the United States Congress defunded America’s seti program with a budget amendment proposed by Senator Richard Bryan of Nevada, who said he hoped it would “be the end of Martian-hunting season at the taxpayer’s expense.” That’s one reason it is China, and not the United States, that has built the first world-class radio observatory with seti as a core scientific goal.
seti does share some traits with religion. It is motivated by deep human desires for connection and transcendence. It concerns itself with questions about human origins, about the raw creative power of nature, and about our future in this universe—and it does all this at a time when traditional religions have become unpersuasive to many. Why these aspects of seti should count against it is unclear. Nor is it clear why Congress should find seti unworthy of funding, given that the government has previously been happy to spend hundreds of millions of taxpayer dollars on ambitious searches for phenomena whose existence was still in question. The expensive, decades-long missions that found black holes and gravitational waves both commenced when their targets were mere speculative possibilities. That intelligent life can evolve on a planet is not a speculative possibility, as Darwin demonstrated. Indeed, seti might be the most intriguing scientific project suggested by Darwinism.
Even without federal funding in the United States, seti is now in the midst of a global renaissance. Today’s telescopes have brought the distant stars nearer, and in their orbits we can see planets. The next generation of observatories is now clicking on, and with them we will zoom into these planets’ atmospheres.
seti researchers have been preparing for this moment. In their exile, they have become philosophers of the future. They have tried to imagine what technologies an advanced civilization might use, and what imprints those technologies would make on the observable universe. They have figured out how to spot the chemical traces of artificial pollutants from afar. They know how to scan dense star fields for giant structures designed to shield planets from a supernova’s shock waves.
In 2015, the Russian billionaire Yuri Milner poured $100 million of his own cash into a new seti program led by scientists at UC Berkeley. The team performs more seti observations in a single day than took place during entire years just a decade ago. In 2016, Milner sank another $100 million into an interstellar-probe mission.
A beam from a giant laser array, to be built in the Chilean high desert, will wallop dozens of wafer-thin probes more than four light-years to the Alpha Centauri system, to get a closer look at its planets. Milner told me the probes’ cameras might be able to make out individual continents. The Alpha Centauri team modeled the radiation that such a beam would send out into space, and noticed striking similarities to the mysterious “fast radio bursts” that Earth’s astronomers keep detecting, which suggests the possibility that they are caused by similar giant beams, powering similar probes elsewhere in the cosmos.
Andrew Siemion, the leader of Milner’s seti team, is actively looking into this possibility. He visited the Chinese dish while it was still under construction, to lay the groundwork for joint observations and to help welcome the Chinese team into a growing network of radio observatories that will cooperate on seti research, including new facilities in Australia, New Zealand, and South Africa. When I joined Siemion for overnight seti observations at a radio observatory in West Virginia last fall, he gushed about the Chinese dish. He said it was the world’s most sensitive telescope in the part of the radio spectrum that is “classically considered to be the most probable place for an extraterrestrial transmitter.” Before I left for China, Siemion warned me that the roads around the observatory were difficult to navigate, but he said I’d know I was close when my phone reception went wobbly. Radio transmissions are forbidden near the dish, lest scientists there mistake stray electromagnetic radiation for a signal from the deep. Supercomputers are still sifting through billions of false positives collected during previous seti observations, most caused by human technological interference.
My driver was on the verge of turning back when my phone reception finally began to wane. The sky had darkened in the five hours since we’d left sunny Guiyang. High winds were whipping between the Avatar -style mountains, making the long bamboo stalks sway like giant green feathers. A downpour of fat droplets began splattering the windshield just as I lost service for good.
T he week before, Liu and I had visited a stargazing site of a much older vintage. In 1442, after the Ming dynasty moved China’s capital to Beijing, the emperor broke ground on a new observatory near the Forbidden City. More than 40 feet high, the elegant, castlelike structure came to house China’s most precious astronomical instruments.
No civilization on Earth has a longer continuous tradition of astronomy than China, whose earliest emperors drew their political legitimacy from the sky, in the form of a “mandate of heaven.” More than 3,500 years ago, China’s court astronomers pressed pictograms of cosmic events into tortoiseshells and ox bones. One of these “oracle bones” bears the earliest known record of a solar eclipse. It was likely interpreted as an omen of catastrophe, perhaps an ensuing invasion.
Liu and I sat at a black-marble table in the old observatory’s stone courtyard. Centuries-old pines towered overhead, blocking the hazy sunlight that poured down through Beijing’s yellow, polluted sky. Through a round, red portal at the courtyard’s edge, a staircase led up to a turretlike observation platform, where a line of ancient astronomical devices stood, including a giant celestial globe supported by slithering bronze dragons. The starry globe was stolen in 1900, after an eight-country alliance stormed Beijing to put down the Boxer Rebellion. Troops from Germany and France flooded into the courtyard where Liu and I were sitting, and made off with 10 of the observatory’s prized instruments.
The instruments were eventually returned, but the sting of the incident lingered. Chinese schoolchildren are still taught to think of this general period as the “century of humiliation,” the nadir of China’s long fall from its Ming-dynasty peak. Back when the ancient observatory was built, China could rightly regard itself as the lone survivor of the great Bronze Age civilizations, a class that included the Babylonians, the Mycenaeans, and even the ancient Egyptians. Western poets came to regard the latter’s ruins as Ozymandian proof that nothing lasted. But China had lasted. Its emperors presided over the planet’s largest complex social organization. They commanded tribute payments from China’s neighbors, whose rulers sent envoys to Beijing to perform a baroque face-to-the-ground bowing ceremony for the emperors’ pleasure.
In the first volume of his landmark series , Science and Civilisation in China , published in 1954, the British Sinologist Joseph Needham asked why the scientific revolution hadn’t happened in China, given its sophisticated intellectual meritocracy, based on exams that measured citizens’ mastery of classical texts. This inquiry has since become known as the “Needham Question,” though Voltaire too had wondered why Chinese mathematics stalled out at geometry, and why it was the Jesuits who brought the gospel of Copernicus into China, and not the other way around. He blamed the Confucian emphasis on tradition. Other historians blamed China’s remarkably stable politics. A large landmass ruled by long dynasties may have encouraged less technical dynamism than did Europe, where more than 10 polities were crammed into a small area, triggering constant conflict. As we know from the Manhattan Project, the stakes of war have a way of sharpening the scientific mind.
Still others have accused premodern China of insufficient curiosity about life beyond its borders. (Notably, there seems to have been very little speculation in China about extraterrestrial life before the modern era.) This lack of curiosity is said to explain why China pressed pause on naval innovation during the late Middle Ages, right at the dawn of Europe’s age of exploration, when the Western imperial powers were looking fondly back through the medieval fog to seafaring Athens.
Whatever the reason, China paid a dear price for slipping behind the West in science and technology. In 1793, King George III stocked a ship with the British empire’s most dazzling inventions and sent it to China, only to be rebuffed by its emperor, who said he had “no use” for England’s trinkets. Nearly half a century later, Britain returned to China, seeking buyers for India’s opium harvest. China’s emperor again declined, and instead cracked down on the local sale of the drug, culminating in the seizure and flamboyant seaside destruction of 2 million pounds of British-owned opium. Her Majesty’s Navy responded with the full force of its futuristic technology, running ironclad steamships straight up the Yangtze, sinking Chinese junk boats, until the emperor had no choice but to sign the first of the “unequal treaties” that ceded Hong Kong, along with five other ports, to British jurisdiction. After the French made a colony of Vietnam, they joined in this “slicing of the Chinese melon,” as it came to be called, along with the Germans, who occupied a significant portion of Shandong province.
Meanwhile Japan, a “little brother” as far as China was concerned, responded to Western aggression by quickly modernizing its navy, such that in 1894, it was able to sink most of China’s fleet in a single battle, taking Taiwan as the spoils. And this was just a prelude to Japan’s brutal mid-20th-century invasion of China, part of a larger campaign of civilizational expansion that aimed to spread Japanese power to the entire Pacific, a campaign that was largely successful, until it encountered the United States and its city-leveling nukes.
China’s humiliations multiplied with America’s rise. After sending 200,000 laborers to the Western Front in support of the Allied war effort during World War I, Chinese diplomats arrived at Versailles expecting something of a restoration, or at least relief from the unequal treaties. Instead, China was seated at the kids’ table with Greece and Siam, while the Western powers carved up the globe.
Only recently has China regained its geopolitical might, after opening to the world during Deng Xiaoping’s 1980s reign. Deng evinced a near-religious reverence for science and technology, a sentiment that is undimmed in Chinese culture today. The country is on pace to outspend the United States on R&D this decade, but the quality of its research varies a great deal. According to one study , even at China’s most prestigious academic institutions, a third of scientific papers are faked or plagiarized. Knowing how poorly the country’s journals are regarded, Chinese universities are reportedly offering bonuses of up to six figures to researchers who publish in Western journals.
It remains an open question whether Chinese science will ever catch up with that of the West without a bedrock political commitment to the free exchange of ideas. China’s persecution of dissident scientists began under Mao, whose ideologues branded Einstein’s theories “counterrevolutionary.” But it did not end with him. Even in the absence of overt persecution, the country’s “great firewall” handicaps Chinese scientists , who have difficulty accessing data published abroad.
China has learned the hard way that spectacular scientific achievements confer prestige upon nations. The “Celestial Kingdom” looked on from the sidelines as Russia flung the first satellite and human being into space, and then again when American astronauts spiked the Stars and Stripes into the lunar crust.
China has largely focused on the applied sciences. It built the world’s fastest supercomputer, spent heavily on medical research, and planted a “great green wall” of forests in its northwest as a last-ditch effort to halt the Gobi Desert’s spread. Now China is bringing its immense resources to bear on the fundamental sciences. The country plans to build an atom smasher that will conjure thousands of “god particles” out of the ether, in the same time it took cern ’s Large Hadron Collider to strain out a handful. It is also eyeing Mars. In the technopoetic idiom of the 21st century, nothing would symbolize China’s rise like a high-definition shot of a Chinese astronaut setting foot on the red planet. Nothing except, perhaps, first contact.
A t a security station 10 miles from the dish, I handed my cellphone to a guard. He locked it away in a secure compartment and escorted me to a pair of metal detectors so I could demonstrate that I wasn’t carrying any other electronics. A different guard drove me on a narrow access road to a switchback-laden stairway that climbed 800 steps up a mountainside, through buzzing clouds of blue dragonflies, to a platform overlooking the observatory.
Until a few months before his death this past September, the radio astronomer Nan Rendong was the observatory’s scientific leader, and its soul. It was Nan who had made sure the new dish was customized to search for extraterrestrial intelligence. He’d been with the project since its inception, in the early 1990s, when he used satellite imagery to pick out hundreds of candidate sites among the deep depressions in China’s Karst mountain region.
Apart from microwaves, such as those that make up the faint afterglow of the Big Bang, radio waves are the weakest form of electromagnetic radiation. The collective energy of all the radio waves caught by Earth’s observatories in a year is less than the kinetic energy released when a single snowflake comes softly to rest on bare soil. Collecting these ethereal signals requires technological silence. That’s why China plans to one day put a radio observatory on the dark side of the moon , a place more technologically silent than anywhere on Earth. It’s why, over the course of the past century, radio observatories have sprouted, like cool white mushrooms, in the blank spots between this planet’s glittering cities. And it’s why Nan went looking for a dish site in the remote Karst mountains. Tall, jagged, and covered in subtropical vegetation, these limestone mountains rise up abruptly from the planet’s crust, forming barriers that can protect an observatory’s sensitive ear from wind and radio noise.
After making a shortlist of candidate locations, Nan set out to inspect them on foot. Hiking into the center of the Dawodang depression, he found himself at the bottom of a roughly symmetrical bowl, guarded by a nearly perfect ring of green mountains, all formed by the blind processes of upheaval and erosion. More than 20 years and $180 million later, Nan positioned the dish for its inaugural observation—its “first light,” in the parlance of astronomy. He pointed it at the fading radio glow of a supernova, or “guest star,” as Chinese astronomers had called it when they recorded the unusual brightness of its initial explosion almost 1,000 years earlier.
After the dish is calibrated, it will start scanning large sections of the sky. Andrew Siemion’s seti team is working with the Chinese to develop an instrument to piggyback on these wide sweeps, which by themselves will constitute a radical expansion of the human search for the cosmic other.
Siemion told me he’s especially excited to survey dense star fields at the center of the galaxy. “It’s a very interesting place for an advanced civilization to situate itself,” he said. The sheer number of stars and the presence of a supermassive black hole make for ideal conditions “if you want to slingshot a bunch of probes around the galaxy.” Siemion’s receiver will train its sensitive algorithms on billions of wavelengths, across billions of stars, looking for a beacon.
Liu Cixin told me he doubts the dish will find one. In a dark-forest cosmos like the one he imagines, no civilization would ever send a beacon unless it were a “death monument,” a powerful broadcast announcing the sender’s impending extinction. If a civilization were about to be invaded by another, or incinerated by a gamma-ray burst, or killed off by some other natural cause, it might use the last of its energy reserves to beam out a dying cry to the most life-friendly planets in its vicinity.
Even if Liu is right, and the Chinese dish has no hope of detecting a beacon, it is still sensitive enough to hear a civilization’s fainter radio whispers, the ones that aren’t meant to be overheard, like the aircraft-radar waves that constantly waft off Earth’s surface. If civilizations are indeed silent hunters, we might be wise to hone in on this “leakage” radiation. Many of the night sky’s stars might be surrounded by faint halos of leakage, each a fading artifact of a civilization’s first blush with radio technology, before it recognized the risk and turned off its detectable transmitters. Previous observatories could search only a handful of stars for this radiation. China’s dish has the sensitivity to search tens of thousands.
In Beijing, I told Liu that I was holding out hope for a beacon. I told him I thought dark-forest theory was based on too narrow a reading of history. It may infer too much about the general behavior of civilizations from specific encounters between China and the West. Liu replied, convincingly, that China’s experience with the West is representative of larger patterns. Across history, it is easy to find examples of expansive civilizations that used advanced technologies to bully others. “In China’s imperial history, too,” he said, referring to the country’s long-standing domination of its neighbors.
But even if these patterns extend back across all of recorded history, and even if they extend back to the murky epochs of prehistory, to when the Neanderthals vanished sometime after first contact with modern humans, that still might not tell us much about galactic civilizations. For a civilization that has learned to survive across cosmic timescales, humanity’s entire existence would be but a single moment in a long, bright dawn. And no civilization could last tens of millions of years without learning to live in peace internally. Human beings have already created weapons that put our entire species at risk; an advanced civilization’s weapons would likely far outstrip ours.
I told Liu that our civilization’s relative youth would suggest we’re an outlier on the spectrum of civilizational behavior, not a Platonic case to generalize from. The Milky Way has been habitable for billions of years. Anyone we make contact with will almost certainly be older, and perhaps wiser.
M oreover, the night sky contains no evidence that older civilizations treat expansion as a first principle.
seti researchers have looked for civilizations that shoot outward in all directions from a single origin point, becoming an ever-growing sphere of technology, until they colonize entire galaxies. If they were consuming lots of energy, as expected, these civilizations would give off a telltale infrared glow, and yet we don’t see any in our all-sky scans.
Maybe the self-replicating machinery required to spread rapidly across 100 billion stars would be doomed by runaway coding errors. Or maybe civilizations spread unevenly throughout a galaxy, just as humans have spread unevenly across the Earth. But even a civilization that captured a tenth of a galaxy’s stars would be easy to find, and we haven’t found a single one, despite having searched the nearest 100,000 galaxies.
Some seti researchers have wondered about stealthier modes of expansion. They have looked into the feasibility of “ Genesis probes ,” spacecraft that can seed a planet with microbes, or accelerate evolution on its surface, by sparking a Cambrian explosion, like the one that juiced biological creativity on Earth. Some have even searched for evidence that such spacecraft might have visited this planet, by looking for encoded messages in our DNA—which is, after all, the most robust informational storage medium known to science. They too have come up empty. The idea that civilizations expand ever outward might be woefully anthropocentric.
Explore the December 2017 Issue Check out more from this issue and find your next story to read.
Liu did not concede this point. To him, the absence of these signals is just further evidence that hunters are good at hiding. He told me that we are limited in how we think about other civilizations. “Especially those that may last millions or billions of years,” he said. “When we wonder why they don’t use certain technologies to spread across a galaxy, we might be like spiders wondering why humans don’t use webs to catch insects.” And anyway, an older civilization that has achieved internal peace may still behave like a hunter, Liu said, in part because it would grasp the difficulty of “understanding one another across cosmic distances.” And it would know that the stakes of a misunderstanding could be existential.
First contact would be trickier still if we encountered a postbiological artificial intelligence that had taken control of its planet. Its worldview might be doubly alien. It might not feel empathy, which is not an essential feature of intelligence but instead an emotion installed by a particular evolutionary history and culture. The logic behind its actions could be beyond the powers of the human imagination. It might have transformed its entire planet into a supercomputer, and, according to a trio of Oxford researchers , it might find the current cosmos too warm for truly long-term, energy-efficient computing. It might cloak itself from observation, and power down into a dreamless sleep lasting hundreds of millions of years, until such time when the universe has expanded and cooled to a temperature that allows for many more epochs of computing.
A s I came up the last flight of steps to the observation platform, the Earth itself seemed to hum like a supercomputer, thanks to the loud, whirring chirps of the mountains’ insects, all amplified by the dish’s acoustics. The first thing I noticed at the top was not the observatory, but the Karst mountains. They were all individuals, lumpen and oddly shaped. It was as though the Mayans had built giant pyramids across hundreds of square miles, and they’d all grown distinctive deformities as they were taken over by vegetation. They stretched in every direction, all the way to the horizon, the nearer ones dark green, and the distant ones looking like blue ridges.
Amid this landscape of chaotic shapes was the spectacular structure of the dish. Five football fields wide, and deep enough to hold two bowls of rice for every human being on the planet, it was a genuine instance of the technological sublime. Its vastness reminded me of Utah’s Bingham copper mine, but without the air of hasty, industrial violence. Cool and concave, the dish looked at one with the Earth. It was as though God had pressed a perfect round fingertip into the planet’s outer crust and left behind a smooth, silver print.
I sat up there for an hour in the rain, as dark clouds drifted across the sky, throwing warbly light on the observatory. Its thousands of aluminum-triangle panels took on a mosaic effect: Some tiles turned bright silver, others pale bronze. It was strange to think that if a signal from a distant intelligence were to reach us anytime soon, it would probably pour down into this metallic dimple in the planet. The radio waves would ping off the dish and into the receiver. They’d be pored over and verified. International protocols require the disclosure of first contact , but they are nonbinding. Maybe China would go public with the signal but withhold its star of origin, lest a fringe group send Earth’s first response. Maybe China would make the signal a state secret. Even then, one of its international partners could go rogue. Or maybe one of China’s own scientists would convert the signal into light pulses and send it out beyond the great firewall, to fly freely around the messy snarl of fiber-optic cables that spans our planet.
In Beijing, I had asked Liu to set aside dark-forest theory for a moment. I asked him to imagine the Chinese Academy of Sciences calling to tell him it had found a signal.
How would he reply to a message from a cosmic civilization? He said that he would avoid giving a too-detailed account of human history. “It’s very dark,” he said. “It might make us appear more threatening.” In Blindsight , Peter Watts’s novel of first contact, mere reference to the individual self is enough to get us profiled as an existential threat. I reminded Liu that distant civilizations might be able to detect atomic-bomb flashes in the atmospheres of distant planets, provided they engage in long-term monitoring of life-friendly habitats, as any advanced civilization surely would. The decision about whether to reveal our history might not be ours to make.
Liu told me that first contact would lead to a human conflict, if not a world war. This is a popular trope in science fiction. In last year’s Oscar-nominated film Arrival , the sudden appearance of an extraterrestrial intelligence inspires the formation of apocalyptic cults and nearly triggers a war between world powers anxious to gain an edge in the race to understand the alien’s messages. There is also real-world evidence for Liu’s pessimism: When Orson Welles’s “War of the Worlds” radio broadcast simulating an alien invasion was replayed in Ecuador in 1949, a riot broke out, resulting in the deaths of six people. “We have fallen into conflicts over things that are much easier to solve,” Liu told me.
Even if no geopolitical strife ensued, humans would certainly experience a radical cultural transformation, as every belief system on Earth grappled with the bare fact of first contact. Buddhists would get off easy: Their faith already assumes an infinite universe of untold antiquity, its every corner alive with the vibrating energies of living beings. The Hindu cosmos is similarly grand and teeming. The Koran references Allah’s “creation of the heavens and the earth, and the living creatures that He has scattered through them.” Jews believe that God’s power has no limits, certainly none that would restrain his creative powers to this planet’s cosmically small surface.
Christianity might have it tougher. There is a debate in contemporary Christian theology as to whether Christ’s salvation extends to every soul that exists in the wider universe, or whether the sin-tainted inhabitants of distant planets require their own divine interventions. The Vatican is especially keen to massage extraterrestrial life into its doctrine, perhaps sensing that another scientific revolution may be imminent. The shameful persecution of Galileo is still fresh in its long institutional memory.
Recommended Reading Are We Alone? Gregg Easterbrook The Return of SETI Ross Andersen How to Jump-Start Life Elsewhere in Our Galaxy Ross Andersen Secular humanists won’t be spared a sobering intellectual reckoning with first contact. Copernicus removed Earth from the center of the universe, and Darwin yanked humans down into the muck with the rest of the animal kingdom. But even within this framework, human beings have continued to regard ourselves as nature’s pinnacle. We have continued treating “lower” creatures with great cruelty. We have marveled that existence itself was authored in such a way as to generate, from the simplest materials and axioms, beings like us. We have flattered ourselves that we are, in the words of Carl Sagan, “the universe’s way of knowing itself.” These are secular ways of saying we are made in the image of God.
We may be humbled to one day find ourselves joined, across the distance of stars, to a more ancient web of minds, fellow travelers in the long journey of time. We may receive from them an education in the real history of civilizations, young, old, and extinct. We may be introduced to galactic-scale artworks, borne of million-year traditions. We may be asked to participate in scientific observations that can be carried out only by multiple civilizations, separated by hundreds of light-years. Observations of this scope may disclose aspects of nature that we cannot now fathom. We may come to know a new metaphysics. If we’re lucky, we will come to know a new ethics. We’ll emerge from our existential shock feeling newly alive to our shared humanity. The first light to reach us in this dark forest may illuminate our home world too.
"
|
1,041 | 2,017 |
"Andrew Anglin: The Making of an American Nazi - The Atlantic"
|
"https://www.theatlantic.com/magazine/archive/2017/12/the-making-of-an-american-nazi/544119"
|
"Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe Explore The making of an American Nazi, the evolution of the alt-right, and the rise and fall of ‘Rolling Stone.’ Plus, China’s race to find aliens first, ‘Shark Tank’ nation, and more.
The Making of an American Nazi Luke O’Brien The Lost Boys Angela Nagle What Happens If China Makes First Contact? Ross Andersen The Digital Ruins of a Forgotten Future Leslie Jamison What Would Miss Rumphius Do? Nathan Perl-Rosenthal Republican Is Not a Synonym for Racist Peter Beinart A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
The Making of an American Nazi How did Andrew Anglin go from being an antiracist vegan to the alt-right’s most vicious troll and propagandist—and how might he be stopped? O n December 16, 2016, Tanya Gersh answered her phone and heard gunshots. Startled, she hung up. Gersh, a real-estate agent who lives in Whitefish, Montana, assumed it was a prank call. But the phone rang again. More gunshots. Again, she hung up. Another call. This time, she heard a man’s voice: “This is how we can keep the Holocaust alive,” he said. “We can bury you without touching you.” When Gersh put down the phone, her hands were shaking. She was one of only about 100 Jews in Whitefish and the surrounding Flathead Valley, and she knew there were white nationalists and “sovereign citizens” in the area. But Gersh had lived in Whitefish for more than 20 years, since just after college, and had always considered the scenic ski town an idyllic place. She didn’t even have a key to her house—she’d never felt the need to lock her door. Now that sense of security was about to be shattered.
The calls marked the start of a months-long campaign of harassment orchestrated by Andrew Anglin, the publisher of the world’s biggest neo-Nazi website, The Daily Stormer. He claimed that Gersh was trying to “extort” a property sale from Sherry Spencer, whose son, Richard Spencer , was another prominent white nationalist and the face of the so-called alt-right movement.
Explore the December 2017 Issue Check out more from this issue and find your next story to read.
The Spencers had long-standing ties to Whitefish, and Richard had been based there for years. But he gained international notoriety just after the 2016 election for giving a speech in Washington, D.C., in which he declared “Hail Trump!,” prompting Nazi salutes from his audience. In response, some Whitefish residents considered protesting in front of a commercial building Sherry owned in town. According to Gersh, Sherry sought her advice, and Gersh suggested that she sell the property, make a donation to charity, and denounce her son’s white-nationalist views. But Sherry claimed that Gersh had issued “terrible threats,” and she wrote a post on Medium on December 15 accusing her of an attempted shakedown. (Sherry Spencer did not respond to a request for comment.) At the time, Richard Spencer and Andrew Anglin barely knew each other. Spencer, who fancies himself white nationalism’s leading intellectual, cloaks his racism in highbrow arguments. Anglin prefers the gutter, reveling in the vile language common on the worst internet message boards. But Spencer and Anglin had appeared together on a podcast the day before Sherry’s Medium post was published and expressed their mutual admiration. Anglin declared it a “historic” occasion, a step toward greater unity on the extreme right.
It was in this spirit that Anglin “doxed” Gersh and her husband, Judah, as well as other Jews in Whitefish, by publishing their contact information and other personal details on his website. He plastered their photographs with yellow stars emblazoned with jude and posted a picture of the Gershes’ 12-year-old son superimposed on the gates at Auschwitz. He commanded his readers—his “Stormer Troll Army”—to “hit ’em up.” “All of you deserve a bullet through your skull,” one Stormer said in an email.
“Put your uppity slut wife Tanya back in her cage, you rat-faced kike,” another wrote to Judah.
“You fucking wicked kike whore,” Andrew Auernheimer, The Daily Stormer’s webmaster, said in a voicemail for Gersh. “This is Trump’s America now.” Over the next week, the Stormers besieged Whitefish businesses, human-rights groups, city-council members—anyone potentially connected to the targets. A single harasser called Judah’s office more than 500 times in three days, according to the Whitefish police. Gersh came home one night to find her husband sitting at home in the dark, suitcases on the floor, wondering whether they should flee. “I have never been so scared in my entire life,” she later told me.
That Anglin, a 33-year-old college dropout, could unleash such mayhem—Whitefish’s police chief, Bill Dial, likened it to “domestic terrorism”—was a sign of just how emboldened the alt-right had become.
Anglin is an ideological descendant of men such as George Lincoln Rockwell, who created the American Nazi Party in the late 1950s, and William Luther Pierce, who founded the National Alliance, a powerful white-nationalist group, in the 1970s. Anglin admires these predecessors, who saw themselves as revolutionaries at the vanguard of a movement to take back the country. He dreams of a violent insurrection.
But where Rockwell and Pierce relied on pamphlets, the radio, newsletters, and in-person organizing to advance their aims, Anglin has the internet. His reach is exponentially greater, his ability to connect with like-minded young men unprecedented.
He also arrived at a more fortuitous moment. Anglin and his ilk like to talk about the Overton Window, a term that describes the range of acceptable discourse in society. They’d been tugging at that window for years only to watch, with surprise and delight, as it flew wide open during Donald Trump’s candidacy. Suddenly it was okay to talk about banning Muslims or to cast Mexican immigrants as criminals and parasites—which meant Anglin’s even-more-extreme views weren’t as far outside the mainstream as they once had been. Anglin is the alt-right’s most accomplished propagandist, and his writing taps into some of the same anxieties and resentments that helped carry Trump to the presidency—chiefly a perceived loss of status among white men.
Six days into his Whitefish campaign, Anglin announced phase two: an armed protest. “Montana has extremely liberal open carry laws,” he wrote on The Daily Stormer. “My lawyer is telling me we can easily march through the center of the town carrying high-powered rifles.” He scheduled the event for January 16, Martin Luther King Jr. Day, and predicted that about 200 people would show up for a “James Earl Ray Day Extravaganza” in honor of King’s assassin. He promised to bus skinheads in from the Bay Area.
As national news outlets picked up the story, frightened Whitefish residents gathered for a community meeting, where Dial, the police chief, saw a 90-year-old Jewish couple trembling with fear. Some people had alarm systems installed. A rabbi had paranoid visions of skinheads in the woods with night-vision goggles and scoped weapons. The police increased patrols.
Montana’s governor, Steve Bullock, swooped into town, as did representatives of the Anti-Defamation League. The president of the World Jewish Congress demanded that authorities halt the march, calling it a “dangerous and life-threatening rally that puts all of America at risk.” Anglin stoked the hysteria by claiming that European nationalists, along with a Hamas representative and a member of the Iranian Revolutionary Guard, were coming too. “Nothing can stop us,” he declared.
In the end, no one showed up—no European nationalists, no Hamas representatives, no armed skinheads. There was no “March on Whitefish.” Instead Anglin slunk away, having panicked a small town for a month. The Whitefish attack cemented his reputation as the trollmaster of the alt-right. But it left some wondering about the movement’s commitment to its cause. Was this all just a sick joke? Over the coming months, however, Anglin continued to build his audience and urge his followers to take their hate offline, into the real world. In August, when white nationalists actually did stage a major rally in Charlottesville, Virginia, many of his readers were there, chanting slogans he had coined. The alt-right, it became clear, was coming off the message boards and into the streets.
By then, I’d spent months reporting on Anglin, trying to understand who he was and how he’d built such a following, as well as how serious a threat he and the rest of the alt-right actually posed. Anglin’s path to white nationalism was disturbing, and more circuitous than I could have imagined. But it fit a pattern that scholars have identified, in that he seems to have been driven, at least initially, more by a desire for status and belonging than by deeply held beliefs. Anglin wanted to be somebody, and the internet gave him a way.
C olumbus, Ohio, is a funky, still kind-of gritty city, and I went there in January looking for clues to Anglin’s past. On a rainy Saturday, about 45 protesters, some with black masks covering their faces, gathered outside a drab two-story building in Worthington, a suburb of Columbus, where Anglin’s father, Greg, runs a Christian counseling service.
Anglin has long kept his own location secret. For years he floated around Europe, and one family member told me that around 2015 he was holed up in Russia, his last known foreign address. Another source showed me Facebook messages from Anglin’s childhood best friend that indicated Anglin was still living there last year. But he maintained a footprint in Columbus through his father, who has said he was “not really involved with Andy’s site.” In fact, Greg was involved. He’d registered The Daily Stormer’s trade name and filed paperwork for his son’s limited-liability corporation, Moonbase Holdings—a likely reference to a conspiracy theory that Hitler survived World War II by escaping to a secret lunar base.
No payment processor would touch The Daily Stormer, but Anglin had little trouble raising money. Since 2014, he has taken in about $250,000 worth of bitcoin, the cryptocurrency, from unknown sources, according to John Bambenek, a cybersecurity expert who has been tracking neo-Nazis’ bitcoin wallets. Anglin urged his readers to send checks as well. Those donations went to Greg’s office, which was why the protesters had gathered outside, many of them from the Columbus chapter of Anti-Racist Action, a national antifascist network.
Anglin had first come to my attention in the summer of 2015, after he endorsed Trump on The Daily Stormer. When I interviewed him over email for HuffPost last year, he lied to me repeatedly—about his site’s traffic numbers, his financing, his location. Before that article came out, he falsely accused me on The Daily Stormer of fabricating information from the FBI regarding his whereabouts. More than once, I offered to walk him through my reporting, but he refused to hear me out. He also refused numerous requests to talk to me for this article.
Since our last exchange, I’d watched him tirelessly spew hatred while boasting that “only bullets” could stop him. But he never came out from behind his keyboard. And although he showed no scruples about smearing others and flagging them for harassment, he became wildly defensive when anyone dared examine his life.
The Daily Stormer had become arguably the leading hate site on the internet, far surpassing Stormfront, whose message boards had brought white nationalism into the digital age back in the 1990s. Anglin was a punchy, prolific writer who used snark and hyperbole to draw in Millennial readers. “Non-ironic Nazism masquerading as ironic Nazism” was how he described his approach. Irony gave him cover to claim that he was just kidding around. He cited Infowars, Vice , and BuzzFeed as inspiration, but the closest analogue in terms of format and tone, he said, was Gawker. Like the now-shuttered gossip site, The Daily Stormer aggregated the news with attitude. Unlike Gawker, Anglin doctored everything to reflect his racist worldview.
Dylann Roof became a hero to Anglin’s readers, who honor him with “bowl cut” memes.
Anglin wrote about his longing for a race war and urged his readers to prepare for combat against nebulous forces unleashed by Jews, blacks, Muslims, Hispanics, women, liberals, journalists—anyone who might impede the alt-right’s assault on the nation. Like many young men on the extreme right, Anglin hadn’t just given up on the idea of the United States as a liberal democracy. He wanted to burn it to the ground. “There is rapidly approaching a time when in every White Western city, corpses will be stacked in the streets as high as men can stack them,” he wrote. “And you are either going to be stacking or getting stacked.” Anglin’s influence extended offline with Daily Stormer “book clubs,” which he created to engage his followers in “real world actions.” The clubs were small chapters of readers who gathered in cities in the U.S., Canada, and other countries. A Columbus group met at a gun range. Other clubs had been kicked out of bars after openly expressing anti-Semitic views or flaunting Nazi paraphernalia. Anglin pressed his readers to study martial arts, learn to use firearms, and engage in “simulated warfare” through paramilitary training with pellet guns.
Among the protesters in the rain outside Greg’s office, I met Anglin’s preschool teacher, Gail Burkholder, who described being shocked when she’d learned that her former student had grown up to be a notorious white nationalist.
“Why would I think one of my students would become a Nazi who wants to kill me?” said Burkholder, who is Jewish. She’d spotted Anglin’s name in the news after Dylann Roof murdered nine black people in Charleston, South Carolina. Roof reportedly left comments on The Daily Stormer, and he has become a hero to Anglin’s readers, who honor him with “bowl cut” memes.
Roof wasn’t the only killer who read The Daily Stormer. In 2016, Thomas Mair shot and stabbed a British member of Parliament. This year, James Harris Jackson was charged with killing a black man with a sword in New York City and cited The Daily Stormer as an ideological influence. Devon Arthurs, an 18-year-old former neo-Nazi who converted to Islam, shot and killed two of his three roommates in Tampa, who were still neo-Nazis. Police arrested the surviving roommate for hoarding explosive materials.
Until the Roof massacre, Burkholder hadn’t thought about the “adorable,” “happy-go-lucky” boy in her class who loved dinosaurs. Anglin was a normal kid back then, whose only remarkable quality was his extraordinarily nasal voice—it was so bad that Burkholder thought he might have a sinus problem, and raised the issue with his mother, Katie, at a parent–teacher conference.
But that was nearly 30 years ago. Everyone who’d known Anglin when he was young seemed to wonder the same thing: What had happened to turn him into a neo-Nazi? Video: “The Most Dangerous Form of American Extremism” B y all outward appearances, Andrew Anglin had an ordinary, comfortable childhood, at least until adolescence. He grew up in a big house in Worthington Hills, an upper-middle-class neighborhood, where he collected X-Men comics, played computer games, ate burgers at the original Wendy’s restaurant, and got into music with his best friend, West Emerson. And he loved to read. One book that left a deep impression on him was Weasel , which tells the story of a boy in frontier Ohio seeking revenge against a psychopath who, having run out of American Indians to murder, takes to slaughtering white homesteaders.
When Anglin entered the Linworth Alternative Program, Columbus’s “hippie” high school, as a freshman in 1999, other students found him a quiet, insecure kid who craved attention and wanted to fit in. A declared atheist, he styled his reddish hair in dreadlocks and favored jeans with 50-inch leg openings. He often wore a hoodie with a large fuck racism patch on the back.
Anglin was one of only two vegans at Linworth, and before long he began dating the other, a brunette named Alison in the class ahead of him, whom he wooed by baking vegan cookies. She was a popular girl who introduced him to a diverse and edgy clique of kids. To them, Anglin seemed sweet and funny, if a little too eager to latch on to causes. Alison was deeply into animal rights. Suddenly, he was too.
He also got deeply into drugs, according to half a dozen people who knew him at the time. He did LSD at school or while wandering through the scenic Highbanks Metro Park, north of the city. He took ketamine, ate psychedelic mushrooms, and snorted cocaine on weekends. He chugged Robitussin, and “robo tripped” so much that he damaged his stomach and would vomit into trash cans at school.
At home, Anglin spent hours in his parents’ basement downloading music and visiting early Flash-animation sites. According to Cameron Loomis, a former friend, Anglin’s favorite online destination was Rotten.com, which collected images of mangled corpses, deformities, and sexual perversions.
Anglin set up his own website, for a fake record label called “Andy Sucks! Records” that he used to dupe bands into sending him demo tapes. Here, his leftist leanings were on full display: He wrote posts encouraging people to send the Westboro Baptist Church death threats from untraceable accounts, and he mocked the Ku Klux Klan and other racist organizations. He wasn’t so different, back then, from the antifascist activists who would one day protest outside his dad’s office.
But people who knew Anglin in high school told me that, for reasons that were unclear, his behavior became erratic and frightening sometime around the beginning of his sophomore year at Linworth. Visitors to his house saw holes in his bedroom walls, and they knew that when he was upset, he would smash his head into things. Several recall an episode at a party: Anglin burst out crying after Alison drunkenly kissed someone else, then ran outside and bashed his head on the sidewalk over and over.
He harmed himself in other ways, too. He tried to tattoo the name of his favorite band, Modest Mouse, on his upper arm but gave up after two and a half letters, leaving him with moI etched on his skin. He stretched his earlobes by jamming thick marker caps into piercing holes until they dripped blood. He claimed to feel no pain and used lighters to melt the flesh on the inside of his forearms. He provoked people into assaulting him but never fought back, instead laughing as the blows fell. Two kids beat him into a gutter once. Anglin just lay there until they stopped, out of pity and confusion.
Former friends recall that Anglin’s parents seemed blind to their son’s alarming behavior. And while he could be tender toward his younger siblings, Chelsey and Mitch, and loyal to his friends, he also had a sadistic side. Alison (who asked that her last name be withheld from this article) told me that during Anglin’s sophomore year, she called him, distraught: She said she’d passed out at a party and been raped by a friend’s older brother. She needed compassion and support, but Anglin just laughed and broke up with her.
“You’re a slut,” she remembers him saying.
Several girls Anglin had gotten to know at another high school began calling her house at all hours of the night, according to Alison and other sources. “You deserved it,” they’d say. “You slut.” Alison says the abuse went on for weeks, as Anglin showed friends a video he’d made of them having sex.
After the breakup, Dan Newman, another friend at the time, remembers Anglin once bashing his head into the walls of his bedroom in such a frenzy that his mother had to call the police. Several classmates told me that Anglin didn’t date again in high school and sometimes tried to kiss other boys, including one black student he especially liked. Whether this behavior was authentic experimentation or just for shock value, it’s notable in light of the extreme homophobia Anglin has since expressed on The Daily Stormer and elsewhere. He has advocated, for instance, throwing gays off buildings, isis -style.
By Anglin’s junior year, Greg and Katie’s marriage had come undone. People who knew Katie back then described her to me as a browbeaten woman who lived in fear of her husband. A person who was close to one of Greg’s former clients, along with two Columbus pastors familiar with Greg’s work as a counselor, told me that Greg got involved emotionally, and sometimes sexually, with his female clients. Court documents related to his divorce support this claim: A former client is identified as his girlfriend. Greg would later make her a partner in his counseling practice. (Neither of Anglin’s parents responded to requests for comment.) Shortly after the divorce proceedings began, Anglin found a new emotional outlet: listening to a right-wing radio host who claimed that 9/11 was an inside job. This was Alex Jones, who would go on to become America’s premier conspiracy theorist. For Anglin, he was an entry point into the “internet truth movement,” an online realm filled with all manner of paranoid delusions. Soon Anglin was pulling classmates aside to warn them about lizard people. After graduation, few of his friends saw or spoke to him again.
T o spend any significant amount of time in truther forums is to feel the traps being set, the hooks sinking in.
What if? , the mind wonders. For those short on critical-thinking skills, the forums can be infectious and addictive. Here, one might conclude, are fellow detectives working to excavate realities hidden from the “normie” mainstream—that jet contrails contain chemicals sprayed into the atmosphere by the government, for example, or that the moon landing was faked.
Anglin threw himself into this world after high school as he drove around the country, listening to truthers and living out of his Honda Civic. In 2004, he spent a night in jail in Santa Barbara, California, after being arrested for drunk driving. When he returned to Columbus after months on the road, he enrolled at Ohio State University to study English, but dropped out after one semester. In early 2006, he was arrested near campus for two minor drug offenses. (He pleaded guilty to one charge; the other was dismissed.) Anglin was by then spending a lot of time on 4chan, a website that lets users post images and comments anonymously, and that has drawn droves of socially isolated young people thumbing their noses at political correctness. The channers started memes and organized pranks that would later evolve into troll campaigns such as Gamergate, which targeted women in the gaming community with death threats and other abuse. On one board in particular, users vied to see who could make the most-racist comments, ostensibly as a joke. Over time, the humor receded and the racism stuck. “4chan was more influential on me than anything,” Anglin told me over email last year before he cut off communication.
In November 2006, Anglin launched his own conspiracy-theory website, virtually all traces of which were removed from the internet during the time I was reporting this story. He called the site Outlaw Journalism, a tribute to Hunter S. Thompson, whom he idolized, though Anglin’s writing more closely resembled the rantings of Alex Jones—outrageous posts laced with misogyny and anti-immigrant sentiment. “Welcome to the future,” he wrote. “We’re living in a science fiction nightmare.” In March 2007, Anglin published his first post about Donald Trump, highlighting a video clip from a 2000 roast of Rudy Giuliani. In the video, the then-mayor is dressed in drag and sprays perfume on his fake breasts. Trump shoves his face into Giuliani’s chest. Anglin labeled them both “fags” and wrote that Giuliani must be having a “twisted homosexual transvestite affair with Donald Trump.” Elsewhere on his site, Anglin wrote about blood rituals and underground tunnels used by pedophiles and fetus-eaters. He wrote that the government was a “scientific dictatorship” trying to implant microchips in citizens’ brains to create a “worldwide slave grid.” This delusional thinking eventually overwhelmed Anglin. “I just about lost my fucking mind on that conspiracy shit,” he admitted on a podcast years later. He withdrew to a relative’s farm, most likely his maternal grandmother’s 84-acre property south of Columbus, which had woods, a stream, and fields. “I had some issues, and moved to the country,” he wrote on Outlaw Journalism in May 2007, noting that his thoughts were “about 200 percent clearer.” He took in the stars at night and enjoyed the “ecstatic luxury of taking a long walk on non-paved surfaces.” “Glowing green monkeys are able to have baby glowing green monkeys,” Anglin wrote.
But he couldn’t stay away from the truthers. He created the Outlaw Forum, a 4chan-esque board where people could burble about conspiracies. Before long, they began harassing other truthers with whom Anglin clashed. It was his first cybermob.
The internet truthers had embraced a new medium, but their mode of thinking was hardly novel. In his famous 1964 essay “ The Paranoid Style in American Politics ,” the historian Richard Hofstadter wrote about the conspiratorial fantasies of Barry Goldwater supporters in terms that sound strikingly contemporary: “The modern right … feels dispossessed: America has been largely taken away from them and their kind, though they are determined to try to repossess it and to prevent the final destructive act of subversion.” A similar anxiety about displacement runs through the internet truth movement, which helps explain why it has been a key gateway for the alt-right. Obsessed with systems of control, many truthers end up harping on Jewish influence in society. Some deny that the Holocaust occurred, contending that it was an elaborate ruse designed to let Jews play victims at the expense of everyone else. The Holohoax, as it is known, gives its adherents an excuse to blame everything they hate on a cabal of Jews: Feminism. Immigration. Globalization. Liberalism. Egalitarianism. The media. Science. Facts. Video-game addiction. Romantic failure. The NBA being 74.4 percent black. According to the Holohoax, it’s all a plot to undermine traditional white patriarchy so Jews can maintain a parasitic dominion over the Earth.
Anglin didn’t buy into the Holohoax right away, but a nascent anti-Semitism infused his early writing. He riffed about the “Zionist Occupied Government” and urged readers to contact the German Embassy to protest the conviction of an infamous Holocaust denier for breaking a law against inciting hatred.
As Anglin’s prospects narrowed, his worldview got even bleaker. In February 2008, he was arrested for driving while impaired and spent 10 days in jail, according to court records. The following January, he reported working 50 hours a week in a warehouse and still being unable to afford his own place. That June, he published what would be his last post on Outlaw Journalism for years. It was a warning about the banking system, one-world government, organ harvesting, and plant–animal gene-splicing. “Glowing green monkeys are able to have baby glowing green monkeys,” he wrote.
“The only logical path for humanity to take is to utterly abandon [civilization] and return to a hunter/gatherer lifestyle,” he concluded. He wanted to fish and hunt and grow his own food, to live in a hut, to spend time “having fun, telling stories, making music, creating art, dancing, making love to the wife, joking with the old folks and generally living it up.” So he got on a plane and flew toward the jungles of Southeast Asia. It was there, after a darker plunge into delusion, that he would take his final step into neo-Nazism.
T h e rain poured off the thatched roof of Anglin’s bamboo hut. Outside, tropical ferns shook with water. He’d arrived in the jungle, but it had been a winding journey. After leaving Columbus, he’d meandered through Asia until he reached the Philippines. He’d been reading Joseph Campbell, the writer famous for his work on mythology, and thinking about how to forge his own heroic narrative.
Anglin wanted a tribe—a real one. And he’d been looking. He hiked into the mountains with boys who carried drinking water in plastic Monsanto fertilizer jugs and went to Manila to find squatter villages where people “drink from sewers.” He explored the island of Mindanao on a moped and posed for selfies wearing a wry expression, a Marlboro hanging from his lips or tucked behind his ear. In one video he made, he stood shirtless on a beach describing the horrors of deforestation.
Anglin established a home base at the Sampaguita Tourist Inn, a $10-a-night hotel in Davao City, where he lived for months at a time off money his father sent. He liked to sit in the lobby with his laptop, drinking Nescafé and planning his next move. At the time, Davao was ruled with an iron fist by its authoritarian mayor, Rodrigo Duterte, now the president of the Philippines. (Anglin shook Duterte’s hand once and has made praise of the violence-prone politician a staple of Daily Stormer coverage.) It was the third-biggest city in the country but hardly a mecca for 20-something Americans. That’s why, in 2009, Anglin came to the attention of Edward, a 33-year-old New Yorker and the only other young American in the hotel. Edward, who asked that his last name be withheld, spent months at a time in the Philippines over the course of several years. He and Anglin became friends and went out to eat together almost every day.
Edward thought Anglin was fun and intelligent, with excellent taste in music. Edward had once run a small music-distribution business, but Anglin still introduced him to new bands, such as the Felice Brothers. Yet there was something off about Anglin, who said he wasn’t going back to the United States. “He was running away, clearly,” Edward told me. But from what? Edward recalls Anglin claiming that he’d been trafficking cocaine back home. “I honestly thought that’s why he’d left America,” he said.
Edward told me that Anglin acted like he was smarter than everyone else, and in a country where young white men are “treated in a godly way,” Anglin’s ego only grew. He had a complex about being short—he claims to be 5 foot 7, but several people I talked with put his height closer to 5 foot 4. In Davao, however, Anglin hit on every pretty young Filipina he saw and had success with many of them, sometimes taking advantage of their hope that an American husband could be an exit from poverty. Most of these girls were 18 or 19 years old, but Edward says some were younger. He remembers Anglin once picking up a 14-year-old in a bar and bringing her back to the Sampaguita to spend the night.
Yet Anglin was troubled by the ways Western society seemed to have degraded Filipino culture—he despised Christian missionaries and was appalled to see Filipinos listening to Lady Gaga instead of traditional music. “You see the way white people—and it is white people—went around the whole world … and fucked everybody,” he said in a podcast he recorded at the time. “I think the white race should be bred out.” He voiced similar sentiments in other podcasts.
“Colonel Kurtz meets Travis Bickle” is how a friend described Anglin’s mind-set as he headed into the jungle.
Then, on one of his forays from Davao, Anglin found his tribe. In 2011, he spent several weeks in a small village in southern Mindanao among the T’boli people, who live around mountain lakes covered in lotus blossoms. The T’boli are known for their traditional music, dance, beadwork, and weaving. “Their life was all so beautiful and amazing,” Anglin said on one of his podcasts.
Here was his return to nature. Anglin reported being about a day’s journey from electricity. Everything in the forest had spiritual significance for the T’boli. Each time Anglin crossed a stream, for example, he rubbed a wet stone across his face, hands, and feet to ask for guidance from the water spirit, which always knew the path through the forest. “I love these people,” Anglin said after a trial run in the jungle.
Anglin emerged with a plan: He would return to the jungle, build his own hut, and exist “completely outside of the system.” He would live with the T’boli at first, but he hoped to push even deeper into the mountains in search of Muslim tribes and “people that are still fighting with spears, killing miners and loggers.” He would also, counterintuitively, launch a website called Reality Situation to chronicle his new off-the-grid life. He put his belongings up for sale to raise cash for a horse, chickens, and ducks. There was a messianic zeal to his plan. “I’m going to do it,” he told another truther. “I’m going to live without money. And I’m going to set up a community that does the same. And I’m going to video tape it.” Anglin launched Reality Situation in January 2012, before heading back into the jungle. He was reading about UFOs and downloading paranormal podcasts. He was still obsessed with brain-chipping and TV mind control, fake moon landings and satanic sex rituals. His vision of a rainforest utopia was no less unhinged.
“Colonel Kurtz meets Travis Bickle” is how Edward described his friend’s mind-set around this time. “He was going to go back to the jungle to be the white savior and teach everybody how to grow crops properly.” And according to Edward, Anglin had another motivation: “He was going out there to marry two 16-year-old Muslim girls. He’d already met them and was buying them livestock for the dowry.” For the next six months, Anglin all but disappeared from the internet. In May 2012, he put up a lone post on Reality Situation in which he said he was planting trees, developing sustainable farming, and educating children about the dangers of Christianity and capitalism. Then he vanished again.
What happened to him in the jungle is a mystery. He later said he’d drunk too much of a “strong coconut wine” and “began to feel deeply depressed and alone.” His fanciful notion of “picking fruit and hunting wild boar” and being treated like a hero was, he realized, a “romantic fantasy.” Again, he blamed others for his failure. This time it was the Filipinos’ fault.
“Their minds were as primitive as their mode of living,” Anglin wrote, declaring that only among the “European race” would he feel at home. “It is only they who share my blood, and can understand my soul.” Edward saw him one last time, back in Davao. Anglin seemed transformed. He’d shaved his head and was dressed in a street-tough style, with a white tank top and baggy jeans. He was angry, especially about the subject of race-mixing. He also had a gun.
Anglin told Edward that the tribe had rejected him. “They’re a bunch of idiots,” Anglin said. “Monkeys.” He shut down Reality Situation, left the Philippines, and, after a stint in China, returned to Ohio. In December 2012, he launched a new site called Total Fascism, an earnest precursor to The Daily Stormer. “From the flaming wreckage of the alleged Truth Movement,” Anglin wrote, “a group of people has begun to emerge … We have found the truth. We have found the light. We have found Adolf Hitler.” A ng lin sequestered himself on the family farm again. Now he advocated “brutal extremism.” He wrote that he was not calling for violence “at this time” but added: “If I thought violence could work to free us of the yolk [ sic ] of the Jew, I would absolutely and unequivocally endorse it.” He developed an almost religious infatuation with Vladimir Putin, or “Czar Putin I, defender of human civilization,” as Anglin called him. For Anglin, Putin was a great white savior, a “being of immense power.” This fixation on strength is common among members of the alt-right, but Anglin took his devotion to power to a wild extreme. “He thinks in terms of a fascist Disney film,” a prominent white nationalist who has collaborated with Anglin told me, adding that Anglin believed that if he tried hard enough, disciples would flock to his cultish vision and help him summon another Hitler into existence. “He imagines he has some magical power.” Over his heart, he’d tattooed the spidery black sun of the Sonnenrad, an occult symbol in a mystical strain of neo-Nazism whose followers embrace such notions as Hitler being an avatar of Vishnu.
In March 2013, Anglin, or perhaps his father, used Greg’s email address to register the domain name for The Daily Stormer. Then Anglin left the country again. First he went to Greece, where he stayed in a hostel in Athens for three months. He found work giving tours of the Parthenon and other sites and attended meetings of Golden Dawn, Greece’s ultranationalist far-right political party.
On July 4, 2013, The Daily Stormer launched in beta mode, replacing Total Fascism. Anglin named his new site after Der Stürmer , a virulently anti-Semitic Nazi-era weekly that Hitler had read devoutly. (As Anglin would later write, the official policy of his site was: “Jews should be exterminated.”) The Daily Stormer was unlike anything else in white nationalism: The design was clean, the posts were infused with Anglin’s wry humor. It was Nazi Gawker, and it caught on.
Anglin’s editorial approach, which he has explained in various podcasts, borrowed from both Mein Kampf and Saul Alinsky’s Rules for Radicals.
From Hitler, Anglin learned to dumb down his argument: Good guys versus bad guys. A few themes repeated over and over. From Alinsky, he learned counterculture tactics: Attack people instead of institutions. Isolate targets. Make threats. One Alinsky rule in particular stuck with Anglin: “Ridicule is man’s most potent weapon.” Ridicule was hard to counter. So Anglin mocked. He made people laugh. “The whole point is to make something outrageous,” he said on the site. “It’s about creating a giant spectacle, a media spectacle that desensitizes people to these ideas.” He considered jokes about Josef Mengele training dogs to rape Jewish women “comedy gold.” In 2014, Anglin was living in Europe when he found a partner in Andrew Auernheimer, a.k.a. “weev,” a neo-Nazi hacker and troll. Auernheimer grew up in the Ozarks and went to federal prison in 2013 on identity-theft and hacking charges. After his conviction was vacated on appeal a year later, he moved abroad. He now lives in Transnistria, a small, Russia-backed breakaway region on Moldova’s eastern border.
Auernheimer ran the tech side of The Daily Stormer, and also contributed his considerable gifts for subversion by making printers on U.S. college campuses pump out swastika-bedecked flyers for the site. “I don’t know what I would be doing if it wasn’t for him,” Anglin said in an interview with another white nationalist last year. “He’s the one basically holding the whole thing together.” Anglin, meanwhile, gained infamy for his troll attacks. In 2015, he tormented the University of Missouri during student protests against racist incidents on campus. He used Twitter hashtags to seed fake news into the conversation, falsely reporting that members of the KKK had arrived to burn crosses on campus and were working with university police. He claimed that Klansmen had gunned down protesters and posted a random photo of a black man in a hospital bed. As his rumors spread, the campus freaked out.
But Anglin wasn’t content to troll alone. He wrote instructions for his followers on how to register anonymous email accounts, set up virtual private networks, mask their IP addresses, and forge Twitter and text-message conversations. He created images and slogans for them to use. Anglin warned his Stormers not to threaten targets with violence, a disclaimer meant to shield him from law enforcement.
Still, Anglin’s mob was a terror. He sicced his trolls on American University’s first black female student-body president. He had them go after Erin Schrode, a Jewish woman running for Congress in California, as well as Jonah Goldberg and David French, writers for National Review.
As I reported this story, Anglin sent his trolls after me, too, and my interactions with them confirmed my suspicions that they were, by and large, lost boys who felt rejected by society and, thanks to the internet, could lash out in new and destructive ways. When I tried to draw them out about their lives, some admitted that they struggled with women. One told me that he struggled with his own homosexuality. Most imagined they were rising up against an unchecked political correctness that maligned white males. The more the liberal establishment chose to revile them, the more they embraced their role as villains.
In recent years, psychologists have found a powerful connection between trolling and what’s known as the “dark tetrad” of personality traits: psychopathy, sadism, narcissism, and Machiavellianism. The first two traits are significant predictors of trolling behavior, and all four traits correlate with enjoyment of trolling. Research published in June by Natalie Sest and Evita March, two Australian scholars, shows that trolls tend to be high in cognitive empathy, meaning they can understand emotional suffering in others, but low in affective empathy, meaning they don’t care about the pain they cause. They are, in short, skilled and ruthless manipulators.
In the summer of 2015, another great white savior—himself a troll—appeared to Anglin, this time gliding down a golden escalator in Manhattan in front of a crowd of paid extras. A few days after Donald Trump declared his presidential candidacy—launching into an attack on Mexican “rapists”—Anglin endorsed him as “the one man who actually represents our interests.” Anglin immediately put all his resources toward willing a Trump presidency into reality. He churned out cheerleader posts and deployed his trolls on behalf of Trump, directing several of his nastiest attacks at Jewish journalists who were critical of the candidate or his associates.
Anglin hadn’t been to the polls in years, but he wasn’t going to miss a chance to vote for Trump. His absentee ballot arrived in Ohio from Krasnodar, a city in southwest Russia near the Black Sea, according to Franklin County records. That the Russian government wouldn’t know about an American inside its borders publishing a major neo-Nazi website seems improbable.
Anglin worshipped Putin, and seemed like exactly the type of online agitator Russia might use to sow chaos during the U.S. election. In March, Auernheimer told Daily Stormer commenters that he was setting up the site’s forum on “a much beefier server in the Russian Federation.” Anglin would later swear on his site—“under penalty of perjury”—that he’d never taken money or direction from the Russian government.
But whether Anglin knew it or not, his site appears to have gotten a boost from someone in Russia. A collective of data scientists called Susan Bourbaki Anthony conducted an analysis of The Daily Stormer’s reach on Twitter from February 2 to March 2, 2017, and found that Anglin’s content was being spread by a mysterious network of accounts. This network, which is still active, has amplified divisiveness in American political discourse on Twitter since at least early in the year. It includes bots and “sock puppets” (accounts operated by actual people under false identities), and essentially shuts down each night from 5 p.m. to 11:30 p.m. on the East Coast—midnight to 6:30 a.m. local time in Moscow and St. Petersburg.
The election helped elevate The Daily Stormer from one of several influential white-nationalist sites to a key platform of the alt-right, though the site wasn’t nearly as popular as Anglin wanted people to think. He and Auernheimer often bragged that it got millions of unique visitors a month, but comScore put the site’s monthly visitors closer to 70,000. Still, Anglin knew how to make noise—and by any metric, the post-Trump trend line for his site pointed up.
In May 2016, CNN’s Wolf Blitzer had asked then-candidate Trump about the death threats and harassment Anglin’s army had leveled against the journalist Julia Ioffe after she wrote a profile of Melania Trump for GQ magazine. (Ioffe now works at The Atlantic.
) “I don’t have a message to the fans,” Trump said.
The fans.
His people. “We interpret that as an endorsement,” Anglin told a reporter when asked about Trump’s refusal to condemn white nationalists.
I went back to Columbus in mid-February. I’d learned that Anglin might be in town for a legal hearing—for some reason, he’d filed a motion to expunge his 2006 misdemeanor drug conviction—and I intended to approach him at the courthouse.
The day I arrived, the city’s weekly paper, Columbus Alive , published a long feature about Anglin. The next evening, Anglin walked into a supermarket where a protester who’d been quoted in the story worked. She later told me that despite the bitter cold, he wore only a white T‑shirt and black track pants. Holding a can of Monster Ultra Blue, an energy drink, he approached her and looked her in the eye. “How’s it going?” he said, before strolling off into the night.
I was staying near the old Exile bar, once the premier leather joint in Columbus and an early moneymaker for the Anglin family. The Exile was one of two gay bars that had been owned by Anglin’s uncle Todd until he died of aids , after which Greg took over. The bars continued to stage foam parties and fetish nights while Greg, according to two sources, performed gay conversion therapy at his counseling practice. Greg had amassed a sizable, if shabby, real-estate portfolio in town, and I visited several of his properties, trying, unsuccessfully, to locate his neo-Nazi son.
I thought Anglin might be crashing with his childhood best friend, West Emerson, whose Facebook page included a “favorite” Hitler quote and alt-right references. Emerson was prone to bragging about his friendship with Anglin. He told more than one of my sources that he and Anglin communicated every day. In messages he sent to one source, he claimed to be talking with Anglin “now as we speak.” But when I reached out to Emerson, he refused to talk with me. (Emerson told The Atlantic that he did not share Anglin’s views, hadn’t seen him in 15 years, and didn’t even know his phone number.) A week after the Columbus Alive story was published, Anglin doxed the reporters. He published their contact information and put up photos of their homes and cars, their spouses and children, including a six-month-old infant. “Take action,” he told his trolls, who harassed the targets with calls, emails, and offensive mail. The reporters didn’t feel safe in their homes. Police had to increase patrols in their neighborhoods.
One evening, I drove to what I thought might be Anglin’s mother’s house. It was dusk, and the only light on was in the living room. From a distance, I thought I saw a thin woman standing by a window, but by the time I parked my car, the light had gone off. I rang the doorbell, then knocked and waited a few minutes. There was no answer. I quickly scratched out a note—“I need somebody who loves Andy to speak on his behalf”—and stuck it in the door. A few days later, I left Katie a voicemail at work. She never responded.
I’d been at the right house. Anglin later posted a photo of my note and accused me of engaging in a “vicious scorched-earth campaign” to threaten his family and friends. He labeled me a terrorist and said I was trying to silence him. Angry Stormers called and emailed me. One tried to feed me false information about Anglin’s whereabouts. I received half a dozen spoof emails trying to infect my computer with a virus.
But Anglin himself remained elusive. His hearing was scheduled for 10 a.m. on a Monday morning. But the night before, a waterline burst and damaged five floors of the courthouse, including the one where his hearing was to take place. All proceedings on those floors were postponed. I went to the courthouse at nine the next morning anyway, hoping that Anglin might still show up. It took me some time to talk my way up to the right floor and find a clerk. She told me that Anglin and his lawyer had come in early and had his record expunged. I had just missed him.
I n April, Tanya Gersh and the Southern Poverty Law Center sued Anglin in federal court for invasion of privacy, intentional infliction of emotional distress, and violation of a Montana anti-intimidation statute. He’d have to answer for what the lawsuit called a “campaign of terror” that had given Gersh panic attacks and landed her in trauma therapy.
That she had to file a civil suit instead of pressing criminal charges was telling. There was little the authorities could do about the hate speech The Daily Stormer published, which is protected under the First Amendment—and Anglin knew it. He often mentions Brandenburg v. Ohio , a Supreme Court case that addressed a fiery oration by Clarence Brandenburg, a Klansman, in 1964 on a farm outside Cincinnati. Brandenburg preached violently about Jews and blacks and suggested that if the government continued to suppress white people, “revengeance” might be taken. The Court ruled that his ravings were protected because they were too abstract to incite “imminent lawless action” and did not meet the previously established “clear and present danger” standard. This “Brandenburg test” defines how far hatemongers can go, and Anglin has been careful to keep his violent language vague. He is, for example, within his rights to publish that “Moslems should be exterminated.” He is not, however, allowed to threaten a specific Muslim with extermination.
Where he has potentially crossed a legal line is with the trolling he orchestrates. Cyberstalking—defined as using the internet in a way that “causes, attempts to cause, or would be reasonably expected to cause substantial emotional distress to a person”—is a federal crime punishable by up to five years in prison and a $250,000 fine. (Many states also criminalize cyberstalking.) But this activity is difficult to prosecute when trolls know how to conceal their identity. A lone troll might leave his victim only one voicemail telling her to burn in an oven, which would fail to meet the criteria for cyberstalking. When hundreds of trolls do the same, though, the effect can be terrifying. “It’s like a bee swarm,” says Danielle Citron, a professor at the University of Maryland’s School of Law and a leading expert on cyberharassment. “You have a thousand bee stings. Each sting is painful. But it’s perceived as one awful, throbbing, giant mass.” Even if Anglin doesn’t participate in the harassment directly, however, he arguably solicits cyberstalking and aids and abets it, according to Citron. These are crimes in their own right—just not ones that law enforcement is prepared to take on. Few local police departments have the means to go after trolls, and Citron says that federal investigators who are swamped with child-pornography, fraud, and terrorism cases tend not to make cyberstalking investigations a priority.
And so Gersh had to go after Anglin in court. A week after she filed her suit, Auernheimer set up a crowdfunding campaign on WeSearchr, a platform run by Chuck Johnson, a far-right troll and propagandist who has claimed that he has ties to the Trump administration. Within a month, Stormers had raised more than $150,000 for Anglin’s legal defense. Anglin then hired Marc Randazza, a First Amendment lawyer who has represented Mike Cernovich, another far-right propagandist.
The lawsuit is scheduled to enter the pretrial stage in December. It marks the first time a notorious internet troll has been sued for instigating a campaign of harassment and intimidation. It could force the courts to decide whether calling for a troll attack—Anglin’s admonition to “hit ’em up”—is protected speech. The risk, however, is that if Anglin prevails in court, sadistic trolls will be free to tear across the internet with even greater abandon.
For his part, Randazza argues that restricting Anglin’s trolling would set a dangerous precedent. Anglin “has every right to ask people to share their views, no matter how abhorrent those views are,” Randazza told me. “This is the shitty price we have to pay for freedom.” T he alt-right leaders came to Charlottesville from far and wide this August for the largest gathering of white nationalists in more than a decade. Richard Spencer, Mike Enoch, Matthew Heimbach, Eli Mosley, even David Duke, the old Klansman who has taken up the new label in an effort to get hip to Millennial racism. All of them except Anglin.
“We are angry,” Anglin had written a few days before the rally. “There is a craving to return to an age of violence. We want a war.” Many of his underlings made the trip. Ready for street combat, some brought homemade shields painted with skulls. But Anglin was never one to put his body on the line.
Fields chanted “White Sharia”—Anglin’s phrase—before he drove his car into a crowd in Charlottesville.
By all reports, he had stayed in the U.S. after his court date in Columbus and gone even deeper underground over the spring and summer. The SPLC hired process servers to notify Anglin of the Gersh lawsuit, but they couldn’t find him anywhere—despite repeatedly visiting seven different addresses. At one apartment in Columbus, Anglin’s younger brother, Mitch, opened the door but refused to help, saying he “can’t do that” to his brother. At another address, the process servers got the impression that Anglin had barricaded himself inside.
Randazza mocked the SPLC’s inability to find his client. (Anglin would soon be fending off two more federal lawsuits: one filed by Dean Obeidallah, a Muslim American comedian and radio host who alleged that Anglin had libeled him, and another brought by Charlottesville residents against the alt-right leaders responsible for the deadly rally.) Anglin told CNN that he’d moved to Lagos, Nigeria, and when the network ran his lie the Stormers had a long, hard laugh. One tried to fool me into thinking that Anglin was in the Czech Republic. But I’d gotten a credible tip that he was holed up somewhere in the Midwest.
The Stormers had a private chat server through a company called Discord, and I used an alias to listen in as they talked amongst themselves about genocide, often in graphic terms. “All I want is to see [Jews] screaming in a pit of suffering on the soil of my homeland before I die,” Auernheimer wrote. “I don’t want wealth. I don’t want power. I just want their daughters tortured to death in front of them and to laugh and spit in their faces while they scream.” In July, Auernheimer posted a new rule in the Discord forum: “Do not talk to police … If we find out you have talked to the police for any reason you will be banned.” It appeared that law-enforcement officials might have finally taken an interest in Anglin’s operation. Perhaps in response, Anglin grew even more maniacal. He went on a popular alt-right podcast and rambled to the baffled hosts about the “electric universe” and “deconstructing reality” and assured them that “as soon as we finally do exterminate these Jews, we’re going to be fighting aliens.” On his site, he pushed a “White Sharia” meme and published posts encouraging men to beat and rape women, take away their voting rights, and treat them like property. Women were “lower than dogs,” he wrote. “They are all vicious, amoral, mindless whores who do not deserve respect or admiration of any sort.” The meme distressed and confused many of his readers, especially the few women who frequented the site. Other Stormers couldn’t understand why Anglin wanted to promote a concept associated with Islam. But Anglin was relentless, and after dozens of posts, his meme caught on.
“White Sharia” was one of the phrases members of the alt-right shouted in Charlottesville in August. It was what James Alex Fields Jr. chanted before he drove his car into the crowd of antiracist protesters and was charged with the murder of Heather Heyer.
Anglin was triumphant—here was his vision for the Whitefish march, come to fruition. He’d done as much as anyone to promote the rally, turning his site into a key organizing hub. “The Alt-Right has risen. There is no going back from this,” he wrote. “This was our Beer Hall Putsch.” And when Trump again refused to denounce the white nationalists, Anglin exulted. “No condemnation at all,” he wrote. “Really, really good. God bless him.” The day after the rally, Anglin wrote a post saying that Heyer was an “overweight slob” and claiming that “most people are glad she is dead.” Within a day it racked up more Facebook shares than any previous Daily Stormer post. On the private chat server, Auernheimer hatched a plan to send Nazis to Heyer’s funeral. But for all the talk on the alt-right about expanding the Overton Window, Anglin had failed to see that the more savage his words grew, the smaller, ultimately, his sphere of influence became.
The Daily Stormer was dropped by GoDaddy, its domain registrar; then by Zoho and SendGrid, which provided email services; and by Cloudflare, which protected against cyberattacks. The site went dark. Other alt-right sites were also shut down. Discord shut down the server where Anglin and his associates conspired, along with chat rooms for other racist groups. Richard Spencer had warned about “The Great Shuttening,” and now here it was.
Anglin and Auernheimer scrambled to get The Daily Stormer back online. They were rejected by half a dozen other domain registrars. Even Rozcom, the Russian national registrar, denied them. As of press time, they had managed to get a version of the site up, hosted in the Philippines and rebranded as “America’s largest pro-Duterte news site.” But Anglin had lost many readers, and his comments section—which provided the real energy for the community he’d built—had been decimated.
Recommended Reading The Lost Boys Angela Nagle His Kampf Graeme Wood Stop Training Police Like They’re Joining the Military Rosa Brooks His panic was almost palpable as he tried to walk back the fearsome reputation he’d cultivated. “I am not actually a ‘Neo‑Nazi White Supremacist,’ nor do I know what that is,” he wrote in mid-September. He claimed that his violent rhetoric was never sincere but simply a way to mock those who slap a Nazi label on anyone who “stands up for white people’s rights” or “refuses to believe the stupid lies about Hitler” or rejects the “alleged Holocaust” narrative. Anglin now shared what he said had been his true editorial approach all along: “Ironic Nazism disguised as real Nazism disguised as ironic Nazism.” Five days later, he posted about “the world being ruled either by reptiles from another dimension or some other type of reptilian or insectoid race of aliens.” Where the irony started and stopped was hard to know. I emailed Anglin one more time asking for an interview. He didn’t answer. The next day, he wrote a post calling for the mass execution of journalists. “I want to see pieces of journalist brains splattered across walls,” he wrote.
At times while tracking Anglin, I couldn’t help but feel that he was a method actor so committed and demented, on such a long and heavy trip, that he’d permanently lost himself in his role. I thought of a quote from Kurt Vonnegut: “We are what we pretend to be, so we must be careful about what we pretend to be.” Like so many emotionally damaged young men, Anglin had chosen to be someone, or something, bigger than himself on the internet, something ferocious to cover up the frailty he couldn’t abide in himself. Fantasy overtook reality, and now he couldn’t escape. Who was he if not the king of the Nazi trolls?
"
|
1,042 | 2,017 |
"Second Life Still Has 600,000 Regular Users - The Atlantic"
|
"https://www.theatlantic.com/magazine/archive/2017/12/second-life-leslie-jamison/544149"
|
"Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe Explore The making of an American Nazi, the evolution of the alt-right, and the rise and fall of ‘Rolling Stone.’ Plus, China’s race to find aliens first, ‘Shark Tank’ nation, and more.
The Making of an American Nazi Luke O’Brien The Lost Boys Angela Nagle What Happens If China Makes First Contact? Ross Andersen The Digital Ruins of a Forgotten Future Leslie Jamison What Would Miss Rumphius Do? Nathan Perl-Rosenthal Republican Is Not a Synonym for Racist Peter Beinart A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
The Digital Ruins of a Forgotten Future Second Life was supposed to be the future of the internet, but then Facebook came along. Yet many people still spend hours each day inhabiting this virtual realm. Their stories—and the world they’ve built—illuminate the promise and limitations of online life.
Gidge Uriza lives in an elegant wooden house with large glass windows overlooking a glittering creek, fringed by weeping willows and meadows twinkling with fireflies. She keeps buying new swimming pools because she keeps falling in love with different ones. The current specimen is a teal lozenge with a waterfall cascading from its archway of stones. Gidge spends her days lounging in a swimsuit on her poolside patio, or else tucked under a lacy comforter, wearing nothing but a bra and bathrobe, with a chocolate-glazed donut perched on the pile of books beside her. “Good morning girls,” she writes on her blog one day. “I’m slow moving, trying to get out of bed this morning, but when I’m surrounded by my pretty pink bed it’s difficult to get out and away like I should.” In another life, the one most people would call “real,” Gidge Uriza is Bridgette McNeal, an Atlanta mother who works eight-hour days at a call center and is raising a 14-year-old son, a 7-year-old daughter, and severely autistic twins, now 13. Her days are full of the selflessness and endless mundanity of raising children with special needs: giving her twins baths after they have soiled themselves (they still wear diapers, and most likely always will), baking applesauce bread with one to calm him down after a tantrum, asking the other to stop playing “the Barney theme song slowed down to sound like some demonic dirge.” One day, she takes all four kids to a nature center for an idyllic afternoon that gets interrupted by the reality of changing an adolescent’s diaper in a musty bathroom.
But each morning, before all that—before getting the kids ready for school and putting in eight hours at the call center, before getting dinner on the table or keeping peace during the meal, before giving baths and collapsing into bed—Bridgette spends an hour and a half on the online platform Second Life , where she lives in a sleek paradise of her own devising.
Good morning girls. I’m slow moving, trying to get out of bed this morning.
She wakes up at 5:30 to inhabit a life in which she has the luxury of never getting out of bed at all.
W hat is Second Life? The short answer is that it’s a virtual world that launched in 2003 and was hailed by some as the future of the internet. The longer answer is that it’s a landscape full of goth cities and preciously tattered beach shanties, vampire castles and tropical islands and rainforest temples and dinosaur stomping grounds, disco-ball-glittering nightclubs and trippy giant chess games. In 2013, in honor of Second Life’s tenth birthday, Linden Lab—the company that created it—released an infographic charting its progress : 36 million accounts had been created, and their users had spent 217,266 cumulative years online, inhabiting an ever-expanding territory that comprised almost 700 square miles. Many are tempted to call Second Life a game, but two years after its launch, Linden Lab circulated a memo to employees insisting that no one refer to it as that. It was a platform.
This was meant to suggest something more holistic, more immersive, and more encompassing.
Second Life has no specific goals. Its vast landscape consists entirely of user-generated content, which means that everything you see has been built by someone else—an avatar controlled by a live human user. These avatars build and buy homes, form friendships, hook up, get married, and make money. They celebrate their “rez day,” the online equivalent of a birthday: the anniversary of the day they joined. At church, they cannot take physical communion—the corporeality of that ritual is impossible—but they can bring the stories of their faith to life. At their cathedral on Epiphany Island, the Anglicans of Second Life summon rolling thunder on Good Friday, or a sudden sunrise at the moment in the Easter service when the pastor pronounces, “He is risen.” As one Second Life handbook puts it: “From your point of view, SL works as if you were a god.” In truth, in the years since its peak in the mid‑2000s, Second Life has become something more like a magnet for mockery. When I told friends that I was working on a story about it, their faces almost always followed the same trajectory of reactions: a blank expression, a brief flash of recognition, and then a mildly bemused look.
Is that still around? Second Life is no longer the thing you joke about; it’s the thing you haven’t bothered to joke about for years.
Many observers expected monthly user numbers to keep rising after they hit 1 million in 2007, but instead they peaked—and have, in the years since, stalled at about 800,000. An estimated 20 to 30 percent are first-time users who never return. Just a few years after declaring Second Life the future of the internet, the tech world moved on. As a 2011 piece in Slate proclaimed, joining a chorus of disenchantment: “Looking back, the future didn’t last long.” Explore the December 2017 Issue Check out more from this issue and find your next story to read.
But if Second Life promised a future in which people would spend hours each day inhabiting their online identity, haven’t we found ourselves inside it? Only it’s come to pass on Facebook, Instagram, and Twitter instead. As I learned more about Second Life, and spent more time exploring it, it started to seem less like an obsolete relic and more like a distorted mirror reflecting the world many of us live in.
Perhaps Second Life inspires an urge to ridicule not because it’s unrecognizable, but because it takes a recognizable impulse and carries it past the bounds of comfort, into a kind of uncanny valley: not just the promise of an online voice, but an online body; not just checking Twitter on your phone, but forgetting to eat because you’re dancing at an online club; not just a curated version of your real life, but a separate existence entirely. It crystallizes the simultaneous siren call and shame of wanting an alternate life. It raises questions about where unfettered fantasy leads, as well as about how we navigate the boundary between the virtual and the real.
As virtual-reality technology grows more advanced, it promises to deliver a more fully realized version of what many believed Second Life would offer: total immersion in another world. And as our actual world keeps delivering weekly horrors—another mass shooting, another hurricane, another tweet from the president threatening nuclear war—the appeal of that alternate world keeps deepening, along with our doubts about what it means to find ourselves drawn to it.
F rom 2004 to 2007, an anthropologist named Tom Boellstorff inhabited Second Life as an embedded ethnographer, naming his avatar Tom Bukowski and building himself a home and office called Ethnographia. His immersive approach was anchored by the premise that the world of Second Life is just as “real” as any other, and that he was justified in studying Second Life on “its own terms” rather than feeling obligated to understand people’s virtual identities primarily in terms of their offline lives. His book Coming of Age in Second Life , titled in homage to Margaret Mead’s classic, documents the texture of the platform’s digital culture. He finds that making “small talk about lag [streaming delays in SL] is like talking about the weather in RL,” and interviews an avatar named Wendy, whose creator always makes her go to sleep before she logs out. “So the actual world is Wendy’s dream, until she wakes up again in Second Life?,” Boellstorff recalls asking her, and then: “I could have sworn a smile passed across Wendy’s … face as she said, ‘Yup. Indeed.’ ” In Hinduism, the concept of an avatar refers to the incarnation of a deity on Earth, among mortals. In Second Life, it’s your body—an ongoing act of self-expression. One woman described her avatar to Boellstorff like this: “If I take a zipper and pull her out of me, that’s who I am.” Female avatars tend to be thin and impossibly busty; male avatars are young and muscular; almost all avatars are vaguely cartoonish in their beauty. These avatars communicate through chat windows, or by using voice technology to actually speak with one another. They move by walking, flying, teleporting, and clicking on “poseballs,” literal floating orbs that animate avatars into various actions: dancing, karate, pretty much every sexual act you can imagine. Not surprisingly, many users come to Second Life for the possibilities of digital sex—sex without corporeal bodies, without real names, without the constraints of gravity, often with elaborate textual commentary.
The local currency in Second Life is the Linden Dollar, and recent exchange rates put the Linden at just less than half a cent. In the 10 years following its launch, Second Life users spent $3.2 billion of real money on in‑world transactions. The first Second Life millionaire, a digital-real-estate tycoon who goes by Anshe Chung, graced the cover of Businessweek in 2006, and by 2007, the GDP of Second Life was larger than that of several small countries. In the vast digital Marketplace, you can buy a wedding gown for 4,000 Lindens (just over $16) or a ruby-colored corset with fur wings for just under 350 Lindens (about $1.50). You can even buy another body entirely: different skin, different hair, a pair of horns, genitalia of all shapes and sizes. A private island currently costs almost 150,000 Lindens (the price is fixed at $600), while the Millennium II Super Yacht costs 20,000 Lindens (just over $80) and comes with more than 300 animations attached to its beds and trio of hot tubs, designed to allow avatars to enact a vast range of sexual fantasies.
The number of Second Life users peaked just as Facebook started to explode. The rise of Facebook wasn’t the problem of a competing brand so much as the problem of a competing model: It seemed that people wanted a curated version of real life more than they wanted another life entirely—that they wanted to become their most flattering profile picture more than they wanted to become a wholly separate avatar. But maybe Facebook and Second Life aren’t so different in their appeal. Both find traction in the allure of inhabiting a selective self, whether built from the materials of lived experience (camping-trip photos and witty observations about brunch) or from the impossibilities that lived experience precludes: an ideal body, an ideal romance, an ideal home.
Bridgette McNeal, the Atlanta mother of four, has been on Second Life for just over a decade. She named her avatar Gidge after what bullies called her in high school. While Bridgette is middle-aged, her avatar is a lithe 20-something whom she describes as “perfect me—if I’d never eaten sugar or had children.” During her early days on Second Life, Bridgette’s husband created an avatar as well, and the two of them would go on Second Life dates together, a blond Amazon and a squat silver robot, while sitting at their laptops in their study at home. It was often the only way they could go on dates, because their kids’ special needs made finding babysitters difficult. When we spoke, Bridgette described her Second Life home as a refuge that grants permission. “When I step into that space, I’m afforded the luxury of being selfish.” She invoked Virginia Woolf: “It’s like a room of my own.” Her virtual home is full of objects she could never keep in her real home because her kids might break or eat them—jewelry on dishes, knickknacks on tables, makeup on the counter.
In addition to the blog that documents her digital existence , with its marble pools and frilly, spearmint-green bikinis, Bridgette keeps a blog devoted to her daily life as a parent. It’s honest and hilarious and full of heartbreaking candor. Recounting the afternoon spent with her kids at the nature center, she describes looking at a bald eagle: “Some asshole shot this bald eagle with an arrow. He lost most of one wing because of it and can’t fly. He’s kept safe here at this retreat we visited a few days ago. Sometimes I think the husband and I feel a little bit like him. Trapped. Nothing really wrong, we’ve got food and shelter and what we need. But we are trapped for the rest of our lives by autism. We’ll never be free.” When I asked Bridgette about the allure of Second Life, she said it can be easy to succumb to the temptation to pour yourself into it when you should be tending to real life. I asked whether she had ever slipped close to that, and she said she’d certainly felt the pull at times. “You’re thin and beautiful. No one’s asking you to change a diaper,” she told me. “But you can burn out on that. You don’t want to leave, but you don’t want to do it anymore, either.” S econd Life was invented by a man named Philip Rosedale, the son of a U.S. Navy carrier pilot and an English teacher. As a boy, he was driven by an outsize sense of ambition. He can remember standing near the woodpile in his family’s backyard and thinking, “Why am I here, and how am I different from everybody else?” As a teenager in the mid‑’80s, he used an early-model PC to zoom in on a graphic representation of a Mandelbrot set, an infinitely recursive fractal image that just kept getting more and more detailed as he got closer and closer. At a certain point, he told me, he realized he was looking at a graphic larger than the Earth: “We could walk along the surface our whole lives, and never even begin to see everything.” That’s when he realized that “the coolest thing you could do with a computer would be to build a world.” In 1999, just as Rosedale was starting Linden Lab, he attended Burning Man, the annual festival of performance art, sculptural installations, and hallucinogenic hedonism in the middle of the Nevada desert. While he was there, he told me, something “inexplicable” happened to his personality. “You feel like you’re high, without any drugs or anything. You just feel connected to people in a way that you don’t normally.” He went to a rave in an Airstream trailer, watched trapeze artists swing across the desert, and lay in a hookah lounge piled with hundreds of Persian rugs. Burning Man didn’t give Rosedale the idea for Second Life—he’d been imagining a digital world for years—but it helped him understand the energy he wanted to summon: a place where people could make the world whatever they wanted it to be.
This was the dream, but it was a hard sell for early investors. Linden Lab was proposing a world built by amateurs, and sustained by a different kind of revenue model—based not on paid subscriptions, but on commerce generated in-world. One of Second Life’s designers recalled investors’ skepticism: “Creativity was supposed to be a dark art that only Spielberg and Lucas could do.” As part of selling Second Life as a world, rather than a game, Linden Lab hired a writer to work as an “embedded journalist.” This was Wagner James Au, who ended up chronicling the early years of Second Life on a blog (still running) called “ New World Notes ,” and then, after his employment with Linden Lab ended, in a book called The Making of Second Life.
In the book, Au profiles some of Second Life’s most important early builders: an avatar named Spider Mandala (who was managing a Midwestern gas station offline) and another named Catherine Omega , who was a “punky brunette … with a utility belt” in Second Life, but offline was squatting in a condemned apartment building in Vancouver, a building that had no running water and was populated mainly by addicts, where she used a soup can to catch a wireless signal from nearby office buildings so she could run Second Life on her laptop.
Rosedale told me about the thrill of those early days, when Second Life’s potential felt unbridled. No one else was doing what he and his team were doing, he remembered: “We used to say that our only competition was real life.” He said there was a period in 2007 when more than 500 articles a day were written about Linden Lab’s work. Rosedale loved to explore Second Life as an avatar named Philip Linden. “I was like a god,” he told me. He envisioned a future in which his grandchildren would see the real world as a kind of “museum or theater,” while most work and relationships happened in virtual realms like Second Life. “In some sense,” he told Au in 2007, “I think we will see the entire physical world as being kind of left behind.” A lice Krueger first started noticing the symptoms of her illness when she was 20 years old. During fieldwork for a college biology class, crouching down to watch bugs eating leaves, she felt overwhelmed by heat. Standing in the grocery store, she noticed that it felt as if her entire left leg had disappeared—not just gone numb, but disappeared. Whenever she went to a doctor, she was told it was all in her head. “And it was all in my head,” she told me, 47 years later. “But in a different way than how they meant.” Alice was finally diagnosed with multiple sclerosis at the age of 50. By then she could barely walk. Her neighborhood association in Colorado prohibited her from building a ramp at the front of her house, so it was difficult for her to go anywhere. Her three children were 11, 13, and 15. She didn’t get to see her younger son’s high-school graduation, or his college campus. She started suffering intense pain in her lower back and eventually had to have surgery to repair spinal vertebrae that had fused together, then ended up getting multidrug-resistant staph from her time in the hospital. Her pain persisted, and she was diagnosed with a misalignment caused by the surgery itself, during which she had been suspended “like a rotisserie chicken” above the operating table. At the age of 57, Alice found herself housebound and unemployed, often in excruciating pain, largely cared for by her daughter. “I was looking at my four walls,” she told me, “and wondering if there could be more.” That’s when she found Second Life. She created an avatar named Gentle Heron, and loved seeking out waterslides—excited by the sheer thrill of doing what her body could not. As she kept exploring, she started inviting people she’d met online in disability chat rooms to join her. But that also meant she started to feel responsible for their experience, and eventually she founded a “cross-disability virtual community” in Second Life, now known as Virtual Ability, a group that occupies an archipelago of virtual islands and welcomes people with a wide range of disabilities—everything from Down syndrome to PTSD to manic depression. What unites its members, Alice told me, is their sense of not being fully included in the world.
While she was starting Virtual Ability, Alice also embarked on a real-life move: to the Smoky Mountains in Tennessee from Colorado, where she’d outlived her long-term disability benefits. (“I didn’t know you could do that,” I told her, and she replied, “Neither did I!”) When I asked her whether she felt like a different version of herself in Second Life, she rejected the proposition strenuously. Alice doesn’t particularly like the terms real and virtual.
To her, they imply a hierarchical distinction, suggesting that one part of her life is more “real” than the other, when her sense of self feels fully expressed in both. After our first conversation, she sent me 15 peer-reviewed scientific articles about digital avatars and embodiment. She doesn’t want Second Life misunderstood as a trivial diversion.
Alice told me about a man with Down syndrome who has become an important member of the Virtual Ability community. In real life, his disability is omnipresent, but on Second Life people can talk to him without even realizing he has Down’s. In the offline world, he lives with his parents—who were surprised to see he was capable of controlling his own avatar. After they eat dinner each night, as his parents are washing the dishes, he sits expectantly by the computer, waiting to return to Second Life, where he rents a duplex on an island called Cape Heron, part of the Virtual Ability archipelago. He has turned the entire upper level into a massive aquarium, so he can walk among the fish, and the lower level into a garden, where he keeps a pet reindeer and feeds it Cheerios. Alice says he doesn’t draw a firm boundary between Second Life and “reality,” and others in the community have been inspired by his approach, citing him when they talk about collapsing the border in their own minds.
W hen I initially envisioned writing this essay, I imagined falling under the thrall of Second Life: a wide-eyed observer seduced by the culture she had been dispatched to analyze. But being “in world” made me queasy from the start. I had pictured myself defending Second Life against the ways it had been dismissed as little more than a consolation prize for when “first life” doesn’t quite deliver. But instead I found myself wanting to write, Second Life makes me want to take a shower.
Intellectually, my respect deepened by the day, when I learned about a Middle Eastern woman who could move through the world of Second Life without a hijab, and when I talked with a legally blind woman whose avatar has a rooftop balcony and who could see the view from it (thanks to screen magnification) more clearly than the world beyond her screen. I heard about a veteran with PTSD who gave biweekly Italian cooking classes in an open-air gazebo, and I visited an online version of Yosemite created by a woman who had joined Second Life in the wake of several severe depressive episodes and hospitalizations. She uses an avatar named Jadyn Firehawk and spends up to 12 hours a day on Second Life, many of them devoted to refining her bespoke wonderland—full of waterfalls, sequoias, and horses named after important people in John Muir’s life—grateful that Second Life doesn’t ask her to inhabit an identity entirely contoured by her illness, unlike internet chat rooms focused on bipolar disorder that are all about being sick. “I live a well-rounded life on SL,” she told me. “It feeds all my other selves.” But despite my growing appreciation, and my fantasies of enchantment, a certain visceral distaste for Second Life endured—for the emptiness of its graphics, its nightclubs and mansions and pools and castles, their refusal of all the grit and imperfection that make the world feel like the world. Whenever I tried to describe Second Life, I found it nearly impossible—or at least impossible to make interesting—because description finds its traction in flaws and fissures, and exploring the world of Second Life was more like moving through postcards. Second Life was a world of visual clichés. Nothing was ragged or broken or dilapidated—or if it was dilapidated, it was because that particular aesthetic had been chosen from a series of prefab choices.
Of course, my aversion to Second Life—as well as my embrace of flaw and imperfection in the physical world—testified to my own good fortune as much as anything. When I move through the real world, I am buffered by my (relative) youth, my (relative) health, and my (relative) freedom. Who am I to begrudge those who have found in the reaches of Second Life what they couldn’t find offline? One day when Alice and I met up as avatars, she took me to a beach on one of the Virtual Ability islands and invited me to practice tai chi. All I needed to do was click on one of the poseballs levitating in the middle of a grassy circle, and it would automatically animate my avatar. But I did not feel that I was doing tai chi. I felt that I was sitting at my laptop, watching my two-dimensional avatar do tai chi.
I thought of Bridgette in Atlanta, waking up early to sit beside a virtual pool. She doesn’t get to smell the chlorine or the sunscreen, to feel the sun melt across her back or char her skin to peeling crisps. And yet Bridgette must get something powerful from sitting beside a virtual pool—pleasure that dwells not in the physical experience itself but in the anticipation, the documentation, the recollection, and the contrast to her daily obligations. Otherwise she wouldn’t wake up at 5:30 in the morning to do it.
F rom the beginning, I was terrible at navigating Second Life.
Body part failed to download , my interface kept saying. Second Life was supposed to give you the opportunity to perfect your body, but I couldn’t even summon a complete one. For my avatar, I’d chosen a punk-looking woman with cutoff shorts, a partially shaved head, and a ferret on her shoulder.
On my first day in-world, I wandered around Orientation Island like a drunk person trying to find a bathroom. The island was full of marble columns and trim greenery, with a faint soundtrack of gurgling water, but it looked less like a Delphic temple and more like a corporate retreat center inspired by a Delphic temple. The graphics seemed incomplete and uncompelling, the motion full of glitches and lags. This wasn’t the grit and struggle of reality; it was more like a stage set with the rickety scaffolding of its facade exposed. I tried to talk to someone named Del Agnos, but got nothing. I felt surprisingly ashamed by his rebuff, transported back to the paralyzing shyness of my junior-high-school days.
At my first Second Life concert, I arrived excited for actual music in a virtual world: Many SL concerts are genuinely “live” insofar as they involve real musicians playing real music on instruments or singing into microphones hooked up to their computers. But I was trying to do too many things at once that afternoon: reply to 16 dangling work emails, make my stepdaughter a peanut-butter-and-jelly sandwich before her final rehearsal for a production of Peter Pan.
With my jam-sticky fingers, I clicked on a dance poseball and started a conga line—except no one joined my conga line; it just got me stuck between a potted plant and the stage, trying to conga and going nowhere. My embarrassment—more than any sense of having fun—was what made me feel implicated and engaged, aware that I was sharing the world with others.
Each time I signed off Second Life, I was eager to plunge back into the obligations of my ordinary life: Pick up my stepdaughter from drama class? Check! Reply to my department chair about hiring a replacement for the faculty member taking an unexpected leave? I was on it! These obligations felt real in a way that Second Life did not, and they allowed me to inhabit a particular version of myself as someone capable and necessary. It felt like returning to the air after struggling to find my breath underwater. I came up gasping, desperate, ready for entanglement and contact, ready to say: Yes! This is the real world! In all its vexed logistical glory! When I interviewed Philip Rosedale, he readily admitted that Second Life has always presented intrinsic difficulties to users—that it is hard for people to get comfortable moving, communicating, and building; that there is an “irreducible level of difficulty associated with mouse and keyboard” that Second Life “could never make easier.” Peter Gray, Linden Lab’s senior director of global communications, told me about what he called the “white-space problem”—having so much freedom that you can’t be entirely sure what you want to do—and admitted that entering Second Life can be like “getting dropped off in the middle of a foreign country.” When I spoke with users, however, the stubborn inaccessibility of Second Life seemed to have become a crucial part of their narratives as Second Life residents. They looked back on their early embarrassment with nostalgia. Gidge told me about the time someone had convinced her that she needed to buy a vagina, and she’d ended up wearing it on the outside of her pants. (She called this a classic #SecondLifeProblem.) A Swedish musician named Malin Östh—one of the performers at the concert where I’d started my abortive conga line—told me about attending her first Second Life concert, and her story wasn’t so different from mine: When she’d tried to get to the front of the crowd, she’d ended up accidentally flying onto the stage. Beforehand, she’d been sure the whole event would seem fake, but she was surprised by how mortified she felt, and this made her realize that she actually felt like she was among other people. I knew what she meant. If it feels like you are back in junior high school, then at least it feels like you are somewhere.
One woman put it like this : “Second Life doesn’t open itself up to you. It doesn’t hand you everything on a silver platter and tell you where to go next. It presents you with a world, and it lets you to your own devices, tutorial be damned.” But once you’ve figured it out, you can buy a thousand silver platters if you want to—or buy the yacht of your dreams, or build a virtual Yosemite. Rosedale believed that if a user could survive that initial purgatory, then her bond with the world of Second Life would be sealed for good: “If they stay more than four hours, they stay forever.” N eal Stephenson’s 1992 cyberpunk novel, Snow Crash , featuring a virtual “Metaverse,” is often cited as Second Life’s primary literary ancestor. But Rosedale assured me that by the time he read the novel he’d already been imagining Second Life for years (“Just ask my wife”). The hero of Snow Crash , aptly named Hiro Protagonist, lives with his roommate in a U-Stor-It unit, but in the Metaverse he is a sword-fighting prince and a legendary hacker. No surprise he spends so much time there: “It beats the shit out of the U-Stor-It.” Hiro’s double life gets at one of the core fantasies of Second Life: that it could invert all the metrics of real-world success, or render them obsolete; that it could create a radically democratic space because no one has any idea what anyone else’s position in the real world is. Many residents of Second Life understand it as a utopia connecting people from all over the world—across income levels, across disparate vocations and geographies and disabilities, a place where the ill can live in healthy bodies and the immobilized can move freely. Seraphina Brennan—a transgender woman who grew up in a small coal-mining community in Pennsylvania and could not afford to begin medically transitioning until her mid-20s—told me that Second Life had given her “the opportunity to appear as I truly felt inside,” because it was the first place where she could inhabit a female body.
In The Making of Second Life , Wagner James Au tells the story of an avatar named Bel Muse, a classic “California blonde” who is played by an African American woman. She led an early team of builders working on Nexus Prime, one of the first Second Life cities, and told Au that it was the first time she hadn’t encountered the prejudices she was accustomed to. In the real world, she said, “I have to make a good impression right away—I have to come off nice and articulate, right away. In Second Life, I didn’t have to. Because for once, I can pass.” But this anecdote—the fact that Bel Muse found respect more readily when she passed as white—confirms the persistence of racism more than it offers any proof of liberation from it.
Many Second Life users see it as offering an equal playing field, free from the strictures of class and race, but its preponderance of slender white bodies, most of them outfitted with the props of the leisure class, simply re-inscribe the same skewed ideals—and the same sense of “whiteness” as invisible default—that sustain the unequal playing field in the first place.
Sara Skinner, an African American woman who has always given her avatars skin tones similar to her own, told me the story of trying to build a digital black-history museum in a seaside town called Bay City. Another avatar (playing a cop) immediately built walls and, eventually, a courthouse that blocked the museum from view. The cop avatar claims it was a misunderstanding, but so much racism refuses to confess itself as such—and it’s certainly no misunderstanding when white men on Second Life tell Sara that she looks like a primate after she rejects their advances; or when someone calls her “tampon nose” because of her wide nostrils; or when someone else tells her that her experience with bias is invalid because she is a “mixed breed.” Au told me that initially he was deeply excited by the premise of Second Life, particularly the possibilities of its user-generated content, but that most people turned out to be less interested in exercising the limits of their creative potential than in becoming consumers of a young, sexy, rich world, clubbing like 20-somethings with infinite money. Rosedale told me he thought the landscape of Second Life would be hyper-fantastic, artistic and insane, full of spaceships and bizarre topographies, but what ended up emerging looked more like Malibu. People were building mansions and Ferraris. “We first build in a place what we most covet,” he told me, and cited an early study by Linden Lab that found the vast majority of Second Life users lived in rural rather than urban areas in real life. They came to Second Life for what their physical lives lacked: the concentration, density, and connective potential of urban spaces; the sense of things happening all around them; the possibility of being part of that happening.
J onas Tancred first joined Second Life in 2007, after his corporate-headhunting company folded during the recession. Jonas, who lives in Sweden, was graying and middle-aged, a bit paunchy, while his avatar, Bara Jonson, was young and muscled, with spiky hair and a soulful vibe. But what Jonas found most compelling about Second Life was not that it let him role-play a more attractive alter ego; it was that Second Life gave him the chance to play music, a lifelong dream he’d never followed. (He would eventually pair up with Malin Östh to form the duo Bara Jonson and Free.) Jonas started playing virtual gigs. In real life he stood in front of a kitchen table covered with a checkered oilcloth, playing an acoustic guitar connected to his computer, while in Second Life Bara was rocking out in front of a crowd.
Before a performance one night, a woman showed up early and asked him, “Are you any good?” He said, “Yes, of course,” and played one of his best gigs yet, just to back it up. This woman was Nickel Borrelly; she would become his (Second Life) wife and eventually, a couple of years later, the mother of his (real life) child.
Offline, Nickel was a younger woman named Susie who lived in Missouri. After a surreal courtship full of hot-air-balloon rides, romantic moonlit dances, and tandem biking on the Great Wall of China, the pair had a Second Life wedding on Twin Hearts Island—at “12pm SLT,” the electronic invitations said, which meant noon Standard Linden Time. During their vows, Bara called it the most important day of his life. But which life did he mean? Bara’s Second Life musical career started to take off, and eventually he was offered the chance to come to New York to make a record, one of the first times a Second Life musician had been offered a real-life record deal. It was on that trip that Jonas first met Susie in the real world. When their relationship was featured in a documentary a few years later, she described her first impression: Man, he looks kinda old.
But she said that getting to know him in person felt like “falling in love twice.” How did she end up getting pregnant? “I can tell you how it happened,” she said in the documentary. “A lot of vodka.” Susie and Jonas’s son, Arvid, was born in 2009. (Both Susie’s and Arvid’s names have been changed.) By then, Jonas was back in Sweden because his visa had run out. While Susie was in the delivery room, he was in his club on Second Life—at first waiting for news, and then smoking a virtual cigar. For Susie, the hardest part was the day after Arvid’s birth, when the hospital was full of other fathers visiting their babies. What could Susie and Jonas do? Bring their avatars together to cook a virtual breakfast in a romantic enclave by the sea, holding steaming mugs of coffee they couldn’t drink, looking at actual videos of their actual baby on a virtual television, while they reclined on a virtual couch.
Susie and Jonas are no longer romantically involved, but Jonas is still part of Arvid’s life, Skyping frequently and visiting the States when he can. Jonas believes that part of the reason he and Susie have been able to maintain a strong parenting relationship in the aftermath of their separation is that they got to know each other so well online before they met in real life—that Second Life wasn’t an illusion but a conduit that allowed them to understand each other better than real-life courtship would have.
Jonas describes Second Life as a rarefied version of reality, rather than a shallow substitute for it. As a musician, he feels that Second Life hasn’t changed his music but “amplified” it, enabling a more direct connection with his audience, and he loves the way fans can type their own lyrics to his songs. He remembers everyone “singing along” to a cover he performed of “Mmm Mmm Mmm Mmm,” by the Crash Test Dummies, when so many people typed the lyrics that their “mmm”s eventually filled his entire screen. For Jonas, the reality and beauty of his creations—the songs, the baby—have transcended and overpowered the vestiges of their virtual construction.
O f the 36 million Second Life accounts that had been created by 2013—the most recent data Linden Lab will provide—only an estimated 600,000 people still regularly use the platform. That’s a lot of users who turned away. What happened? Au sees the simultaneous rise of Facebook and the plateau in Second Life users as proof that Linden Lab misread public desires. “Second Life launched with the premise that everyone would want a second life,” Au told me, “but the market proved otherwise.” But when I spoke with Peter Gray, Linden Lab’s global communications director, and Bjorn Laurin, its vice president of product, they insisted that the problem doesn’t lie in the concept, but in the challenge of perfecting its execution. The user plateau simply testifies to interface difficulties, they told me, and to the fact that the technology hasn’t yet advanced enough to deliver fully on what the media hype suggested Second Life might become: an utterly immersive virtual world. They are hoping virtual reality can change that.
In July, Linden Lab launched a beta version of a new platform called Sansar, billed as the next frontier: a three-dimensional world designed for use with a virtual-reality headset such as Oculus Rift. The company’s faith, along with the recent popularity of VR in the tech world (a trend that Facebook’s purchase of Oculus VR attests to), raises a larger question. If advances in virtual reality solve the problem of a cumbersome interface, will they ultimately reveal a widespread desire to plunge more fully into virtual worlds unfettered by glitches, lags, and keyboards? Rosedale stepped down as CEO of Linden Lab in 2008. He told me he thinks of himself as more of an inventor, and he felt that the company needed a better manager. He isn’t disappointed in what Second Life has become, but he, too, sees the horizon of future possibility elsewhere: in full-fledged virtual reality, where he can “build planets and new economies.” His current company, High Fidelity, is working on creating VR technology so immersive that you actually feel like you are present in the room with someone else.
Au told me that he has noticed a recurrent hubris in the tech world. Instead of learning from mistakes, people and companies do the same thing over and over again. Is this the story of Second Life—the persistence of a tech-world delusion? Or is the delusion something more like prophecy? Is Second Life the prescient forerunner of our future digital existence? When I asked Rosedale whether he stood behind the predictions he’d made during the early years of Second Life—that the locus of our lives would become virtual, and that the physical world would start to seem like a museum—he didn’t recant. Just the opposite: He said that at a certain point we would come to regard the real world as an “archaic, lovable place” that was no longer crucial. “What will we do with our offices when we no longer use them?” he wondered. “Will we play racquetball in them?” I pressed him on this. Did he really think that certain parts of the physical world—the homes we share with our families, for example, or the meals we enjoy with our friends, our bodies leaning close across tables—would someday cease to matter? Did he really believe that our corporeal selves weren’t fundamental to our humanity? I was surprised by how rapidly he conceded. The sphere of family would never become obsolete, he said—the physical home, where we choose to spend time with the people we love. “That has a more durable existence,” he said. “As I think you’d agree.” A licia Chenaux lives on an island called Bluebonnet, a quaint forested enclave, with her husband, Aldwyn (Al), to whom she has been married for six years, and their two daughters: Abby, who is 8, and Brianna, who is 3, although she used to be 5, and before that she was 8. As a family, they live their days as a parade of idyllic memories, often captured as digital snapshots on Alicia’s blog : scouting for jack-o’-lantern candidates at the pumpkin patch, heading to Greece for days of swimming in a pixelated sea. It’s like a digital Norman Rockwell painting, an ideal of upper-middle-class American domesticity—an utterly unremarkable fantasy, except that Abby and Brianna are both child avatars played by adults.
When Alicia discovered in her early 30s that she couldn’t have biological children, she fell into a lengthy depression. But Second Life offered her a chance to be a parent. Her virtual daughter Abby endured a serious trauma in real life at the age of 8 (the specifics of which Alicia doesn’t feel the need to know), so she plays that age to give herself the chance to live it better. Brianna was raised by nannies in real life—her parents weren’t particularly involved in her upbringing—and she wanted to be part of a family in which she’d get more hands-on parenting. Perhaps that’s why she kept wanting to get younger.
Alicia and her family are part of a larger family-role-play community on Second Life, facilitated by adoption agencies where children and potential parents post profiles and embark on “trials,” during which they live together to see whether they are a good match. Sara Skinner, the would-be founder of the Second Life black-history museum, told me about parenting a 4-year-old son played by a man in the armed services deployed overseas: He often logged on with a patchy connection, just to hang out with Sara for a few hours while his service flickered in and out.
Sometimes adoptive parents will go through a virtual pregnancy, using “birth clinics” or accessories called “tummy talkers”—kits that deliver everything you need: a due date and body modifications (both adjustable), including the choice to make the growing fetus visible or not; play-by-play announcements (“Your baby is doing flips!”); and the simulation of a “realistic delivery,” along with a newborn-baby accessory. For Second Life parents who go through pregnancy after adopting in-world, it’s understood that the baby they are having is the child they have already adopted—the process is meant to give both parent and child the bond of a live birth. “Really get morning sickness,” one product promises. “Get aches.” Which means being informed that a body-that-is-not-your-corporeal-body is getting sick. “You have full control over your pregnancy, have it EXACTLY how you want,” this product advertises, which—as I write this essay, six months into my own pregnancy—does seem to miss something central to the experience: that it doesn’t happen exactly how you want; that it subjects you to a process beyond your control.
In real life, Alicia lives with her boyfriend, and when I ask whether he knows about her Second Life family, she says, “Of course.” Keeping it a secret would be hard, because she hangs out with the three of them on Second Life nearly every night of the week except Wednesday. (Wednesday is what she calls “real-life night,” and she spends it watching reality television with her best friend.) When I ask Alicia whether she gets different things from her two romantic relationships, she says, “Absolutely.” Her boyfriend is brilliant but he works all the time; Al listens to her ramble endlessly about her day. She and Al knew each other for two years before they got married (she says his “patience and persistence” were a major part of his appeal), and she confesses that she was a “total control freak” about their huge Second Life wedding. In real life, the man who plays Al is a bit older than Alicia—51 to her 39, with a wife and family—and she appreciates that he has a “whole lifetime of experiences” and can offer a “more conservative, more settled” perspective.
After their Second Life wedding, everyone started asking whether Alicia and Al planned to have kids. (Some things remain constant across virtual and actual worlds.) They adopted Abby four years ago, and Brianna a year later, and these days their family dynamic weaves in and out of role-play. When Brianna joined their family, she said she wanted more than “just a story,” and sometimes the girls will interrupt role-play to say something about their real adult lives: guy trouble or job stress. But it’s important to Alicia that both of her daughters are “committed children,” which means that they don’t have alternate adult avatars. While Alicia and Al share real-life photos with each other, Alicia told me, “the girls generally don’t share photos of themselves, preferring to keep themselves more childlike in our minds.” For Christmas in 2015, Al gave Alicia a “pose stand,” which allows her to customize and save poses for her family : she and Al embracing on a bench, or him giving her a piggyback ride. Many of Alicia’s blog posts show a photograph of her family looking happy, often accompanied by a note at the bottom. One such note reads: “Btw, if you want to buy the pose I used for this picture of us, I put it up on Marketplace.” In one post, beneath a photograph of her and Al sitting on a bench, surrounded by snowy trees, cuddling in their cozy winter finery, she admits that she took the photo after Al had gone to bed. She had logged his avatar back on and posed him to get the photo just as she wanted.
To me, posing illuminates both the appeal and the limits of family role-play on Second Life: It can be endlessly sculpted into something idyllic, but it can never be sculpted into something that you have not purposely sculpted. Though Alicia’s family dynamic looks seamless—a parade of photogenic moments—a deep part of its pleasure, as Alicia described it to me, seems to involve its moments of difficulty: when she has to stop the girls from bickering about costumes or throwing tantrums about coming home from vacation. In a blog post, Alicia confesses that her favorite time each evening is the “few minutes” she gets alone with Al, but even invoking this economy of scarcity—appealing for its suggestion of obligation and sacrifice—feels like another pose lifted from real-world parenting.
Last year, Alicia and Al adopted two more children, but found it problematic that the new kids wanted “so much, so fast.” They wanted to call Alicia and Al Mom and Dad right away, and started saying “I love you so much” from the very beginning. They had a desire for intense, unrelenting parenting, rather than wanting to weave in and out of role-play, and constantly did things that demanded attention, like losing their shoes, jumping off the roof, climbing trees they couldn’t get down from, and starting projects they couldn’t finish. Basically, they behaved more like actual kids than like adults pretending to be kids. The adoption lasted only five months.
There’s something stubbornly beautiful about Alicia’s Second Life family, all four of these people wanting to live inside the same dream. And there’s something irrefutably meaningful about the ways Alicia and her children have forged their own version of the intimacies they’ve been denied by circumstances. But their moments of staged friction (the squabbling, the meltdowns) also illuminate the claustrophobia of their family’s perfection. Perhaps Second Life families court the ideals of domesticity too easily, effectively short-circuiting much of the difficulty that constitutes family life. Your virtual family will never fully reach beyond your wildest imagining, because it’s built only of what you could imagine.
O ne evening during the earliest days of my Second Life exploration, I stood with my husband outside a barbecue joint in (offline) Lower Manhattan and asked him: “I mean, why isn’t Second Life just as real as ‘real life’?” He reached over and pinched my arm, then said, “That’s why it’s not as real.” His point wasn’t just about physicality—the ways our experiences are bound to our bodies—but about surprise and disruption. So much of lived experience is composed of what lies beyond our agency and prediction, beyond our grasp, beyond our imagining. In the perfected landscapes of Second Life, I kept remembering what a friend had once told me about his experience of incarceration: Having his freedom taken from him meant not only losing access to the full range of the world’s possible pleasure, but also losing access to the full range of his own possible mistakes. Maybe the price of a perfected world, or a world where you can ostensibly control everything, is that much of what strikes us as “experience” comes from what we cannot forge ourselves, and what we cannot ultimately abandon. Alice and Bridgette already know this, of course. They live it every day.
Recommended Reading What Happens When Digital Cities Are Abandoned? Laura E. Hall Enough About Me Leslie Jamison America Has a Drinking Problem Kate Julian In Second Life, as elsewhere online, afk stands for “away from keyboard,” and during the course of his ethnographic research, Tom Boellstorff sometimes heard residents saying that “they wished they could ‘go afk’ in the actual world to escape uncomfortable situations, but knew this was not possible; ‘no one ever says “afk” in real life.’ ” This sentiment inspired what Boellstorff calls the “afk test”: “If you can go ‘afk’ from something, that something is a virtual world.” Perhaps the inverse of the afk test is a decent definition of what constitutes reality: something you can’t go afk from—not forever, at least. Philip Rosedale predicted that the physical world would become a kind of museum, but how could it? It’s too integral to our humanity to ever become obsolete, too necessary to our imperfect, aching bodies moving through it.
Did I find wonder in Second Life? Absolutely. When I sat in a wicker chair on a rooftop balcony, chatting with the legally blind woman who had built herself this house overlooking the crashing waves of Cape Serenity, I found it moving that she could see the world of Second Life better than our own. When I rode horses through the virtual Yosemite, I thought of how the woman leading me through the pines had spent years on disability, isolated from the world, before she found a place where she no longer felt sidelined. That’s what ultimately feels liberating about Second Life—not its repudiation of the physical world, but its entwinement with that world, their fierce exchange. Second Life recognizes the ways that we often feel more plural and less coherent than the world allows us to be.
Some people call Second Life escapist, and often its residents argue against that. But for me, the question isn’t whether or not Second Life involves escape. The more important point is that the impulse to escape our lives is universal, and hardly worth vilifying. Inhabiting any life always involves reckoning with the urge to abandon it—through daydreaming; through storytelling; through the ecstasies of art and music, or hard drugs, or adultery, or a smartphone screen. These forms of “leaving” aren’t the opposite of authentic presence. They are simply one of its symptoms—the way love contains conflict, intimacy contains distance, and faith contains doubt.
"
|
1,043 | 2,019 |
"Google's Gradient Ventures joins $58 million investment in AR startup Mojo Vision | VentureBeat"
|
"https://venturebeat.com/2019/03/19/googles-gradient-ventures-joins-58-million-investment-in-ar-startup-mojo-vision"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s Gradient Ventures joins $58 million investment in AR startup Mojo Vision Share on Facebook Share on X Share on LinkedIn Mojo Vision: Homepage Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Mojo Vision , an under-the-radar augmented reality (AR) startup that has yet to reveal exactly what it’s building, announced that it has raised $58 million in a series B round of funding from Google’s Gradient Ventures, Advantech Capital, HP Tech Ventures, Motorola Solutions Venture Capital, Bold Capital Partners, LG Electronics, Kakao Ventures, and Stanford StartX.
Founded out of Saratoga, California in 2015, Mojo Vision more or less exited stealth back in November, when it revealed it had raised $50 million in funding since its inception three years before. Aside from that, the startup didn’t reveal a whole lot about what it’s been cooking up — however, it did tout its AR-infused “invisible computing” platform that will deliver “immediate, powerful, and relevant” information minus the distractions of today’s mobile devices.
While the likes of Microsoft’s HoloLens and Magic Leap are developing gnarly AR smarts that rely on chunky headwear, it seems Mojo Vision could be building something that blends into the environment — perhaps contact lenses or a similar form factor.
“Mojo Vision is taking on a big challenge — to rethink how people receive and share information in a way that is immediate and relevant, without diverting their attention,” said Mojo Vision CEO Drew Perkins.
Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Perkins previously cofounded optical networking company Infinera, which went public back in 2007.
He has also founded three companies that were acquired, including Gainspeed, which specialized in improving cable network capacity and was snapped up by Nokia in 2016.
With a fresh $58 million in financing under its belt, the startup will be better-positioned to get its technology into the public sphere, Perkins added.
“In addition to advancing critical technologies, this capital moves Mojo closer to initial customer pilots and strategic partnerships,” he said.
AI factor Google announced its new Gradient Ventures fund back in 2017, and the focus for this fund has been squarely on early-stage AI startups.
That Gradient has invested in Mojo Vision strongly suggests there will be a significant AI element to its product.
“The potential for artificial intelligence to provide access to information effortlessly and contextually without distraction is compelling,” said Anna Patterson, managing partner at Gradient Ventures. “Gradient’s investment in Mojo Vision represents our keen interest in using AI to look beyond today’s mobile form factors and develop new ways to connect the world to important information.” A number of companies are currently pushing to make AR “invisible,” one of which is Amazon-backed North, which recently launched $999 Alexa-powered holographic glasses.
Last month, North dropped the price of its Focals glasses by nearly half , followed by news that the company had laid off 150 employees , thought to be around a third of its workforce.
If nothing else, this served as a timely reminder of how precarious hardware startups can be and how resource-intensive it is to bring such new products to market.
It goes without saying that Mojo Vision, whatever it’s working on, will need as much capital as it can get.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,044 | 2,018 |
"Mojo Vision launches 'invisible computing' AR platform out of stealth with over $50 million in funding | VentureBeat"
|
"https://venturebeat.com/2018/11/14/mojo-vision-launches-invisible-computing-ar-platform-out-of-stealth-with-50-million-in-funding"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Mojo Vision launches ‘invisible computing’ AR platform out of stealth with over $50 million in funding Share on Facebook Share on X Share on LinkedIn Mojo Vision: Homepage Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
A fledgling augmented reality (AR) startup is launching out of stealth today with some big-name founders and backers in tow — and more than $50 million in funding.
Mojo Vision was founded out of Saratoga, California in 2015, but nothing was known about the company until today. We still don’t know much, in truth, but the startup did say it’s developing an “invisible computing” AR platform that will deliver “immediate, powerful, and relevant” information without the intrusions of today’s mobile devices.
What we’re talking about is hands-free, of course, which is pretty much in line with many other AR technologies of today. The company did not divulge the form factor or any other details around the look of its platform or technology.
While we may not know much about what the company is cooking up, we do know a bit about who’s at the helm.
Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Meet the founders Mojo Vision is headed up by CEO Drew Perkins, who has previously cofounded three companies that were acquired and one that went public.
One of those startups was Gainspeed, which specialized in improving cable network capacity, and which was snapped up by Nokia in 2016.
The founding team also includes CTO Mike Wiemer, who previously started a VC-backed solar cell company called Solar Junction, which appeared to have been bought out by a Saudi Arabian firm in 2015.
Completing the triumvirate of founders is chief science officer Michael Deering, who specializes in computer vision and 3D graphics and who has worked in several roles over the past four decades, perhaps most notably as a “distinguished engineer” at Sun Microsystems.
Invisible computing Although Mojo Vision is keeping its cards fairly close to its chest — even as it exits stealth — there has been a clear push toward “unobtrusive” computing interfaces, with AR playing a pivotal role.
Amazon-backed North , formerly known as Thalmic Labs, recently launched its $999 Focals holographic smart glasses , and yesterday it opened its first retail stores to support the rollout.
Above: Focals by North.
Microsoft is also investing heavily in mixed reality via HoloLens, and the computing giant is pushing its use cases into multiple industries.
Mojo Vision’s overarching aim appears to be much the same: to make information technology blend seamlessly into our lives. Indeed, the company said its invisible computing platform will help people keep their “eyes up and focus on the information and ideas” that could improve their lives and businesses — without having to glance down at a screen.
“People want technology to deliver information faster and in more convenient ways, but in many cases the scale has tipped in the other direction,” Perkins said. “The instant access to information we enjoy today can also distract us from important parts of our lives. The very technology that was designed to improve communication is now often a barrier to fundamental personal connections. Invisible computing is about having faster and more natural access to information, but without phones, tablets, or other devices getting in the way; in the world of invisible computing, we will be able to focus on the people around us without the interruptions from today’s screens.” In the three years since its founding, Mojo Vision has raised more than $50 million in seed and series A funding, with the likes of Khosla Ventures, NEA, Shanda Group, Fusion Fund, Liberty Global Ventures, 8VC, Dolby Family Ventures, AME Cloud Ventures, and Open Field Capital plowing cash into the venture.
The company plans to announce more details about its platform at a future date.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,045 | 2,021 |
"How reinforcement learning chooses the ads you see | VentureBeat"
|
"https://venturebeat.com/2021/02/23/how-reinforcement-learning-chooses-the-ads-you-see"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How reinforcement learning chooses the ads you see Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Every day, digital advertisement agencies serve billions of ads on news websites, search engines, social media networks, video streaming websites, and other platforms. And they all want to answer the same question: Which of the many ads they have in their catalog is more likely to appeal to a certain viewer? Finding the right answer to this question can have a huge impact on revenue when you are dealing with hundreds of websites, thousands of ads, and millions of visitors.
Fortunately (for the ad agencies, at least), reinforcement learning (RL), the branch of artificial intelligence that has become renowned for mastering board and video games , provides a solution. Reinforcement learning models seek to maximize rewards. In the case of online ads, the RL model will try to find the ad that users are more likely to click on.
The digital ad industry generates hundreds of billions of dollars every year and provides an interesting case study of the powers of reinforcement learning.
Naïve A/B/n testing To better understand how reinforcement learning optimizes ads, consider a very simple scenario: You’re the owner of a news website. To pay for the costs of hosting and staff, you have entered a contract with a company to run their ads on your website. The company has provided you with five different ads and will pay you one dollar every time a visitor clicks on one of the ads.
Your first goal is to find the ad that generates the most clicks. In advertising lingo, you will want to maximize your click-through rate (CTR). The CTR is the ratio of clicks over number of ads displayed, also called impressions. For instance, if 1,000 ad impressions earn you three clicks, your CTR will be 3 / 1000 = 0.003 or 0.3%.
Before we solve the problem with reinforcement learning, let’s discuss A/B testing, the standard technique for comparing the performance of two competing solutions (A and B) such as different webpage layouts, product recommendations, or ads. When you’re dealing with more than two alternatives, it is called A/B/n testing.
In A/B/n testing, the experiment’s subjects are randomly divided into separate groups, and each is provided with one of the available solutions. In our case, this means that we will randomly show one of the five ads to each new visitor of our website and evaluate the results.
Say we run our A/B/n test for 100,000 iterations, roughly 20,000 impressions per ad. Here are the clicks-over-impression ratio of our ads: Ad 1: 80/20,000 = 0.40% CTR Ad 2: 70/20,000 = 0.35% CTR Ad 3: 90/20,000 = 0.45% CTR Ad 4: 62/20,000 = 0.31% CTR Ad 5: 50/20,000 = 0.25% CTR Our 100,000 ad impressions generated $352 in revenue with an average CTR of 0.35%. More importantly, we found out that ad number 3 performs better than the others, and we will continue to use that one for the rest of our viewers. With the worst-performing ad (ad number 2), our revenue would have been $250. With the best performing ad (ad number 3), our revenue would have been $450. So, our A/B/n test provided us with the average of the minimum and maximum revenue and yielded the very valuable knowledge of the CTR rates we sought.
Digital ads have very low conversion rates. In our example, there’s a subtle 0.2% difference between our best- and worst-performing ads. But this difference can have a significant impact at scale. At 1,000 impressions, ad number 3 will generate an extra $2 in comparison to ad number 5. At a million impressions, this difference will become $2,000. When you’re running billions of ads, a subtle 0.2% can have a huge impact on revenue.
Therefore, finding these subtle differences is very important in ad optimization. The problem with A/B/n testing is that it is not very efficient at finding these differences. It treats all ads equally, and you need to run each ad tens of thousands of times until you discover their differences at a reliable confidence level. This can result in lost revenue, especially when you have a larger catalog of ads.
Another problem with classic A/B/n testing is that it is static. Once you find the optimal ad, you will have to stick to it. If the environment changes due to a new factor (seasonality, news trends, etc.) and causes one of the other ads to have a potentially higher CTR, you won’t find out unless you run the A/B/n test all over again.
What if we could change A/B/n testing to make it more efficient and dynamic? This is where reinforcement learning comes into play. A reinforcement learning agent starts by knowing nothing about its environment actions, rewards, and penalties. The agent must find a way to maximize its rewards.
In our case, the RL agent’s actions are one of five ads to display. The RL agent will receive a reward point every time a user clicks on an ad. It must find a way to maximize ad clicks.
The multi-armed bandit In some reinforcement learning environments, actions are evaluated in sequences. For instance, in video games, you must perform a series of actions to reach the reward, which is finishing a level or winning a match. But when serving ads, the outcome of every ad impression is evaluated independently; it is a single-step environment.
To solve the ad optimization problem, we’ll use a “multi-armed bandit” (MAB), a reinforcement learning algorithm that is suited for single-step reinforcement learning. The name of the multi-armed bandit comes from an imaginary scenario in which a gambler is standing at a row of slot machines. The gambler knows that the machines have different win rates, but he doesn’t know which one provides the highest reward.
If he sticks to one machine, he might lose the chance of selecting the machine with the highest win rate. Therefore, the gambler must find an efficient way to discover the machine with the highest reward without using up too many of his tokens.
Ad optimization is a typical example of a multi-armed bandit problem. In this case, the reinforcement learning agent must find a way to discover the ad with the highest CTR without wasting too many valuable ad impressions on inefficient ads.
Exploration vs exploitation One of the problems every reinforcement learning model faces is the “exploration vs exploitation” challenge. Exploitation means sticking to the best solution the RL agent has so far found. Exploration means trying other solutions in hopes of landing on one that is better than the current optimal solution.
In the context of ad selection, the reinforcement learning agent must decide between choosing the best-performing ad and exploring other options.
One solution to the exploitation-exploration problem is the “epsilon-greedy” (ε-greedy) algorithm. In this case, the reinforcement learning model will choose the best solution most of the time, and in a specified percent of cases (the epsilon factor) it will choose one of the ads at random.
Here’s how it works in practice: Say we have an epsilon-greedy MAB agent with the ε factor set to 0.2. This means that the agent chooses the best-performing ad 80% of the time and explores other options 20% of the time.
The reinforcement learning model starts without knowing which of the ads performs better; therefore, it assigns each of them an equal value. When all ads are equal, it will choose one of them at random each time it wants to serve an ad.
After serving 200 ads (40 impressions per ad), a user clicks on ad number 4. The agent adjusts the CTR of the ads as follows: Ad 1: 0/40 = 0.0% Ad 2: 0/40 = 0.0% Ad 3: 0/40 = 0.0% Ad 4: 1/40 = 2.5% Ad 5: 0/40 = 0.0% Now, the agent thinks that ad number 4 is the top-performing ad. For every new ad impression, it will pick a random number between 0 and 1. If the number is above 0.2 (the ε factor), it will choose ad number 4. If it’s below 0.2, it will choose one of the other ads at random.
Now, our agent runs 200 other ad impressions before another user clicks on an ad, this time on ad number 3. Note that of these 200 impressions, 160 belong to ad number 4, because it was the optimal ad. The rest are equally divided between the other ads. Our new CTR values are as follows: Ad 1: 0/50 = 0.0% Ad 2: 0/50 = 0.0% Ad 3: 1/50 = 2.0% Ad 4: 1/200 = 0.5% Ad 5: 0/50 = 0.0% Now the optimal ad becomes ad number 3. It will get 80% of the ad impressions. Let’s say after another 100 impressions (80 for ad number three, four for each of the other ads), someone clicks on ad number 2. Here’s how what the new CTR distribution looks like: Ad 1: 0/54 = 0.0% Ad 2: 1/54 = 1.8% Ad 3: 1/130 = 0.7% Ad 4: 1/204 = 0.49% Ad 5: 0/54 = 0.0% Now, ad number 2 is the optimal solution. As we serve more ads, the CTRs will reflect the real value of each ad. The best ad will get the lion’s share of the impressions, but the agent will continue to explore other options. Therefore, if the environment changes and users start to show more positive reactions to a certain ad, the RL agent can discover it.
After running 100,000 ads, our distribution can look something like the following: Ad 1: 123/30,600 = 0.40% CTR Ad 2: 67/18,900 = 0.35% CTR Ad 3: 187/41,400 = 0.45% CTR Ad 4: 35/11,300 = 0.31% CTR Ad 5: 15/5,800 = 0.26% CTR With the ε-greedy algorithm, we were able to increase our revenue from $352 to $426 on 100,000 ad impression and an average CTR of 0.42%. This is a great improvement over the classic A/B/n testing model.
Improving the ε-greedy algorithm The key to the ε-greedy reinforcement learning algorithm is adjusting the epsilon factor. If you set it too low, it will exploit the ad that it thinks is optimal at the expense of not finding a possibly better solution. For instance, in the example we explored above, ad number four happens to generate the first click, but in the long run, it doesn’t have the highest CTR. Small sample sizes do not necessarily represent true distributions.
On the other hand, if you set the epsilon factor too high, your RL agent will waste too many resources exploring non-optimal solutions.
One way you can improve the epsilon-greedy algorithm is defining a dynamic policy. When the MAB model is fresh, you can start with a high epsilon value to do more exploration and less exploitation. As your model serves more ads and gets a better estimate of the value of each solution, it can gradually reduce the epsilon value until it reaches a threshold value.
In the context of our ad-optimization problem, we can start with an epsilon value of 0.5 and reduce it by 0.01 after every 1,000 ad impression until it reaches 0.1.
Another way to improve our multi-armed bandit is to put more weight on new observations and gradually reduce the value of older observations. This is especially useful in dynamic environments such as digital ads and product recommendations, where the value of solutions can change over time.
Here’s a very simple way you can do this. The classic way to update the CTR after serving an ad is as follows: (result + past_results) / impressions Here, result is the outcome of the ad displayed (1 if clicked, 0 if not clicked), past_results is the cumulative number of clicks the ad has garnered so far, and impressions is the total number of times the ad has been served.
To gradually fade old results, we add a new alpha factor (between 0 and 1), and make the following change: (result + past_results * alpha) / impressions This small change will give more weight to new observations. Therefore, if you have two competing ads that have an equal number of clicks and impressions, the one whose clicks are more recent will be favored by your reinforcement learning model. Also, if an ad had a very high CTR rate in the past but has become unresponsive in recent times, its value will decline faster in this model, forcing the RL model to move to other alternatives earlier and waste less resources on the inefficient ad.
Adding context to the reinforcement learning model In the age of internet, websites, social media, and mobile apps have plenty of information on every single user such as their geographic location, device type, and the exact time of day they’re viewing the ad. Social media companies have even more information about their users, including age and gender, friends and family, the type of content they have shared in the past, the type of posts they liked or clicked on in the past, and more.
This rich information gives these companies the opportunity to personalize ads for each viewer. But the multi-armed bandit model we created in the previous section shows the same ad to everyone and doesn’t take the specific characteristic of each viewer into account. What if we wanted to add context to our multi-armed bandit? One solution is to create several multi-armed bandits, each for a specific sub-field of users. For instance, we can create separate RL models for users in North America, Europe, Middle East, Asia, Africa, and so on. What if we wanted to also factor in gender? Then we would have one reinforcement learning model for female users in North America, one for male users in North America, one for female users in Europe, male users in Europe, etc. Now, add age ranges and device types, and you can see that it will quickly develop into a big problem, creating an explosion of multi-armed bandits that become hard to train and maintain.
An alternative solution is to use a “contextual bandit,” an upgraded version of the multi-armed bandit that takes contextual information into account. Instead of creating a separate MAB for each combination of characteristics, the contextual bandit uses “ function approximation ,” which tries to model the performance of each solution based on a set of input factors.
Without going too much into the details (that could be the subject of another post), our contextual bandit uses supervised machine learning to predict the performance of each ad based on location, device type, gender, age, etc. The benefit of the contextual bandit is that it uses one machine learning model per ad instead of creating an MAB per combination of characteristics.
This wraps up our discussion of ad optimization with reinforcement learning. The same reinforcement learning techniques can be used to solve many other problems, such as content and product recommendation or dynamic pricing, and are used in other domains such as health care, investment, and network management.
Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. This post was originally published here.
This story originally appeared on Bdtechtalks.com.
Copyright 2021 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,046 | 2,020 |
"OpenAI proposes using reciprocity to encourage AI agents to work together | VentureBeat"
|
"https://venturebeat.com/2020/11/13/openai-proposes-using-reciprocity-to-encourage-ai-agents-to-work-together"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI proposes using reciprocity to encourage AI agents to work together Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Many real-world problems require complex coordination between multiple agents — e.g., people or algorithms. A machine learning technique called multi-agent reinforcement learning (MARL) has shown success with respect to this, mainly in two-team games like Go, DOTA 2, StarCraft, hide-and-seek, and capture the flag. But the human world is far messier than games. That’s because humans face social dilemmas at multiple scales, from the interpersonal to the international, and they must decide not only how to cooperate but when to cooperate.
To address this challenge, researchers at OpenAI propose training AI agents with what they call randomized uncertain social preferences (RUSP) , an augmentation that expands the distribution of environments in which reinforcement learning agents train. During training, agents share varying amounts of reward with each other; however, each agent has an independent degree of uncertainty over their relationships, creating “asymmetry” that the researchers hypothesize pressures agents to learn socially reactive behaviors.
To demonstrate RUSP’s potential, the coauthors had agents play Prisoner’s Buddy, a grid-based game where agents receive a reward for “finding a buddy.” On each timestep, agents act by either choosing another agent or deciding to choose no one and sitting out the round. If two agents mutually choose each other, they each get a reward of +2. If agent Alice chooses Bob but the choice isn’t reciprocated, Alice receives -2 and Bob receives +1. Agents that choose no one receive 0.
Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! The coauthors also explored preliminary team dynamics in a much more complex environment called Oasis. It’s physics-based and tasks agents with survival; their reward is +1 for every timestep they remain alive and a large negative reward when they die. Their health decreases with each step, but they can regain health by eating food pellets and can attack others to reduce their health. If an agent is reduced below 0 health, it dies and respawns at the edge of the play area after 100 timesteps.
There’s only enough food to support two of the three agents in Oasis, creating a social dilemma. Agents must break symmetry and gang up on the third to secure the food source to stay alive.
RUSP agents in Oasis performed much better than a “selfish” baseline in that they achieved higher reward and died less frequently, the researchers report. (For agents trained with high uncertainty levels, up to 90% of the deaths in an episode were attributable to a single agent, indicating that two agents learned to form a coalition and mostly exclude the third from the food source.) And in Prisoner’s Buddy, RUSP agents successfully partition into teams that tended to be stable and maintained throughout an episode.
The researchers note that RUSP is inefficient — with the training setup in Oasis, 1,000 iterations corresponded to roughly 3.8 million episodes of experience. This being the case, they argue that RUSP and techniques like it warrant further exploration. “Reciprocity and team formation are hallmark behaviors of sustained cooperation in both animals and humans,” they wrote in a paper submitted to the 2020 NeurIPS conference. “The foundations of many of our social structures are rooted in these basic behaviors and are even explicitly written into them — almost 4,000 years ago, reciprocal punishment was at the core of Hammurabi’s code of laws. If we are to see the emergence of more complex social structures and norms, it seems a prudent first step to understanding how simple forms of reciprocity may develop in artificial agents.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,047 | 2,020 |
"A closer look at SageMaker Studio, AWS' machine learning IDE | VentureBeat"
|
"https://venturebeat.com/2020/06/27/a-closer-look-at-sagemaker-studio-aws-machine-learning-ide"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest A closer look at SageMaker Studio, AWS’ machine learning IDE Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
[Editor’s note: This story was updated June 3, 2020. The changes are clearly marked below.] Back in December, when AWS launched its new machine learning IDE, SageMaker Studio , we wrote up a “hot-off-the-presses” review.
At the time, we felt the platform fell short, but we promised to publish an update after working with AWS to get more familiar with the new capabilities. This is that update.
Pain points and solutions in the machine learning pipeline When Amazon launched SageMaker Studio, they made clear the pain points they were aiming to solve: “The machine learning development workflow is still very iterative, and is challenging for developers to manage due to the relative immaturity of ML tooling.” The machine learning workflow — from data ingestion, feature engineering, and model selection to debugging, deployment, monitoring, and maintenance, along with all the steps in between — can be like trying to tame a wild animal.
To solve this challenge, big tech companies have built their own machine learning and big data platforms for their data scientists to use: Uber has Michelangelo , Facebook (and likely Instagram and WhatsApp) has FBLearner flow , Google has TFX , and Netflix has both Metaflow and Polynote (the latter has been open sourced). For smaller organizations that cannot roll out their own infrastructure, a number of players have emerged in proprietary and productized form, as evidenced by Gartner’s Magic Quadrant for Data Science and Machine Learning Platforms: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! These include platforms like Microsoft Azure , H20 , DataRobot , and Google Cloud Platform (to name a few). These platforms are intended for data scientists and adjacent roles, such as data engineers and ML engineers, and span all types of data work, from data cleaning, wrangling, and visualization, to machine learning. Amazon SageMaker Studio was the latest to join this fray.
What SageMaker Studio Offers So what does Sagemaker Studio offer? According to Amazon , “SageMaker [including Studio] is a fully managed service that removes the heavy lifting from each step of the machine learning process.” The tools are impressive and do remove several aspects of the heavy lifting: The IDE meets data scientists where they are by using the intuitive interface of JupyterLab , a common open notebook-based IDE for data science in Python. Standardizing on what are rapidly becoming (or have already become) the standard tools for data professionals allows everyone to leverage the wide range of open-source tooling available in the ecosystem. This seems to be an area where AWS is making a solid commitment, having hired two major JupyterLab contributors, including Brian Granger, co-lead of Project Jupyter itself.
Sagemaker notebooks can be run elastically, which means data scientists pay only for compute time used, instead of for how long they have the notebook open. This makes for a far more cost efficient workflow for data scientists. Elastic notebooks also allow heavy-duty machine learning workloads to complete quickly by rapidly scaling up and down compute infrastructure to meet demand, all with minimal configuration.
SageMaker Studio provides a framework to track and compare model performance on validation sets across different models, architectures, and hyperparameters (this beats doing it in spreadsheets !). The formalization of machine learning model building as a set of experiments is worth focusing on: You can find countless posts on how much trouble data scientists have tracking machine learning experiments. It is exciting to be able to view ML experiments on a leaderboard, ranked by a metric of choice, although we need to be careful since optimizing for single metrics often results in algorithmic bias.
The debugger provides real-time, graphical monitoring of common issues that data scientists encounter while training models (exploding and vanishing gradients, loss function not decreasing), as well as the ability to build your own rules. This removes both a practical and a cognitive burden, freeing data scientists from the need to constantly monitor these common issues as SageMaker Studio will send alerts.
The platform also includes an automatic model building system, Autopilot. All you need to do is provide the training data, and SageMaker performs all the feature engineering, algorithm selection, and hyperparameter tuning automatically (similar to DataRobot). An exciting feature is the automatic generation of notebooks containing all the resulting models that you can play with and build upon. Amazon claims the automated models can serve either as baselines (for scientists wanting to build more sophisticated models) or as models to be productionized directly. The latter may be problematic, particularly as users are not able to select the optimization metric (they can only provide the training data). We all know about the horrors of proxies for optimization metrics and the potential for “ rampant racism in decision-making software.
” When we asked AWS about this, a spokesperson told us: “As with all machine learning, customers should always closely examine training data and evaluate models to ensure they are performing as intended, especially in critical use cases such as healthcare or financial services.” ( Update: There is now a limited selection of optimization metrics that can be selected via code after automatically generating notebooks. However, the GUI still does not allow the selection of metrics. Given that Autopilot is marketed to non-coder GUI-based users, we would encourage AWS to add optimization metric selection to the GUI as well. We would also like to see the inclusion of other metrics like Precision and Recall , not just F1.) The model hosting and deployment allows data scientists to get their models up and running in production directly from SageMaker notebook, and provides an HTTPS endpoint that you can ping with new data to get predictions. The ability to monitor data drift in new data over time (that is, to interrogate how representative of new data the training data is) is important and has some promise, especially when it comes to spotting potential bias.
The built-in features are limited to basic summary statistics but there are ways for data scientists to build their own custom metrics by providing either custom pre-processing or post-processing scripts and using a pre-built analysis container or by bringing their own custom container.
These capabilities are impressive and do remove some of the heavy lifting associated with building, deploying, maintaining, and monitoring machine learning models in production. But do they collectively reduce all the grunt work, hacking, and iterative cycles that comprise much of the work of ML data scientists? Does SageMaker Studio deliver on its promise? In contrast to data science platforms such as DataRobot and H20.ai, SageMaker takes a more “training wheels off” approach. It’s biggest proponents have mostly been either data scientists who have serious software engineering chops, or teams that have DevOps, engineering, infrastructural, and data science talent. Another way to frame the question is: Does SageMaker Studio allow lone data scientists with less engineering background to productively enter the space of building ML models on Amazon? After spending days with Studio, we think the answer is no. As noted above, the tools are powerful but, as with so much of AWS, the chaos of the documentation (or lack thereof) and the woefully difficult UX/UI (to compare ML experiments, click through to experiments tab, highlight multiple experiments, control-shift something something without any clear indication in the UI itself) mean the overhead of using products that are still actively evolving is too high.
This is why AWS hosts so many workshops, with and without breakout sessions, chalk talks, webinars, and events such as re:Invent.
All parts of SageMaker Studio require external help and constant hacking away. For example, there’s a notebook with an xgboost example that we were able to replicate, but after searching for documentation, we still couldn’t figure out how to get scikit-learn (a wildly popular ML learning package) up and running. When, in preparation for writing this piece, we emailed our contact at Amazon to ask for directions to relevant documentation, they explained that the product is still “in preview.” ( Update: since our initial exploration of SageMaker, AWS has been busy adding features. They now have more documentation and sample notebooks for scikit-learn in Sagemaker.) The best products teach you how to use them without the need for additional seminars. Data scientists (and technical professionals in general) greatly prefer to get started with a good tutorial rather than wait for a seminar to come through town.
SageMaker Studio is a step in the right direction, but it has a ways to go to fulfill its promise. There’s a reason it isn’t in the Gartner Magic Quadrant for Data Science and Machine Learning Platforms. ( Correction: The Gartner Magic Quadrant was published November of 2019 and Sagemaker Studio was only released December of 2019 and would not have been included. AWS is a leader in the Gartner Magic Quadrant for Cloud AI Developer Services.
) Like AWS, it still requires serious developer chops and software engineering skills and it’s still a long way from making data scientists themselves production ready and meeting them where they are. The real (unmet) potential of SageMaker Studio and the new features of SageMaker lie in efficiency gains and cost reductions for both data scientists who are already comfortable with DevOps and teams that already have strong software engineering capabilities.
Hugo Bowne-Anderson is Head of Data Science Evangelism and VP of Marketing at Coiled.
Previously, he was a data scientist at DataCamp , and has taught data science topics at Yale University and Cold Spring Harbor Laboratory, conferences such as SciPy, PyCon, and ODSC, and with organizations such as Data Carpentry.
Tianhui Michael Li is president at Pragmatic Institute and the founder and president of The Data Incubator , a data science training and placement firm. Previously, he headed monetization data science at Foursquare and has worked at Google, Andreessen Horowitz, J.P. Morgan, and D.E. Shaw.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,048 | 2,018 |
"Amazon's AWS launches RoboMaker to help developers test and deploy robotics applications | VentureBeat"
|
"https://venturebeat.com/2018/11/26/amazons-aws-launches-robomaker-to-help-developers-test-and-deploy-robotics-applications"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon’s AWS launches RoboMaker to help developers test and deploy robotics applications Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Amazon’s cloud computing business, Amazon Web Services (AWS), has launched RoboMaker , a service designed to help developers build, test, and deploy robotics applications through the cloud.
With the rise in artificial intelligence (AI), we’ve seen countless companies emerge across the technology spectrum to bring automation to industries through software. Tying into that, we’ve also seen a marked rise in the real-world application of robotics, which includes autonomous food delivery services , delivery drones , and smarter warehouses.
Robots in the making AWS RoboMaker offers developers the ability to develop their code in the cloud, test it in open source robotics simulator Gazebo , and then deploy updates directly to their robots — be they airborne drones or robotic companions for the elderly.
It also works on top of Robot Operating System ( ROS ), an open source framework for developing robotics software.
Ultimately, RoboMaker helps developers simultaneously create and configure multiple virtual worlds — from factories to retail stores — in which they can test software for their robots before deploying the code for real.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “When talking to our customers, we see the same pattern repeated over and again,” noted AWS RoboMaker general manager Roger Barga in a press release.
“They spend a lot of time setting up infrastructure and cobbling together software for different stages of the robotics development cycle, repeating work others have done before, leaving less time for innovation.” But Amazon’s core pitch to developers here isn’t just a centralized development environment in the cloud — it’s also about serving access to myriad machine learning and analytics services, from the facial ID smarts of Amazon Rekognition, chatbot interface builder Amazon Lex, and synthesized human voices of Amazon Polly to the application and infrastructure monitoring tools within CloudWatch.
Additionally, RoboMaker integrates with Amazon SageMaker, a platform unveiled last year for developers who want to build their own custom machine learning systems.
“AWS RoboMaker provides prebuilt functionality to support robotics developers during their entire project, making it significantly easier to build robots, simulate performance in various environments, iterate faster, and drive greater innovation,” Barga added.
Bot and sold While Amazon itself relies heavily on robotics in its own factories and warehouses, the company is also reportedly planning to enter the consumer robotics realm next year with a home robot called Vesta, though details on these plans are fairly scant.
The global robotics market is estimated to become a $500 billion industry by 2025 , up from $40 billion last year.
Amazon said that AWS RoboMaker is available to cloud customers in the U.S. East (N. Virginia), U.S. West (Oregon), and EU (Ireland), though it will open to other regions over the next year.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,049 | 2,022 |
"How Nvidia's Omniverse could unlock construction innovation with IFC | VentureBeat"
|
"https://venturebeat.com/virtual/how-nvidias-omniverse-could-unlock-construction-innovation-with-ifc"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Nvidia’s Omniverse could unlock construction innovation with IFC Share on Facebook Share on X Share on LinkedIn HANGZHOU, CHINA - OCTOBER 20, 2021 - Photo taken on Oct. 20, 2021 shows the booth of Nvidia at the 2021 Hangzhou Computing Conference in Hangzhou, east China's Zhejiang Province. Nvidia is abandoning its plan to buy Arm from SoftBank Group due to regulatory objections, ending what would have been the biggest deal in the chip industry. (Photo credit should read Costfoto/Future Publishing via Getty Images) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Years ago, the construction industry began work on industry foundation classes (IFC) — a data model for describing architecture, engineering, construction and operations (AECO) data. The specification has helped make it easier to export data between tools. A significant bottleneck, though, is that IFC tells construction software vendors only what kind of data to include, not how to represent it in a particular file format, which can lead to ambiguity and significant manual effort to share data across tools.
Now, though, Nvidia is implementing IFC on top of the universal scene description (USD) file format underpinning the Nvidia Omniverse.
This complements Nvidia’s broader efforts to develop connectors for various 3D authoring tools for the AEC industry, including Autodesk Alias, Autodesk Civil, Siemens JT, SimScale and Open Geospatial Consortium formats.
The USD implementation for IFC is still in early stages, but once the effort gets rolling, it promises to streamline construction workflows involving multiple tools from different vendors. It will make it easier for engineers to take advantage of Nvidia’s rich AI tooling to analyze 3D data , simulate the impact of different design decisions and automatically generate a 3D inventory to improve maintenance and operations.
“Ensuring cross-platform consistency has been a difficult task for various software companies,” George Matos, senior product manager of Omniverse AECO at Nvidia, told VentureBeat. “The previous development of standardized and open file formats allowed bridges between various AECO authoring applications. As technology and its functions in AECO progress, we are looking at more extendible formats to allow full cross-platform collaboration, as well as expanded functionality on the open file format. This means enabling an open dialogue, so to speak, between the design applications and USD. While technology is in constant development, we are aiming for full fidelity and consistency between the authoring applications and formats and Nvidia Omniverse, as well as allowing proprietary functions and compute at the authoring source or within Omniverse.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Matos sees these efforts as planting the seeds for leveraging construction data across more processes. For example, a car manufacturer might build a physically accurate digital twin of its factories and cars and then use these same ground truth assets in their sales, marketing and customer configurators. This requires all AECO data from the company’s factories, CAD data from its assembly lines, design data from its fabrication models and even the autonomous robots that operate these facilities to fully contribute to one virtual model. These workflows require a standardized data format and framework to keep all 3D assets and data synchronized at high resolution and fidelity.
IFC helps transfer In the late 1990s, there was no easy way to move data across various engineering and design tools. So, AECO firms and software vendors came together to simplify the process. However, each software has its own geometry kernel, which includes its own representation and data model. IFC comes encoded in several file formats today, including STEP, XML, RDF and others under development.
This made it easier to share data, but with limits.
“For the most part, IFC is used as a sort of one-way transfer mechanism,” said Greg Schleusner, director of design technology at HOK and co-director of the Technical Room at buildingSMART, which stewards the IFC standard.
This one-way flow helps teams export data for analysis, detecting scheduling problems, or simulation. But organizations generally returned to the original tool when changes were required.
One underlying challenge is that the various CAD tools can use different ways of representing 3D data. For example, some use a mesh approach, while others think of the world as solids.
“All of these tools have separate views on the world and how to represent geometry, and that is where interoperability is most difficult,” Schleusner said.
As a result, the tools end up linking to the data as a reference rather than performing a design transfer, which would allow teams to take the data into a new tool and progress it.
“It’s certainly possible and technically capable, it is just not used often,” he explained.
USD to modularize data Nvidia’s implementation promises to make it easier to represent IFC and other data about the built world in a more consistent format. In addition, it makes it easier to extract information for specific use cases. Schleusner believes this will make sharing a subset of information with a contractor easier. This could also make it easier to implement suggestion systems that recommend specific kinds of door assemblies for a particular project, in the same way GitHub’s Copilot tool makes coding recommendations.
Nvidia is betting on USD to become the dominant format for allowing collaboration, computation, AI workflows and design across industries. Omniverse is built entirely on USD. Matos said they are working with AECO leaders, including Autodesk, Bentley Systems and Graphisoft, to build connectors to the Omniverse platform.
Nvidia is actively developing capabilities internally and with its ecosystem partners to connect more integral systems and data types, including IoT, BOM data, piping and instrumentation diagrams (P&ID) and more, to Omniverse.
“Simulating real-world and virtual data together will allow optimal operations of facilities,” said Matos. “This is one of the main drivers behind the open data format approach of using USD with Omniverse.” Down the road, Matos expects this could allow the AECO industry to move from building several static independent models to building live comprehensive digital twins. This will provide a live, single-source virtual representation of buildings, environments and cities. This could help train AI agents on thousands of scenarios before live implementations in the real world.
“I am happy to see the likes of Nvidia, Unity, Unreal at long last step into the arena of the built environment. Better late than never,” said Steve Holzer, principal at HolzerTime, an architectural and planning consultancy and member of the infrastructure working group at the Digital Twin Consortium. “As these tools’ novelty wears off, it will expose the incredible value of engagement in their physical space available to a wide audience.” He believes one of the biggest opportunities is to make it easier to contextually parse data for specific use cases. He believes that IFC is a very heavy data structure. Only a few groups have found ways to leverage it across domains, such as COBie for operations, SPARKie for electrical, HVACie for HVAC and WSie for water. USD could make it easier to develop new AI models for parsing structure in the way that NLP tools parse medical entities from health records.
“AI/ML will exponentially raise the value of data in all dimensions when the industry understands how to use it beyond novelty,” Holzer said.
Barry Bassnet, a digital twin technical expert, has been using photogrammetry techniques to capture 3D models of the built environment for 43 years. He is excited about the potential for USD to transform the construction industry.
“USD gives us a language to emulate to some degree just how our brain works and apply it to new processes, particularly AI,” he said.
He believes the missing link is a tool that automatically meta-tags the built environment. Today, people have to manually craft links about and between spatial entities to other sorts of documentation, such as PDFs and their content. The combination of USD and auto-tagging capabilities would make it easier to specify a lock for a window and then link to a 3D repair manual or get a replacement key.
Bassnet watched with excitement as VRML came and then faded due to bandwidth overhead.
“USD is the best chance of finding a way for the metaverse and the concept of digital twins to work,” he said.
Schleusner believes the AEC industry could learn from the success of USD in entertainment. Increasingly, the entertainment industry is making improvements to USD rather than proprietary file formats. As a result, entertainment workflows can exchange data directly between tools rather than through more complex transformations through APIs. Schleusner believes the AEC industry needs to adopt a similar approach to achieve the kind of innovation promised by digital twins for the built environment.
“The most instructive thing for the built world to take away from USD is that it is much easier to talk to one dataset versus having to talk to many application APIs,” said Schleusner. “The new IFC implementation for USD will shift the needle toward more of an interchange rather than interoperability approach. That is the only way we will get much closer to success.” Nvidia has not yet committed to actually represent IFC by following the existing standardization process. This will be important to promote collaboration between the AECO and technology vendors. “We are waiting to see what they want to do,” said Schleusner. “My enthusiasm is based on them doing it in the open as is required in our process.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,050 | 2,022 |
"How 3D Tiles is creating a new streaming protocol for games and the metaverse | VentureBeat"
|
"https://venturebeat.com/virtual/how-3d-tiles-is-creating-a-new-streaming-protocol-for-games-and-the-metaverse"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How 3D Tiles is creating a new streaming protocol for games and the metaverse Share on Facebook Share on X Share on LinkedIn In the early days of the internet, the only way to listen to a recording or watch a movie was to download the whole file in one go. Pioneering companies like RealAudio created the first large-scale internet radio stations that took advantage of emerging streaming audio formats. Later, YouTube and Netflix built extensive empires on the back of streaming video protocols.
Today, 3D games, digital twins and the metaverse are primarily delivered via large files. As a result, these experiences are restricted to a single playing field or building.
3D Tiles , an evolving Open Geospatial Consortium standard, promises to help stream and scale the metaverse.
Eventually, this could empower the next wave of metaverse startups in the same way that streaming media enabled YouTube, Netflix, Spotify, Disney+ and hundreds of other new media empires.
3D Tiles is an open standard for massive, heterogeneous 3D geospatial datasets such as point clouds, buildings, photogrammetry and vector data. It is built on top of glTF and other 3D data types. Whereas standards like glTF compress and optimize 3D assets for runtime efficiency and sharing, 3D Tiles takes that to the global scale by creating a spatial index of 3D content. The standard is widely used in the geospatial community and is gaining more traction in 3D games, digital twins and the industrial metaverse.
The 3D Tiles specification was first introduced in 2015 and standardized in 2019. It got a significant update last year with the introduction of 3D Tiles Next, which improves 3D analytics, can query 3D data more efficiently and improves support for contextual data. For example, Cesium is currently working with some large construction enterprises to analyze how the 3D terrain changes at a large site over time.
Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! From 3D objects to worlds VentureBeat caught up with Patrick Cozzi, CEO of Cesium, who conceived the idea for 3D Tiles. His team at Cesium was using glTF for individual 3D models like satellites, ground vehicles and aircraft, but there was no good way to efficiently share a collection of them. So, he began exploring ways to allow incremental streaming of the data.
“We realized that we needed to be able to transfer these massive models with terabytes of terrain, point clouds at centimeter resolution, with the geometry, textures and metadata across the web for efficient visualization and analysis,” Cozzi explained.
For example, a 3D Tiles-enabled app could deliver a view of a large city like Los Angeles, starting with the street and nearby buildings in high resolution, with progressively lower resolution for the buildings and landscape in the distance because they take up less screen space.
Building on GIS tiles Cozzi took inspiration from related approaches for streaming GIS data using 2D tiles. These techniques are widely used in apps like Google maps that allow you to zoom from the edge of the earth to an individual house. But 3D presented additional challenges. Today, apps like Google Maps Street View constrain you to hopping between points. Apps built on the 3D Tiles will allow you to walk smoothly along the road without downloading the whole world first.
The significant innovation was using hierarchical level of detail to show the highest resolution for things nearby and incrementally lower resolution in the distance. The same approach can provide a similar experience for seamlessly scrolling and scaling through a world for both 2D and 3D.
“We want to be able to stream the most accurate model but with the least amount of data transfer in the form of geometry, textures and metadata,” Cozzi said.
In this case, the geometry is data about the triangles used to describe the physical representation of the world. The textures represent the world’s colors, reflections and other visual properties. The metadata provides additional context for indicating which pixels are part of a window, door or solar panel, and information about their properties.
“The last category of metadata is essential for digital twins so that you can interact with the visualization, do analytics or create more accurate simulations,” Cozzi explained. For example, it could help you model RF propagation in a city, estimate solar capacity or count the number of pools.
One big challenge is efficiently parsing a large model into multiple representations at different scales. 3D Tiles builds a hierarchy that includes full-resolution source data and progressively lower-resolution versions. However, each version takes advantage of compression baked into glTF, dramatically reducing the file size. “Even though you have to store multiple levels of resolution, often the 3D Tileset can be smaller than the source data,” Cozzi said.
The next level The 3D Tiles community has recently released 3D Tiles Next , which is currently going through the OGC standards process. One significant improvement is more efficient random access. This promises to make it easier to query 3D Tiles data for artificial intelligence (AI) and analytics use cases, such as counting the number of solar panels and total window area near a particular point. It also provides the ability to connect metadata to individual pixels. The interoperability with glTF has also been improved.
Down the road, Cozzi hopes to explore ways to improve how 3D Tiles can bring massive scale to USD models, improve support for more game engines, and unlock new 3D AI capabilities. Game engines are increasingly integrated into GIS, digital twins and industrial metaverse tools.
“I think supporting many different game engines is really important to bring massive-scale 3D geospatial to as many people as possible,” Cozzi said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,051 | 2,022 |
"Kudelski secures IoT hardware lifecycle | VentureBeat"
|
"https://venturebeat.com/security/kudelski-secures-iot-hardware-lifecycle"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Kudelski secures IoT hardware lifecycle Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Kudelski , a Swiss security firm, has launched a Secure IP portfolio for IoT products. The new offering provides a hardware enclave for baking security primitives into new chip designs while safeguarding secrets across the complete product development and deployment lifecycle. It allows IoT vendors to embed a hardware root of trust directly into chips, which is harder to hack than software only implementations.
Kudelski has been a leader in protecting content on devices like set-top boxes and payment systems for decades. The new IoT support extends this expertise to more dynamic workflows required for IoT use cases.
Michela Menting, digital security research director at ABI Research, told VentureBeat that this is part of an industry trend from silicon IP firms to add support for various security primitives directly into their chip design libraries. Silicon security provides better security than software alone because it is more difficult for hackers to penetrate.
Securing the IoT hardware ecosystem Menting said that Arm was a forerunner in this space with security IP for various use cases. This helped pave the way for secure IP adoption and improvement by various semiconductor and hardware vendors.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Arm’s success initially for smartphones, with tech like CryptoCell and TrustZone and today for IoT, is really pulling the market forward and driving other silicon IP and semiconductors to target this market and also to innovate,” Menting explained.
Various vendors are also developing secure IP building blocks in addition to Arm and Kudelski, including Intel, Intrinsic-ID, Inside Secure, Secure IC, Maxim, MIPS, Rambus, Silex and Synopsys, among many others. Other vendors are targeting the open-source RISC-V ecosystems, including companies like Dover Microsystems, Veridify, Hex Five and SiFive.
These vendors are rallying behind emerging new IoT hardware security standards established by governments and vendors. The U.S. National Institute of Standards and Technology (NIST) recently launched the Federal Information Processing Standard (FIPS) 140 series to coordinate hardware and software security systems.
ARM Holdings introduced the Platform Security Architecture (PSA) specifications in 2017 and the first strategies went live in 2019. Another group of vendors, including ST Microelectronic, NXP Semiconductors and AWS, have developed the Security Evaluation Standard for IoT Platforms (SESIP).
A complex process The new Secure IP offering from Kudelski supports all these emerging standards. Kudelski’s IoT senior vice-president Hardy Schmidbauer told VentureBeat that a key differentiator compared with other secure IP offerings is support for services to help IoT vendors implement secure processes across the silicon development and deployment lifecycle. This complex process involves steps like secure personalization and credential management.
When an IoT vendor first creates a chip, it comes out as a complete blank, identical to others. In the personalization step, the vendor stamps a unique ID code into non-volatile memory on each chip and records this into its database.
Credential management involves adding unique encryption keys to each chip, while also protecting these from being altered or captured by adversaries. The combination of managing the unique serial number and encryption keys helps create the foundation for all the processes for security updating software and protecting the integrity of each device.
Kudelski has also added support for various security operations directly in a hardware security enclave that supports features like a random number generator, secure key storage and countermeasures against side-channel and fault attacks.
The platform also allows vendors to support capabilities like remote feature authorization and over-the-air updates. This extensive set of services takes advantage of Kudelski’s over thirty years of experience in secure hardware design and system infrastructure.
Menting said security IP is a big market that will continue to grow with the uptick of new IoT devices. But each device has different security needs depending on the use case and the risk it represents. An industrial control system will have different requirements than a home lighting controller.
“Not all devices need the same things and so you can offer a broad range of different IP offerings for different use cases,” she said.
Vendors are currently offering a wide range of security IP cores to support services like: Root of trust Secure boot Cryptographic accelerators True random number generators Physical, unclonable functions One-time programmable memory Trusted execution environments Memory protection units Tamper resistance Side channel analysis, resistance New hardware supply chain requirements This breadth of capabilities is required to extend the software bill of materials (SBOM) now mandated to protect software into hardware.
“We are seeing growing interest within both the commercial and government sectors in the implementation of a hardware bill of materials (HBOM) to augment security compliance and assurance provided by a software bill of materials,” said Andreas Kuehlmann, Chairman and CEO of Cycuity (formerly Tortuga Logic ), which provides tools for testing hardware security.
The HBOM must cover the entire design supply chain from IP providers to chip development organizations, all the way to their integration into actual products.
He argues that just as organizations should ensure the security of the supply chain, it is also essential to communicate to downstream partners and consumers about its due diligence and security assurance. Hardware security adds new requirements.
Even when a trusted supplier conducts thorough security verification that vets third-party security IP, it also needs to ensure that risks such as the leakage of root device keys are not introduced during compliance and integration steps.
The industry is in the early stages of developing the cohesive strategy required to ensure security across the hardware supply chain.
“Currently, industry and government efforts have not mastered many operational aspects of building products, as most organizations aren’t coordinating and communicating a cohesive hardware security approach across the roster of supply chain partners to produce the final product,” Kuehlmann said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,052 | 2,022 |
"How open source is accelerating electric sustainability | VentureBeat"
|
"https://venturebeat.com/programming-development/how-open-source-is-accelerating-electric-sustainability"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How open source is accelerating electric sustainability Share on Facebook Share on X Share on LinkedIn Increasing concerns about climate change , sustainability tax incentives and pending energy shocks are all giving a boost to efforts to decarbonize the grid and increase energy efficiency. One result has been increased support for Linux Foundation Energy (LF Energy), an open-source foundation focused on harnessing the power of collaborative software and hardware technologies to decarbonize global economies.
Today, virtually all devices that manage, control or orchestrate power systems are built on proprietary systems. The group recently announced several new efforts to usher in a new era of sustainability innovation: The Carbon Data Specification (CDS) for precise, granular carbon metrics will help firms connect carbon emissions to business planning.
EVerest is developing pen-source car and battery standards that will simplify chargers and create a market for new apps for storing and delivering power to mitigate blackouts and pay for the gear.
GridLab-D will commoditize energy digital twins to help power companies optimize operations.
Super Advanced Meter will transform the humble power meter into an intelligent power management control system to respond to energy pricing signals, detect errant devices, and help families and businesses plan for energy efficiency.
New sustainability apps, cloud services and energy management could work across equipment from different vendors and regions. This will drive incredible opportunities for innovations by startups and established companies alike. OneLF Energy member, WattCarbon , is currently developing decarbonization measurement tools. Another, Utilidata is also digitizing the grid edge to leverage new artificial intelligence (AI) algorithms.
The energy imperative Governments worldwide are competing to incentivize efforts to decarbonize the economy. For example, the recently passed Inflation Reduction Act in the U.S.
includes uncapped tax credits for electric vehicles and zero-carbon electricity. This could subsidize $374 to $800 billion in sustainability credits and catalyze far more private investment.
Meanwhile, governments are struggling to plan for expected shocks in the oil and gas industry caused by the war in Ukraine through a mix of tax credits and rolling blackouts this winter. Energy firms may also face significant windfall taxes that investments in decarbonization efforts could allay.
These factors have helped increase interest in supporting LF Energy. For example, Shell recently joined as a Strategic Member, the highest membership level, while Microsoft has upgraded from a General Member to a Strategic Member. Other new members include Areti, the utility for the city of Rome, and Futurewei, the U.S.-based research arm of Huawei.
All of these companies are betting on the continued success of the open-source group.
Betting on progress LF Energy was founded by executive director Shuli Goodman in 2018 with support from French energy giant RTE. Goodman told VentureBeat the group has already achieved several milestones in pursuit of its mission to accelerate the energy transition to reach decarbonization goals.
It laid the foundation with a functional architecture in 2020 to support future electrical grids.
“This stack had never been defined in this way before by government or industry, and it is already providing guidance on what is needed to manage a power network,” Goodman said.
Once this foundation was in place, the group doubled the number of open-source projects it hosted in 2021.
Earlier this year, 20 members collaborated on publishing the Digitalization of Energy Action Plan to help coordinate open-source efforts across academia, industry and governments. It also plans to roll out LF Energy Standards and Specification (LFESS) soon.
“While our focus up until now has been on software development, it is essential to build standards to ensure interoperability and scalability and to reduce the risk of stranded infrastructure in the future,” Goodman said.
Simplify carbon reporting Carbon data specification (CDS) is designed to create a semantic ontology and global standard for energy generation and consumption data. This allows apps and tools to calculate usage and generation carbon intensity. Until now, no specification has been defined by any other efforts.
“Currently, each utility, vendor, and commercial customer designs their best guess and makes assumptions based on it, even when it is not sufficiently detailed or granular enough to truly drive grid decarbonization,” Goodman said.
Organizations typically divide energy usage by predetermined weighting estimates, which vary across companies and geographies. The CDS will provide more granular details to help firms respond to hourly fuel mixes, which will help assess the impact of variations at an hourly level. It will also make it easier to account for energy usage that drives decarbonization, which is difficult to include today.
Standardize electric digital twins The LF Energy community is also bringing together several tools to improve real-time guidance for energy policy using simulation and digital twins.
They recently took over stewardship from the Pacific Northwest National Lab of GridLAB-D, which complements an existing effort called PowSyBI for modeling power systems. These efforts will help enterprises commercialize more accurate digital twins and simulation tools for planning, maintenance, and control.
“Digital twins are essential, very complex software, and it is very difficult to get good dynamic expressions of these super complex systems, which we hope this type of modeling will help address,” said Goodman.
Consolidate EV infrastructure As electric vehicles (EVs) take off, owners face a bevy of different charging outlets with various apps, software, and payment schemes. Just as the EU recently legislated a single charging standard for phones after years of dongle chaos — the LF Energy EVerest project could help to bring similar standardization to much more expensive car charging.
Today, EV customers cannot move between different charging environments created by proprietary charging solutions. Proprietary solutions embedded in thousands of chargers are exposed to the risk of a company going under or abandoning a proprietary standard, which could create stranded infrastructure and wastetime and investments. Open-source solutions like EVerest will ensure interoperability and a consistent experience across infrastructure, accelerating the transition to digital mobility.
Unlock market for meter apps Power meters today run an assortment of operating systems and applications. Some proprietary meters run embedded Linux , but with different supporting hardware and software.
“We are trying to provide reference designs for both software and hardware that can be used anywhere as a virtual node,” Goodman explained. “This will ensure interoperability, lower costs and speed scalability.” Existing advances in smart meters have made it easier to eliminate manual meter readings. The new Super Advanced Meter reference design will provide a standard platform for community energy, virtual power plants and improved automation. It may also reinvigorate promising technologies like infrastructure-mediated sensing that leveraged AI to monitor individual devices centrally.
Goodman said commercialization is their most significant goal in the coming year.
“We need to increase the capacity of the entire community to scale and prepare them for the energy transition. This requires developing open-source solutions and standards that can be integrated quickly into proprietary use cases. We also have to ensure the software is hardened through security best practices and create proper documentation, so everyone can make use of it,” she said. “Every single one of our efforts are organized around making commercial adoption easier and providing value to everyone from technology firms to generators, utilities, transmitters, distributors and end users.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,053 | 2,022 |
"Asperitas and Cast Software partner to accelerate cloud migrations | VentureBeat"
|
"https://venturebeat.com/programming-development/asperitas-and-cast-software-partner-to-accelerate-cloud-migrations"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Asperitas and Cast Software partner to accelerate cloud migrations Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In theory, migrating apps to the cloud should be as simple as installing existing apps on virtual machines (VMs) running in an Amazon data center.
It is a bit more challenging in practice, owing to the configuration settings used to set up these applications. There can be significant differences in how apps are configured on private enterprise servers compared with VMs in the cloud.
More importantly, enterprises can get the most mileage from a simple migration by tuning configuration settings for the cloud. This helps cloud apps, even those just running on cloud hardware, take advantage of features like scalability and dynamic provisioning. But it is often a complicated and manual process.
Asperitas , a cloud services company, and Cast Software , which makes software intelligence tools, have partnered to automate this process. Asperitas has an established Application Modernization Framework to help enterprises inventory existing apps and migrate them to the cloud. Meanwhile, Cast has been developing tools like Cast Highlight and Cast Imaging for analyzing software infrastructure at scale.
Asperitas specialists will use Cast Highlight to determine an app’s cloud-readiness, open-source risk and agility. This will allow enterprises to prioritize the order in which they move apps to the cloud based on readiness and value to the company.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! What is cloud-readiness? Legacy applications were written to run on physical enterprise servers. As a result, they miss out on dynamic scaling features built into the cloud. Failing to take advantage of these features also eliminates many cost benefits and the ability to handle spikes in demand.
In addition, legacy apps are often configured with relatively static configuration settings. They are written with specific on-premises environments in mind that rarely change. This impedes modern cloud development practices, which include creating new test environments for functional, performance and security testing, and then destroying them when no longer needed.
Derek Ashmore, application transformation principal at Asperitas, told VentureBeat, “Both of these problems, and there are many more, can be traced back to how the application is written.” Finding a needle in a configuration stack Source-code analysis tools like Cast Highlight can automatically identify these kinds of issues at scale. Without tooling, this type of code analysis is done by hand, which takes time and labor.
“Additionally, it’s not as accurate and is subject to human error,” Ashmore said.
The tool can also guide customers from an application portfolio perspective. Asperitas uses Cast Highlight to help customers determine which applications to move to the cloud first. It can also identify applications that are likely to require more refactoring and will take more time. And sometimes, it finds applications that are so anti-cloud-native, they need to be rewritten.
“We’re now better able to guide customers holistically at an application portfolio level as a result of the Cast partnership,” Ashmore explained. “While we could provide some guidance before the partnership, the breadth and depth of that guidance has greatly improved.” Asperitas has already worked with Cast to help a large financial institution formulate its application modernization efforts. It also uses Cast to help application developers identify specific code changes to make apps cloud-native.
Software intelligence is getting smarter Cast has several competitors doing static code analysis, such as Veracode, Checkmarx and Fortinet. Many tend to focus on general code quality and complexity. Ashmore does not feel they are as focused on preparing applications for the cloud.
Companies have been analyzing software codebases to calculate complexity and plan software engineering projects for decades. But now software intelligence is starting to support new capabilities thanks to artificial intelligence (AI), machine learning and big data innovations.
“Software analytics will exponentially improve from where it is today as artificial intelligence is increasingly used,” Ashmore said. “With that improvement will come higher quality information about applications and their limitations and vulnerabilities. I also believe that analytics will improve from a security perspective and make it easier to catch vulnerabilities earlier in the development process.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,054 | 2,022 |
"SAS launches first cloud analytics service on Azure | VentureBeat"
|
"https://venturebeat.com/enterprise-analytics/sas-launches-first-cloud-analytics-service-on-azure"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SAS launches first cloud analytics service on Azure Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
SAS is one of the old guards in the data analytics space. Its first tools were developed at North Carolina State University in the late 1960s and commercially launched in 1976. Throughout the last fifty years, it has maintained a strong lead in analytics tools for the enterprise, with an extensive lineup of more than two-hundred analytics and data processing components.
It has launched its first cloud analytics software-as-a-service (SaaS) offering of Viya on Azure. This marks a significant milestone for the company to help enterprises transition their analytics processes to the cloud.
“This is SAS’ first wave of making its powerful, trusted analytics platform available in a pay-as-you-go deployment model through the Microsoft Azure Marketplace,” said Alice McClure, director of artificial intelligence and analytics at SAS . “The strategy behind this offering is to provide an easy way for new customers to purchase and try SAS.” Data analytics for everyone SAS Viya includes machine learning (ML), visual analytics, data mining and model management components. The platform offers extensive capabilities across data preparation, statistics, augmented analytics, model deployment and management, artificial intelligence (AI) and ML, optimization, econometrics and more.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Built-in tools help users to interpret, understand and share analytics results. It allows users to solve business problems by combining multiple approaches with drag-and-drop or programming tools in various languages. Users across the organization can collaborate with others on a single unified platform.
Easing the installation process The new Azure offering includes the same analytics capabilities available in other cloud or on-premises deployment types. SAS also offers hosted deployment of SAS Viya via the SAS Cloud.
A big goal was to ease the deployment process compared to other deployment models.
“The user does not need to have extensive IT knowledge or expertise in cloud infrastructure to execute the deployment process,“ McClure explained.
McClure said they intend to make similar offerings on other public cloud marketplaces in the future, but have no official launch date for these other offerings.
Beginning to end in the cloud Terri Sage, CTO at 1010data , a provider of analytical intelligence to the financial, retail and consumer markets, told VentureBeat, “SAS’s new Azure Viya offering is significant, as it is truly one of the first beginning-to-end cloud and data center analytics services.” It helps enterprises flesh out the entire dataops pipeline in a single platform with support for automated data management, model management, data preparation, blending, analytics, machine learning and drag-and-drop services.
She expects the new offering to allow organizations to develop essential data analytic applications for sharing and solving business problems without the need for a large team of IT resources to build, deploy, operationalize and maintain these applications.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,055 | 2,022 |
"RisingWave democratizes stream processing, raises $36M | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/want-a-streaming-database-without-hiring-a-data-engineer-risingwave-raises-36m-bring-it-to-you"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages RisingWave democratizes stream processing, raises $36M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Traditional databases focus on data after it has been stored.
Stream processing helps businesses take action on data as it’s being generated. These tools allow analytics and decision engines to respond to IoT events, user clickstreams and financial market data. But they also typically require specialized data engineering skill sets to deploy and scale.
RisingWave has raised $36 million to help simplify this process with a streaming database that combines elements of traditional databases and stream processing. RisingWave Cloud service is currently in private preview. The funding will help grow the business team for a broader launch next year.
Customers are already using the tools for various business-critical applications: Real-time analytics and alerting analyzes millions of metrics to detect real-time anomalies.
IoT device tracking creates a real-time dashboard that shows traffic using road sensors.
Monitoring business trends by aggregating data about products and brands across social media.
Pre-aggregating data from multiple sources to optimize online application data sharing.
Streaming complexities RisingWave CEO, Yingjun Wu, Ph.D., founded the company in early 2021 after a decade of working on stream processing tech at AWS and IBM. He told VentureBeat that existing database systems like AWS Redshift, Snowflake and BigQuery could not efficiently process streaming data. At the same time, existing streaming processing tools were too complicated to use and operate at scale.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Building real-time applications leveraging streaming data should not incur operational overhead and become a barrier to entry,” he explained.
Popular stream processing tools like Apache Flink and Samza require multiple big data services and use Java-based APIs that can be difficult to learn. In addition, these systems combine compute and storage together, which complicates scalability.
Developers face numerous challenges connecting raw data streams to various applications and analytics. Operational challenges complicate efforts to ingest raw data. Companies also often need to change the application architecture to shorten the data pipeline latency for time-sensitive apps.
The next frontier in analytics A new generation of streaming databases connects stream processing tools to database-like tools for building apps and managing data. These modern tools combine the low latency of stream processing tools with traditional database paradigms to store, process and retrieve data. Competitors include Confluent’s ksqlDB, NYC-based Materialize and several Apache Flink-based companies.
Wu believes RisingWave is the only company to combine all the elements of modern data platform design from the ground up in the Rust programming language. Also, he decided to focus more on cost efficiency and ease of use rather than reducing latency.
The platform uses a cloud-native, distributed architecture that separates compute and storage as part of the design. It also supports various deployment models across containers and service meshes. Enterprises can also ingest data from popular streaming services such as Apache Kafka, Redpanda, Apache Pulsar and AWS Kinesis.
“We are making a bet that streaming is a new frontier for the data processing analytics field,” Wu said. “Streaming databases shorten the data pipeline cycle significantly. These systems provide the best opportunity to harness insights for event data with a short shelf life.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,056 | 2,022 |
"Teradata takes on Snowflake and Databricks with cloud-native platform | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/teradata-makes-database-analytics-cloud-native"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Teradata takes on Snowflake and Databricks with cloud-native platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Database analytics giant Teradata has announced cloud-native database and analytics support. Teradata already had a cloud offering that ran on top of infrastructure-as-a-service (IaaS) infrastructure, enabling enterprises to run workloads across cloud and on-premise servers.
The new service supports software-as-a-service (SaaS) deployment models that will help Teradata compete against companies like Snowflake and Databricks.
The company is launching two new cloud-native offerings. VantageCloud Lake extends the Teradata Vantage data lake to a more elastic cloud deployment model. Teradata ClearScape Analytics helps enterprises take advantage of new analytics, machine learning and artificial intelligence (AI) development workloads in the cloud. The combination of cloud-native database and analytics promises to streamline data science workflows, support ModelOps and improve reuse from within a single platform.
Teradata was an early leader in advanced data analytics capabilities that grew out of a collaboration between the California Institute of Technology and Citibank in the late 1970s. The company optimized techniques for scaling analytics workloads across multiple servers running in parallel. Scaling across servers provided superior cost and performance properties compared to other approaches that required bigger servers. The company rolled out data warehousing and analytics on an as-a-service basis in 2011 with the introduction of the Teradata Vantage connected multicloud data platform.
“Our newest offerings are the culmination of Teradata’s three-year journey to create a new paradigm for analytics, one where superior performance, agility and value all go hand-in-hand to provide insight for every level of an organization,” said Hillary Ashton, chief product officer of Teradata.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Cloud-native competition Teradata’s first cloud offerings ran on specially configured servers on cloud infrastructure. This allowed enterprises to scale applications and data across on-premise and cloud servers. However, the data and analytics scaled at the server level. If an enterprise needed more compute or storage, it had to provision more servers.
This created an opening for new cloud data storage startups like Snowflake to take advantage of new architectures built on containers, meshes and orchestration techniques for more dynamic infrastructure. Enterprises took advantage of the latest cloud tooling to roll out new analytics at high speed. For example, Capital One rolled out 450 new analytics use cases after moving to Snowflake.
Although these cloud-native competitors improved many aspects of scalability and flexibility, they lacked some aspects of governance and financial controls baked into legacy platforms. For example, after Capital One moved to the cloud, it had to develop an internal governance and management tier to enforce cost controls. Capital One also created a framework to streamline the user analytics journey by incorporating content management, project management and communication within a single tool.
Old meets new This is where the new Teradata offerings promise to shine. It promises to combine the new kinds of architectures pioneered by cloud-native startups with the governance, cost-controls and simplicity of a consolidated offering.
“ Snowflake and Databricks are no longer the only answer for smaller data and analytics workloads, especially in larger organizations where shadow systems are a significant and growing issue, and scale may play into workloads management concerns,” Ashton said.
The new offering also takes advantage of Teradata’s various R&D into smart scaling, allowing users to scale based on actual resource utilization rather than simple static metrics. The new offering also promises a lower total cost of ownership and direct support for more kinds of analytics processing. For example, ClearScape Analytics includes a query fabric, governance and financial visibility. This also promises to simplify predictive and prescriptive analytics.
ClearScape Analytics includes in-database time series functions that streamline the entire analytics lifecycle, from data transformation and statistical hypothesis tests to feature engineering and machine learning modeling. These capabilities are built directly into the database, improving performance and eliminating the need to move data. This can help reduce the cost and friction of analyzing a large volume of data from millions of product sales or IoT sensors. Data scientists can code analytics functions into prebuilt components that can be reused by other analytics, machine learning, or AI workloads. For example, a manufacturer could create an anomaly detection algorithm to improve predictive maintenance.
Predictive models require more exploratory analysis and experimentation. Despite the investment in tools and time, most predictive models never make it into production, said Ashton. New ModelOps capabilities include support for auditing datasets, code tracking, model approval workflows, monitoring model performance and alerting when models become non-performing. This can help teams schedule model retraining when they start to lose accuracy or show bias.
“What sets Teradata apart is that it can serve as a one-stop shop for enterprise-grade analytics, meaning companies don’t have to move their data,” Ashton said. “They can simply deploy and operationalize advanced analytics at scale via one platform.” Ultimately, it is up to the market to decide if these new capabilities will allow the legacy data pioneer to keep pace or even gain an edge against new cloud data startups.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,057 | 2,022 |
"ServiceNow launches workflow, workspace tools for enterprises and government agencies | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/servicenow-launches-workflow-workspace-tools-for-enterprises-and-government-agencies"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ServiceNow launches workflow, workspace tools for enterprises and government agencies Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
ServiceNow has rolled out a trio of solutions to improve productivity for employees, customers and partners. These promise to help digitize more aspects of workplace productivity, improving efficiency for enterprises and government agencies. The new tools available on the ServiceNow store are: Service Request Playbook for Public Sector: Helps set up communications workflows for government organizations Automated Service Suggestions: Helps streamline service inventory Workplace Scenario Planning: Helps reconfigure workspaces Improved access to government services; ML-generated service maps Service Request Playbook for Public Sector provides templates that help government agencies digitize and automate queries. This makes it easier for citizens to request services via their computers, mobile devices and third-party apps and track progress up through resolution.
Automated Service Suggestions uses machine learning to automatically analyze an organization’s network traffic and suggest entry points for business-critical services. This new tool from ServiceNow helps IT operators create a high-fidelity map of all infrastructure and software with a few clicks. These maps can be constantly recalibrated to help teams make decisions and respond to IT events more quickly.
Yugal Joshi, partner with Everest Group, an advisory service, believes service maps are moving in the right direction but progress has been incredibly difficult. Enterprises have been using change management databases (CMDB) for ages, but these are rarely updated or strategically used.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Combined with AIOps , such service maps can become effective in predicting and proactively addressing service disruptions,” Joshi said.
ServiceNow wants to help you adapt to hybrid work Workplace Scenario Planning from ServiceNow is designed to help businesses adapt to hybrid work models where employees are spending more time working from home. It helps enterprises reconfigure space and manage change for individual employees and departments in harmony with heat and electricity requirements. For example, space planners can design, compare and experiment with different space allocation scenarios using a drag-and-drop interface. This helps reveal tradeoffs in space utilization, cost and employee experience.
Joshi said this is part of a larger trend of how the classic definitions of workplace and workspace are evolving. Enterprises are considering employee wellbeing, sustainability and automation as aspects of the workplace, in addition to devices, platforms and applications.
This shift also represents an acceptance that despite some enterprises calling people back to the office, the hybrid model is here for the foreseeable future. It also indicates that workspace planning is receiving the importance it deserves. Previously, organizations “would buy spaces, build good-looking interiors, invest in advanced devices and software, but would forget to build a plan to manage it in a harmonized and cost-effective manner,” Joshi said.
However, there is also an acceptance that the hybrid work model has made things worse from a technology-sprawl perspective.
“That is where a company like ServiceNow, which integrates many technologies to build workflows, can create significant value for clients,” Joshi said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,058 | 2,022 |
"ServiceNow evolves from ITSM, aims to simplify business processes | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/servicenow-evolves-from-itsm-aims-to-simplify-business-processes"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ServiceNow evolves from ITSM, aims to simplify business processes Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
ServiceNow cut its teeth in IT service management (ITSM) and IT operations management (ITOM). The platform helps streamline the process of reporting and resolving IT problems. A significant update to the core platform, called the Now Platform Tokyo release, takes a major step toward the broader realm of enterprise service management (ESM) to respond to issues at a business level rather than just an IT level.
Monish Mishra, VP for service line markets and strategic engagements at Mindtree , told VentureBeat, “By adopting ESM, enterprises can leverage service management capabilities and framework throughout the organization.” For example, ServiceNow is adding new solutions for enterprise asset management (EAM), supplier lifecycle management (SLM), and environment, social, and governance (ESG) management. It also includes new tools for improving experience and engagement for customers and employees. A new ServiceNow Vault also promises to centralize data security and privacy management across the Now Platform.
Promoting digital first It is all about helping businesses to become digital first. At a practical level, this means simplifying the underlying platform and the business processes built on top.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! ServiceNow chief innovation officer, Dave Wright, told VentureBeat, “When implemented well, a single platform, like ServiceNow, should touch far-reaching corners of the company, seamlessly connecting disparate systems, breaking down data silos and making things easier, everywhere, for both employees and the business itself.” Now Platform Tokyo was designed to help businesses focus on improving experiences rather than just service levels. For example, the new Manager Hub provides a single destination for leaders to create learning and development plans for their teams and get personalized training.
The new release also improves connectivity between disparate systems to simplify complex processes. For example, this can help companies move from an SLM process based on emails and spreadsheets to an automated process spanning employees and suppliers.
Start at the process level When executives sit around the conference table, they may start with vague goals like improving the use of assets like buildings, factories and expensive equipment, enabling supply chain resilience, or becoming net zero by 2030. Turning each of these goals into measurable outcomes requires the coordination of people, processes and equipment.
New purpose-built features in the Tokyo release take a first stab at aligning high-level goals for EAM, SLM and ESG with business processes running across multiple apps. ServiceNow started with these solutions to help enterprises address some of the most pressing challenges facing customers.
“We are simplifying complex supply chains, automating asset management and delivering investor-grade sustainability data so our customers can more effectively safeguard their businesses and manage risk and compliance,” Wright said.
Wright said they also fill an important gap with their expanded ESG management capabilities. Most solutions focus on individual areas of ESG or even singular goals like reducing carbon emissions.
But the United Nations has identified 17 broad sustainable development goals (SDGs) and 169 measurable targets.
The danger in pursuing individual targets lies in compromising others in the process or adding additional work. A broader approach like ServiceNow’s new ESG Command Center for managing multiple simultaneous targets and the processes for achieving them will be required to increase all of them in tandem. It combines ESG management and reporting with enterprise risk management and strategic project management.
ServiceNow steps up collaboration ServiceNow is collaborating with leading systems integrators like Mindtree, NTT DATA Corporation and RSM US LLP to customize these new capabilities for each enterprise. This will help enterprises implement and fine-tune the latest release for their specific goals. Systems integrators believe the new solutions will be essential in meeting broader enterprise goals.
NTT DATA head of ServiceNow business, Tomoyuki Azuma, told VentureBeat, “ServiceNow is a complete breakthrough in terms of the way software development is made and in terms of the conventional wisdom of efficiency.” Azuma says it will play a significant role in creating the employee experience required to collectively drive ESG goals. Most businesses he works with struggle with a sustainability dilemma in which the extra work necessary to manage new KPIs drags down financial sustainability. A better ESG management experience will help employees identify ways to assess minor changes to achieve the optimal state of business processes.
“The ESG Management solution empowers our clients to shape the future of our society with sustainability in a way they can measure the ROI, manage the risk and demonstrate the impact to their local and global footprint. Awareness of the benefits of ESG will spread overall participation and innovation in ESG,” NTT DATA’s VP ServiceNow practice, Marci Parker, said.
Boosting engagement The update also includes new tools for improving employee experiences for common workflows. All these build on ServiceNow’s recently launched Next Experience UX.
Manager Hub provides a single place to review employee journeys and respond to requests. The tool lets managers create personalized experiences for each employee. They can edit tasks, add mentors, include AI-based learning recommendations from learning posts and integrate satisfaction surveys to understand how employees feel about their experience and journey at the company.
Admin Center allows system administrators to discover, install and configure ServiceNow solutions. Previously, ServiceNow administrators relied on their account managers when administering new applications or manually sorted through apps or ServiceNow Knowledge Management resources. With Admin Center, system administrators can now discover, install and configure ServiceNow solutions in one place.
Issue Auto Resolution for Human Resources applies natural language understanding to analyze requirements and deliver self-service content. Issue Auto Resolution was previously available for ITSM to help IT agents resolve routine incidents much more quickly by proactively deflecting them to an AI-powered virtual agent. The new capabilities for HR teams automate common HR inquiries like PTO requests, HR policy or benefits enrollment questions, and payroll issues.
Privacy and security controls Enterprises often spread data across dozens of separate applications, databases and workflows. A new ServiceNow Vault promises to centralize privacy and security control. It includes a tool for simplifying the management and protection of machine credentials and validating the authenticity and integrity of code being deployed to ensure no malicious insertion.
Wright said the Vault applies to all apps and data running on the Now Platform. However, it does not manage data from other apps.
Cautious optimism for EAM, SLM and ESG Yugal Joshi, partner at Everest Group, an advisory firm, told VentureBeat that the addition of new solutions for EAM, SLM and ESG indicates ServiceNow’s persistence in moving out of its ITSM and ITOM heritage to become an enterprise platform for clients for solving complex business problems. These new solutions have the potential to help IT leaders enhance their positioning and working relationships with business teams.
However, Joshi cautions new customers to do a thorough analysis before committing. This should include a cost analysis of subscription, integration, maintenance and upgrade factors. “Leaders need to understand the functionalities of these newer offerings and their relevance to their environment,” Joshi said.
It’s also essential to evaluate the maturity of these solutions. Everest research suggests that enterprises aren’t fully satisfied with the maturity of newer ServiceNow launches and the service partnerships to implement and scale them.
“This will be important for the CIO organization engaging with ServiceNow as a strategic platform vendor,” he said.
In addition, enterprises will need to understand the licensing policy. Everest research suggests enterprises struggle with ServiceNow licensing.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,059 | 2,022 |
"Data migration tool transfers cloud data 25 times faster | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/revolutionary-data-migration-tool-transfers-cloud-data-25-times-faster"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data migration tool transfers cloud data 25 times faster Share on Facebook Share on X Share on LinkedIn cloud computing. The data transfer and storage concept consists of a white polygonal interconnected structure within it. Dark blue background with small padlocks scattered on the background.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Komprise , a data management tools provider, has rolled out Hypertransfer for Elastic Data Migration, a solution that can speed cloud data transfers by 25 times, the company claims. This engineering feat highlights the industry’s efforts to address challenges in repurposing legacy protocols for the cloud.
One core issue is that the server message block (SMB) protocol, one of the most popular file-sharing protocols, has never been updated for the cloud. It was initially developed in 1985 to provide shared access to files and printers over a local area network (LAN). It worked well when files had to be shared across just a few hops.
However, SMB is a chatty protocol, which adds a lot of overhead when communicating over TCP/IP and across multiple routers. A large data transfer needs to wait for acknowledgment for each file, so the lag time can add up when transferring many small files. This can be a big issue for use cases such as electronic design automation and multimedia workloads that often involve large numbers of small files.
Komprise Hypertransfer helps cache the messages to minimize the number of roundtrips for wide area network (WAN) transfers. It also sends data across multiple parallel channels in parallel. The company reports transfers up to 25 times faster than with the Robocopy command popularized by Microsoft.
The speed-up drops to about 20 times when transferring fewer, larger files, such as copying an Android phone directory.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Other factors slowing data migration Komprise CEO and co-founder Kumar Goswami said several other factors besides chatty protocols can slow data migrations.
Unstructured data is one. The term refers to all types of files outside the well-formatted realm of databases. Most apps save raw data across multiple files for a given project, user or use case. As a result, more extensive unstructured data migrations can involve moving billions of files.
Another issue is that these files may be strewn across multiple users, volumes or locations. Migrating this data is not as simple as starting at the first file and moving to the last in a single table or directory. The data migration tools need to know what data to migrate and in what order. This can add additional overhead.
A third factor is that data migrations often involve moving files from data stored on one vendor’s equipment and software to another’s. It also sometimes requires translating across protocols and architectures, such as moving files stored on traditional PCs to objects stored in cloud services like S3.
“Different systems do not have the same security, storage and metadata capabilities, so the data migration solution needs to be able to bridge across these,” Goswami explained.
A tough problem to solve In addition to these technical problems, organizations must figure out how data migration fits into their overall cloud migration strategy. Teams need to consider the network, the security of the data in the cloud, the financial operations and ongoing costs of the data and where best to keep it.
Larger enterprises have traditionally engaged professional services firms with cloud data migration experience to help address these challenges. Komprise has been working on integrating these capabilities into a standardized cloud service to help data migrations become part of a long-term data strategy rather than a one-off project.
“Data migrations are now a part of a broader data management strategy and are intricately linked,” Goswami explained.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,060 | 2,022 |
"Quantum progress: How IBM, Microsoft, Google and Intel compare | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/quantum-progress-how-ibm-microsoft-google-and-intel-compare"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Quantum progress: How IBM, Microsoft, Google and Intel compare Share on Facebook Share on X Share on LinkedIn Quantum computer. Conceptual computer artwork of electronic circuitry with blue light passing through it, representing how data may be controlled and stored in a quantum computer.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Established enterprise leaders like IBM, Microsoft and Google continue to make progress in quantum computing. As a result, quantum computers are getting bigger and achieving advantages over traditional tech in limited circumstances.
These vendors are also developing cloud services that allow enterprises to test the waters of quantum algorithms using development tools and simulators running on classic hardware. It’s a complicated field with lots of nuance and subtlety about the significance of qubits, noise, endurance and scalability.
“The pace of innovation in quantum technologies continues to accelerate where it’s transitioning from scientific exploration to practical reality,” Chirag Dekate, VP analyst at Gartner, told VentureBeat, Building the quantum ecosystem A lot of work is required before enterprises start rolling out quantum applications. Dekate said that enterprises are already beginning to plan for the quantum era. He has seen enterprise client engagement around quantum more than double over the last three years. On top of that, Dekate said enterprises are starting to shift from ideating about quantum to devising and implementing quantum strategies.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Leaders like IBM, Microsoft, Google and others are making strides in quantum hardware such as with quantum error mitigation and dynamic circuits.
Governments worldwide are also strategically investing and encouraging quantum research hubs.
At the same time, quantum is also attracting hype , which a few vendors are leveraging for short-term gain by prematurely promoting quantum technology. Dekate fears this could trigger a quantum winter, like the artificial intelligence (AI) winter that hindered AI research for many years.
“ We are starting to see signs of this and are hoping for the best,” he said.
Here are how four technology leaders are approaching quantum computing IBM ‘s roadmap for quantum IBM has worked steadily for years to make quantum computing a commercial success.
Sandeep Pattathil, senior analyst at IT advisory firm, Everest Group, told VentureBeat that IBM has, “… a clear-cut roadmap for achieving large-scale, practical quantum computing with plans to have a 1,000-qubit computer in place by 2023 and have so far met all their milestones.” IBM recently unveiled a 433-qubit Osprey processor in November and plans to build a 1,121-qubit Condor in 2023. They also plan to unveil a 1,386-qubit Flamingo in 2024 and a 4,158-qubit Kookaburra in 2025.
Microsoft pioneers topological qubits Microsoft has also pioneered work on a topological phase of matter, a key milestone for creating topological qubits.
Pattathil said these are expected to be faster, smaller and less prone to losing information than other types of qubits currently under development. He also believes this puts Microsoft on a promising path to developing a scalable quantum computer for enterprise customers.
Google cuts the noise Google made waves a few years ago by announcing it had achieved quantum supremacy on an arcane mathematical problem. More recently, it decided to focus on mitigating noise in quantum computers with a prototype logical qubit that will be required to scale reliable quantum systems.
It has have also made progress on new quantum chips with better qubits, improved packaging for these chips, and developed techniques to calibrate chips with several dozens of cubits simultaneously.
This progress has allowed the company to reset qubits with high fidelity, making it easier to reuse qubits across multiple quantum computations. Google has also developed techniques for measuring computations in quantum circuits. The combination of these techniques allowed Google researchers to reduce errors by one hundred times while scaling from five to 21 qubits.
It has collaborated on work with Caltech to develop quantum algorithms that could learn about physical systems with far fewer experiments. Google also pioneered work with Stanford on time crystals , which could unlock new use cases for quantum computers.
Intel’s spin on quantum Intel has taken a different approach to scale quantum computers using spin qubit technology, also called quantum dots.
In October 2022, Intel demonstrated exceptional yield of quantum dot arrays using transistor fabrication technology.
“The high yield and uniformity achieved show that fabricating quantum chips on Intel’s well-established transistor process nodes is a sound strategy and is a strong indicator of success as the technologies mature for commercialization,” Pattathil said.
Paving quantum potholes The road to the quantum future is not straightforward , and experts believe that the industry will need to collaborate on solving many significant gaps, such as scalable error correction and systems.
Dekate said more work is needed on improving coherence times (qubit endurance) and gate times (number of gate operations before an error). Researchers must also improve quantum communications for exchanging quantum information and devise classical-quantum interconnect technologies to scale quantum environments.Once the quantum computers are here, new algorithms will be required to solve practical problems.
“The roadblock in quantum computing is related to algorithmic advances, not speed,” Pattathil said.
However, he is already seeing promising progress in applying quantum computing to practical industry problems.
Mercedes-Benz is exploring using quantum computing to create better batteries for its electric cars.
ExxonMobil is using quantum algorithms to discover the most efficient routes. And Mitsubishi Chemical is simulating chemical reactions.
Pattathil expects to see quantum computers integrated with other cutting-edge tech like AI and blockchain to unravel innovative use cases across financial services, pharmaceutical, bioscience and cybersecurity industries.
“Based on the trends we are seeing in the market, we feel quantum computing is well on its way to being technologically and commercially viable in the next decade,” Pattathil said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,061 | 2,022 |
"Quantinuum scales error correction to improve fault-tolerant quantum computing | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/quantinuum-scales-error-correction-to-improve-fault-tolerant-quantum-computing-%ef%bf%bc"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Quantinuum scales error correction to improve fault-tolerant quantum computing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Although quantum computing companies and researchers have made progress in scaling the number of physical qubits, this also tends to increase the rate of errors. A main concern in this area is that adding enough qubits together to solve significant problems may also lead to error-prone results.
Researchers at Quantinuum report they have recently found a way to scale the number of qubits to increase the performance and reduce the error rate.
This is no simple task because quantum computers have a higher volume of errors compared with classical computers. In addition, many error correction techniques that form a mainstay of classical computing, like a parity check, also introduce new errors in quantum computing.
Quantinuum was formed by the merger of Cambridge Quantum Computing , a leading quantum software company, and the quantum hardware division of Honeywell.
Cambridge Quantum Computing had been developing better quantum algorithms and ways to translate classical computer algorithms to work on quantum computers.
Meanwhile, Honeywell had been pioneering a novel quantum computing ion trap architecture that allows qubits to connect more easily than other approaches.
Honeywell’s work allowed the team to transform 20 physical qubits into two more reliable logical qubits. Although this may seem like a step backward numerically speaking, it’s a tremendous step forward since these qubits can be added together.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Researchers commonly refer to the current generation of quantum computers as part of the noisy intermediate scale quantum (NISQ) era. This work will ultimately pave the way to build fault-tolerant quantum computers that can scale to address significant problems.
Quantum twist on redundancy Hardware errors in which a transistor spontaneously switches tend to be rare in modern semiconductor circuits, but in some cases — like running a safety-critical system exposed to radiation — engineers design error correction systems that combine three processors. A supervisory system compares the results. If an error occurs, the supervisory system can detect if the calculation does not match and can safely ignore it if it does not match the others.
Quantum computing can introduce new problems. There are more kinds of errors that need to be corrected. A relatively simple parity check in classical computing can produce new errors in quantum computing.
Quantum computers can suffer from two kinds of errors: bit flips and phase flips. In a bit flip error, the qubit flips the computational state incorrectly from zero to one and vice versa. In a phase flip error, which does not occur in a classical computer, the phase of the qubit flips state. Previous theoretical research identified a way to correct both types of errors by constructing logical qubits. Last year, Quantinuum demonstrated a practical implementation of these techniques in a quantum computer using a 5-qubit code. However, this still increased errors as the number of qubits was scaled.
In the new technique , called a color code, the researchers found a way to combine seven logical qubits into one logical qubit in coordination with 2-3 ancillary qubits used for probing. They implemented this new color code technique on top of Quantinuum’s latest computer with 20 physical qubits to create two reliable logical qubits. These new logical qubits can be efficiently scaled in a way that increases fault tolerance that was not practical with the physical qubits or even the 5-qubit approach.
Russell Stutz, director of commercial hardware at Quantinuum, told VentureBeat this means that as they add more qubits, the probability of getting failures that ruin the entire computation decreases with a modest rise in the number of physical qubits.
One remaining challenge is the quantum error correction cycle. The simple act of probing a qubit for errors can introduce new ones. Stutz said future work will explore ways to ensure they are not adding more errors than they remove with an error correction code.
Connection required Researchers have thought about how different quantum error correction approaches might work. Although the Quantinuum approach isn’t delivering as many raw physical qubits as other approaches, these are fully connected, which opens opportunities to leverage these innovative algorithms. In many quantum architectures, each qubit is only connected to a few neighbors.
“We are now testing quantum error correction code concepts dreamed up in the late 1990s and can implement in these real systems for the first time,” Stutz said. “It is an exciting time for learning about quantum error correction.” Stutz says this research is a significant milestone on the long road to fault-tolerant quantum computing. He feels that researchers will be able to solve many practical problems once they scale systems to 50 logical qubits with lower error rates than physical qubits.
“It is laying the groundwork,” Stutz said. “You cannot really solve an industry-relevant problem with the number of logical qubits we are dealing with right now. We are essentially building really good components that will be used in a larger computation.” Read more : IBM touts ‘Quantum Serverless’ as it eyes path to 4,000-plus qubit VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,062 | 2,022 |
"How Samsara is driving digital transformation in the supply chain | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/how-samsara-is-driving-digital-transformation-in-the-supply-chain"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Samsara is driving digital transformation in the supply chain Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Hundreds of IoT platforms gather data from connected devices, equipment and vehicles. Many of these platforms target specific types of equipment or business workflows. Founded by Sanjit Biswas and John Bicket in 2015, Samsara launched into this space with an aim to bring data from these various platforms into a single connected operations cloud focused on transportation and logistics.
Samsara has recently announced the 200th partner integration on the Samsara App Marketplace , making it the largest open ecosystem for physical operations. Partners include leading vehicle manufacturers like Ford, GM, Navistar and Stellantis Free2move and transport refrigeration leader Thermo King.
These partnerships have driven data processing on the platform. In the last six months, Samsara has processed more than 2.6 trillion data points and over 23 billion API calls.
Focus on fleets There is considerable competition among various platforms for consolidating data, but Samsara has managed to grow its customer base beyond 20,000 enterprise customers across industries, including transportation, wholesale and retail trade, construction, field services, logistics, utilities and energy, government, healthcare and education, manufacturing and food and beverage.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The platform helps companies access, analyze and act on operations data in one consolidated service. A recent Samsara survey found that early adopters of IoT hardware are often more agile and resilient.
The company’s platform helps enterprises adjust fleet routes , coordinate operations like snow removal, or prioritize repair schedules after a disaster. Governments can also use these integrations to improve and optimize processes such as pothole repairs or waste removal and recycling.
Consolidating data Samsara’s App Marketplace provides customers with a single source of truth for all data related to connected fleet operations and the business processes built on them.
This consolidated approach helps enterprises replace silos of data and processes from different tools. For instance, companies might use various tools for vehicle routing, driver safety and hours of service compliance. Samsara’s platform consolidates these work streams and integrates the data between systems to boost efficiency, reduce costs and use the data at scale.
Companies specializing in a particular domain connect their service to the platform to attract new customers, simplify data integration and take advantage of data from across the ecosystem. According to a Samsara spokesperson, its customer Liberty Energy, a leading oilfield service firm with about 3,000 sites, expects to save $10 million through integration with its tax service provider and location tracking.
Other Samsara customers start with one use case and then take advantage of other services offered via its marketplace. For example, Arka Express, a van truckload carrier with nearly 500 trucks, started using Samsara for safety compliance. Once onboard, they expanded to additional services for maintenance, compliance and other use cases that helped reduce operational costs.
Samsara’s success is part of a broader trend around vendors developing connected data services for physical operations to drive digital transformation into the physical world — similar to the approach of companies like Mapped for buildings data and Zyter for telehealth data.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,063 | 2,022 |
"How Nvidia is driving greener computing | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/how-nvidia-is-driving-greener-computing"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Nvidia is driving greener computing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Data centers are increasingly exploring different ways to build more energy-efficient supercomputers, in addition to faster ones.
Nvidia has been addressing this challenge in several ways, ranging from more efficient processors, improved CPU and GPU coordination, new networking technologies and more efficient libraries.
Nvidia lead product manager of accelerated computing, Dion Harris, said that in scientific computing, performance is key, but what is becoming more pressing is being able to do it as efficiently as possible. So Nvidia has been exploring different ways to get the most performance out of the smallest data center footprint and the smallest carbon footprint.
Here is an overview of the new developments: An Nvidia H100 GPU supercomputer demonstrates almost twice the energy efficiency of A100 implementations.
A combination of Grace and Grace Hopper Superchips demonstrates a 1.8-times improvement for a 1-megawatt data center for accelerated computing.
BlueField DPU demonstrates 30% energy improvement per server.
Nvidia Collective Communications Library demonstrates 3-times improvement for simulations.
Updates to cuFFT library demonstrates 5-times improvement in large-scale FFT execution.
More efficient supercomputers Nvidia has been working with Lenovo on the first submission of a supercomputer built on the Nvidia H100 chip to the Green500 list of most efficient supercomputers. That is a milestone in and of itself. But early findings suggest that this may become one of the top contenders for the most efficient supercomputer.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In addition, this particular configuration is built on an air-cooled-based system, so it did not require any special piping or rack configurations that are sometimes required for high-performance and energy-efficient systems.
Harris said, “This will allow this type of configuration to be deployed anywhere in any classic data center.” Improving data center efficiency Nvidia has previously reported on how combining Grace and Grace Hopper Superchips can improve core CPU computing. New research suggests that it can also drive more efficient accelerating computing architectures.
They found a way to achieve a 1.8-times performance improvement for a standard 1-megawatt data center with about 20% of the load allocated to CPU partitions and about 80% allocated to accelerated partitions, compared to traditional x86 approaches.
Network offloading improvements Nvidia has also released some new research quantifying the benefits of offloading data management and networking tasks to the Bluefield DPU. The smart network interface controller combines traditional network functionality with accelerated networking, security, storage and control plane functions. The company found that it could reduce overall power usage by about 30% per server. In a large data center with about 10,000 servers, this could save roughly $5 million in energy costs over a three-year lifespan.
Faster simulations “Accelerating computing is a full-stack problem,” Harris explained. So, Nvidia has been optimizing the underlying libraries that help popular scientific computing tools work across multiple GPUs, systems and locations.
An update to the Nvidia Collective Communications Library (NCCL) drove a threefold performance improvement for VASP , a popular data center library, without any hardware changes. The VASP (Vienna Ab initio Simulation Package) supports atomic-scale material modeling.
Improvements in Nvidia CUDA Fast Fourier Transform (cuFFT) enabled a fivefold improvement on GROMACS , a simulation package for biomolecular systems. The new update also makes it easier to efficiently run FFT calculations across a much larger number of systems in parallel.
“This enables large FFTs at the full data-center scale,” Harris said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,064 | 2,022 |
"How genomic data is powering healthcare in Estonia | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/how-genomic-data-is-powering-healthcare-in-estonia"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How genomic data is powering healthcare in Estonia Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Genomic medicine has the power to transform and personalize the healthcare experience. However, despite this tremendous promise, genomic data is typically only used outside routine healthcare workflows to treat rare diseases or prioritize cancer diagnostics. Healthcare organizations face challenges ensuring privacy for genomic data across various workflows. Additionally, most doctors have not been trained to interpret the meaning of genomic results on treatment options and communicate these to patients.
The Estonian Genome Centre is hoping to change that. The research group is leading an ambitious pilot program to sequence the DNA of 200,000 Estonian citizens, securely manage the data at scale and weave insights into regular medical checkups and treatments. It is an effort designed to respect patient security at each step. Lessons from this pilot could eventually aid healthcare organizations around the world.
A focus on common ailments At a recent press event, Lili Milani, head of the Estonian Genome Centre, explained how it hopes to transform everyday healthcare for all Estonians. She said countries like the United Kingdom, France and Sweden have all developed advanced personalized medicine programs, but these are primarily focused on cancer and rare diseases.
In contrast, Estonia is exploring how to apply personalized medicine to front-line doctors.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We are using genomics to prevent common chronic common diseases,” Milani said.
Estonia’s death rate for heart disease is far higher than other Western countries like Finland, yet the adherence to medical treatment is about half.
“We believe that we could motivate people to improve health behavior with the right approach,” she said.
Protecting confidentiality at scale The first step is turning raw blood samples into useful genomics data. Milani said it previously costed about $1,000 to analyze a whole human genome a few years ago. Since then, the cost has dropped to about $100 thanks to a recent breakthrough from Ultima Genomics.
However, most of the research has focused on analyzing a much smaller subset of DNA for single nucleotide polymorphisms (SNPs), which looks for about 700,000 significant variants across people. This type of analysis is much cheaper to do at scale. When they started in 2001, the cost was around $300 per sample and is now under $50.
The data is only allowed for health research or treatment with a patient’s consent. The data cannot be used to raise insurance rates and cannot be analyzed without a patient’s consent. Use for criminal investigations is also prohibited. Although this may help solve serious crimes, Milani said there is a larger concern that allowing its use for that purpose could discourage participation.
The Genome Centre developed a novel data encryption and key management infrastructure to protect the confidentiality and privacy of each participant at scale. The actual genomic data is stored separately from any personally identifying information (PII). When teams need to run a study, they pull in genomic data for each individual using a unique key to connect genomic results to other patient data. This is done in a special room without Internet access or phones.
The center also keeps the actual blood samples for future analysis, such as the discovery of new clinically relevant SNPs. Each sample is labeled using a unique barcode that can only be connected to an individual with their encryption key.
Communicating the uncomfortable Milani’s team has conducted several studies to determine the best way to bring genomics information into the doctor’s office. The Genome Center analyzed subsets of their databases to identify individuals with an increased risk of high cholesterol or breast cancer. In theory, these are common conditions that can be prioritized based on family history. In practice, many family doctors have not made the connection.
“A genetic first approach could help prevent many of these diseases,” Milani said.
One concern was that patients might be stressed out to discover they were at higher risk for a particular disease. To address this, the group developed a comprehensive counseling program to accompany any discoveries. As a result, almost all patients reported a much higher emotion several months after the diagnosis.
“They were grateful they knew about the risk,” Milani said. “We discovered that if this information is accompanied by proper genetic counseling, then people can be relieved of the risk.” Improving drug prescriptions The team also explored how to incorporate pharmacogenetics data to improve patient prescriptions. Today, patients are generally given a prescription based on their sex and age rather than their specific metabolism. Genetic differences can reduce some drugs’ effectiveness or even promote side effects. Based on their metabolism, some patients can experience up to a massive difference in the concentration of a drug after ten hours.
The Clinical Pharmacogenetics Implementation Consortium has identified hundreds of potential gene-drug interactions. The Genome Centre has developed genetic reports for some of the most important ones and explored different ways to communicate this information to patients and their doctors.
Top hospitals are already doing this sort of testing for some drugs, particularly expensive ones, but it is uncommon and the process is slow. A doctor must take a sample, order the test, and wait a few weeks for the result before prescribing the medicine.
“Our vision is that the doctor can just query the genetic database to get the results in seconds rather than weeks,” Milani said.
Personalizing the future Today, several companies sell tests for detecting genetic variants associated with higher risks for diseases, but these are not generally accompanied by counseling programs.
Although, sometimes the doctors do not know what to do with the data when patients bring it in.
“There are no recommendations or guidelines, so a lot of the doctors will get angry at the patient,” Milani said.
It does not help that about half the doctors in Estonia are over sixty years old.
Her team is working with Estonia’s National Personalized Medicine Initiative to develop training for family doctors and nurses in genetic epidemiology and personalized medicine. They are also training more medical geneticists and genetic counselors to help explain genetic risks and suggest lifestyle interventions.
“A lot of this also focuses on how giving this kind of information can help improve their health behavior,” Milani said. “Will they take the stairs the next time they come up?” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,065 | 2,022 |
"Corraling Kafka: New ecosystem simplifies, democratizes event-streaming data for enterprises | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/corraling-kafka-new-ecosystem-simplifies-democratizes-event-streaming-data-for-enterprises"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Corraling Kafka: New ecosystem simplifies, democratizes event-streaming data for enterprises Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Aiven , a cloud-data platform based in Helsinki, has fleshed out an open-source ecosystem for Apache Kafka, a popular event-streaming platform. The new offerings promise to help enterprises consolidate their Kafka infrastructure using open-source components.
“Event streaming is transitioning toward the main stack of the IT infrastructure,” Filip Yonov, director of data streaming product management at Aiven, told VentureBeat. “At Aiven, we have witnessed the fastest growth in the event-streaming domain compared to all other products.” Apache Kafka provides the infrastructure for wiring streams of data together from databases, apps, IoT devices, and third-party sources. Kafka helps organize raw data into event streams that reduce data size and are easier to integrate into event-driven apps and analytics. Enterprises use it to improve customer experiences, build the industrial metaverse and monitor patients.
However, building out a Kafka infrastructure involves a lot of moving parts. Aiven has consolidated all the necessary tooling into one place to simplify this process. Key new enhancements include support for Apache Flink and data governance.
These complement existing tools for connecting services, replicating data and managing schemas for Kafka deployments.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The need for simplicity LinkedIn originally developed Kafka to integrate data across its large microservices infrastructure and open-sourced it in 2011. Over the intervening years, large enterprises have customized the tooling for their own needs, and several vendors have rolled out proprietary enhancements to fill in gaps around governance and integration. Many organizations use Kafka for various data pipeline scenarios, such as transferring data between applications in real-time or moving data from a database to a data warehouse.
Yonov told VentureBeat that as Kafka clusters become larger and more complex, they require additional tooling and governance to ensure proper operation and management. “Unlike existing Kafka solutions, Aiven’s offering does not require organizations to choose between proprietary tools and vendor lock-in or open-source technologies without support,” he said.
Improving the developer experience with event streaming One essential aspect has been to democratize the experience for working with event-streaming data. The open-source tool, Klaw, provides a self-service interface for managing Kafka clusters.
Kafkawize , which develops Klaw, recently joined Aiven’s open-source development office in September to help integrate their tools together. Now they are working together to improve self-service, simplify user management and enforce data governance.
Another significant development was to connect streaming data to SQL queries familiar to data engineers. The new Aiven for Apache Flink tools allows teams to process larger volumes of events and run real-time analytics using SQL. Aiven provides this as a fully managed service that reduces the complexity of deploying a Flink cluster. It also simplifies the integration with Aiven for Apache Kafka to filter, enrich and aggregate events on the fly.
Aiven hopes to replicate the success of other open-source frameworks like PostgreSQL, Kubernetes and Linux, built by a healthy mix of contributions from various communities.
“We truly believe that fostering an open-source, community-driven and inclusive ecosystem of technologies around Apache Kafka can drive further innovations and new developments in the data-streaming domain, ensuring the long sustainment of the technology in the future,” Yonov said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,066 | 2,022 |
"Bentley Systems launches 'phase 2' of the infrastructure metaverse | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/bentley-systems-launches-phase-2-of-the-infrastructure-metaverse"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Bentley Systems launches ‘phase 2’ of the infrastructure metaverse Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Bentley Systems, the infrastructure engineering software giant, launched phase 2 of the infrastructure metaverse at its Year in Infrastructure conference in London. This new phase includes many enhancements intended to bridge gaps between data processes in information technology (IT), operational technology (OT) and engineering technology (ET). It also significantly improves the handoff across infrastructure projects’ design, construction and operation workflows.
The essential vision is to help infrastructure companies evolve from using workflows built on documents and files to a more nimble, actionable and precise “data-centric” approach. This builds on Bentley’s years of experience with its iTwin platform, launched in 2018 with seven years of planning before that.
Bentley CTO Keith Bentley stressed that these enhancements were designed to augment rather than replace existing tools. Engineers could continue to use their existing tools, workflows and processes and then bring in new digital twin capabilities as appropriate. The idea is to provide a path toward the future.
Bentley has been instrumental in pioneering several infrastructure-related developments. One is a new data model for infrastructure digital twins. Another is a data schema for describing infrastructure. And a third is an approach to storing all digital twin data on top of an SQLite database. This differs from other cross-industry digital twin efforts like Nvidia’s Omniverse , built on the USD format. However, Bentley is committed to interoperability with Omniverse, gaming platforms like Epic Unreal and Unity, and industrial metaverse giants like Siemens.
Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Improving data-sharing capabilities Bentley is launching several new capabilities on the iTwin platform to extend the scope and interoperability of infrastructure data: iTwin Experience provides a single pane of glass for overlaying IT, OT and ET data to help users visualize, query and analyze infrastructure data in its full context. It takes advantage of Bentley’s work on the 3D Fast Transfer (3DFT) codec for streaming 3D data.
iTwin Capture helps teams automatically capture and analyze reality data from cameras, lidar sensors, drones and satellite imagery. This replaces Bentley’s ContextCapture. It uses advanced artificial intelligence (AI) techniques such as Neural Radiance Fields (NeRF) to generate high-quality models from a few photos. Adobe is using this new tool as part of its Adobe Substance 3D tool.
iTwin IoT automates processes for acquiring and analyzing IoT data generated by sensors and condition-monitoring devices. This will help teams align sensor measurements associated with physical infrastructure. It will also make it easy to train new algorithms to identify deterioration progression and prioritize repairs.
Integration with Immersive Environments such as Unreal, Unity and Nvidia Omniverse will enable immersive experiences across a wide range of devices. The iTwin platform supports interoperability with USD, glTF, DataSmith and 3DFT.
Bentley VP of technology Julien Moutt said, “We are excited to see what our users can achieve by combining such technologies, which are fundamental building blocks of the infrastructure metaverse.” Connecting infrastructure workflows In most larger infrastructure projects today, the vast majority of raw data is lost as projects move from the design phase through the construction and operations phases. Bentley has improved iTwin’s integration with ProjectWise for design and planning, Synchro for construction and AssetWise for ongoing operations. Other enhancements include: New project portfolio and program management capabilities, which extend the scope for ProjectWise from work-in-progress engineering to full digital delivery.
4D Design Review , which allows teams to securely share large complex models, regardless of the authoring tool. They can walk through designs, query model information and analyze embedded property data.
Advanced Design Validation , which allows teams to perform AI design validation to help them automatically detect engineering problems.
Components Center , which will help firms create reusable libraries of designs like the software industry does today.
AssetWise Asset health monitoring solutions , which provide prebuilt templates for common industry challenges like monitoring and repairing bridges and dams.
Building the foundation for the infrastructure metaverse In an interview with VentureBeat at the conference, Keith Bentley said he started thinking about how digital twins might benefit the construction industry in 2011. This was when the aviation and auto industries were starting to integrate computer aided design (CAD), simulation, and product life cycle management (PLM) tools into digital twins. Bentley Systems was already a leader in offering many tools for designing, scheduling and operating large infrastructure projects.
Bentley decided to focus on the data management and integration aspect. Every tool in the industry used its own unique file format, making it hard to move data from one application to the next. He recognized the need for sharing small updates rather than requiring everyone to download the latest large file, which could grow into gigabytes for larger projects.
“The information in those CAD models, we just threw it away, and I thought this was insane,” Bentley explained. “I started thinking about the alternative, which was a database. I was kind of disturbed that a database requires a server and an external connection, and then I discovered SQLite.” Then his team developed the Bentley Infrastructure Schema to help connect information about the things embedded in digital files. “One of the hardest parts about digital modeling is that things need to have an identity,” he said. “And that means something in the real world, something in the model, and something when it’s related to something else. And all those identifiers are different formats.” They also invented their iModel format as a kind of “Git for infrastructure information.” This helps enterprises create distributed copies of all the records in a digital twin that are synchronized by sending changes across copies of the digital twin.
“The approval process can now be against the database, not against the individual files,” Bentley said.
Up until now, most automation has involved automating the flow of approvals on documents, using tools for contract lifecycle management. Innovations in connecting engineering approvals to signed datasets will unlock the next wave of digital transformation.
Bentley expects what he calls “phase 2” of the infrastructure metaverse to last at least another five years. It will also take time for enterprises and governments to figure out how to move from signing documents to datasets and to take advantage of new AI and machine learning capabilities.
“Getting there from here has to be incremental because the Big Bang isn’t gonna happen,” Bentley said. “I don’t care how great the other side of that Big Bang is.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,067 | 2,022 |
"How Estonia paved the way for e-government | VentureBeat"
|
"https://venturebeat.com/automation/how-estonia-paved-the-way-for-e-government"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Estonia paved the way for e-government Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Governments worldwide are increasingly exploring ways to use digital transformation to drive efficiency and improve access for citizens. Estonia was the first country in the world to transition to e-government by making decentralized public and private databases interoperable at a national level about 20 years ago with the launch of X-Road.
As a result, Estonia has eliminated virtually all physical paper documents from government processes. Today, the only reasons a citizen needs to show up in person are to get married, divorced or exchange property. Everything else can be done online. What’s more, citizens can opt-in for automated data exchanges between organizations.
Most citizens can pay their annual taxes in a couple of minutes thanks to data automatically pulled from various government agencies, educational expenses and mortgage accounts. New parents are automatically enrolled in new child subsidies without filing any forms.
Now Estonia is beginning to export the data infrastructure to help other governments as well. This month, Malaysia launched an ambitious plan to connect more than 400 government organizations on top of an X-Road offshoot called the Unified eXchange Platform (UXP). This builds on the success of other countries using the e-government tech, including Finland, New Zealand, Iceland, Namibia and Colombia.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Building a decentralized platform VentureBeat caught up with Arne Ansper, CTO at Cybernetica , who helped design the foundation for X-Road in the mid-1990s. He had been working on secure data exchange tools and realized that this could be applied to a new government bid for digital infrastructure. The organization was transitioning from a government-run lab that had pioneered work on factory control systems and information security during the Cold War.
Estonia was in a significant transition, having just left the sphere of the Soviet Union. It was building a new government that worked in harmony with Western frameworks for private property. Estonia was exploring ways to leverage its leadership in computer research to improve government. However, these ambitious efforts ran into problems when Imre Perli, a freelance consultant, illegally amassed a massive database that he began selling to the highest bidder.
“This raised awareness that information security was important, and we needed to build a system that would prevent this kind of abuse,” Ansper said.
They realized that an extensive, centralized database was ripe for abuse. So, the government began soliciting ideas for decentralizing data services across government agencies that could be secured, audited, and facilitate legal agreements between agencies, businesses and citizens.
Like all governments, services were defined in terms of the exchange of paper documents that carried a legal meaning. They realized they needed to build on this foundation rather than replace it with something that might be more efficient, but that changed the way bureaucrats were used to working.
Keeping an open mind “We did not want to rewrite all the Estonian laws since that would create too much instability,” Ansper explained. “So, we created a system in which all the data is exchanged between organizations in the form of signed digital documents.” Cybernetica collaborated with the government and partners to help unify all aspects of inter-organizational data exchange. They started with the technical aspects, such as the protocol and security rules to use. They also created draft contracts for e-government authorities that included a template for common security measures.
“You can have big savings across organizations if all the authorities are using the same approach and documents since you don’t need to analyze them repeatedly,” Ansper explained.
One of the biggest challenges was that many agencies, such as the population registry, initially resisted sharing data with others. Agencies were concerned about the costs of reformatting and sharing data.
Also, agencies were often only rewarded for hitting internal business metrics. The groups behind the program collaborated with the Estonian Prime Minister’s office to develop metrics that rewarded agencies for enabling data reuse across other agencies as well. As a result, managers began looking for ways to make their data relevant for other agencies as well.
A major upgrade Cybernetica collaborated with several others to launch the first version of X-Roads as a pilot in 1998. The original code took advantage of earlier work on VPNs and digital signatures developed in C/C++. In 2012, they rewrote the entire code from scratch to take advantage of improvements in Java and modern security techniques.
Ansper said the top-level design goals were federation and support for modern public key infrastructure (PKI). Federation allowed each government agency to run its own version of X-Roads and make bilateral agreements with others that considered security and legal aspects of the data. For example, Finland and Estonia now use the platform to exchange export data across tax organizations while respecting security and privacy considerations.
The first version of X-roads used a home-grown key management infrastructure that burdened the government. The update took advantage of a new market of open PKI services from commercial providers, reducing costs.
In 2016, the Estonian Information Services Authority open-sourced the X-Road code under an open-source license. Cybernetic subsequently forked the code to UXP to make it easier to commercialize the platform for governments and business.
Simplifying the onramp to e-government Ansper stressed there is a big difference between the federated approach to e-government they took and the fully decentralized approach often advocated by blockchain enthusiasts. A federated approach promises better efficiency and allows each agency to maintain control over its own data.
For example, in Estonia, different agencies maintain control over data related to health, police, taxation and land ownership. This approach has made it easier to get buy-in across agencies in Estonia and also makes it easier to securely share data and workflows across organizational boundaries.
“This is how our democratic societies are built up. It makes sense that certain government authorities have control over data, but no single authority has all the power,” Ansper said. “The problems that blockchains are trying to solve in a very decentralized manner are better solved by digitally signed documents, contracts and the existing systems we have.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,068 | 2,022 |
"How advances in business process mining allow creation of digital twins of the organization (DTOs) | VentureBeat"
|
"https://venturebeat.com/automation/celonis-deepens-business-process-mining-power-digital-twin-organization"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How advances in business process mining allow creation of digital twins of the organization (DTOs) Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Celonis , a leader in process mining technology, has announced several significant enhancements to its process mining capabilities. The most important advance will help organizations analyze multiple processes simultaneously to create a digital twin of the organization (DTO).
Although other process mining vendors (and Gartner) have used the term DTO in the past, prior approaches took a piecemeal approach, analyzing each process separately.
Celonis CEO Alex Rinke told VentureBeat that with Process Sphere, several engineering enhancements have boosted process mining analytics performance over 100 times, enabling multi-object analytics.
This can simplify the experience for business users who are not process experts, reduce the complexity of analyzing multiple processes, and help the user identify how processes affect each other.
Here are the key new announcements: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Process Sphere analyzes multiple connected processes to power a DTO.
Business Miner combines process and business intelligence into simplified experiences for business users.
Accounts-receivable apps help businesses fight inflation.
Emporix partnership improves process execution across business partners.
Process Sphere takes process mining into the third dimension Celonis’s biggest news is the debut of Process Sphere, which helps analyze multiple processes from the perspectives of different kinds of users. For example, a given business process such as quote-to-cash or order-to-pay may span multiple apps, covering enterprise resource planning (ERP), customer relationship management (CRM) and supply chain management (SCM).
The new Process Sphere capabilities provide a 3D perspective on how processes affect each other, much as an MRI analyzes your body from multiple angles to paint a three-dimensional view. Rinke explained, “Businesses do not just have one process. They have many processes that interact with each other and are all important to drive performance. You need a 3D understanding to drive deeper optimization.” Multi-object process mining can help tease apart how events relate to each other and to different objects. For example, shipping a bike depends on other processes to ensure individual components like brakes are in stock so the bike can be manufactured and shipped on time.
Process mining performs complex analytics across millions of records to correlate log data in ERP, CRM and SCM systems with a chain of events in a process. Rinke said that by optimizing these algorithms’ performance over 100 times over the last couple of years, Celonis has gotten much closer to a real DTO.
“With a digital twin of the organization, we can look at how all processes in a company can interact at the same time,” he said.
Different business users, such as accountants, product managers and supply chain experts, can examine the complex relationships among business objects like orders, requisitions, invoices and shipments. This can provide insight into bottlenecks in specific processes and identify ways that minor delays in one process can have more significant impacts on other parts of the organization.
Simplifying the user experience A DTO is much more abstract than digital twins of physical things like buildings or cars. Processes are traditionally presented and explored through complex spaghetti diagrams showing the various steps and alternative paths involved in each process. Process Sphere improves upon this with color-coded lines that look more like a subway map.
But this is still a little too complex for users who may not be familiar with process analytics. So Celonis developed Business Miner as a more simplified approach to presenting insights about business processes in the context of a business’s current challenges.
“The idea with Business Miner was to create an experience that is extremely easy to use,” Rinke explained. “[Even if] business users don’t even know what a process is … they can still save millions of dollars.” Business users can analyze the way work flows across ERP, SCM and CRM systems to identify opportunities for improvement or cost reduction.
For example, an accounting user could explore factors affecting the percentage rate of on-time payments, assess the specific value of increasing the rate, and receive guidance on actionable steps to do so. The tools also let users weave charts, graphs and recommendations into consolidated reports and action lists they can share with other team members.
New apps for accounts receivable, ecommerce Celonis is also combining these enhancements with its execution management expertise, building low-code applications to support several domain-specific applications. For example, a new set of accounts-receivable apps helps enterprises boost working capital and reduce costs of collections management, credit management and dispute management.
Guided experiences for accounts receivable These new apps combine information about the processes with the data that flows through them. Companies can bring together data about customers, balance and contracts from transactional and analytical systems. This can help accounts-receivable teams identify, prioritize and pursue the most impactful actions. It can also help streamline and automate processes for collections, disputes and credit.
“The whole idea is we can surface and provide value through guided experiences,” Rinke explained.
Partnering with Emporix to optimize ecommerce adjustments Celonis has also partnered with Emporix on the Commerce Execution Platform to automatically optimize ecommerce processes affecting business partners. This will help enterprises automatically tune workflows in response to changes in customer demand, inventory, or supplier and fulfillment status.
Traditional B2B commerce systems typically require manual intervention to adapt to changes in pricing, stock or customer behavior. The new tool allows companies to monitor and adjust their end-customer interactions using process intelligence signals. For example, if the system determines a shipment is likely to be late, it can recommend an alternative.
“I think this will redefine the future of enterprise apps,” Rinke said. “When you have this cross-process understanding and intelligence, you can make things much better. It applies to the front office, back office and supply chain. This changes how you manage and optimize the performance of your business. It becomes a performance layer on top of your applications and business processes.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,069 | 2,022 |
"Why composability is key to scaling digital twins | VentureBeat"
|
"https://venturebeat.com/ai/why-composability-is-key-to-scaling-digital-twins"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why composability is key to scaling digital twins Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Digital twins enable enterprises to model and simulate buildings, products, manufacturing lines, facilities and processes. This can improve performance, quickly flag quality errors and support better decision-making. Today, most digital twin projects are one-off efforts. A team may create one digital twin for a new gearbox and start all over when modeling a wind turbine that includes this part or the business process that repairs this part.
Ideally, engineers would like to quickly assemble more complex digital twins to represent turbines, wind farms, power grids and energy businesses. This is complicated by the different components that go into digital twins beyond the physical models, such as data management , semantic labels, security and the user interface (UI). New approaches for composing digital elements into larger assemblies and models could help simplify this process.
Gartner has predicted that the digital twin market will cross the chasm in 2026 to reach $183 billion by 2031, with composite digital twins presenting the largest opportunity. It recommends that product leaders build ecosystems and libraries of prebuilt functions and vertical market templates to drive competitiveness in the digital twin market. The industry is starting to take note.
The Digital Twin Consortium recently released the Capabilities Periodic Table framework (CPT) to help organizations develop composable digital twins. It organizes the landscape of supporting technologies to help teams create the foundation for integrating individual digital twins.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A new kind of model Significant similarities and differences exist in the modeling used to build digital twins compared with other analytics and artificial intelligence (AI) models. All these efforts start with appropriate and timely historical data to inform the model design and calibrate the current state with model results.
However, digital twin simulations are unique compared to traditional statistical learning approaches in that the model structures are not directly learned from the data, Bret Greenstein, data, analytics and AI partner at PwC, told VentureBeat. Instead, a model structure is surfaced by modelers through interviews, research and design sessions with domain experts to align with the strategic or operational questions that are defined upfront.
As a result, domain experts need to be involved in informing and validating the model structure. This time investment can limit the scope of simulations to applications where ongoing scenario analysis is required. Greenstein also finds that developing a digital twin model is an ongoing exercise. Model granularity and systems boundaries must be carefully considered and defined to balance time investment and model appropriateness to the questions they are intended to support.
“If organizations are not able to effectively draw boundaries around the details that a simulation model captures, ROI will be extremely difficult to achieve,” Greenstein said.
For example, an organization may create a network digital twin at the millisecond timescale to model network resiliency and capacity. It may also have a customer adoption model to understand demand at the scale of months. This exploration of customer demand and usage behavior at a macro level can serve as input into a micro simulation of the network infrastructure.
Composable digital twins This is where the DTC’s new CPT framework comes in. Pieter van Schalkwyk, CEO at XMPRO and cochair for Natural Resources Work Group at Digital Twin Consortium, said the CPT provides a common approach for multidisciplinary teams to collaborate earlier in the development cycle. A key element is a reference framework for thinking about six capability categories including data services, integration, intelligence, UX, management and trustworthiness.
This can help enterprises identify composability gaps they need to address in-house or from external tools. The framework also helps to identify specific integrations at a capabilities level. The result is that organizations can think about building a portfolio of reusable capabilities. This reduces duplication of services and effort.
This approach goes beyond how engineers currently integrate multiple components into larger structures in computer-aided design tools. Schalkwyk said, “Design tools enable engineering teams to combine models such as CAD, 3D and BIM into design assemblies but are not typically suited to instantiating multi use case digital twins and synchronizing data at a required twinning rate.” Packaging capabilities In contrast, a composable digital twin draws from six clusters of capabilities that help manage the integrated model and other digital twin instances based on the model. It can also combine IoT and other data services to provide an up-to-date representation of the entity the digital twin represents. The CPT represents these different capabilities as a periodic table to make it agnostic to any particular technology or architecture.
“The objective is to describe a business requirement or a use case in capability terms only,” Schalkwyk explained.
Describing the digital twin in terms of capabilities helps match a specific implementation to the technologies that provide the appropriate capability. This mirrors the broader industry trend towards composable business applications. This approach allows different roles, such as engineers, scientists and other subject-matter experts, to compose and recompose digital twins for different business requirements.
It also creates an opportunity for new packaged business capabilities that could be used across industries. For example, a “leak detection” packaged business capability could combine data integration and engineering analytics to provide a reusable component that can be used in a multitude of digital twins use cases, Schalkwyk explained. It could be used in digital twins for oil & gas, process manufacturing, mining, agriculture and water utilities.
Composability challenges Alisha Mittal, practice director at Everest Group, said, “Many digital twin projects today are in pilot stages or are focused on very singular assets or processes.” Everest research has found that only about 15% of enterprises have successfully implemented digital twins across multiple entities.
“While digital twins offer immense potential for operational efficiency and cost reduction, the key reason for this sluggish scaled adoption is the composability challenges,” Mittal said.
Engineers struggle to integrate the different ways equipment and sensors collect, process and format data. This complexity gets further compounded due to the lack of common standards and reference frameworks to enable easy data exchange.
Suseel Menon, senior analyst at Everest Group, said some of the critical challenges they heard from companies trying to scale digital twins include: Nascent data landscape : Polishing data architectures and data flow is often one of the biggest barriers to overcome before fully scaling digital twins to a factory or enterprise scale.
System complexity : It is rare for two physical things within a large operation to be similar, complicating integration and scalability.
Talent availability : Enterprises struggle to find talent with the appropriate engineering and IT skills.
Limited verticalization in off-the-shelf platforms and solutions : Solutions that work for assets or processes in one industry may not work in another.
Threading the pieces together Schalkwyk said the next step is to develop the composability framework at a second layer with more granular capabilities descriptions. A separate effort on a ‘digital-twin-capabilities-as-a-service’ model will describe how digital twin capabilities could be described and provisioned in a zero-touch approach from a capabilities marketplace.
Eventually, these efforts could also lay the foundation for digital threads that help connect processes that span multiple digital twins.
“In the near future, we believe a digital thread-centric approach will take center stage to enable integration both at a data platform silo level as well as the organizational level,” Mittal said. “DataOps-as-a-service for data transformation, harmonization and integration across platforms will be a critical capability to enable composable and scalable digital twin initiatives.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,070 | 2,022 |
"Sensat creates digital twins for public infrastructure to improve productivity, raises $20M | VentureBeat"
|
"https://venturebeat.com/ai/sensat-creates-digital-twins-for-public-infrastructure-to-improve-productivity-raises-20m"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sensat creates digital twins for public infrastructure to improve productivity, raises $20M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
London-based Sensat raised another $20.5 million to help commercialize digital twins for infrastructure projects in energy, rail and telecommunications. This comes on the heel of several successful trials with large customers since its $10 million raise in 2019.
The company launched in 2018 with an ambitious plan to help infrastructure companies achieve the same productivity gains as in other sectors.
Sensat CEO, James Dean, told VentureBeat, “Sensat’s digital twin automates manual workflows and decision-making, boosts productivity by double digits, and consolidates information for more transparency and better-informed decision-making.” Contextualizing regular workflows Over the last few years, Sensat has fleshed out its digital twin technology, added new features like support for CCTV feeds and scheduling, and grown the customer base. It’s now used to build, plan and manage over $150 billion worth of infrastructure worldwide such as the UK National Grid, Heathrow Airport and Network Rail. National Grid was so impressed with the tech, they led the latest funding effort.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Dean said they have been exploring various ways to contextualize data within everyday workflows. For example, Sensat’s new CCTV API fetches camera footage from across a larger project to provide a real-time view overlaid into the digital twin. On a construction site, the CCTV feed might capture the movement of trucks, people and machinery and then use machine learning analytics to re-create those objects within the 3D virtual scene.
“These analytics are helping us to measure, quantify and learn how site operations work with real intimacy so that we can tweak their operations to be as efficient as possible,” explained Dean.
Sensat has also launched an integration with Oracle’s P6, the leading scheduling software for the infrastructure sector. This allows teams to load up their schedule, and Sensat will link it to activities and designs in the digital twin, helping to visualize workflows and rehearse precisely what will happen. Sensat’s single environment allows users to visit their project’s past, present and future to help with more effective decision-making.
Lessons from the trenches Dean said they have learned a lot from working with different site operators.
First, he expects most infrastructure asset owners to operate multiple digital twins. He often finds that infrastructure owners want different digital twins to support various stages of construction and operations.
He has also found that it is vital that a digital twin should look like the site. This helps people link new information to their physical experience working on the project.
The most successful projects focused on how to improve human productivity, safety and costs. They are using them to streamline processes that save teams time. He has seen teams struggle to solve functional problems rather than helping people do their job more efficiently and effectively.
Infrastructure digital twins could play a key role in helping to drive digital transformation across construction and infrastructure.
Competitors working on various aspects include giants like Bentley , Autodesk and ESRI.
In addition, several smaller firms are using digital twins to improve various construction workflows such as Agora , Buildots , Cupix and UrsaLeo.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,071 | 2,022 |
"Robocorp simplifies open-source RPA | VentureBeat"
|
"https://venturebeat.com/ai/robocorp-simplifies-open-source-rpa"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Robocorp simplifies open-source RPA Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The roots of robotic process automation (RPA) emerged from the test automation domain. Test engineers used RPA’s predecessors to emulate how humans type and click their way through applications in the early 2000s. In the 2010s, vendors started hardening these early tools to automate repeating tasks like copying data between apps, and RPA was born.
Now Robocorp , which emerged from an open-source test automation project, is hoping to capture a position in second-generation RPA tools that promise to harden and scale the technology. It recently launched a beta version of Automation Studio, which promises to bridge the communication gap between professional developers and business users. More importantly, this builds on the company’s second-generation RPA infrastructure and attractive pricing model.
First generation of RPA It is helpful to take a step back to understand why this is important. RPA sits in a crowded field of automation technologies, including low-code and no-code development tools, intelligent process automation, and the automation capabilities built into enterprise software platforms.
Although the first generation of RPA tools is not as fast as low-code automation, they are much easier for the average user to understand since it essentially mimics how people work with applications. Gartner lumps this ensemble of technologies together into hyperautomation , which is expected to reach $596 billion this year.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Today, the RPA industry is led by companies including Automation Anywhere, Blue Prism and UiPath — at least in revenue share. Microsoft recently started giving away access to the client side of its Power Automate Platform. A recent report by Blueprint Software titled, State of Automation in 2022 , found that Microsoft Power Automate was used by 76% of respondents, followed by Blue Prism (34%), Automation Anywhere (33%), and UiPath (23%). Blueprint makes tools for analyzing business processes and refactoring RPA code to work across RPA platforms. About 40% of respondents used multiple RPA platforms.
Room for competition The authors of the report noted that, “Since [RPA is] rather young compared to other enterprise software segments, it seems organizations are still uncovering which RPA platform is best for them according to their needs.” This is good news for the assemblage of RPA startups vying for a piece of the market, like Robocorp. Its new Automation Studio provides a shared view of RPA automations, called bots, for both developers and business users. It also builds on the company’s existing work coding RPA bots in Python that can run on open-source servers.
Robocorp was founded by Antti Karjalainen, Sampo Ahokas, and a small team of top developers who were active in the open-source test automation community called Robot Framework.
The team created the infrastructure to transform the test automation framework into a robust RPA platform, much like the RPA pioneers.
The company’s CEO, Karjalainen, told VentureBeat that the Robot Framework test automation capabilities could be applied to the RPA space to solve numerous problems that are not currently addressed by traditional RPA vendors. So, they built open-source development tools and a flexible cloud-native orchestration platform to help creators quickly and securely build, implement, and scale sustainable bots across their organizations.
This lets users automate virtually any process and technology — with exceptional speed and elasticity — with no licensing fees and a consumption-based pricing model. Aligning usage with pricing could be important for enterprises looking for opportunities to reduce the costs of their automation spending. The Blueprint survey found that enterprises were spending an average of $480,000, with 13% spending upward of $1 million on RPA annually.
“One of the big advantages of the Automation Studio is how it supports toggling between the work modes of both low-code business domain experts and pro-code developers in one platform,” said Jason English, a principal analyst at the advisory firm Intellyx.
English noted that he was also impressed with Robocorp’s foundation of an open-source automation framework that captures automations into transparently readable Python-like code assets. This makes it easier for companies to try it out with less risk of proprietary lock-in versus established RPA competitors.
“Developed automation assets are portable and at home within enterprise work management tools as well as automated software pipelines and GitOps,” he explained.
The field moves forward To be fair, all the RPA vendors have added considerable enhancements over the years to improve RPA quality, scalability, and development. For example, Automation Anywhere refactored its original platform to run in the cloud , UiPath enhanced RPA governance, and Blue Prism improved scalability.
One of the complaints about RPA is that it operates at the UI layer, so the original bots had to click and type their way through apps. Although this is much faster than a human, it is much slower than custom-coded API integration.
One advantage of the Robocorp platform is that it allows developers to create apps that automate at the level of the UI, the location in a web page, the API, or by specifying data access. This promises to give developers greater flexibility in how they craft automations that are more reliable and faster than UI-only automations.
Microsoft has started doing something similar with its Power Automate platform, allowing developers to create an automation that works through the UI or APIs for selected apps. That said, Robocorp’s open-source approach is already galvanizing a small army of consultants and systems integrators to build out a library of reusable automations across the industry.
This could give enterprises a bit more flexibility in their automation strategy. For example, the new Automation Studio interface could help improve communications between business users and developers.
“It opens the door for those who prefer a visual approach to automation, while keeping it open for those who prefer a more programmatic approach through multiple methods of building,” Karjalainen said. “It’s also a good learning tool for citizen developers that want to become better versed in code.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,072 | 2,022 |
"Nvidia, Rescale team to enhance AI cloud automation and HPC-as-a-service | VentureBeat"
|
"https://venturebeat.com/ai/nvidia-rescale-team-to-enhance-ai-cloud-automation-and-hpc-as-a-service"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia, Rescale team to enhance AI cloud automation and HPC-as-a-service Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Nvidia and Rescale today announced several enhancements designed to simplify artificial intelligence (AI) development and optimize high-performance computing (HPC) workflows. Nvidia is powering a new AI compute recommendation engine (CRE) to replace a more manually tuned approach. It’s also integrating the Nvidia AI platform into Rescale’s HPC-as-a-service offering.
Both developments promise to make it easier to spin up new scientific workloads and operate them more efficiently. This will also apply equally to public cloud service and private cloud infrastructure.
Rescale specializes in tools for automating scientific computing workloads — a field that is ripe for disruption, since engineers may sometimes spend more time configuring experiments than running them. Earlier this year, Rescale announced tools to help refactor legacy apps to run on containers to dramatically simplify configuration and deployment.
It also announced a partnership with Nvidia in July to containerize many Nvidia workloads. The latest news builds on this partnership to automate support for Nvidia’s AI platform. This will automates the use of AI for physics, recommendation engines, simulations, medical research and more. It also applies Nvidia’s recommendation capabilities back on the HPC infrastructure itself.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AI-powered infrastructure recommendations Spinning up scientific computing workloads requires a delicate balance involving hardware, networking, memory, software and specific configurations. Rescale and Nvidia have collaborated on what the two are billing as the world’s first AI-powered recommendation system for HPC and AI workloads. The companies claim it will assist teams with balancing decisions about architectures, geographic regions, price, compliance and sustainability objectives. Nvidia and Rescale trained the system using data from more than 100 million production HPC workloads.
“Prior to compute recommendation engines, the primary way we provided compute optimization was through our solution architects working with the customers guided by our internal benchmarks library,” Edward Hsu, Rescale’s chief product officer told VentureBeat. “With the compute recommendation engine, we are bringing unprecedented levels of automation and insights by applying machine learning [ML] to infrastructure telemetry and job performance data.” With the new engine, users choose a workload and Rescale will suggest a computing architecture to provide the best performance. Hsu claims that these recommendations are 90% accurate. Further optimization will also need to account for the models, which can impact both performance and the applications they run on.
Rescale is also integrating the Nvidia Base Command Platform software to orchestrate workloads across clouds and on-premises Nvidia DGX systems.
Expanding the reach and utility of AI The two companies are also partnering to support the Nvidia AI Enterprise Software Suite on top of the Rescale platform. Soon, this will help automate workflows using tools like Isaac for programming robots, Nemo for languages, Merlin for recommendations, Morpheus for Security and Holoscan for medical AI. Nvidia Modulus, a physics-ML framework, is also now available on Rescale — which will play a key role in helping companies create faster digital twins for simulating the physical properties of products and equipment.
Existing AI frameworks on Rescale, such as PyTorch and TensorFlow, are more general purpose. Modulus is a programmable physics-informed neural network that can create models that the company claims run hundreds or thousands of times faster than traditional simulation techniques. The Modulus support allows teams to more easily apply AI to emulate physics-based simulations at much higher performance and lower cost “As we see the engineers move from intuition-based engineering towards AI-assisted engineering, bringing together the tools for computational engineering and artificial intelligence will be critical to help companies accelerate new product innovation,” Hsu said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,073 | 2,022 |
"Nvidia Omniverse to support scientific digital twins | VentureBeat"
|
"https://venturebeat.com/ai/nvidia-omniverse-to-support-scientific-digital-twins"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia Omniverse to support scientific digital twins Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Nvidia has announced several significant advances and partnerships to extend the Omniverse into scientific applications on top of high-performance computer (HPC) systems. This will support scientific digital twins that join together data silos currently existing across different apps, models, instruments and user experiences. This work will expand upon Nvidia’s progress in building out the Omniverse for entertainment, industry , infrastructure , robotics , self-driving cars and medicine.
The Omniverse platform uses special-purpose connectors to dynamically translate and align 3D data from dozens of formats and applications on the fly. Changes in one tool, application or sensor are dynamically reflected in other tools and views that look at the same building, factory, road or human body from different perspectives.
Scientists are using it to model fusion reactors, cellular interactions and planetary systems. Today, scientists spend a lot of time translating data between tools and then manually tweaking the data representation, model configuration and 3D rendering engines to see the results. Nvidia wants to use the USD (universal scene description) format as an intermediate data tier to automate this process.
Nvidia lead product manager of accelerated computing, Dion Harris, explained, “The USD format allows us to have a single standard by which you can represent all those different data types in a single complex model. You could go in and somehow build an API specifically for a certain type of data, but that process would not be scalable and extendable to other use cases or other sorts of data regimes.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Here are the major updates: Nvidia Omniverse now connects to scientific computing visualization tools on systems powered by Nvidia A100 and H100 Tensor Core GPUs.
Supports larger scientific and industrial digital twins using Nvidia OVX and Omniverse Cloud.
Enhances Holoscan to support scientific use cases in addition to medical. New APIs for C++ and Python will make it easier for researchers to build sensor data processing workflows for Holoscan.
Added connections to Kitware’s ParaView for visualization, Nvidia IndeX for volumetric rendering, Nvidia Modulus for Physics-ML, and Neural VDB for large-scale sparse volumetric representation.
MetroX-3 extends the range of the Nvidia Quantum-2 InfiniBand Platform up to 25 miles. This will make connecting scientific instruments spread across a large facility or campus easier.
Nvidia BlueField-3 DPUs will help orchestrate data management at the edge.
Building bigger twins Processing latency is one of the biggest challenges with building Omniverse workflows that span many tools and applications. While it is one thing to translate between a few file formats or tools, creating live connections between many requires serious computing horsepower. The larger Nvidia A100 and H100 could help reduce the latency in running the larger models, and support for Nvidia OVX and Omniverse Cloud will help enterprises scale composable digital twins across more building blocks.
Nvidia created a demo showing how these new capabilities can simulate more aspects of data centers.
Earlier this year, they reported on work to simulate data center network hardware and software.
Now they can bring together engineering designs from tools like Autodesk Revit, PTC Creo and Trimble SketchUp to share designs across different engineering teams. These can be combined with port maps in Patch Manager for planning cabling and physical connectivity within the data center. Then Cadence 6SigmaDCX can help analyze heat flows, and Nvidia Modulus can create faster surrogate models to do what-if analysis in real time.
Nvidia is also working on a partnership with Lockheed Martin on a project for the National Oceanic and Atmospheric Administration. They plan to use the Omniverse as part of an Earth observation digital twin to monitor the environment and gather data from ground stations, satellites and sensors into one model. This could help improve our understanding of glacial melting, model climate impacts, assess drought risks and prevent wildfires.
This digital twin will work with Lockheed’s OpenRosetta3D to store data, apply artificial intelligence (AI) and build connectors with various tools and apps that are standardized using the USD format to represent and share data across the system. Nvidia Nucleus will translate between native data formats and the USD format, and then deliver those to Lockheed’s Agatha 3D viewer, based on Unity, to visualize data from multiple sensors and models.
Harris believes these enhancements will usher in a new era of digital twins that evolves from passively reflecting a model of the world to actively shaping the world. A two-way connection will leverage IoT, AI and the cloud to issue commands to equipment in the field. For example, Nvidia is working with Lockheed Martin on using digital twins to help direct satellites to focus on areas at increased risk of forest fires.
“We are just scratching the surface of digital twins,” Harris said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,074 | 2,022 |
"Nvidia and PassiveLogic team up to drive integration for autonomous buildings | VentureBeat"
|
"https://venturebeat.com/ai/nvidia-and-passivelogic-team-up-to-drive-integration-for-autonomous-buildings"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia and PassiveLogic team up to drive integration for autonomous buildings Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Nvidia has invested $15 million in PassiveLogic , a pioneer in autonomous building control systems.
The investment will help drive integration between PassiveLogic’s tools and Nvidia’s Omniverse platform for the industrial metaverse.
PassiveLogic is developing a growing ecosystem of tools built on top of digital twins to enable generative design, autonomous systems and next-generation artificial intelligence (AI). These help architects, engineers, contractors and building owners improve the efficiency and reduce the cost of building operations.
The platform helps users quickly collaborate around AI controls, test them on digital twins of the building, and then deploy them into operations. The company claims its Hive control platform is ten times faster to install and reduces energy consumption by a third compared to conventional automation solutions.
PassiveLogic is also driving the quantum digital twin standard for autonomous systems that helps describe system-level interactions between components, equipment, assemblies and environments.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Visualization meets automation According to PassiveLogic’s CEO Troy Harvey, “Nvidia’s Omniverse and PassiveLogic’s Quantum are each focused on different and complementary aspects of describing the world through digital twins.” The Omniverse is very focused on geometry and visualization and providing the integration to universal scene description (USD) workflows for industrial digital twins. PassiveLogic’s compiler and compute technology runs the AI for these digital twins at the edge on Nvidia GPU technology. Integration between the platforms will make it easier to embed digital twins into AI control systems to support autonomous building controls that can adapt to changes in user needs, the environment, or the equipment itself.
“As a partnership, we are really excited about the breadth of applications our combined technology platforms can address,” Harvey said.
Other components that PassiveLogic’s platform include are: The Quantum Creator which provides a CAD system for creating digital twins that describe what something is, how it works and why it would do specific actions.
An Autonomy Studio that enables users to build autonomous systems by composing digital twins into systems and environments through a drag-and-drop interface that outputs a system-level digital twin.
The Hive platform consumes these digital twins to provide real-time automation of buildings.
The Passport feature allows individuals to create and share their own personal digital twin reflecting physiological, ergonomic and comfort preferences.
New workflows These integrations can improve the control systems for any kind of equipment. In the short run, Harvey is most focused on opportunities for autonomous buildings to improve sustainability. He estimates that buildings consume about 41% of the world’s energy, and believes the company’s platform can help reduce that by 30%.
PassiveLogic can help teams at the beginning of a project to clarify project goals, iterate prototypes with generative design, and then automate control systems. Nvidia’s Omniverse then provides the visualization, animation and 3D exploration of the virtual world, once the project is underway. Omniverse also simplifies USD integration with other tools.
The new funding from Nvidia brings PassiveLogic’s total funding to more than $80 million. Other investors include building-asset owners, equipment manufacturers and venture investors such as Addition, Brookfield, Keyframe, RET, Era and A/O Proptech. This investment is part of a broader trend around using USD as a core data layer to simplify workflows across various tools. It complements efforts to integrate USD and IFC and Nvidia’s recent partnership with Siemens to grow the industrial metaverse on top of USD.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,075 | 2,022 |
"Nvidia advances medical AI and digital twin capabilities | VentureBeat"
|
"https://venturebeat.com/ai/nvidia-advances-medical-ai-and-digital-twin-capabilities"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia advances medical AI and digital twin capabilities Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Nvidia has been a leader in providing AI and digital twin infrastructure for the medical community.
Its various offerings improve diagnostics, the development of new medical devices, medical research and drug development. At the Fall GTC Conference , Nvidia announced various new medical tools, partnerships and workflows.
“GTC is a really unique healthcare conference, where we learn how AI and accelerated computing are advancing the field, from things like surgery all the way through to pharmaceutical research,” Nvidia’s VP of healthcare, Kimberly Powell, said in a press conference.
Highlights include: Release of MONAI 1.0, a new domain-specific AI framework that improves AI imaging workflows to medical diagnostics and robotics.
Migration of Clara Holoscan from MGX to IGX to simplify medical tool and robot development, deployment and management.
BioNeMo extends Nvidia’s large language model (LLM) to support protein, DNA and chemical analysis workflows.
Partnership with MIT’s and Harvard’s Broad Institute to accelerate human genomics research.
These various announcements build on and extend each other. Let’s walk through them one at a time.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! [ Follow along with VB’s ongoing Nvidia GTC 2022 coverage » ] MONAI simplifies medical imaging Nvidia and King’s College London introduced MONAI in April 2020 to simplify AI medical imaging workflows. This helps transform raw imaging data into interactive digital twins to improve analysis or diagnostics, or guide surgical instruments. The development and adoption of the platform have picked up steam with over 600,000 downloads, half of these in the last six months.
They are now officially rolling out Monai 1.0. It comes with several critical capabilities baked in. Interactive labeling can reduce by 75% the time required to label data for training AI models. Auto3D adapts AutoML techniques for automatically choosing machine learning models for 3D segmentation and interpretation. Monai Flare supports federated learning to enhance the privacy of medical data. Model Zoo comes with over 15 pretrained models. Native support for streaming imaging applications like endoscopy, ultrasound and surgical video helps streamline medical imaging workflows.
IGX industrializes the medical metaverse Nvidia introduced Clara Holoscan MGX earlier this year as a reference design for a medical device platform. Clara Holoscan on IGX builds on Nvidia’s experience to further streamline and industrialize medical device development on top of Nvidia’s new IGX platform for robotics. This reduces the effort it takes to integrate Holoscan into new products with integrated security and management capabilities.
Over 70 leading companies have been developing equipment on top of Clara Holoscan MGX, including Siemens Healthineers for MRI, Olympus for endoscopy, and Intuitive Ion for better lung biopsies. New products based on Clara Holoscan and IGX include Activ Surgical’s hyperspectral blood flow imager, Moon Surgical’s robotic-assisted surgeon, and Proximie’s telepresence surgery system.
“We learned that what we’re building for these medical device use cases is actually applicable to a much broader market,” said Powell. “Industrial automation and smart factories all have a similar robotics pipeline that needs to be executed on the far edges of the network and incorporate things like functional safety so that humans and robots can be in the same place.” The platform also helps minimize new applications’ latency to ensure patient safety. Powell said they set the goal of keeping latency down to 50 milliseconds. The latest version of Holoscan can do straight-up video processing in less than 10 milliseconds and supports more than 30 simultaneously running AI algorithms at less than 50 milliseconds.
Powell said they are aligning Clara with Nvidia’s Isaac platform for robotics and Omniverse platform for industrial digital twins. “We’re leveraging everything the company makes, and we’re connecting these platforms together because robotics isn’t unique in healthcare as it is in other domains,” Powell said. “And we take all the lessons learned and the necessary interconnections between these platforms to provide it back to the medical device industry.” BioNeMo speaks proteins Nvidia’s new BioNeMo Framework helps medical researchers train and develop large biomolecular language models at supercomputing scales. It extends efforts like the Nvidia NeMo Megatron framework and research projects like AlphaFold that use large language models to analyze proteins to support DNA, protein and chemical research.
Each domain has its own unique way of encoding data into strings. DNA uses nucleic acid sequences, proteins use amino acid sequences, and chemicals use simplified molecular-input line-entry system (SMILES) strings.
We have over 10,000 diseases and only 500 cures,” said Powell. “We need to boost numerical and experimental methods with AI to explore the nearly infinite chemistry and protein space. Nvidia BioNeMo LLM framework and cloud services will accelerate the development of AI that understands chemistry and biology.” The new framework comes with four pretrained models. ESM-1, introduced by Meta AI Labs, processes amino acid sequences to predict properties and functions. OpenFold helps visualize proteins. MegaMolBART can help predict chemical reactions, optimize mixtures or generate new ones. ProtT5 helps extend the capabilities of protein large language models to sequence generation.
Powell said Nvidia is providing BioNeMo as both a framework and a service. The framework will help researchers develop new pre-trained language models at any scale for chemistry, protein, DNA and RNA. It also supports data transformations necessary for biomolecules. Nvidia plans to provide early access to the BioNeMo service in October.
Nvidia-Broad partnership accelerates innovations Nvidia has also announced an extensive partnership with the Broad Institute of MIT and Harvard, a top genetics research group and tools provider.
Nvidia is porting Clara Parabricks computational genomics framework to the Broad Institute’s Terra cloud platform used by 25,000 leading medical researchers. Initially, they plan to support six new workflows. For example, a new whole genome sequencing workflow running on GPUs shortens the process from a day to an hour and cuts the cost in half compared to a CPU approach.
The two will also partner on building large language models for analyzing DNA and RNA. Nvidia is also contributing a new deep learning model to the Broad Institute’s genome analysis toolkit that more than 100,000 researchers use.
Powell said combining the Broad Institute’s deep domain expertise with Nvidia technology expertise could accelerate the deployment of new AI medical innovations from years to months.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,076 | 2,022 |
"How one company is optimizing building data for smarter monitoring | VentureBeat"
|
"https://venturebeat.com/ai/how-one-company-is-optimizing-building-data-for-smarter-monitoring"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How one company is optimizing building data for smarter monitoring Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Mapped , which normalizes access to smart building data , has launched a free tier for its service. The platform helps companies discover and utilize data from building systems, sensors and equipment from different vendors. This makes it easier to develop building management apps and digital twins using a single API for all equipment — it also helps normalize data into a consistent format.
The new launch promises to help connect building monitoring and management capabilities to cloud apps for scheduling, analytics and business workflows via cloud services such as Google Calendar, VergeSense, Microsoft 365 and OpenPath. It can also help operations teams fully map and integrate the data from building sensors , controls, equipment and infrastructure in as little as four days — allowing developers to focus on innovation rather than integration. It aims to make connecting with third-party services easier.
Mapped’s founder and CEO, Shaun Cooley, launched the company after struggling with data integration challenges while previously leading IoT efforts at Cisco. Since then, the company has quickly grown — emerging out of stealth last year. It has mapped more than 30 million square feet across 100 buildings with upwards of 30,000 device types. The company has also been a driver behind the Brick Schema , an open-source graph for building data.
Brick Schema was designed to improve access to and control of building data.
It helps organize access to sensor, HVAC, lighting and electrical systems — and defines spatial, control and operational relationships. The tool is a promising alternative to other specifications and standards, such as industry foundation classes (IFC), smart appliances reference ontology (SAREF), building topology ontology (BOT) and Project Haystack.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Opening doors Cooley told VentureBeat that the launch of the latest self-serve starter plan opens doors for developers, data scientists, and building solutions teams to connect and integrate cloud data sources with just a few clicks. Developers can immediately create their account to begin adding their cloud-based integrations, see the data in the Mapped Console, and access it via an API.
The company anticipates that many enterprises will opt for an upgrade to Mapped’s pro plan, which makes it easy to bring in data from building management systems that use legacy protocols like Modbus, BACnet, and LonWorks. The pro plan allows teams to deploy a virtual or physical Mapped Universal Gateway to discover, extract and normalize all on-premises data into a consolidated independent data layer. Developers can access this data via the Mapped Console or the GraphQL API.
When Cooley was at Cisco, he found that building asset managers would take months of manual inspections to discover and locate physical devices within a building. Then they would spend additional months connecting and integrating this data into customized building systems for each building. He claims that Mapped distills this down to four data categories: sustainability , predictive maintenance , security and tenant experience goals.
Eliminating silos Cooley predicts that an independent data layer will become a critical necessity of the smart building stack for commercial and industrial assets. The company has been busy developing tools to break data silos across building systems.
“By eliminating these silos and democratizing access to data across systems and buildings, Mapped enables flexibility in accessing real-time and historic time-series data, providing more in-depth insights for building owners, operators and solution providers,” he said.
The platform could also help building operators with owning and securing their data to avoid vendor lock-in. Cooley claims the solution will also make it easier to take advantage of new solutions that leverage artificial intelligence (AI) and machine learning to improve energy and sustainability management, predictive maintenance and occupant experiences.
“With less time spent on integrating and onboarding, and more time spent on the things that actually drive business value, we expect the caliber of value provided by the next generation of proptech [property technology] solutions to increase dramatically,” Cooley said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,077 | 2,022 |
"How digital twins are transforming network infrastructure, part 1 | VentureBeat"
|
"https://venturebeat.com/ai/how-digital-twins-are-transforming-network-infrastructure-part-1"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How digital twins are transforming network infrastructure, part 1 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This is the first of a two-part series. Read part 2 about the future state of digital twins – comparing how they’re being used now with how they can be used once the technology matures.
Designing, testing and provisioning updates to data digital networks depends on numerous manual and error-prone processes.
Digital twins are starting to play a crucial role in automating more of this process to help bring digital transformation to network infrastructure. These efforts are already driving automation for campus networks, wide area networks (WANs) and commercial wireless networks.
The digital transformation of the network infrastructure will take place over an extended period of time. In this two-part series, we’ll be exploring how digital twins are driving network transformation. Today, we’ll look at the current state of networking and how digital twins are helping to automate the process, as well as the shortcomings that are currently being seen with the technology.
In part 2, we’ll look at the future state of digital twins and how the technology can be used when fully developed and implemented.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! About digital twins At its heart, a digital twin is a model of any entity kept current by constant telemetry updates. In practice, multiple overlapping digital twins are often used across various aspects of the design, construction and operation of networks, their components, and the business services that run on them.
Peyman Kazemian, cofounder of Forward Networks , argues that the original Traceroute program written by Van Jacobson in 1987 is the oldest and most used tool to understand the network. Although it neither models nor simulates the networks, it does help to understand the behavior of the network by sending a representative packet through the network and observing the path it takes.
Later, other network simulation tools were developed, such as OPNET (1986), NetSim (2005), and GNS3 (2008), that can simulate a network by running the same code as the actual network devices.
“These kinds of solutions are useful in operating networks because they give you a lab environment to try out new ideas and changes to your network,” Kazemian said.
Teresa Tung, cloud first chief technologist at Accenture, said that the open systems interconnection (OSI) conceptual model provides the foundation for describing networking capabilities along with separation of concerns.
This approach can help to focus on different layers of simulation and modeling. For example, a use case may focus on RF models at the physical layer, through to the packet and event-level within the network layer, the quality of service (QoS) and mean opinion score (MoS) measures in the presentation and application layers.
Modeling: The interoperability issue Today, network digital twins typically only help model and automate pockets of a network isolated by function, vendors or types of users.
The most common use case for digital twins is testing and optimizing network equipment configurations. However, because there are differences in how equipment vendors implement networking standards, this can lead to subtle variances in routing behavior, said Ernest Lefner, chief product officer at Gluware.
Lefner said the challenge for everyone attempting to build a digital twin is that they must have detailed knowledge of every vendor, feature, and configuration and customization in their network. This can vary by device, hardware type, or software release version.
Some network equipment providers, like Extreme Networks , let network engineers build a network that automatically synchronizes the configuration and state of that provider’s specific equipment.
Today, Extreme’s product supports only the capability to streamline staging, validation and deployment of Extreme switches and access points. The digital twin feature doesn’t currently support the SD-WAN customer on-premises equipment or routers. In the future, Extreme plans to add support for testing configurations, OS upgrades and troubleshooting problems.
Other network vendor offerings like Cisco DNA , Juniper Networks Mist and HPE Aruba Netconductor make it easier to capture network configurations and evaluate the impact of changes, but only for their own equipment.
“They are allowing you to stand up or test your configuration, but without specifically replicating the entire environment,” said Mike Toussaint, senior director analyst at Gartner.
You can test a specific configuration, and artificial intelligence (AI) and machine learning (ML) will allow you to understand if a configuration is optimal, suboptimal or broken. But they have not automated the creation and calibration of a digital twin environment to the same degree as Extreme.
Virtual labs and digital twins vs. physical testing Until digital twins are widely adopted, most network engineers use virtual labs like GNS3 to model physical equipment and assess the functionality of configuration settings. This tool is widely used to train network engineers and to model network configurations.
Many larger enterprises physically test new equipment at the World Wide Technology Advanced Test Center.
The firm has a partnership with most major equipment vendors to provide virtual access for assessing the performance of actual physical hardware at their facility in St. Louis, Missouri.
Network equipment vendors are adding digital twin-like capabilities to their equipment. Juniper Networks’ recent Mist acquisition automatically captures and models different properties of the network that informs AI and machine optimizations. Similarly, Cisco’s network controller serves as an intermediary between business and network infrastructure.
Balaji Venkatraman, VP of product management, DNA, Cisco, said what distinguishes a digital twin from early modeling and simulation tools is that it provides a digital replica of the network and is updated by live telemetry data from the network.
“With the introduction of network controllers, we have a centralized view of at least the telemetry data to make digital twins a reality,” Venkatraman said.
However, network engineering practices will need to evolve their practices and cultures to take advantage of digital twins as part of their workflows. Gartner’s Toussaint told VentureBeat that most network engineering teams still create static network architecture diagrams in Visio.
And when it comes to rolling out new equipment, they either test it in a live environment with physical equipment or “do the cowboy thing and test it in production and hope it does not fail,” he said.
Even though network digital twins are starting to virtualize some of this testing workload, Toussaint said physically testing the performance of cutting-edge networking hardware that includes specialized ASICs, FPGAs, and TPUs chips will remain critical for some time.
Culture shift required Eventually, Toussaint expects networking teams to adopt the same devops practices that helped accelerate software development, testing and deployment processes. Digital twins will let teams create and manage development and test network sandboxes as code that mimics the behavior of the live deployment environment.
But the cultural shift won’t be easy for most organizations.
“Network teams tend to want to go in and make changes, and they have never really adopted the devops methodologies,” Toussaint said.
They tend to keep track of configuration settings on text files or maps drawn in Visio, which only provide a static representation of the live network.
“There have not really been the tools to do this in real time,” he said.
Getting a network map has been a very time-intensive manual process that network engineers hate, so they want to avoid doing it more than once. As a result, these maps seldom get updated.
Digital twins as an intermediate step in automation Toussaint sees digital twins as an intermediate step as the industry uses more AI and ML to automate more aspects of network provisioning and management. Business managers are likely to be more enthused by more flexible and adaptable networks that keep pace with new business ideas than a dynamically updated map.
But in the interim, network digital twins will help teams visualize and build trust in their recommendations as these technologies improve.
“In another five or 10 years, when networks become fully automated, then digital twins become another tool, but not necessarily something that is a must-have,” Toussaint said.
Toussaint said these early network digital twins are suitable for vetting configurations, but have been limited in their ability to grapple with more complex issues. He said he likes to consider it to be analogous to how we might use Google Maps as a kind of digital twin of our trip to work, which is good at predicting different routes under current traffic conditions. But it will not tell you about the effect of a trip on your tires or the impact of wind on the aerodynamics of your car.
This is the first of a two-part series. In part 2, we’ll outline the future of digital twins and how organizations are finding solutions to the issues outlined here.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,078 | 2,022 |
"How digital twins are transforming network infrastructure: Future state (part 2) | VentureBeat"
|
"https://venturebeat.com/ai/how-digital-twins-are-driving-network-transformation-future-state-part-2"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How digital twins are transforming network infrastructure: Future state (part 2) Share on Facebook Share on X Share on LinkedIn Global communication Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This is the second of a two-part series. Read part 1 about the current state of networking and how digital twins are being used to help automate the process, and the shortcomings involved.
As noted in part 1 , digital twins are starting to play a crucial role in automating the process of bringing digital transformation to networking infrastructure. Today, we explore the future state of digital twins – comparing how they’re being used now with how they can be used once the technology matures.
The market for digital twins is expected to grow at a whopping 35% CAGR (compound annual growth rate) between 2022 and 2027, from a valuation of $10.3 billion to $61.5 billion. Internet of things (IoT) devices are driving a large percentage of that growth, and campus networks represent a critical aspect of infrastructure required to support the widespread rollout of the growing number of IoT devices.
Current limitations of digital twins One of the issues plaguing the use of digital twins today is that network digital twins typically only help model and automate pockets of a network isolated by function, vendors or types of users. However, enterprise requirements for a more flexible and agile networking infrastructure are driving efforts to integrate these pockets.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Several network vendors, such as Forward Networks , Gluware , Intentionet and Keysight’s recent Scalable Networks acquisition, are starting to support digital twins that work across vendors to improve configuration management, security, compliance and performance.
Companies like Asperitas and Villa Tech are creating “digital twins-as-a-service” to help enterprise operations.
In addition to the challenge of building a digital twin for multivendor networks, there are other limitations that digital twin technology needs to overcome before it’s fully adopted, including: The types of models used in digital twins needs to match the actual use case.
Building the model, supporting multiple models and evolving the model over time all require significant investment, according to Balaji Venkatraman, VP of product management, DNA, at Cisco.
Keeping the data lake current with the state of the network. If the digital twin operates on older data, it will return out-of-date answers.
Future solutions Manas Tiwari, client partner for cross-industry comms solutions at Capgemini Engineering, believes that digital twins will help roll out disaggregated networks composed of different equipment, topologies and service providers in the same way enterprises now provision services across multiple cloud services.
Tiwari said digital twins will make it easier to model different network designs up front and then fine-tune them to ensure they work as intended. This will be critical for widespread rollouts in healthcare, factories, warehouses and new IoT businesses.
Vendors like Gluware, Forward Networks and others are creating real-time digital twins to simulate network, security and automation environments to forecast where problems may arise before these are rolled out. These tools are also starting to plug into continuous integration and continuous deployment (CI/CD) tools to support incremental updates and rollback using existing devops processes.
Cisco has developed tools for what-if analysis, change impact analysis, network dimensioning and capacity planning. These areas are critical for proactive and predictive analysis to prevent network or service downtime or impact user experience adversely.
Overcoming the struggle with new protocols Early modeling and simulation tools, such as the GNS3 virtual labs, help network engineers understand what is going on in the network in terms of traffic path, connectivity and isolation of network elements. Still, they often struggle with new protocols, domains or scaling to more extensive networks. They also need to simulate the ideal flow of traffic, along with all the ways it could break or that paths could be isolated from the rest of the network.
Christopher Grammer, vice president of solution technology at IT solutions provider Calian, told VentureBeat that one of the biggest challenges is that real network traffic is random. The network traffic produced by a coffee shop full of casual internet users is a far cry from the needs of petroleum engineers working with real-time drilling operations. Therefore, simulating network performance is subject to the users’ needs, which can change at any time, making it more difficult to actively predict.
Not only that, but, modeling tools are costly to scale up.
“The cost difference between simulating a relatively simple residential network model and an AT&T internet backbone is astronomical,” Grammer said.
Thanks to algorithms and hardware improvements, vendors like Forward Enterprise are starting to scale these computations to support networks of hundreds of thousands of devices.
Testing new configurations The crowning use case for networking digital twins is evaluating different configuration settings before updating or installing new equipment. Digital twins can help assess the likely impact of changes to ensure equipment works as intended.
In theory, these could eventually make it easier to assess the performance impact of changes. However, Mike Toussaint, senior director analyst at Gartner, said it may take some time to develop new modeling and simulation tools that account for the performance of newer chips.
One of the more exciting aspects is that these modeling and simulation capabilities are now being integrated with IT automation. Ernest Lefner, chief product officer at Gluware, which supports intelligent network process automation, said this allows engineers to connect inline testing and simulation with tools for building, configuring, developing and deploying networks.
“You can now learn about failures, bugs, and broken capabilities before pushing the button and causing an outage. Merging these key functions with automation builds confidence that the change you make will be right the first time,” he said.
Wireless analysis Equipment vendors such as Juniper Networks are using artificial intelligence (AI) to incorporate various kinds of telemetry and analytics to automatically capture information about wireless infrastructure to identify the best layout for wireless networks. Ericsson has started using Nvidia Omniverse to simulate 5G reception in a city.
Nearmap recently partnered with Digital Twin Sims to create dynamically updated 5G coverage maps into 5G planning and operating systems.
Security and compliance Grammer said digital twins could help improve network heuristics and behavioral analysis aspects of network security management. This could help identify potentially unwanted or malicious traffic, such as botnets or ransomware. Security companies often model known good and bad network traffic to teach machine learning algorithms to identify suspicious network traffic.
According to Lefner, digital twins could model real-time data flows for complex audit and security compliance tasks.
“It’s exciting to think about taking complex yearly audit tasks for things like PCI compliance and boiling that down to an automated task that can be reviewed daily,” he said.
Coupling these digital twins with automation could allow a step change in challenging tasks like identifying up-to-date software and remediating newly identified vulnerabilities. For example, Gluware combines modeling, simulation and robotic process automation (RPA) to allow software robots to take actions based on specific network conditions.
Peyman Kazemian, cofounder of Forward Networks, said they are starting to use digital twins to model network infrastructure. When a new vulnerability is discovered in a particular type of equipment or software version, the digital twins can find all the hosts that are reachable from less trustworthy entry points to prioritize the remediation efforts.
Cross-domain collaboration Network digital twins today tend to focus on one particular use case, owing to the complexities of modeling and transforming data across domains. Teresa Tung, cloud first chief technologist at Accenture, said that new knowledge graph techniques are helping to connect the dots. For example, a digital twin of the network can combine models from different domains such as engineering R&D, planning, supply chain, finance and operations.
They can also bridge workflows between design and simulations. For example, Accenture has enhanced a traditional network planner tool with new 3D data and an RF simulation model to plan 5G rollouts.
Connect2Fiber is using digital twins to help model its fiber networks to improve operations, maintenance and sales processes.
Nearmap’s drone management software automatically inventories wireless infrastructure to improve network planning and collaboration processes with asset digital twins.
These efforts could all benefit from the kind of innovation driven by building information models (BIM) in the construction industry. Jacob Koshy, information technology and communications associate, Arup, an IT services firm, predicts that comparable network information models (NIM) could have a similarly transformative role in building complex networks.
For example, the RF propagation analysis and modeling for coverage and capacity planning could be reused during the installation and commissioning of the system. Additionally, integrating the components into a 3D modeling environment could improve collaboration and workflows across facilities and network management teams.
Emerging digital twin APIs from companies like Mapped , Zyter and PassiveLogic might help bridge the gap between dynamic networks and the built environment. This could make it easier to create comprehensive digital twins that include the networking aspects involved in more autonomous business processes.
The future is autonomous networks Grammer believes that improved integration between digital twins and automation could help fine-tune network settings based on changing conditions. For example, business traffic may predominate in the daytime and shift to more entertainment traffic in the evening.
“With these new modeling tools, networks will automatically be able to adapt to application changes switching from a business video conferencing profile to a streaming or gaming profile with ease,” Grammer said.
How digital twins will optimize network infrastructure The most common use case for digital twins in network infrastructure is testing and optimizing network equipment configurations. Down the road, they will play a more prominent role in testing and optimizing performance, vetting security and compliance, provisioning wireless networks and rolling out large-scale IoT networks for factories, hospitals and warehouses.
Experts also expect to see more direct integration into business systems such as enterprise resource planning (ERP) and customer relationship management (CRM) to automate the rollout and management of networks to support new business services.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,079 | 2,022 |
"Gartner predicts 'digital twins of a customer' will transform CX | VentureBeat"
|
"https://venturebeat.com/ai/gartner-predicts-digital-twins-of-a-customer-will-transform-cx"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Gartner predicts ‘digital twins of a customer’ will transform CX Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Digital twins of physical products and infrastructure are already transforming how companies design and manufacture products, equipment and infrastructure. In its latest Immersive Hype Cycle , Gartner predicts that digital twins of a customer (DToC) could transform the way enterprises deliver experiences. Simulating a customer experience (CX) is a bit more nuanced than a machine — and there are privacy considerations to address, not to mention the creepiness factor. Though if done right, Gartner predicts the DToC will drive sales while delighting customers in surprising ways.
Gartner has a nuanced view of the customer, including individuals, personas, groups of people and even machines. It is worth noting that many enterprise technologies are moving toward this more comprehensive vision. Customer data platforms consolidate a data trail of all aspects of customer interaction. Voice of the customer tools help capture data from surveys, sensors and social media. While, customer journey mapping and customer 360 tools analyze how customers interact with brands across multiple apps and channels.
The critical innovation point of DToC is that it helps contextualize data to help understand what customers really need to improve the overall experience, Gartner VP analyst Michelle DeClue-Duerst told VentureBeat. For example, a hotel with knowledge about a customer’s gluten allergy might identify nearby gluten-free restaurants and only stock the minibar with snacks the customer will enjoy.
When done right, DToCs can help business teams design ways to serve or capture customers and facilitate new data-driven business models. They will also improve customer engagement, retention and lifetime failure.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Developing core capabilities Gartner notes that DToC implementations are still embryonic, with about 1-5%penetration of the target audience. At the same time, enterprises have been busy finding ways to get the most value from their investment using various marketing analytics tools.
Subha Tatavarti, CTO, Wipro, told VentureBeat there have been several important milestones in using tools for simulating customers to improve experiences. The most notable have been the ability to define customer experience transformation objectives, including the capability to identify and assess data assets, personas and processes and tools for building and testing behavior models. New ModelOps approaches for integrating monitoring and enhancing the models are also advancing the field.
“A new generation of recommendation systems based on intention, context and anticipated needs is a very exciting development in combined modeling and simulation capabilities,” Tatavarti said. “Personalized learning and hyper-personalized products are great advancements and personalized healthcare will have critical impacts on that industry.” Enterprises are taking advantage of new identity resolution capabilities that assemble pieces of data to create a holistic view of the customer. This stitching can help a company understand what an individual customer buys, how frequently they purchase, how much they spend, how often they visit a website and more.
“Without identity resolution, the company may have to rely on only some of the attributed data sources to fill out the digital persona, meaning the simulation would be somewhat inaccurate,” said Marc Mathies, senior vice president of platform evolution at Vericast, a marketing solutions company.
Bumpy road Enterprises will need to address a few challenges to scale these efforts. Gartner observed that privacy and security concerns could lengthen the time it takes DToCs to mature and increase regulatory risks. Organizations must also build teams familiar with machine learning and simulation techniques.
Tatavarti said the most difficult obstacles are the quality and availability of customer data from physical and digital interaction and data sharing between multiple organizations. These challenges will also involve privacy considerations and the ability to connect physical systems and virtual models without affecting the experience or performance. Teams also need to ensure the accuracy of the models and eliminate bias.
Bill Waid, chief product and technology officer at FICO, a customer analytics leader, told VentureBeat that another challenge in implementing digital twins for customer simulation is the impact of localized versus global simulation. Frequently, teams only simulate subsegments of the decision process to improve scale and manageability. Enterprises will need to compose these digital twins for more holistic and reusable simulations.
Organizations will also need to be transparent.
“Initially, it will be hard to convince customers they need a digital twin that your brand stores and that the customer should help create it to improve their experience,” said Jonathan Moran, head of MarTech solutions marketing at SAS.
Building the right foundation Industry leaders have many ideas about how enterprises can improve these efforts.
Unlike digital twins in areas like manufacturing, customer behavior shifts quickly and often, Karl Haller, partner at IBM Consulting said it is essential to implement ongoing optimization and calibration to analyze the simulation results and determine ways to improve the performance of the models. He also recommends narrowly defining the focus of a customer simulation to optimize outcomes and reduce costs. Innovations in natural language processing, machine learning, object andvisual recognition, acoustic analytics and signal processing could help.
Moran recommends enterprises develop synthetic data generation expertise to build and augment virtual customer profiles. These efforts could help expand data analytics and address privacy considerations.
Mark Smith, vice president of digital engagement solutions at CSG, recommends business to overlay voice of customer data with behavioral data captured through customer journey analytics. This modeling method is typically the fastest and most accurate route to understanding the peaks and valleys of the customer journey.
“Comparing customers’ actual actions with their reported lived experience data unearths disconnects between customers’ perception of the experience and brands’ analysis of their own offerings,” Smith said.
A mixed future Eventually, enterprises will need to find ways to optimize for profits along with customer well-being. Eangelica Germano Aton, product owner at a conversational intelligence platform, Gryphon AI, predicts that things will initially get worse for people as machines get better at predicting choices that reduce emotional well-being.
“I think it will take a customer-driven or a bottom-up revolution and rejection of the current model before a more sophisticated and genuinely humanist AI can emerge that doesn’t maximize such a shallow objective function as profit,” Germano Aton said.
Others are more optimistic.
“Over time, it will be possible to use a deep understanding of the customer in a way that creates value for the consumer, the brand and the employees of the brand,” said Chris Jones, chief product officer at Amperity, a CDP platform. “One of the things we are observing is the ability of these capabilities to deepen the human connection between brands and the customers they serve by empowering employees across the brand to truly see their customer and provide the most personalized experience possible.” In the long run, digital twin capabilities could become embedded into marketing and customer experience automation tools.
“As digital twin work moves more into marketing and CX in five to ten years, I think we will see solutions with more simulation capabilities built in,” Moran said. “Any type of marketing KPI and expected results will be simulated within the tool. Vendors already have some simulation capabilities for optimization, reinforcement learning and predictions, but I think this will start to increase even more in the coming years.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,080 | 2,022 |
"Bosch's new partnership aims to explore quantum digital twins | VentureBeat"
|
"https://venturebeat.com/ai/bosch-building-quantum-digital-twin"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Bosch’s new partnership aims to explore quantum digital twins Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Industrial giant Bosch has partnered with Multiverse Computing , a Spanish quantum software platform, to integrate quantum algorithms into digital twin simulation workflows. Bosch already has an extensive industrial simulation practice that provides insights across various business units. This new collaboration will explore ways quantum-inspired algorithms and computers could help scale these simulations more efficiently.
Bosch is exploring quantum computing and simulation as part of its broader Industry 4.0 efforts focused on increasing data collection, analytics and simulation across its 240 plants. These efforts have connected 120,000 machines used in manufacturing and over 250,000 devices into new digital twin workflows.
Multiverse is building a quantum software platform that works across different quantum computing technologies. Although most quantum hardware is still immature, the company has already discovered several quantum-inspired algorithms that perform better than conventional ones and have made it easier to deploy both across current supercomputers and different quantum hardware. The two companies hope to see the initial results of these new quantum and quantum-inspired algorithms working in Bosch’s Madrid facility later this year, which could scale across its manufacturing facilities in the future.
Accelerating industrial simulation Oscar Hernández Caballer, the senior manager of digitalization and Industry 4.0 Bosch Plant Coordination, at the Bosch plant in Madrid, told VentureBeat that the company had come a long way toward fully digitizing its facilities. His team has been working on finding ways to use this data to make decisions and control processes more efficiently with very short reaction times. For example, they have reduced the cost of production scrap by more than 20% in the last three years.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Thanks to digitization, we can identify the causes of production problems much better and faster and establish corrective measures,” he said.
One of the most promising use cases for the new quantum algorithms is creating better machine learning models more quickly. Hernández Caballer said quantum computing shows tremendous promise in use cases with many combinations of parameters and materials. This early research could give Bosch a leg up in taking advantage of these new systems to improve machine learning and simulation.
Focus on business supremacy Multiverse was founded in 2019 in a WhatsApp group by a small team of physicists and business experts. Enrique Lizaso Olmos, founder and CEO of Multiverse Computing, told VentureBeat that the company decided to focus on quantum computing in finance and published a seminal paper that caught the attention of large customers such as Crédit Agricole, BBVA, Caixabank, Ally Bank and Bank of Canada. Other paying customers came to Multiverse for help with complex problems in energy manufacturing, chemistry, life sciences, engineering and defense.
Most of the industry has focused on achieving quantum supremacy that demonstrates ways that quantum hardware can outperform conventional computers. Olmos said this distracts from the potential for early quantum hardware to deliver real business value today.
“So the real, difficult question is what can you do with the current, small, noisy quantum computers now that’s better than some other tools that your customers are using?” he asked. “This is the hard question that most of the companies in the quantum software side, coming from pure physics, don’t know how to answer. And this is where we shine.” Multiverse focuses only on those problems where they believe that quantum or quantum-inspired algorithms such as tensor networks or a combination of the two will beat existing business tools. Olmos observed that critics have argued these tools are not faster than a supercomputer, which is technically accurate. However, Multiverse simplifies supercomputer workflows for business users to develop, deploy and manage next-generation algorithms for business use cases such as portfolio optimization or regular machine learning training.
“We believe business supremacy will be here when quantum supremacy for business arrives, but again the challenge will be to beat your competitors,” said Olmos.
The company has developed algorithms that speed AI training by more than one hundred times while reducing energy and memory use eighty times. Other companies focused on applying quantum computing to business applications include Google spinoff Sandbox AQ and Zapata.
Olmos said a key differentiator is that Multiverse combines quantum and quantum-inspired solutions that the others are not currently pursuing.
Multiverse also offers Singularity, an enterprise-grade Software-as-a-Service platform that supports the development of apps without the need for quantum expertise.
“Singularity is quantum for the masses, not just for the physicists inside corporations,” Olmos said.
They have already developed low-code templates that support over 50 business use cases.
Multiverse raised $10.25 million (€10 million) last year from investors, including Quantonation, JME, Inveready, EASO VC, SPRI and Mondragon VC. In addition, the European Commission awarded the firm $2.6 million (€2.5 million) in grants and €10M in additional equity last year through the EIC Horizon Europe program.
Bosch’s partnership with Multiverse Computing is an example of how many legacy companies are exploring quantum computing today to prepare for more capable hardware. Hernández Caballer said that his team is trying to anticipate the future so that they don’t get left behind.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,081 | 2,021 |
"Qualcomm Smart Cities Accelerator Program expands to over 400 members | VentureBeat"
|
"https://venturebeat.com/2021/09/28/qualcomm-smart-cities-accelerator-program-expands-to-over-400-members"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Qualcomm Smart Cities Accelerator Program expands to over 400 members Share on Facebook Share on X Share on LinkedIn Qualcomm is focused on IoT and smart cities.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Qualcomm said its smart city and internet of things (IoT) ecosystem now has more than 400 participating tech companies as the company enters the third year of its Qualcomm Smart Cities Accelerator Program.
The group includes system integrators, hardware and software providers, cloud solution providers, design and manufacturing companies, and more — all focused on delivering smart end-to-end solutions for modern cities, spaces, and enterprises.
Qualcomm made a number of IoT announcements today at its third annual Smart Cities Accelerate event in La Jolla, California. The company claimed that it continues to lead the IoT ecosystem with the growth of the Qualcomm Smart Cities Accelerator Program and momentum of the Qualcomm IoT Services Suite.
A comprehensive strategy of delivering IoT-as-a-service with a massive ecosystem is helping industries and cities adopt end-to-end smart solutions, enabling easier, faster, and more cost-effective management and deployment of smart spaces across industries including education, logistics, health care, transportation, inspection, energy, agriculture, and more.
Above: Qualcomm is going wide and deep on smart cities.
Sanjeet Pandit, Qualcomm’s global head of smart cities, said in an interview with VentureBeat that the company is enabling IoT-as-a-service through its services suite.
“This was an extremely fragmented space, and so the first thing we did was create a smart city accelerator program,” Pandit said. “Apologies for the analogy. But I would call this the Match.com of smart cities, where we bring everybody under one roof. And the ecosystem is nurtured by getting everybody to know everyone and to cooperate with each other.” The event featured a talk with Earvin “Magic” Johnson and city mayors — including Tishaura Jones of St. Louis, Sam Liccardo of San Jose, and Francis Suarez of Miami — coming together to discuss their visions for smart cities, the technologies needed for post-pandemic communities, closing the digital divide, and working in partnership with companies like Qualcomm.
Above: Qualcomm’s smart cities approach to construction.
Pandit said Qualcomm’s IoT-as-a-service offering has grown to 30 vertical markets — ranging from construction to warehouse management, first-responder services, inspections, and wildfire monitoring — in less than a year. The company helps cities implement the technologies in their markets by gathering all of the vendors and offering a solution for them to implement.
“We have everything from warehousing to location-as-a-service to smarter hospitals to oil and natural gas to agriculture to inspection,” Pandit said. “Everything is out of the box with hardware, software, dashboards — everything is integrated with commercially available devices ready to deploy for monetization.” Qualcomm’s aim is to show it can lead the digital transformation of industries with a differentiated approach that leverages the growing number of smart devices that make up the connected intelligent edge. Qualcomm said Booz Allen Hamilton has joined the program as a system integrator for ecosystem members.
The companies have collaborated on projects such as smart marine bases and naval fleet carriers, using vision intelligence and AI-infused cameras. Qualcomm said the event had more than 1,000 registrants.
“We have now taken the fragmentation out of the equation with solutions that are plug and play,” Pandit said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,082 | 2,021 |
"In a year of major shifts, the self-driving car market is consolidating | VentureBeat"
|
"https://venturebeat.com/2021/05/02/in-a-year-of-major-shifts-the-self-driving-car-market-is-consolidating"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages In a year of major shifts, the self-driving car market is consolidating Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
News broke this week that Woven Planet, a Toyota subsidiary, will acquire Level 5, Lyft’s self-driving unit , for $550 million. The transaction, which is expected to close in Q3 2021, includes $200 million paid upfront and $350 million over a five-year period.
Toyota will gain full control of Lyft’s technology and its team of 300. Lyft will remain in the game as a partner to Toyota’s self-driving efforts, providing its ride-hailing service as a platform to commercialize the technology when it comes to fruition.
The Toyota-Lyft deal is significant because it comes on the back of a year of major shifts in the self-driving car industry. These changes suggest the autonomous vehicle market will be dominated by a few wealthy companies that can withstand huge costs and very late return on investment in a race that will last more than a few years.
The costs of self-driving car technology Costs remain a huge barrier for all self-driving car projects. The main type of software powering self-driving cars is deep reinforcement learning , which is currently the most challenging and expensive branch of artificial intelligence. Training deep reinforcement learning models requires expensive compute resources. This is the same technology used in AI systems that have mastered complicated games such as Go, StarCraft 2 , and Dota 2.
Each of those projects cost millions of dollars in hardware resources alone.
Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! However, in contrast to game-playing AI projects, which last between a few months to a few years, self-driving car projects take several years—and maybe above a decade—before they reach desirable results. Given the complexities and unpredictability of the real world, designing and testing the right deep learning architecture and reward, state, and action space for self-driving cars is very difficult and costly. And unlike games, the reinforcement learning models used in driverless cars need to gather their training experience and data from the real world, which is fraught with extra logistical, technical, and legal costs.
Some companies develop virtual environments to complement the training of their reinforcement learning models. But those environments come with their own development and computing costs and aren’t a full replacement for driving in the real world.
Equally costly is the talent needed to develop, test, and tune the reinforcement learning models used in driverless cars.
All of these expenses put a huge strain on the budgets of companies running self-driving car projects. According to reports, the sale of Level 5 will cut Lyft’s net annual operating costs by $100 million. This will be enough to make the company profitable. Uber, Lyft’s rival, also sold its driverless car unit , Advanced Technologies Group (ATG), in December because it was losing money.
So far, no company has been able to develop a profitable self-driving car program. Waymo, Alphabet’s self-driving subsidiary, has launched a fully driverless ride-hailing service in parts of Arizona. But it is still losing money on the project and is in the process of expanding the service to other cities in the U.S.
Driverless cars are not ready for primetime Not long ago, it was generally believed that self-driving cars were a solved problem and it would only take a couple of years of development and training to get them ready for production. Several companies had hailed launching robo-taxi services by 2018, 2019, and 2020. A few carmakers promised to make full self-driving cars available to consumers.
But we’re in 2021, and it’s clear that the technology is still not ready. Our deep learning algorithms are not on par with the human vision system.
That’s why many companies need to use complementary technologies such as lidars, radars, and other sensors. Added to that is precision mapping data that provide the car with exact details of what it should expect to see in its surroundings. But even with all these props, we haven’t reached self-driving technology that can run on any road, weather, and traffic condition.
The legal infrastructure for self-driving cars is also not ready. We still don’t know how to regulate roads shared by human- and AI-driven cars, how to determine culpability in accidents caused by self-driving cars, and many more legal and ethical challenges that arise from removing humans from behind steering wheels.
In many ways, the self-driving car industry is reminiscent of the early decades of AI : The technology always seems to be right around the corner. But the end goal seems to be receding as we continue to approach it.
The self-driving car market is consolidating What does this all mean for companies that are running self-driving car projects? Many more years and billions of dollars’ worth of investment in developing a technology that doesn’t seem to get off the ground.
This will make it very difficult for companies that don’t have a highly profitable business model to engage in the market. And this includes ride-hailing services , which are under extra pressure due to the coronavirus pandemic. Startups that are living on VC money will also be hard-pressed to deliver on timelines that are shaky at best.
Lyft’s sale to Toyota is part of a growing trend of self-driving car projects and startups gravitating toward deep-pocketed automotive or tech giants.
Waymo will continue to operate and push forward for self-driving technology because its parent company has a long history of funding moonshot projects, most of which never reach profitability.
Amazon acquired Zoox last year.
Apple is considering creating its own electric self-driving car. And Microsoft is casting a wide net in the market, investing in several self-driving car projects at the same time.
Traditional carmakers are also becoming big players in the market. Argo AI is backed by Ford and Volkswagen, both of whom have a major stake in the future of self-driving cars. General Motors owns Cruise. Hyundai has poured $2 billion into a joint self-driving car venture with green tech startup Aptiv. And Aurora , the company that acquired Uber’s ATG, is developing partnerships with several automakers.
As the self-driving car industry shifts from hype to disillusionment, the market is slowly consolidating into a few very big players. Startups will be acquired, and we can probably expect one or more mergers between big tech and big automotive. This is going to be a race between those who can withstand the long haul.
This story originally appeared on Bdtechtalks.com.
Copyright 2021 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,083 | 2,017 |
"How open source software will drive the future of auto innovations | VentureBeat"
|
"https://venturebeat.com/2017/05/22/how-open-source-software-will-drive-the-future-of-auto-innovations"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How open source software will drive the future of auto innovations Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Automotive companies are shifting from bending metal to bending bits. Soon they will be offering software and services to complement their manufactured metal.
As these companies become software-driven, open source will become a staple to drive innovation faster and more reliably. Today’s cloud is powered by open source software: 78 percent of businesses run open source software in some form. With the convergence of automobiles and the cloud (supporting autonomous systems and connectivity), it’s quite clear this open source paradigm that took over the cloud will take over the automobile.
This future of mobility includes the convergence of automotive hardware and software-driven cloud solutions. Open source will be at the core of this transformation and will drive innovation faster. Soon we will see Ford, GM, Fiat Chrysler, BMW, and other manufacturers launching their own open source initiatives.
Open-sourcing parts of your automobile Whether it be navigation, music and media, or mobile phone support, you might be interfacing with features built on top of open source software already.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Genivi is an open source framework for in-vehicle infotainment launched in 2009 with founding members BMW, GM, Intel, and Delphi. It launched with a goal of “driving innovation” to “reduce time-to-market and total cost of ownership.” This platform gives car makers more impact and leverage over the features available in an in-car experience. Automotive companies can reduce costs and enable richer experiences by leveraging an open source project like Genivi. This allows them to focus on what differentiates their own product.
Your engine, transmission, airbags, anti-lock brakes, and cruise control are all connected via a system called the CAN bus.
This protocol powers the backbone network in a vehicle. Like the HTTP protocol that powers the internet, systems can be built on top of the CAN bus to enable entirely new applications, like cars that drive themselves. In the automotive world, examples include: PolySync developed an open source car control project detailing the conversion of a vehicle into an autonomous driving vehicle.
George Hotz is giving away the code behind his self-driving car project , an open source alternative to Tesla’s Autopilot.
ROS, a robot operating system , is enabling R&D teams at automotive companies to quickly develop and prototype autonomous vehicles and sensor-rich vehicles.
Just like much of the web is built on Linux, much of the autonomous future will be built on open source projects. Today, it already seems clear that ROS is one of those emerging open source platforms.
The blueprints to design and build electric vehicles and transportation services One of the more audacious open source projects is OSVehicle , founded with a mission “to democratize mobility by enabling businesses and startups to design, prototype, and build custom electric vehicles and transportation services.” Renault’s open source mass market vehicle platform was the first major automotive company to leverage this product. This was the world’s first open source mass market vehicle platform.
Other OSVehicle projects include : BusyBee , the first road-legal city car built on the open platform FabCar , a vehicle showcased at Fab10 in Barcelona that can be built inside a FabLab SPA’s Luxury EV , from a historical Italian brand and made of new high-tech materials Maker’s cars , vehicles created by hobbyists with local materials such as fabric and wood Nika , the first connected car made specifically to enable app development The future of automotive is open source The future of mobility will encompass services offered for getting around more freely. Automotive companies will shift from manufacturing steel to serving up bits. Software and data will drive this core differentiator, enabling new services and seamless experiences.
The entire internet infrastructure changed over the late ’90s and early ’00s, leveraging open source software. Those proprietary systems opened up, making it easier and cheaper to build websites. Content management systems became open, allowing publishers to focus on their core differentiator: their content. As a result, we saw a proliferation of amazing websites, apps, and tools online.
The automotive industry and emerging mobility companies will see the same result over the coming decade. It won’t be limited to infotainment, autonomous systems, or vehicle design. Open source will enhance every aspect of a vehicle in the coming decade. Companies that embrace this change will drive innovation faster. They will be able to shift to enabling new services and experiences. This will drive up their competitive advantages while reducing costs associated with running commodity parts of their business.
Ted Serbinski is the managing director of Techstars Mobility , a startup accelerator.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,084 | 2,021 |
"Cadence Design Systems launches Cerebrus machine learning for chip design | VentureBeat"
|
"https://venturebeat.com/2021/07/22/cadence-design-systems-launches-cerebrus-machine-learning-for-chip-design"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cadence Design Systems launches Cerebrus machine learning for chip design Share on Facebook Share on X Share on LinkedIn Cadence Design Systems' Cerebrus tool uses machine learning in chip design.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
It was only a matter of time before machine learning transformed the world of chip design.
Cadence Design Systems , which makes design tools that engineers use to create chips, is using it to make chip engineers far more productive with its Cerebrus Intelligent Chip Explorer machine learning tool.
Automating chip design (electronic design automation, or EDA) has been evolving for decades, with a hierarchy of tools operating at different levels of abstraction. Cadence got started in 1988 with the goal of using the benefits of computing to design the next generation of computing chips. But engineers have found it increasingly difficult to keep up with the intricate designs for chips that have billions of on-off switches, dubbed transistors. The process of design has become like trying to keep track of all of the ants on the planet.
With machine learning, Cadence Design Systems has been able to add an extra layer of automation on top of the design automation tools engineers have been using for many years, Kam Kittrell, senior product management group director in the Digital & Signoff Group at Cadence, said in an interview with VentureBeat.
The results are pretty awesome. With machine learning, the company can get 10 times better productivity per engineer using the design tools. And they can get 20% better power, performance, and chip area improvements. That’s a huge gain that could ultimately make each chip more affordable, reliable, and faster than it otherwise would have been, Kittrell said. That could mean billions of dollars saved.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! This kind of productivity gain is necessary as Moore’s Law, the metronome of the chip industry, has begun to slow. The law predicts that chip performance will double every couple of years, but lately the gains from moving to a new generation of manufacturing have been limited, as we’re entering miniaturization technology on the atomic level and running into barriers from the laws of physics.
Meanwhile, with billions of transistors per chip, engineers who worked on chips a few generations ago, like 28-nanometer chips, can barely function with the requirements for chip design of today’s 7-nanometer chips, where the width between circuits is seven billionths of a meter.
“These are three-dimensional puzzles,” Kittrell said.
Enter machine learning Above: Cadence’s headquarters in San Jose, California.
With compounding pressure to deliver new chips more quickly than ever before, engineers have to become increasingly efficient. Machine learning offers an answer, Kittrell said.
Just as today’s “intelligent” consumer devices provide users with information at their fingertips, machine learning automates chip design processes so engineers can complete projects “intelligently,” faster and with fewer mistakes. Machine learning also creates a level engineering playing field, whether you’re an established semiconductor player, a company outside the industry that has brought chip design in-house, or a small startup.
“There have been some refinements over time for chip design, but it’s been basically the same way. And so it’s been getting more and more complicated for an engineer to take a chip through to completion,” Kittrell said. “For example, someone who may be very good at building chips at 28 nanometers will have a huge learning curve to do a five-nanometer chip today. The technology has changed so much.” Cerebrus doesn’t replace the flow of tools and the way humans interact with the tools. But it works as a driver’s assistant, Kittrell said.
“Power, performance, and area are always the key objectives that anyone drives whenever they’re making a chip,” Kittrell said. “It has to be manufacturable. But after that, there’s a squeeze on power and performance and area. And so we use reinforcement learning in our Cerebrus tool. It controls the tool and does experimentation for the engineer in order to find the best solution.” A helper Above: Cadence Design Systems was founded in 1988.
Machine learning isn’t threatening the jobs of chip engineers, who are more sought-after than ever, Kittrell said. Rather than replacing them, machine learning has become an engineer’s “helper,” reducing the learning ramp-up time and handling many traditional engineering tasks automatically.
“This is an example where it improves the productivity of the engineer while also delivering better power performance,” Kittrell said.
Cerebrus uses unique machine learning technology to drive the Cadence RTL-to-signoff implementation flow. Here the engineer designs on a level of abstraction where he or she can understand the logical flow of electrons through a chip. Cadence’s existing, earlier tools would take the logical flow and convert it to the physical layout of the chip. The logical level is the Register Transfer Level, and it is converted to the final sign-off tools and actual placement and routing of wiring throughout a chip. There are often multiple ways to implement a logical design in a physical layout, and optimizing that can save a lot of material, energy, and costs.
An engineer can handle this part of the design on one pass. But Cerebrus can take another run through it and improve the results. The engineer delivers the final design in a database format dubbed GDSII, and then it’s off to manufacturing.
“There’s always a push to find a way to optimize for power, performance, and area. This can take a lot of time in the design process. And this is where Cerebrus can help. It can take a list of anything within the RTL to GDSII and do experiments.” “You don’t have to spend a lot of time training a model upfront in order to get started. Right from the beginning, Cerebrus can start doing searches based on your vector and your design, and within a few runs [it] can find a better solution,” Kittrell said.
From chip design to your living room Above: Nvidia’s Grace CPU for datacenters is named after Grace Hopper.
Once the chip designer is done, they hand the design over to the factory engineers. Inside a chip factory, there are hundreds of steps that are like an assembly line to build a chip one layer of material at a time. Robotics handle a lot of the tasks, but machine learning has also been applied to the giant hardware machines that pattern materials on top of chips. This is what it takes to get the latest Nintendo Switch or PlayStation 5 into the hands of the gamer in your family.
The results are as previously mentioned, and they can help many different chip applications in consumer, hyperscale computing, 5G communications, automotive, and mobile design, Cadence said. It scales engineering resources to handle more projects or bigger ones.
Cadence has already deployed the tool to over a dozen customer locations across all of those applications, Kittrell said. Now the company is making the tool available to all customers.
Cerebrus is part of the broader Cadence digital full flow of tools. The machine learning can reinforce engineers, considering solutions that humans might not explore. It also allows design learnings to be automatically applied to future designs, and it offloads work from humans. It enables distributed computing, with better on-premises or cloud-based designs.
Renesas customer Satoshi Shibatani said in a statement that automated design flow optimization is critical for making products quickly, and he said Cerebrus has improved design performance by more than 10%. So his company is adopting the technology for its latest projects. Samsung VP of design technology Sangyun Kim said Samsung Foundry used the Cerebrus tool and saw an 8% power reduction in its chip and 50% better timing, which improved overall performance.
It’s taken a while for machine learning to impact chip design, but it’s hard to find an industry that it won’t impact.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,085 | 2,021 |
"Cerebras launches new AI supercomputing processor with 2.6 trillion transistors | VentureBeat"
|
"https://venturebeat.com/2021/04/20/cerebras-systems-launches-new-ai-supercomputing-processor-with-2-6-trillion-transistors"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cerebras launches new AI supercomputing processor with 2.6 trillion transistors Share on Facebook Share on X Share on LinkedIn Cerebras' CS-2 wafer-size chip.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Cerebras Systems has unveiled its new Wafer Scale Engine 2 processor with a record-setting 2.6 trillion transistors and 850,000 AI-optimized cores. It’s built for supercomputing tasks, and it’s the second time since 2019 that Los Altos, California-based Cerebras has unveiled a chip that is basically an entire wafer.
Chipmakers normally slice a wafer from a 12-inch-diameter ingot of silicon to process in a chip factory. Once processed, the wafer is sliced into hundreds of separate chips that can be used in electronic hardware.
But Cerebras, started by SeaMicro founder Andrew Feldman, takes that wafer and makes a single, massive chip out of it. Each piece of the chip, dubbed a core, is interconnected in a sophisticated way to other cores. The interconnections are designed to keep all the cores functioning at high speeds so the transistors can work together as one.
Twice as good as the CS-1 Above: Comparing the CS-1 to the biggest GPU.
In 2019, Cerebras could fit 400,000 cores and 1.2 billion transistors on a wafer chip, the CS-1. It was built with a 16-nanometer manufacturing process. But the new chip is built with a high-end 7-nanometer process, meaning the width between circuits is seven billionths of a meter. With such miniaturization, Cerebras can cram a lot more transistors in the same 12-inch wafer, Feldman said. It cuts that circular wafer into a square that is eight inches by eight inches, and ships the device in that form.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We have 123 times more cores and 1,000 times more memory on chip and 12,000 times more memory bandwidth and 45,000 times more fabric bandwidth,” Feldman said in an interview with VentureBeat. “We were aggressive on scaling geometry, and we made a set of microarchitecture improvements.” Now Cerebras’ WSE-2 chip has more than twice as many cores and transistors. By comparison the largest graphics processing unit (GPU) has only 54 billion transistors — 2.55 trillion fewer transistors than the WSE-2. The WSE-2 also has 123 times more cores and 1,000 times more high performance on-chip high memory than GPU competitors. Many of the Cerebras cores are redundant in case one part fails.
“This is a great achievement, especially when considering that the world’s third largest chip is 2.55 trillion transistors smaller than the WSE-2,” said Linley Gwennap, principal analyst at The Linley Group, in a statement.
Feldman half-joked that this should prove that Cerebras is not a one-trick pony.
“What this avoids is all the complexity of trying to tie together lots of little things,” Feldman said. “When you have to build a cluster of GPUs, you have to spread your model across multiple nodes. You have to deal with device memory sizes and memory bandwidth constraints and communication and synchronization overheads.” The CS-2’s specs Above: TSMC put the CS-1 in a chip museum.
The WSE-2 will power the Cerebras CS-2, the industry’s fastest AI computer, designed and optimized for 7 nanometers and beyond. Manufactured by contract manufacturer TSMC, the WSE-2 more than doubles all performance characteristics on the chip — the transistor count, core count, memory, memory bandwidth, and fabric bandwidth — over the first generation WSE. The result is that on every performance metric, the WSE-2 is orders of magnitude larger and more performant than any competing GPU on the market, Feldman said.
TSMC put the first WSE-1 chip in a museum of innovation for chip technology in Taiwan.
“Cerebras does deliver the cores promised,” Patrick Moorhead, an analyst at Moor Insights & Strategy. “What the company is delivering is more along the lines of multiple clusters on a chip. It does appear to give Nvidia a run for its money but doesn’t run raw CUDA. That has become somewhat of a de facto standard. Nvidia solutions are more flexible as well as they can fit into nearly any server chassis.” With every component optimized for AI work, the CS-2 delivers more compute performance at less space and less power than any other system, Feldman said. Depending on workload, from AI to high-performance computing, CS-2 delivers hundreds or thousands of times more performance than legacy alternatives, and it does so at a fraction of the power draw and space.
A single CS-2 replaces clusters of hundreds or thousands of graphics processing units (GPUs) that consume dozens of racks, use hundreds of kilowatts of power, and take months to configure and program. At only 26 inches tall, the CS-2 fits in one-third of a standard datacenter rack.
“Obviously, there are companies and entities interested in Cerebras’ wafer-scale solution for large data sets,” said Jim McGregor, principal analyst at Tirias Research, in an email. “But, there are many more opportunities at the enterprise level for the millions of other AI applications and still opportunities beyond what Cerebras could handle, which is why Nvidia has the SuprPod and Selene supercomputers.” He added, “You also have to remember that Nvidia is targeting everything from AI robotics with Jenson to supercomputers. Cerebras is more of a niche platform. It will take some opportunities but will not match the breadth of what Nvidia is targeting. Besides, Nvidia is selling everything they can build.” Lots of customers Above: Comparing the new Cerebras chip to its rival, the Nvidia A100.
And the company has proven itself by shipping the first generation to customers. Over the past year, customers have deployed the Cerebras WSE and CS-1, including Argonne National Laboratory; Lawrence Livermore National Laboratory; Pittsburgh Supercomputing Center (PSC) for its Neocortex AI supercomputer; EPCC, the supercomputing center at the University of Edinburgh; pharmaceutical leader GlaxoSmithKline; Tokyo Electron Devices; and more. Customers praising the chip include those at GlaxoSmithKline and the Argonne National Laboratory.
Kim Branson, senior vice president at GlaxoSmithKline, said in a statement that the company has increased the complexity of the encoder models it generates while decreasing training time by 80 times. At Argonne, the chip is being used for cancer research and has reduced the experiment turnaround time on cancer models by more than 300 times.
“For drug discovery, we have other wins that we’ll be announcing over the next year in heavy manufacturing and pharma and biotech and military,” Feldman said.
The new chips will ship in the third quarter. Feldman said the company now has more than 300 engineers, with offices in Silicon Valley, Toronto, San Diego, and Tokyo.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,086 | 2,021 |
"Synopsys: 84% of codebases contain an open source vulnerability | VentureBeat"
|
"https://venturebeat.com/2021/04/13/synopsys-84-of-codebases-contain-an-open-source-vulnerability"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Synopsys: 84% of codebases contain an open source vulnerability Share on Facebook Share on X Share on LinkedIn Concept illustration for open source software Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The number of codebases containing at least one open source vulnerability increased by nine percentage points in 2020, according to a new report from Synopsys , the silicon design company behind open source security management platform Black Duck.
In the sixth Open Source Security and Risk Analysis (OSSRA) report, Synopsys said it has provided an “in-depth snapshot of open source security, compliance, licensing, and code quality risk in commercial software,” observing that of the 1,546 commercial codebases scanned by Black Duck in 2020, 84% contained at least one open source vulnerability — up from 75% in last year’s report.
Most modern software relies to some degree on open source software, as it saves companies the time and resources needed to develop and maintain every component internally. Black Duck, which Synopsys bought in 2017 for $547 million , is one of several software composition analysis (SCA) platforms, with others including Sonatype , which was acquired by Vista Equity Partners in 2019; Snyk , which recently closed a $300 million round of funding; and WhiteSource, which last week raised $75 million.
Companies use these platforms to identify every open source component in their stack to surface vulnerabilities and license compliance risks. And it’s these open source “audits” Synopsys and Black Duck primarily use as the basis for their annual OSSRA report.
The 1,546 codebases that constituted this year’s report spanned 17 industries, including aerospace, fintech, IoT, and telecommunications, with Synopsys concluding that 98% of codebases contain open source code. This is marginally down from the 99% it reported last year, but incremental deviations are to be expected — the bottom line is that most applications continue to rely on open source components.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! So why would vulnerabilities be spreading at this rate? Tim Mackey, principal security strategist at the Synopsys Cybersecurity Research Center (CyRC), thinks that while there are some complexities behind the growth of vulnerabilities, for most companies the problem is essentially one of scale.
“If you look at the average number of components in an application over the last three years, it’s gone from 298 to 445 and now to 528,” he told VentureBeat. “If someone designed their update and patching processes to manage 300 components per app in 2018, they probably didn’t expect usage to grow that much in two years. Then if you overlay that US-CERT (U.S. cybersecurity and infrastructure agency) reported an average of slightly more than 48 new CVEs (common vulnerabilities and exposures) each day in 2020, keeping up with patching is a huge problem.” At the heart of the problem is the vast array of open source software packages out there. A slew of tools have emerged to help developers and companies make sense of the open source world. Openbase, for example, wants to be the Yelp for open source software packages. OpenLogic’s Stack Builder, meanwhile, helps enterprises choose the right combination of open source software for their needs. And Two Sigma Ventures’ Open Source Index highlights GitHub’s most popular projects right now.
But while selecting the right package is important, keeping abreast of updates is equally essential. In short, developers often struggle to keep on top of their open source stack and remember where they got their open source components from when it’s time to download patches. This is an area where companies such as Synopsys are carving their niche.
High-risk code The broad industry consensus is that vulnerabilities are rife within open source code, and bad actors are hell-bent on exploiting them. In its State of Software Security: Open Source Edition report last year, app security company Veracode noted that 70% of applications contained a security flaw in an open source library, while Sonatype recently reported a 430% surge in attacks targeting open source software supply chains.
But not all vulnerabilities are created equal, and many offer limited scope for hackers to exploit. In an interview with VentureBeat this week, WhiteSource CEO and cofounder Rami Sass said the company’s research showed that only “15% to 30% of vulnerabilities are effective — the majority of open source vulnerabilities are not called by the proprietary code.” This means it’s important to distinguish between imminently dangerous vulnerabilities and minor flaws. With that in mind, Synopsys’s latest report found that the percentage of codebases containing high-risk open source vulnerabilities grew 11 percentage points to 60% in 2020, with “high-risk” defined as a vulnerability that has been actively exploited, has “documented proof-of-concept exploits,” or has been “classified as a remote code execution vulnerability.” Above: Synopsys: Vulnerabilities in codebases Moreover, several of the top 10 open source vulnerabilities identified in the 2019 report not only reared their heads again in 2020 but showed sizable percentage increases — this, according to Mackey, was the biggest surprise the company saw in its audit.
“Normally, we’d expect to see exposure to any given CVE decline over time,” he said. “After all, once a vulnerability is reported, most teams will want to apply the patch.” The top two vulnerabilities were related to jQuery, and both demonstrated double-digit year-on-year growth.
Above: Synopsys: Top 10 vulnerabilities License conflicts Away from the vulnerability sphere, the latest OSSRA report found that the number of codebases containing open source license conflicts fell marginally year-on-year from 67% to 65%, with nearly three-quarters of these related to a GNU General Public License.
Meanwhile, 26% of the codebases used open source with either no license or a customized license. This is important because customized open source licenses often need to be evaluated for potential IP issues or legal uncertainties.
Elsewhere, the report showed that 91% of codebases contained open source dependencies with zero development activity in the past two years, up from 88% the previous year. This might not be a problem, but it means the vast majority of codebases, according to Synopsys audits, contain an open source dependency with no recent new features, enhancements, or — more importantly — security fixes.
What does this all mean? For one thing, software — open source or otherwise — can become vulnerable if nobody is at the wheel. This is why the Linux Foundation set up the The Core Infrastructure Initiative (CII), with backing from tech heavyweights such as Amazon, Google, Microsoft, Cisco, IBM, and Intel, to support open source projects that are critical to the internet and related devices and systems.
But it also means enterprise-focused commercial companies can monetize open source projects with the promise of added features and (enhanced) security. And companies such as Synopsys, WhiteSource, Snyk, and Sonatype can build billion-dollar businesses by helping developer teams keep on top of their open source stack and ensuring major flaws are addressed quickly.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,087 | 2,021 |
"Nvidia reveals Omniverse Enterprise for simulating products and worlds | VentureBeat"
|
"https://venturebeat.com/2021/04/12/nvidia-reveals-omniverse-enterprise-for-simulating-products-and-worlds"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia reveals Omniverse Enterprise for simulating products and worlds Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Nvidia has announced its Omniverse , a virtual environment the company describes as a “metaverse” for engineers, will be available as an enterprise service later this year.
CEO Jensen Huang showed a demo of the Omniverse , where engineers can work on designs in a virtual environment, as part of the keynote talk at Nvidia’s GPU Technology Conference , a virtual event being held online this week. I also moderated a panel on the plumbing for the metaverse with a number of enterprise participants.
Huang said that the Omniverse is built on Nvidia’s entire body of work, letting people simulated shared virtual 3D worlds that obey the worlds of physics.
“The science fiction metaverse is near,” he said in a keynote speech. “One of the most important parts of Omniverse is that it obeys the laws of physics.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! The Omniverse is a virtual tool that allows engineers to collaborate. It was inspired by the science fiction concept of the metaverse , the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One.
The project started years ago as a proprietary Nvidia project called Holodeck , named after the virtual reality simulation in Star Trek. But it morphed into a more ambitious industry-wide effort based on the plumbing made possible by the Universal Scene Description (USD) technology Pixar developed for making its movies. Nvidia has spent years and hundreds of millions of dollars on the project, said Richard Kerris, Nvidia media and entertainment general manager, in a press briefing.
Above: Jensen Huang, CEO of Nvidia, at GTC 21.
Omniverse debuted in beta form in December. More than 17,000 users have tested it since then, and now the company is making the Omniverse available as a subscription service for enterprises. It’s just the kind of thing that engineers need during the pandemic to work on complex projects remotely.
BMW Group, Ericsson, Foster + Partners, and WPP are using Omniverse. It has application support from Bentley Systems, Adobe, Autodesk, Epic Games, ESRI, Graphisoft, Trimble, Robert McNeel & Associates, Blender, Marvelous Designer, Reallusion, and Wrnch.
And support comes from the likes of Asus, Boxx Technologies, Cisco, Dell Technologies, HP, Lenovo, and Supermicro. More than 400 enterprises are going to use the new version for enterprises starting this summer. It comes with enterprise support for fully established enterprises, Kerris said.
What the Omniverse can do Above: Omniverse will be used across industries for design.
The Omniverse, which was previously available only in early access mode, enables photorealistic 3D simulation and collaboration. It’s a metaverse that obeys the laws of physics, and so it enables companies and individuals to simulate things from the real world that can’t be tested easily in the real world, like self-driving cars, which can be dangerous to pedestrians if they aren’t perfected.
Mattias Wikenmalm, technical specialist at Volvo, said on the panel that it’s necessary to simulate not just the car but the context around the car, like a city environment.
“The foundation is still the data, and this is the first time we can be data native, where we don’t have to focus on moving data between different systems. In this case, data is a first-class citizen,” Wikenmalm said. “It’s so nice we can just focus on the data and borrow our data for different applications and transform that data. Exchanging data between systems has been complex. If we can get that out of the way, we can start building a proper metaverse.” BMW is using Omniverse to simulate a full car factory before it builds it. And there’s no limit to the testing. If someone wanted to create an entire city, or even build a simulation of the entire United States, for a self-driving car testing ground, it would be possible.
It is intended for tens of millions of designers, engineers, architects, and other creators to use at the same time. The designers can work on the same parts of their designs at the same times without overwriting each other, with changes offered as options for others to accept. That makes it ideal for large teams to work together.
Above: Nvidia Omniverse Susanna Holt, vice president of engineering for Autodesk, said on the panel that being able to understand someone else’s data is important, and it means you don’t have to be locked into a single tool or workflow.
“We need the bits to talk to one another, and that’s been so hard until now,” she said. “It is still hard, as you have to import and export data. With USD, it’s the beginning of a new future.” The Omniverse uses Nvidia’s RTX 3D simulation tech to enable engineers to do things like work on a car’s design inside a simulation while virtually walking around it or sitting inside it and interacting with it in real time.
Martha Tsigkari, partner at architectural firm Foster + Partners, said on the panel that the architecture and construction industries really need the ability to transfer data easily from one site to the next.
“Being able to do that in an easy way without having to think about how we change that information is really important,” Tsigkari said. “In order to run really difficult simulations, or understand how buildings perform, we need to use all kinds of software to do this. Working in these processes right now can be painful, and we need to create all of these bespoke tools to do this. A future where this becomes a seamless process and opens to all kinds of industries is a fantastic opportunity that we need to grasp and go for.” Engineers on remote teams will be able to work alongside architects, 3D animators, and other people working on 3D buildings simultaneously, as if they were jointly editing a Google Doc, Kerris said. He added, “The Omniverse was built for our own needs in development.” USD’s roots at Pixar Above: Inside Out is Pixar’s 2015 film.
Pixar’s Universal Scene Description (USD) is the HTML of 3D, and it’s the foundation for sharing different kinds of images from multiple parties in Omniverse, said Kerris.
“We felt that with the entire community starting to move towards this open platform for exchanging 3D information including the objects, scenes, materials and everything, it was the best place for us to start with the foundation for what this platform would become,” Kerris said.
Pixar’s USD standard came from over a decade of film production.
Guido Quaroni is director of engineering and 3D immersive at Adobe, and before that he was at Pixar, where he was responsible for open sourcing USD. In a panel at GTC, he said the idea emerged at Pixar in 2010 as the company was dealing with multiple libraries that dealt with large scenes in its movies.
“Some of the ideas in USD go back 20 years to Toy Story 2 , but the idea was to formalize it and write it in a way that we could eventually open source it,” Quaroni said.
Above: Nvidia’s Marbles at Night demo showcases complex physics and lighting in the Omniverse.
He worked with Sebastian “Spiff” Grassia, head of the team that built USD at Pixar.
“We knew that every studio kind of had something like it,” Quaroni said. “And we wanted to see if we could offer something that became the standard, because for us, the biggest problem was the plugins and integrations with third parties. Why not give it to the world?” The problem that they had was that they needed to be able, at any point in the film pipeline, to extract an asset, to massage it with a third-party tool, and to stick it back into the production process without losing information, said Michael Kass, distinguished engineer at Nvidia and software architect of the Omniverse, in an interview.
Grassia said USD is an interchange format for data.
“It represents decades of Pixar’s experience in building software that supports collaborative filmmaking,” Grassia said. “It’s for collaborative authoring and viewing for a very large 3D scene. It handles combining, assembling, overriding, and animating the assets that you have created in a non-destructive way. That allows for multiple artists to work on the same scene concurrently.” Before USD, artists had to check out a piece of digital art, work on it, and check it back in. With USD, Nvidia has enabled sharing across all applications and different ways of viewing the art. The changes are transmitted back and forth. A large number of people can view and work on the same thing, Kass said. A feature dubbed Nucleus serves as a traffic cop that communicates what is changing in a 3D scene.
Early on, Pixar tried to create tools itself, but it found there were tools like Maya, 3D Studio Max, Unreal Engine, or Blender that were more advanced at doing particular tasks. And rather than have to train those vendors to continuously update their tools, Pixar made USD available as an open standard.
What Nvidia added Above: Omniverse Enterprise The platform also uses Nvidia technology, such as real-time photorealistic rendering, physics, materials, and interactive workflows between industry-leading 3D software products.
Pixar built a renderer, a data visualization engine dubbed Hydra. It was designed in a way to hook up other data sources, like a Maya image. So the artists can work with large datasets without having the vendor translate everything into their own native representation.
Kass and his colleagues at Nvidia found that USD was a “golden nugget” that let them represent data in a way that could be used for all sorts of different purposes.
“We decided to put USD at the center of our virtual worlds, but at Pixar, most of the collaboration was not real time. So we added on top of USD the ability to synchronize with different users,” Kass said.
Above: COVID-19 simulation in Omniverse.
The real test has been making sure that USD can be useful beyond the media and entertainment applications. Omniverse enables collaboration and simulation that could become essential for Nvidia customers working in robotics, automotive, architecture, engineering, construction, and manufacturing.
“There really isn’t anything else like it,” Kerris said. “Pixar built the standard, and we saw the potential in it. This is a demand and a need that everybody has. Can you imagine the internet without a standard way of describing a web page? It used to be that way. With 3D, no two applications use the same language today. That needs to change, or else we really can’t build the metaverse.” Nvidia extended USD, which was built for Pixar’s needs, and added what is necessary for the metaverse, Kass said.
“We got to stand on top of giants, but we are pushing it forward in a direction they weren’t envisioning when they started,” he added.
Nvidia built a tool called Omniverse Create, which accelerates scene composition and allows users in real time to interactively assemble, light, simulate, and render scenes. It also built Omniverse View, which powers seamless collaborative design and visualization of architectural and engineering projects with photorealistic rendering. Nvidia RTX Virtual Workstation software gives collaborators the freedom to run their graphics-intensive 3D applications from anywhere.
Omniverse Enterprise is a new platform that includes the Nvidia Omniverse Nucleus server, which manages the database shared among clients, and Nvidia Omniverse Connectors, which are plug-ins to industry-leading design applications.
With all of the applications working live, artists don’t have to go through a laborious exporting or importing process.
“Omniverse is an important tool for industrial design — especially with human-robot interactions,” said Kevin Krewell, an analyst at Tirias Research, in an email. “Simulation is a big new market for GPU cloud services.” Big problems The Omniverse and USD aren’t going to lead to the metaverse overnight.
Above: Nvidia’s Omniverse platform.
Tsigkari said that getting so many creative industries to work together has been a huge challenge, particularly for architecture firms that have to pull so many different disciplines to get work done from conception to completion.
“You need a way to allow for the creative people to quickly pass things directly from engineers to consultants so they can do their analysis and pass it on to the manufacturers,” she said. “In the simplest way, this doesn’t exist.” At the same time, different industries work on different timetables, from long cycles to real time. “For us, this has been really crucial to be able to do this in a seamless way where you don’t have to think about the in-between space,” she said.
Holt at Autodesk said she would like to see USD progress forward in dealing with huge datasets, on the level of modeling cities for construction purposes. “It’s not up to that yet,” she said. “Some changes would be needed as we take it into other areas like construction.” Grassia said there are features that allow of “lazy loading,” or different levels of detail becoming visible as a huge dataset loads.
Lori Hufford, vice president of applications integration at Bentley Systems, said on a panel her team has had good results so far working on large models. “I’m really excited about the open nature of USD,” she said. “We’ve been very impressed with the scale we have been able to achieve with USD.” The Omniverse today Above: WPP is using Omniverse to build ads remotely.
The enterprise version will support Windows and Linux machines, and it is coming later this year.
What can you do in this engineer’s metaverse? You can simulate the creation of robots through a tool dubbed Isaac. That lets engineers create variations of robots and see how they would work with realistic physics, so they can simulate what a robot would do in the real world by first making the robot in a virtual world. There are also Omniverse Connectors, which are plugins that connect third-party tools to the platform. That allows the Omniverse to be customized for different vertical markets.
BMW is using Omniverse to simulate the exact details of a car factory, simulating a complete physical space. The company calls the factory a “digital twin.” The factory has enough detail to include 300 cars in it at a given time, and each car has about 10 gigabytes of data.
Thousands of planners, product engineers, facility managers, and lean experts within the global production network are able to collaborate in a single virtual environment to design, plan, engineer, simulate, and optimize extremely complex manufacturing systems before a factory is actually built or a new product is integrated.
Milan Nedeljkovic, member of the board of management of BMW AG, said in a statement that the innovations will lead to a planning process that is 30% more efficient than before. Eventually, Omniverse will enable BMW to simulate all 31 of its factories.
Above: Bentley’s tools used to create a digital twin of a location in the Omniverse.
Volvo is designing cars inside Omniverse before committing to physical designs, while Ericsson is simulating future 5G wireless networks. Industrial Light & Magic has been evaluating Omniverse for a broad range of possible workflows, but particularly for bringing together content created across multiple traditional applications and facilitating simultaneous collaboration across teams that are distributed all over the world.
Foster + Partners, the United Kingdom architectural design and engineering firm, is implementing Omniverse to enable seamless collaborative design to visualization capabilities to teams spread across 14 countries.
Activision Publishing is exploring Omniverse’s AI-search capabilities for its games to allow artists, game developers and designers to search intuitively through massive databases of untagged 3D assets using text or images.
WPP, the world’s largest marketing services organization, is using the Omniverse to reinvent the way advertising content is made by replacing traditional on-location production methods with entirely virtual production.
Perry Nightingale, senior vice president at WPP, said on a panel that he is seeing collaboration on an enormous scale with multiple companies working together.
Above: Nvidia’s Omniverse can be used for entertainment creation.
“I’m excited how far that could go, with governments doing it for city planning and other sorts of grand scale collaboration around USD,” Nightingale said.
Nvidia will use Omniverse to enable Drive Sim 2.0, which lets carmakers test their self-driving cars inside Omniverse. It uses USD as Nvidia transitions from game engines to a true simulation engine for Omniverse, said Danny Shapiro, senior director for automobiles at Nvidia. Nvidia’s own developers will now be able to support new hardware technologies earlier than they could in the past.
“We initially built it for our own needs, so that when technologies were being developed in different groups that they could share immediately, rather than have to wait for the development of it into their particular area,” Kerris said. “The same holds true with our developers. It used to be if we brought a technology out, we would then work with our developers, and it would take a period of time for them to support it. However, by building this platform that crosses over these, we have the ability now to bring out new technologies that they can take advantage of day one.” The metaverse of the future Above: Omniverse enables collaboration on complex projects.
One question is how well Omniverse will be able to deal with latency, or interaction delays across the cloud. That would be important for game developers, who have to create games that operate in real time. Scenes built with Omniverse can be rendered at 30, 60, or 120 frames per second as needed for a real-time application like a game.
Kerris said in an earlier chat that most of what you’re looking at doesn’t have to be constantly refreshed on everybody’s screen, making the real-time updating of the Omniverse more efficient. Nvidia’s Nucleus tech is a kind of traffic cop that communicates what is changing in a scene as multiple parties work on it at once.
As for viewing the Omniverse, gamers could access it using a high-end PC with a single Nvidia RTX graphics card.
Huang said in his speech, “The metaverse is coming. Future worlds will be photorealistic, obey the laws of physics or not, and inhabited by human avatars and AI beings.” He said that games like Fortnite or Minecraft or Roblox are like the early versions of the metaverse. But he said the metaverse is not only a place to play games. It’s a place to simulate the future.
Above: Dean Takahashi moderates a panel of Omniverse experts at the Nvidia GTC 2021 event.
“We are building cities because we need to simulate these virtual worlds for our autonomous vehicles,” Kerris said. “We need a world in which we can train them and test them. Our goal is to scale it so so you could drive continuously drive a virtual car continuously from Los Angeles to New York, in real time, using the actual hardware that’s going to be inside the car and give it a virtual reality experience plugged into its sensory inputs, the output of our simulator, and fool it into thinking it’s in the real world. And for that, it has to be an extremely large world. We’re not quite there yet. But that is what we are moving towards.” For game companies, I can foresee game publishers eventually trading around their cities, as one might build a replica of Paris while another might build New York. After all, if everyone works with USD technology, there might not be a need to rebuild every city from scratch for simulations like games.
Ivar Dahlberg, technical artist at Embark Studios, a game studio in Stockholm, said it is tantalizing to think about trading cities back and forth between game developers who are working on city-level games.
“Traditionally, developers have focused on a world for someone else to experience,” he said. “But now it seems there are lots more opportunities for developers to create something together with the inhabitants of that world. You can share the tools with everybody who is playing. That ties in quite nicely to the idea of a metaverse. USD is definitely a step in that direction.” Tsigkari said, “That is an experience that may not be very far out. It won’t matter if one company builds Paris, London, or New York. It will be more about what you are doing with those assets. What is the experience that you offer to the user with those assets?” As I saw recently in the film A Glitch in the Matrix , it will be easier to believe in the future that we’re all living in a simulation. I expect that Nvidia will be able to fake a moon landing for us next.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,088 | 2,020 |
"Nvidia announces open beta for Omniverse as a 'metaverse' for engineers | VentureBeat"
|
"https://venturebeat.com/2020/10/05/nvidia-announces-open-beta-for-omniverse-as-a-metaverse-for-engineers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia announces open beta for Omniverse as a ‘metaverse’ for engineers Share on Facebook Share on X Share on LinkedIn Nvidia's Omniverse can simulate a physically accurate car.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Nvidia has announced an open beta for its Omniverse , a virtual environment the company describes as a “metaverse” for engineers.
CEO Jensen Huang showed a demo of the Omniverse , where engineers can work on designs in a virtual environment, as part of the keynote talk at Nvidia’s GPU Technology Conference , a virtual event being held online this week. More than 30,000 people from around the world have signed up to participate.
The Omniverse is a virtual tool that allows engineers to collaborate. It was inspired by the science fiction concept of the metaverse , the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One.
“The metaverse analogy is excellent,” Nvidia media and entertainment general manager Richard Kerris said in a press briefing. “It’s actually one that we use internally, quite a lot. You’ll be able to collaborate anywhere in the world in this virtual environment. And your workflow is key, whether you’re an end user or developer. So we really are excited about it as a platform.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Leveraging Nvidia technology Above: The Omniverse is where robots learn to be robots.
Nvidia has worked on the tech for a while, with early access lasting 18 months. The Omniverse, which was previously available only in early access mode, enables photorealistic 3D simulation and collaboration. It is intended for tens of millions of designers, engineers, architects, and other creators and will be available for download this fall.
The Omniverse uses Nvidia’s RTX 3D simulation tech to enable engineers to do things like work on a car’s design inside a simulation while virtually walking around it or sitting inside it and interacting with it in real time. Engineers on remote teams will be able to work alongside architects, 3D animators, and other people working on 3D buildings simultaneously, as if they were jointly editing a Google Doc, Kerris said. He added that “The Omniverse was built for our own needs in development.” Above: Nvidia’s Marbles at Night demo showcases complex physics and lighting in the Omniverse.
The open beta of Omniverse follows an early access program in which customers such as Foster + Partners and ILM — along with 40 other companies and 400 individual creators — have been evaluating the platform. The cloud-based platform runs in the datacenter using servers based on chips from Nvidia, such as the Nvidia Quadro RTX A6000 chips being introduced today.
Huang views the Omniverse as the beginning of the Star Trek Holodeck concept “realized at last.” Huang said in his speech, “The metaverse is coming. Future worlds will be photorealistic, obey the laws of physics or not, and inhabited by human avatars and AI beings.” He said that games like Fortnite or Minecraft or Roblox are like the early versions of the metaverse. But he said the metaverse is not only a place to play games. It’s a place to simulate the future.
Pixar and other allies Omniverse is based on Pixar’s widely adopted Universal Scene Description (USD), the leading format for universal interchange between 3D applications. Pixar used it to make animated movies. The platform also uses Nvidia technology, such as real-time photorealistic rendering, physics, materials, and interactive workflows between industry-leading 3D software products.
“With the entire community starting to move toward this open platform [USD] for exchanging 3D information, including the objects, scenes, materials, and everything else, it was the best place for us to start,” Kerris said. “And because of that, we now are able to work with all kinds of third-party applications.” Omniverse enables collaboration and simulation that could become essential for Nvidia customers working in robotics, automotive, architecture, engineering, construction, manufacturing, media, and entertainment.
Above: Nvidia’s Omniverse can be used for entertainment creation.
Industrial Light & Magic, a Lucasfilm company and maker of visual effects for movies such as the Star Wars series, has been evaluating Omniverse for creative and animation pipelines.
Other early adopters include leading architectural design and engineering firms and telecommunication companies such as Foster + Partners, an architectural design and engineering firm, and Woods Bagot, an architectural and consulting firm. The tech allows them to have a hybrid cloud workflow for the design of complex models and visualizations of buildings.
Omniverse has support from many major software leaders, such as Adobe, Autodesk, Bentley Systems, Robert McNeel & Associates, and SideFX. Blender is working with Nvidia to add USD capabilities to enable Omniverse integration with its software.
Simultaneous real-time access Above: Nvidia’s Omniverse works with a lot of other technologies.
So it looks like engineers will be the first to kick the tires on the metaverse, which I’m hoping will someday replace the Zoomverse we’re all stuck in right now. Damn, I should have been an engineer. Being an engineering thinker, I asked Kerris whether the Omniverse would be able to deal with latency, or interaction delays across the cloud.
He noted that the only information that has to be transmitted across the internet to the other users are the parts of a project that are being changed. That means most of what you’re looking at doesn’t have to be constantly refreshed on everybody’s screen, making the real-time updating of the Omniverse more efficient. Nvidia’s Nucleus tech is a kind of traffic cop that communicates what is changing in a scene as multiple parties work on it at once.
Above: COVID-19 simulation in Omniverse.
“A decent connection to the cloud gives you the real-time performance that you’ll need to have that kind of workflow feel like you’re in the same room with one another person, even if you are in different parts of the world,” Kerris said.
What can you do in this engineer’s metaverse? You can simulate the creation of robots through a tool dubbed Isaac. That lets engineers create variations of robots and see how they would work with realistic physics. So they can simulate what a robot would do in the real world by first making the robot in a virtual world. There are also Omniverse Connectors, which are plugins that connect third-party tools to the platform. That allows the Omniverse to be customized for different vertical markets.
If you’ve been wondering what technology might be useful for the regular person’s metaverse, the Omniverse offers a pretty big clue. As an FYI, VentureBeat is holding a metaverse conference on January 27.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,089 | 2,018 |
"Ready Player One film review -- Not bad for a Spielberg film with a lot of new licenses | VentureBeat"
|
"https://venturebeat.com/2018/03/28/ready-player-one-film-review-not-bad-for-a-spielberg-film-with-a-lot-of-new-licenses"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Review Ready Player One film review — Not bad for a Spielberg film with a lot of new licenses Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
The tech, game, and virtual reality industries have a lot of hopes tied to Steven Spielberg’s Ready Player One film, which debuts on March 29 throughout the United States. I caught a premier at the Dolby Theater in San Francisco as part of promotion for Roblox, a virtual world company which has its own sort of James Halliday father figure and creator in Dave Baszucki , the cofounder of Roblox and its virtual world. I enjoyed myself, and so will anyone who gets a kick out of hunting down pop culture allusions.
Virtual reality plays a central role in Ready Player One.
The book inspired many people in VR to pursue their own dreams, raising billions of dollars and creating a whole entertainment ecosystem. The movie could play a similar role in popularizing VR among consumers, who may enjoy the movie and take an interest in VR. Or so that is what some companies in the VR industry hope. HTC, for instance, has created a number of Ready Player One VR experiences to accompany the movie. I don’t think any single industry can count on a film to lift it into the public’s consciousness. Those industries have to do it themselves with compelling products.
(Craig Donato, chief business officer at Roblox, will speak on the “leisure economy” at our GamesBeat Summit 2018 event).
In Ready Player One , Halliday creates an enormously popular online world dubbed the OASIS. He dies from an illness, and he sets the world on a contest, requiring hunters (known as Gunters, short for Egg Hunters) to find three keys and solve puzzles in order to inherit the world, which is valued at half a trillion dollars. That pits lone Gunters like Parzival and Art3mis against the evil corporation, IOI (Innovative Online Industries), and its minions known as the Sixers. Parzival and his friends start a romp through the 1980s, unearthing Halliday’s memories in order to decipher the puzzling clues he left behind. It culminates in a gigantic clash of video game and pop culture avatars.
Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Editor’s note. This review has film spoilers.
Spielberg’s treasure and trivia hunt Above: A haptic suit enables Wade to feel touch in the Ready Player One trailer.
I’ve read the book twice and count myself as a fan. So is my 18-year-old daughter, who went with me. She didn’t get all of the references to 1980s pop culture, so I filled her in on a few things. (That darn familiar Tolkien-like incantation quote still eludes me.
Update: It’s from Merlin character in Excalibur per @trilobyte ). It’s filled with so many references that it’s impossible to count, and many of them are quite different from the book. If you’re looking for a replica of the events of the book, then Spielberg has ruined this movie for you. On the other hand, the licensing lawyers have probably had a gargantuan battle over securing rights to properties used in the film, and that may explain why the film is so different.
The book had some memorable scenes with video games such as Joust, Zork, Pac-Man, Tempest, Black Tiger, and Adventure. And the book had lots of references to Blade Runner , Monty Python and the Holy Grail , and War Games.
From music, Rush’s 2112 album played a big role. That was half the fun of the book for me, as I grew up during the 1970s and 1980s, and I knew so many of those references.
But the movie dispenses with many of the quests in the book, changing them so that one becomes a road race with the Wade Watts, the hero of the story, driving a DeLorean car from Back to the Future.
I missed the presence of Ultraman, another figure from my childhood, but the rights for that property were in dispute and unavailable to Spielberg. The film also doesn’t have any of the book’s signature games, with the exception of Adventure , a game on the Atari 2600 which had the first Easter egg , or a hidden message embedded inside the game. Spielberg had a big ally in Warner Bros., but they didn’t score all of the important licenses.
Naturally, the film cut short a lot of the story, quests, and details of Ernest Cline’s 2011 book. But I felt like it captured the essence of it. I enjoyed the big scene in the dance club, where the evil Sixers invade a disco in hopes of assassinating Wade (and his avatar Parzival) and his quest partner Art3mis. Parzival and Art3mis do a funny disco scene when Parzival buys dance moves that replicate John Travolta’s gyrations from Saturday Night Fever.
I laughed out loud at that part.
I also loved a section of the film that was based on the Jack Nicholson film version of The Shining , which I remember loathing as much as author Stephen King did. Spielberg pulled out a few of the iconic moments of that film and brought them into a hilarious action scene in the movie. And I liked how this scene and others gave bigger roles to Aech and Art3mis than they had in the novel.
Keeping the core of the emotion Above: A scene from Spielberg’s upcoming Ready Player One.
I also thought that Spielberg gave us glimpses of the emotional depth of characters like Art3mis, who resists Parzival’s advances so she can complete the quest on her own. It delves into Halliday’s whose own inaction leads to him surrendering his interest in Kira, the love of his life, so that she instead becomes the wife of his partner, Ogden Morrow. You get a glimpse of the tragedy that led Halliday to create an entire world to distract him from his loneliness.
There’s also a tiny snub that Halliday gives to business-oriented Nolan Sorrento, setting Sorrento on a quest to accumulate a business empire so he can take over Halliday’s creation. And there are small glimpses into the issues of spending too much time in VR and not enough in the real world, or falling in love with the avatar of Art3mis, rather than the real person who is hiding behind the avatar.
There were so many delightful pieces of the book that were left out, like Wade’s impoverished life and his discovery of the first Dungeons & Dragons puzzle and his struggle with the video game Joust. But book author Cline was a co-screenwriter on the film with Zack Penn, and I figure if Cline is OK with the big changes , then I’m OK with it too. After all, it would be really boring to watch somebody play a video game in a movie.
The battle at the end of the film is visually epic, and it was so fun to see characters from Halo and Overwatch, if only for an instant, as two sides engage in final combat. That scene lives up to the expectations I had when I first read the book, and that is quite a tall order. That was all I was really looking for in this movie: visuals that could point me to what the future would look like someday, either in the real world or the virtual world.
If there were some disappointments, it had to do more with the depiction of the real world. The world of 2045 is so dystopian that the 1980s are viewed fondly as the peak of American society. But Spielberg didn’t really visualize this part of our society as well as he did the virtual world. The characters, with the exception of Aech, also didn’t look like I expected them to be in the real world. That is, I expected them to be rather homely overweight people, like they were in the book. Instead, the actors in real life looked pretty much like pop stars.
Ready Player One’s impact on the real world Above: The worlds that want to be Ready Player One.
The references that have echoed back into the real world are fun in part because people inspired by the novel have gone on to try to re-create it in real life. Palmer Luckey, founder of Oculus, asked his employees to read Ready Player One as he prepared to launch the second coming of VR. Greg Castle, an investor in Oculus and founder of the $12 million Anorak Ventures fund , named his company after Anorak, the avatar of James Halliday, the creator of the Oasis, the virtual world of Ready Player One.
HTC has commissioned numerous applications based on Ready Player One.
Baszucki, the creator of Roblox, had a chance to interview Cline at the South-by-Southwest conference. The parallel between Halliday and Baszucki, who has built a world with 50 million monthly visitors , is quite striking.
“I was driving a car across Canada a long time ago all by myself, and the notion hit me, that this market and these technologies would be inevitable, and we have such an amazing responsibility and stewardship [role] to usher them in such a graceful way,” Baszucki said at the premier.
In partnership with Warner, Roblox has millions of players going on a quest that mirrors the quest for the Egg in Ready Player One.
Many young fans and Roblox game creators have created 70,000 videos related to Ready Player One , and the quest has been played millions of times.
I think it’s going to be some time before we can create the world of Ready Player One , at least the way both Spielberg and Cline have envisioned it. But I appreciate that they’ve pointed the way, and that some people in tech and games and other industries are trying to make it happen. Lastly, check out the infographic above to get a sense of the candidates for the world of Ready Player One.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
Join the GamesBeat community! Enjoy access to special events, private newsletters and more.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,090 | 2,021 |
"Esri boosts digital twin tech for its GIS mapping tools | VentureBeat"
|
"https://venturebeat.com/2021/07/18/esri-boosts-digital-twin-tech-for-its-gis-mapping-tools"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Esri boosts digital twin tech for its GIS mapping tools Share on Facebook Share on X Share on LinkedIn ESRI digital GIS mapping Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Geographic information system (GIS) mainstay Esri is looking to expand its stake in digital twin technologies through significant updates in its product portfolio. As it announced at its recent user conference, the company is updating complex data conversion, integration, and workflow offerings to further the digital twin technology mission.
In fact, GIS software is foundational to many digital twin technologies, although that is sometimes overlooked as the nebulous digital twin concept seeks greater clarity in the market.
Esri’s updates to its ArcGIS Velocity software promise to make diverse big data types more readily useful to digital twin applications. At Esri User Conference 2021, these enhancements were also joined by improvements in reality capture, indoor mapping, and user experience design for digital twin applications.
Reality capture is a key to enabling digital twins, according to Chris Andrews, who leads Esri product development in geo-enabled systems, intelligent cities, and 3D. Andrews gave VentureBeat an update on crucial advances in Esri digital twins’ capabilities.
Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! “Reality capture is a beginning — an intermittent snapshot of the real world in high accuracy 3D, so it’s an integral part of hydrating the digital twin with data,” he said. “One area we will be looking at in the future is indoor reality capture, which is something for which we’re hearing significant demand.” What is reality capture? One of the most important steps in building a digital twin is to automate the process of capturing and converting raw data into digital data.
There are many types of raw data, which generally involve manual organization. Esri is rapidly expanding workflows for creating, visualizing, and analyzing reality capture content from different sources. This includes point clouds (lidar), oriented and spherical imagery (pictures or circular pictures), reality meshes, and data derived from 2D and 3D raster and vector content such as CAD drawings.
For example, Esri has combined elements it gained from acquiring SiteScan and nFrames over the last two years with its in-house developed Drone2Map. Esri also created and is growing the community around I3S, an open specification for fusing data captured by drones, airplanes, and satellites, Andrews told VentureBeat.
ArcGIS Velocity handles big data Esri recently disclosed updates to ArcGIS Velocity, its cloud integration service for streaming analytics and big data.
ArcGIS Velocity is a cloud-native, no-code framework for connecting to IoT data platforms and asset tracking systems, and making their data available to geospatial digital twins for visualization, analysis, and situational awareness. Esri released the first version of ArcGIS Velocity in February 2020.
“Offerings like ArcGIS Velocity are integral in bringing data into the ArcGIS platform and detecting incidents of interest,” said Suzanne Foss, Esri product manager.
Updates include stateful real-time processing introduced in December 2020, machine learning tools in April and June 2021, and dynamic real-time geofencing analysis in June 2021. The new stateful capabilities allow users to detect critical incidents in a sensor’s behavior over time, such as change thresholds and gap detection. Dynamic geofencing filters improve the analysis between constantly changing data streams.
Velocity is intended to lower the bar for bringing in data from across many different sources, according to Foss. For example, a government agency could quickly analyze data from traffic services, geotagged event data, and weather reports to make sense of a new problem. While this data may have existed before, it required much work to bring it all together. Velocity lets users get mashup data into new analytics or situational reports with a few clicks and appropriate governance. It is anticipated that emerging digital twins will tap into such capabilities.
Building information modeling tie-ins One big challenge with digital twins is that vendors adopt file formats optimized for their particular discipline, such as engineering, operations, supply chain management, or GIS. When data is shared across tools, some of the fidelity may be lost. Esri has made several advances to bridge this gap such as adding support for Autodesk Revit and open IFC formats. It has also improved the fidelity for reading CAD data from Autodesk Civil 3D and Bentley MicroStation in a way that preserves semantics, attribution, and graphics. It has also enhanced integration into ArcGIS Indoors.
Workflows are another area of focus for digital twin technology. The value of a digital twin comes from creating digital threads that span multiple applications and processes, Andrews said. It is not easy to embed these digital threads in actual workflows.
“Digital twins tend to be problem-focused,” he said. “The more that we can do to tailor specific product experiences to include geospatial services and content that our users need to solve specific problems, the better the end user experience will be.” Esri has recently added new tools to help implement workflows for different use cases.
ArcGIS Urban helps bring together available data with zoning information, plans, and projects to enable a digital twin for planning applications.
ArcGIS Indoors simplifies the process of organizing workflows that take data from CAD tools for engineering facilities, building information modeling (BIM) data for managing operations, and location data from tracking assets and people. These are potentially useful in, for example, ensuring social distancing.
ArcGIS GeoBIM is a new service slated for launch later this year that will provide a web experience for connecting ArcGIS and Autodesk Construction Cloud workflows.
Also expected to underlie digital twins are AR/VR technologies, AI, and analytics. To handle that, Esri has been working to enable the processing of content as diverse as full-motion imagery, reality meshes, and real-time sensor feeds. New AI, machine learning, and analytics tools can ingest and process such content in the cloud or on-premises.
AI digital twin technology farm models The company has also released several enhancements to organizing map imagery, vector data, and streaming data feeds into features for AI and machine learning models. These can work in conjunction with ArcGIS Velocity either for training new AI models or for pushing them into production to provide insight or improve decision making.
For example, a farmer or agriculture service may train an AI model on digital twins of farms, informed by satellite feeds, detailed records of equipment movement, and weather predictions, to suggest steps to improve crop yield.
Taken as a whole, Esri’s efforts seek to tie very different kinds of data together into a comprehensive digital twin. Andrews said the company has made strides to improve how these might be scaled for climate change challenges. Esri can potentially power digital twins at “the scale of the whole planet” and address pressing issues of sustainability, Andrews said.
Like so many events, Esri UC 2021 was virtual. The company pledged to resume in-person events next year.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,091 | 2,021 |
"Digital Twin Consortium pursues open source collaboration | VentureBeat"
|
"https://venturebeat.com/2021/06/11/digital-twin-consortium-pursues-open-source-collaboration"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Digital Twin Consortium pursues open source collaboration Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Digital twins promise to bring digital transformation across various industries by harmonizing data flows between applications and users. However, this interest has also driven growth in various trade groups, standards bodies, and consortiums to ensure interoperability. The concern is that the rise of so many standards could slow down meaningful adoption.
Such concerns form the backdrop to the Digital Twin Consortium (DTC) announcement last month of a significant open source effort to facilitate digital twin collaboration across different groups on open source projects, open source code, and open source collateral to address this Tower of Babel.
Digital twins provide a way of unifying data across many applications and types of users for larger projects. But the industry has struggled with application and data silos. Open standards should make it easier to develop applications that span these silos. Some of the biggest challenges include lacking a standard definition of what a digital twin is, integrating back-end data sources, and providing a standard information model.
A recent survey created under the auspices of the Industrial Internet Consortium , a liaison of the DTC, identified at least eight different industrywide efforts working on various aspects of digital twin standards. These groups are working to solve various pieces of digital twin interoperability in diverse end-use cases. But their efforts have been siloed, and various elements relating to open code, open specifications, or an open development model have yet to be tackled.
Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Organizations driving digital twin technologies include: Clean Energy and Smart Manufacturing Innovation Institute (CESMII) Digital Twin Consortium (DTC) GAIA-X Industrial Digital Twin Association (IDTA) Industrial Internet Consortium (IIC) Open Industry 4.0 Alliance Open Manufacturing Platform (OMP) Platform Industry 4.0 Generally, standards group participants today are hoping to go beyond just sharing code and creating open source content and data. That’s according to David McKee, CTO/founder of Slingshot Simulations and co-chair of the Digital Twin Consortium.
McKee told VentureBeat that government bodies are moving to open up important datasets. But these efforts exist independently and often without reference to the tools and technologies actually used to generate or read the pertinent data.
“This initiative highlights the need to bring these together to generate value [and] showing how digital twins are built on data using tools to read that data and also generate new data for decision-making,” McKee said.
Participants are also hoping that this effort will make it easier to weave together various new technologies that come with their own active communities of developers and end users, rich sets of tools and methodologies, as well as advanced standardization efforts and open-source tools for specific domains and industries.
Improved collaboration should lower the barriers to adoption, Dr. Said Tabet, chief architect in the office of the CTO at Dell Technologies, told VentureBeat. “Open-source collaboration will accelerate the adoption of digital twins that today rely on enabling technologies such as AI, modeling and simulation, IIoT and Edge, 5G, and high-performance compute,” he said.
Interop 2.0 So far, the various digital twin groups’ emphasis has been on standards, but standards need enabling software that runs across platforms. Improved collaboration could drive the digital twin industry, much in the way Interop conferences drove Internet adoption.
In the mid-1980s, the telecommunications industry seemed poised to adopt the Open System Interconnect standards championed by telecom companies. Meanwhile, another group started experimenting with a much lighter set of protocols based on the evolution of local area networking technologies called TCP/IP. Proponents organized Interop conferences to showcase how their equipment could work together across a common backbone connected via open source software.
Similarly, the development of open source collaboration for digital twins could drive the practical adoption of approaches that build on different tools that interoperate today, rather than creating over-engineered specifications that are too complicated to work together.
Striking the right balance Digital Twin Consortium chief technical officer Dan Isaacs told VentureBeat the group is working to find the appropriate balance between interoperability and keeping projects simple and extensible. One significant issue has been weeding out all the “open source” proposals that include requirements to purchase proprietary elements.
The group believes that open source projects can be more flexible and respond more rapidly than closed counterparts. With open source, there is a significant number of developers and practitioners that are available. Also, the open source culture can increase everyone’s desire to build and contribute in a meaningful way, as exemplified by Linux or the various Apache projects.
The DTC is also in the process of establishing branches around the world with close ties to academia, local government, and private industry. It has also been expanding relationships with other organizations such as the Linux Foundation, Fiware , and others.
“These and other ongoing activities serve to further drive the adoption and showcase both the value of digital twin in terms of open source implementations and open source standards requirements,” Isaacs said.
Perilous path with promise Gartner vice president and analyst Peter Havart-Simkin told VentureBeat that, for now, all the existing digital twin standards are proprietary in some way. “There is no multi-vendor open standard for a digital twin that can be used by third parties, and there is currently no such thing as an open multi-vendor digital twin integration framework,” he said.
In Havart-Simkin’s estimation, digital twins either exist as templates for a particular vendor’s asset or exist as a set of enabling technologies allowing users to build their own digital twins. In many cases, digital twins exist buried in platforms such as IoT platforms, or in enterprise applications such as asset performance management (APM).
The industry is lacking a digital twin app store where enterprises could buy a digital twin template of an asset that they own (for example, a pump on an oil refinery). This could be possible with the advent of a set of agreed-upon standards for digital twins that included a digital twin definition language.
It would also require a framework within which digital twins from multiple vendors can be combined to define the digital twin of a composite asset — for example, by allowing the combination of a digital twin of a brake system from one vendor with the gearbox from another into a larger digital twin of a car.
One big concern is the ownership of the intellectual property of the assets. This could lead to restrictions on third parties being able to create digital twins of assets they did not build. That could, in turn, raise the age-old issues of walled gardens.
Havart-Simkin also believes that the industry currently suffers from too many proprietary approaches, although, down the road, it may make sense for digital twin standards efforts to align according to vertical markets, such as turbines or buildings.
In the meantime, the current DTC effort shows promise, Havart-Simkin believes. He said it has a very broad global membership that will, to a large degree, prevent certain vendors from attempting to hijack any proposed standards to their own benefit.
Digital Twin Consortium membership is extensive — it includes Ansys, Autodesk, Bentley, Dell, GE Digital, Microsoft, and many others. The real key here, Havart-Simkin emphasizes, is the scale of the involvement and membership of the Digital Twin Consortium.
“There is absolutely no doubt that driving toward open source, open data, and open specifications is the only way that this will play out to the benefit of all developers and vendors and all end-user organizations wishing to build their own,” Havart-Simkin said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,092 | 2,020 |
"Unlearn.ai raises $12 million to accelerate clinical trials with 'digital twins' | VentureBeat"
|
"https://venturebeat.com/2020/04/20/unlearn-raises-12-million-to-accelerate-clinical-trials-with-digital-twins"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Unlearn.ai raises $12 million to accelerate clinical trials with ‘digital twins’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Unlearn.ai , a company that designs software tools for clinical research, today announced that it secured $12 million in equity financing. Unlearn’s “digital twin” approach to trials, in which digital models are used in place of real test subjects, could reduce the number of people required to run a trial without sacrificing standards of evidence.
Unlearn’s technology could also help to solve the systemic reproducibility problem in clinical research, which a pair of surveys by Bayer and Amgen recently brought into sharp relief. Bayer reported successfully replicating just 25% of published preclinical studies it analyzed, while Amgen confirmed findings in just 6 of 53 landmark cancer studies (11%).
Unlearn was cofounded in 2017 by physicists Charles Fisher, Aaron Smith, and Jon Walsh, who initially built the company’s platform atop an AI architecture called restricted Boltzmann machines (RBMs). RBMs are inspired by statistical mechanics and can model a person’s characteristics while remaining robust in the face of missing data, but they poorly model data from different groups, producing blended rather than distinct distributions of, for example, patients.
To address such shortcomings, the team architected an open source package called Paysage, which implemented unsupervised learning algorithms (meaning they use data that hasn’t been classified or labeled) including a hybrid of an RBM and generative adversarial networks: a Boltzmann Encoded Adversarial Machine (BEAM). GANs are two-part AI models consisting of a generator that creates samples and a discriminator that attempts to differentiate between the generated samples and real-world samples, and this unique arrangement enables them to achieve impressive feats of media synthesis.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Unlearn’s DiGenesis platform is built upon this hybrid model. It processes historical clinical trial data sets from thousands of patients to build the disease-specific machine learning models, which are used to create digital twins and their corresponding virtual medical records. Digital twin records are longitudinal and include demographic information, common lab tests, and endpoints and/or biomarkers that look identical to actual patient records in a clinical trial.
In a case study published last year, Unlearn applied its system to predict Alzheimer’s disease progression, in essence projecting the symptoms that individual patients will experience at any point in the future. It simultaneously computed predictions and confidence intervals for multiple characteristics of a patient at once using a BEAM, which was trained and tested on the Coalition Against Major Diseases (CAMD) Online Data Repository for Alzheimer’s Disease. The data set consisted of 5,000 patients measured over a period of 18 months covering 50 variables, including the individual components of ADAS-Cog (a widely used cognitive subscale) and Mini-Mental State Examination, a questionnaire used to measure cognitive impairment in clinical and research settings.
In the course of the study, Unlearn leveraged the trained model to generate “virtual patients” and their associated cognitive exam scores, laboratory tests, and clinical data. Simulations were run for individual patients to project their disease progression in areas such as word recall, orientation, and naming, which were in turn used to compute the overall ADAS-Cog score.
The result: The unsupervised model was able to make accurate ADAS-Cog predictions out to at least 18 months.
Unlearn says that undisclosed pharmaceutical companies have expressed interest in DiGenesis — which isn’t surprising. It takes on average over $2 billion and 10 years to develop and sell a new medicine, and much of the costs arise in the trial phases, during which around 90% of candidate treatments are proven ineffective or unsafe.
“Patients who volunteer for clinical trials take some risk; they could receive a treatment that doesn’t work, or experience serious side-effects. Therefore, it’s really important that we run these trials as efficiently as possible while providing reliable evidence to further medical science,” Fisher told VentureBeat via email. “We believe that our [platform] will have a profound impact on this problem, and are excited to partner with 8VC to realize a shared vision to use technology to improve the lives of patients.” Unlearn’s aspirational goal is to develop a digital twin for every patient, which it envisions will help physicians evaluate the risks each patient faces and develop the best course of treatment for that patient. In the near term, Unlearn intends to focus on neurological diseases, starting with Alzheimer’s disease and multiple sclerosis.
This financing round — a series A — was led by 8VC with participation from all of Unlearn’s existing investors including DCVC, DCVC Bio, and Mubadala Capital Ventures. (It brings Unlearn’s total raised to date to over $17 million.) Through its investment, 8VC principal Francisco Gimenez joined the company’s board of directors.
According to LinkedIn data, San Francisco-based Unlearn has 16 employees.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,093 | 2,021 |
"Cado Security raises $10M for cloud cybersecurity forensics | VentureBeat"
|
"https://venturebeat.com/2021/04/15/cado-security-raises-10m-for-cloud-cybersecurity-forensics"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cado Security raises $10M for cloud cybersecurity forensics Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Digital forensics platform Cado Security today announced a $10 million series A investment led by Blossom Capital, with participation from existing backers. The funds bring the company’s total raised to $11.5 million and will be used to support growth in engineering, customer support, and go-to-market operations.
Some experts estimate that legacy forensics tools only provide 5% or less of the data needed to investigate a cloud attack. Forensics analysts often determine that an attack is not worth further investigation, due to the level of effort required to dig deeper. But these attacks aren’t slowing. Some 20% of organizations get hit with cyberattacks six or more times a year, and 80% say they’ve experienced at least one incident in the last year so severe it required a board-level meeting, according to a report from IronNet.
James Campbell and Chris Doman founded Cado Security in 2020 with the goal of addressing challenges in cloud security forensics. Campbell, who previously led PricewaterhouseCoopers’ cyber response service and Australia’s national Australian Signals Directorate as associate director, teamed up with ThreatCrowd creator Doman to build a forensics platform that speeds up investigations of cloud attacks.
Pandemic-driven shifts “We founded Cado Security right in the midst of the pandemic in April 2020, as enterprises were shifting to the cloud, to enable their remote workforces to successfully work from anywhere,” Campbell told VentureBeat via email. “This uptick in the cloud introduced new complexities and risks enterprises had never seen before. Security teams didn’t have the time to become experts in the cloud amidst the shift, and hackers noticed.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Cado Security automatically captures and processes data to visualize and investigate attacks, leveraging an analysis engine that detects malicious files, suspicious events, personally identifiable information, and financial data. Employing a combination of full-content inspection, log parsing, event correlation, and machine learning models, Cado Security’s platform indexes files and logs for later inspection, creating a human-readable timeline of events.
“[Our] platform has a unique detection engine that uses machine learning in order to identify financial or personally identifiable data across systems that have been impacted by an event,” Campbell explained. “Many of the existing solutions provide an incident overview, which represents a fraction of the actual data related to the event, meaning you’re more likely to miss something big … [Cado] can see data attempting to be exfiltrated by a hacker, even when they are not using any malicious software to evade detection.” Acceleration According to Gartner, nearly 70% of enterprises plan to accelerate spending on cloud services in 2021. As more data moves to the cloud, attacks on cloud infrastructures are increasing significantly, putting new pressures on security teams to respond quickly.
Cado Security claims it has seen “significant demand” despite competition in the over $34.5 billion cloud security market.
Netskope recently raised $340 million at a $3 billion valuation, while Valtix nabbed $14 million in June 2019. There’s also Bitglass , which raked in $70 million for its cloud-native platform that helps companies monitor and secure employee devices.
“Data is moving to the cloud at an alarming rate. We founded Cado Security to help enterprises quickly and easily conduct deep forensic investigations across modern cloud environments to stay one step ahead of today’s cybercriminals,” Campbell said. “Our platform is [one of the few solutions] that can capture data across short-term environments, such as containers and auto-scaling infrastructures, enabling security teams to effectively investigate threats.” Ten Eleven Ventures also participated in London-based Cado Security’s latest funding round.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,094 | 2,021 |
"Bigeye raises $17M to algorithmically monitor data quality | VentureBeat"
|
"https://venturebeat.com/2021/04/15/bigeye-raises-17m-to-algorithmically-monitor-data-quality"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Bigeye raises $17M to algorithmically monitor data quality Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Bigeye , a data quality engineering platform, today announced it has raised $17 million in a series A round led by Sequoia Capital. The company says the funds will be used to improve its platform and help make it available to more data teams.
Data is increasingly critical to enterprises and is woven into the products and services that directly affect customers. To keep pace, data engineering has increased in scale, complexity, and automation, leading to a number of significant workflow challenges. A clear majority of employees (87%) peg data quality issues as the reason their organizations failed to successfully implement AI and machine learning, according to a recent Alation report.
San Francisco, California-based Bigeye, previously called Toro Data Labs, employs machine learning to enable companies to instrument data lakes and warehouses with thousands of data quality metrics. Founded in 2020, the platform automatically instruments datasets and pipelines with metrics, creating alerts driven by anomaly detection techniques.
How it works Bigeye uses connectors and read-only accounts to connect to data sources and record health metrics. Available in fully managed software-as-a-service form or as an on-premises app for enterprises, Bigeye samples objects like tables and generates recommended metrics based on data profiling and semantic analysis. By default, all metrics have automatic thresholds enabled — within 5 to 10 days, Bigeye learns the behavior of the metrics and begins to make adjustments. When those thresholds are reached, the platform sends alerts via email, Slack, and other channels and optionally triggers remediation steps.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For one company, Bigeye identified that its customer data had a number of rows in which the values had been written into the wrong columns. The percentage of rows affected was small enough that analysts might not have spotted it, but at the scale that the company was working, it could have led to hundreds of customer support tickets that would have needed to be resolved.
Bigeye can draw from Snowflake, Redshift, BigQuery, and other popular sources, and its no-code interface allows teams to create, edit, and read configuration and metric histories. The company says that as a part of its efforts to improve the platform, it recently increased support for service-level agreements, which can help engineers build trust through transparency with users.
Data quality As processes around data remain a hurdle in adopting AI — 34% of respondents to a 2021 Rackspace survey stated poor data quality as the reason for AI R&D failure — observability solutions like Bigeye are attracting investments. There’s Aporia , Monte Carlo , and WhyLabs , a startup developing a solution for model monitoring and troubleshooting. Another competitor is Domino Data Lab , a company that claims to prevent AI models from mistakenly exhibiting bias or degrading.
“Right now, modern data teams are held up by the heroics of data engineers, analysts, and data scientists trying to triage data quality incidents after something has already gone wrong. We’ve been the people who have to stay up until 3 a.m. on a Saturday trying to backfill a pipeline — and it doesn’t feel heroic,” cofounder and CEO Kyle Kirwan told VentureBeat via email. “For companies to realize the value of their data, it needs to be effortless for data teams to measure, improve, and communicate data quality for their organizations.” But Bigeye has already successfully courted large customers, including Instacart, Crux Informatics, and Lambda School.
In addition to Sequoia, Costanoa Ventures also participated in Bigeye’s latest funding round. The three-year-old company has 11 employees, and the funds bring its total raised to $21 million.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,095 | 2,021 |
"KPMG: 79% of chip industry expects profits to grow in 2021 amid shortage | VentureBeat"
|
"https://venturebeat.com/2021/03/01/kpmg-79-of-chip-industry-expects-profits-to-grow-in-2021-amid-shortage"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages KPMG: 79% of chip industry expects profits to grow in 2021 amid shortage Share on Facebook Share on X Share on LinkedIn Quantum computer technology concept.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Despite the pandemic and economic downturn, the semiconductor industry grew 6.5% to $439 billion in 2020, and 79% of executives believe profits will increase in 2021, according to a report from accounting firm KPMG and the Global Semiconductor Alliance trade group.
The semiconductor industry will continue growing due to the mainstream growth of the internet of things, 5G wireless networks, and the auto industry, according to the KPMG Global Semiconductor Industry Outlook.
KPMG surveyed 156 senior executives from global semiconductor companies in the fourth quarter. Eighty-five percent of the executives predicted that revenue will continue to increase in 2021, and 73% plan to increase capital spending. Seventy-one percent of respondents said they plan to spend more on research and development.
Sixty-eight percent reported that executing on growth initiatives is their top strategic priority over the next three years. As far as concerns, 53% of executives listed territorialism (concern over territory disputes such as those between the West and China), and 37% cited supply chain disruption.
Forty-four percent of respondents ranked making their supply chains more flexible and adaptable to geopolitical changes and other disruptions as one of their top three strategic priorities.
Future growth Above: KPMG chip executive survey results.
Sixty-three percent expect to increase headcount over the next year, while 30% identified talent risk as an issue facing the industry. Developing and managing talent was rated one of the top three strategic priorities (53%), up 13 percentage points from last year.
Lincoln Clark, partner in charge of KPMG’s global semiconductor practice, said in a statement that the pervasiveness of technology across society and all sectors is accelerating as we undergo profound shifts in home-based work, education, and entertainment. This is driving a surge in demand for chip-based products, and semiconductor companies have been quick to react to the change.
Respondents highlighted the most potential for growth in sensors/micro-electro-mechanical systems (MEMS), analog/radio frequency (RF)/mixed signal, and microprocessors — including graphic processing units (GPU), microcontrollers (MCU), and memory protection units (MPU).
Supply chain risks Above: KPMG is warning about supply chain risks for the chip industry.
Semiconductor companies are not alone in their concerns about the supply chain. The pandemic has triggered an across-the-board reassessment of supply chain resiliency — from businesses to governments — to ensure they are prepared for future crises. U.S. President Joe Biden recently announced a supply chain review after the ongoing pandemic sparked a number of shortages across critical industries. He expressed particular concern about semiconductor supply chain issues.
KPMG urged companies to review their supply chains in light of political and pandemic concerns.
Many carmakers have faced semiconductor shortages, and some have even been forced to close production lines. Automakers have historically relied on just-in-time inventory, and with early COVID-19 shutdowns and demand rising faster than expected in the second half of 2020, they could not ramp up the ability to source sufficient volumes of the necessary semiconductor content fast enough.
It is important for companies to weigh the benefits of “just-in- time” versus “heavier assets-on-hand” inventory approaches. The geographical diversity of supply chains is an important consideration, with more flexible supply chains — and those that can adapt to geopolitical changes — becoming increasingly successful.
The report also suggested chipmakers and their customers reassess the need for redesign or introduction of micro supply chains for critical components, rather than applying one-size-fits-all supply chain procurement models.
Many companies already outsource the manufacturing/assembly of their products or key components to third-party suppliers, many of whom are in low-cost manufacturing countries. Depending on the arrangement, inventory that is held at the supplier location or in transit could become “accounting inventory” on the books. And verifying the completeness, existence, and accuracy of this inventory could present audit challenges. Alternatively, operations that elect the heavier assets-on-hand approach to address just-in-time requirements open themselves up to greater risk of excess or obsolete inventory.
Potential tariffs Companies are also facing a greater risk of tariffs. KPMG said reducing costs and risks associated with rising trade and tariffs across the supply chain is crucial. In the semiconductor industry, for example, some manufacturers have made significant supply chain changes, including sourcing chip content from different geographies, to optimize operations in the current high-tariff environment.
Additionally, nationalist technology and trade policies — particularly by the U.S. and China — may add cost pressure and supply chain complexity. Governments have grown increasingly protective of homegrown intellectual property, especially as it relates to sensitive technology sectors such as 5G and others. With growing frequency, export controls and sanctions are tools to restrict foreign access to advanced hardware, software, and technical data. These controls present significant compliance and operational challenges, and effective management is key to maintaining a market edge, KPMG said.
The pandemic has accelerated the digital transformation in many industries but slowed progress in others. At the same time, it forced many manufacturers and suppliers to update their systems and operating models to accommodate remote workforces and become more efficient and cost-effective.
KPMG said companies should be fully aware of the practices of suppliers, producers, vendors, and partners across the entirety of their supply chain to ensure they meet various compliance requirements.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,096 | 2,021 |
"AI Weekly: Biden calls for $37 billion to address chip shortage | VentureBeat"
|
"https://venturebeat.com/2021/02/26/ai-weekly-biden-calls-for-37-billion-to-address-chip-shortage"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Biden calls for $37 billion to address chip shortage Share on Facebook Share on X Share on LinkedIn U.S. President Joe Biden holds a semiconductor during his remarks before signing an executive order on the economy in the State Dining Room of the White House on February 24, 2021 in Washington, DC.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Shortly after a meeting with members of Congress on Wednesday, President Joe Biden signed an executive order that launches a review of supply chain vulnerabilities in the United States. COVID-19 made evident gaps in the U.S. supply chain in medical equipment like face masks and ventilators, but in a ceremony carried live by TV news networks, Biden held up a chip, calling it the “21st century horseshoe nail.” AI research has received military funding from the outset, and government organizations like DARPA continue to fund AI startups, but a global chip supply shortage caused by COVID-19 has hindered the progress of numerous industries. During his remarks, Biden acknowledged that semiconductor chip shortages impacts products like cars, smartphones, and medical diagnostic equipment. Earlier this month, Ford said the shortage would reduce production by up to 20% in Q1 2021.
Smartphone production is also expected to decline as a result of the chip shortage, and earlier this month, business executives from AMD, Intel, Nvidia, and Qualcomm sent a letter to Biden urging support for the CHIPS for America Act and stating that a chip shortage could interrupt progress for emerging technology areas like AI, 5G, and quantum computing. CHIPS stands for Creating Helpful Incentives to Produce Semiconductors. That bill was introduced in Congress in summer 2020 and called for $22 billion in tax credits and research and development funding. The American Foundries Act , also introduced in Congress last summer, called for $25 billion. As part of the executive order signing ceremony Wednesday, Biden pledged support for $37 billion over an unspecified period described as “short term” and pledged to work with ally nations to address the chip bottleneck. The executive order will also review key minerals and materials, pharmaceuticals, and the kinds of batteries used in electric vehicles.
“We need to prevent the supply chain crisis from hitting in the first place. And in some cases, building resilience will mean increasing our production of certain types of elements here at home. In others, it’ll mean working more closely with our trusted friends and partners, nations that share our values, so that our supply chains can’t be used against us as leverage,” Biden said.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A 2019 U.S. Air Force report put the urgency of the matter in context.
That report finds that “90% of all high-volume, leading-edge [semiconductor] production will soon be based in Taiwan, China, and South Korea.” The Semiconductor Industry Association (SIA) finds that 12% of global semiconductor production takes place in the U.S. today.
Analysts who spoke to VentureBeat found a number of factors contributing to the current chip shortage.
Kevin Krewell is a principal analyst at Tirias Research. He attributes the chip shortage to an initial slump followed by unexpected demand increase, not enough advanced semiconductor manufacturers, the fact that more complex semiconductor processes are hard to scale, and that there’s a long lead time on building new semiconductor manufacturing facilities, or “fabs.” Intel and Samsung being slow to get advanced process nodes out in a timely fashion has put more pressure on TSMC to make more chips, but he expects shortages will get addressed as more capacity comes on line and demand returns to more predictable levels.
“The $37 billion figure is a small start, but it is a start,” he said. Building a single semiconductor manufacturing facility can cost tens of billions of dollars.
Linley Group senior analyst Mike Demler said a fourth quarter growth in car sales caught auto manufacturers off guard, that high demand for consumer electronics during the pandemic rippled through other industries. He also said that the U.S. semiconductor industry wants to use the shortage to increase domestic semiconductor-manufacturing capacity.
“The semiconductor industry has thrived because of the global supply chain. Greater investment in R&D could help restore US technological leadership in manufacturing technology, but it would take many years to shift the ecosystem,” Demler said.
IDC analyst Mario Morales said the chip shortage is a real thing but that some businesses may be blaming that shortage to distract from deeper underlying business problems or poor planning. For example, Ford may be reducing inventory due to a lack of chips, but Toyota has a stockpile.
“I think some of this is just not very good business continuity planning, and that some of this is a reaction to that. And others I think they’re using this as an excuse, because there is some underperformance from some of these vendors,” he said.
When discussing what caused the chip shortage, analysts VentureBeat interviewed talked primarily about COVID-19 and made virtually no mention of China, but you could potentially say the opposite about national security interests in the U.S., the other driver of interest in domestic chip production. The final report from the National Security Commission on AI is due out next week. That group was formed by Congress a few years ago and is made up of some of the most influential AI and business leaders in the United States today, like soon-to-be Amazon CEO Andy Jassy, Google Cloud AI chief Andrew Moore, and former Google CEO Eric Schmidt.
The report calls for the United States to remain “two generations ahead of China,” with $12 billion over the next five years for research, development, and infrastructure. It also supports creation of a national microelectronics research strategy like the kind espoused in the American Foundries Act. The 2021 National Defense Authorization Act created a committee to develop a national microelectronic research strategy.
The report calls for 40% refundable tax credit as well. The CHIPS for America Act also calls for hefty tax credits for semiconductor manufacturers through 2027.
“The dependency of the United States on semiconductor imports, particularly from Taiwan, creates a strategic vulnerability for both its economy and military to adverse foreign government action, natural disaster, and other events that can disrupt the supply chains for electronics,” the draft final report reads. “If a potential adversary bests the United States in semiconductors, it could gain the upper hand in every domain of warfare.” The draft final report echoes calls from the National Security Commission on Artificial Intelligence (NSCAI) for more public-private partnerships around semiconductors.
In testimony before the House Budget committee about how AI will change the economy, NSCAI commissioner and Intelligence Advanced Research Projects Activity (IARPA) director Dr. Jason Matheny said, “It will be very difficult for China to match us if we play our cards right.” “We shouldn’t rest on our laurels, but if we pursue policies that strengthen our semiconductor industry while also placing the appropriate controls on the manufacturing equipment that China doesn’t have and that China currently doesn’t have the ability to produce itself and is probably a decade away from being able to produce itself, we’ll be in a very strong position,” he said.
A Bloomberg analysis found that Chinese spending on computer chip production equipment jumped 20% in 2020 compared to 2019.
Reuters has recorded Chinese chip imports above $300 billion for the past three years.
Advanced semiconductor manufacturing facilities can be more expensive than modern day aircraft carriers , and fabs are only part of the equation. IDC’s Morales agreed with Krewell that $37 billion is a start, but that becoming a leader in manufacturing could take a decade of investment not just in semiconductor manufacturing plants, but also design, IP, and infrastructure.
“The goal should be to collaborate a lot more with other regions that I would say are more neutral,” Morales said. He added that, based on conversations with manufacturers, he expects an end to chip supply chain shortage issues by Q2 or Q3 2021.
We’ll have to wait a few months to see what the review ordered by the Biden administration prescribes to improve resilience when it comes to chip production, but it seems clear that $37 billion may only be the start.
For AI coverage, send news tips to Khari Johnson , Kyle Wiggers , and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.
Thanks for reading, Khari Johnson Senior AI Staff Writer VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,097 | 2,020 |
"Arm and Siemens deploy 'digital twins' to accelerate automotive design | VentureBeat"
|
"https://venturebeat.com/2020/01/06/arm-and-siemens-deploy-digital-twins-to-accelerate-automotive-design"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Arm and Siemens deploy ‘digital twins’ to accelerate automotive design Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
What might chip design company Arm and manufacturing conglomerate Siemens gain from a wide-ranging technology partnership? According to the two titans, which jointly announced the newfound collaboration today during CES 2020, quite a lot. In the coming weeks, Arm says it’ll adapt some of its methodologies and tools to help automakers, integrators, and suppliers bring their next-gen platforms to market. As for Siemens, it asserts that Arm’s intellectual property will help address the challenges facing the automotive industry, specifically with respect to developing platforms that realize advanced driver assistance systems, in-vehicle infotainment platforms, digital cockpits, vehicle-to-vehicle and vehicle-to-infrastructure communications, and self-driving vehicles.
To this end, Siemens’ Pave360 digital twin product — which incorporates Arm technologies — applies high-fidelity modeling techniques, incorporating everything from sensors and integrated circuits to vehicle dynamics and the environment cars operate in. (“Digital twin” in this context refers to virtual automotive chips, such as AI accelerators.) The models can run entire software stacks, providing early metrics of power and performance, and they enable automakers to simulate sub-system designs to better understand how they perform in situ.
The hope is that models like those created by Pave360 will foster the development of integrated circuit designs that allow car manufacturers to consolidate electronic control units. (Electronic control units are the embedded systems in vehicle electronics that control one or more of the electrical systems.) The result could be thousands of dollars in cost savings per vehicle, thanks to a reduction in the number of circuit boards and meters of wire within the vehicle design.
“In all we do at Siemens, our goal is to provide transportation companies and suppliers the most comprehensive digital twin solutions, from the design and development of semiconductors to advanced manufacturing and deployment of vehicles and services within cities,” said Tony Hemmelgarn, president and CEO of Siemens’ digital industries software division, in a statement. “Siemens believes collaboration with Arm is a win for the entire industry. Carmakers, their suppliers, and [integrated circuit] design companies all can benefit from the collaboration, new methodologies, and insight now sparking new innovations.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The “digital twins” approach to modeling has gained currency in other domains, chiefly industry and energy. For instance, London-based SenSat helps clients in construction, mining, energy, and other industries create models of locations relevant to projects they’re working on, translating the real world into a version that can be understood by machines. For its part, GE offers technology that allows companies to model digital twins of actual machines, whose performance is closely tracked. And Oracle offers services that rely on virtual representations of objects, equipment, and work environments.
In fact, the market for digital twin solutions like those developed by SenSat, GE, Oracle, and Siemens is estimated to grow from $3.8 billion in 2019 to $35.8 billion by 2025. According to Report Linker, key factors driving the uptick include the declining time and cost of product development and unplanned downtime with the adoption of digital twins, increasing adoption of emerging technologies such as the internet of things (IoT) and cloud, and growing use of models for predictive maintenance.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,098 | 2,018 |
"Tempo raises $20 million to open connected electronics factory in San Francisco | VentureBeat"
|
"https://venturebeat.com/2018/04/17/tempo-raises-20-million-to-open-connected-electronics-factory-in-san-francisco"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Tempo raises $20 million to open connected electronics factory in San Francisco Share on Facebook Share on X Share on LinkedIn Tempo factory in San Francisco.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
After years of leaning on China to manufacture its electronic wonders at a low cost, Silicon Valley’s factory of the future may soon be located just up the highway in the heart of San Francisco.
Tempo , a low-volume electronics manufacturer, said today that it has raised $20 million to build a “smart” factory in the city. The facility will be 42,000 square feet and lean on Tempo’s advanced software to create an efficient, effective manufacturing process that can deliver affordable gadgets in one of the nation’s most expensive places to live.
The company says that its manufacturing process allows for a flexibility and speed that is critical for companies seeking to churn out new consumer electronics at an accelerated pace.
“Whether they’re building products from rockets to medical devices to autonomous cars, today’s leading companies are racing to get their ideas and concepts to market faster,” said Tempo CEO Jeff McAlvay, in a statement. “Yet the tools to design and manufacture hardware have not improved in decades. When developing new software, it would be unimaginable to have to wait weeks and trade tens of phone calls and emails just to see if your code works or not. Yet that’s the daily experience of electrical engineers today.” The expansion for the manufacturing startup comes amid increased talk of “Industry 4.0.” The dream that sensors, connectivity, artificial intelligence, and new materials will revolutionize all facets of production is starting to be realized across a wide range of industries. Of course, tech companies have in general been under pressure from President Trump to bring more of their overseas manufacturing back to the U.S.
In the case of Tempo, which is currently located in San Francisco’s Dogpatch neighborhood, its manufacturing service helps hardware companies rapidly prototype new products. In its connected factory, customers can upload design information directly to machines in the facilities that allow for a quicker turnaround. The same manufacturing line can be used to build up to 15 different products in one day.
“In the same way that when a consumer makes an order on Amazon, the software is connected to Amazon’s warehouses, inventory systems, robots on the warehouse floor, and then shipping and delivery, Tempo’s connected factory is bringing that same digital thread to custom manufacturing so every step of the process — from design data to machines to material vendors and to technicians — is interconnected, which results in our customers being able to iterate up to 5 times faster,” said Shashank Samala, Tempo’s cofounder and vice president of product, in a statement.
The new factory, which will be located up the road in the Potrero Hill neighborhood, will have five such production lines. Tempo reports that its revenue grew 500 percent last year. With the new money, the company plans to grow its staff from 60 to 100.
The investment round was led by P72 Ventures and included money from existing investors Lux Capital, Uncork Capital, and AME, as well as new investors such as Dolby Ventures, Industry Ventures, and Cendana.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,099 | 2,021 |
"A simple model of the brain provides new directions for AI research | VentureBeat"
|
"https://venturebeat.com/2021/06/03/a-simple-model-of-the-brain-provides-new-directions-for-ai-research"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages A simple model of the brain provides new directions for AI research Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Last week, Google Research held an online workshop on the conceptual understanding of deep learning. The workshop, which featured presentations by award-winning computer scientists and neuroscientists, discussed how new findings in deep learning and neuroscience can help create better artificial intelligence systems.
While all the presentations and discussions were worth watching (and I might revisit them again in the coming weeks), one in particular stood out for me: A talk on word representations in the brain by Christos Papadimitriou, professor of computer science at Columbia University.
In his presentation, Papadimitriou, a recipient of the Gödel Prize and Knuth Prize, discussed how our growing understanding of information-processing mechanisms in the brain might help create algorithms that are more robust in understanding and engaging in conversations. Papadimitriou presented a simple and efficient model that explains how different areas of the brain inter-communicate to solve cognitive problems.
“What is happening now is perhaps one of the world’s greatest wonders,” Papadimitriou said, referring to how he was communicating with the audience. The brain translates structured knowledge into airwaves that are transferred across different mediums and reach the ears of the listener, where they are again processed and transformed into structured knowledge by the brain.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “There’s little doubt that all of this happens with spikes, neurons, and synapses. But how? This is a huge question,” Papadimitriou said. “I believe that we are going to have a much better idea of the details of how this happens over the next decade.” Assemblies of neurons in the brain The cognitive and neuroscience communities are trying to make sense of how neural activity in the brain translates to language, mathematics, logic, reasoning, planning, and other functions. If scientists succeed at formulating the workings of the brain in terms of mathematical models, then they will open a new door to creating artificial intelligence systems that can emulate the human mind.
A lot of studies focus on activities at the level of single neurons. Until a few decades ago, scientists thought that single neurons corresponded to single thoughts. The most popular example is the “ grandmother cell ” theory, which claims there’s a single neuron in the brain that spikes every time you see your grandmother. More recent discoveries have refuted this claim and have proven that large groups of neurons are associated with each concept, and there might be overlaps between neurons that link to different concepts.
These groups of brain cells are called “assemblies,” which Papadimitriou describes as “a highly connected, stable set of neurons which represent something: a word, an idea, an object, etc.” Award-winning neuroscientist György Buzsáki describes assemblies as “the alphabet of the brain.” A mathematical model of the brain To better understand the role of assemblies, Papadimitriou proposes a mathematical model of the brain called “interacting recurrent nets.” Under this model, the brain is divided into a finite number of areas, each of which contains several million neurons. There is recursion within each area, which means the neurons interact with each other. And each of these areas has connections to several other areas. These inter-area connections can be excited or inhibited.
This model provides randomness, plasticity, and inhibition. Randomness means the neurons in each brain area are randomly connected. Also, different areas have random connections between them. Plasticity enables the connections between the neurons and areas to adjust through experience and training. And inhibition means that at any moment, a limited number of neurons are excited.
Papadimitriou describes this as a very simple mathematical model that is based on “the three main forces of life.” Along with a group of scientists from different academic institutions, Papadimitriou detailed this model in a paper published last year in the peer-reviewed scientific journal Proceedings of the National Academy of Sciences. Assemblies were the key component of the model and enabled what the scientists called “assembly calculus,” a set of operations that can enable the processing, storing, and retrieval of information.
“The operations are not just pulled out of thin air. I believe these operations are real,” Papadimitriou said. “We can prove mathematically and validate by simulations that these operations correspond to true behaviors… these operations correspond to behaviors that have been observed [in the brain].” Papadimitriou and his colleagues hypothesize that assemblies and assembly calculus are the correct model that explain cognitive functions of the brain such as reasoning, planning, and language.
“Much of cognition could fit that,” Papadimitriou said in his talk at the Google deep learning conference.
Natural language processing with assembly calculus To test their model of the mind, Papadimitriou and his colleagues tried implementing a natural language processing system that uses assembly calculus to parse English sentences. In effect, they were trying to create an artificial intelligence system that simulates areas of the brain that house the assemblies that correspond to lexicon and language understanding.
“What happens is that if a sequence of words excites these assemblies in lex, this engine is going to produce a parse of the sentence,” Papadimitriou said.
The system works exclusively through simulated neuron spikes (as the brain does), and these spikes are caused by assembly calculus operations. The assemblies correspond to areas in the medial temporal lobe, Wernicke’s area, and Broca’s area, three parts of the brain that are highly engaged in language processing. The model receives a sequence of words and produces a syntax tree. And their experiments show that in terms of speed and frequency of neuron spikes, their model’s activity corresponds very closely to what happens in the brain.
The AI model is still very rudimentary and is missing many important parts of language, Papadimitriou acknowledges. The researchers are working on plans to fill the linguistic gaps that exist. But they believe that all these pieces can be added with assembly calculus, a hypothesis that will need to pass the test of time.
“Can this be the neural basis of language? Are we all born with such a thing in [the left hemisphere of our brain],” Papadimitriou asked. There are still many questions about how language works in the human mind and how it relates to other cognitive functions. But Papadimitriou believes that the assembly model brings us closer to understanding these functions and answering the remaining questions.
Language parsing is just one way to test the assembly calculus theory. Papadimitriou and his collaborators are working on other applications, including learning and planning in the way that children do at a very young age.
“The hypothesis is that the assembly calculus—or something like it—fills the bill for access logic,” Papadimitriou said. “In other words, it is a useful abstraction of the way our brain does computation.” Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.
This story originally appeared on Bdtechtalks.com.
Copyright 2021 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,100 | 2,021 |
"The business value of neural networks | VentureBeat"
|
"https://venturebeat.com/2021/05/25/the-business-value-of-neural-networks"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The business value of neural networks Share on Facebook Share on X Share on LinkedIn Brain neural network Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Neural networks are the backbone of algorithms that predict consumer demand, estimate freight arrival time, and more. At a high level, they’re computing systems loosely inspired by the biological networks in the brain. But there’s more to them than that.
Neural networks began rising to prominence in 2010, when it was shown that GPUs make backpropagation feasible for complex neural network architectures. ( Backpropagation is the technique used by a machine learning model to find out the error between a guess and the correct solution, given the correct solution in the data.) Between 2009 and 2012, neural networks began winning prizes in contests, approaching human-level performance on various tasks, initially in pattern recognition and machine learning. Around this time, neural networks won multiple competitions in handwriting recognition without prior knowledge of the languages to be learned.
Now neural networks are used in domains from logistics and customer support to ecommerce retail fulfillment. They power applications with clear business use cases, which has led organizations to increasingly invest in adoption, development, and deployment of neural networks. Enterprise use of AI grew a whopping 270% over the past several years, Gartner recently reported , while Deloitte says 62% of respondents to its corporate October 2018 study adopted some form of AI , up from 53% in 2019.
What are neural networks? A neural network is based on a collection of units or nodes called neurons, which model the neurons in the brain. Each connection can transmit a signal to other neurons, with the receiving neuron performing the processing.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The “signal” at the connection is a real number, or a value of a continuous quantity that can represent a distance along a line. And the output of each neuron is computed by some function of the sum of its inputs.
The connections in neural networks are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds, such that the weight increases or decreases the strength of the signal at a connection. Typically, neurons are aggregated into layers, and different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), sometimes after traversing the layers multiple times. And some neurons have thresholds that must be exceeded before they send a signal.
Neural networks learn — i.e., are “trained” — by processing examples. Each example contains a known “input” and a “result,” which are both stored within the data structure of the neural network itself. Training a neural network from example usually involves determining the difference between the output of the network (often a prediction) and a target output. This is the error. The network then adjusts its associations according to a learning rule, using this error value.
Adjustments will cause the neural network to produce an output that is increasingly similar to the target output. After a sufficient number of these adjustments, the training can be terminated based upon certain criteria. Such systems “learn” to perform tasks by considering examples, generally without being programmed with task-specific rules. For instance, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the results to identify cats in other images.
Applications Neural networks are used in a number of business applications, including decision-making, pattern recognition, and sequence recognition. For example, it’s possible to create a semantic profile of a person’s interests from pictures used during object recognition training.
Domains that potentially stand to benefit from neural networks include banking, where AI systems can evaluate credit and loan application evaluation, fraud and risk, loan delinquencies, and attrition. On the business analytics side, neural networks can model customer behavior, purchase, and renewals and segment customers while analyzing credit line usage, loan advising, real estate appraisal, and more. Neural networks can also play a role in transportation, where they’re able to power routing systems, truck brake diagnosis systems, and vehicle scheduling. And in medicine, they can perform cancer cell analysis, emergency room test advisement, and even prosthesis design.
Individual companies are using neural networks in a variety of ways. LinkedIn, for instance, applies neural networks — along with linear text classifiers — to detect spam or abusive content in its feeds. The social network also uses neural nets to help understand the kinds of content shared on LinkedIn, ranging from news articles to jobs to online classes, so it can build better recommendation and search products for members and customers.
Call analytics startup DialogTech also employs neural networks to classify inbound calls into predetermined categories or to assign a lead quality score to calls. A neural network performs these actions based on the call transcriptions and the marketing channel or keyword that drove the call. For example, if a caller who’s speaking with a dental office asks to schedule an appointment, the neural network will seek, find, and classify that phrase as a conversation, providing marketers with insights into the performance of marketing initiatives.
Another business among the many using neural networks is recruitment platform Untapt.
The company uses a neural network trained on millions of data points and hiring decisions to match people to roles where they’re more likely to succeed. “Neural nets and AI have incredible scope, and you can use them to aid human decisions in any sector. Deep learning wasn’t the first solution we tested, but it’s consistently outperformed the rest in predicting and improving hiring decisions,” cofounder and CTO Ed Donner told Smartsheet.
Challenges and benefits Despite their potential, neural networks have shortcomings that can be challenging for organizations to overcome. A common criticism is that they require time-consuming training with high-quality data. Data scientists spend the bulk of their time cleaning and organizing data , according to a 2016 survey conducted by CrowdFlower. And in a recent Alation report , a majority of respondents (87%) pegged data quality issues as the reason their organizations failed to implement AI.
Beyond data challenges, the skills gap presents a barrier to neural network adoption. A majority of respondents in a 2021 Juniper report said their organizations were struggling with expanding their workforce to integrate with AI systems. Unrealistic expectations from the C-suite, another top reason for failure in neural network projects, also contributes to delays in AI deployment.
Issues aside, the benefits of neural networks are tangible — and substantial. Neural networks can solve otherwise intractable problems, such as those that render traditional analytical methods ineffective. Harvard Business Review estimates that 40% of all the potential value created by analytics comes from the AI techniques that fall under the umbrella of deep learning. These leverage multiple layers of neural networks, accounting for between $3.5 trillion and $5.8 trillion in annual value. Gartner anticipates that neural network-powered virtual agents alone will drive $1.2 trillion in business value.
The takeaway is that neural networks have matured to the point of offering real, practical benefits. They’re already essential to supporting decisions, automating work processes, preventing fraud, and performing other key tasks across enterprises. While flawed, they’ll continue developing, which is perhaps why adoption is on the upswing. In a recent KPMG survey , 79% percent of executives said they have a moderately functional AI strategy, while 43% say theirs is fully functional at scale.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,101 | 2,021 |
"Why AI can't solve unknown problems | VentureBeat"
|
"https://venturebeat.com/2021/04/02/why-ai-cant-solve-unknown-problems"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why AI can’t solve unknown problems Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
When will we have artificial general intelligence, the kind of AI that can mimic the human mind in all aspect? Experts are divided on the topic, and answers range anywhere between a few decades and never.
But what everyone agrees on is that current AI systems are a far shot from human intelligence. Humans can explore the world, discover unsolved problems, and think about their solutions. Meanwhile, the AI toolbox continues to grow with algorithms that can perform specific tasks but can’t generalize their capabilities beyond their narrow domains. We have programs that can beat world champions at StarCraft but can’t play a slightly different game at amateur level. We have artificial neural networks that can find signs of breast cancer in mammograms but can’t tell the difference between a cat and a dog. And we have complex language models that can spin thousands of seemingly coherent articles per hour but start to break when you ask them simple logical questions about the world.
In short, each of our AI techniques manages to replicate some aspects of what we know about human intelligence. But putting it all together and filling the gaps remains a major challenge. In his book Algorithms Are Not Enough , data scientist Herbert Roitblat provides an in-depth review of different branches of AI and describes why each of them falls short of the dream of creating general intelligence.
The common shortcoming across all AI algorithms is the need for predefined representations, Roitblat asserts. Once we discover a problem and can represent it in a computable way, we can create AI algorithms that can solve it, often more efficiently than ourselves. It is, however, the undiscovered and unrepresentable problems that continue to elude us.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Representations in symbolic AI Throughout the history of artificial intelligence, scientists have regularly invented new ways to leverage advances in computers to solve problems in ingenious ways. The earlier decades of AI focused on symbolic systems.
Above: Herbert Roitblat, data scientist and author of Algorithms Are Not Enough.
This branch of AI assumes human thinking is based on the manipulation of symbols, and any system that can compute symbols is intelligent. Symbolic AI requires human developers to meticulously specify the rules, facts, and structures that define the behavior of a computer program. Symbolic systems can perform remarkable feats, such as memorizing information, computing complex mathematical formulas at ultra-fast speeds, and emulating expert decision-making. Popular programming languages and most applications we use every day have their roots in the work that has been done on symbolic AI.
But symbolic AI can only solve problems for which we can provide well-formed, step-by-step solutions. The problem is that most tasks humans and animals perform can’t be represented in clear-cut rules.
“The intellectual tasks, such as chess playing, chemical structure analysis, and calculus are relatively easy to perform with a computer. Much harder are the kinds of activities that even a one-year-old human or a rat could do,” Roitblat writes in Algorithms Are Not Enough.
This is called Moravec’s paradox, named after the scientist Hans Moravec, who stated that, in contrast to humans, computers can perform high-level reasoning tasks with very little effort but struggle at simple skills that humans and animals acquire naturally.
“Human brains have evolved mechanisms over millions of years that let us perform basic sensorimotor functions. We catch balls, we recognize faces, we judge distance, all seemingly without effort,” Roitblat writes. “On the other hand, intellectual activities are a very recent development. We can perform these tasks with much effort and often a lot of training, but we should be suspicious if we think that these capacities are what makes intelligence, rather than that intelligence makes those capacities possible.” So, despite its remarkable reasoning capabilities, symbolic AI is strictly tied to representations provided by humans.
Representations in machine learning Machine learning provides a different approach to AI. Instead of writing explicit rules, engineers “train” machine learning models through examples. “[Machine learning] systems could not only do what they had been specifically programmed to do but they could extend their capabilities to previously unseen events, at least those within a certain range,” Roitblat writes in Algorithms Are Not Enough.
The most popular form of machine learning is supervised learning, in which a model is trained on a set of input data (e.g., humidity and temperature) and expected outcomes (e.g., probability of rain). The machine learning model uses this information to tune a set of parameters that map the inputs to outputs. When presented with previously unseen input, a well-trained machine learning model can predict the outcome with remarkable accuracy. There’s no need for explicit if-then rules.
But supervised machine learning still builds on representations provided by human intelligence, albeit one that is more loose than symbolic AI. Here’s how Roitblat describes supervised learning: “[M]achine learning involves a representation of the problem it is set to solve as three sets of numbers. One set of numbers represents the inputs that the system receives, one set of numbers represents the outputs that the system produces, and the third set of numbers represents the machine learning model.” Therefore, while supervised machine learning is not tightly bound to rules like symbolic AI, it still requires strict representations created by human intelligence. Human operators must define a specific problem, curate a training dataset, and label the outcomes before they can create a machine learning model. Only when the problem has been strictly represented in its own way can the model start tuning its parameters.
“The representation is chosen by the designer of the system,” Roitblat writes. “In many ways, the representation is the most crucial part of designing a machine learning system.” One branch of machine learning that has risen in popularity in the past decade is deep learning, which is often compared to the human brain. At the heart of deep learning is the deep neural network, which stacks layers upon layers of simple computational units to create machine learning models that can perform very complicated tasks such as classifying images or transcribing audio.
Above: Deep learning models can perform complicated tasks such as classifying images.
But again, deep learning is largely dependent on architecture and representation. Most deep learning models needs labeled data, and there is no universal neural network architecture that can solve every possible problem. A machine learning engineer must first define the problem they want to solve, curate a large training dataset, and then figure out the deep learning architecture that can solve that problem. During training, the deep learning model will tune millions of parameters to map inputs to outputs. But it still needs machine learning engineers to decide the number and type of layers, learning rate, optimization function, loss function, and other unlearnable aspects of the neural network.
“Like much of machine intelligence, the real genius [of deep learning] comes from how the system is designed, not from any autonomous intelligence of its own. Clever representations, including clever architecture, make clever machine intelligence,” Roitblat writes. “Deep learning networks are often described as learning their own representations, but this is incorrect. The structure of the network determines what representations it can derive from its inputs. How it represents inputs and how it represents the problem-solving process are just as determined for a deep learning network as for any other machine learning system.” Other branches of machine learning follow the same rule. Unsupervised learning, for example, does not require labeled examples. But it still requires a well-defined goal such as anomaly detection in cybersecurity, customer segmentation in marketing, dimensionality reduction, or embedding representations.
Reinforcement learning, another popular branch of machine learning, is very similar to some aspects of human and animal intelligence. The AI agent doesn’t rely on labeled examples for training. Instead, it is given an environment (e.g., a chess or go board) and a set of actions it can perform (e.g., move pieces, place stones). At each step, the agent performs an action and receives feedback from its environment in the form of rewards and penalties. Through trial and error, the reinforcement learning agent finds sequences of actions that yield more rewards.
Computer scientist Richard Sutton describes reinforcement learning as “the first computational theory of intelligence.” In recent years, it has become very popular for solving complicated problems such as mastering computer and board games and developing versatile robotic arms and hands.
Above: Reinforcement learning can solve complicated problems such as playing board and video games and performing robotic manipulations.
But reinforcement learning environments are typically very complex, and the number of possible actions an agent can perform is very large. Therefore, reinforcement learning agents need a lot of help from human intelligence to design the right rewards, simplify the problem, and choose the right architecture. For instance, OpenAI Five, the reinforcement learning system that mastered the online video game Dota 2, relied on its designers simplifying the rules of the game, such as reducing the number of playable characters.
“It is impossible to check, in anything but trivial systems, all possible combinations of all possible actions that can lead to reward,” Roitblat writes. “As with other machine learning situations, heuristics are needed to simplify the problem into something more tractable, even if it cannot be guaranteed to produce the best possible answer.” Here’s how Roitblat summarizes the shortcomings of current AI systems in Algorithms Are Not Enough : “Current approaches to artificial intelligence work because their designers have figured out how to structure and simplify problems so that existing computers and processes can address them. To have a truly general intelligence, computers will need the capability to define and structure their own problems.” Is AI research headed in the right direction? “Every classifier (in fact every machine learning system) can be described in terms of a representation, a method for measuring its success, and a method of updating,” Roitblat told TechTalks over email. “Learning is finding a path (a sequence of updates) through a space of parameter values. At this point, though, we don’t have any method for generating those representations, goals, and optimizations.” There are various efforts to address the challenges of current AI systems. One popular idea is to continue to scale deep learning. The general reasoning is that bigger neural networks will eventually crack the code of general intelligence. After all, the human brain has more than 100 trillion synapses. The biggest neural network to date, developed by AI researchers at Google, has one trillion parameters. And the evidence shows that adding more layers and parameters to neural networks yields incremental improvements, especially in language models such as GPT-3.
But big neural networks do not address the fundamental problems of general intelligence.
“These language models are significant achievements, but they are not general intelligence,” Roitblat says. “Essentially, they model the sequence of words in a language. They are plagiarists with a layer of abstraction. Give it a prompt and it will create a text that has the statistical properties of the pages it has read, but no relation to anything other than the language. It solves a specific problem, like all current artificial intelligence applications. It is just what it is advertised to be — a language model. That’s not nothing, but it is not general intelligence.” Other directions of research try to add structural improvements to current AI structures.
For instance, hybrid artificial intelligence brings symbolic AI and neural networks together to combine the reasoning power of the former and the pattern recognition capabilities of the latter. There are already several implementations of hybrid AI, also referred to as “neuro-symbolic systems,” that show hybrid systems require less training data and are more stable at reasoning tasks than pure neural network approaches.
System 2 deep learning, another direction of research proposed by deep learning pioneer Yoshua Bengio, tries to take neural networks beyond statistical learning. System 2 deep learning aims to enable neural networks to learn “high-level representations” without the need for explicit embedding of symbolic intelligence.
Another research effort is self-supervised learning, proposed by Yann LeCun, another deep learning pioneer and the inventor of convolutional neural networks. Self-supervised learning aims to learn tasks without the need for labeled data and by exploring the world like a child would do.
“I think that all of these make for more powerful problem solvers (for path problems), but none of them addresses the question of how these solutions are structured or generated,” Roitblat says. “They all still involve navigating within a pre-structured space. None of them addresses the question of where this space comes from. I think that these are really important ideas, just that they don’t address the specific needs of moving from narrow to general intelligence.” In Algorithms Are Not Enough , Roitblat provides ideas on what to look for to advance AI systems that can actively seek and solve problems that they have not been designed for. We still have a lot to learn from ourselves and how we apply our intelligence in the world.
“Intelligent people can recognize the existence of a problem, define its nature, and represent it,” Roitblat writes. “They can recognize where knowledge is lacking and work to obtain that knowledge. Although intelligent people benefit from structured instructions, they are also capable of seeking out their own sources of information.” But observing intelligent behavior is easier than creating it, and, as Roitblat told me in our correspondence, “Humans do not always solve their problems in the way that they say/think that they do.” As we continue to explore artificial and human intelligence, we will continue to move toward AGI one step at a time.
“Artificial intelligence is a work in progress. Some tasks have advanced further than others. Some have a way to go. The flaws of artificial intelligence tend to be the flaws of its creator rather than inherent properties of computational decision making. I would expect them to improve over time,” Roitblat said.
Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.
This story originally appeared on Bdtechtalks.com.
Copyright 2021 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,102 | 2,020 |
"DeepMind researchers claim neural networks can outperform neurosymbolic models | VentureBeat"
|
"https://venturebeat.com/2020/12/21/deepmind-researchers-claim-neural-networks-can-outperform-neurosymbolic-models-on-visual-tasks"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind researchers claim neural networks can outperform neurosymbolic models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
So-called neurosymbolic models, which combine algorithms with symbolic reasoning techniques , appear to be much better-suited to predicting, explaining, and considering counterfactual possibilities than neural networks. But researchers at DeepMind claim neural networks can outperform neurosymbolic models under the right testing conditions. In a preprint paper , coauthors describe an architecture for spatiotemporal reasoning about videos in which all components are learned and all intermediate representations are distributed (rather than symbolic) throughout the layers of the neural network. The team says that it surpasses the performance of neurosymbolic models across all questions in a popular dataset, with the greatest advantage on the counterfactual questions.
DeepMind’s research could have implications for the development of machines that can reason about their experiences. Contrary to the conclusions of some previous studies, models based exclusively on distributed representations can indeed perform well on visual-based tasks that measure high-level cognitive functions, according to the researchers — at least to the extent they outperform existing neurosymbolic models.
The neural network architecture proposed in the paper leverages attention to effectively integrate information. (Attention is the mechanism by which the algorithm focuses on a single element or a few elements at a time.) It’s self-supervised, meaning the model must infer masked-out objects in videos using the underlying dynamics to extract more information. And the architecture ensures visual elements in the videos correspond to physical objects, a step the coauthors argue is essential for higher-level reasoning.
The researchers benchmarked their neural network against CoLlision Events for Video REpresentation and Reasoning ( CLEVRER ), a dataset that draws on insights from psychology. CLEVRER contains over 20,000 5-second videos of colliding objects (three shapes of two materials and eight colors) generated by a physics engine and more than 300,000 questions and answers, all focusing on four elements of logical reasoning: descriptive (e.g., “what color”), explanatory (“what’s responsible for”), predictive (“what will happen next”), and counterfactual (“what if”).
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! According to the DeepMind coauthors, their neural network equaled the performance of the best neurosymbolic models without pretraining or labeled data and with 40% less training data, challenging the notion that neural networks are more data-hungry than neurosymbolic models. Moreover, it scored 59.8% on the hardest counterfactual questions — better than both chance and all other models — and it generalized to other tasks including those in CATER, an object-tracking video dataset where the goal is to predict the location of a target object in the final frame.
“Our results … add to a body of evidence that deep networks can replicate many properties of human cognition and reasoning, while benefiting from the flexibility and expressivity of distributed representations,” the coauthors wrote. “Neural models have also had some success in mathematics, a domain that, intuitively, would seem to require the execution of formal rules and manipulation of symbols. Somewhat surprisingly, large-scale neural language models … can acquire a propensity for arithmetic reasoning and analogy-making without being trained explicitly for such tasks, suggesting that current neural network limitations are ameliorated when scaling to more data and using larger, more efficient architectures.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,103 | 2,020 |
"MIT researchers release Clevrer to advance visual reasoning and neurosymbolic AI | VentureBeat"
|
"https://venturebeat.com/2020/04/28/mit-researchers-release-clevrer-to-advance-visual-reasoning-and-neurosymbolic-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MIT researchers release Clevrer to advance visual reasoning and neurosymbolic AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Researchers from Harvard University and MIT-IBM Watson AI Lab have released Clevrer , a data set for evaluating AI models’ ability to recognize causal relationships and carry out reasoning. A paper sharing initial findings about the CoLlision Events for Video REpresentation and Reasoning (Clevrer) data set was published this week at the entirely digital International Conference of Representation Learning (ICLR).
Clevrer builds on Clevr , a data set released in 2016 by a team from Stanford University and Facebook AI Research, including ImageNet creator Dr. Fei-Fei Li, for analyzing the visual reasoning abilities of neural networks. Clevrer cocreators like Chuang Gan of MIT-IBM Watson Lab and Pushmeet Kohli of Deepmind introduced Neuro-Symbolic Concept Learner ( NS-DR ), a neuralsymbolic model applied to Clevr at ICLR one year ago.
“We present a systematic study of temporal and causal reasoning in videos. This profound and challenging problem deeply rooted to the fundamentals of human intelligence has just begun to be studied with ‘modern’ AI tools,” the paper reads. “Our newly introduced Clevrer data set and the NS-DR model are preliminary steps toward this direction.” The data set includes 20,000 synthetic videos of colliding objects on a tabletop created with the Bullet physics simulator, together with a natural language data set of questions and answers about objects in videos. The more than 300,000 questions and answers are categorized as descriptive, explanatory, predictive, and counterfactual.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! MIT-IBM Watson Lab director David Cox told VentureBeat in an interview that he believes the data set can make progress toward creating hybrid AI that combines neural networks and symbolic AI. IBM Research will apply the approach to IT infrastructure management and industrial settings like factories and construction sites, Cox said.
“I think this is actually going to be important for pretty much every kind of application,” Cox said. “The very simple world that we’re seeing are these balls moving around is really the first step on the journey to look at the world, understand that world, be able to make plans about how to make things happen in that world. So we think that’s probably going to be across many domains, and indeed vision and robotics are great places to start.” The MIT-IBM Watson AI Lab was created three years ago as a way to look for disruptive advances in AI related to the general theme of broad AI. Some of that work — like ObjectNet — highlighted the brittle nature of deep learning success stories like ImageNet, but the lab has focused on the combination of neural networks and symbolic or classical AI.
Like neural networks, symbolic AI has been around for decades. Cox argues that just as neural networks waited for the right conditions — enough data, ample compute — symbolic AI was waiting for neural networks in order to experience a resurgence.
Cox says the two forms of AI complement each other well and together can build more robust and reliable models with less data and more energy efficiency. In a conversation with VentureBeat at the start of the year, IBM Research director Dario Gil called neurosymbolic AI one of the top advances expected in 2020.
Rather than map inputs and outputs like neural networks, whatever you want the outcome to be, you can represent knowledge or programs. Cox says this may lead to AI better equipped to solve real-world problems.
“Google has a river of data, Amazon has a river of data, and that’s great, but the vast majority of problems are more like puzzles, and we think that to move forward and actually make AI live beyond the hype we need to build systems that can do that, that have a logical component, can flexibly reconfigure themselves, that can act on the environment and experiments, that can interpret that information, and define their own internal mental models of the world,” Cox said.
The joint MIT-IBM Watson AI Lab was created in 2017 with a $240 million investment.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,104 | 2,021 |
"Survey finds talent gap is slowing enterprise AI adoption | VentureBeat"
|
"https://venturebeat.com/2021/04/19/survey-finds-talent-gap-is-slowing-enterprise-ai-adoption"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Survey finds talent gap is slowing enterprise AI adoption Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
AI’s popularity in the enterprise continues to grow, but practices and maturity remain stagnant as organizations run into obstacles while deploying AI systems. O’Reilly’s 2021 AI Adoption in the Enterprise report , which surveyed more than 3,500 business leaders, found that a lack of skilled people and difficulty hiring topped the list of challenges in AI, with 19% of respondents citing it as a “significant” barrier — revealing how persistent the talent gap might be.
The findings agree with a recent KPMG survey that revealed a large number of organizations have increased their investments in AI to the point that executives are now concerned about moving too quickly. Indeed, Deloitte says 62% of respondents to its corporate October 2018 report adopted some form of AI , up from 53% in 2019. But adoption doesn’t always meet with success, as the roughly 25% of companies that have seen half their AI projects fail will tell you.
The O’Reilly report suggests that the second-most significant barrier to AI adoption is a lack of quality data, with 18% of respondents saying their organization is only beginning to realize the importance of high-quality data. Interestingly, participants in Alation’s State of the Data Culture Report said the same , with a clear majority of employees (87%) pegging data quality issues as the reason their organizations failed to successfully implement AI.
The percentage of respondents to O’Reilly’s survey who reported mature practices (26%) — that is, ones with revenue-bearing AI products — was roughly the same as in the last few years. The industry sector with the highest percentage of mature practices was retail, while education had the lowest percentage. Impediments to maturity ran the gamut but largely centered around a lack of institutional knowledge about machine learning modeling and data science (52%), understanding business use cases (49%), and data engineering (42%).
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Talent gap Laments over the AI talent shortage in the U.S. have become a familiar refrain from private industry. According to a report by Chinese technology company Tencent, there are about 300,000 AI professionals worldwide but “millions” of roles available. In 2018, Element AI estimated that of the 22,000 Ph.D.-educated researchers globally working on AI development and research, only 25% are “well-versed enough in the technology to work with teams to take it from research to application.” And a 2019 Gartner survey found that 54% of chief information officers view this skills gap as the biggest challenge facing their organization.
While higher education enrollment in AI-relevant fields like computer science has risen rapidly in recent years, few colleges have been able to meet student demand, due to a lack of staffing. There’s evidence to suggest the number of instructors is failing to keep pace with demand due to private sector poaching. From 2006 to 2014, the proportion of AI publications with a corporate-affiliated author increased from about 0% to 40% , reflecting the growing movement of researchers from academia to corporations.
One curious trend highlighted in the survey was the share of organizations that say they’ve adopted supervised learning (82%) versus more cutting-edge techniques like self-supervised learning. Supervised learning entails training an AI model on a labeled dataset. By contrast, self-supervised learning generates labels from data by exposing relationships between the data’s parts, a step believed to be critical to achieving human-level intelligence.
Spotlight on supervised learning According to Gartner, supervised learning will remain the type of machine learning organizations leverage most through 2022. That’s because it’s effective in a number of business scenarios, including fraud detection, sales forecasting, and inventory optimization. For example, a model could be fed data from thousands of bank transactions, with each transaction labeled as fraudulent or not, and learn to identify patterns that led to a “fraudulent” or “not fraudulent” output.
“In the past two years, the audience for AI has grown but hasn’t changed much: Roughly the same percentage consider themselves to be part of a ‘mature’ practice; the same industries are represented, and at roughly the same levels; and the geographical distribution of our respondents has changed little,” wrote Mike Loukides, O’Reilly VP of content strategy and the report’s author. “[For example,] relatively few respondents are using version control for data and models … Enterprise AI won’t really have matured until development and operations groups can engage in practices like continuous deployment; until results are repeatable (at least in a statistical sense); and until ethics, safety, privacy, and security are primary rather than secondary concerns.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,105 | 2,021 |
"AI Weekly: The challenges of creating open source AI training datasets | VentureBeat"
|
"https://venturebeat.com/2021/02/19/ai-weekly-the-challenges-of-creating-open-source-ai-training-datasets"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: The challenges of creating open source AI training datasets Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In January, AI research lab OpenAI released Dall-E , a machine learning system capable of creating images to fit any text caption. Given a prompt, Dall-E generates photos for a range of concepts, including cats, logos, and glasses.
The results are impressive, but training Dall-E required building a large-scale dataset that OpenAI has so far opted not to make public. Work is ongoing on an open source implementation, but according to Connor Leahy, one of the data scientists behind the effort, development has stalled because of the challenges in compiling a corpus that respects both moral and legal norms.
“There’s plenty of not-legal-to-scrape data floating around that isn’t [fair use] on platforms like social media, Instagram first and foremost,” Leahy, who’s a member of the volunteer AI research effort EleutherAI , told VentureBeat. “You could scrape that easily at large scale, but that would be against the terms of service, violate people’s consent, and probably scoop up illegal data both due to copyright and other reasons.” Indeed, creating AI training datasets in a privacy-preserving, ethical way remains a major blocker for researchers in the AI community, particularly those who specialize in computer vision. In January 2019, IBM released a corpus designed to mitigate bias in facial recognition algorithms that contained nearly a million photos of people from Flickr. But neither the photographers nor the subjects of the photos were notified by IBM that their work would be included. Separately, an earlier version of ImageNet , a dataset used to train AI systems around the world, was found to contain photos of naked children, porn actresses, college parties, and more — all scraped from the web without those individuals’ consent.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “There are real harms that have emerged from casual repurposing, open-sourcing, collecting, and scraping of biometric data,” said Liz O’Sullivan, cofounder and technology director at the Surveillance Technology Oversight Project, a nonprofit organization litigating and advocating for privacy. “[They] put people of color and those with disabilities at risk of mistaken identity and police violence.” Techniques that rely on synthetic data to train models might lessen the need to create potentially problematic datasets in the first place. According to Leahy, while there’s usually a minimum dataset size needed to achieve good performance on a task, it’s possible to a degree to “trade compute for data” in machine learning. In other words, simulation and synthetic data, like AI-generated photos of people, could take the place of real-world photos from the web.
“You can’t trade infinite compute for infinite data, but compute is more fungible than data,” Leahy said. “I do expect for niche tasks where data collection is really hard, or where compute is super plentiful, simulation to play an important role.” O’Sullivan is more skeptical that synthetic data will generalize well from lab conditions to the real world, pointing to existing research on the topic. In a study last January, researchers at Arizona State University showed that when an AI system trained on a dataset of images of engineering professors was tasked with creating faces, 93% were male and 99% white. The system appeared to have amplified the dataset’s existing biases — 80% of the professors were male and 76% were white.
On the other hand, startups like Hazy and Mostly AI say that they’ve developed methods for controlling the biases of data in ways that actually reduce harm. A recent study published by a group of Ph.D. candidates at Stanford claims the same — the coauthors say their technique allows them to weight certain features as more important in order to generate a diverse set of images for computer vision training.
Ultimately, even where synthetic data might come into play, O’Sullivan cautions that any open source dataset could put people in that set at greater risk. Piecing together and publishing a training dataset is a process that must be undertaken thoughtfully, she says — or not at all, where doing so might result in harm.
“There are significant worries about how this technology impacts democracy and our society at large,” O’Sullivan said.
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading, Kyle Wiggers AI Staff Writer VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,106 | 2,020 |
"Microsoft's SoftNER AI uses unsupervised learning to help triage cloud service outages | VentureBeat"
|
"https://venturebeat.com/2020/07/14/microsofts-softner-ai-uses-unsupervised-learning-to-help-triage-cloud-service-outages"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft’s SoftNER AI uses unsupervised learning to help triage cloud service outages Share on Facebook Share on X Share on LinkedIn Microsoft Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Microsoft is using unsupervised learning techniques to extract knowledge about disruptions to cloud services. In a paper published on the preprint server Arxiv.org, researchers at the company detail SoftNER, a framework that has been deployed internally at Microsoft to collate information regarding 400 storage, compute, and other cloud outages. They claim it eliminates the need to annotate a large amount of training data while scaling to a high volume of timeouts, slow connections, and other product interruptions.
Structured information has inherent value, particularly in the high-stakes cloud and web operations domains. Not only can it be used to build AI models tailored to tasks like triaging, but it can save time and effort for engineers by automating processes like running checks on resources.
The SoftNER framework attempts to extract knowledge by parsing unstructured text, detecting entities in outage descriptions, and classifying entities into categories. It employs components that identify structural patterns in the descriptions to bootstrap training data, as well as label propagation and a multi-task model to generalize data beyond the patterns and extract entities from the descriptions.
SoftNER begins each run with data de-noising. Drawing incident statements, conversations, stack traces, shell scripts, and summaries from sources including Microsoft customers, feature engineers, and automated monitoring systems, SoftNER normalizes descriptions by pruning tables with more than two columns and getting rid of extraneous tags (like HTML tags). It then segments the descriptions into sentences and tokenizes the sentences into words.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! After performing entity tagging (for things like problem types, exception messages, locations, and status codes) and data-type tagging (for IP addresses, URLs, subscription IDs, and more), SoftNER propagates the entity values’ types to all incident descriptions. For example, if the IP address “127.0.0.1” is extracted as a “source IP” entity, it tags all un-tagged occurrences of “127.0.0.1” as “source IP.” In experiments, the researchers evaluated SoftNER’s performance by applying it to 41,000 outages at Microsoft over a two-month span from “large-scale online systems” with “a wide distribution of users,” each containing an average of 472 words. They report that the framework managed to extract 77 valid entities per 100 from descriptions with over 96% accuracy (averaged over 70 distinct entity types). Moreover, they say that SoftNER is accurate enough in downstream tasks to handle automatic triaging at Microsoft.
The researchers say that in the future, they plan to use SoftNER to evaluate bug reports and improve existing incident reporting and management tools. “Incident management is a key part of building and operating largescale cloud services,” they wrote. “We show that the extracted knowledge can be used for building significantly more accurate models for critical incident management tasks.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,107 | 2,023 |
"What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias - The Verge"
|
"https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias"
|
"The Verge homepage The Verge homepage The Verge The Verge logo.
/ Tech / Reviews / Science / Entertainment / More Menu Expand Menu Tech / Artificial Intelligence / Report What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias / A striking image that only hints at a much bigger problem By James Vincent , a senior reporter who has covered AI, robotics, and more for eight years at The Verge.
| Share this story It’s a startling image that illustrates the deep-rooted biases of AI research. Input a low-resolution picture of Barack Obama, the first black president of the United States, into an algorithm designed to generate depixelated faces, and the output is a white man.
It’s not just Obama, either. Get the same algorithm to generate high-resolution images of actress Lucy Liu or congresswoman Alexandria Ocasio-Cortez from low-resolution inputs, and the resulting faces look distinctly white. As one popular tweet quoting the Obama example put it: “This image speaks volumes about the dangers of bias in AI.” But what’s causing these outputs and what do they really tell us about AI bias? First, we need to know a little a bit about the technology being used here. The program generating these images is an algorithm called PULSE , which uses a technique known as upscaling to process visual data. Upscaling is like the “ zoom and enhance ” tropes you see in TV and film, but, unlike in Hollywood, real software can’t just generate new data from nothing. In order to turn a low-resolution image into a high-resolution one, the software has to fill in the blanks using machine learning.
In the case of PULSE, the algorithm doing this work is StyleGAN, which was created by researchers from NVIDIA. Although you might not have heard of StyleGAN before, you’re probably familiar with its work. It’s the algorithm responsible for making those eerily realistic human faces that you can see on websites like ThisPersonDoesNotExist.com ; faces so realistic they’re often used to generate fake social media profiles.
What PULSE does is use StyleGAN to “imagine” the high-res version of pixelated inputs. It does this not by “enhancing” the original low-res image, but by generating a completely new high-res face that, when pixelated, looks the same as the one inputted by the user.
This means each depixelated image can be upscaled in a variety of ways, the same way a single set of ingredients makes different dishes. It’s also why you can use PULSE to see what Doom guy , or the hero of Wolfenstein 3D , or even the crying emoji look like at high resolution. It’s not that the algorithm is “finding” new detail in the image as in the “zoom and enhance” trope; it’s instead inventing new faces that revert to the input data.
This sort of work has been theoretically possible for a few years now, but, as is often the case in the AI world, it reached a larger audience when an easy-to-run version of the code was shared online this weekend. That’s when the racial disparities started to leap out.
PULSE’s creators say the trend is clear: when using the algorithm to scale up pixelated images, the algorithm more often generates faces with Caucasian features.
“This bias is likely inherited from the dataset” “It does appear that PULSE is producing white faces much more frequently than faces of people of color,” wrote the algorithm’s creators on Github.
“This bias is likely inherited from the dataset StyleGAN was trained on [...] though there could be other factors that we are unaware of.” In other words, because of the data StyleGAN was trained on, when it’s trying to come up with a face that looks like the pixelated input image, it defaults to white features.
This problem is extremely common in machine learning, and it’s one of the reasons facial recognition algorithms perform worse on non-white and female faces. Data used to train AI is often skewed toward a single demographic, white men, and when a program sees data not in that demographic it performs poorly. Not coincidentally, it’s white men who dominate AI research.
But exactly what the Obama example reveals about bias and how the problems it represents might be fixed are complicated questions. Indeed, they’re so complicated that this single image has sparked heated disagreement among AI academics, engineers, and researchers.
On a technical level, some experts aren’t sure this is even an example of dataset bias. The AI artist Mario Klingemann suggests that the PULSE selection algorithm itself, rather than the data, is to blame. Klingemann notes that he was able to use StyleGAN to generate more non-white outputs from the same pixelated Obama image, as shown below: These faces were generated using “the same concept and the same StyleGAN model” but different search methods to Pulse, says Klingemann, who says we can’t really judge an algorithm from just a few samples. “There are probably millions of possible faces that will all reduce to the same pixel pattern and all of them are equally ‘correct,’” he told The Verge.
(Incidentally, this is also the reason why tools like this are unlikely to be of use for surveillance purposes. The faces created by these processes are imaginary and, as the above examples show, have little relation to the ground truth of the input. However, it’s not like huge technical flaws have stopped police from adopting technology in the past.) But regardless of the cause, the outputs of the algorithm seem biased — something that the researchers didn’t notice before the tool became widely accessible. This speaks to a different and more pervasive sort of bias: one that operates on a social level.
“People of color are not outliers. We’re not ‘edge cases’ authors can just forget.” Deborah Raji, a researcher in AI accountability, tells The Verge that this sort of bias is all too typical in the AI world. “Given the basic existence of people of color, the negligence of not testing for this situation is astounding, and likely reflects the lack of diversity we continue to see with respect to who gets to build such systems,” says Raji. “People of color are not outliers. We’re not ‘edge cases’ authors can just forget.” The fact that some researchers seem keen to only address the data side of the bias problem is what sparked larger arguments about the Obama image. Facebook’s chief AI scientist Yann LeCun became a flashpoint for these conversations after tweeting a response to the image saying that “ML systems are biased when data is biased,” and adding that this sort of bias is a far more serious problem “in a deployed product than in an academic paper.” The implication being: let’s not worry too much about this particular example.
Many researchers, Raji among them, took issue with LeCun’s framing, pointing out that bias in AI is affected by wider social injustices and prejudices, and that simply using “correct” data does not deal with the larger injustices.
Even “unbiased” data can produce biased results Others noted that even from the point of view of a purely technical fix, “fair” datasets can often be anything but. For example, a dataset of faces that accurately reflected the demographics of the UK would be predominantly white because the UK is predominantly white. An algorithm trained on this data would perform better on white faces than non-white faces. In other words, “fair” datasets can still created biased systems. (In a later thread on Twitter , LeCun acknowledged there were multiple causes for AI bias.) Raji tells The Verge she was also surprised by LeCun’s suggestion that researchers should worry about bias less than engineers producing commercial systems, and that this reflected a lack of awareness at the very highest levels of the industry.
“Yann LeCun leads an industry lab known for working on many applied research problems that they regularly seek to productize,” says Raji. “I literally cannot understand how someone in that position doesn’t acknowledge the role that research has in setting up norms for engineering deployments.” When contacted by The Verge about these comments, LeCun noted that he’d helped set up a number of groups, inside and outside of Facebook, that focus on AI fairness and safety, including the Partnership on AI. “I absolutely never, ever said or even hinted at the fact that research does not play a role is setting up norms,” he told The Verge.
Many commercial AI systems, though, are built directly from research data and algorithms without any adjustment for racial or gender disparities. Failing to address the problem of bias at the research stage just perpetuates existing problems.
In this sense, then, the value of the Obama image isn’t that it exposes a single flaw in a single algorithm; it’s that it communicates, at an intuitive level, the pervasive nature of AI bias. What it hides , however, is that the problem of bias goes far deeper than any dataset or algorithm. It’s a pervasive issue that requires much more than technical fixes.
As one researcher, Vidushi Marda, responded on Twitter to the white faces produced by the algorithm: “In case it needed to be said explicitly - This isn’t a call for ‘diversity’ in datasets or ‘improved accuracy’ in performance - it’s a call for a fundamental reconsideration of the institutions and individuals that design, develop, deploy this tech in the first place.” Update, Wednesday, June 24: This piece has been updated to include additional comment from Yann LeCun.
Sam Altman fired as CEO of OpenAI Breaking: OpenAI board in discussions with Sam Altman to return as CEO Windows is now an app for iPhones, iPads, Macs, and PCs Screens are good, actually What happened to Sam Altman? Verge Deals / Sign up for Verge Deals to get deals on products we've tested sent to your inbox daily.
From our sponsor Advertiser Content From More from Tech Amazon, Microsoft, and India crack down on tech support scams Amazon eliminated plastic packaging at one of its warehouses Amazon has renewed Gen V for a sophomore season Universal Music sues AI company Anthropic for distributing song lyrics Advertiser Content From Terms of Use Privacy Notice Cookie Policy Do Not Sell Or Share My Personal Info Licensing FAQ Accessibility Platform Status How We Rate and Review Products Contact Tip Us Community Guidelines About Ethics Statement The Verge is a vox media network Advertise with us Jobs @ Vox Media © 2023 Vox Media , LLC. All Rights Reserved
"
|
1,108 | 2,020 |
"Facebook says it will look for racial bias in its algorithms | MIT Technology Review"
|
"https://www.technologyreview.com/2020/07/22/1005532/facebook-says-it-will-look-for-racial-bias-in-its-algorithms"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Facebook says it will look for racial bias in its algorithms By Will Douglas Heaven archive page NeONBRAND / Unsplash The news: Facebook says it is setting up new internal teams to look for racial bias in the algorithms that drive its main social network and Instagram, according to the Wall Street Journal. In particular, the investigations will address the adverse effects of machine learning—which can encode implicit racism in training data—on Black, Hispanic, and other minority groups.
Why it matters: In the last few years, increasing numbers of researchers and activists have highlighted the problem of bias in AI and the disproportionate impact it has on minorities.
Facebook, which uses machine learning to curate the daily experience of its 2.5 billion users, is well overdue for an internal assessment of this kind. There is already evidence that Facebook’s ad-serving algorithms discriminate by race and allow advertisers to stop specific racial groups from seeing their ads, for example.
Under pressure: Facebook has a history of dodging accusations of bias in its systems. It has taken several years of bad press and pressure from civil rights groups to get to this point. Facebook has set up these teams after a month-long advertising boycott organized by civil rights groups—including the Anti-Defamation League, Color of Change, and the NAACP—that led big spenders like Coca-Cola, Disney, McDonald’s, and Starbucks to suspend their campaigns.
No easy fix: The move is welcome. But launching an investigation is a far cry from actually fixing the problem of racial bias, especially when nobody really knows how to fix it. In most cases, bias exists in the training data and there are no good agreed-on ways to remove it. And adjusting that data—a form of algorithmic affirmative action —is controversial. Machine-learning bias is also just one of social media’s problems around race. If Facebook is going to look at its algorithms, it should be part of a wider overhaul that also grapples with policies that give platforms to racist politicians, white-supremacist groups, and Holocaust deniers.
"We will continue to work closely with Facebook’s Responsible AI team to ensure we are looking at potential biases across our respective platforms," says Stephanie Otway, a spokesperson for Instagram. "It’s early days and we plan to share more details on this work in the coming months." hide by Will Douglas Heaven Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard Deepfakes of Chinese influencers are livestreaming 24/7 Zeyi Yang How to fix the internet Katie Notopoulos Deep Dive Artificial intelligence This new data poisoning tool lets artists fight back against generative AI The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models.
By Melissa Heikkilä archive page Deepfakes of Chinese influencers are livestreaming 24/7 With just a few minutes of sample video and $1,000, brands never have to stop selling their products.
By Zeyi Yang archive page Driving companywide efficiencies with AI Advanced AI and ML capabilities revolutionize how administrative and operations tasks are done.
By MIT Technology Review Insights archive page Rogue superintelligence and merging with machines: Inside the mind of OpenAI’s chief scientist An exclusive conversation with Ilya Sutskever on his fears for the future of AI and why they’ve made him change the focus of his life’s work.
By Will Douglas Heaven archive page Stay connected Illustration by Rose Wong Get the latest updates from MIT Technology Review Discover special offers, top stories, upcoming events, and more.
Enter your email Thank you for submitting your email! It looks like something went wrong.
We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive.
The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window
"
|
1,109 | 2,021 |
"Facebook's new computer vision model achieves state-of-the-art performance by learning from random images | VentureBeat"
|
"https://venturebeat.com/2021/03/04/facebooks-new-computer-vision-model-achieves-state-of-the-art-performance-by-learning-from-random-images"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook’s new computer vision model achieves state-of-the-art performance by learning from random images Share on Facebook Share on X Share on LinkedIn Facebook SEER Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Facebook today announced an AI model trained on a billion images that ostensibly achieves state-of-the-art results on a range of computer vision benchmarks. Unlike most computer vision models, which learn from labeled datasets, Facebook’s generates labels from data by exposing the relationships between the data’s parts — a step believed to be critical to someday achieving human-level intelligence.
The future of AI lies in crafting systems that can make inferences from whatever information they’re given without relying on annotated datasets. Provided text, images, or another type of data, an AI system would ideally be able to recognize objects in a photo, interpret text, or perform any of the countless other tasks asked of it.
Facebook claims to have made a step toward this with a computer vision model called SEER, which stands for SElf-supERvised. SEER contains a billion parameters and can learn from any random group of images on the internet without the need for curation or annotation. Parameters, a fundamental part of machine learning systems, are the part of the model derived from historical training data.
New techniques Self-supervision for vision is a challenging task. With text, semantic concepts can be broken up into discrete words, but with images, a model must decide for itself which pixel belongs to which concept. Making matters more challenging, the same concept will often vary between images. Grasping the variation around a single concept, then, requires looking at a lot of different images.
Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Facebook researchers found that scaling AI systems to work with complex image data required at least two core components. The first was an algorithm that could learn from a vast number of random images without any metadata or annotations, while the second was a convolutional network — ConvNet — large enough to capture and learn every visual concept from this data. Convolutional networks, which were first proposed in the 1980s, are inspired by biological processes, in that the connectivity pattern between components of the model resembles the visual cortex.
In developing SEER, Facebook took advantage of an algorithm called SwAV, which was borne out of the company’s investigations into self-supervised learning. SwAV uses a technique called clustering to rapidly group images from similar visual concepts and leverage their similarities, improving over the previous state-of-the-art in self-supervised learning while requiring up to 6 times less training time.
Above: A simplified schematic showing SEER’s model architecture.
Training models at SEER’s size also required an architecture that was efficient in terms of runtime and memory without compromising on accuracy, according to Facebook. The researchers behind SEER opted to use RegNets, or a type of ConvNet model capable of scaling to billions or potentially trillions of parameters while fitting within runtime and memory constraints.
Facebook software engineer Priya Goyal said SEER was trained on 512 NVIDIA V100 GPUs with 32GB of RAM for 30 days.
The last piece that made SEER possible was a general-purpose library called VISSL, short for VIsion library for state-of-the-art Self Supervised Learning. VISSL, which Facebook is open-sourcing today, allows for self-supervised training with a variety of modern machine learning methods. The library facilitates self-supervised learning at scale by integrating algorithms that reduce the per-GPU memory requirement and increase the training speed of any given model.
Performance and future work After pretraining on a billion public Instagram images, SEER outperformed the most advanced state-of-the-art self-supervised systems, Facebook says. SEER also outperformed models on tasks including object detection, segmentation, and image classification. When trained with just 10% of the examples in the popular ImageNet dataset, SEER still managed to hit 77.9% accuracy. And when trained with just 1%, SEER was 60.5% accurate.
When asked whether the Instagram users whose images were used to train SEER were notified or given an opportunity to opt out of the research, Goyal noted that Facebook informs Instagram account holders in its data policy that it uses information like pictures to support research, including the kind underpinning SEER. That said, Facebook doesn’t plan to share the images or the SEER model itself, in part because the model might contain unintended biases.
“Self-supervised learning has long been a focus for Facebook AI because it enables machines to learn directly from the vast amount of information available in the world, rather than just from training data created specifically for AI research,” Facebook wrote in a blog post. “Self-supervised learning has incredible ramifications for the future of computer vision, just as it does in other research fields. Eliminating the need for human annotations and metadata enables the computer vision community to work with larger and more diverse datasets, learn from random public images, and potentially mitigate some of the biases that come into play with data curation. Self-supervised learning can also help specialize models in domains where we have limited images or metadata, like medical imaging. And with no labor required up front for labeling, models can be created and deployed quicker, enabling faster and more accurate responses to rapidly evolving situations.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,110 | 2,020 |
"Researchers show that computer vision algorithms pretrained on ImageNet exhibit multiple, distressing biases | VentureBeat"
|
"https://venturebeat.com/2020/11/03/researchers-show-that-computer-vision-algorithms-pretrained-on-imagenet-exhibit-multiple-distressing-biases"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Researchers show that computer vision algorithms pretrained on ImageNet exhibit multiple, distressing biases Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
State-of-the-art image-classifying AI models trained on ImageNet , a popular (but problematic ) dataset containing photos scraped from the internet, automatically learn humanlike biases about race, gender, weight, and more. That’s according to new research from scientists at Carnegie Mellon University and George Washington University, who developed what they claim is a novel method for quantifying biased associations between representations of social concepts (e.g., race and gender) and attributes in images. When compared with statistical patterns in online image datasets, the findings suggest models automatically learn bias from the way people are stereotypically portrayed on the web.
Companies and researchers regularly use machine learning models trained on massive internet image datasets. To reduce costs, many employ state-of-the-art models pretrained on large corpora to help achieve other goals, a powerful approach called transfer learning. A growing number of computer vision methods are unsupervised, meaning they leverage no labels during training; with fine-tuning, practitioners pair general-purpose representations with labels from domains to accomplish tasks like facial recognition, job candidate screening, autonomous vehicles, and online ad delivery.
Working from the hypothesis that image representations contain biases corresponding to stereotypes of groups in training images, the researchers adapted bias tests designed for contextualized word embedding to the image domain. (Word embeddings are language modeling techniques where words from a vocabulary are mapped to vectors of real numbers, enabling models to learn from them.) Their proposed benchmark — Image Embedding Association Test (iEAT) — modifies word embedding tests to compare pooled image-level embeddings (i.e., vectors representing images), with the goal of measuring the biases embedded during unsupervised pretraining by comparing the association of embeddings systematically.
To explore what kinds of biases may get embedded in image representations generated where class labels aren’t available, the researchers focused on two computer vision models published this past summer: OpenAI’s iGPT and Google’s SimCLRv2. Both were pretrained on ImageNet 2012, which contains 1.2 million annotated images from Flickr and other photo-sharing sites of 200 object classes. And as the researchers explain, both learn to produce embeddings based on implicit patterns in the entire training set of image features.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The researchers compiled a representative set of image stimuli for categories like “age,” “gender-science,” “religion,” “sexuality,” “weight,” “disability,” “skin tone,” and “race.” For each, they drew representative images from Google Images, the open source CIFAR-100 dataset, and other sources.
In experiments, the researchers say they uncovered evidence iGPT and SimCLRv2 contain “significant” biases likely attributable to ImageNet’s data imbalance. Previous research has shown that ImageNet unequally represents race and gender; for instance, the “groom” category shows mostly white people.
Both iGPT and SimCLRv2 showed racial prejudices both in terms of valence (i.e., positive and negative emotions) and stereotyping. Embeddings from iGPT and SimCLRv2 exhibited bias for an Arab-Muslim iEAT benchmark measuring whether images of Arab Americans were considered more “pleasant” or “unpleasant” than others. iGPT was biased in a skin tone test comparing perceptions of faces of lighter and darker tones. (Lighter tones were seen by the model to be more “positive.”) And both iGPT and SimCLRv2 associated white people with tools while associating Black people with weapons, a bias similar to that shown by Google Cloud Vision, Google’s computer vision service, which was found to label images of dark-skinned people holding thermometers “gun.” Beyond racial prejudices, the coauthors report that gender and weight biases plague the pretrained iGPT and SimCLRv2 models. In a gender-career iEAT test estimating the closeness of the category “male” with “business” and “office” and “female” to attributes like “children” and “home,” embeddings from the models were stereotypical. In the case of iGPT, a gender-science benchmark designed to judge the relations of “male” with “science” attributes like math and engineering and “female” with “liberal arts” attributes like art showed similar bias. And iGPT displayed a bias toward lighter-weight people of all genders and races, associating thin people with pleasantness and overweight people with unpleasantness.
The researchers also report that the next-level prediction features of iGPT were biased against women in their tests. To demonstrate, they cropped portraits of women and men including Alexandria Ocasio-Cortez (D-NY) below the neck and used iGPT to generate different complete images. iGPT completions of regular, businesslike indoor and outdoor portraits of clothed women and men often featured large breasts and bathing suits; in six of the ten total portraits tested, at least one of the eight completions showed a bikini or low-cut top.
The results are unfortunately not surprising — countless studies have shown that facial recognition is susceptible to bias.
A paper last fall by University of Colorado, Boulder researchers demonstrated that AI from Amazon, Clarifai, Microsoft, and others maintained accuracy rates above 95% for cisgender men and women but misidentified trans men as women 38% of the time. Independent benchmarks of major vendors’ systems by the Gender Shades project and the National Institute of Standards and Technology (NIST) have demonstrated that facial recognition technology exhibits racial and gender bias and have suggested that current facial recognition programs can be wildly inaccurate, misclassifying people upwards of 96% of the time.
However, efforts are underway to make ImageNet more inclusive and less toxic.
Last year, the Stanford, Princeton, and University of North Carolina team behind the dataset used crowdsourcing to identify and remove derogatory words and photos. They also assessed the demographic and geographic diversity in ImageNet photos and developed a tool to surface more diverse images in terms of gender, race, and age.
“Though models like these may be useful for quantifying contemporary social biases as they are portrayed in vast quantities of images on the internet, our results suggest the use of unsupervised pretraining on images at scale is likely to propagate harmful biases,” the Carnegie Mellon and George Washington University researchers wrote in a paper detailing their work, which hasn’t been peer-reviewed. “Given the high computational and carbon cost of model training at scale, transfer learning with pre-trained models is an attractive option for practitioners. But our results indicate that patterns of stereotypical portrayal of social groups do affect unsupervised models, so careful research and analysis is needed before these models make consequential decisions about individuals and society.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,111 | 2,020 |
"MIT researchers find 'systematic' shortcomings in ImageNet data set | VentureBeat"
|
"https://venturebeat.com/2020/07/15/mit-researchers-find-systematic-shortcomings-in-imagenet-data-set"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MIT researchers find ‘systematic’ shortcomings in ImageNet data set Share on Facebook Share on X Share on LinkedIn Massachusetts Institute of Technology (MIT) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
MIT researchers have concluded that the well-known ImageNet data set has “systematic annotation issues” and is misaligned with ground truth or direct observation when used as a benchmark data set.
“Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for,” the researchers write in a paper titled “From ImageNet to Image Classification: Contextualizing Progress on Benchmarks.” “We believe that developing annotation pipelines that better capture the ground truth while remaining scalable is an important avenue for future research.” When the Stanford University Vision Lab introduced ImageNet at the Conference on Computer Vision and Pattern Recognition (CVPR) in 2009 , it was much larger than many previously existing image data sets. The ImageNet data set contains millions of photos and was assembled over the span of more than two years.
ImageNet uses the WordNet hierarchy for data labels and is widely used as a benchmark for object recognition models. Until 2017, annual competitions with ImageNet also played a role in advancing the field of computer vision.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But after closely examining ImageNet’s “benchmark task misalignment,” the MIT team found that about 20% of ImageNet photos include multiple objects. Their analysis across multiple object recognition models revealed that having multiple objects in a photo can lead to a 10% drop in general accuracy. At the core of these issues, the authors said, are the data collection pipelines used to create large-scale image data sets like ImageNet.
“Overall, this [annotation] pipeline suggests that the single ImageNet label may not always be enough to capture the ImageNet image content. However, when we train and evaluate, we treat these labels as the ground truth,” report coauthor and MIT Ph.D. candidate Shibani Santurkar said in an International Conference on Machine Learning (ICML) presentation on the work. “Thus, this could cause a misalignment between the ImageNet benchmark and the real-world object recognition task, both in terms of features that we encourage our models to do [and] how we assess their performance.” According to the researchers, an ideal approach for a large-scale image data set would be to collect images of individual objects in the world and have experts label them in exact categories, but that’s not cheap or easy to scale. Instead, ImageNet collected images from search engines and sites like Flickr. Images scraped from the internet search engine were then reviewed by annotators from Amazon’s Mechanical Turk. The researchers note that Mechanical Turk employees who labeled ImageNet photos were directed to focus on a single object and ignore other objects or occlusions. Other large-scale image data sets have followed a similar — and potentially problematic — pipeline, the researchers said.
To evaluate ImageNet, the researchers created a pipeline that asked human data labelers to choose from multiple labels and pick one that was most relevant to the photo. The most frequently selected label was then used to train models to determine what the researchers call an “absolute ground truth.” “The key idea that we leverage is to actually augment the ImageNet labels using model predictions. Specifically, we take a wide range of models and aggregate their top five predictions to get a set of candidate labels,” Santurkar said. “Then we actually determine the validity of these labels by using human annotators, but instead of asking them whether a single label is valid, we repeat this process independently for multiple labels. This allows us to determine the set of labels that could be relevant for a single image.” But the team cautions that their approach is not a perfect match for ground truth since they also used non-expert data labelers. They conclude that it can be difficult for human annotators who are not experts to accurately label images in some instances. Choosing from one of 24 breeds of terriers could be difficult unless you’re a dog expert, for example.
The team’s paper was accepted for publication at ICML this week after being initially published in late May.
The paper’s presentation at the conference followed MIT’s decision to remove the 80 Million Tiny Images data set from the internet and ask researchers with copies of the data set to delete them. These measures were taken after researchers drew attention to offensive labels in the data set, like the N-word, as well as sexist terms for women and other derogatory labels. Researchers who audited the 80 Million Tiny Images data set, which was released in 2006, concluded that these labels were incorporated as a result of the WordNet hierarchy.
ImageNet also used the WordNet hierarchy, and in a paper published at the ACM FaccT conference , ImageNet creators said they plan to remove virtually all of about 2,800 categories in the person subtree of the data set. They also cited other problems with the data set, such as a lack of image diversity.
Beyond large-scale image data sets used to train and benchmark models, the shortcomings of large-scale text data sets was a key theme at the Association of Computational Linguistics (ACL) conference earlier this month.
In other ImageNet-related news, Richard Socher left his job as Salesforce chief scientist this week to launch his own company. Socher helped compile the ImageNet data set in 2009 and oversaw the launch of the first cloud AI services at the company, as well as overseeing Salesforce Research.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
1,112 | 2,020 |
"MIT takes down 80 Million Tiny Images data set due to racist and offensive content | VentureBeat"
|
"https://venturebeat.com/2020/07/01/mit-takes-down-80-million-tiny-images-data-set-due-to-racist-and-offensive-content"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MIT takes down 80 Million Tiny Images data set due to racist and offensive content Share on Facebook Share on X Share on LinkedIn Massachusetts Institute of Technology (MIT) in Boston Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Creators of the 80 Million Tiny Images data set from MIT and NYU took the collection offline this week, apologized, and asked other researchers to refrain from using the data set and delete any existing copies. The news was shared Monday in a letter by MIT professors Bill Freeman and Antonio Torralba and NYU professor Rob Fergus published on the MIT CSAIL website.
Introduced in 2006 and containing photos scraped from internet search engines, 80 Million Tiny Images was recently found to contain a range of racist, sexist, and otherwise offensive labels, such as nearly 2,000 images labeled with the N-word, and labels like “rape suspect” and “child molester.” The data set also contained pornographic content like non-consensual photos taken up women’s skirts. Creators of the 79.3 million-image data set said it was too large and its 32 x 32 images too small, making visual inspection of the data set’s complete contents difficult. According to Google Scholar, 80 Million Tiny Images has been cited more 1,700 times.
Above: Offensive labels found in the 80 Million Tiny Images data set “Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community — precisely those that we are making efforts to include,” the professors wrote in a joint letter. “It also contributes to harmful biases in AI systems trained on such data. Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold.” The trio of professors say the data set’s shortcomings were brought to their attention by an analysis and audit published late last month (PDF) by University of Dublin Ph.D. student Abeba Birhane and UnifyID chief scientist Vinay Prabhu. The authors say their assessment is the first known critique of 80 Million Tiny Images.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The paper authors and the 80 Million Tiny Images creators say part of the problem comes from automated data collection and nouns from the WordNet data set for semantic hierarchy. Before the data set was taken offline, the coauthors suggested the creators of 80 Million Tiny Images do as ImageNet creators did and assess labels used in the people category of the data set. The paper finds that large-scale image data sets erode privacy and can have a disproportionately negative impact on women, racial and ethnic minorities, and communities at the margin of society.
Birhane and Prabhu assert that the computer vision community must begin having more conversations about the ethical use of large-scale image data sets now, in part due to the growing availability of image-scraping tools and reverse image search technology. Citing previous work like the Excavating AI analysis of ImageNet , analysis of large-scale image data sets shows it’s not just a matter of data, but of a culture in academia and the industry that permits the creation of large-scale data sets without the consent of participants “under the guise of anonymization.” “We posit that the deeper problems are rooted in the wider structural traditions, incentives, and discourse of a field that treats ethical issues as an afterthought. A field where in the wild is often a euphemism for without consent. We are up against a system that has veritably mastered ethics shopping, ethics bluewashing, ethics lobbying, ethics dumping, and ethics shirking,” the paper states.
To create more ethical large-scale image data sets, Birhane and Prabhu suggest: Blur the faces of people in data sets Do not use Creative Commons licensed material Collect imagery with clear consent from data set participants Include a data set audit card with large-scale image data sets, akin to the model cards Google AI uses and the datasheets for data sets Microsoft Research proposed The work incorporates Birhane’s previous work on relational ethics , which urges creators of machine learning systems to begin by speaking with the people most affected by those systems and suggests concepts of bias, fairness, and justice are moving targets.
ImageNet was introduced at CVPR in 2009 and is widely considered important to the advancement of computer vision and machine learning.
Whereas some of the largest data sets could previously be counted in the tens of thousands, ImageNet contains more than 14 million images. The ImageNet Large Scale Visual Recognition Challenge ran from 2010 to 2017 and led to the launch of a variety of startups, including Clarifai and MetaMind, a company Salesforce acquired in 2016.
According to Google Scholar, ImageNet has been cited nearly 17,000 times.
As part of a series of changes detailed in December 2019 , ImageNet creators, including lead author Jia Deng and Dr. Fei-Fei Li, found that 1,593 of the 2,832 people categories in the data set potentially contain offensive labels, which they said they plan to remove.
“We indeed celebrate ImageNet’s achievement and recognize the creators’ efforts to grapple with some ethical questions. Nonetheless, ImageNet as well as other large image datasets remain troublesome,” the Birhane and Prabhu paper reads.
Updated 5:13 am July 15: This story was edited due to the fact that the original version stated that Salesforce acquired MetaMind in 2017 when in fact Salesforce acquired MetaMind in 2016.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
Subsets and Splits
Wired Articles Filtered
Retrieves up to 100 entries from the train dataset where the URL contains 'wired' but the text does not contain 'Menu', providing basic filtering of the data.