id · int64 · range 0–17.2k
year · int64 · range 2k–2.02k
title · string · length 7–208
url · string · length 20–263
text · string · length 852–324k
13567
2017
"U.S. Panel Endorses Designer Babies to Avoid Serious Disease | MIT Technology Review"
"https://www.technologyreview.com/s/603633/us-panel-endorses-designer-babies-to-avoid-serious-disease"
"U.S. Panel Endorses Designer Babies to Avoid Serious Disease. By Antonio Regalado. Since its invention four years ago, a powerful and precise technology for editing DNA called CRISPR has transformed science because it makes altering the genetic makeup of plants and animals easier than ever before. But no possibility opened by gene-editing technology has been as exciting, frightening, or hotly contested as its capacity to allow humanity, for the first time, to control the genetic constitution of children by applying CRISPR to human embryos, sperm, or eggs—cells which together make up the “germ line.” On Tuesday, in a striking acknowledgement that humanity is on the cusp of genetically modified children, a panel of the National Academy of Sciences, the nation’s source of blue-ribbon advice on science policy, recommended that germ-line modification of human beings be permitted in the future in certain narrow circumstances to prevent the birth of children with serious diseases. “Heritable germline genome editing trials must be approached with caution, but caution does not mean that they must be prohibited,” according to a 216-page report released today, researched and written over the course of a year by a 22-member panel of prominent scientists and experts. The recommendations came freighted with moral and technical caveats, however. The panel believes it will be many years before germ-line engineering is safe enough to consider. 
The panel also said it should proceed only under “stringent oversight,” and drew a bright line between preventing disease and “enhancements” like attempting to alter genes to make people more intelligent, which it said should not be pursued “at this time.” Despite the cautious language, the panel’s endorsement of GM humans could prove politically explosive, and puts the academy’s experts in conflict with existing legislation in Europe and the U.S. as well as with swaths of the public who oppose the idea of modifying the human genome from birth out of religious conviction or for other reasons. Germ-line modification is already prohibited as a practical matter in the U.S. In 2015, pro-life legislators added a rider to the U.S. Department of Health and Human Services appropriations bill, which forbids the U.S. Food and Drug Administration from considering any proposal to create genetically modified offspring. The legislation, which has to be renewed periodically, means that any proposal to modify an embryo and create a child would be ignored and could not legally proceed in the U.S. In contrast, the academy panel argued that germ-line editing should be allowed in narrow cases where it is the only option for “preventing a serious disease or condition.” For instance, a couple who each suffers from beta thalassemia might only have healthy children free from the inherited blood disorder if they were able to produce embryos in which the genetic defect was corrected using gene editing. The report acknowledges that such circumstances might be exceedingly rare. “They show a narrow but clear path to future clinical use,” says Tetsuya Ishii, a bioethicist at Hokkaido University in Japan who tracks global legislation on germ-line modification. He says the report also provides a justification for laboratory research already occurring in China, Sweden, and the U.K. in which gene editing is being applied to human embryos to explore its potential. 
“They want to show that basic research toward severe disease prevention would be permissible,” he says. The report’s authors struggled with how legitimate medical applications could be encouraged while still preventing “a slippery slope toward less compelling or even antisocial uses” like enhancement of height, looks, or intelligence. They addressed that problem by arguing that no form of germ-line editing should be allowed if a country’s regulators can’t also guarantee the technology won’t be misused for “enhancement” of human beings. “They have said there is one narrow corner, a tiny fraction of cases, where it might be the right thing to do,” says Eric Lander, head of the Broad Institute in Cambridge, Massachusetts, which has invested heavily in developing CRISPR technology. “What is fascinating is their argument that if we can’t control where it goes from there, we shouldn’t do it at all.” It’s not clear how such a policy, which Lander calls “the ‘no slippery slopes’ recommendation,” would be implemented. Other technologies considered dangerous, like nuclear weapons, are monitored by a complex combination of technical bodies, international diplomacy, sanctions, and military threats. Lander says the Broad Institute is “uneasy” with germ-line therapy. It controls more than a dozen patents on CRISPR, which it has licensed to biotech companies, but with a requirement that they don’t use it for germ-line modification. “We didn’t want to be licensing technology for germ-line editing ahead of society reaching consensus, and we are still very far from a consensus,” Lander says. The report draws a sharp distinction between modifying embryos and modifying the DNA of adults and children. The latter process, known as gene therapy, is already a well-established part of medical research, does not raise the same ethical questions, and should proceed without new restrictions. 
Scientists are now racing to apply CRISPR as an even more effective way to perform gene therapy on adults, including to treat cancer and muscular dystrophy. In the case of editing human embryos, the line between avoiding serious disease and enhancement may eventually prove to be a blurry one. In addition to preventing the transmission of known genetic diseases like beta thalassemia or cystic fibrosis, the report’s authors said their positive recommendation could also apply to genetic improvements that would act like a vaccine, making people less susceptible to HIV infection or cancer. For instance, people with a certain version of a gene called ApoE are much less likely to develop Alzheimer’s. The report’s authors said that swapping a “protective” version of ApoE or another gene into an embryo might also be considered acceptable if it prevents disease. “We do not view prevention as a form of enhancement,” says R. Alta Charo, a University of Wisconsin bioethicist who co-chaired the panel. “But whether it’s permissible is up to regulators.” She says the group intentionally did not list specific diseases or situations where germ-line modification should be used. Controlling the technology could prove difficult. One worry is that doctors and scientists will go overseas to countries with permissive rules, or no rules, to attempt it. That is already occurring with a related technique known as mitochondrial transfer, which involves the transfer of DNA-bearing structures between eggs. Last year, a New York fertility doctor treated an American woman in Mexico using the procedure. Most reports of the academy quickly end up on bookshelves and are of interest to only a few experts. But George Annas, a bioethicist at Boston University, says this one has the potential to be politically explosive because of how it presents the right of parents to use germ-line modification as a “procreative liberty,” like abortion. 
“The scientists are saying this is all a question of risk benefit analysis, versus saying, 'No, it’s just wrong to do,'” says Annas. He thinks the committee “underestimates” public discomfort with the idea. “It’s like torture—some people think we should never do it, other people say, 'No, no, if it works, then it’s okay.' Designer babies is a lot like that.” Charo says her panel did not consider the political ramifications of its findings. “We looked at these questions without considering what happens in the political sphere. That is a moving target,” she says. “That is beyond us.” "
13568
2023
"The Tortured Bond of Alice Sebold and the Man Wrongfully Convicted of Her Rape | The New Yorker"
"https://www.newyorker.com/magazine/2023/05/29/the-tortured-bond-of-alice-sebold-and-the-man-wrongfully-convicted-of-her-rape"
"The Tortured Bond of Alice Sebold and the Man Wrongfully Convicted of Her Rape. By Rachel Aviv. “I still don’t know where to go with this but to grief and to silence and to shame,” Sebold wrote, more than a year after the exoneration. Photographs by Elinor Carucci for The New Yorker. A few months ago, the writer Alice Sebold began to experience a kind of vertigo. She looked at a cup on the table, and it no longer appeared solid. Her vision fractured. Objects multiplied. Her awareness of depth shifted suddenly. Sometimes she glanced down and for a split second felt that there was no floor. Sebold and I had recently begun corresponding, a little more than a year after she learned that the wrong man had been sent to prison, in 1982, for raping her. In 1999, she had published “Lucky,” a best-selling memoir about the rape and the subsequent conviction of a young Black man named Anthony Broadwater. Then she wrote “The Lovely Bones,” a novel about a girl who is raped and murdered, which has been described as the most commercially successful début novel since “Gone with the Wind.” But now Sebold had lost trust in language. She stopped writing and reading. Even stringing together sentences in an e-mail felt like adopting “a sense of authority that I don’t have,” she said. Sebold, who is sixty, recognized that her case had taken a deeply American shape: a young white woman accuses an innocent Black man of rape. “I still don’t know where to go with this but to grief and to silence and to shame,” she wrote to me. 
In February, I met Sebold in San Francisco for the first time. She lives alone with her dog. She wore fingerless woollen gloves and kept the lights off; her living room was lit by a window. Several times she started explaining something she’d once thought, and then stopped, midsentence. Although she’d quickly accepted the news that Broadwater was innocent, she felt as if she had “strapped on the new reality” and was still in the process of inhabiting it. She allowed that her experience with vertigo represented a kind of psychological progress: she was absorbing the fact that “there was no ground when I thought there was ground,” she said. “There’s that sense of standing up and immediately needing to sit down because you’re going to fall over.” She was fearful of taking in new details too quickly. “It’s not just that the past collapses,” she said. “The present collapses, and any sense of good I ever did collapses. It feels like it’s a whole spinning universe that has its own velocity and, if I just stick my finger in it, it will take me—and I don’t know where I’ll end up.” She was struggling to figure out what to call Broadwater. She had avoided his name for forty years. “Broadwater” felt too cold. “Anthony” felt like a level of closeness she didn’t deserve. And yet their lives were intertwined. “The rapist came out of nowhere and shaped my entire life,” she said. “My rape came out of nowhere and shaped his entire life.” Sebold and Broadwater had defined themselves through stories that were in conflict. But Broadwater, too, felt that they were bound together, the same moments creating the upheaval in their lives. “We both went through the fire,” he said. “You see movies about rape and the young lady is scrubbing herself in the shower, over and over. And I’m saying to myself, ‘Damn, I feel the same way.’ Will it ever be gone from my memory, my mind, my thoughts? No. 
And it’s not going to be gone for her, either.” Sebold was raped in a pedestrian tunnel in a park around midnight on May 8, 1981, the last day of her freshman year at Syracuse University. “I heard someone walking behind me,” she wrote in an affidavit. “I started to walk faster and was suddenly overtaken from behind and grabbed around the mouth.” When she tried to run away, the man yanked her by the hair, dragged her along a brick path, pounded her skull into the ground, and said he’d kill her if she screamed. Eventually, she stopped resisting and tried to intuit what he wanted. “He worked away on me,” she wrote in “Lucky.” “I became one with this man.” She walked back to her dorm, bleeding, and a student called an ambulance. According to a medical exam, her nose was lacerated, her urine was bloody, and her clothes and hair were matted with dirt and leaves. When she was interviewed by the police that morning, she said that her rapist was a Black man, “16-18 yrs. of age, small and muscular build.” In the affidavit, she wrote, “I desire prosecution in the event this individual is caught.” But the detective on her case seemed skeptical of her account—he wrote, without explanation, that it did not seem “completely factual”—and recommended that “this case be referred to the inactive file.” Sebold went home for the summer to a suburb of Philadelphia, where she rarely changed out of her nightgown. Friends from her parents’ church, where her mother was a warden, were told of the rape and treated her as if she had contracted a spiritual disease. Sebold saw herself as a misfit, an “earthy loose cannon,” she said, and felt that being raped confirmed her separateness. She sensed that her father believed she was at fault somehow, for walking through a park at night alone. 
Her parents wanted her to drop out of Syracuse and spend her sophomore year at a small Catholic college near home, but she had been accepted into classes that fall with the writers Tess Gallagher and Tobias Wolff, and she didn’t want to lose the chance to study with them. Even during the rape, she was aware that she would eventually write about it. “It was one of the ways that I stayed with myself,” she told me. “There’s that thing where you shut down, but you don’t want to disappear, so you reach out for the thing that connects you to life, and for me it was words, language, writing.” In the fall, Gallagher, a poet, introduced herself to Sebold’s class by singing a ballad. She instructed her students to write “poems that mean,” a phrase that Sebold jotted down in her notebook. She felt that Gallagher, the partner of Raymond Carver, who also taught at the university, embodied the transcendence of a life devoted to writing. Carver was such a celebrity on campus that, to discourage students from stopping by their home at all hours, he and Gallagher hung a cardboard sign from their door that read “No visitors please,” with a picture of eyes squinting in concentration. For her first assignment, Sebold turned in an opaque five-page poem that alluded to the rape. The other students didn’t pick up on the metaphor, and at office hours Gallagher proposed that Sebold write a poem with a more straightforward conceit: it should begin with the line “If they caught you.” Gallagher told me, “I realize now that that was rather dangerous, because I’m not a psychiatrist, but writing comes out of a being, and you must minister to the being. I saw her anger and lostness, and I had to make a way for the condition—that essential condition of having been violated—to find speech.” In the class the following week, Sebold read aloud a poem, heavily influenced by Sylvia Plath, called “Conviction,” which was addressed to her rapist. “If they caught you,” she wrote. 
“Long enough for me / to see that face again, / maybe I would know / your name.” She went on, “Come to me, Come to me, / Come die and lie, beside me.” The next week, before her workshop with Tobias Wolff, Sebold was picking up a snack on the main street near campus when she saw a man who looked like her rapist. “I was hyperaware,” she wrote in “Lucky.” “I went through my checklist: right height, right build, something in his posture.” A few minutes later, she saw the man crossing the street toward her. “Hey,” the man said. “Don’t I know you?” He was actually talking to a police officer named Paul Clapper, who was behind Sebold, but she thought he was addressing her, and she suddenly felt certain that he had been on top of her in the tunnel, and that he was mocking her, because he’d got away. She couldn’t speak. “I needed all my energy to focus on believing I was not under his control again,” she wrote. She walked away quickly and heard him laughing. She hurried to class and told Wolff that she had to miss the workshop. “She was utterly distraught,” Wolff said, “and she told me that she had been raped and that she had just seen her rapist down on Marshall Street and that he had spoken to her.” Wolff told her, “You’ve got to call the police right now.” The author of memoirs about the Vietnam War and a tumultuous childhood, he had a kind of mantra: “Hold on to the memories, keep everything straight.” He shared that advice with Sebold. She rushed back to her dorm room, “every nerve ending pushing out against the edges of my skin,” to call the police. 
As she walked, “I became a machine,” she wrote. “I think it must be the way men patrol during wartime, completely attuned to movement or threat. The quad is not the quad but a battlefield where the enemy is alive and hiding. He waits to attack the moment you let your guard down. The answer—never let it down, not even for a second.” The scene is a devastating portrait of the nightmare-like state that post-traumatic stress disorder can induce. Previously, when Sebold had seen men who even vaguely resembled her rapist, she had felt sick. On some level, she wrote, she knew that these people hadn’t raped her, but described how eerie it was that “I feel like I’ve lain underneath all these men.” This time, her terror solidified into a firm belief. The moment of recognition was perhaps amplified by the wild, magical hopes that can accompany the act of writing. Sebold had looked to Gallagher as a kind of good witch of art, the sort of writer and woman she wished to be. Now Sebold had made literal Gallagher’s instruction to write “poems that mean.” She had summoned her rapist. Sebold sketched the man’s face, and the Syracuse Police Department issued an alert to its officers. Clapper, the cop who had been chatting with him, recognized the description. Nine days later, Anthony Broadwater, who was twenty years old, was arrested. One of six brothers, Broadwater had left the Marine Corps to take care of his father, a former janitor at Syracuse, who was dying of cancer. His mother had died of pneumonia when he was five, and he and his brothers had been dispersed among various relatives. Broadwater was working as a telephone installer. He couldn’t remember what he’d been doing when Sebold was raped, nearly five months earlier, but, he told the police, “I know I wasn’t doing that. ” He had greeted Clapper because he remembered him as a rookie cop who used to patrol his neighborhood. Five days after the arrest, Gallagher went to the courthouse with Sebold for a hearing. 
After Sebold testified, a memo from the district attorney’s office reported, “She makes a very good appearance, handled herself very well on cross-examination, and was very cool and collected.” A judge ruled that the prosecution could move forward. Sebold called her parents to tell them the news. “I could see her trying to talk with them, and it was very awkward,” Gallagher told me. “I just felt they weren’t responsive in some way. They could not connect with what was happening to her. I could feel that she was unprotected.” Two weeks later, Sebold was asked to identify Broadwater in a lineup. He was the fourth in a line of five Black men wearing jail uniforms. Sebold identified the fifth man. After signing a form that confirmed her decision, she felt a wave of nausea. She sensed that she’d made the wrong choice. The detective on her case looked downcast and told her, “You were in a hurry to get out of there,” according to her account in “Lucky.” The assistant district attorney, Gail Uebelhoer, was a thirty-one-year-old pregnant woman whom Sebold saw as another role model, her guide through a court system dominated by men. Sebold felt that she had failed Uebelhoer. But, Sebold writes in “Lucky,” Uebelhoer reassured her that her mistake was understandable. “Of course you chose the wrong one,” Uebelhoer said. “He and his attorney worked to make sure you’d never have a chance.” She said that Broadwater had intentionally duped her by asking an almost identical-looking friend from jail to stand in the No. 5 spot and stare at her, to scare and fluster her. (In fact, Broadwater was not friends with the man in the No. 5 spot, and they did not look the same.) In a memo, Uebelhoer wrote that Sebold had chosen the wrong man because he was “a dead ringer for defendant.” Broadwater’s attorney, Steven Paquette, assumed that the case would be dismissed. He was shocked when Uebelhoer presented it to a grand jury that day. 
He wondered if she was trying to compensate for the indifference with which the police had originally met Sebold’s account of her rape. “I think she may have been driven by a feeling of ‘Darn it, this isn’t going to happen to this young lady again,’ ” Paquette said. (Uebelhoer didn’t respond to requests for an interview.) On the witness stand, Sebold tried to explain her error. “Five did look at me almost in a way as if he knew me even though I realized you really can’t see through the mirror,” she said. “I don’t know, I was very scared, but I picked five basically because he was looking at me and his features are very much like No. 4.” “You picked him out of the lineup,” a juror said to her. “Are you absolutely sure that this is the one?” “No, five I am not absolutely sure,” she said. “It was between four and five, but I picked five because he was looking at me.” “So then, what you are saying, you are not absolutely sure that he was the one?” the juror asked. “Right.” When Clapper testified, a juror asked him, “When someone is picked out of a lineup, doesn’t it have to be absolutely sure that the person that they picked out of the lineup is the one they’ve seen before?” “That’s correct,” Clapper responded. Uebelhoer cut him off. “He really can’t give you an opinion on that,” she said. Broadwater was indicted after Uebelhoer told the grand jury that a pubic hair found on Sebold’s body during her rape examination matched a sample of Broadwater’s hair. Then she read from the medical records, saying that Sebold had been a virgin. When Paquette offered to show Broadwater photographs taken of Sebold on the night of the rape, as preparation for the trial, Broadwater felt tainted even being near such a crime. He refused to look at the pictures. Paquette recommended that Broadwater choose a bench trial, because he thought it was likely that a jury would be all white. 
Paquette assumed that a judge, confronted with the story of a Black man raping a virginal white college student, would be more impartial. At the trial, Broadwater was the only person to testify for the defense. “When is the first time that you ever saw Alice Sebold?” Paquette asked him. “Just today,” he said. “Never seen her before.” He explained that he had a scar on his face and a chipped tooth, neither of which Sebold had included in her description of her rapist. But she never heard him testify, because the trial had been scheduled for the same day as her sister’s college graduation. The trial date couldn’t be changed, and her parents said she couldn’t miss the ceremony. The trial lasted only two days, and Sebold came for the second day. Her father, a professor of Romance languages at the University of Pennsylvania, accompanied her but mostly stayed in the lobby, reading a book in Latin. Her mother didn’t come. Sebold had no friends there, either. At the time, she said, “I felt more identified with people I had met in the criminal-justice system than I did with my peers.” On campus, she said, she had to pretend to be a normal student, but in the courtroom “I could exist as a person who had been raped.” Sebold felt that, in order to save herself from being murdered, she had been forced to participate in her own rape. On the witness stand, she described how she helped the man undress her; she had to kiss him and give him oral sex, so that he could maintain an erection. After he finished, “he told me that he wanted to hug me,” she said. “I wouldn’t come near him. So he came over and pulled me back to the wall and hugged me and apologized for that, he said, ‘I am sorry, and you were a good girl.’ ” Then he asked her name. “I couldn’t think of anything else, because I was very scared,” she said. 
“I said ‘Alice,’ and he said, ‘It is nice knowing you, Alice, and I will be seeing you around.’ ” To draw attention to the biases inherent in the proceedings, Paquette asked Sebold, “How many Black people do you see in the room?” “I see one Black person,” she answered. Except for Broadwater, everyone in the courtroom was white. “The whole thing made me uncomfortable,” she wrote in “Lucky.” “But this wouldn’t be the first time, or the last, that I wished my rapist had been white.” During a brief recess, the judge, who had four daughters, chatted with Sebold and asked about her family and what her father did for a living. Immediately after the closing statements, the judge pronounced Broadwater guilty. None of Broadwater’s friends or family came to the trial. His cousin Delores said, “We knew he wasn’t chosen in the lineup. We knew he didn’t have a mind-set to do something like that.” They expected him to be acquitted. When the judge sentenced Broadwater to between eight and twenty-five years in prison, he was numb. Sebold felt uneasy that, at the trial, she had been transformed into “a character that was already not me,” she said. In court, she heard the word “virgin” so often, she said, that it “clanged in my ear.” But she also felt that she’d done something important by seeing the case through. In the year after the trial, the Syracuse Herald American reported, the district attorney’s office lost nine rape cases in a row. “There was a sense of pride,” Orren Perlman, a friend of Sebold’s, told me. She could have “collapsed into incredible shame, but she was really able to tolerate it and to show up.” Broadwater appealed the verdict, arguing that Sebold had a “reduced ability to perceive objects accurately due to the fear she felt during and after the attack.” At the time, there was only limited recognition of the fallibility of eyewitness testimony. 
Since then, studies have shown that roughly a third of eyewitness identifications are incorrect, and that, when the defendant and the witness are not the same race, the witness is fifty per cent more likely to be mistaken. Broadwater argued that Sebold had “probably added the person she saw on the street in Syracuse to the mental file of her assailant.” His appeal was denied. He spent the first few months of his sentence at Great Meadow Correctional Facility, nicknamed Gladiator School, in Comstock, New York. Many of the men there had just been sentenced. “The hatred, the frustration, the pain, the disbelief—it was all manifesting,” he told me. Later, he was moved to Auburn prison, where a close friend of his from Syracuse was killed in the kitchen while he stood next to him, protecting himself with a baking tray. As a convicted sex offender, Broadwater was targeted by other prisoners. Each time he was transferred to a new prison, he said, “I would try to prevent some incident by asking, ‘Hey, who’s the head of the Latin Kings? Who’s the head of the Aryan Nation? Listen, they need to read this.’ ” He would give gang leaders pages from his appeal and transcripts from his trial. “That was the only way I could really save my life,” he said. At Attica prison, an imam read parts of his transcript aloud to his cell block. Preparing for the worst, Broadwater made a weapon out of tuna-fish cans that he put inside two socks. But, after the imam finished reading, men came up to him and said, “You shouldn’t be in prison, man.” Sebold did not know that Broadwater had appealed his conviction. The D.A.’s office never informed her, she said, and she never followed up herself: “I thought it would be a negative thing, psychologically. I wanted to live my own life.” After college, she enrolled in the writing program at the University of Houston, to study poetry, but she felt adrift. She began doing drugs, and dropped out. 
She moved to Manhattan and lived in a low-income housing development in the East Village, where she often used heroin. In “ Lucky ,” she describes her realization that she did not share her life with the students at Syracuse or with the friends she’d made in New York. “I share my life with my rapist,” she wrote. In 1989, while teaching freshman composition at Hunter College, she published an article in the Times titled “Speaking of the Unspeakable,” which described the “degree of denial and prettification” that surrounds the crime of rape. “Even my own father, who has spent his life working with young people, confessed to me that he did not understand how I could have been raped if I didn’t ‘want to’ be,” she wrote. “I am alive but eight years later, I can still see and smell that tunnel. And eight years later, it remains true that no one wants to know what happened.” After the piece was published, Oprah Winfrey asked Sebold to appear on an episode of her TV show devoted to rape. Onstage, Sebold looked strikingly beautiful. She wore black pants, a black blouse, and black dagger-like earrings, and her dark hair was pulled up in a high ponytail. “The reason I came today is I think the most important thing we are doing today is telling the story of individual rape victims,” she said in a low, deep voice. “That’s the first step in getting over all of this.” At Winfrey’s request, Sebold recounted the story of seeing her rapist months after the attack. “And so when he came up to you on the street, was it an approach to—let’s go somewhere?” Winfrey asked. “I think he was just having fun,” Sebold responded. “I kept walking, because I was very scared.” She added, “And then I pursued an I.D.” “I don’t understand how you I.D.’d,” Winfrey said. “What do you mean?” Sebold asked. “Because you didn’t know his name,” Winfrey said. “How did you find him, how did you know, I mean—” “Right. 
Well he came up and walked up to me, and the policeman was there, so I told the policeman, and then we pursued from that point.” Winfrey still seemed confused. “And the policeman believed you, obviously,” she said. Three years later, Sebold learned that her Times essay had been quoted in “ Trauma and Recovery, ” a groundbreaking book by the psychiatrist Judith Herman. At the time, post-traumatic stress disorder was largely seen as a syndrome affecting male combat veterans—it didn’t become an official diagnosis until 1980, the year that Sebold entered college—but Herman argued that trauma could be caused by more intimate forms of violence, too. She wrote that sexual assault could provoke the same symptoms as witnessing death on the battlefield: flashbacks, dissociation, shame, social isolation, a sense of being trapped in the past. She quoted Sebold in a chapter that described how “traumatized people feel that they belong more to the dead than to the living.” Sebold felt that Herman’s book explained the past decade of her life. She went to the library and spent a week reading first-person accounts by veterans of Vietnam. “Somehow, reading these men’s stories allowed me to begin to feel,” she wrote. In 1990, after eight years in prison, Broadwater was granted a hearing before a parole board. “I want to prove to myself and the people in the city of Syracuse that it wasn’t me,” he told the board’s commissioners. “I feel a crime like that every day, every night,” he went on. “It hurts me, hurts me just to be convicted of a crime like that.” He explained that he could have been working and saving money during these years. “I accept the fact it’s going to always be with me,” he told the board. His parole was denied. Two years later, he went before the board again. He had gone to sex-offender counselling, to improve his chances of getting parole. A commissioner asked what he talked about in counselling, given his claim of innocence. 
“Well, sir, the crime was done,” Broadwater answered. “I was punished for it. I must live with that.” “That wasn’t my question,” the commissioner said. “My question is, what kind of responses do you give when the question was asked, why was this crime committed?” “Well, sir, there is the problem,” Broadwater said. “If I’m convicted of it, yes, I’ve been going through the stages for it, yes.” “You’re still vacillating as to whether or not you committed the crime,” the commissioner said. “They can’t treat you unless you first come to the threshold of acknowledgment of guilt.” “Well, sir, the fact that I am guilty of being convicted of a crime—” “No, no one is guilty of being convicted of a crime,” the commissioner interrupted. “Either you’re guilty of committing the crime or you’re not guilty of committing the crime. You’re talking in circles when you talk about being guilty of being convicted of committing a crime.” Broadwater tried to find something else for which he could accept responsibility. If he was released, he would make sure “to have all my time accountable,” he said. “In case, you know, something like this arises or I be arrested or I’m being questioned for a crime again.” The board denied him parole, citing the fact that he couldn’t acknowledge his guilt. Two years later, the board gave him another chance. “I presume after reading the minutes of your last Board appearance two years ago that you still maintain that you did not commit this crime,” a commissioner said. “Is that correct?” “Well, Ma’am, the last time I answered that question, I was hit with twenty-four months,” he said. “I’m afraid to say anything.” “I understand that you are in a Catch-22,” the commissioner said. 
Broadwater couldn’t get accepted into additional sex-offender treatment programs, which were a requirement for parole, he was told, “because you refuse to acknowledge that you committed the crime.” “Yes, Ma’am.” “And according to what we have in front of us, you are guilty of this crime.” He was denied parole again. The commissioner concluded that “the limited sex offender programming you have participated in does not rise to a level commensurate with the severity of your crime.” On “Oprah,” Sebold had explained that she could not have endured her rape “if I didn’t separate myself and look down and watch.” When she was thirty-two, she enrolled in the master’s writing program at the University of California, Irvine, and began writing a novel about a girl named Susie Salmon, who exists in this dissociated state. After being raped and murdered in the first chapter, Susie spends the rest of the novel in Heaven, observing from above as the people she knows continue with their lives. A celestial “intake counselor” tells Susie that she can observe other people living but “you won’t experience it.” Susie comes to understand that “life is a perpetual yesterday.” Sebold put the novel aside when she realized that she was trying to wedge in everything she wanted to say about rape. For a long time, she’d been frustrated that, when rape appeared in literature, the crime was described through poetic deflection. She wanted to “just put it all out on the table,” she said. Sebold got a grant from the university to go to Syracuse and research her rape, for a memoir. Gail Uebelhoer no longer worked in the D.A.’s office, but she met Sebold there. She pulled out a large plastic zipper bag with the underpants that Sebold had worn the night she was raped, which still had blood on them, and showed her pictures and documents from her file. Sebold was allowed to look at only some of the material. “Gail ended up being that filter for me,” she said. 
In a class taught by Geoffrey Wolff, the director of the graduate fiction program, Sebold submitted the first sixty pages of what became “ Lucky. ” “My god this is good,” Wolff wrote to her in a letter. He was astonished by her ability to describe the rape’s “daily intersection with your character, your choices, your ferocious will to comprehend.” Her work reminded him of the “great good mystery of writing, why it matters to read, why it heals to write.” His brother is Tobias Wolff, Sebold’s former professor at Syracuse. Both men had written childhood memoirs with conflicting portraits of their parents, an experience that had made Geoffrey acutely aware of the limitations of a writer’s perspective. “There are always other people in that room, too,” he said. But it never occurred to him that “ Lucky ,” of which he read many drafts, should try to capture Broadwater’s experience. “Shame on me,” he said. “The idea that it was the wrong guy didn’t enter my mind, so I didn’t give a shit about his point of view.” “ Lucky ,” which opens with a meticulous reconstruction of the rape, was published in 1999 to quiet praise. Sebold detailed her failure to discriminate between the men standing in the No. 4 and No. 5 spots in the lineup, as well as Uebelhoer’s justification for her error, but readers did not publicly question her rendering of Broadwater’s guilt. (In the book, she refers to Broadwater by a pseudonym.) In Elle , the novelist Francine Prose wrote, “Reading Lucky , you understand how Sebold succeeded in persuading a judge that what happened to her occurred precisely—word for word, detail for detail—the way she described it.” Three years after “ Lucky ” came out, Sebold, who had recently married a writer from her master’s program, published her novel about Susie Salmon, “ The Lovely Bones. ” The novel sold more than ten million copies and was adapted into a movie by Peter Jackson. 
The World Trade Center had just been attacked, and critics wondered if readers were perhaps uniquely receptive to the story of an innocent person who suffers a harrowing death and then learns how to adapt to the afterlife. “The response to ‘ The Lovely Bones ’ has been like a big, collective sigh of ‘That’s just what we needed,’ ” Laura Miller wrote in Salon. “ Lucky ” was subsequently reissued in paperback and ended up selling more than a million copies. Sebold was surprised to learn that Uebelhoer was speaking to book clubs that were reading the memoir. Uebelhoer sent Sebold a packet of printouts about “ Lucky ” that she shared when she spoke with readers. “I love meeting with book clubs because it not only gets Alice’s story out there,” Uebelhoer wrote in an e-mail to a filmmaker, “but it also increases sales of her book!” Paquette, Broadwater’s attorney, read the memoir after hearing about it from a colleague. He was taken aback by what Uebelhoer had told Sebold about the lineup, but he said, “Twenty years later, it didn’t occur to me that a chapter of a book about misconduct would be something to act on.” He hadn’t spoken with Broadwater since he went to prison. In 1998, Broadwater was at the Mid-State Correctional Facility, a medium-security prison near Utica, when he was asked again to meet with the parole board. This time, he told a jail administrator that he was declining the opportunity. He understood that, unless he took blame for the rape, the parole board wouldn’t release him. He had nine more years until he hit the maximum sentence. Several months later, an officer came to his cell and told him to pack up, because he was going home. “I know you’re joking,” he told the officer. “Leave me alone.” Broadwater figured that he had been given a disciplinary charge and was being transferred to a maximum-security prison. He gathered his legal records in a manila envelope and packed a few belongings. Then officials handed him paperwork to sign. 
He had been in prison for sixteen years and seven months and had reached his conditional release date, which is determined by a committee that considers a person’s record in prison. “When that last gate buzzed open—Lord have mercy,” he said. “You don’t think you will do it, but I did what everybody does. I knelt down, and I kissed that ground. I said, ‘Lord, I’m free, and I’m going to stay free for the rest of my life.’ ” Broadwater was thirty-eight. He moved in with a cousin, whose mother was the only person who had regularly sent him letters while he was in prison. His father had died, and his brothers hadn’t kept in touch. He applied for temp jobs, but, as a registered sex offender with a sixteen-year gap in his work history, he was rejected. He bought a nineteen-dollar shovel from a hardware store and began clearing people’s driveways after snowstorms. When winter ended, he mowed their lawns. He went to see a psychiatrist at a V.A. medical center about depression, but he was too ashamed to explain the cause of his distress: he didn’t want female doctors to learn about the rape conviction and be afraid of him. He figured they’d think he was lying about his innocence. Instead, he spoke in vague terms about injustice in the world. He had nightmares and flashbacks, but, when therapists asked him to elaborate on his memories, he spoke of his mom’s death or an injury in the military, leaving out the trauma that defined his life. A year after his release, one of his cousins set him up with a woman named Elizabeth, who worked as a roofer. On their first night together, he told her that he wanted to be in a relationship with her but that she had to read his trial documents first. He slept on the couch while she spent the night in his bedroom with the transcripts. In the morning, she came into the living room where he was sleeping and said, crying, that she believed him. They found jobs that they could do together, like roofing, janitorial, and factory work. 
They requested night shifts, because Broadwater wanted a potential alibi during what he called the “witching hours”—the time when most violent crimes occur. He was continually stunned that Elizabeth never left him for being a sex offender and never doubted his innocence. “That’s basically how I kept my face up,” he told me. But they decided not to have children, because they didn’t want their child to grow up with the stigma of the crime. He had been free for two years when the police knocked on his door, to ask him about an eighteen-year-old white woman named Jill-Lyn Euto, who had been murdered in her apartment in Syracuse. “I was scared to death,” he said. “I said, ‘Oh no, not me—I work from six at night to six in the morning. I’m on the computer. I’m on camera.’ ” The police didn’t ultimately pursue him as a suspect, but the encounter made him so afraid that he didn’t want to work anywhere with female employees. He worried that he might accidentally glance at a woman in a way that would be interpreted as staring, or that he might make a gesture that appeared aggressive. “I’m always thinking, Maybe she knows,” he said. “It is very painful and shameful.” He became preoccupied with the mechanics of surveillance: he wanted jobs where he could punch into a clock, his movements recorded by cameras in each room. The idea of just being loose in the world, without a method of proving where he had been, was such a source of terror that sometimes he imagined he’d feel less anxiety if he was back in a jail cell. After he had been out of prison for a few years, Elizabeth learned about “ Lucky ” and went to the public library to skim the book. Broadwater said, “She was trying to tell me things in the book, but I said, ‘I don’t want to know. It’s not about me. It’s what happened to her. It don’t pertain to me.’ ” In 2010, Jane Campion, the only woman to be nominated twice for the Academy Award for best director, called Sebold. 
Campion wanted to adapt “ Lucky ,” which she had found “gripping, funny, devastating,” she said. After Sebold agreed, Campion asked Laurie Parker, who had produced Campion’s film “In the Cut,” to write the screenplay. Parker spent two years researching and writing the first portion of the script, which follows Sebold up to the point when she tells Tobias Wolff that she has seen her rapist. Once that part of the script had been approved, Parker began researching the next installment, which dramatized the criminal proceedings. But, after Parker read the trial transcripts, she felt disturbed that there wasn’t more evidence. She had already interviewed Uebelhoer, the prosecutor, but she called her again to try to understand why the case had gone forward. Uebelhoer told Parker the same story about the lineup that Sebold narrates in “ Lucky. ” “She also explained how few rapes made it to trial,” Parker told me, “and how Alice really was a kind of Joan of Arc figure with the police, how they rallied around her, and how the judge seemed to feel fatherly toward her.” As Parker continued writing, she thought about an episode from her own life. When she was nineteen, living in San Francisco, an older man had sexually assaulted her. She became so afraid of encountering him in the city that she moved to Berkeley. Several months later, she was at a library and thought she saw the man in a study carrel. “I froze,” she said. “It was a kind of out-of-body experience. I was tingling, and my face was tingling. It was the sort of terror that teleports you back to the original trauma.” For about thirty minutes, she couldn’t move. Finally, though, she had to leave for an appointment. As she walked out of the room, the man looked at her. “There was just no recognition at all,” she said. “And then I saw it: I’m wrong. That is not the same person.” She had a “visceral but somewhat unconscious” sense, she said, that Sebold’s certainty may have been unreliable, too. 
“Because I had experienced being wrong myself, I just had this fundamental feeling of the subjectivity of every single person involved.” She didn’t feel that she could write a script in which the actor shown raping Sebold appears on Marshall Street five months later. “I just felt that we couldn’t perpetuate this story,” she told me. By the summer of 2014, after interviewing Paul Clapper and a few other Syracuse cops who knew about the case, Parker had reached the point where she felt that “there was so little evidence that it should not have resulted in a conviction,” she said. She decided that the only way she felt comfortable telling the story was from a highly subjective point of view: the camera would be like a bird on the Sebold character’s shoulder. In her script, Parker referred to the man on Marshall Street not as the rapist but as “ SHORT MUSCULAR MAN ,” and never says if the man has been convicted. “That script had no objective perspective, no signifiers of any kind,” she said. When she submitted the script, she was told that it was not “viable.” The project collapsed. Parker was a single mother, raising two children with special needs, and the movie could have transformed her career. Nevertheless, “there was a part of me that definitely didn’t want to make the movie, and I’m aware of that,” she said. “On some level, I probably knew that I was killing the project.” Not long afterward, Parker began volunteering in prisons, holding writing workshops. “I think that connection was pretty direct,” she told me. “I felt like the perspective of the person who was convicted is not present, and it should be.” A year and a half later, James Brown, who had recently produced the Oscar-winning film “Still Alice,” signed on to adapt “ Lucky. ” One of his sisters had been the victim of an attempted rape, and Sebold’s memoir had reshaped his understanding of the crime. 
Brown enlisted Karen Moncrieff, the writer and director of two well-regarded films about violence against women, to write the script. Moncrieff, who had a close friend who’d been raped, had wanted to adapt “ Lucky ” since it was published. “There really has not been a film that deals with the true experience of a rape survivor in a way that’s honest, raw, unflinching and humane and isn’t engineered to titillate on some level,” she wrote to Brown in an e-mail, in 2017. Moncrieff wrote a script that hewed closely to the book. The man that Sebold sees on Marshall Street is referred to as “ RAPIST. ” When he’s convicted, Sebold pours herself a shot and “suddenly lets out a celebratory whoop! ” But Moncrieff felt uncomfortable with the script. Since first reading the book, “something had shifted in my awareness,” she said. Although “ Lucky ” had been praised for breaking taboos—it was recommended by psychologists and rape counsellors, and taught in colleges—there was also something traditional about the arc of the story: Sebold became a hero fighting for justice against an evil, unknowable stranger, who would pay for what he had done to her, with little consideration of the violence or fallibility of that form of payment. Sebold described the poem she’d written in Gallagher’s workshop as a “permission slip—I could hate.” But sometimes it reads as if she is repeating lines that she’s been told, assenting to a kind of cultural belief in the redemptive power of getting revenge. The fantasy of the poem—“If they caught you”—was fulfilled. But, when they caught and punished him, she did not find the promised relief. Before casting the rapist, Moncrieff found Broadwater’s name and photograph on a registry of sex offenders. “This guy looked really sweet,” she said. 
“He had the sweetest-looking eyes.” She wanted to cast someone with a similarly welcoming face, so her casting directors brought in several young Black actors to audition, a process that involved pretending to rape someone. Moncrieff viewed the videos of the auditions from her home, in Los Angeles. She had felt conflicted by the idea of showing a Black man raping a white woman, and now she was ashamed to be looking at these interchangeable Black bodies. “It was fucking painful on so many levels,” she told me. “None of these guys wanted to be there.” In April, 2021, her casting directors recommended a young Canadian actor named Adrian Walters. On a Zoom call, she showed Walters the picture of Broadwater from the sex-offender registry. “I remember feeling so heartbroken,” Walters told me. “He just had these kind, unassuming eyes. He looked like someone I would have grown up with.” Walters read the memoir and the script and then spent a week praying about whether to accept the role. “I remember something popped up on my TV when I was in contemplation,” he said. “I heard something along the lines of ‘young Black person killed by the hands of police’ and whatnot. That was the moment where I got the sign I needed from God, saying, ‘No, you can’t do this role. This will not be of service to people who look like you.’ ” When he explained his reasoning to Moncrieff, she decided that she could not move forward with the script. “Since going down this road, and then embarking on the reality of actually casting the part, I have tried to get with the program, but find that I just can’t,” she wrote to Brown. “That it is true doesn’t make it The Truth.” She submitted a revised draft, which Brown accepted. In the new version, the rapist would be white. In early June, 2021, the movie’s actors were supposed to fly to Toronto to begin shooting. Victoria Pedretti was cast as Sebold, and Marcia Gay Harden as her mom. 
The movie’s financier, Timothy Mucciante, was a disbarred attorney—he had spent about a decade in prison after being convicted of bank fraud and forgery of bonds—but he had been upfront about his past. Yet the funds to begin shooting never materialized. When the production team received a copy of a wire transfer from Mucciante that appeared to have been doctored—the font of the dollar signs didn’t match—he was terminated from the project, and the shooting was called off. (Mucciante said that the font was altered inadvertently.) Not long afterward, he asked his employees to investigate the details of Sebold’s rape. James Rolfe, an associate producer for the company, said, “I told him to drop it. We’ll move on. But, as soon as control of the project was taken away from him, he wouldn’t let go.” When his employees couldn’t find information about the crime, Mucciante hired Dan Myers, a former sheriff who worked as a private investigator. Mucciante explained that he had doubted Sebold’s story after the race of the rapist was changed in the script. “He wanted me to get him details of the actual rape—whether or not it even happened,” Myers said. Myers called Paul Clapper, the officer who had been talking to Broadwater on the street. “He mentioned the bad lineup,” Myers said. Clapper suggested that the right man may not have been caught. “I got the impression that he had been dying to tell someone for quite a long time.” Broadwater was sixty and lived on the south side of Syracuse, across from a cemetery, in a house with broken windows covered by tarp. Myers found Broadwater in front of the house. He asked if Broadwater knew that people were making a movie about the woman he’d been convicted of raping. “It’s a lie,” Broadwater said. “The whole conviction.” He explained that, since his release, he’d been trying to find a lawyer to take his case. He’d paid three hundred dollars for a polygraph test, which he passed. 
“Well, let me tell you something,” Myers, who recorded the conversation, said. “Officer Clapper—you know who that is?” When Broadwater was growing up, he responded, Clapper was an overbearing figure in the neighborhood who would “try to make you snitch.” “I talked to Clapper, and he believes in your innocence.” “No kidding!” “The people that hired me want to help you,” Myers said. “Hell yeah.” Broadwater’s voice gathered strength. “I’m on board with that—hundred per cent.” Broadwater said he’d give Myers all his legal documents. “This is something with my head, man, like a black shadow,” he said. “Believe it or not, I want to write a book. I want to tell my story.” Myers shared what he’d learned with two Syracuse lawyers, Dave Hammond and Melissa Swartz, saying he believed that Broadwater was innocent. They both read “Lucky.” “We were, like, Oh, my God, there’s newly discovered evidence,” Hammond said. What had been, for hundreds of thousands of readers, a story of justice was, in their eyes, a careful recounting of prosecutorial misconduct. They wondered why Sebold didn’t question the conviction when she was writing her book, but her confidence made more sense after they learned of Uebelhoer’s involvement in researching and promoting it. Swartz, who had worked in a district attorney’s office, said, “I’ve been on the other side, and I know the amount of trust and loyalty people feel for a prosecutor. And then that person is championing your book? It’s like reaffirmation that the conviction was good.” Mucciante raised money for Hammond and Swartz to work on Broadwater’s case. He also hired Red Hawk Films, a small production company, to make a documentary about Broadwater’s quest to prove his innocence. It would be called “Unlucky.” Broadwater signed a release giving Mucciante’s company the exclusive right to his story. When Sebold heard about Mucciante’s efforts, she asked James Brown, the producer, what was happening. 
Brown described Mucciante’s history of fraud and told Sebold, “Don’t believe it. Put it out of your mind.” Swartz asked William Fitzpatrick, the Onondaga district attorney, for whom she had previously worked, to read the transcript of Broadwater’s trial and give her his opinion. The transcript was so short that Fitzpatrick read it in about an hour. “I was stunned,” he told me. “I couldn’t believe that, in 1981, in a non-jury trial, a guy could be convicted on that.” In October, 2021, he contacted Sebold, who by then felt that she was largely “done with rape,” she said. After the #MeToo movement, she felt that she could retire from the cause as a younger generation took up the work. In an e-mail, Fitzpatrick explained that Broadwater had new lawyers who were filing a motion to vacate his conviction, based on newly discovered evidence. “You have done remarkable things in removing some of the barriers encountered by sexual assault victims,” he wrote. “The problem is the hair testimony.” He explained that the methodology used at trial had been discredited. In 2015, in one of the country’s worst forensic scandals, the Justice Department and the F.B.I. acknowledged that, for two decades, forensic examiners had been applying erroneous standards to the comparison of hairs. Sebold wrote back a few hours later, thanking him for keeping her updated. “It sounds like Broadwater’s attorney is doing the right thing on behalf of her client and that there will be many steps going forward before there is an end result one way or another,” she wrote. Sebold told me, “I was very passionate in my belief that he was guilty, and the last twenty years of no one saying anything would only underscore that.” A month later, Fitzpatrick e-mailed Sebold to say that he’d had a call with Gordon Cuffy, the judge who was reviewing Broadwater’s motion, and Cuffy wanted to know if the scenes in “ Lucky ” describing the lineup—and the commentary by Uebelhoer after it—were accurate. 
In those passages, Fitzpatrick explained, “the inference could be drawn that you were coached on how to handle the issue at trial which is not an ethical approach by law enforcement.” Sebold responded, “I felt an immense responsibility to portray things as truthfully as I was capable of.” She believed Uebelhoer had told her details about the lineup, she wrote, because “she had a natural understanding that knowing what was happening in the case helped center and soothe me.” Five days later, Fitzpatrick e-mailed Sebold again. “After a brief hearing moments ago Judge Gordon Cuffy vacated Mr. Broadwater’s conviction,” he wrote. The foundation of Broadwater’s conviction, Cuffy had concluded, rested on a debunked hair analysis and a lineup that had been tainted. “There is much I can wish for,” Fitzpatrick went on, “not the least of which is that 40 years ago a young woman had gotten home safely to her dorm. But she didn’t. So I wish you peace and happiness and comfort in knowing you never deviated from doing the right thing.” Sebold’s friend Orren Perlman went to her house after the exoneration and made food for her, but she couldn’t talk about what had happened. (Sebold and her husband had divorced a decade earlier.) “It’s like someone pulling a thread out of a sweater and the whole thing just falls away,” Perlman said. When Sebold started to speak, “she’d be, like, ‘I have to stop.’ It was too much.” She told her friends that she would never write again. She tried not to look at the Internet, but she understood, from what friends shared, that she was being criticized online. 
It was easy to internalize the “voices of the Internet,” she said, because they were amplifying “the voice that lies inside me.” The headline of a Daily Mail story read, “She made millions off the story while he lived in windowless squalor.” Perhaps there was an added level of urgency to the criticism, because it relieved the sense of group complicity—the hundreds of thousands of people who had read about Sebold’s identification of Broadwater and had not been concerned. It was as if the book itself had become a kind of weathervane for where, two decades earlier, the publishing world and its readership had been in their understanding of crime and race. When pictures were published of Sebold walking her dog, carrying plastic bags for its poop, she stopped leaving her house. Friends took the dog, so that Sebold wouldn’t have to go outside. Eight days after the exoneration, Sebold, whose agent had found a crisis-communications consultant to help her, sent a one-page apology to Broadwater’s lawyers, and then posted it on Medium. “I am sorry most of all for the fact that the life you could have led was unjustly robbed from you, and I know that no apology can change what happened to you and never will,” she wrote. “My goal in 1982 was justice,” she went on. “Certainly not to forever, and irreparably, alter a young man’s life by the very crime that had altered mine.” Bitch Media published an article titled “The Infuriating Failure of Alice Sebold’s Apology,” criticizing her for writing sentences in the passive voice. An article in UnHerd was titled “Alice Sebold’s Empty Apology: I’ve Never Believed a Word She’s Written.” On the day that she published her apology, Scribner, which had legally vetted the book and reissued it in 2017, announced that it would stop distributing “ Lucky. 
” Broadwater had assumed that Sebold knew about his attempts to prove his innocence, and just didn’t care, but when he learned that no one had kept her abreast of his legal ordeal he felt less at odds with her. A wrongful conviction leaves wreckage in more than one direction. “I thank the good Lord I made it to a point where I’m strong enough mentally to say, ‘Hey, it was the court. It was the system. It’s not the victim’s fault,’ ” he told me. Sebold had written that she shared her life with her rapist, but she had also foisted a kind of unchosen intimacy on a different man. The unspeakable nature of rape, which Sebold struggled with for many years, had become Broadwater’s burden, too. When people congratulated him on the exoneration, he said, they seemed not to realize that “I still carry the crime.” He never uses the word “rape.” “I won’t say exactly what it was,” he told me, “because that word is perplexing and humiliating, and it’s too hard on people.” By the end of December, 2021, the “Unlucky” documentary had come to a halt. The crew refused to continue working, saying that they’d gone for more than a month without being paid and were owed nearly a hundred thousand dollars. (Mucciante said that he was withholding funds because he deemed some expenditures improper, among other reasons.) Broadwater cut off contact, after a lunch meeting in which it seemed that Mucciante was focussed on the market value of a wrongful-conviction story. “I’d been thinking he was out for the goodness of proving my innocence, not knowing he had another agenda—profit, stuff like that,” Broadwater said. Brown, the producer of the movie “ Lucky ,” wondered if whatever psychological characteristics had made Mucciante capable of conning people had also made him a different kind of reader. 
“I think that normal people who are equipped to feel empathy read the first chapter about Alice’s rape—the most unimaginable horror you could possibly imagine—and become so fully on Alice’s side that you don’t pay attention to detail,” he said. “But he could see through the emotional clutter of the experience.” Sebold has a box in her house labelled “R,” for rape, where she keeps documents from the criminal proceedings, as well as her journals from that time. For the past year and a half, she has wanted to open it and reread the material but she finds that she can’t. Several times, when I asked about her memories of the trial—how she made sense of her certainty as an eighteen-year-old, for instance—she would try very hard to answer, straining to offer a helpful remark, but she would seem to shut down. She could discuss the exoneration on a broader level, but “it’s the details,” she said. “It’s the finding out of the details. I can’t dive into it without losing a sense of who I even am. My perceptions of other people, my trust in myself. That I can fuck up so badly and not even know it.” Broadwater was disappointed that Sebold had not yet asked to meet him in person, but Sebold said that, when it comes to “identity destruction,” she was pacing herself: she is working on sending him a letter first. She wants to directly confront the enormity of his trauma, which she said makes her own troubles feel comparatively small, but she is also aware that her brain is not yet in the place that she wishes it were in, to be ready for those granular details. From remarks that Broadwater made after the exoneration, she sensed that, despite everything he’d been through, he was a remarkable person, a fact that had made her feel both better and worse. 
In a room together, after forty years, Broadwater hoped to “compare notes,” so that he could understand how the district attorney’s office “duped her and kept her blind.” When she envisioned the meeting, she expected that language would fail. “We might do nothing but stare at the floor or weep,” she said. I thought that perhaps Sebold would have to repopulate her rape with a new face, to keep the memory intact, but she said she’d given up on the idea of narrative closure. She knew there was talk of other suspects who might have been her real rapist—“the ghost in this horror story,” as she described him—but she wasn’t sure she needed to know. She and Broadwater had both “gone from twenty years old to sixty years old in this time,” she said. “What most people consider the prime of their life has started and finished.” The window for making sense of it all through a story was over. The philosopher Susan Brison, in “Aftermath,” a book about her rape, describes how trauma “introduces a ‘surd’—a nonsensical entry—into the series of events in one’s life.” In the years after she was raped, Brison was always trying to keep the story of her attack straight, both to make sure that her rapist was found guilty and to regain a sense of control and coherence. 
In the book, she asks if holding on to one tight narrative may, “if taken too far, hinder recovery, by tethering the survivor to one rigid version of the past.” She wonders if, after mastering the story, “perhaps one has to give it up, in order to retell it, without having to ‘get it right,’ without fear of betraying it.” Sebold had always defined herself as a “ ‘books saved my life’ person,” she said, but, since the exoneration, she had found it impossible to “return to the place where I perceive words as inherently kind and playful.” Making sense of her trauma through writing was supposed to help make Sebold feel whole, a wish her writing professors encouraged, but, at a crucial moment when she was eighteen, her faith in literature may have got in the way of her ability to see and judge what was in front of her. Narratives about trauma can restore meaning so that the “surd” doesn’t just sit there, destroying a person’s beliefs about the world. But they can also provide unrealistic clarity, creating too singular a point of view, symmetries that don’t exist. “What I thought was the truth and wrote about as the truth—which then was validated year after year for 20+ years as a never out-of-print title—was not only NEVER the TRUTH , but the truth resided with Anthony B,” Sebold wrote to me. “He and his loved ones have held a lonely vigil all along.” Shortly after his exoneration, Broadwater sued the State of New York for wrongful imprisonment. He also filed a federal lawsuit for violation of his civil rights. “While a defendant would normally be left to speculate as to how a victim can pick out the wrong individual at a lineup but then be permitted to explain why they did so,” the state lawsuit said, “the victim here published a book explaining in detail the events just after the lineup.” In February, the state settled with Broadwater, for five and a half million dollars. He and Elizabeth are looking to buy a house. 
They want about ten acres of land, in the country, near Syracuse. Previously, only a handful of friends had ever invited Broadwater and Elizabeth over. Now neighbors were stopping by their house throughout the day. One of Broadwater’s brothers, whom he hadn’t heard from in more than a decade, had invited them to stay at his house. “I tell her, ‘There’s another reason and purpose for them inviting us now,’ ” Broadwater said, when I met him and Elizabeth at Hammond’s law office, in downtown Syracuse. Since the exoneration, little in Broadwater’s life has changed. He still has a self-imposed curfew of 7 p.m. , unless he is working. “I have to prevent myself from being in harm’s way,” he told me. Recently, when a student at Syracuse University was assaulted, he called his lawyer, panicked that he might become a suspect. “You get tense, you start sweating, and then the adrenaline comes,” he said. When I described Sebold’s sense that he was a remarkable person, he and Elizabeth began crying so hard that it took several minutes for them to start speaking again. I mentioned that Sebold wanted to write a letter to him. “I think it needs to be face to face,” Elizabeth said, barely audibly. “If she’s comfortable with it.” “I guess starting out with a letter would be pretty nice,” Broadwater said. When Sebold wrote about her experience, he added, she should know that “I was part of it—whatever she’s recollecting, each day and moment, I experienced it, too. I don’t think I can judge her pain, but I know that for me it was war,” he said, referring to the violence in prison. “I tell Liz, ‘I’m not normal,’ ” he said. Broadwater said that his psychiatrist at the V.A. center often asked him if he had suicidal thoughts, and recently it occurred to him that he no longer had to worry as much about being there for Elizabeth: she would be O.K. without him, because she could live on the money from the settlement. “Hmm,” Elizabeth said, sharply. 
“My psychiatrist says, ‘Don’t think like that,’ ” he said. Since his exoneration, Broadwater had finally been able to confide in his psychiatrist without worrying about whether his story would be believed. He could share the memories that were really haunting him. “Doubt,” he said softly. “It creeps in and goes back out.” ♦ New Yorker Favorites First she scandalized Washington. Then she became a princess. The unravelling of an expert on serial killers. What exactly happened between Neanderthals and humans ? When you eat a dried fig, you’re probably chewing wasp mummies, too. The meanings of the Muslim head scarf. The slippery scams of the olive-oil industry. Critics on the classics: our 1991 review of “Thelma & Louise.” Sign up for our daily newsletter to receive the best stories from The New Yorker. Weekly E-mail address Sign up By signing up, you agree to our User Agreement and Privacy Policy & Cookie Statement. This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. American Songbook By Bruce Handy In Uniform By Bob Morris Letter from Israel By David Remnick Poems By Melissa Ginsburg Sections News Books & Culture Fiction & Poetry Humor & Cartoons Magazine Crossword Video Podcasts Archive Goings On More Customer Care Shop The New Yorker Buy Covers and Cartoons Condé Nast Store Digital Access Newsletters Jigsaw Puzzle RSS About Careers Contact F.A.Q. Media Kit Press Accessibility Help © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights. The New Yorker may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast. 
Ad Choices Facebook X Snapchat YouTube Instagram Do Not Sell My Personal Info "
13,569
2,023
"The Accidental Truthtellers of the Post-Privacy Era | The New Yorker"
"https://www.newyorker.com/books/under-review/the-accidental-truth-tellers-of-the-post-privacy-era"
"Newsletter To revisit this article, select My Account, then View saved stories Close Alert Search The Latest News Books & Culture Fiction & Poetry Humor & Cartoons Magazine Puzzles & Games Video Podcasts Goings On Shop Open Navigation Menu Find anything you save across the site in your account Close Alert Under Review The Accidental Truthtellers of the Post-Privacy Era By Peter C. Baker Facebook X Email Print Save Story Illustration by Ard Su Save this story Save this story Save this story Save this story It is only in the penultimate chapter of Kerry Howley’s “ Bottoms Up and the Devil Laughs: A Journey Through the Deep State ” that we properly get to Satan. The previous two hundred pages have traced an odyssey through the post-9/11 American security state, searching for rhymes and resonance among the lives of its whistle-blowers, accidental truthtellers, targets, and victims—and also the rest of us, tapping at our phones, constantly feeding data onto the Internet, aware that it’s all accumulating somewhere, much of it accessible to the government. The book is full of suggestive swerves and leaps of association, Howley’s attempts at getting us to look again at subjects and stories that might have been shocking once upon a time, but which we’ve grown used to living with. When the devil shows up, we’ve been reading about Reality Winner, who, when twenty-five years old and working as a National Security Agency contractor, leaked to the press a document about Russian cyberattacks on U.S. election officials. Right after narrating the 2018 hearing at which Winner was sentenced to five years and three months in prison, Howley cuts to the Monster Energy drink company, describing a 2014 viral video of a woman with a story to sell: She has perfected her argument—that Monster Energy drinks are primarily a vehicle for Satan—into a crisp patter punctuated by forays into Hebrew, textual analysis, paranoid semiotics, and moments of well-timed eye contact. 
There is in her presentation a genuinely remarkable union of speech and physicality. She is tremendously good at this. “If God can use people and product,” she says, folding up an illustrative poster and picking up a can, pointing with crisp and clean gestures that focus the attention precisely where she intends, “so can Satan. You cannot deny that that is a cross.” She points to a cross on the can. “And what is witchcraft? When the cross”—meaningful pause—“goes upside down.” Here she inverts the can, as if taking a sip. “Bottoms up,” she says, tips the can a bit. “And the devil laughs.” Howley presents the video as a harbinger of our present—its subject an amateur conspiracist piecing together far-flung fragments into an absurd, morally Manichaean fiction; its Internet-friendliness and focus on Satan presaging QAnon and other twenty-first-century conspiracy theories. But the video is also a useful (and, I suspect, intentional) point of comparison for Howley’s own project. Picking the best fragments from the infinite universe, stitching them together, making formal and stylistic choices in hopes of encouraging the audience to cast off conventional wisdom and move closer to the truth as you see it: the work of conspiracy-mongers online and the work of writers of critically acclaimed narrative nonfiction have considerable overlap. “All narrators, I say, are fiction,” Howley wrote in her first book, “ Thrown ,” which was about mixed-martial-arts fighters, and concerned with questions of artifice and truth. “ All. The reliable ones have the decency to admit it.” We can’t say we weren’t warned. To the extent that “Bottoms Up” has a main character, it’s Winner, whom Howley profiled for New York magazine in 2017, and who represents, to a remarkable degree, a convergence of the author’s preoccupations in a single, colorful character (with, as a writerly bonus, a name straight from Thomas Pynchon). 
When Reality—as Howley typically refers to her heroine—is on the page, we feel the intimacy of a novel. Born in small-town Texas, Winner started teaching herself Arabic after 9/11 —she was nine years old at the time—hoping to one day use the language to prevent another attack. After high school, she joined the Air Force and became fluent in Farsi, Pashto, and Dari. She was assigned to support drone operations abroad from Fort Meade in Maryland. She worked long days, watching people from above, and interpreting their conversations to help decide whether they would be killed. The work was classified, so she couldn’t talk about it with family or friends. In 2016, she left the military and started applying for jobs with N.G.O.s in Afghanistan. She never got one, perhaps because she had no college degree. In 2017, Winner took a job with Pluribus, an N.S.A. contractor, using her language skills to translate classified documents related to Iranian airspace—subject matter in which she had little interest. She worried about where her life was going, what contribution she was making. She soothed her anxieties with weight lifting and yoga. At home, she had a dog and a cat and several guns, including a pink AR-15. She worried about global warming and racial inequality. She hated Trump. At work, TVs everywhere were set to Fox News, and she hated that, too. In a note to herself—later confiscated by the F.B.I. and presented as evidence at her trial—she wrote, “I want to burn the White House down. Find somewhere in Kurdistan to live or Nepal. Ha, ha. Maybe.” In a message to her sister, she wrote, “I have to take a polygraph test where they’re going to ask if I plotted against the government. #gonnafail [. . .] Look I only say I hate America three times a day. I’m no radical.” One day, bored at work and browsing the ever-expanding world of classified material available via her work computer, Winner came across a five-page document about the 2016 election. 
Russian intelligence, this document said, had orchestrated a cyberattack that sent malware to hundreds of U.S. election officials, attached to e-mails that looked like they’d originated from a reputable election-software company. Winner experienced a yawning disconnect between the world that her security clearance let her access and the world that everyone outside her professional life seemed to think they were living in. She folded up a printout of the document detailing the cyberattack, stuffed it in her pantyhose, took it home, and mailed it to the left-leaning investigative news outlet the Intercept, which used a bafflingly sloppy process to verify the document’s legitimacy. On the same day the Intercept published their story , the F.B.I. put out a statement: Winner was in custody. She was eventually imprisoned, then transferred to a halfway house in 2021 for good behavior; she is under probation until 2024. Howley’s central claim—the thread with which she seeks to string her wide-ranging material, including Winner’s story—is that American life since the early two-thousands has been increasingly marked by two developments. First: the mass collection of personal data by the state, with help from tech companies. Second: most individuals’ daily participation in this process, as we deposit more and more digital fragments of our life on the Internet, some of it notionally private, some posted publicly, accessible to the government. This creates a near-infinite well of details that can be used to tell stories about us. These stories—“sticky fiction” pieced together from facts, but not wholly true—get built to rationalize decisions about who gets charged with crimes, who gets tortured, who gets targeted by drone strikes, who gets imprisoned, and for how long. Before the leak, Winner lived primarily on one side of this process, helping the state watch and listen and assemble its stories. 
Afterward, she was on the other side—someone the state wanted to tell one particular story about, in the service of punishment. In this story, supported amply by choice fragments (“I only say I hate America three times a day”), she was one thing: a young woman long determined to harm America. In Howley’s telling, Winner is, like all of us, many things. We contain multitudes. We communicate in different registers depending on whom we’re communicating with. We say things we don’t mean, or half-mean, or say things to see how it feels to say them, and learn something about how our minds work. Human multiplicity, and irreducibility to data points, seems to be Howley’s North Star, the principle that she is invested in more than any other, and is writing to protect. (When she describes Winner’s mother as “incredibly cagey and deeply opaque, among the most unknowable people I have ever tried to capture in writing,” it sounds like a compliment.) We change our minds, and we forget, and we lose parts of ourselves (sometimes intentionally, sometimes not), and we call this process becoming a self. It’s as if Howley, in profiling Winner with such care, is trying to wash off the state’s sticky, simplifying fiction and reveal the human underneath—suggesting how easily anyone could be reduced to a version of themselves they wouldn’t recognize. The extent to which you’d share this concern might depend on your baseline feelings about the power of the government. Howley worked and has written for years at the libertarian magazine Reason ; in “Bottoms Up,” the L-word shows up only in passing, but its traces are apparent in the insistence that we would all do well to treat the state—any state, but especially the sprawling post-9/11 American security apparatus—with a reflexive wariness, even if we believe we have nothing to hide. 
“The radical transparency we have accepted, step by step, these past years, is a bet we have made: that we and the people with the guns and cages will stay on good terms,” she writes. Even though Winner is the center of Howley’s narrative mosaic, she doesn’t appear much in the first hundred pages. Instead, quickly drawn character studies form a montage of the post-9/11 era in which Winner came of age, the mood board of her psychocultural inheritance. There’s John Walker Lindh, the American who, at twenty, joined the Taliban, and, after being captured on the battlefield in Afghanistan, became a national symbol of treachery. There’s Abu Zubaydah, the Saudi man mistakenly thought to be a high-ranking Al Qaeda member, and as a result subjected to a nightmarish cycle of confinement and torture, many details of which have been destroyed. (When dealing with the C.I.A., it turns out, there may be no receipts to keep.) There are whistle-blowers of different temperaments and ideological inclinations, including John Kiriakou—a former C.I.A. officer who characterized waterboarding as an effective interrogation technique, but who also went to prison for leaking to a journalist the name of a colleague allegedly connected to the government’s torture program—and Chelsea Manning, who sent troves of classified data to WikiLeaks. There’s Joe Biggs, who joined the Army after 9/11 and got sent to Iraq and Afghanistan, and later became a correspondent for the conspiracy-weaving Infowars, then an organizer for the Proud Boys, charged with seditious conspiracy relating to the attack on the U.S. Capitol on January 6, 2021. (Biggs has pleaded not guilty.) Julian Assange gets a few pages, too. Howley doesn’t strongly demonstrate how these stories relate to one another or to her stated arguments, which roam far beyond her central point about the state’s digitally supercharged storytelling powers and into the broader culture. 
Americans, she asserts, have lost touch with the skill of defining themselves as individuals; relatedly, they have “lost the knack for anonymity.” Our lives have become not less physical per se—screens and the like are as physical as anything else—but less sensual, and this has created for some a hunger for extreme physical experiences. “Bottoms Up” is peppered with such confident aphorisms, and it’s often hard to tell whether Howley is working to validate these claims or whether she assumes that readers will accept as common sense her descriptions of the world. Focussing on them in isolation, I sometimes felt skeptical. Did we ever really—before smartphones, before cloud storage—construct our selves through some fundamentally private, internal process? Haven’t individuals’ identities always been bound up with the gaze of others—sometimes experienced as a comfort, sometimes as oppressive or burdensome, sometimes as both? In her attempt to connect government surveillance to the way we live now, Howley leans especially hard on the way our old opinions, stray thoughts, and jokes accumulate online, and how these decontextualized pieces of our former selves can make trouble for our careers and reputations. Though it’s true that the practice of combing through someone’s old blog posts and tweets looking for ammunition does bear resemblance to how the government might seek to cook up a case against a target, concerns about malicious and ungenerous misinterpretation of the words we leave online are, it seems safe to say, much more salient to a professional writer like Howley than to the average citizen. As a fellow nineties kid, I sometimes wondered whether Howley was placing outsized significance on gauzy nostalgia for what it meant to be a teen-ager before smartphones, selfies, and social media. I get it: I do the same thing. 
It’s easy to forget that the challenge of becoming a person has always meant negotiating pressures between inside and outside, self and group, the stories you tell about yourself and the stories the world tells about you. Digital technologies have, of course, changed the texture of these negotiations—but technology didn’t create them. Similarly, state institutions have told powerfully distorting stories about their targets—and used those stories to justify injustice—since long before anyone ever sent an e-mail or left a trail of metadata. For another, more conventional book, objections like this might be fatal. But “Bottoms Up” proceeds less by the sequential logic of the proof, or of typical journalism, and more by the associative logic of the mood board. And it works. Howley’s prose reminded me of Don DeLillo’s, not just in its preternatural attunement to invisible currents of feeling which course between varied pockets of the globalized American project, but also in the feeling that she’d taken her experience of the world and melted it down into a weapon meant to puncture our hardened habits of perception. “American surveillance is partly made of electrons and partly made of tubes and partly made of dogs,” Howley writes. This sentence is the sharp tip of a longer passage designed to remind us that, when we hear about surveillance on the news, we’re hearing not just about software programs running silently through the cloud. We’re hearing about thousands of middle-class Americans working in generic-looking office buildings in generic-looking office parks, and also—at the N.S.A. server farm in Utah, anyway—a crew of on-site security dogs, with, presumably, a stash of dog food and poop bags. Stories about government surveillance have a strange tendency to make an abstract concept even more abstract. Dogs, in Howley’s hands, deliver a jolt of the real. 
The more of these jolts Howley produces, the more “Bottoms Up” restores the world to something akin to its original strangeness. It’s a daring approach, and an invaluable one: seeing the world anew makes it feel, in some small way, up for grabs, and this feeling is a precondition for real thought. I still haven’t decided whether I agree with at least half of Howley’s arguments, or the arguments suggested by her method. In writing as in intelligence analysis, there’s always the possibility that the connection you’re tracing isn’t actually there, or isn’t meaningful (bottoms up!). But that’s hardly a reason not to try. As penance for its mishandling of her leak, the Intercept’s parent company funded Winner’s legal team, which eventually ran up a two-million-dollar bill. Thanks to the byzantine workings of security clearance, she was able to speak with her legal team only on the rare occasions when everyone involved—including lawyers in Washington, local counsel in Georgia, and Winner herself—could all get to secure-communications facilities at the same time. Details of her case, including the content of the document she’d leaked, were routinely discussed in the press, but her lawyers were afraid that, by accessing it themselves, they would run afoul of regulations on the handling of classified material. Winner never became a celebrity on the level of Edward Snowden or Chelsea Manning, perhaps in part because of whistle-blower fatigue, but also, it seems possible, because she fell between political fault lines. No one on the right wanted to support someone providing possible evidence to the Russia-Trump collusion narrative, and many on the left didn’t, either, viewing that narrative as an attempt by the Democratic establishment to dodge responsibility for its losses. Her mom, Billie, tried holding a protest on her behalf in her Texas home town, but only twenty people came. 
The longer we stay with Winner, the more we sense Howley’s admiration for her subject—less for the impulsive leak itself, and more for the strength of her commitment to figuring out, however imperfectly, the world and her place in it. This explains why we hear so much about Winner’s time behind bars. (In fact, just from reading Howley’s book, you wouldn’t know Winner has been released.) We read in granular detail about the needless, punitive difficulty of simple visits. We read about the horrible food. We read about Winner asking her mother to give money to her fellow-inmates, leading in yoga classes, worrying about the health of a pregnant peer, arguing with a Bible instructor. These are, we come to understand, examples of Winner trying, under extraordinary pressure, to remember who she is and to work out who she’s going to be—or, more modestly, to remind herself that she’s alive. Sometimes, this means sticking her tongue out at the experience of constantly being watched, her every mood and appearance scrutinized by someone else. Like her courtroom sketch artist: Baker Donelson filed and filed and filed again. Reality was ferried back and forth to the courthouse, which she hated, because these were days in which she would not get her time outside. When she was back at Lincoln the women would gather around the television to watch the evening’s local news, on which a story about Reality would appear. This story was inevitably accompanied by a color sketch. These sketches looked nothing like her; the face long and hard and mature, the nose aquiline; her youthful moon face replaced with a kind of sinister bony angularity. She grew to recognize the sketch artist in court: big white hair, big bushy eyebrows, a set of tiny binoculars, opera glasses, through which to watch her. He looked, to her, like “Jim Carrey dressed as Mark Twain in Bruce Almighty. ” Once, seated next to her attorney Matt Chester, she felt the sketch artist’s eyes on her. She picked up Chester’s pen. 
She stared directly at the sketch artist staring at her. She began to draw. “No,” said Chester. In jail, on the phone with her sister, and aware their call is being surveilled and transcribed, they amuse themselves by saying “vodka” to each other again and again in a Russian accent. Sometimes they phone up the White House, and ask for Donald Trump. These are small moments of silliness, but they contribute to an intimate portrait of Winner as someone fighting to be meaningfully human under constraints of the very sort that disturb Howley most. In her introduction, Howley recalls being pregnant and worrying about the world her daughter would be born into. “I despaired many times [. . .] about my ability to protect the thing I was growing from a world that had abandoned walls, that asserted its right to invade, to amass electrons against wholeness, that had forgotten what it was like to construct a self in the dark.” She wants things to be different, but isn’t sure how. “She is here now, in the world, and there is nothing to do but help her remember.” The hope Howley is expressing is that, by trying as hard as possible to figure out exactly what story she wants to tell, and how to tell it, a writer can come to know something she didn’t before, and that, in the private space of communion with the text, the reader will learn something new, too, and feel the possibility of learning more. Like Winner’s small acts of resistance, this is a modest hope, but a fierce one. ♦ 
How to Be Blind | The New Yorker
https://www.newyorker.com/culture/the-weekend-essay/how-to-be-blind
The Weekend Essay: How to Be Blind. By Andrew Leland. Illustration by Camille Deschiens. I first noticed something wrong with my eyes in New Mexico. I was a freshman in high school, in the mid-nineties, and had recently been accepted into a clique of older kids whom I admired—the inner circle of Santa Fe Prep’s druggie bohemian scene. We hung out at Hank’s house; he was our charismatic leader, and his mom was maximally permissive. One night, in Hank’s room, our friend Chad sat on a beanbag chair, packing a pipe with weed. Nina danced alone in front of a boom box to Jane’s Addiction, throwing around her bleached hair. After dark, we hiked up the hill behind the house to get a view of the city. The moon was bright, but I found myself tripping on roots and stones and wandering off track. At one point, I walked right into a piñon tree with prickly branches. My friends laughed—“You’re stoned, aren’t you?” Chad said—and I played up my intoxication for effect. But, on the way down, I quietly put a hand on Hank’s shoulder. This became common. At the movies, I got up to get a soda, and, when I returned, I couldn’t find my mother in the rows of featureless bodies. I complained about night blindness, but my mother assured me it was normal—it was dark out there! Eventually, though, she brought me to see an eye doctor. After a series of tests, he sat us down and said that I had retinitis pigmentosa, or R.P., a rare disease affecting about a hundred thousand people in the U.S. 
As the disease progressed, the rod cells around the edges of my retina would die, followed by the cones. My vision would contract, like looking through a paper-towel roll. By middle age, I’d be completely blind. The doctor asked if I could see stars, and I said that I hadn’t seen them in years. This was the detail that made it real for my mother. “You can’t see stars?” she asked. I spent my teen-age years mostly in denial: my blindness seemed distant, like fatherhood, or death. But in my thirties the disease caught up with me. One morning, I swung my car into a crosswalk and heard—and felt—something slamming my hood: I had almost hit a pedestrian, and he was banging my car with his fist, shouting, “Open your fucking eyes!” Soon after, I almost hit a cyclist, and I gave up driving. One weekend, while living in Missouri, I found that I had lost my sunglasses. My wife, Lily, was out of town, so I decided to walk to a nearby LensCrafters. But what was normally a ten-minute drive became a harrowing ordeal on foot. There were few sidewalks, so I walked in the road, with cars speeding past. The sun and haze made it hard to see. I stood for a long time at a large intersection, trying to turn left without getting hit by a truck. In 2011, I ordered an I.D. cane, used less for tapping around than to signal to the world that its bearer might not see well. It folded up, and mostly I hid it in my bag. But, after running into fire hydrants and hip-checking a toddler in a café, I began using it full time. Reading became difficult: the white of the page took on a wince-inducing glare, and the words frosted over, like the lowermost lines on the optometrist’s eye chart. It was only once I’d reached this stage that my diagnosis started to feel real. I frantically wondered whether I should use my last years to, say, visit Japan, or plow through the Criterion Collection, instead of spending my evenings watching “Crazy Ex-Girlfriend” with Lily. One night, I lay awake in bed. 
I knew that, if Lily were awake, she’d be able to see the blankets, the window, the door, but, when I scanned the room, I saw nothing, just the flashers and floaters that oscillated in my eyes. Is this what it will be like? I wondered, casting my gaze around like a dead flashlight. I felt like I’d been buried alive. In 2020, I heard about a residential training school called the Colorado Center for the Blind, in Littleton. The C.C.B. is part of the National Federation of the Blind, and is staffed almost entirely by blind people. Students live there for several months, wearing eye-covering shades and learning to navigate the world without sight. The N.F.B. takes a radical approach to cultivating blind independence. Students use power saws in a woodshop, take white-water-rafting trips, and go skiing. To graduate, they have to produce professional documents and cook a meal for sixty people. The most notorious test is the “independent drop”: a student is driven in circles, and then dropped off at a mystery location in Denver, without a smartphone. (Sometimes, advanced students are left in the middle of a park, or the upper level of a parking garage.) Then the student has to find her way back to the Colorado Center, and she is allowed to ask one person one question along the way. A member of an R.P. support group told me, “People come back from those programs loaded for bear”—ready to hunt the big game of blindness. Katie Carmack, a social worker with R.P., told me, of her time there, “It was an epiphany.” That fall, I signed up. In 1966, the sociologist Robert Scott spent three years visiting agencies for the blind for his book “The Making of Blind Men.” Most of these agencies, whose methods were based on the training programs developed for veterans after the First World War, took an “accommodative approach”: they believed that clients could never be truly independent, and strove only to keep them safe and comfortable. 
The agencies installed automated bells over their front doors so that residents could easily find the entrance from the street, served pre-cut foods, and gave out only spoons. They celebrated clients for the tiniest accomplishments, with the result that, as Scott put it, “many of them come to believe that the underlying assumption must be that blindness makes them incompetent.” Blind education already had a fraught history. The first secular institution for the blind—the Hospice des Quinze-Vingts, established by King Louis IX of France around 1260—housed residents, but required them to beg on the streets for bread. Blind people were popularly depicted as lecherous, duplicitous, and drunk. The first schools that actually tried to teach blind students were established in the eighteenth century. Catherine Kudlick, a disability historian, pointed out that this was during the height of the Enlightenment, when there were discussions about educating women and people from the lower classes. “The idea was to give them the tools so that they could become educated members of society,” she said. But, in their determination to prepare students for employment, many schools, like other institutions at the time, came to resemble sweatshops, making blind children spin wool and grind tobacco for subminimum wages. The best institutions Scott visited were those that followed the philosophy of Father Thomas Carroll, a Catholic priest who worked at the Army’s rehabilitation centers during the Second World War, where many innovations—including the long white cane—were first developed. Carroll argued that the average blind person is capable of some independence. His students took fencing lessons, which he thought helped with balance. But Carroll took a surprisingly grim view of blindness. “Loss of sight is a dying,” he wrote. His students, he believed, would always be significantly impaired. 
One student who recently attended the Carroll Center, in Newton, Massachusetts, told me that he felt coddled there. “I didn’t feel a lot of independence,” he said. “We go to these places because we want to level up our independence, and be pushed to the edge. We need that.” Carroll’s philosophy met its sharpest critic in Kenneth Jernigan, the president of the National Federation of the Blind. The N.F.B. was founded, in 1940, as an organization of and not for the blind: its constitution mandated that a majority of its chapter members had to be blind. Jernigan rejected Carroll’s Freudian sense of blindness—Carroll had described it in terms of castration—in favor of a civil-rights approach. Blindness, he insisted, was merely a characteristic, like hair color; it was an intolerant society that was disabling. He organized protests against airline policies that forced blind passengers to sit in handicap seats and give up their canes; his followers held sit-ins on planes, and were physically carried off by police. In the fifties, Jernigan and his colleagues proposed an experiment: the N.F.B. would take control of a state agency for the blind in Iowa—which a federal study had rated one of the worst in the country—and reinvent it. At this center, and those which followed, blind teachers took students waterskiing and rock climbing. At traditional agencies, blind students (but not instructors) were addressed by their first names. Jernigan mandated that his students be addressed as “Mr.” and “Ms.” as a sign of respect. N.F.B. employees followed a strict dress code: ties and jackets for men, skirts for women. Bryan Bashin, the former C.E.O. 
of the San Francisco LightHouse for the Blind, one of the largest blindness agencies in the U.S., compared this to the suited brothers in the Nation of Islam: “We were not going to give our oppressors the right to say we’re sloppy or unprofessional.” Blindness agencies traditionally taught students to travel by route memorization: walk down the block for fifty-five paces, and the entrance to the café is on your right. Jernigan pointed out the obvious flaw: you were at a loss as soon as you travelled or the coffee shop closed. The N.F.B. developed a method that came to be known as “structured discovery”: students learn to pay attention to their surroundings and use the information to orient themselves. Instructors were constantly asking Socratic questions, such as “What direction do you hear the traffic coming from?” and “Can you feel the sun warming one side of your face?” Bashin told me, of what he learned by spending a year at a center, “Confidence isn’t a deep enough word. It’s a faith in your ability to figure it out.” He added, “Until you get profoundly lost, and know it’s within you to get unlost, you’re not trained—until you know it’s not an emergency but a magnificent puzzle.” Students were pushed out of their comfort zones. Gene Kim, a recent C.C.B. graduate, told me that, for his independent drop, he was let off at some place resembling a hospital. He spent hours crossing bridges, “weird islands and right-turn lanes, weirdly cut curbs.” He was on the verge of giving up when he heard a dinging sound, and followed it to a light-rail train that took him home. The experience, he said, helped him make peace with the “relentless uncertainty” of blind travel. The historian Zachary Shore, on the other hand, got so lost on his independent drop that he stubbornly picked a direction and just kept walking. Police officers stopped him when he was about to walk onto a highway, and gave him a ride back to the center, where the director told him, “You failed this time. 
But we’re gonna make you do it again—and you will do it. I know you can do it. And we’re going to give you an even harder route.” (On his second try, Shore found his way back.) Sometimes teachers crossed a line. In 2020, dozens of students alleged that staff at N.F.B. centers had bullied them, sexually harassed or assaulted them, or made racist remarks. Many students at the centers had, in addition to blindness, a range of other disabilities: hearing loss, mobility impairments, cognitive disabilities. Some reported being mocked for having impairments that made the intense mental mapping required by blind-cane travel a challenge. Bashin ascribed this to the fact that blind people, like any collection of Americans, regrettably included their share of racists, abusers, and jerks. He said, of the N.F.B., “As a people’s movement, it looks like the U.S. It is a very big tent, and it is working to insure respect for all members.” But a group of “victims, survivors, and witnesses of sexual and psychological abuse” wrote an open letter in the wake of the allegations, blaming, in part, the N.F.B.’s tough methods. “What blind consumers want in the year 2020 is not what they may have wanted in previous decades,” they wrote. “We don’t want to be bullied or humiliated or have our boundaries pushed ‘for our own good.’ ” The N.F.B. has since launched an internal investigation and formed committees dedicated to supporting survivors and minorities. Jernigan once mocked Carroll’s notion that blind people needed emotional support, but the N.F.B. now maintains a counselling fund for members who endured abuse at its centers or any of its affiliated programs or activities. Julie Deden, the director of the Colorado Center, told me, “I’m saddened for these people, and I’m sorry that there’s been sexual misconduct.” She is also sad that people felt like they were pushed so hard that it felt like abuse, she noted. “We don’t want anyone to ever feel that way,” she said. 
But, she added, “If people really felt that way, maybe this isn’t the program for them. We do challenge people.” Ultimately, she said, she had to defend her staff’s right to push the students: “Really, it’s the heart of what we do.” The twenty-four units at the McGeorge Mountain Terrace apartments are all occupied—music often blasts from a window on the second floor, and laughter wafts up by the picnic tables—but there are no cars in the parking lot, because none of its residents have driver’s licenses. The apartments house students from the Colorado Center. At 7:24 A.M. every weekday, residents wait at the bus stop outside, holding long white canes decorated with trinkets and plush toys, to commute to class. I arrived at the center in March, 2021. When the receptionist greeted me, I saw her gaze stray past me. Nearly everyone in the building was blind. In the kitchen, students in eyeshades fried plantain chips, their white canes hanging on pegs in the hall. In the tech room, the computers had no monitors or mouses—they were just desktop towers attached to keyboards and good speakers. A teen-ager played an audio-only video game, which blasted gruesome sounds as he brutalized his enemies with a variety of weapons. When I met the students and staff, I was impressed by blindness’s variety: there were people who had been blind from birth, and those who’d been blind for only a few months. There were the greatest hits of eye disease, as well as a few ultra-rare conditions I’d never heard of. Some people had traumatic brain injuries. Makhai, a self-described stoner from Colorado, had been in a head-on collision with a Ford F-250. Steve had been working in a diamond mine in the Arctic Circle when a rock the size of a two-story house fell on top of him, crushing his legs and blinding him. Alice, a woman in her forties, told me that her husband had shot her. 
She woke up from a coma and doctors informed her that she was permanently blind, and asked her permission to remove her eyeballs. “I never mourned the loss of my vision,” she told me. “I just woke up and started moving forward.” She said that she’d had a number of “shenanigans” at the center, her word for falls, including a visit to the emergency room after she slipped off a curb and slammed her head into a parked truck. At the E.R., she learned that she had hearing loss, too, which affected her balance; when she got hearing aids, her shenanigans decreased. Soon after, my travel instructor, Charles, had me put on my shades: a hard-shell mask padded with foam. (Later, the center began using high-performance goggles that a staffer painstakingly painted black, which made me feel like a paratrooper.) I was surprised by how completely the shades blocked out the light—I saw only blackness. I left the office, following the sound of Charles’s voice and the knocking of his cane. “How are you with angles?” he said. “Make a forty-five-degree turn to the left here.” I turned. “That’s more like ninety degrees, but O.K.,” he said. Embarrassed, I corrected course. With shades on, angles felt abstract. On my way back to the lobby, I got lost in a foyer full of round tables. Later, another student, Cragar Gonzales, showed me around. He’d fully adopted the N.F.B.’s structured-discovery philosophy, and asked constant questions. “What do you notice about this wall?” he said. This was the only brick wall on this floor, he told me, so whenever I felt it I’d instantly know where I was. By the end of the day, though, I still wasn’t able to get around on my own. I felt a special shame when I had to ask Cragar, once again, to bring me to the bathroom. That afternoon, I followed Cragar to lunch. 
He had compared the school’s social organization to high-school cliques, except that the wide age range made for some unlikely friendships; a few teen-agers became drinking buddies with people pushing fifty. A teen-ager named Sophia told me that so many people at the center hooked up that it reminded her of “Love Island”: “People come in and out of the ‘villa.’ People are with each other, and then not.” Within a few days, I started hearing gossip about students throughout the years who had sighted spouses back home but had started having affairs. Some of the students had lived very sheltered lives before coming to the program: classes brought together people with Ph.D.s and those who had never learned to tie their shoes. One staff member told me that some students arrive with no sex education, and there are those who become pregnant soon after arriving at the center. I’d heard that some people find wearing the shades intolerable, and make it to Colorado only to quit after a few days. I found it a pain in the ass, but also fascinating—like solving Bashin’s “magnificent puzzles.” On the same day that I arrived, I’d met a student nicknamed Lewie who had a high voice, and I spent the day thinking he was a woman. But people kept calling him man and buddy, and, with some effort, I reworked my mental image. Lewie had cooked a meal of arroz con pollo. I felt nervous about eating with the shades on, but I found it less difficult than I expected. Only once did I raise an accidentally empty plastic fork to my lips. At one point, I bit into what I thought was a roll, meant to be dipped in sauce, and was sweetly surprised to find that it was an orange-flavored cookie. I began to think of walking into the center each day as entering a kind of blind space. People gently knocked into one another without complaint; sometimes, they jokingly said, “Hey, man, what’d you bump into me for?”—as if mocking the idea that it might be a problem. 
Students announced themselves constantly, and I soon felt no shame greeting people with a casual “Who’s that?” Staff members were accustomed to students wandering into their offices accidentally, exchanging pleasantries, then wandering off. One day, I was having lunch, and my classmate Alice entered, then said, “Aw, man, why am I in here?” I learned an arsenal of blindness tricks. I wrapped rubber bands around my shampoo bottles to distinguish them from the conditioner. I learned to put safety pins on my bedsheets to keep track of which side was the bottom. I cleaned rooms in quadrants, sweeping, mopping, and wiping down each section before moving on. I had heard about a gizmo you could hang on the lip of a cup that would shriek when a liquid reached the top. But Cragar taught me just to listen: you could hear when a glass was almost full. In my home-management class, Delfina, one of the instructors, taught me to make a grilled-cheese. I used a spoon on the stove like a cane to make sure the pan was centered without torching my fingers. Before I flipped the sandwich, I slid my hand down the spatula to make sure the bread was centered. When I finished, I ate it hungrily; it was nice and hot. One weekend, I went with a group of students to play blind ice hockey. The puck was three times the size of a normal puck, and filled with ball bearings that rattled loudly. On St. Patrick’s Day, we went to a pub and had Irish slammers. One day, Charles took me and a few other students to Target to go grocery shopping. This was my first time navigating the world on my own with shades, and every step—getting on the bus, listening to the stop announcements—was distressing. When we got to Target, we were assigned a young shopping assistant named Luke. He pulled a shopping cart through the store, as we hung on, travelling like a school of fish. Charles had invited me to his apartment for homemade taquitos, and I asked Luke to show us the tortilla chips. 
He started listing flavors of Doritos—Flamin’ Hot, Cool Ranch. “Do you have ‘Restaurant Style’?” I asked, with minor humiliation. At the self-checkout station, I realized that I couldn’t distinguish between my credit and debit cards. “Is this one blue?” I asked, holding one up. “It’s red,” Luke said. I couldn’t bring myself to enter my PIN with shades on, so I cheated for my first and only time, and pulled them up. The fluorescent blast of Target’s interior made me dizzy. I found my card, and then quickly pulled the shades back down. We retraced our steps back to the bus stop. As we got closer, we heard the unmistakable squeal of bus brakes. “Go to that sound!” Charles shouted, and we ran. I wound up hugging the side of the bus and had to slide to the door. When I made it to my seat, I was proud and exhausted. One day, after class, I headed back to the apartments with Ahmed, a student in his thirties. Ahmed has R.P., like me, but he had already lost most of his vision during his last year of law school. He’d managed to learn how to use a cane and a screen reader, which reads a computer’s text aloud, and still graduate on time. But his progression into blindness took a steep toll. After he passed the bar, he moved to Tulsa, where he had what he describes as a “lost year.” He deflected most of my questions about what he did during that time, only gesturing toward its bleakness. “But why Tulsa?” I asked. “Because it was cheap,” he said. He knew no one in the city. He just needed a place to go and be alone with his blindness. With apologies to a city that I’ve enjoyed visiting, after listening to Ahmed, I began to think of Tulsa as the depressing place you go when you confront the final loss of sight. When would I move to Tulsa? The public perception of blindness is that of a waking nightmare. 
“Consider them, my soul, they are a fright!” Baudelaire wrote in his 1857 poem “The Blind.” “Like mannequins, vaguely ridiculous, / Peculiar, terrible somnambulists, / Beaming—who can say where—their eyes of night.” Literature teems with such descriptions. From Rilke’s “Blindman’s Song”: “It is a curse, / a contradiction, a tiresome farce, / and every day I despair.” In popular culture, Mr. Magoo is cheerfully oblivious to the mayhem that his bumbling creates. Al Pacino, in “Scent of a Woman,” is, beneath his swaggering machismo, deeply depressed. “I got no life,” he says. “I’m in the dark here!” Many blind people (including me) resist using the white cane precisely because of this stigma. One of the strangest parts of being legally blind, while still having enough vision to see somewhat, is that I can observe the way that people look at me with my cane. Their gaze—curious, troubled, averted—makes me feel like Baudelaire’s somnambulist, the walking dead. In response to this, blind activists have pushed the idea that blindness is nothing to grieve—that it’s something to be celebrated. “Blindness is not a tragedy,” the writer and former C.C.B. counsellor Juliet Cody said. “It’s a positive opportunity to have faith and believe in yourself.” I find this notion appealing, even liberating. But I’ve also struggled to force myself into an epiphany. When I’m honest with myself, I find that I’m already mourning the loss of small things: the ability to drive my son to the mountains for a hike, or to browse the stacks in a library. Cragar told me that, when his vision loss began to accelerate, he told his family that he wasn’t scared—that he was ready. 
But he admitted to me that he wasn’t so sure: “I say that, but do I really know?” Tony, another student I met at C.C.B., told me that, when he realized he could no longer see the chalkboard in his college classes, he retreated to his dorm room, flunked out, moved back in with his father, and spent his disability money on weed, to numb out. “I hit some very dark chunks,” he told me. One night, in Colorado, I heard a student say, “When I lost my vision, I didn’t leave my bed for a month.” In my weeks at the center, I began to suspect that consolation lies not in any moment of catharsis but in an acknowledgment of blindness’s ordinariness. The Argentinean writer Jorge Luis Borges wrote that blindness “should not be seen in a pathetic way. It should be seen as a way of life: one of the styles of living.” Accepting blindness’s difficulty allows one to move on. “Life is never meant to be easy,” Erik Weihenmayer, the first blind person to climb Mt. Everest, wrote in his memoir, “Touch the Top of the World.” “Ironically when I finally accepted this reality, that’s when life got easy.” Under sleep shades, I found that I could read, write, cook, travel. There was frustration, but this wasn’t unique to blind life. At one point, as I listened to the chatter of a cafeteria full of blind people, I thought, How strange that I’m still myself. I’d worried over stories of people unable to handle total occlusion, but, in the moment, it felt surprisingly normal. I began to appreciate the novel experiences that blindness gave me. The notion that blind people have better hearing than the sighted is a myth, but relying on my ears did change my relationship with sound. Neuroscientists have found that the visual cortices of blind people are activated by such activities as reading Braille, listening to speech, and hearing auditory cues, such as the echo of a cane’s taps. 
At lunch, one day, Cragar’s wife, Meredith, who was visiting from Houston, came into the room carrying their fifteen-month-old daughter, Poppy. The sounds that she made—cooing, laughing—cut through the room like washes of color. I didn’t quite hallucinate these colors, but I came close. In the coming weeks, I had several mildly psychedelic experiences like this, a kind of blind synesthesia. The same thing happened with touch. I played blackjack with a Braille deck, and, after a few days, began to intuitively read the cards as if I were seeing them. In the art room, a teacher taught me to pull a wire through a mound of wet clay. Later, as I described the experience to Lily and our son, Oscar, on a video call, I had to remind myself that I’d never actually seen this tool or the clay. It was so clear in my mind’s eye. My sense of space gradually transformed. Walking the carpeted halls of the center’s lower level, I could see a faint black-and-blue virtual-reality environment lit by some unseen light source. Sometimes my cane penetrated one of the velvety walls, and I had to redraw my mental map. When I was out in the city, Charles sometimes informed me that what I thought was Alamo Avenue was actually Prince Street, or that east was actually north, and I had to lift the landscape in my mind, rotate it ninety degrees, and set it back down. I could almost feel my brain trembling under the strain. But it was also kind of fun. On your last day at the center, the staff presents you with a “freedom bell” emblazoned with the words “TAKE CHARGE WITH CONFIDENCE AND SELF-RELIANCE!” (Students sometimes quote this when doing banal activities like trying to find the bathroom.) At Lewie’s graduation, a few weeks into my stay, Julie invited him to ring the bell, saying that it represented not just his independence but that of blind people everywhere. My time at the center was cut short by family demands, but this spring I returned to see how far I had come. 
On my second-to-last day, Charles told me that I would be doing an independent drop. This seemed extreme; most students do that test after being at the center for nine months; I had been there for a total of four weeks. I rode out in the center’s van with another instructor, Ernesto, feeling nervous. “I need some reassurance,” I told him. “Do you really think that I’m ready to do this on my own?” “Actually, Andrew, it was two against one,” Ernesto said flatly. He had been outvoted. When we arrived at my drop point, Josie, one of the center’s few sighted employees and its designated driver, seemed worried. “Don’t get out on that side!” she said. Stepping out of the van, I felt immediately disoriented. The sun was shining on my face, so I had to be facing east. My cane hit a wooden door, and a dog started barking. This must be a residential street. I’d learned, when lost, to find a bus stop. Most students used their one question to ask the driver where to go, and had memorized the bus routes and rail lines sufficiently to make it home from there. There wouldn’t be a bus stop on a residential block, so I set off toward the sound of traffic. I soon arrived at a busy intersection. One of the hardest parts of blind travel is crossing the street. Once you leave the curb, there’s nothing guiding you to the other side, and you might walk in front of a car. Most corners have a dip for wheelchairs, but these sometimes point across the street, and sometimes point diagonally into the middle of the intersection. I learned to use my ears to find my way. I listened to the perpendicular traffic driving past my nose and calibrated my alignment so that the sound was equal in both ears—like balancing a stereo. When the light changed, I took off. I listened to the cars roaring past me, adjusting my trajectory to stay parallel to them. I felt the crown of the road (which rises and falls, to allow water to drain) beneath my feet, and that let me know that I was halfway. 
When my cane connected with the far curb, I could feel my heart pounding. I must have often looked bewildered on my journey. At one point, I was trying to decide whether a dip was a corner or a driveway, and a driver slowed down and said, “You drop something, buddy?” I answered, with forced cheer, “Thanks! I’m just exploring.” At a big, four-lane intersection, I stood for a long time, listening. A worker from a hospital came out to check on me, and, when I told him I was looking for a bus stop—not technically a question, but a little sneaky nonetheless—he pointed me in the general direction. He went back to work, saying mournfully, as though leaving me to die, “Please take care.” Blind travel requires you to think like an urban planner. Charles had taught me to swing my cane wide in search of a bus pole. On wide downtown blocks, bus stops are curbside, but on narrower streets they’re set back behind the grass line. Halfway up one block, I connected with a metal bench. I lifted my cane and hit a low roof. There was no pole, but what else could this be? When the bus arrived, I climbed aboard and let fly my official question: “How do I get to Littleton/Downtown station?” The driver told me to go to the end of the line, then take the light-rail. When we got to the rail station, I crossed the tracks, and boarded a train. In Littleton, I almost stepped on a person passed out on the ground. I walked back to the center, hearing the familiar sound of tapping canes as I arrived. An announcement went out that I had returned, and cheers rose up from the classrooms. The next night, I did a cooking test, making lemon-garlic kale salad and red-lentil soup. It took me about twice as long as it would have without shades, and I burned a finger. Still, I was surprised by how good it tasted. The students gathered around the kitchen table, and one sat on the couch; this arrangement would have been visually odd, but, sonically, it felt perfectly natural. 
Ernest, a member of a Black Methodist church, said that he thought his blindness made him more holy. “I walk by faith, not by sight,” he said, quoting Scripture. My classmate Steve suggested, dubiously, that being blind made him less susceptible to racism. He told us that he’d been working with a physical therapist who came from Japan, and had accidentally touched her cornrows and realized that she was Black—she had been born in Congo. Michelle, a sound engineer from Mexico, disagreed, saying that she didn’t think blindness made her any more “pure.” I spilled a cold cup of coffee into a supermarket cake, but we were all full by then anyway. The next morning, I flew home. As I exited the plane, sweeping my cane in front of me, a man asked if I needed help. I ignored him and headed toward the baggage claim, but he followed me, irritated, repeating, “Do you need any help?” I shook my head. I didn’t. I followed the sound of roller bags, feeling the carpet of the gate area give way to the concourse’s linoleum. I was halfway to the escalators before I thought of using my eyes to look around for an exit sign. I already knew where I was going. ♦

This is drawn from “The Country of the Blind: A Memoir at the End of Sight.”
The Real Work of Magic | The New Yorker (2008)
https://www.newyorker.com/magazine/2008/03/17/magic-the-real-work
Our Far-Flung Correspondents

The Real Work
By Adam Gopnik
Illustration by Mark Stutzman

On a long plane ride home to New York from Las Vegas, a man and a boy are playing with cards. Only their hands are visible to the people sitting near them, so that, as they shuffle and reshuffle and fan and deal, they seem to be engaged in a game of gin rummy that never quite begins. The hands move, first large and crabby, then small and soft, in example and imitation, and all through the night, hour after hour—while everyone else on the plane sleeps or dozes or watches DVDs on a laptop—their hands move and their voices murmur. What they are doing is magic, and, because it is magic, it requires hour upon hour of hard work. A magician is teaching an apprentice how to do a card trick—a trick so complicated and subtle that it will, when finally shown, be almost too subtle to enjoy. It is called Twisting the Aces: the four aces are shown face down; they are counted out, still face down, one by one; the packet of cards is twisted, and each time the aces are counted out one of them, a different ace each time, appears face up. It’s as though inside the packet the cards, untouched by human hands, were somehow turning over. The magician and his apprentice are believers in the deep and narrow art of closeup card magic. A few nights earlier, they had gone, with a dozen or so other people, on a rare late-night tour of the illusionist David Copperfield’s warehouse, which contains the world’s greatest collection of magic paraphernalia.
All of Houdini’s most important boxes—the Water Torture cell, the Metamorphosis Trunk—were there, but the magician had walked over to a wall where a tiny book was kept under a false cover. “It’s Malini’s Erdnase!” he said, as one might say, “It’s Lincoln’s Bible!” The magician’s face came alive as he looked through it. The boy watched, rising up on his toes to gaze at the small old-fashioned engravings of hands, neatly turned with late-nineteenth-century cuffs, manipulating cards, hands and cards, hands and cards, page after page. The boy is one of the tribe that you will find every Saturday afternoon at Tannen’s Magic, a windowless shop on the sixth floor of a nondescript building on West Thirty-fourth Street. All afternoon, the magic boys step into a tiny elevator that takes them to Tannen’s. They are often searching for relief from a needle of worry in their minds. They go to buy tricks, “gaffs,” that will lend them magic. An acute boy might sense that Tannen’s once was greater, or at least bigger. On a back table, he can find, half discarded in a big tub, faded old blueprints for illusion boxes—instructions for making magic cabinets of a kind that no one makes anymore. “Girl in a Dream: Illusion Plan No. 1223,” one boy reads out to a friend, as they decide to buy a plan as a memento of the old magic. (Actually, it’s “Girl in a Drum,” but they find out only when they get it home.) The magic boys often go for a Saturday meal to a Mexican restaurant around the corner, where they show each other their tricks. Some of them have heard of a better magicians’ dinner in the back room of a little restaurant and sports bar off Ninth Avenue called the Joshua Tree. The gathering takes place in the small hours, after the last curtain of “Monday Night Magic,” a lovely chamber session of magicians that has somehow survived for ten years in various theatres around the city. 
One of the luckiest things that have happened to me in New York is being able to go to the Joshua Tree and watch the magicians work and listen to them talk. “Do you recall the Miser’s Dilemma?” “Wasn’t that the version Al Flosso did?” “Well, Flosso did a version of it, but it was Vernon who had the real work on it, and he announced it in Harry’s book. He used to do it at the summer table at the Castle, but meanwhile Craven was doing a similar version except with the aces reversed, and with a different handling, at the Witchdoctor’s table in New York. You know, the one that met around the corner from the Palace when Blackstone Junior was headlining?” “Was that the table that Max was a regular at?” “No, you’re thinking of the Witchcraft roundabout that Vernon ran . . .” And on and on like that, in dreamlike composite. Magicians have the most rapturous and absorbed shoptalk of any artists I know. This is partly because magicians have leisure between gigs, and partly because much of the pleasure of being a magician is membership in a subculture, where methods and myths can be appreciated only by initiates. Magicians are, in their relations with one another, both extremely generous and extremely jealous. Just as chefs know that recipes are of little value in themselves, magicians know that learning the method is only the beginning of doing the trick. What they call “the real work” isn’t the method, which anyone can learn from a book (and, anyway, all decent magicians know roughly how most tricks are done), but the whole of the handling and timing and theatrics of the effect, which are passed along from magician to magician and from generation to generation. The real work is the complete activity, the accumulated practice, the total summing up of tradition and ideas. The real work is what makes a magic effect magical. 
If I had to choose one moment where I have sensed myself in the aura of the real work, it might be the night I watched Jamy Ian Swiss, over dinner at the Joshua Tree, perform thirteen versions of the pass in about as many seconds. The pass (it is sometimes called “the shift,” and card cheaters call one version “the hop”) is among the glories of advanced sleight of hand. Diagrammed by S. W. Erdnase, in “The Expert at the Card Table,” a treatise he published in 1902, it involves moving a packet of cards invisibly from the center of the deck to the top. Judging the quality of any magician’s pass is inherently difficult, since the better it is done the harder it is to see that anything has happened. To watch Jamy Ian Swiss perform thirteen versions of the pass is to see this: The cards in his hand, then one card—say, the three of clubs—inserted somewhere, anywhere, in the middle of the deck. His hands burp and hiccup for half a second, merely squaring the deck, and then the three of clubs is disclosed, right back on top. He runs through a number of variants: the riffle pass, the stroboscopic pass, the dribble pass . . . The other magicians nod, knowingly, like bird-watchers seeing an unusual find in the middle distance. Swiss is widely thought to have one of the most masterly sleight-of-hand techniques in the world today, and the pass is one of his accomplishments. Seeing him do thirteen versions of it is therefore a little like seeing Yo-Yo Ma practice scales in rehearsal at Carnegie Hall.
On the other hand, it is not at all like watching Yo-Yo Ma practice scales, since the audience is likely to include someone like Chuy, the Mexican Wolfman, an amiable sideshow artist with a very, very full beard, or The Great Throwdini, the knife thrower, and his target girl Tina, not to mention Simon Lovell, an underfed, pale-green-and-white-complexioned Englishman who is a master card cheat—I have seen him make whole decks disappear and replace them with other decks in less time than it takes to describe it—and Todd Robbins, who is now perhaps the last remaining sideshow artist capable of doing the Human Blockhead act (a six-inch steel nail goes into and up the nose) while giving a scholarly account of its origins. The few civilians who do come around as often as not have no idea of the quality of what they’re seeing—the magician’s eternal plight being that of a Yo-Yo Ma who, after he plays, has people come up onstage and tell him that they know how he does it, he scrapes that bow thing across the strings, and, anyway, they have an uncle who used to play the cello a little, has he ever met him? Most cellists, in those circumstances, would do what most magicians do—nod politely and say yes, I bet your uncle was a real music lover, and retreat into beer and diffidence. Perhaps one cellist in a generation would say no, scraping a bow against a string has nothing to do with making music, you don’t know how it’s done, you actually have no idea how it’s done, and your uncle was no more a cellist than a man with a stereo is a conductor. Jamy Ian Swiss is that cellist. He is as absolute in his passions and prohibitions as a Zen master, albeit one with a Vandyke, a small potbelly, an earring in his left ear, and a taste for Hawaiian shirts. Most nights, he maintains a note of conspiratorial mirth, leaning in toward a listener to share outrage over some stupidity—“Can you believe that guy!”—and breaking into a wolfish grin. 
But at times he lowers into a kind of set-faced gloom at the things the world is willing to watch and praise. Tender in his connections, a gentle and inspiring teacher of young magicians (he has performed a marriage ceremony for at least one, the young closeup man Matthew Holtzclaw), he can be brutal in his beliefs. In an old-fashioned, barking Brooklyn accent, part Bogart hiss and part Art Carney howl, he produces at the table a flow of interrogations, exclamations, verdicts, and interdictions. “Magic only ‘happens’ in a spectator’s mind,” he puts it emphatically. “Everything else is a distraction. Magic talk on the Internet is a distraction. Magic contests are a distraction. Magic organizations are a distraction. The latest advertisement, the latest trick—distractions. Methods for their own sake are a distraction. You cannot cross over into the world of magic until you put everything else aside and behind you—including your own desires and needs—and focus on bringing an experience to the audience. This is magic. Nothing else.” A producer and collector of practical aphorisms—“In every other art, technique must be transparent; only in magic must it be invisible”; “Don’t run when they’re not chasing you”; “Don’t make unimportant things important”; “Magicians have taken something intrinsically profound and made it look trivial”; “Closeup magic is an art looking for an easel”—he is perhaps the most feared (and resented) intellectual in the world of magic, with the saving ironic awareness that almost no one knows that there are in his world any intellects to be feared or resented. (“Mimes were invented to give magicians someone to look down on,” he said one night.) He is a true intellectual in that he cannot help arguing about ideas even when it would be in his own interest, narrowly conceived, to stop. 
When someone says something stupid about magic, or sells something fake, or performs derivatively or cynically in some way, he just can’t abide it—just cannot accept it—and uses whatever forum he has to denounce the offense. In regular columns in the magic magazines Genii and Antinomy , he launches impassioned assaults on phony magic, on “street magic,” on Internet magic, and on any other kind of magical practice that seems to him to have brought shame on his profession. The world at large, of course, is not particularly interested in hearing why someone is wrong about magic, or doing magic the wrong way—it’s all most people can do to turn up once a year at a magic show for their kid’s birthday—and the magicians Jamy thinks are wrong certainly don’t want to hear it, and so, like all intellectuals, he probably exasperates as many people as he enlightens. There are those—especially on Web sites and among YouTube magicians—who believe that Jamy is a man out of time, defending a dying tradition in the face of a renaissance of new and edgier sorts of conjuring. Jamy writes to attack them, and they respond in kind. (Even his enemies call him Jamy. It’s a small world.) People who don’t like him find his relentless search for the meaning of magic tiresome or just pretentious. Those who follow and admire him find something gallant, and Cyrano-like, in his quest to make magic matter, not as a redoubt of nostalgia but as a living art that might cross over with the other arts. Yet even his triumphs are shaded by the rueful knowledge that often all he has to prove his convictions is card tricks, and the handful of people who care. On the plane, Swiss’s voice rises a touch: “You’ve got to—no, you’ve got to relax your wrist just then, you have to—you want it to look more casual. You’re making too much of the moment. The ace is no big deal. Don’t force it. 
Let it happen.” The boy’s hands go flat, and turn and start again.

Magicians like to say that magic is as old as civilization, stretching back to Egyptian priests and Greek oracles. But stage magic, performed magic, in which conjuring is acknowledged as craft and entertainment—in which, in one of Swiss’s favorite aphorisms, the honest magician promises to deceive you, and then does—is probably a few hundred years old. In his essay “A Millennium of Magic Literature,” Swiss accepts the significance of the date 1584, when Reginald Scot’s book “Discoverie of Witchcraft” and Jean Prevost’s “Clever and Pleasant Inventions” were published. Scot and Prevost, he explains, write similar things, slightly at cross-purposes: Scot is “discovering,” that is, debunking, witchcraft—there are no witches, he says, and he explains how the make-believe witches achieve apparently magical effects. Prevost’s book presents itself as a book of tricks, to be done for pleasure. From the start, then, the history of magic-as-fun is interwoven with the history of magic-as-fraud, more or less the way the history of chemistry is bound up with the history of alchemy. Modern magic may begin around 1905, in Ottawa, when the very young David Verner read S. W. Erdnase’s “The Expert at the Card Table.” Appropriately, it’s a deeply mysterious text. Who Erdnase was and why he wrote, and self-published, his book are two abiding enigmas of modern magic. No one of that name has ever been found, and the general agreement is that it is a pseudonym, probably for someone named Andrews. (Turn it around.) He seems to have been a cardsharp rather than a magician; most of the book is taken up with cheating techniques, daintily not always called such. (As for why he gave away his craft, William Kalush, the founder of the leading magic library in New York, suggests that Erdnase may have been suffering from the perpetual fantasy among nonwriters that writing books is a way to make money.)
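The parenthetical hint about the pseudonym ("Turn it around") can be checked mechanically: reversing the letters of "S. W. Erdnase" spells out "E. S. Andrews." A minimal, purely illustrative sketch in Python; the actual attribution remains an open question among magic historians:

```python
# Illustrative check of the "Turn it around" hint: reversing the
# letters of the pseudonym "S. W. Erdnase" spells "E. S. Andrews".
name = "S. W. Erdnase"
letters = [c.upper() for c in name if c.isalpha()]  # keep letters only
reversed_name = "".join(reversed(letters))
print(reversed_name)  # -> ESANDREWS, i.e. "E. S. Andrews"
```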
In Erdnase, you see the same relation between display and deceit that has always been part of magic, only instead of doing things that could get you burned alive at an auto-da-fé he is doing things that could get you shot dead at a card table. But his inventory of closeup skills—the cull, the break, the shift, the color change—became the foundation of twentieth-century closeup magic. David Verner, or Dai Vernon, as he became onstage, is the protagonist of modern magic, the Jesus to Erdnase’s John the Baptist. Improbable though it seems for a closeup card magician who spent much of his life in residence at a private magic club, he has been the subject of two full-scale biographies, one of them a multivolume scholarly work, not to mention a huge commemorative and annotative literature within the magic world. Trying to explain his stature to civilians, magicians call him “the Picasso of magic,” but Vernon is really something closer to its Marcel Duchamp. Like Duchamp, he responded to the drying up of the natural niche for his art form in his lifetime not by trying to compete with the new media—with the revolution that the photographic image and movies had wrought—but by seceding from the outside world, making magic into a secretive coterie art rather than an expansive public one. Vernon worked at a time when movie palaces were pushing out magic venues and theatres. The great illusionist shows were slowly going out of business, while closeup magic went on mostly in night clubs. By the nineteen-thirties, as Vernon’s biographer David Ben writes, “amateur magicians, with stars in their eyes, had little idea of how unsatisfying the work could be. Performers traveled great distances and performed numerous shows before unappreciative audiences. . . . 
Those who presented large-scale illusions, ‘Tall Grass Showmen,’ were shunted to the hinterlands and focused their efforts on the cities, towns and villages that received little entertainment—of any sort.” Throughout the twenties and thirties, Vernon alternated magic shows, some successful, with long periods as an itinerant silhouette cutter. Yet during this discouraging time for magic he began the work that led to his greatest routines, among them Twisting the Aces and Triumph, in which a deck mixed up face up and face down suddenly straightens itself out, producing, as a bonus, a selected card. Triumph is to magic what “I Got Rhythm” is to jazz, the basis of innumerable variations. Two stories shape Vernon’s myth. The first is about how, in Chicago in 1922, he fooled Houdini, who boasted of being able to figure out any card trick, with a version of the routine called the Ambitious Card. Vernon was put off by Houdini’s bad grace in the face of his own perplexity, and this helped create a divide that can still be found among magicians: between those who see Houdini, perhaps the most famous name in entertainment history, as essentially a tourist trap—a Salvador Dali, there for the flash and the obvious effects, but not even a competent closeup man or illusionist—and those who see Houdini’s fame as proof positive that he did the first thing a magician needs to do, which is to grasp the mind of his time. The second story is about how, in the nineteen-thirties, Vernon embarked on what became the legendary quest of modern magic, the search for the center dealer. 
(Karl Johnson tells the story beautifully in his book “The Magician and the Cardsharp.”) Vernon had heard rumors of a cardsharp who was able to deal not merely the bottom card or the second card of the deck, in the usual way, but from its center—meaning that a chosen card could be dealt at will, no matter where the deck was cut—and, after years of looking in the hardscrabble gamblers’ underground of Depression America, he found him, just outside Kansas City. Vernon learned the move, and taught it to a handful of other magicians. The story, as usually told, emphasizes Vernon’s search for “naturalness,” for methods of card manipulation that would look entirely real, even under scrutiny. The deeper meaning of the myth, though, is that the magician is one of the few true artists left on earth, for whom the mastery of technique means more than anything that might be gained by it. He center-deals but makes no money—doesn’t even win prestige points—because nobody knows he’s doing it. Vernon grasped that there is an imbalance between the spectator’s experience and the performer’s, greater even than the normal imbalance in the arts between the insider and the outsider. We could watch Horowitz’s fingers on the keyboard as we listened to the music; if we could admire Vernon’s fingers on the deck as he did the trick, he wouldn’t be doing it right. This makes insiders’ experience of magic distinctive, a clinging together within a charmed circle of knowledge. Erdnase’s genuine criminality became, in Vernon’s hands, a kind of symbolic criminality—an aesthetic of the clandestine. Vernon’s insistence on “natural handling,” on making every move look casual rather than “presentational”—like a man handling cards rather than like a magician handling props—is a precept of modern magic. But magic, like novel writing or acting, is always bending toward naturalism, and, very quickly, the forms of naturalism become rigidly stylized. 
The nineteenth-century magician Robert-Houdin insisted on magicians’ wearing gentleman’s evening clothes, instead of the elaborate sorcerer’s and Chinese costumes that conjurers usually wore onstage. The white tie and tails and top hat became the magician’s regalia well into the twentieth century, long after everyone else had stopped wearing them. Like the Polish Hasidim, whose move toward spontaneous religion kept them in the ordinary clothes of the eighteenth century, magicians wore a costume that had first been meant as camouflage. The task of making magic seem natural must be perpetually renewed, and is more complex than just making it look offhand. One night a year ago, a young magician came into the Joshua Tree and auditioned while Swiss sat having grilled salmon and a microbrew. He did a “torn and restored” bill trick—tearing up a dollar bill and then making it whole again. Swiss took him aside, and could be seen talking to him, sharply but intensely, explaining, teaching. Someone asked what was wrong with the trick; it had seemed very neatly done. “He was appealing—he did have a nice persona,” Swiss said, leaning into the table. “He could do the moves. But he tore the dollar up slowly, like this.” Swiss replicated the young magician’s careful, studied action. “Why? Why would you tear it up slowly? Nobody tears a dollar bill up in the first place, but, if you’re going to tear up a dollar bill at all, you’d tear it up quickly, in a sudden fit, zip-zip-zip.” He demonstrated. “The only reason you would tear a dollar bill up slowly is if you were doing something else to it at the same time—if you were doing a goddam magic trick. So right away we’re off in the magic land of ‘I have in my hand an ordinary deck of cards.’ But, O.K., let’s live with that. Why are you tearing it up? Are you doing it angrily? Gaily? Why are you asking me to watch you tear up a dollar bill? The method is not the trick. The method is never the trick. 
Once you’ve mastered the method, you’ve hardly begun the trick.” All grownup craft depends on sustaining a frozen moment from childhood: scientists, it’s said, are forever four years old, wide-eyed and self-centered; writers are forever eight, over-aware and indignant. The magician is a permanent pre-adolescent. At least, all lives of magicians begin with a twelve-year-old at a place like Tannen’s. “Jamy Ian Swiss is actually my name,” Jamy said at another dinner. “And I grew up in Brooklyn, in Flatbush and then in Sheepshead Bay. My mother gave those names to me because ‘Jamy’ couldn’t be shortened, and ‘Ian’ sounded elegant and English somehow, high class. The ‘Swiss’ is a Jewish name that got changed somewhere along the road.” Often battling with his mother, he loved his father, who got him started in magic. “I was an awkward and shy kid, with bottle-glasses and a horrific speech impediment,” he said. “Then one day my father brought home, to amuse me, a Color Vision box that he had bought at Tannen’s.” The Color Vision box is one of the simplest of self-working tricks. The magician gives you a cube with a different color on each side and a box; you put the cube in the box with the color of your choice facing up, and replace the lid. The magician discerns your choice without seeming to open the box. “I thought it was wonderful, amazing, and he taught me how to do it,” Swiss said. “And then we started going to Tannen’s together, taking the train in on Saturdays, all the way from Sheepshead Bay to Times Square. “In those days, Tannen’s was in the Wurlitzer Building, behind Bryant Park, where the Verizon Building is now. You just can’t imagine the effect Tannen’s could have in those days on a shy kid with a speech impediment. It was a scene! Everyone would be there! There were photographs of magicians from floor to ceiling, and shadow boxes filled with effects, and Tannen’s symbol, a hat and a rabbit, was inlaid right in the linoleum. And so full of light! 
The magicians were everywhere, and they were such elegant and resourceful men. They would all drop in and do work just for the pleasure of it. Lou Tannen himself was there, a kind man who loved magic and sympathized with kids. He actually taught me a version of the cups and balls. Whenever anyone asks me how I started, I say, ‘Just the same way you did. When I was awkward and twelve and bought a trick.’ ” It was around then that Swiss had his first epiphany about the power of magic and its risks. “We had a mixed-up family, often at odds with each other, but there was one cousin, I’ll call her Sharon, whom I adored. I would show her the Color Vision box, over and over, and she loved me for doing it. She couldn’t get enough of it. But she kept begging me and begging me to show her how I did it, and at last I did. And she was furious—absolutely furious! The trick was so simple, even stupid. I learned a huge lesson that day, and not just not to tell civilians the secrets. It was more complicated and ambiguous than that, and it’s taken me years to work out all of its meanings. It was”—he paused—“it was that the trick was not the trick, and that it was the interchange between us that was the source of the effect.” At the age of twenty-nine, after short flings in the pet trade and the telephone business, Swiss took a year off, while his wife at the time supported him, and spent it doing nothing but sleight of hand. “Mastering magic at twenty-nine is as late to begin seriously as it would be if you were studying violin. I felt that I had hands like stumps. It’s why I still so envy closeup men like Prakash, who has such soft hands. By the end of the year, I had begun to get very good, technically, and I had heard somewhere that you could work as a magic bartender. So I went to bartender school.” He looked across the table. “I loved being a bartender. Loved it! There was a blue-collar side to it, I suppose. I didn’t talk with an accent like this, growing up. 
It was a rebellion against my cosseted middle-class upbringing. Magic was for me partly an art thing, partly a blue-collar artisanal thing, so being a magic bartender was ideal.” He barks his bark. “Ha! It turned out that the whole idea of magic bars was dying even as I entered it.” In 1985, Swiss went to see a magician-and-juggler act he had been hearing about for a couple of years, called Penn & Teller. They were in the middle of their legendary stand at the Westside Arts Theatre. “The night shook me up completely. I mean, beyond completely! They made fun of magicians, and still did brilliant magic. And they refused to say that they were magicians doing a magic show, although that was obviously what it was.” Two years later, Swiss went to work at the Magic Castle, the famous club in Hollywood. “I was arrogant enough to call the Castle and say that I was free to lecture or perform,” he says, shaking his head, “and they let me do some work there. Naturally, I was desperate to see Vernon, and we sessioned together.” Sessioning is the magicians’ equivalent of jamming. Dai Vernon, who died in 1992, just shy of a century, had been the magician-in-residence there since the late sixties. “I can still recall everything he did. I did some things for him, one of them a slow-motion coin-vanish routine. That Friday, I did the afternoon lunch shows in the Close-up Gallery. Two shows. When I walked out for the first show, there was Vernon sitting in the front row, a little to my right. He was mere feet from me, and my hands began to shake. I almost dropped a gaff. Afterward, I came out and saw Vernon at the bar. I approached and apologized for my shakiness. ‘You scared the hell out of me.’ He waved me off. “When I entered for the second show, there he was again, much to my surprise. 
I was calmer this time and the show came off without a hitch.” On Sunday, Swiss gave a lecture and Vernon was there again: “I asked Vernon, ‘So, Professor, did you see anything you liked?’ He said, ‘What do you mean, did I see anything I liked? I stayed awake for the whole thing, didn’t I?’ ” On the plane, the hands move. “Now, this is called Twisting the Aces,” the boy explains to his father, who has put down his book, curious. “It’s a Vernon trick, isn’t it?” the boy asks the magician. He nods. The boy keeps a picture of Vernon as the screen saver on his computer, still young and dapper, cigarette in hand, all smoke and cards. The boy’s hands move, trying to conceal something. “Make it natural, make it easy,” the magician says. Swiss befriended Penn & Teller, and began working with them on illusions, ideas, and routines. (“Penn taught me to drive. I’m a New Yorker, you know. Who drives? Penn put me in a car and said, ‘My ability to speak quickly and clearly is the only thing keeping us alive.’ ”) But his friendship with Vernon led him in another direction as well—backward, in a sense, to try to define what it was that made magic matter, why the tradition counted, and what it meant. He began to think about magic as an entertaining form of skepticism rather than as a debased form of mysticism. In one way, this was de rigueur for magicians. Houdini had unmasked psychics. But it challenged the way Swiss understood his own work. A lot of magicians saw stage and closeup magic simply as fake “wonder,” reproducing, in admittedly artificial ways, old rituals of death and restoration, giving people the hope, for half an hour, that real magic was possible. Even Edmund Wilson, an amateur magician, insisted that magic’s most enduring effects came from its imitation of ancient mystery religion. Vernon, though, knew better. 
As he put it, “A spectator never or rarely was fooled by what a magician performed for him in the way of tricks.” As Swiss wrote the series of essays that were eventually collected in his book “Shattering Illusions,” he arrived at the idea that magic was, in his words, “an experiment in empathy”—a contest of minds, in which the magician dominates by a superior grasp of the way minds work. The spectator is not a dupe who gets fooled but a rational actor who gets outreasoned. When the aces are twisted, the viewer doesn’t think, That’s supernatural! The viewer thinks, I know it’s a trick, but my mind is unable to imagine how any trick of the fingers could alter the cards when they’re obviously still right in the middle of the pack. In a recent summing-up essay in Antinomy, Swiss observed that, whereas a juggler like the young Penn Jillette doesn’t have to imagine an audience to experience his effects, the magician must: “From the very start, the moment a magician looks into his practice mirror, he is envisioning an alien awareness—a mind other than his own, perceiving an illusion that he is creating but cannot actually experience for himself.” Only by a command of intellectual empathy can the magician lead the viewer down an explanatory highway from which there is no exit, or, better, from which there are six exits, all of them blocked. Magic is imagination working together with dexterity to persuade experience how limited its experience really is, the heart working with the fingers to remind the head how little it knows. “The one thing I can do that Steven Spielberg can’t is to say, ‘Take a card, any card you like,’ ” Swiss says. “And I can have you sign it, so that it’s unique in the world, and then I can make it disappear from the deck and find it in your pocket and hand it back to you. That one card. Your mind and mine.” “At every moment in the history of magic, there is an anti-magician to go along with the mainstream magicians,” Swiss says.
The opposition between Vernon, the chamber genius, and Houdini, the charismatic star, goes on. The anti-magician of the moment is David Blaine, the patron saint of street and YouTube magic, the prophet of the new illusionism, but also its subverter. Close up, Blaine is beautiful, in a fifties, Actors Studio, young-Brando way, a perfect Jewish sheikh: hooded eyes, a high forehead, and a steady gaze, which he knows how to avert in awkward vulnerability. He wishes to press the edges of the form, and he believes that the future of magic lies in a naturalism beyond even Vernon’s. In his next piece, this fall, he will attempt to stay awake for a million seconds (or 11.57 days), as a comment on both endurance and, obliquely, the practice of psychological torture: he will be sleepless for our sins. Blaine first became famous for doing tricks on television—some old and simple, some new and radical—but with the emphasis always on the faces of people, mostly girls, reacting with screams and gasps and semi-sexual eye rollings. (“Blaine saw something that we all saw, how people looked when they got ‘fried,’ ” one old-timer says, “and then just turned the camera around.”) Meanwhile, he never broke his air of moody, cool detachment. He has a two-floor studio in Tribeca that is lined with magic posters of the great illusionists and seems usually to be accessorized with beautiful women, Nadias and Anyas. It tends also to be filled with a posse of purposeful young magicians in black, who both learn from Blaine and teach him—“He saved magic,” one of them, the brilliant card man and street magician Daniel Garcia, said flatly—and is the center of a small, flourishing new enterprise, a kind of merger of magic with performance art. David Blaine has become best known for what he calls “endurance art,” genuine, ungaffed daredevilry: standing on a pole for thirty-five hours, living in a water-filled plastic bubble or encased in ice. 
Not long ago, he claimed a record for the longest time spent underwater by a mammal, not counting a few species of whale. “It’s something I’ve been working my way around to,” he murmured, one Friday evening in December, in his studio, talking about his sleepless project. “It started out just with lions.” He shows an image he had Photoshopped of himself as Daniel in Rubens’s “Daniel in the Lions’ Den,” a swarthy Brooklynite among the Baroque roarers. “But just being in a room with lions isn’t about anything. So then I thought, What’s the worst torture someone can undergo? And I realized it’s going without sleep. So I researched it, you know, and found out what the record was. The guy who set the record didn’t train for it, and he went kind of crazy afterward. I know I’ll start to hallucinate and everything—but my idea is to do it outdoors and let people do anything they want to keep me awake. Stay awake for five days, and then bring out the lions.” He makes his half smile. “All my work is about honesty. Magic card tricks—we have to get beyond that. If magic is just magicians doing card tricks to impress other magicians—I’m not interested in that anymore. I don’t want magic that looks real. What I want are real things that feel like magic.” His stunts are not stunts; they actually take place. His way of staying awake for a million seconds is to stay awake for a million seconds. There is a famous, and dangerous, illusionist’s effect called the Bullet Catch; it is, of course, an elaborate and dangerous trick. Blaine insists that if he caught a bullet he would catch a bullet. “That’s what Chris Burden did,” he says, referring to the pioneering performance artist of the seventies, who did once have himself shot (in the arm) for art. “David Copperfield made the Statue of Liberty disappear, but then it came right back. 
My ideal magic would be really making the Statue of Liberty disappear, so that it never comes back, even if I have to go to jail afterward.” Las Vegas, at the MGM Grand: Swiss is about to go into David Copperfield’s show, while the banked drill of slot machines whir and ping nearby. It is sometime in the middle of the night, in a casino with no clocks or windows. Las Vegas is the last place in America where magic thrives in the normal American way: people get paid a lot to do it, and a lot of people pay to see it. Not as many as once did, perhaps—“I liked Vegas better when the Mob ran it” is the constant, semi-ironic complaint of the old-timers. They mean that the Mob’s Vegas derived so much of its profit from gambling that the fun (and the food) could be given away more or less free, and a lounge could sponsor a closeup magician just because the owner liked him. These days, many show rooms are “four-walled,” leased out by the owner and expected to be profit centers in themselves. Nonetheless, Big Magic, at least—the modern version of the splashy disappearing-girl shows that were once magic’s mainstay—continues to flourish. Performers such as Lance Burton, Penn & Teller, and David Copperfield have successful shows that have run for years. Swiss is talking about the recent past of Big Illusion: “After ‘The Ed Sullivan Show’ ended, and before Doug Henning, there was nothing. There was always some guy out there somewhere sawing a woman, but that was it. Doug Henning did what Robert-Houdin set out to do—he presented magic in the dress of the time, only now it was tie-dye and long hair and hippieness. That saved illusion, at least on television. And then Copperfield came along. Copperfield—I remember when he was still calling himself Davino, at Tannen’s.” He shakes his head. A stranger comes by and greets Swiss, a little warily. They exchange some magic shoptalk. Swiss laughs as the man walks off. 
“I got into a bitter argument with that guy at the Magic Castle last month. He was telling a young magician about the necessity of being pragmatic, making compromises for your art, and I said, ‘What the fuck do you know about a work of art, or what it takes to make one!’ ” He smiles sheepishly. Like many combative souls, he takes his feuds and eruptions as part of the weather in his world, and assumes that his disputants do, too. Copperfield’s show turns out to be much more loose-limbed and intelligent than his reputation for big dumb-stunt illusions suggests. He does the expected things—levitates through a steel plate, and makes thirteen audience members disappear (and reappear, at the back of the auditorium, where they giggle slightly in ways suggesting that their disappearance was less confounding than it seemed). But his oddly touching pièce de résistance is a confessional number about his father’s failed career as an actor, and his own estrangement from his grandfather, and the old man’s dream of one day winning the lottery and buying a green Lincoln Continental—probably the only case of a sad-bad-parenting memoir that ends with the thumping appearance on a Las Vegas stage of an actual green Lincoln. After the show, over omelettes at the Peppermill, a Las Vegas institution and show-biz hangout at the older, northern end of the Strip, Swiss meditates on the difference between the way audiences experience an illusion show now and the way they did a century ago. “It’s a good show, a fun show—who can deny it? But what do people come to a Big Magic show for now? Celebrity? To be amazed? What did they come for then? Of course, there was less to see in the world then. They weren’t going home to watch television. But I think they were there for beauty, too. A lot of what magicians did then was just meant to be beautiful. 
It got that Ahhh sound you hear when Teller does the goldfish.” He meant a signature Penn & Teller piece in which Teller turns water into silver coins and the coins into goldfish. “David Ben does an illusion show set in 1909, and, because it’s set then, he does it much slower than we do now. And that kind of stage slowness turns out to be the right speed for magic. It isn’t a high-speed art. The beauty lies in the unfolding, not in being zapped to the finish. It does for me, anyway. Onstage, it takes me three minutes to say my name.” The next night, David Copperfield takes Jamy Ian Swiss, and the twelve-year-old, and a mysterious family of Russians to his warehouse of magical paraphernalia. As they ride there in a limo, Copperfield explains the origins of his collection—it is a much larger version of the famous Mulholland collection—and hints at its treasures, largely, it seems, for Swiss’s benefit. In photographs, Copperfield assumes a manner of chilly mastery; in person he is open, bending, almost needy, reminding the world of his triumphs—his renown, his Emmys—as though the scale of his accomplishments still surprised him, too. “Jamy? You’ll be amazed when you see what I have at the warehouse,” Copperfield says. “You were at Tannen’s, too, right? Of course you were! I loved it with Lou. Did you know Lou? Did you see me in the Jubilee the year I was eighteen, when they gave me the whole second half?” The warehouse is the size of an airplane hangar, windowless and fluorescent-lit in the Vegas night. Copperfield has been building it for years, and has no intention of making it public; he offers tours, guided by him, as he thinks they are merited. Upstairs, he shows Swiss one beautiful piece after another, in spotlit cases. Everything is here. There are boxes of off-the-shelf magic sets for boys, a century’s worth, stacked high into the air. 
There are the great monochrome posters of Alexander, the Man Who Knows; and posters of Charles Carter, in Egypt, being hanged, fighting the Devil himself. There is the complete outfit and wiring of the performer Mr. Electric, an “Ed Sullivan” regular. And there is nearly everything of Houdini’s that matters: the original milk can that Houdini escaped from; one of the Metamorphosis Trunks (a fragment of the True Cross); and, on wax cylinders, the only recordings of Houdini’s voice, high and hectoring and European-sounding as he does his patter. There is the gun with which the great Chung Ling Soo (actually an American named William Ellsworth Robinson) was killed onstage at the Wood Green Empire, in London, in 1918, when his Bullet Catch number hit a snag. (“Oh my God. Something’s happened. Lower the curtain” were his last words onstage, and the first ones he had spoken there in English for almost twenty years.) The apparatus for a Carter number, where a girl was pulled up in a chair, and then disappeared, shows the way the little chair gets hoisted into the framework of the machinery, leaving the damsel suspended. There is the sawing table where Orson Welles sawed Marlene Dietrich in half. Then, there are the automatons that Robert-Houdin built in the mid-nineteenth century, tiny clicking cogs and wheels and whirring clockwork: a brass Chinaman actually does the cups and balls, and each time the cups come up something new is underneath them. And books, wall after wall of bookshelves, with not only Max Malini’s original copy of Erdnase but also a first edition of “Discoverie of Witchcraft” and the writings of Méliès, the French magician who invented special effects (dissolves, double exposures) in early movies—the effects that doomed Big Magic. 
And then there is a wall of old, signed publicity photographs of magicians: magicians with top hats, magicians resting their hands sapiently on their bearded cheeks; magicians grave and sage and, sometimes, witty and waggish, in top hats and tails, rising from floor to ceiling. “Jamy? Do you see what it is?” David Copperfield says, triumphant. “It’s the wall from Tannen’s,” Swiss says softly, looking up at it as he must have done as a twelve-year-old. “It’s the original photographs they had up, intact,” he tells the twelve-year-old with him. The magicians, shining and unchanged since the sixties, beam down on their protégés, as though Lou were still behind the counter below. Finally, in a small, crowded space upstairs, Copperfield carefully displays a legendary apparatus: the flower vase of Karl Germain, the great Cleveland-born magician. The vase stands up, music plays, a pass is made—and a whole rosebush slowly rises from inside, higher and higher, the petals of the roses unfolding as though waking up. The music plays, the roses grow and grow, higher and higher still, petals unfolding, and Copperfield cuts them off and gives one to each woman in the room. How does it work? Where do the roses come from? One difficulty in writing about magic is that it is considered a cardinal sin to reveal methods, even when you are an outsider who barely grasps them—particularly when you are an outsider who barely grasps them. “Exposure” is a hanging crime in the magic world. In the eighties, Penn & Teller provoked hack magicians to attack them for doing the cups and balls in transparent glasses. (Of course, they did it so nimbly and surprisingly that they exposed nothing but the absurdity of “exposure.”) About all an outsider may say is that the surprising thing about most magical methods is not how ingeniously complex they are but how extremely stupid they are—stupid, that is, in the sense of being completely obvious once you grasp them. 
The trick to Swiss’s Color Vision box, to engage in an exposure that is surely harmless, since his cousin Sharon knows all about it (and has had forty years to tell her friends, indignantly), is that the lid of the box is secretly moved by the magician to the side of the box; that is, the magician has revolved the box so that the top is now the back, and the lid is on what is now the side. He sees what the color is just by looking at it. What this teaches us is not that people are stupid but that the concept of rotating an object, though obvious, is in some way defeated by our familiarity with boxes and lids—a lid always goes on top. The move is not outside our imaginations but remote from our experience. Most big illusions, similarly, involve a remarkably limited, though resourcefully manipulated, arsenal of mirrors and lights. We will ourselves both to overlook the obvious chicanery and to overrate the apparent obstacles. Or we imagine that an elaborate bit of trickery couldn’t be achieved by stupidly obvious means. People participate in their own illusions. That is why a magician’s technique must be invisible; if it became visible, we would be insulted by its obviousness. Magic is possible because magicians are smart. And what they’re smart about is mainly how dumb we are, how limited in vision, how narrow in imagination, how resourceless in conjecture, how routinized in our theories of the world, how deadened to possibility. The magician awakens us from the dogmatic slumbers of our daily life, our interactions with cards and hoops and things. He opens a door by pointing to a window. Why does it matter? “Magic is the most intrinsically ironic of all the arts,” Teller is saying. “I don’t know what your definition of irony is, but mine is something where, when you are seeing it, you see it in two different and even contradictory ways at the same time. And with magic what you see collides with what you know. 
That’s why magic, even when merely executed, ends up having intellectual content. It’s intrinsic to the form.” Onstage, Teller’s character is mute. In his own house—twenty minutes outside Las Vegas, an Expressionist concrete box, a stone fortress with trapezoidal windows cut in it, Dr. Caligari’s remake of the Whitney Museum—over excellent cornmeal waffles, he is voluble, articulate, opinionated, and exact. Small and curly-haired, he looks like Harpo Marx released from his vow of silence and given tenure. His is one of those houses perfectly shaped to the needs of their fastidious and eccentric owners: it is hung with his father’s paintings; there is a beautiful coffin, given to him by a close friend for his fifty-fifth birthday, and a handsome dining table with a skeleton embedded in its glass top, its arms and feet shackled to a rack. (To extend the table for company, you turn a crank, which stretches out the skeleton, causing moans and screams to sound through a concealed speaker.) Bookcases revolve and reveal secret passageways to the next room. The Christmas tree is still up, and decorated with skulls and metallic-red devil’s heads. “There is a more romantic way to do it, to calm down the intrinsic irony, but that can get schmaltzy,” he goes on. “Many magicians do that, but it tends toward the sentimental.” In Las Vegas, Penn & Teller have not compromised their act. They do a flag burning onstage—or, rather, seem to do it—before restoring the flag completely, in a variation of the torn-and-restored dollar; it’s a heartfelt libertarian tribute to American freedom. And they end with a staggering Bullet Catch, the stunt that killed Chung Ling Soo. (Someone remarks to Teller that the Bullet Catch seems to be the “Macbeth” of magic, the bad-luck piece, and he says, dryly, “Yes. You’re standing onstage firing bullets at the magician with a live gun. 
That might be bad luck.”) Swiss tells Teller about the tour of the Copperfield warehouse and his rediscovery of the wall from Tannen’s. Though Teller grew up in Philadelphia, he recognizes the key moment memorialized by the wall. “There’s a moment in your life when you realize the difference between illusion and reality and that you’re being lied to,” he says. “Santa Claus. The Easter Bunny. After my mother told me that there was no Santa Claus, I made up an entirely fictitious girl in my classroom and told my mother stories about her. Then I told my mother, ‘You know what—she isn’t real.’ ” He smiles with somewhat Pugsley Addams-like glee, and goes on, more soberly, “If you’re sufficiently preoccupied with the power of a lie, a falsehood, an illusion, you remain interested in magic tricks.” The subject of the Germain vase comes up, and Teller says, “You know the funny thing about that? A friend and I did the Germain flowers last year. We put the music on, the right music played at the right time, slightly off speed, and we prepped the illusion properly, you know, had the buds set right so that they would open when you fanned them—the fanning is part of the piece—and we watched it emerge. This lovely music was playing, and we just wept at the beauty of it—tears streamed down our cheeks at the lovely apparition of it. That was magic.” Of all the arguments that can preoccupy the mindful magician, the most important involves what is called the Too Perfect theory. Jamy Ian Swiss has written about it often. Presaged by Vernon himself, and formalized by the illusionist Rick Johnsson in a 1971 article, the Too Perfect theory says, basically, that any trick that simply astounds will give itself away. If, for instance, a magician smokes a cigarette and then makes it pass through an ordinary quarter, the only reasonable explanation is that it isn’t an ordinary quarter; the spectator will immediately know that it’s a trick quarter, with a hinge. 
(Swiss wrote that once, after he performed the Cigarette Through Quarter—perfectly, in his opinion—a spectator responded, “Neat. Where’s that nifty coin with the hole in it?”) What makes a trick work is not the inherent astoundingness of its effect but the magician’s ability to suggest any number of possible explanations, none of them conclusive, and none of them quite obvious. As the law professor and magician Christopher Hanna has noted, two of the best ways of making a too perfect trick work are “reducing the claim” and “raising the proof.” Reducing the claim means roughing up the illusion so that the spectator isn’t even sure she saw one—bringing the cigarette in and out of the coin so quickly that the viewer doesn’t know if the trick is in the coin or in her eyes. Raising the proof is more demanding. Derek Dingle, a famous closeup man, adjusted the Cigarette Through Quarter trick by palming and replacing one gaffed quarter with another. One quarter had a small hole in it, the other a spring hinge. By exposing the holed coin, then palming that one and replacing it with the hinged coin, he led the spectator to think not There must be two trick coins but How could even the trick coin I’ve seen do that trick? Or one might multiply the possible explanations, in a card-guessing trick, by going through an elaborate charade of “reading” the spectator’s face and voice, so that, when the forced card is guessed, the obviousness of the trick is, well, obviated. At the heart of the Too Perfect theory is the insight that magic works best when the illusions it creates are open-ended enough to invite the viewer into a credibly imperfect world. Magic is the dramatization of explanation more than it is the engineering of effects. In every art, the Too Perfect theory helps explain why people are more convinced by an imperfect, “distressed” illusion than by a perfectly realized one. 
A form of the theory is involved when special-effects people talk about “selling the shot” in a movie; that is, making sure that the speeding spacecraft or the raging Godzilla doesn’t look too neatly and cosmetically packaged, and that it is not lingered on long enough to be really seen. (All special effects appear as such when they are studied.) The theory explains the force of the off-slant scene in a film, the power of elliptical dialogue in the theatre, the constant artistic need to turn away from apparent perfection toward the laconic or unfixed. Illusion affects us only when it is incomplete. But the Too Perfect theory has larger meanings, too. It reminds us that, whatever the context, the empathetic interchange between minds is satisfying only when it is “dynamic,” unfinished, unresolved. Friendships, flirtations, even love affairs depend, like magic tricks, on a constant exchange of incomplete but tantalizing information. We are always reducing the claim or raising the proof. The magician teaches us that romance lies in an unstable contest of minds that leaves us knowing it’s a trick but not which one it is, and being impressed by the other person’s ability to let the trickery go on. Frauds master our minds; magicians, like poets and lovers, engage them in a permanent maze of possibilities. The trick is to renew the possibilities, to keep them from becoming schematized, to let them be imperfect, and the question between us is always “Who’s the magician?” When we say that love is magic, we are telling a truth deeper, and more ambiguous, than we know. Swiss is talking over dinner about the Too Perfect theory: “What magic is out to do isn’t just to amaze you but to achieve what Whit Haydn calls putting ‘a burr under the saddle of the mind.’ ” He leaps up from the table and becomes a man on a tightrope. “Let’s say you do a Blaine trick, one he’s done on television, where you have someone choose a card and then find it in a sealed basketball. 
Well, if he sees it in the basketball he knows that somehow the card’s been forced on him. It’s too perfect. But if it’s got a torn corner—or it’s signed, or if maybe instead of being inside a basketball it’s behind a backboard—he thinks, It wasn’t there before, but he can’t get over there. The mind starts working. He can’t rest here, he can’t rest here, and he stays on the tightrope!” Swiss wavers on the imaginary wire. “That’s not the situation of the passive dupe,” he says, sitting down. “It’s the situation of someone whose mind is alive! It’s the state of the scientist, or the artist—and magic is a fringe art, but it’s not a fringe subject. Truth, deception, and mystery are big material, and they’re the natural, the intrinsic subject of magic. And I propose”—he smiles briefly at the formulation, but goes ahead anyway—“that it’s the only art form where that’s the intrinsic subject. And that’s why, with all the indignities and absurdities of being a professional magician, we’ll always need magic.” David Blaine is on a strict new regime as he trains for the sleepless piece. Once a week, he runs thirteen miles in Central Park, plays basketball for an hour and a half at Chelsea Piers, and then swims several miles in a downtown pool. It is his theory that a man in perfect form will be able to survive staying awake for a million seconds. He has already lost forty pounds, and this gives him a gaunt, spiritualized look, like the young Brando playing an AIDS patient. He is in his Greenwich Village apartment, showing a protégé a card trick and quizzing him, gently, on a book about the Holocaust that he had given him to read. The first vibe you get from him, of cool and insolence, is soon succeeded by a second, truer sense that he is a man trying to save magic not by making it more intellectual, or more raffish, but by making it potentially tragic, a high-stakes and risky endeavor that might end in grief. Seriousness is his keynote. 
He presses on his protégés copies of his favorite books, mostly classics of the fifties and sixties, which he keeps by the gross in a special closet: “Siddhartha,” “The Catcher in the Rye,” “A Confederacy of Dunces”—tales of nonconformists making their way in a monochrome world. He seems to see “mainstream” magicians the way Holden sees his teachers. His bookshelves are filled with Heidegger and Nietzsche. He has decided to call the sleepless piece “Blaine’s Wake,” in punning homage to Joyce, and he has recently begun working his way through Finnegan’s dream with the help of Joseph Campbell’s skeleton key. He still works intently on card tricks, of the more avant-garde kind that derive from the great Spanish magician Juan Tamariz, whom all magicians today, on all sides, uncritically revere. The card trick he is teaching his protégé involves no apparent skill, no card handling or card moving, still mind against mind, but without the interference of fingers. The effect is powerful, but the vibe is different from normal card tricks: melodramatic rather than clever, and deep rather than ironic. The odd thing is that, the longer one knows him, and the more time one spends with him, the more apparent it becomes that he is one more Tannen’s boy from Brooklyn. On his desk is a photograph of the adolescent Blaine collecting an award at Tannen’s summer camp, as nerdy and needy as every other boy of the tribe. “My real work began when I was walking down the street, just practicing a one-handed shuffle, and all the guys in this garage I was walking by went Ohhhh!” Blaine says. “Ohhhh!—just for a shuffle! I started realizing that what I love to do is bring magic for one second—and that one second is enough. My endurance pieces are all about taking away the ego, putting yourself in a position so intense that the ordinary ‘I’ doesn’t exist anymore.
You’re surviving the way a baby does—or it’s like just before an accident, when you see everything, the seats and the road, and the dashboard and your life, in slow motion. That heightened sense of awareness, the blinding flash of being shocked out of your logical mind—that’s magic for me.” For Jamy Ian Swiss, as for Penn & Teller, the future lies in magic being remade in the light of the real but still in the shadow of the past, losing the false front while revering the traditional techniques. For David Blaine, it lies in an increasing encroachment into the real, so that magic will become indistinguishable from performance art, at the high end, and reality television, at the low. The choice, in a sense, is between the real work and the real thing. I have seen Blaine and Swiss together just once. It was in October, before the annual auction of magic posters and paraphernalia at Swann Galleries, on East Twenty-fifth Street, an important occasion in the magic subculture. Swiss was once quoted as saying that Blaine’s best tricks could have been purchased for thirty dollars at a Times Square magic shop, a quote that was taken slightly out of context, and that had a gentler intention than it seems. Now, with the tough critic’s optimistic belief that it was all part of the game, Swiss went up to Blaine and congratulated him. Then Swiss mentioned a young student of his who had been hanging around with Blaine as well. “I’m trying to get him to see some of the—some of the deeper psychological things, not just tricks,” Blaine said, in his Brando mumble. “I don’t think I’m showing him tricks,” Swiss said. “Not tricks, man. I mean—you know, techniques. Showing him something deeper than techniques.” “I’m not showing him tricks,” Swiss repeated, quietly. Blaine changed the subject. Swiss went back to his seat, with his head down, his jaw set. 
I could see him struggling with the times—with the anger of feeling a protégé being fought over but also with the sense, which every writer knows, of helplessness in the face of the new thing, of suddenly knowing what the real fringe is like, and how it feels when you get there. We are all magicians now. The same feeling that novelists had when first confronted with movies is shared by closeup card magicians confronting television endurance artists—the feeling that something big and vital is passing from the world, and yet that to defend it is to be immediately classified as retrograde. I saw, too, that David Blaine is absolutely sincere in his belief that the way forward for a young magician lies not in mastering the tricks but in mastering the mind of the modern age, with its relentless appetite for speed and for the sensational-dressed-as-the-real. And I thought I sensed in Swiss the urge to say what all of us would like to say—that traditions are not just encumbrances, that a novel is not news, that an essay is a different thing from an Internet rant, that techniques are the probity and ethic of magic, the real work. The crafts that we have mastered are, in part, the tricks that we have learned, and though we know how much knowledge the tricks enfold, still, tricks is what they are. I felt for Jamy, and for us, and for the boy caught between. The hands stop moving as the plane lands, and the boy and the magician leave. The aces twist one turn and the boy returns to his father for the car ride home. He clutches his Las Vegas souvenir. The magicians have the boys for a moment, between their escape from their fathers and their pursuit of girls. After that, they become sexual, outwardly so, and learn that women (or other men) cannot be impressed by tricks of any kind: if they are watching at all, they are as interested as they are ever going to be, and tricks are of no help. 
You cannot woo anyone with magic; the magic that you have consciously mastered is the least interesting magic you have. Yet, for the time being, what the magicians teach the boys is that some knowledge cannot be communicated; it is yours and can only be shown, and the range of things that fathers don’t know is larger than what they do. The most the father can hope to become is a stooge, a willing assistant, and a spectator with a bit of corruption. The boy has secret knowledge, which he will keep, even after life arrives and magic stops. Teach me a trick, the father says to the son, and the boy, his hands working his cards, on his way, says, “I can’t teach you a trick, Dad. I’ll show you an effect.” And then he does, doing passes, like his teacher, all the way home. The card always comes back to the top of the deck, and, the better it is done, the harder it is to see that anything has happened at all. ♦ 
© 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights. The New Yorker may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast. "
13,572
2,023
"The Data Delusion | The New Yorker"
"https://www.newyorker.com/magazine/2023/04/03/the-data-delusion"
"American Chronicles The Data Delusion By Jill Lepore The age of data is variously associated with late capitalism, authoritarianism, techno-utopianism, and the dazzle of “data science.” Illustration by Kelli Anderson One unlikely day during the empty-belly years of the Great Depression, an advertisement appeared in the smeared, smashed-ant font of the New York Times’ classifieds: WANTED. Five hundred college graduates, male, to perform secretarial work of a pleasing nature. Salary adequate to their position. Five-year contract. Thousands of desperate, out-of-work bachelors of arts applied; five hundred were hired (“they were mainly plodders, good men, but not brilliant”). They went to work for a mysterious Elon Musk-like millionaire who was devising “a new plan of universal knowledge.” In a remote manor in Pennsylvania, each man read three hundred books a year, after which the books were burned to heat the manor. At the end of five years, the men, having collectively read three-quarters of a million books, were each to receive fifty thousand dollars. But when, one by one, they went to an office in New York City to pick up their paychecks, they would encounter a surgeon ready to remove their brains, stick them in glass jars, and ship them to that spooky manor in Pennsylvania. There, in what had once been the library, the millionaire mad scientist had worked out a plan to wire the jars together and connect the jumble of wires to an electrical apparatus, a radio, and a typewriter. This contraption was called the Cerebral Library. 
“Now, suppose I want to know all there is to know about toadstools?” he said, demonstrating his invention. “I spell out the word on this little typewriter in the middle of the table,” and then, abracadabra, the radio croaks out “a thousand word synopsis of the knowledge of the world on toadstools.” Happily, if I want to learn about mushrooms I don’t have to decapitate five hundred recent college graduates, although, to be fair, neither did that mad millionaire, whose experiment exists only in the pages of the May, 1931, issue of the science-fiction magazine Amazing Stories. Instead, all I’ve got to do is command OpenAI’s ChatGPT , “Write a thousand word synopsis of the knowledge of the world on toadstools.” Abracadabra. Toadstools, also known as mushrooms, are a diverse group of fungi that are found in many different environments around the world , the machine begins, spitting out a brisk little essay in a tidy, pixelated computer-screen font, although I like to imagine that synopsis being rasped out of a big wooden-boxed nineteen-thirties radio in the staticky baritone of a young Orson the-Shadow-knows Welles. While some species are edible and have been used by humans for various purposes, it is important to be cautious and properly identify any toadstools before consuming them due to the risk of poisoning , he’d finish up. Then you’d hear a woman shrieking, the sound of someone choking and falling to the ground, and an orchestral stab. Dah-dee-dum-dum-DUM! If, nearly a century ago, the cost of pouring the sum total of human knowledge into glass jars was cutting off in their prime hundreds of quite unfortunate if exceptionally well-read young men, what’s the price to humanity of uploading everything anyone has ever known onto a worldwide network of tens of millions or billions of machines and training them to learn from it to produce new knowledge? This cost is much harder to calculate, as are the staggering benefits. 
Even measuring the size of the stored data is chancy. No one really knows how big the Internet is, but some people say it’s more than a “zettabyte,” which, in case this means anything to you, is a trillion gigabytes or one sextillion bytes. That is a lot of brains in jars. Forget the zettabyte Internet for a minute. Set aside the glowering glass jars. Instead, imagine that all the world’s knowledge is stored, and organized, in a single vertical Steelcase filing cabinet. Maybe it’s lima-bean green. It’s got four drawers. Each drawer has one of those little paper-card labels, snug in a metal frame, just above the drawer pull. The drawers are labelled, from top to bottom, “Mysteries,” “Facts,” “Numbers,” and “Data.” Mysteries are things only God knows, like what happens when you’re dead. That’s why they’re in the top drawer, closest to Heaven. A long time ago, this drawer used to be crammed full of folders with names like “Why Stars Exist” and “When Life Begins,” but a few centuries ago, during the scientific revolution, a lot of those folders were moved into the next drawer down, “Facts,” which contains files about things humans can prove by way of observation, detection, and experiment. “Numbers,” second from the bottom, holds censuses, polls, tallies, national averages—the measurement of anything that can be counted, ever since the rise of statistics, around the end of the eighteenth century. Near the floor, the drawer marked “Data” holds knowledge that humans can’t know directly but that must be extracted by a computer, or even by an artificial intelligence. It used to be empty, but it started filling up about a century ago, and now it’s so jammed full it’s hard to open. From the outside, these four drawers look alike, but, inside, they follow different logics. The point of collecting mysteries is salvation; you learn about them by way of revelation; they’re associated with mystification and theocracy; and the discipline people use to study them is theology. 
The point of collecting facts is to find the truth; you learn about them by way of discernment; they’re associated with secularization and liberalism; and the disciplines you use to study them are law, the humanities, and the natural sciences. The point of collecting numbers in the form of statistics—etymologically, numbers gathered by the state—is the power of public governance; you learn about them by measurement; historically, they’re associated with the rise of the administrative state; and the disciplines you use to study them are the social sciences. The point of feeding data into computers is prediction, which is accomplished by way of pattern detection. The age of data is associated with late capitalism, authoritarianism, techno-utopianism, and a discipline known as data science, which has lately been the top of the top hat, the spit shine on the buckled shoe, the whir of the whizziest Tesla. All these ways of knowing are good ways of knowing. If you want to understand something—say, mass shootings in the United States —your best bet is to riffle through all four of these drawers. Praying for the dead is one way of wrestling with something mysterious in the human condition: the capacity for slaughter. Lawyers and historians and doctors collect the facts; public organizations like the F.B.I. and the C.D.C. run the numbers. Data-driven tech analysts propose “smart guns” that won’t shoot if pointed at a child and “gun-detection algorithms” able to identify firearms-bearing people on their way to school. There’s something useful in every drawer. A problem for humanity, though, is that lately people seem to want to tug open only that bottom drawer, “Data,” as if it were the only place you can find any answers, as if only data tells because only data sells. In “ How Data Happened: A History from the Age of Reason to the Age of Algorithms ” (Norton), the Columbia professors Chris Wiggins and Matthew L. 
Jones open two of these four drawers, “Numbers” and “Data.” Wiggins is an applied mathematician who is also the chief data scientist at the Times; Jones is a historian of science and technology; and the book, which is pretty fascinating if also pretty math-y, is an adaptation of a course they began teaching in 2017, a history of data science. It begins in the late eighteenth century with the entry of the word “statistics” into the English language. The book’s initial chapters, drawing on earlier work like Theodore Porter’s “ Trust in Numbers ,” Sarah Igo’s “ The Averaged American ,” and Khalil Gibran Muhammad’s “ The Condemnation of Blackness ,” cover the well-told story of the rise of numbers as an instrument of state power and the place of quantitative reasoning both in the social sciences and in the state-sponsored study of intelligence, racial difference, criminology, and eugenics. Numbers, a century ago, wielded the kind of influence that data wields today. (Of course, numbers are data, too, but in modern parlance when people say “data” they generally mean numbers you need a machine to count and to study.) Progressive-era social scientists employed statistics to investigate social problems, especially poverty, as they debated what was causation and what was correlation. In the eighteen-nineties, the Prudential Insurance Company hired a German immigrant named Frederick Hoffman to defend the company against the charge that it had engaged in discrimination by refusing to provide insurance to Black Americans. His “Race Traits and Tendencies of the American Negro,” published in 1896 by the American Economic Association, delivered that defense by arguing that the statistical analysis of mortality rates and standards of living demonstrated the inherent inferiority of Black people and the superiority of “the Aryan race.” In vain did W. E. B. 
Du Bois point out that suffering more and dying earlier than everyone else are consequences of discrimination, not a justification for it. Long before the invention of the general-purpose computer, bureaucrats and researchers had begun gathering and cross-tabulating sets of numbers about populations—heights, weights, ages, sexes, races, political parties, incomes—using punch cards and tabulating machines. By the nineteen-thirties, converting facts into data to be read by machines married the centuries-long quest for universal knowledge to twentieth-century technological utopianism. The Encyclopædia Britannica, first printed in Edinburgh in 1768—a product of the Scottish Enlightenment—had been taken over for much of the nineteen-twenties by Sears, Roebuck, as a product of American mass consumerism. “When in doubt—‘Look it up’ in the Encyclopaedia Britannica ,” one twentieth-century newspaper ad read. “The Sum of Human Knowledge. 29 volumes, 28,150 pages, 44,000,000 words of text. Printed on thin, but strong opaque India paper, each volume but one inch in thickness. THE BOOK TO ASK QUESTIONS OF. ” When in doubt, look it up! But a twenty-nine-volume encyclopedia was too much trouble for the engineer who invented the Cerebral Library, so instead he turned seven hundred and fifty thousand books into networked data. “All the information in that entire library is mine,” he cackled. “All I have to do is to operate this machine. I do not have to read a single book.” (His boast brings to mind Sam Bankman-Fried, the alleged crypto con man, who in an interview last year memorably said, “I would never read a book.”) And why bother? By the nineteen-thirties, the fantasy of technological supremacy had found its fullest expression in the Technocracy movement, which, during the Depression, vied with socialism and fascism as an alternative to capitalism and liberal democracy. 
“Technocracy, briefly stated, is the application of science to the social order,” a pamphlet called “Technocracy in Plain Terms” explained in 1939. Technocrats proposed the abolition of all existing economic and political arrangements—governments and banks, for instance—and their replacement by engineers, who would rule by numbers. “Money cannot be used, and its function of purchasing must be replaced by a scientific unit of measurement,” the pamphlet elaborated, assuring doubters that nearly everyone “would probably come to like living under a Technate.” Under the Technate, humans would no longer need names; they would have numbers. (One Technocrat called himself 1x1809x56.) They dressed in gray suits and drove gray cars. If this sounds familiar—tech bros and their gray hoodies and silver Teslas, cryptocurrency and the abolition of currency—it should. As a political movement, Technocracy fell out of favor in the nineteen-forties, but its logic stuck around. Elon Musk’s grandfather was a leader of the Technocracy movement in Canada; he was arrested for being a member, and then, soon after South Africa announced its new policy of apartheid, he moved to Pretoria, where Elon Musk was born, in 1971. One of Musk’s children is named X Æ A-12. Welcome to the Technate. The move from a culture of numbers to a culture of data began during the Second World War, when statistics became more mathematical, largely for the sake of becoming more predictive, which was necessary for wartime applications involving everything from calculating missile trajectories to cracking codes. “This was not data in search of latent truths about humanity or nature,” Wiggins and Jones write. “This was not data from small experiments, recorded in small notebooks. This was data motivated by a pressing need—to supply answers in short order that could spur action and save lives.” That work continued during the Cold War, as an instrument of the national-security state. 
Mathematical modelling, increased data-storage capacity, and computer simulation all contributed to the pattern detection and prediction in classified intelligence work, military research, social science, and, increasingly, commerce. Despite the benefit that these tools provided, especially to researchers in the physical and natural sciences—in the study of stars, say, or molecules—scholars in other fields lamented the distorting effect on their disciplines. In 1954, Claude Lévi-Strauss argued that social scientists need “to break away from the hopelessness of the ‘great numbers’—the raft to which the social sciences, lost in an ocean of figures, have been helplessly clinging.” By then, national funding agencies had shifted their priorities. The Ford Foundation announced that although it was interested in the human mind, it was no longer keen on non-predictive research in fields like philosophy and political theory, deriding such disciplines as “polemical, speculative, and pre-scientific.” The best research would be, like physics, based on “experiment, the accumulation of data, the framing of general theories, attempts to verify the theories, and prediction.” Economics and political science became predictive sciences; other ways of knowing in those fields atrophied. The digitization of human knowledge proceeded apace, with libraries turning books first into microfiche and microfilm and then—through optical character recognition, whose origins date to the nineteen-thirties—into bits and bytes. The field of artificial intelligence, founded in the nineteen-fifties, at first attempted to sift through evidence in order to identify the rules by which humans reason. 
This approach hit a wall, in a moment known as “the knowledge acquisition bottleneck.” The breakthrough came with advances in processing power and the idea of using the vast stores of data that had for decades been compounding in the worlds of both government and industry to teach machines to teach themselves by detecting patterns: machines, learning. “Spies pioneered large-scale data storage,” Wiggins and Jones write, but, “starting with the data from airline reservations systems in the 1960s, industry began accumulating data about customers at a rapidly accelerating rate,” collecting everything from credit-card transactions and car rentals to library checkout records. In 1962, John Tukey, a mathematician at Bell Labs, called for a new approach that he termed “data analysis,” the ancestor of today’s “data science.” It has its origins in intelligence work and the drive to anticipate the Soviets: what would they do next? That Netflix can predict what you want to watch, that Google knows which sites to serve you—these miracles are the result of tools developed by spies during the Cold War. Commerce in the twenty-first century is espionage for profit. While all this was going on—the accumulation of data, the emergence of machine learning, and the use of computers not only to calculate but also to communicate—the best thinkers of the age wondered what it might mean for humanity down the line. In 1965, the brilliant and far-seeing engineer J. C. R. Licklider, a chief pioneer of the early Internet, wrote “Libraries of the Future,” in which he considered the many disadvantages of books. “If human interaction with the body of knowledge is conceived of as a dynamic process involving repeated examinations and intercomparisons of very many small and scattered parts, then any concept of a library that begins with books on shelves is sure to encounter trouble,” Licklider wrote. “Surveying a million books on ten thousand shelves,” he explained, is a nightmare. 
“When information is stored in books, there is no practical way to transfer the information from the store to the user without physically moving the book or the reader or both.” But convert books into data that can be read by a computer, and you can move data from storage to the user, and to any number of users, much more easily. Taking the contents of all the books held in the Library of Congress as a proxy for the sum total of human knowledge, he considered several estimates of its size and figured that it was doubling every couple of decades. On the basis of these numbers, the sum total of human knowledge, as data, would, in the year 2020, be about a dozen petabytes. A zettabyte is a petabyte with six more zeroes after it. So Licklider, who really was a genius, was off by a factor of a hundred thousand. Consider even the billions of documents that the U.S. government deems “classified,” a number that increases by fifty million every year. Good-faith research suggests that as many as nine out of ten of these documents really shouldn’t be classified. Unfortunately, no one is making much headway in declassifying them (thousands of documents relating to J.F.K.’s assassination, in 1963, for instance, remain classified). That is a problem for the proper working of government, and for the writing of history, and, not least, for former Presidents and Vice-Presidents. In “ The Declassification Engine: What History Reveals About America’s Top Secrets ” (Pantheon), the historian Matthew Connelly uses tools first developed for intelligence and counterintelligence purposes—traffic analysis, anomaly detection, and the like—to build what he calls a “declassification engine,” a “technology that could help identify truly sensitive information,” speed up the declassification of everything else, and, along the way, produce important historical insights. (Connelly, like Wiggins and Jones, is affiliated with Columbia’s Data Science Institute.) 
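Lepore’s aside that Licklider was “off by a factor of a hundred thousand” can be checked with back-of-the-envelope arithmetic. The sketch below is illustrative only, using the essay’s own figures (a dozen petabytes for Licklider’s 2020 projection, one zettabyte for the Internet); the variable names are hypothetical, not from any source.

```python
# Back-of-the-envelope check of the essay's size comparison.
PETABYTE = 10**15   # bytes
ZETTABYTE = 10**21  # bytes -- "a petabyte with six more zeroes after it"

licklider_2020_pb = 12                 # ~a dozen petabytes, Licklider's projection
internet_pb = ZETTABYTE // PETABYTE    # one zettabyte expressed in petabytes

# How far off the 1965 guess was, per the essay's estimates
factor = internet_pb / licklider_2020_pb
print(f"off by a factor of about {factor:,.0f}")  # ~83,333 -- on the order
                                                  # of a hundred thousand
```

The result, roughly 83,000, supports the essay’s rounding to “a hundred thousand.”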
The problem is urgent and the project is promising; the results can be underwhelming. After scanning millions of declassified documents from the State Department’s “Foreign Relations of the United States” series, for instance, Connelly and his team identified the words most likely to appear before or after redacted text, and found that “Henry Kissinger’s name appears more than twice as often as anyone else’s.” (Kissinger, who was famously secretive, was the Secretary of State from 1973 to 1977.) This is a little like building a mapping tool, setting it loose on Google Earth, and concluding that there are more driveways in the suburbs than there are in the city. By the beginning of the twenty-first century, commercial, governmental, and academic analysis of data had come to be defined as “data science.” From being just one tool with which to produce knowledge, it has become, in many quarters, the only tool. On college campuses across the country, data-science courses and institutes and entire data-science schools are popping up like dandelions in spring, and data scientist is one of the fastest-growing employment categories in the United States. The emergence of a new discipline is thrilling, and it would be even more thrilling if people were still opening all four drawers of that four-drawer filing cabinet, instead of renouncing all other ways of knowing. Wiggins and Jones are careful to note this hazard. “At its most hubristic, data science is presented as a master discipline, capable of reorienting the sciences, the commercial world, and governance itself,” they write. It’s easy to think of the ills produced by the hubristic enthusiasm for numbers a century ago, from the I.Q. to the G.D.P. It’s easy, too, to think of the ills produced by the hubristic enthusiasm for data today, and for artificial intelligence (including in a part of the Bay Area now known as Cerebral Valley). 
The worst of those ills most often have to do with making predictions about human behavior and apportioning resources accordingly: using algorithms to set bail or sentences for people accused or convicted of crimes, for instance. Connelly proposes that the computational examination of declassified documents could serve as “the functional equivalent of CT scans and magnetic resonance imaging to examine the body politic.” He argues that “history as a data science has to prove itself in the most rigorous way possible: by making predictions about what newly available sources will reveal.” But history is not a predictive science, and if it were it wouldn’t be history. Legal scholars are making this same move. In “ The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future ” (PublicAffairs), Orly Lobel, a University of San Diego law professor, argues that the solution to biases in algorithms is to write better algorithms. Fair enough, except that the result is still rule by algorithms. What if we stopped clinging to the raft of data, returned to the ocean of mystery, and went fishing for facts? In 1997, when Sergey Brin was a graduate student at Stanford, he wrote a Listserv message about the possible malign consequences of detecting patterns in data and using them to make predictions about human behavior. He had a vague notion that discrimination was among the likely “results of data mining.” He considered the insurance industry. “Auto insurance companies analyse accident data and set insurance rates of individuals according to age, gender, vehicle type,” he pointed out. “If they were allowed to by law, they would also use race, religion, handicap, and any other attributes they find are related to accident rate.” Insurers have been minimizing risk since before the Code of Hammurabi, nearly four thousand years ago. 
It’s an awfully interesting story, but for Brin this was clearly a fleeting thought, not the beginning of an investigation into history, language, philosophy, and ethics. All he knew was that he didn’t want to make the world worse. “Don’t be evil” became Google’s motto. But, if you put people’s brains in glass jars and burn all your books, bad things do tend to happen. ♦ 
"
13,573
2,023
"“The Soccer Balls of Mr. Kurz,” by Michele Mari | The New Yorker"
"https://www.newyorker.com/magazine/2023/05/29/the-soccer-balls-of-mr-kurz-fiction-michele-mari"
"Fiction The Soccer Balls of Mr. Kurz By Michele Mari Illustration by Guido Scarabottolo For Bragonzi, the only beautiful thing in the sad life of the boarding school in Quarto dei Mille was the soccer matches. And yet even that beauty was anguished. He realized it as early as the first match, when he saw that, once the moment came to shoot, even the best, even the oldest players suffered a kind of muscular contraction, as if forcing themselves to hold back; and, in fact, what emerged was a weak, uncertain shot, which the goalie blocked with ease. And to think that a second earlier that same forward had seemed full of confident vigor, impetuously swooping down onto the ball, defending it, rushing with long strides toward the goal area—but then . . . but then that feeble shot. Only at the third match did he make up his mind to ask, after he’d happened to give a hard kick and the ball, flying upward, just barely missed going over to the other side, beyond the wall that constituted the end of the schoolyard: “Aaaah . . .” all the little boys groaned in chorus, covering their eyes with their hands, and when the ball fell back down into the schoolyard, rather than rejoicing, they rebuked Bragonzi bitterly. “But why? What did I do wrong?” he asked Paltonieri as they went back inside for snack time. “And even if the ball did go over, why make such a big deal about it?” And so Paltonieri explained. He said that on the other side of the wall lived a Mr. 
Kurz, whom no one had ever seen but who must have hated all the boarding-school children, because whenever the ball ended up on his side he never gave it back (as is civil and urbane custom: you’ve sent it hurtling over there and now you anxiously wait, speculating by the wall, and, lo, by silent miracle it returns, tracing its trajectory in the sky, returning, returning—and with your heart overflowing with gratitude you give resounding thanks: “Thank you!” you say, you don’t know to whom, but you say it. Or else the miracle is delayed, and you walk away uncertainly, saddened by the game’s forced end; but when you come back the following morning the ball is there in the yard, for how long you don’t know, and so your “thank you” is all the more heartfelt, because you only think it, addressing it to the past). Not only that, but vain would have been any attempt to get the ball back; at least this was what was claimed by the young Instructresses, who, a long time ago, caving to universal insistence, had gone over to speak to Mr. Kurz. “Mr. Kurz is well within his rights,” they apparently relayed with an air of annoyance, “and can keep whatever makes its way into his yard.” Such a response, noted Paltonieri, who had heard the story from Morchiolini, sent the message that the Instructresses hadn’t put much of an effort into their mission: if only the boys could have gone themselves, just once, to speak to that man, maybe they would have convinced him, maybe he would have yelled at them a little, sure, but in the end he would have given back all the balls confiscated that year and, who knows, even in previous years. But nothing could be done, the rules barred the boys from leaving the school, and, besides, what would be the point? Mr. Kurz had said no to them, and they were schoolmistresses—never mind a bunch of snot-nosed kids! For that matter, the Instructresses had added, from that day forward they would not be going back to see that man. 
They had a sense of dignity, they did, and they weren’t interested in being humiliated by someone who—they stressed with a hint of sadism—happened to be correct! Of course, Paltonieri continued, if the school had been endowed with an ample supply of soccer balls, there would be nothing to get upset about in all this; if they lost one they could requisition another, and Mr. Kurz could do as he pleased. But the reality was that an endowment of balls not only wasn’t ample but wasn’t even provided for, and the boys had to make do with the odd privately owned ball. “Do you understand what this means?” Paltonieri pressed Bragonzi, now thoroughly worked up. “It means having to keep tabs on the new kids, the ones who’ve just arrived with a suitcase full of toys, and hope that they have a ball, and, if they do, persuade them to lend it to us, giving them gifts, which is already enough to make them suspicious, maybe the ball is new and so they guard it jealously, and if you try to take it away from them they squeal and then the Instructresses come running, understand? And when you’ve finally convinced them—you’ve given them heaps of trading cards and comics, promised them they’ll also get to play, even if they’re so little they don’t have a clue what a soccer match is—when finally it’s all worked out and the game begins, pow!, some idiot kicks the ball over the wall, and we’re ruined. And it’s not even possible to get our parents to buy balls when they come to see us and take us to Genoa, because visiting days are on Sunday and everything’s closed . . . You know today’s ball, the one you almost sent over to the other side? 
It’s Randazzo’s, and to get it he had to write to his dad a month ago, telling him to bring it last Friday, and his dad lives in Messina and only comes twice a year, understand?” Bragonzi understood, and he understood, too, that theirs would never be real matches but monstrosities, unnerving endeavors in which, more than the struggle between the two teams, what counted was the unspoken battle being played between all of them and that cruel man lying in wait. As months passed, this image grew and grew in Bragonzi’s mind, and he became accustomed to thinking of Mr. Kurz as an enormous black spider, motionless in the middle of his yard but lightning fast when pouncing on the balls that fell like fat insects into his web: then, seizing them with his foul legs, horrifically he sucked till there was naught left but the floppy remains . . . This rapacity was the scariest thing of all, because it enveloped the soccer ball even before it went over the wall, beckoning it and infecting it with a bluish leprosy, so that playing with it was a bit like contracting that disease, or like conversing with a man condemned to death; at other times, it seemed to him that the ball was a beautiful woman promised in marriage to a jealous tyrant, and that terrible torments awaited the reckless fool who dared so much as to graze her. It was but little consolation that he now played on a permanent basis for the Weenies. Dividing all the boys into Champs and Weenies had been thought up by Saniosi, whose intellect, faced with the impossibility of resolving the problem of Mr. Kurz, had at least conceived of a way to transform that nightmarish presence from a paralyzing element into an active part of the game. What he proposed was simple, and founded on the eradication of switching sides at halftime: the Weenies would always shoot at the goal chalked on the dormitory wall, the Champs at the one on the wall separating the schoolyard from Mr. 
Kurz; that way, Saniosi thought, the fear of losing the ball would hinder the Champs, weakening their abilities and thus levelling the playing field. And so it was—but for the fact that they all wanted to be welcomed into the ranks of the Weenies, and to this end deliberately tripped themselves up, displayed profound shortcomings in technique never previously revealed, spread their legs wide open so as to garner the supreme humiliation of the nutmeg. It became necessary to form a tribunal of memory keepers, who by punctiliously citing past dribbling and counterattacks, crosses and headed goals, forced the Champs to face, with no chance of appeal, their own talent. So Bragonzi was a Weenie, but this didn’t prevent him from noticing during the games—almost absorbing it from the uncertain looks in the eyes of the Champs—a general sense of distress. This feeling only worsened after the episode with Lamorchia. It happened as follows: For an agonizingly long week, the boys were left without a ball, to rave, bored, in the emptiness. Then, on Sunday, Tabidini’s dad took his son to Genoa. Seeing him heave a sigh in front of the lowered shutter of a toy store, he questioned the boy and, finding out the truth, gave a good long laugh; then, without another word, he took his son by the hand and pulled him along until they reached the nearest park, where several gangs of children were playing ball. “Which would you like?” he asked, encompassing in a single wave of his hand that entire swarm. “What do you mean, ‘which’?” gulped Tabidini, who had understood perfectly. “Don’t you worry about it. 
There must be a ball here that tickles your fancy more than the others, no?” Tabidini observed: over here, the children were gratifying themselves with an unsizable rubber sphere, colorful and flabby, the kind for little kids; another group, right behind them, was scrambling around a ball that was more serious but also deflated—you could tell from the noise it made and from its pitiful bounce. Tabidini looked beyond the drinking fountain: over there was the biggest showdown, with at least ten players per side, and the ball was sound, but lightweight, too, made of taut plastic, one of those balls which shoot up bizarrely, almost taking flight of their own volition, no, no, too dangerous, a real shame, though; to their left, in a completely grassless area, enshrouded in an earthy cloud, six desperately lanky dawdlers were playing with a dirt-colored ball of an indecipherable nature; he looked at them more closely—they didn’t have “the goods” and were playing in loafers, their long socks pulled up to their knees, a scraping of soles, a slip-sliding amid expletives. Tabidini waited for the ball to emerge for an instant from the dust cloud to observe it more carefully: huh, it was leather, one of those prehistoric hand-stitched balls, with a wide valve like a ten-lira coin and that nutty color which had been vanquished long ago by black-and-white, weighty and lumpy and somewhat pear-shaped, of a mineral substance that had been chemically enriched over the years with mud and emotions . . . 
Headaches and blackened nails lay in store for the imprudent soul who opted for that ball, no thank you, better take a look at that other group in the field all the way at the far end; he asked his father for permission to go, then walked through the park until he was close enough to taste this new match—a match into which fathers and sisters had been frivolously mixed, a match that was revolving, alas, around an exceedingly light beach ball, literally lighter than a feather, a complimentary item included with the purchase of sunscreen for the sportily benighted. Disheartened, Tabidini went back to his dad, with one last glance at some other pilgrims who were blissfully delighting—poor fools!—in a felt tennis ball. “Well, then?” Tabidini was about to reply that he wasn’t exactly spoiled for choice when he was distracted by the simultaneous arrival of four cars, then of two more right after. Out of them came twenty or so older adolescents in tracksuits, loaded with gym bags and duffelbags. It was enough for one of them to tweak his hamstring muscles, tenderizing them a bit, for Tabidini, melting with emotion, to understand: yes, he didn’t need to see over it to know what was behind the park’s high gray wall, the group’s clear destination. A real soccer field! A real match! he thought, now liquefied, just as one of the last adolescents, having rested his duffel on the ground, pulled out a plastic bag, which he opened and then put back down, laying bare its contents: shimmering in the morning light of the sun, so new and untouched as to appear enamelled, flawlessly round, soft and taut at once, planet of glory, the most beautiful soccer ball Tabidini had ever seen. Propelled by an irrepressible impulse, he slipped his chubby hand out of his father’s and started to run toward the player, who had remained behind his companions and was now meticulously closing his duffelbag. 
As soon as he was close enough to make out the words, Tabidini stopped, and he read, “World Cup.” Oh! His heart skipped a beat. And then, right below, in a different pentagon, “Official Soccer Ball—Patented—Licensed—Tested,” and slightly lower still, “No. 3.” But what made Tabidini’s eyes bulge out of his head was the signature, the fluttering signature stamped along the length of two other pentagons (at first glance he didn’t want to believe it, looked more closely at the squiggle—but, yes, it was true, beyond a shadow of a doubt): “George Best.” Best! Best’s soccer ball! The greatest player of them all! The legend who was invoked after every intoxicating mazy run! At school, they’d only ever had one ball with a name on it: “Totonno Juliano,” it was called, it even bore Juliano’s picture, though the product was made of plastic, brought back from Naples by Fiorillo—a good ball, but nothing more, and, in any event, after just a few days it became the prey of Mr. Kurz. But this one! And Best’s, to boot! Desperately he turned toward his father, who started to walk over. Meanwhile, the adolescent, giving a shout to his companions, sent the ball their way, essentially inviting them to have a taste. Tabidini was no stranger to that weakness, that yielding to the temptation to try out a new ball while still off the field and out in the street, despite knowing full well that the rough concrete would leave a mark on its lustre—as if the owner, unable to bear so much perfection, wanted to artificially dirty and age the ball in order to finally recognize it as his own. Mr. Tabidini knew his son. Without saying a word, he trotted over to the youngsters, whom he reached right at the little iron gateway in the wall. At a distance, his son watched them talk: his father on one side, the others curved around him in a semicircle, their bags placed on the ground. They were shaking their heads, gesticulating nervously. 
Then his father took his wallet out of his jacket and started sliding out bills. The players shook their heads some more; then, seeing that he was still pulling out bills, they started to discuss the matter among themselves. One of them moved off, gesturing as if to tell another to go to hell, though he soon came back. Now Tabidini’s dad was standing there in silence; one guy came right up to him, shaking his fists, but three others grabbed him and shoved him out of the group. The discussion continued until Tabidini’s father finally stuck his fingers back into his wallet. When Tabidini saw one of the players pick the ball up and hand it to his father, he thought he was dreaming. Kissed by the sun as he walked back (the adolescents, behind him, went on gesticulating and arguing), Tabidini’s father looked like a paladin returning with the Grail. That evening, in a jubilant riot of oohs and ahs, Tabidini was greeted as a hero by the entire boarding school, and every boy, before falling asleep, fantasized in his bed about the match announced for the following day. So radiant was the image of George Best that, for one night, there was no room in their heads for Mr. Kurz. What followed was something horrific, and each boy found himself suddenly older. Bragonzi was left with the special sorrow of having failed to touch the ball even once. It was only a minute into the game, the Champs were on the attack, when the ball rebounded and went soaring into the air like a sublime bird: in everyone’s consciousness it came back down in slow motion, while below a roaring, elbowing melee raged. In the general confusion, no one noticed Lamorchia—only Bragonzi saw him getting ready to kick a volley: “No! No!” he shouted, or maybe he merely thought it, while the ball descended with unreal slowness, and already that kid was slanting, twisting his upper body and rearing back his right leg, already he was bending his knee as he lifted his shoe off the ground, “No! 
No!” not like this, not in the air, let it bounce, but Lamorchia couldn’t hear him, it was as though he were being drawn heavenward, ankle first, every sensory faculty now transferred to that ascending ankle, into that outward thrust that is called an instep. Abandoning the man he was marking, Bragonzi dived into the melee toward Lamorchia, imploring him all the while, sending him messages, and then, in a flash, everyone realized, and froze as if turned to stone, limbs caught and tangled, and, unable to give voice, each one thought inside himself, Don’t do it, don’t do it, no one daring to look at Lamorchia’s ankle, looking only at his swooning eye, captivated by his bliss and at the same time horrified . . . Pow! went the ball as it was struck from too low and from the side, rising once again, though no longer vertically, rather in an excruciating, mournful trajectory: Best’s soccer ball fell precisely on the flat top of the wall, taking everyone’s breath away, and then, after an imperceptible stasis, it plunged down definitively on the other side, and became the property of Mr. Kurz. No one did Lamorchia any harm, because the harm was locked in their hearts. Lamorchia himself, for that matter, was never the same after that day, nor did he ever again wish to play soccer: he could be seen off at the edge of the field, sitting like a pensioner warming himself in the afternoon sun, and when the ball wound up in his vicinity, and shouts of “Ball!” were directed at him from the field, he would pick it up, but, not having the courage to kick or throw it, he would carry it all the way to the center of the field, squeezing it to his chest, and, once there, set it down with care. Six months had passed since that day, during which at least twelve balls had made their way to Mr. Kurz. 
Then, tired of so much heartache, the boys ceased to play except with balls of knotted rags, which had the advantage of never leaving the ground: monstrous turbans that kept up the fiction of sphericality for no more than half an hour before starting to unravel, coarse comets dragging a tail of dusty tatters. After four months of this punishing humiliation, Bragonzi stopped one fine fall day in the middle of a rightward attack, and amid general protest grabbed that simulacrum of a ball in his hands. “Companions, friends,” he would have said if he had been an ancient tribune, “consider who we are, who we have been, and, gazing upon yourselves in this ignominious rag as in a mirror, may you hence derive sufficient shame to spur you to redeem a life perhaps not yet lost to the cause of Soccer. Think of those who, scorning danger, preceded us on this selfsame field, and let it conjure within you those Greats in whose shadow all of us, in regrettably distant days of yore, sought to shape ourselves: Tumburus, Fogli, Mora, Pascutti, Bobby Charlton, Chinesinho, Del Sol. They are watching us—and do we not shudder? And yet we hesitate?” His words were not these, naturally, but this was the spirit, and the result—judging by the gritting of teeth—was no different from the one such a speech would have inspired. And so war was declared, but for the moment, needing also to fight on the internal front with the Instructresses, and not knowing what they would find on the other side of the wall, they limited their actions to the launching of a reconnaissance mission. In the insanity of the hour, everyone volunteered, but it was unanimously decided that if there was one among their number to whom the honor of that mission was rightfully owed it was Bragonzi. To decide who would join him, they proceeded to draw lots, from which emerged the names of Tabidini and Sieroni. 
At two o’clock that night, Bragonzi slid out from under his covers and, feeling his way along the walls in the dark, came to the end of the hallway, where their Instructress’s bedroom lay. He knocked three times, and when she opened the door, dishevelled and furious and searching in the shadows for whoever the pest was, he said in one breath, “Quick, come, Tabidini is unwell!” While she ran to the afflicted, though not before covering her shoulders with a shawl, Bragonzi infiltrated her room and rummaged through everything (resisting the distraction of stockings and lace) until he found the coveted bunch of keys. Then, after hiding them in a carefully selected spot in the bathroom, he went back into the dormitory, giving the agreed-upon signal to Tabidini, who promptly ceased his stertorous gurgling. An hour later, when silence reigned anew, Bragonzi and Sieroni got dressed and slipped like thieves to the bathroom, and, with the keys retrieved, were now masters of the boarding school. First, they opened the janitor’s closet, grabbing a flashlight and a handsome collection of screwdrivers; then, after unlocking two other doors, they exited onto the field, and suddenly (or was it only a shiver from the freezing air?) it was as though Mr. Kurz could see them. One last door, to the gardener’s shed, and they came into possession of a long ladder. 
Bragonzi tried his best not to think about what he was doing, and, actually, thanks to a hint of fever, he was aware of it all as though he were already remembering it, as though it were a thing of the past: the ladder, which was slightly shorter than the wall; the struggle to stand it upright like an Egyptian obelisk; Sieroni hesitating, owing to an onset of second thoughts, which resulted in a necessary rebuke; his own frightening ascent, rung after rung, with the terror of spotting over the top of the wall the first of the eight hairy legs; his precarious balancing act up at the top followed by the work of lifting the ladder and lowering it on the other side, first pushed from below by Sieroni then held solely with his own strength; the cold air on his face and the impossibility of seeing anything whatever on Mr. Kurz’s side; Sieroni’s whimpering invitation to turn back; and, at last, his descent into the darkness below. After landing in Kurz’s yard, Bragonzi stood a long while in silence, until, all being quiet, he finally turned on the flashlight. The yard was small, much smaller than the school’s, and not paved. Here, then, on this earth, was where the balls fell. In front of him, a low house, two stories, its windows shut: Kurz’s house. The yard was bordered on the sides by two walls that were as tall as the one he had just climbed, but along the left wall ran a strange, glimmering structure. Bragonzi approached it and saw that it was made of glass, with leaded panes: Kurz’s greenhouse. He tried to look inside, but the glass offered back only the flashlight’s glow. The perfect place to hide the ladder, he thought, for if Kurz sees it I’m a goner. His next thought was that the screwdrivers would now come in handy, but there was no need for them: the little door to the greenhouse was closed by a latch with no padlock, and that things could be so easy immediately brought back to mind the ghastly mouth of the spider. 
Having flung the door wide, Bragonzi dragged and then pushed the ladder inside, making sure to erase the grooves left on the ground: he had seen this done in movies by American Indian women to the tracks of their shining warriors. Now that he was shut inside the greenhouse, he turned the flashlight back on to better conceal the ladder, and he saw them. He saw all of them, all at once, and with them the generations, the jerseys, the hopes, the dashes and dives. The greenhouse was filled with three long shelving units, two units on the sides and one in the center, like a kind of backbone, resulting in two parallel corridors; each had seven rows of shelves, each row a continuous line of flowerpots, each pot holding a soccer ball. Slightly larger in diameter than the pots, the balls protruded by three-fourths, touching one another at the sides like the segments of a monstrous caterpillar. Stunned, unsure whether to be horrified or to rejoice, his heart rioting in his chest, Bragonzi moved closer and focussed the beam of light on the first ball on the shelf to his left. It was an incredibly old ball, more gray than brown, completely peeled and with several unstitched seams. He touched it: the coarsest thing he had ever felt. There was something written on the pot in black block letters, faded with time: “May 8, 1933.” Bragonzi was trembling. He shone the light on the next ball: this one looked worn out like an old sweater, and, busted, dented, and covered in tarlike stains, it had sunk deeper than the others into its pot; here, too, the pot bore a faded inscription: “November 13, 1933.” It’s a dream, Bragonzi thought, refusing to understand. He slowly went down the corridor, moving the beam of light: February 4, 1934, April 28, 1934, May 16, 1934, June 2, 1934, June 18, 1934, August 3, 1934, September 3, 1934 . . . then eight balls from 1935, six from 1936, ten from 1937, seven from 1938, five from 1939, none from 1940 to 1945, twelve from 1946, sixteen from 1947 . . . 
Could it be? He turned from the shelves on the left, and, pointing the light at the central unit, immediately read, “July 21, 1956.” This one was a double shelf, each pot corresponding to a pot facing the opposite side; here he ran breathlessly, and read at random, “March 7, 1960,” “August 11, 1961.” And, finally, the shelves on the right, full of orbs from 1963, from ’64, from ’65, from ’66 . . . Overcome, he sped up his pace as he moved down the aisle, toward the back, where he knew what he would find . . . He would find Fermenti’s soccer ball, the very first one he had seen go flying over to the other—to this—side, and Randazzo’s ball, there they were! and the “Totonno Juliano” (there! “March 9, 1967,” yes, that was the day it had happened), and his own, his red-and-black beloved, it was there, too (he was about to take it but withdrew his hand), and all the others up to Best’s, there it was! shining more brightly than the rest in the glow of the flashlight, still unblemished and new-smelling, and then all the lost balls up to the day of the conversion to rags, not one was missing, oh, dearest soccer balls! But what sent a shudder running through his entire body was what he saw after the last ball, even if he could have imagined it beforehand: a line of empty pots, ready to welcome new arrivals . . . He contemplated at length the emptiness of those pots, successively lighting up their interiors, and he wondered where, in that precise moment, the balls destined to fill them were, in what storeroom or window display, and wondered, too, when they would rain down like ripe fruits from over the wall, on what date, a sixteenth of October or a twentieth of March—impossible to say. For now, the boys played with balls made of rags, but someday things would go back to normal, it was inevitable, and on that day Mr. Kurz would be happy once more. 
What did he think of the temporary suspension of soccer balls? Maybe from the more muffled sound of their kicks he had figured out the truth and was awaiting his hour, as he had since 1933. Bragonzi returned to the front of the greenhouse and stood before that first ball: looking at it, and thinking that those who had played with it must be older than his father by now, he considered how the balls with which an individual plays in his life get lost in thousands of ways, rolling down countless streets, landing in rivers and on rooftops, torn apart by the teeth of dogs or boiled by the sun, deflating like shrivelled prunes or exploding on the spikes of gates, or simply disappearing, you thought you had them and you look all over but they’re nowhere to be found, who knows how much time has passed since you lost them or since someone swiped them at the park; he considered how all of the balls touched by those children had thus dissipated, and if he were in their presence and asked them, “Where are all your soccer balls?” they would shrug, unable to account for the fate of a single one. That ball alone had been snatched from the clutches of destruction; only that ball, from May 8, 1933, went on being a ball. Oh, he knew all too well how things had unfolded, for how many times had he witnessed the same scene! The ball had shot upward, and even before it went over the wall everyone thought, It’s lost—goodbye, ball. But no, only in that moment was it saved. And many years later, when all those children went down into their graves, that ball would be more alive than them, the last memory of the matches of yesteryear. 
Bragonzi passed one more time through the entire collection, observing more closely some spheres that he hadn’t noticed before: a hard and clumpy one resembling a truffle, a still pristine one on which was written “From Grandma, to her sweet pea,” a rubber one with the faces of the players who had died in the Superga air disaster, one with Hamrin’s signature forged by an uncertain juvenile hand. And he noticed something else, which brought a lump to his throat: Mr. Kurz had arranged each ball in its pot so as to look its best, the least dented or unstitched part forward, the part with the faces or signatures, as though he loved those soccer balls. The glow of the flashlight kept growing dimmer, and so Bragonzi decided to turn it off for a little while. In the darkness, after a few seconds had passed, the silhouettes of the soccer balls began to appear like fluorescent spectres, first the whiter ones, then slowly but surely the rest, and it seemed to Bragonzi that they were quivering, and that they wanted to say something. Concentrated in that luminescence was the first glimmer of morning, as yet imperceptible in the sky. Before long it would be dawn (had he been in the greenhouse for that long?), and Bragonzi didn’t know what to do, whether to turn the flashlight back on and keep looking around, or get out of there, or scope out other areas of the yard. Instead he carried on as before, wandering slowly up and down those two corridors, one moment laying his hands on an orb whose pentagons looked like black fish in a pitcher of water, the next on a globe of gaseous yellow. The first light of sunrise took him by surprise and convinced him that he should go back. He dragged the ladder to the foot of the wall after being assailed by a gust of freezing air upon leaving the greenhouse. Then, just as he was about to climb the ladder, he noticed something in the middle of the yard, something that had been hidden before in the dark. 
He moved closer: it was a wooden chair with a wicker seat, turned to face the boarding school. Oh, it didn’t take much to understand what the person who sat in it waited for, and Bragonzi shivered at the thought of him sitting there, motionless, patient, day after day from morning till night, saddened by the fruitless days, weeks, months . . . He immediately walked away from the chair, then he went back; he wanted to try to sit in it, and he did. Opposite, one saw only the wall, and, above, the sky, nothing more. He tried to imagine a match taking place behind that wall, Secerni’s attacks, Saniosi’s feints, Piva’s fouls, Fognin’s drives. He saw the sweaty faces, the dust clouds, the scraped and scabbed knees, he saw the arguments over offsides and the rock-paper-scissors to decide the teams, he saw the rage and he saw the joy. And he saw a ball spring up from the top of the wall like a black moon from the sea, saw it rise, tracing its arc in the sky, and falling to earth on this side, bouncing a few metres from the chair, then stopping meekly in the dust. Hello, ball, he said, tenderly contemplating it in the light of the dawn. When he reached the top of the wall, he realized that Sieroni had fallen asleep on the ground, right there below him; he woke him by dropping a shoe on his back. He then pulled up the ladder and climbed back down into the schoolyard. At the first occasion they had to talk about it, his throng of classmates made only a collective impression upon him while—unable to bring any one face or name into focus, surrounded by their disappointed eyes—he told of locked doors and darkness. It rained the following days, and the schoolyard remained deserted. That Sunday, their Instructress told Bragonzi that there was a surprise in store for him, his dad had come from Milan to see him, he was to run and get dressed, chop-chop! 
His dad took him to a restaurant and then to the movies to see a Lemmy Caution film, after which they strolled around the port looking at the ships. Toward evening they got in a taxi, but instead of giving the school’s address his father said, “To the train station.” Bragonzi didn’t ask any questions, and he kept silent even in the baggage room, where his father reclaimed a big black bag. They returned to the school in another taxi, and only when they were in front of the gate, with the taxi-driver waiting to head back to the station, did his father crouch down and open it. The first thing to emerge was an issue of Soldino , but already Bragonzi had started to tremble; then came a stick of modelling clay and a little puzzle, and meanwhile the rustling of cellophane could be heard underneath; then there was a balsa-wood model-airplane kit; and then, finally, that transparent bag, which his father gave to him after making him wait longer than for the other presents, as he smiled back in silence and hoped that his tremors weren’t visible. “Thank you,” he said, and he wanted to add something else, but while he was thinking about what this should be his dad had already got back in the taxi. And so Bragonzi hid everything under his raincoat and ran to the dormitory. It was past the hour when boys needed to come back from any outings “already fed,” for the rules barred these temporary escapees from joining the others in the refectory during meals (his father didn’t know this, since Bragonzi had never been brave enough to tell him), and so there was no one around. After putting the other presents in his closet, Bragonzi sat on the bed with the see-through bag on his knees. 
It was closed with a thin red drawstring and, in addition to the ball, contained the pump and the needle for inflating it, as well as a little box of wax and a small felt cloth with zigzag edges for polishing: once opened, the bag released a delightful leathery aroma, which reminded Bragonzi of the smell of his nicest pair of shoes. The pump was icy cold, the ball less so. He stuck the needle into its valve and began to inflate it with care: some of the air in this room, he thought, is going to end up inside there, and it will never come out again. When every last pentagon had popped out convexly, he removed the needle. He spread his thighs slightly apart to better hold the ball, not wanting it to touch the floor. It was magnificent, a Derbystar “Deliciae Platearum,” even more beautiful than Best’s “World Cup” ball; he couldn’t imagine how hard his father must have had to look before finding it, or how much he had paid for it, its white just slightly pearlier than the rest, with iridescent reflections, and black pentagons framed by a subtle red outline, and a little yellow star right underneath its brand name, a ball even Rivera would kick cautiously, truly like nothing he had ever seen before . . . He fondled it awhile with his fingertips and slid it against his cheeks to take in its smoothness, decided to give it a few more pumps, then went back to caressing it. He looked at the clock: before long, the other boys would all be coming back upstairs, he had to be quick. He put the pump and the bag in the closet, and went down to the atrium with the “Deliciae Platearum” under his arm. From there, he passed through the television lounge before skirting the refectory, crouching down beneath the windows so as not to be spotted by the diners; at the end of the hallway, the door to the schoolyard was open—the Instructresses liked to take a stroll right after dinnertime. 
It was not yet completely dark in the schoolyard, and from the sky, now that the rain had stopped and the clouds had been torn asunder, Bragonzi could tell that the next day would be a beautiful one. He avoided the puddles as he moved to the center of the soccer field, which was marked with faded white paint. He looked at the ball in his hands, even more beautiful in the moonlight. He checked that the top of his right shoe wasn’t muddy, looked at the wall in front of him and then above the wall, too, took a deep breath, looked once more at the ball, threw it into the air, waited for it to come back down, and kicked it with his instep when it was roughly thirty centimetres from the ground, and he knew from the sound it made that he had kicked it well, saw it rise quickly into the air, first darkly silhouetted against a cloud whitened by the moonlight, then brightly against the night sky, and it seemed to rest there, suspended in midair, until it descended, and disappeared behind the black horizon of the wall. Now he could go back, and bury himself in his bed. ♦ (Translated, from the Italian, by Brian Robert Moore.) This is drawn from “You, Bleeding Childhood.”
"
13,574
2,023
"What We Owe Our Trees | The New Yorker"
"https://www.newyorker.com/magazine/2023/05/29/what-we-owe-our-trees"
"The Control of Nature What We Owe Our Trees By Jill Lepore The notion that clear-cutting can be counteracted by the planting of trees is a political product of the timber industry. Photograph © Robert Adams / Fraenkel Gallery The woods I know best, love best, are made of Northern hardwoods, sugar maple and white ash, timber-tall; black and yellow birch, tiger-skinned; seedlings and saplings of blighted beech and striped maple creeping up, knock-kneed, from a forest floor of princess pine and Christmas fern, shag-rugged. White-tailed deer dart through softwood stands of pine and hemlock, bucks and does, the last leaping fawn, leaving tracks that look like tiny human lungs, trails that people can only ever see in the snow, even though, long after snowmelt, dogs can smell them, tracking, snuffling, shuddering with the thrill of the hunt and noshing on deer scat for dog treats. I make lists of finds, two-winged, four-footed, and rolling: black-throated green warblers and blue-headed vireos, porcupines and salamanders, tin cans and old tires, deer mice and fisher cats, wild turkeys and ruffed grouse, black bears and, come spring, their tumbling, potbellied, big-eared cubs. Even if you haven’t been to the woods lately, you probably know that the forest is disappearing. In the past ten thousand years, the Earth has lost about a third of its forest, which wouldn’t be so worrying if it weren’t for the fact that almost all that loss has happened in the past three hundred years or so. As much forest has been lost in the past hundred years as in the nine thousand before. 
With the forest go the worlds within those woods, each habitat and dwelling place, a universe within each rotting log, a galaxy within a pine cone. And, unlike earlier losses of forests, owing to ice and fire, volcanoes, comets, and earthquakes—actuarially acts of God—nearly all the destruction in the past three centuries has been done deliberately, by people, actuarially at fault: cutting down trees to harvest wood, plant crops, and graze animals. The Earth is about four and a half billion years old. By about two and a half billion years ago, enough oxygen had built up in the atmosphere to support multicellular life, and by about five hundred and seventy million years ago the first complex macroscopic organisms had begun to appear, as Peter Frankopan reports in “ The Earth Transformed ” (Knopf), an essential epic that runs from the dawn of time to, oh, six o’clock yesterday. In his not at all cheerful conclusion, looking to a possibly not too distant future in which humans fail to address climate change and become extinct, Frankopan writes, “Our loss will be the gain of other animals and plants.” An upside! The first trees evolved about four hundred million years ago, and pretty quickly, geologically speaking, they covered most of the Earth’s dry land. A hundred and fifty million years later, during a mass-extinction event known as the Great Dying, the forests perished, along with nearly everything else on land and sea. Then, two million years after that, the supercontinent broke up, a seismic process whose consequences included depositing oil, coal, and natural gas in the places on the planet where they can still be found, to our enrichment and ruination. The trees returned. The ginkgo is the oldest surviving tree species, its fan-shaped leaves unfurling lime green in spring and falling, mustard yellow, in autumn. The first primates showed up about fifty-five million years ago, in the rain forest. They lived in the trees. 
Our ancestors began dividing from apes—began, slowly, coming down from the trees—about seven million years ago; the genus Homo branched off four million years later; and Homo sapiens began wandering around the understory somewhere between eight hundred thousand and two hundred thousand years ago, although exactly when is apparently a matter of fierce debate, which seems right, since humans are such a contentious, Neanderthal-killing lot. Here’s how Frankopan, a professor of global history at Oxford, puts it: “Like rude house guests who arrive at the last minute, cause havoc and set about destroying the house to which they have been invited, human impact on the natural environment has been substantial and is accelerating to the point that many scientists question the long-term viability of human life.” Climate change contributed to the extinction of Neanderthals about thirty-five thousand years ago, but humans, instead of dying out, migrated to different climates, or found other ways to survive, which generally involved controlling fire and burning fallen sticks and branches for heat and to cook otherwise hard-to-digest food, or making axes to cut down trees, whose wood could be used to build shelters and, later, fences for animals. They cut and felled. Knopf printed about twenty thousand copies of Frankopan’s seven-hundred-page book on paper made from trees. I read it sitting in a house built of pine in a chair made of maple at a desk made of oak holding a pencil made of cedar. They cut and felled. The wood in my woodstove is yellow birch, burning, bark curling. “If you think about it, a tree is a tricky place in which to live,” the biologist Roland Ennos writes in “ The Age of Wood ” (Scribner). 
Ennos argues that dividing human history into the Stone Age (beginning two and a half million years ago), the Bronze Age (3000–1000 B.C.E.), and the Iron Age (1200–300 B.C.E.)—a scheme invented in the nineteenth century by a Danish antiquarian—misses the earliest and most important era, the Wood Age. People are arboreal, at least vestigially, Ennos points out, with binocular vision, upright posture, hind limbs for movement, forelimbs for gripping, and fingers with soft pads and nails, all features that evolved to help primates live in trees. The first primates were as small as mice, and could scramble wherever they liked, but, as they got bigger, it became harder to stay up in the trees, where it was safest, especially at night. A “clambering hypothesis,” among primatologists, has it that the thinking of great apes got more sophisticated—they developed a “self-reflective psychology”—so that they could better understand the mechanics of climbing and swinging through trees. Also, the first tools used by great apes were made of trees and in trees: nests for sleeping in higher branches. (The bigger your brain, the more REM sleep you need.) The earliest hominins who learned to walk upright did so while still living, mainly, in trees, and they came down at night only after figuring out how to make fires—with wood. That had all kinds of knock-on effects, including being able to cook food, which makes it easier for us to get energy out of it, and made it possible for our brains to grow bigger. Hominins came down from the trees, built huts, made fires, and no longer needed their fur, so they lost it, which meant that when the weather, or the climate, got colder they needed warmer huts and more fires, but with those they could go anywhere, as long as there were trees. As for making tools, they mainly used not stone but wood, and when they did use stone it was often to make better tools out of wood. 
You could use a stone, for instance, to sharpen a wooden spear, a tool you could wield to kill beasts of land and sea. In all this time, people did not run out of wood, since there weren’t that many people and there were a great many trees, and because trees grow back. Even after humans invented the stone axe and began to chop down trees, this was still true. Chopping and burning, they cleared openings in forests to attract game, and they adzed trunks and limbs into poles and posts, planks and beams. They built houses and rafts and boats, and some people, in places where they had cleared the forests, began to farm. During the ages of stone, bronze, and iron, down through the early modern era, Ennos writes, “almost all the possessions of everyday folk were wooden, while those that were not actually made of wood needed large quantities of wood to produce.” Only the turn to coal for fuel in the eighteenth century and to wrought iron for building in the nineteenth, he argues, brought about the end of the age of wood. Except that it didn’t exactly end, since imperialism, industrialism, and capitalism meant that people were more likely to go to war and conquer land in order to cut down other people’s trees. You could tell this story about a lot of places, but consider England and its North American colonies. By the eighteenth century, much of England and in fact much of Western Europe had been deforested, but England needed timber to build ships in order to trade goods, wage war, and found colonies. It especially wanted very tall and straight pines, for ships’ masts. During the long wars between Britain and France, often fought at sea, France had for a time a ship’s-mast advantage, having cut a path known as the Mast Road through the Pyrenees to a stand of tall fir trees. 
Britain harvested its masts from its colonies, and especially from the tall white pines of New England, having issued an edict, in 1691, that any pine whose trunk, when measured a foot from the ground, was more than twenty-four inches in diameter belonged to the King (later revised, fairly desperately, to twelve inches in diameter). Among the many causes of the American Revolution was the Pine Tree Riot of 1772, when New Hampshire mill owners refused to pay fines for sawing pine trees into boards. One of the earliest alarms about deforestation written in English is “Sylva, or a Discourse on Forest-Trees, and the Propagation of Timber of His Majesties Dominions,” by Sir John Evelyn, published in London in 1664. Evelyn called for tree planting as an act of patriotism, and if he was the first to do so he was not the last, as the University of Oregon geographer Shaul E. Cohen reported in his book “ Planting Nature: Trees and the Manipulation of Environmental Stewardship in America ” (2004). Writing about forests, John Perlin urges humans to “stop our war against them” in a new edition of his 1989 book, “ A Forest Journey: The Role of Trees in the Fate of Civilization ” (Patagonia), more than five hundred pages but “printed on 100 percent postconsumer paper.” Yet any plans for a truce in this war, including calls for planting trees, have often been pretty suspect, perhaps especially so in the United States. American states legislated the protection of the forests from the start, if to little effect. After the Revolution, for instance, Massachusetts forbade the cutting down of those twenty-four-inch white pines on any public lands. But in the Western territories “public lands,” which were generally the unceded ancestral homelands of tribal nations, quickly became private lands. After the Northwest Ordinance of 1787, Congress paid Revolutionary War veterans in plots of land in the Northwest Territory, north of the Ohio River. 
(“The utmost good faith shall always be observed towards the Indians; their lands and property shall never be taken from them without their consent; and, in their property, rights and liberty, they shall never be invaded or disturbed, unless in just and lawful wars authorized by Congress,” Congress affirmed in the Ordinance, in a pledge not honored.) In Conrad Richter’s 1940 historical novel, “ The Trees ,” a family from Pennsylvania treks to the Ohio Valley around 1787. Their little girl, looking down from a hilltop, is overwhelmed by her first view of the forest, thinking that “what lay beneath was the late sun glittering on green-black water,” mistaking for an ocean what was, instead, “a sea of solid treetops broken only by some gash where deep beneath the foliage an unknown stream made its way.” The whole of Richter’s trilogy, the story of American pioneers, is the story of clearing the woods: “Oh, it was hard beating back the woods. You had to fight the wild trees and their sprouts tooth and nail.” By the trilogy’s end, that little girl, now an old woman, is haunted by regret. “She reckoned she knew now how one of those old butts in the deep woods felt when all its fellows were cut down and it was left standing lone and gaunt against the sky, with only whips and brush and those not worth the axe pushing up around it. The second growth trees you saw today were mighty poor and spindly specimens beside the giants she had known when first she came to this country.” A sense that the great clearing meant, as well, a great loss pervaded nineteenth-century American culture. Much of it was romance, a product of the wispy, dreamy, self-justifying association many Americans made between the vanishing forest and the imagined vanishing of the Indians, even while the federal and state governments pursued a policy of conquest and war against Native nations. Tree-planting campaigns became the called-for, remorseful remedy. 
“If our ancestors found it wise and necessary to cut down fast forests, it is all the more needful that their descendants should plant trees,” the landscape architect Andrew Jackson Downing wrote in 1847. “Let every man, whose soul is not a desert, plant trees.” That same year, George Perkins Marsh gave a lecture in Rutland, Vermont, that helped launch the conservation movement. Marsh argued that the destruction of the forests had consequences for the climate: “Though man cannot at his pleasure command the rain and the sunshine, the wind and frost and snow, yet it is certain that climate itself has in many instances been gradually changed and ameliorated or deteriorated by human action.” He went on: The draining of swamps and the clearing of forests perceptibly effect the evaporation from the earth, and of course the mean quantity of moisture suspended in the air. The same causes modify the electrical condition of the atmosphere and the power of the surface to reflect, absorb and radiate the rays of the sun, and consequently influence the distribution of light and heat, and the force and direction of the winds. Within narrow limits too, domestic fires and artificial structures create and diffuse increased warmth, to an extent that may effect vegetation. Marsh insisted, “Trees are no longer what they were in our fathers’ time, an incumbrance.” They are, instead, a reservoir, the source of life, the regulators of the climate. 
Marsh, a linguist and a diplomat, went on to write a groundbreaking book, “The Earth as Modified by Human Action,” first published in 1864 under the title “Man and Nature,” a nineteenth-century version of Frankopan’s “The Earth Transformed.” The Wisconsin legislature in 1867 commissioned an investigation that resulted in the publication of its “Report on the Disastrous Effects of the Destruction of Forest Trees, Now Going On So Rapidly in the State of Wisconsin.” The state then inaugurated a program of tax exemption for landowners who planted trees. In 1873, the Nebraska senator Phineas W. Hitchcock introduced the Timber Culture Act, declaring, “The object of this bill is to encourage the growth of timber, not merely for the benefit of the soil, not merely for the value of timber itself, but for its influence upon the climate.” The act, a failure, was repealed in 1891. Instead, the lasting consequence of Marsh’s “The Earth as Modified by Human Action” was Arbor Day, created by a Nebraskan named J. Sterling Morton and first celebrated on April 10, 1872. Morton, the editor of the Nebraska City News , called for a day “set apart and consecrated for tree planting.” On that first Arbor Day, Nebraskans planted more than a million trees. The holiday soon spread, especially after Grover Cleveland appointed Morton as his Secretary of Agriculture, in 1892. The advocacy organization American Forests was founded in 1875, and, as Cohen writes, it also advanced the idea that planting a tree was an act of citizenship. This was a tradition that faltered at various times in the twentieth century but was renewed beginning in 1970 with the first Earth Day (also held in April) and with the establishment of the National Arbor Day Foundation two years later. Its many programs include Trees for America; pay a membership fee, and you get ten saplings in the mail. American Forests runs Global ReLeaf. 
But Cohen and other critics have argued that there is little evidence that these programs do much more than greenwash bad actors. American Forests has been sponsored by both fossil fuel and timber companies. In 1996, the climate-change-denying G.O.P. encouraged Republican congressional candidates to have themselves photographed planting a tree. “10 Reasons to Plant Trees with American Forests,” printed in 2001, suggests that “planting 30 trees each year offsets the average American’s ‘carbon debt’—the amount of carbon dioxide you produce each year from your car and home.” The E.P.A., on a Web site that linked to American Forests, urged Americans to plant trees as penance: “Plant some trees and stop feeling guilty.” What with one thing and another, have you used ten thousand kilowatt-hours of electricity? The site offered indulgences: plant ten trees, one for every thousand kilowatt-hours. At the height of the corporate tree-atonement era, a New Yorker cartoon showed a queue of businessmen waiting to see a guru, with one saying to another, “It’s great! You just tell him how much pollution your company is responsible for and he tells you how many trees you have to plant to atone for it.” The notion that clear-cutting can be counteracted by the planting of trees is a political product of the timber industry. As Cohen shows, the phrase “tree farm” was coined by a publicist at a timber company, as was the motto “Timber Is a Crop.” And the notion hasn’t died. In 2020, the World Economic Forum announced its sponsorship of an initiative called 1t, a corporate-funded plan to “conserve, restore, and grow” one trillion trees by the year 2030. At Davos in 2020, Donald Trump pledged American support. (At the time, the President mentioned that he was reading a book about the environmental movement; written by a former adviser of his, it was called “Donald J. Trump: An Environmental Hero.”) It’s good to plant trees. No one’s arguing any different. 
“There’s no anti-tree lobby,” a Nature Conservancy ecologist told Science News recently. Trees are the new polar bears, the trending face of the environmental movement. But it’s not clear that planting a trillion trees is a solution. In terms of biodiversity, killing forests and planting tree farms isn’t much help; a forest is an ecosystem, and a tree farm is a monoculture. Forests absorb about sixteen billion metric tons of carbon dioxide every year, but they also emit about eight billion tons. The main study behind the 1t movement proposes that planting trees on land around the world roughly equivalent in area to the United States will trap more than two hundred billion tons of carbon. Yet a forum published in Science in 2019 expressed grave skepticism about both the science and the math behind this plan. The history is fishy, too. National tree-planting schemes have, historically, come up short. Studies across countries have found that as many as nine in ten saplings planted under these auspices die. They’re the wrong kind of tree. No one waters them. They’re planted at the wrong time of year. They have not improved forest cover. The 1t folks make a point of saying that they’re not planting trees; they’re growing them. But whether they really are remains to be seen. In the meantime, you are asked to think differently about trees. They’re out there. They’re smart. They will outlast us. Brian Selznick’s graphic children’s novel “ Big Tree ” (Scholastic) tells the story of trees across tens of millions of years, through the trials of two sycamores: “Once upon a time, there were two little seeds in a very old forest. Their mama said she would give them roots and wings—roots so they’d always have a home, and wings so they would be brave enough to find it.” Selznick’s understanding of forestry, and maternal trees, borrows from the work of the Canadian ecologist Suzanne Simard. 
As a young scientist, Simard was the lead author of a study published in Nature , “Net Transfer of Carbon Between Ectomycorrhizal Tree Species in the Field,” in which she reported the findings of a years-long series of experiments that she conducted with seedlings. “Plants within communities can be interconnected and exchange resources through a common hyphal network, and form guilds based on their shared mycorrhizal associates,” she concluded. That is to say, plants can communicate with one another chemically, and across species, issuing warnings, for instance. Put in human terms, trees can care for one another. Simard came to call certain of these signallers “mother trees,” which both got her into hot water and made her beloved. Although subsequent research verified most of her major findings, she was for a long time chastised by scientists, an experience that was the inspiration for the trials of Patricia Westerford in Richard Powers’s intricate Pulitzer Prize-winning novel, “ The Overstory ,” from 2018. In the novel, Powers describes the moment of Westerford’s crucial finding, in a forest of sugar maples: The trees under attack pump out insecticides to save their lives. That much is uncontroversial. But something else in the data makes her flesh pucker: trees a little way off, untouched by the invading swarms, ramp up their own defenses when their neighbor is attacked. Something alerts them. They get wind of the disaster, and they prepare. She controls for everything she can, and the results are always the same. Only one conclusion makes any sense: The wounded trees send out alarms that other trees smell. Her maples are signaling. Amy Adams is slated to play Simard in an upcoming film adaptation of Simard’s memoir, “ Finding the Mother Tree: Discovering the Wisdom of the Forest ” (Knopf). 
Simard is herself something of a maternal spirit in Katie Holten’s collection of essays, poems, and other snippets, “ The Language of Trees ” (Tin House), in which Holten, an Irish artist and activist, introduces a tree alphabet. Each letter is represented by the striking silhouette of a tree: Apple, Beech, Cedar, Dogwood, Elm, and so on. The book reproduces a piece of Simard’s writing: “When mother trees—the majestic hubs at the center of forest communication, protection, and sentience—die, they pass their wisdom to their kin, generation after generation, sharing the knowledge of what helps and what harms, who is friend or foe, and how to adapt and survive in an ever-changing landscape. It’s what all parents do.” That “mother,” in Holten’s abecedary, reads this way: Mulberry, Oak, Tree of heaven, Horse chestnut, Elm, Redwood. Simard’s research has also been popularized by a German forester named Peter Wohlleben in his best-selling 2015 book (first translated into English in 2016), “ The Hidden Life of Trees: What They Feel, How They Communicate. ” Wohlleben’s earlier books were downers, like “The Forest: An Obituary.” “The Hidden Life of Trees” is not a downer. Forget imperialism, industrialism, and capitalism. Think feelings. A forest of trees, Wohlleben argues, is like a herd of elephants. “Like the herd, they, too, look after their own, and they help their sick and weak back up onto their feet.” Like elephants—like humans—trees have friends, and lovers, and parents and children. They have language, and they also have, he argues, a kind of sentience. As science, the mothering, feeling tree is controversial. As literature for a political movement, it’s not bad, and, after all, nothing else has worked—not Arbor Day, not the “Report on the Disastrous Effects of the Destruction of Forest Trees, Now Going On So Rapidly,” not Global ReLeaf, not 1t. 
At this rate, unless humans think of something better fast, the forests, and then we who walk the Earth, two-legged, will be Dogwood, Elm, Apple, Dogwood. ♦ "
13,575
2,022
"3 ways to overcome the cybersecurity skills gap | VentureBeat"
"https://venturebeat.com/business/3-ways-to-overcome-the-cybersecurity-skills-gap"
"Sponsored 3 ways to overcome the cybersecurity skills gap Presented by PwC Cyber criminals continue to get creative and have increasingly sophisticated tools at their disposal to circumvent organizations’ cyber defenses — and organizations are worried. According to PwC’s C-suite playbook on cyber and privacy , topping the 2023 list of rising organizational threats are the following: Cybercriminal activity: 65% Hacktivist/hacker: 48% Insider threat (current/past employee, contractor): 44% Many senior executives are concerned they’re not fully prepared. The pathways threat actors can access are as dynamic as they are extensive: mobile devices, email, cloud, ransomware, endpoint security, supply chain software, web applications — the list goes on. To combat these threats, many organizations plan to upskill and hire cyber talent in the next 12 months. But the cyber talent shortage is a persistent challenge. Attrition is a growing problem for 39% of organizations and it’s hindering progress on cyber goals for another 15%. 
Hiring from the outside also has many organizations on edge. In the U.S. alone, there are 50% fewer candidates available than are needed in the cyber field. So how do you address the cyber talent shortage? You can update how you recruit for those roles. And you can offer upskilling for existing talent that opens new career opportunities. 1. Cast a wider net when looking for talent Limiting talent pools to newly degreed talent or tenured professionals, or imposing overreaching “entry-level” job prerequisites, can keep your organization on the losing side of cybersecurity. Many organizations are trying to break these old molds and are widening their search parameters. Undergraduate degrees in any area have edged out undergraduate degrees in cyber, computer science or engineering as a requirement. For about 10% of organizations, degrees aren’t even required. By expanding qualifications and the talent pool, you can fill cyber positions faster and retain talent longer. Broader skills can also help information security executives reshape their teams — from a linear tree to a bifurcated branch system that sits across the organization. Many chief information security officers are placing team members on product development (49%) and business (48%) teams, which can put cybersecurity at the right place at the right time — stopping a threat in its tracks. 2. Recruit skills for the 21st century Information security executives are recognizing that social and interpersonal skills may be just as fundamental as JavaScript. In fact, 65% of organizations consider the ability to problem-solve a mandatory characteristic in a final hiring decision. The ability to configure a firewall or perform an audit now needs to be accompanied by soft skills, with more than 40% of executives looking for analytical skills (47%), communication skills (43%), creativity (42%) and collaboration (41%). 
Individuals who have the right attitude around learning, growth and communication can be the nodal link between departments that are traditionally siloed. For example, when cybersecurity teams work with risk, internal audit and compliance teams, they can jointly monitor and prioritize risks consistently. Nearly three-quarters of organizations say they’ve seen better collaboration between cyber and operational technology (OT) teams. As a result, 79% say their cyber team made progress in securing OT during the past year. To attract this talent, organizations must combine emotional intelligence with cyber intelligence. When information security teams can track, analyze and counter security threats and communicate, persuade and adapt, you can unite the entire C-suite to actualize change. 3. Upskill your current workforce to unlock value now and into the future To retain this new mix of cybersecurity professionals, organizations have found upskilling — hard and soft skills — to be the most effective in closing the skills gap. Ninety-three percent of companies that introduce upskilling and reskilling programs have seen increased productivity, improvements in employee retention and engagement, and a more resilient workforce. Many also lower costs through applied automation and reducing the need to fill highly specific and higher-level positions from the outside. Here are a few examples of how cyber upskilling can create a more resilient organization: A business risk officer can take courses in agile, continuous monitoring, and data and analytics to improve his productivity and the department’s performance. An IT auditor who executes risk-based audits and assesses operational effectiveness can learn how to apply Scrum frameworks to business issues, AI modeling to detect and predict fraud transactions, and wrap it all together in a risk assessment dashboard. 
A cyber defense analyst can fill that long-empty management role by completing credentials that teach her how to create an incident response strategy, detect threats and analyze cybersecurity incidents.
Employees can train in function-specific cybersecurity best practices and cyber hygiene to help protect the organization from email phishing attacks, ransomware and high-risk web applications.
Cyber threats are dynamic. Organizations should challenge any long-held beliefs about training and design their programs to be people-powered, function-specific and business-led. ProEdge, a PwC product, curates industry-leading training from multiple vendors. By using techniques such as gamification and simulations — combined with courses and content that’s updated as new threats emerge — students can apply their newfound knowledge to real-time challenges and work towards tangible business outcomes. To learn more about upskilling for cyber, check out this eBook, Addressing the cyber skills shortage. Vikas Agarwal is Leader of PwC’s Risk and Regulatory Financial Services Practice. Matt Gorham is Leader of PwC’s Cyber & Privacy Innovation Institute. © 2023 VentureBeat. All rights reserved. "
13,576
2,022
"5 reasons zero trust is the future of endpoint security | VentureBeat"
"https://venturebeat.com/security/5-reasons-zero-trust-is-the-future-of-endpoint-security"
"5 reasons zero trust is the future of endpoint security
This article is part of a VB special issue. Read the full series here: Zero trust: The new security paradigm. Most enterprises don’t know how many endpoints they have active on their networks because their tech stacks were designed to excel at the concept of “trust but verify,” rather than zero trust. The gap between how many human and machine-based endpoints organizations know about and how many they actually have is growing. Jim Wachhaus, attack surface protection evangelist at CyCognito, told VentureBeat in an interview that it is common to find organizations generating thousands of unknown endpoints a year. In addition, a Cybersecurity Insiders report found that 60% of organizations are aware of fewer than 75% of the devices on their network, and only 58% of organizations say they could identify every vulnerable asset in their organization within 24 hours of a critical exploit. 
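The visibility gap described above can be quantified by comparing an asset inventory against what is actually observed on the network. The following is a minimal sketch with hypothetical device data (names and numbers are illustrative, not from the article):

```python
# Sketch: quantify an endpoint visibility gap by diffing a known-asset
# inventory (e.g., from a CMDB) against devices observed on the network.
# All device names here are hypothetical, for illustration only.

def endpoint_gap(known: set, observed: set) -> dict:
    unknown = observed - known   # seen on the network but never inventoried
    stale = known - observed     # inventoried but never seen on the network
    coverage = len(known & observed) / len(observed) if observed else 1.0
    return {
        "unknown": sorted(unknown),
        "stale": sorted(stale),
        "coverage": round(coverage, 2),
    }

known_assets = {"laptop-001", "laptop-002", "printer-01"}
seen_on_network = {"laptop-001", "laptop-002", "printer-01",
                   "iot-cam-7", "byod-phone-3"}

report = endpoint_gap(known_assets, seen_on_network)
# Only 3 of the 5 observed devices are inventoried: coverage is 0.6,
# and two devices ("byod-phone-3", "iot-cam-7") are unknown endpoints.
```

A continuous version of this diff, fed by network telemetry rather than a static set, is the kind of discovery that ZTNA frameworks depend on before any access policy can be enforced.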
A recent Tanium survey found that 55% of security and risk management leaders believe that 75% or more of endpoint attacks will not be stopped. The typical enterprise is managing approximately 135,000 endpoint devices today, and 48% of them, or 64,800 endpoints, are undetectable on their networks. A recent Ponemon Institute report, sponsored by Adaptiva, found that the average annual budget enterprises spend on endpoint protection is approximately $4.2 million. While endpoint spending continues to increase, so does the gap between how many endpoints are known and protected on a given enterprise’s network.
Zero-trust frameworks are needed to close endpoint gaps
CISOs should consider that defining a zero-trust network access (ZTNA) framework for their businesses accelerates how quickly they can close gaps in endpoint security. A close second priority must be adopting ZTNA techniques, including microsegmentation and least-privileged access, to protect both human and machine identities. It is common knowledge in the cybersecurity community that human and machine identities are under siege, with endpoints being the primary attack vectors. Cyberattackers use endpoints to take control of, and exfiltrate data from, identity access management (IAM) and privileged access management (PAM) systems. In 2021, market revenue for ZTNA rose by 62.4%, according to an analysis by Gartner. The research giant’s 2022 Market Guide for Zero-Trust Network Access provides useful insights security and risk professionals can use to see how their organizations can benefit from zero-trust security. “Zero trust requires protection everywhere — and that means ensuring some of the biggest vulnerabilities like endpoints and cloud environments are automatically and always protected,” said Kapil Raina, VP of zero-trust, identity and data security marketing at CrowdStrike. 
“Since most threats will enter into an enterprise environment either via the endpoint or a workload, protection must start there and then mature to protect the rest of the IT stack.” A report from CrowdStrike found that “adversaries have demonstrated their ability to operate in complex environments — regardless of whether they consist of traditional endpoints, cloud environments or a hybrid of both.” CrowdStrike’s threat hunting team identified 77,000 intrusion attempts, or one on average every seven minutes. “A key finding from the report was that upwards of 60% of interactive intrusions observed by OverWatch involved the use of valid credentials, which continue to be abused by adversaries to facilitate initial access and lateral movement,” said Param Singh, VP of Falcon OverWatch at CrowdStrike.
Zero trust is the future of endpoint security
Building a business case for adopting a ZTNA framework needs to cover cloud, endpoint security and insider risk scenarios to be effective. George Kurtz, CrowdStrike’s cofounder and CEO, said during his keynote at Fal.Con that consolidating security tech stacks is important to customers. He emphasized the strategic role of extended detection and response (XDR) in the company’s product strategy, centering on endpoint detection and response (EDR) as its foundation. “Zero trust, by definition, requires multiple technologies and process elements — and demands scale of data analysis and speed of execution to stop modern attacks,” said Raina. “With most CISOs now looking to consolidate security vendors, they are looking for a platform approach. 
A platform approach ensures a frictionless execution of zero-trust deployment — and leverages an enterprise’s existing investments — all in a standards-based, integrated model.” Zero trust is the future of endpoint security because it addresses the following five areas:
1) Ransomware is endpoint security’s most persistent threat
Ransomware continues to proliferate, increasing by 466% in three years. Ivanti’s Ransomware Index Report Q2-Q3 2022 identifies the vulnerabilities that most often lead to ransomware attacks and shows how quickly undetected ransomware attackers work to take control of an entire organization. Ivanti’s report discovered 10 new ransomware families, bringing the total to 170. The analysis is based on the 154,790 vulnerabilities in the National Vulnerability Database (NVD). Additionally, 47 new vulnerabilities, or CVEs, were added to CISA’s Known Exploited Vulnerabilities Catalog in the last quarter alone. Unknown endpoints that often aren’t secured are what cyberattackers look for to launch ransomware attacks with these new ransomware families. Endpoint protection platforms (EPPs) are becoming increasingly data-driven. Leading vendors’ EPPs with ransomware detection and response include Absolute Software, whose Ransomware Response builds on the company’s expertise in endpoint visibility, control and resilience. Additional vendors include CrowdStrike Falcon, Ivanti, Microsoft Defender 365, Sophos, Trend Micro, ESET and others.
2) Getting microsegmentation right is challenging, but essential
The goal of microsegmentation is to segregate, then isolate, defined segments of a network to reduce the total number of attack surfaces and limit lateral movement. It’s a core element of zero trust and is integral to NIST’s zero-trust architecture. Getting microsegmentation right is also table stakes for creating a successful ZTNA framework. 
The challenge comes in defining which identities belong in a given segment; assigning least-privileged access to every human and machine identity across a network often becomes an iterative process.
3) Eliminating agent sprawl, misconfigurations and breaches by automating device configurations
Eighty-two percent of data breaches involve mistakes in configuring databases and administrator options that accidentally expose entire networks to cybercriminals. There are 11.7 security agents installed on average on a typical endpoint today. The more security controls per endpoint, the more frequently collisions and decay occur, leaving endpoints more vulnerable. Self-healing endpoint management platforms that can rebuild and reconfigure themselves after an intrusion attempt are in demand because they save IT teams time while reducing the risk of endpoint misconfigurations. Self-healing endpoints are designed to turn themselves off, automatically update device configurations, perform patch management and then redeploy themselves without human interaction. Over 150 cybersecurity vendors claim to have self-healing endpoint management platforms that can automate device configurations and deployment today. G2Crowd currently tracks 42 of them. Leaders include Absolute Software, which has firmware-embedded persistence technology that enables endpoints to self-heal while providing an undeletable digital tether to every PC-based endpoint. Others include Malwarebytes for Business, CrowdStrike Falcon Endpoint Protection Platform, Cybereason Defense Platform, ESET PROTECT Platform and Ivanti Neurons, which uses artificial intelligence (AI)-based bots for self-healing, patching and protecting endpoints. Additionally, Microsoft Defender 365 takes its own approach to self-healing endpoints by correlating threat data from emails, endpoints, identities and applications. 
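The core loop behind the self-healing idea is simple: detect drift from a known-good baseline and restore the expected configuration automatically. The following is a deliberately simplified sketch; real platforms (firmware-embedded agents, patch bots) are far more involved, and the setting names here are hypothetical:

```python
# Sketch of a self-healing endpoint's config loop: compare the current
# device configuration against a known-good baseline and repair any
# drifted settings without human interaction. Settings are hypothetical.

BASELINE = {"firewall": "on", "disk_encryption": "on", "agent_version": "7.2"}

def heal(current: dict):
    """Return (corrected config, list of settings that were repaired)."""
    repaired = []
    fixed = dict(current)
    for key, expected in BASELINE.items():
        if fixed.get(key) != expected:
            fixed[key] = expected       # restore the baseline value
            repaired.append(key)        # log the drifted setting
    return fixed, repaired

# A drifted device: firewall disabled, security agent out of date.
drifted = {"firewall": "off", "disk_encryption": "on", "agent_version": "6.9"}
fixed, repaired = heal(drifted)
# "firewall" and "agent_version" are repaired; the result matches BASELINE.
```

In production this loop would run continuously on telemetry rather than a static dict, and the repair step would trigger patch deployment or agent reinstallation rather than a dictionary write.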
4) Automating patch management across endpoints reduces the risk of a breach
Security professionals spend just over a third of their time on patch management and related coordination across departments. In addition, just over half of security professionals, 53%, say that staying on top of critical vulnerabilities takes up most of their time. Of the many advances in this area by EPP vendors, Ivanti’s launch of an AI-based patch intelligence system is noteworthy for its unique approach to scaling patch management. Neurons Patch for Microsoft Endpoint Manager (MEM) is built using a series of AI-based bots to seek out, identify and update all patches across endpoints that need to be updated. Additional vendors providing AI-based endpoint protection include Broadcom, CrowdStrike, SentinelOne, McAfee, Sophos, Trend Micro, VMware Carbon Black, Cybereason and others.
5) Adopt a zero trust-based unified endpoint management (UEM) platform
Verizon’s Mobile Security Index for 2022 discovered a 22% increase in cyberattacks involving mobile and IoT devices in the last year. Advanced UEM platforms can provide automated configuration management and ensure compliance with corporate standards to reduce the risk of a breach. The most advanced platforms can protect employees’ devices without downloading and configuring agents, which is a significant time-saver for IT teams. CISOs continue to pressure UEM platform providers to consolidate their platforms and provide more value at lower costs. Gartner’s latest Magic Quadrant [subscription required] for UEM tools reflects CISOs’ impact on the product strategies at IBM, Ivanti, ManageEngine, Matrix42, Microsoft, VMware, Blackberry, Citrix and others. Ivanti and VMware were the only two vendors recognized by Gartner for their zero-trust capabilities. 
Gartner wrote in its Magic Quadrant update that “Ivanti continues to add intelligence and automation to improve discovery, automation, self-healing, patching, zero-trust security and DEX via the Ivanti Neurons platform.” This reflects the success Ivanti has been having with multiple acquisitions over the last few years. Through its series of acquisitions, including RiskSense, MobileIron, Cherwell Software and Pulse Secure, the company is looking to provide CISOs with the consolidated tech stacks they need to improve endpoint security and achieve their zero-trust objectives.
Getting endpoint security right
Going into 2023, CISOs will be under more pressure to consolidate tech stacks and improve visibility and control across all endpoints. It will be a challenge for many, as machine identities outnumber human identities by 45 times or more. Self-healing endpoints capable of shutting themselves down when an intrusion attempt is detected, then reconfiguring their system and agent software autonomously, reflect the future of endpoint security technology. Endpoints that rely on firmware to provide self-healing, resilience and an undeletable digital tether to every PC-based endpoint also provide valuable telemetry data, further improving visibility. This also enables ZTNA frameworks to identify every endpoint on a network, whether the device is connected or not. "
13,577
2,022
"How zero-trust architectures can prevent supply chain attacks | VentureBeat"
"https://venturebeat.com/security/how-zero-trust-architectures-can-prevent-supply-chain-attacks"
"How zero-trust architectures can prevent supply chain attacks
This article is part of a VB special issue. Read the full series here: Zero trust: The new security paradigm. Over the last few decades, global supply chains have become increasingly interconnected and complex. Organizations today depend on third parties to streamline operations, reduce costs and more. However, third parties also leave organizations vulnerable to supply chain attacks. Many attacks originate from compromised software or hardware. By adding malicious code to a target vendor’s trusted software, threat actors can attack all the vendor’s client organizations simultaneously. The risk of such attacks also increases from data leaks at the vendor’s end, their use of internet-connected devices, and reliance on the cloud to store data. A preventive measure organizations can lean on to mitigate supply chain attacks is to assume that no user or third party can be trusted. That means adopting zero-trust security into one’s supply chain security environment. 
Supply chain vulnerabilities
Supply chain attacks happen when one of your trusted vendors is compromised and access to your environment is gained either directly or through a service they provide. Maintaining security includes practices ranging from restricting access to sensitive data to assessing the risk associated with third-party software. There are several types of supply chain attacks, and response measures differ depending on whether the attack is performed through hardware, software or firmware. In most cases, third-party suppliers gain access to a company’s processes, data and “secret sauce,” creating risks for the success of the company they supply. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) recently released guides for developers and suppliers to make organizations aware of the importance of maintaining the security of supply chain software and the underlying infrastructure. CISA also warned that hackers and criminals could target government and industry through contractors, subcontractors and suppliers at all supply chain tiers. Such risks are manifold, and cyber risk is no less critical than operational risk or business risk, as a cyber event can trigger a whole cascade of consequences. Lorri Janssen-Anessi, director of external cyber assessments at BlueVoyant, says that cyberattackers tend to be opportunistic. It’s usually much easier to exploit a smaller link in the supply chain than to directly attack a larger company up the chain. “Often smaller companies, particularly companies whose business or services are not primarily technical, tend to have fewer resources focused on cybersecurity,” Janssen-Anessi told VentureBeat. 
“In some cases, the vulnerabilities are there because resources are focused on normal business operations and continuity [as opposed to] cyberdefense, which includes timely patching or mitigation. Therefore, continuously monitoring yourself and your supply chain for vulnerabilities is critical to move towards a preventative and proactive cybersecurity posture,” she said. Janssen-Anessi said that as the supply chain cybersecurity risk management space is still evolving, a recommended measure is to complement it with zero-trust architectures. These provide organizations with an additional layer of security when a component is compromised. “Every single internal or external engagement from or to your organization is a vulnerability. By implementing a zero trust-based supply chain architecture, one can acknowledge this and ensure that the organization is continuously proactive against cyberthreats,” said Janssen-Anessi.
Importance of zero trust for supply chain environments
Zero trust leverages the principle of least privilege (PoLP), where every user or device is given only the bare minimum access permissions needed to perform its intended function. By controlling the access level and type, PoLP reduces the cyberattack surface and helps prevent supply chain attacks. Previously, supply chain organizations followed a legacy approach to protection, i.e., a simple VPN connection to the organization. An issue with legacy protection approaches such as VPNs was the lack of a clear way to limit users to particular systems or aspects of the internal network without extensive customization. A VPN user would usually have full access to the internal network infrastructure and internal systems in that same network space. 
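The contrast between VPN-style broad access and least-privileged access can be sketched in a few lines. This is a minimal illustration of the PoLP idea with a hypothetical policy table (the vendor names and systems are invented for the example):

```python
# Sketch of least-privileged (PoLP) access, the opposite of a legacy VPN
# that grants full network access: each identity is mapped only to the
# specific systems assigned to it, and everything else is denied by
# default. Identities and systems below are hypothetical.

ACCESS_POLICY = {
    "vendor-logistics": {"shipping-api"},
    "vendor-payments": {"billing-db", "invoice-api"},
}

def can_access(identity: str, system: str) -> bool:
    # Default-deny: unknown identities and unassigned systems are refused.
    return system in ACCESS_POLICY.get(identity, set())

assert can_access("vendor-payments", "billing-db")
assert not can_access("vendor-logistics", "billing-db")  # lateral move denied
assert not can_access("unknown-party", "shipping-api")   # no implicit trust
```

A production ZTNA implementation would also factor in device posture, session context and time limits before each grant, but the default-deny lookup is the essential difference from the flat-network VPN model described above.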
“As zero trust inherently requires validation at every stage, the possibility of a single system getting compromised, and the attacker pivoting to other systems, is significantly decreased,” said Delbert Cope, chief technology officer at FourKites. “With zero-trust architecture, a user has access only to specific systems that are assigned to them, which gives a user only what they need for a specific period.” Zero trust also strengthens enterprise security through microsegmentation. Creating smaller segments around IT assets helps reduce the attack surface and supports implementing granular policy controls to protect the organization from breaches and restrict the lateral movement of attackers. “Global supply chains are the most disconnected they will ever be from this point forward, and involving more parties in the supply chain increases insider threats,” Sean Smith, cybersecurity and logistics expert at Denim, told VentureBeat. “Zero trust requires all parties only to have the access they need for the time they need it. This includes physical segregation with biometrics and access cards and virtual security like virtual private networks, VLANs and network segmentation. Zero trust can not only help eliminate supply chain attacks, but also reduce the impact of those attacks and contain the damage.” In supply chain attacks, the initial attack vector is rarely the attacker’s final objective. Instead, attackers are always looking to access other parts of the victim organization’s network by moving laterally across it. Sometimes, their goal is to corrupt targeted systems or steal data. The Target and SolarWinds attacks are both examples of supply chain attacks aimed at facilitating lateral movement across the victim’s network. Implementing zero trust can prevent attackers from moving laterally through the network and causing more damage. A zero-trust architecture considers trust a vulnerability or weakness. 
To eliminate this weakness, it continually identifies and authenticates every user, identity and device before granting them access. It also cloaks the organization’s network to limit its visibility and prevent threat actors from moving laterally across it. With zero trust, organizations can protect their networks from remote service session hijacks, restrict threat actors’ ability to access resources and prevent them from installing malware.
Key considerations for zero trust-based supply chain security
The term “zero trust” applies to supply chain security architectures in two ways: to companies that provide the architecture, and to the products and services themselves. Component producers and service providers should have robust security programs — i.e., zero-trust architectures — that protect the products’ integrity. Component suppliers and service providers must work together to ensure that their products fit comprehensively into customers’ zero-trust strategies. Daragh Mahon, EVP and chief information officer at Werner Enterprises, said that security experts need to look for viable AI and SaaS-based solutions already on the market to build a fundamental base for zero trust-based supply chains. “Building a zero-trust architecture with [software-as-a-service] SaaS removes the need for constant updates and patching, freeing [IT teams] up for other tasks and projects,” Mahon told VentureBeat. “Organizations must also understand that transitioning from a brick-and-mortar tech stack will take some time, and they won’t see change overnight. During such a transition, IT teams must ensure that all day-to-day business functions can continue as the new system is launched, which often means a brief period where both legacy and zero-trust systems are in play.” Mahon also said that implementing SaaS-based zero-trust solutions is less time-intensive and more sustainable than maintaining legacy brick-and-mortar counterparts. 
“With zero-trust architectures, leveraging AI/ML for resource access/data access/network access and implementing robust trust policies is the key to success. Especially for high-risk data or processes where the trust policies are analyzed and reviewed, audited and fine-tuned,” said Muralidharan Palanisamy, chief solutions officer at AppViewX. According to Janssen-Anessi, before implementing zero trust-based supply chains, organizations should consider doing the following:
Consider additional cyber-risk factors related to network/endpoint resource utilization, user install base, and popularity among user groups with privileged access, such as human resources, legal, IT and finance.
Continuously monitor the extended vendor ecosystem, using contextual analysis to prioritize zero tolerance and critical findings mitigation. Relying on questionnaires or point-in-time scans is insufficient to reduce risk and prevent compromise or lost production time.
Finally, employ platforms or solutions that proactively track how critical vendors address externally visible misconfigurations, and that will work with the vendors directly to reduce risk across their exposed attack surface.
Challenges, and a future of opportunities
Moty Jacob, CEO and cofounder of Surf Security, believes that the main challenge today is defining the maturity level of organizations’ supply chain management, and that organizations should consider taking security more seriously. “Process improvement needs to occur around two major aspects. Supply chain management must mature to the level of being collaborative and dynamic, and the risk management framework needs to be proactive and flexible,” he said. 
“Zero trust is critical to use if organizations have any remote workforce, especially if their apps are in the cloud.” Likewise, Kyle Black, security strategist at Symantec by Broadcom Software, said that currently, the most significant challenge is that zero trust forces already overburdened groups to work together to plan their governance structure before implementing tools. “In the future, a challenge will be the ever-evolving needs of the business, which is why planning and governance upfront is critical,” Black told VentureBeat. “Without a strong governance structure, each new technology will need to be reconsidered with [respect to] how it fits into an organization’s zero-trust model. Instead, that should be part of the decision-making process and not an afterthought.” Black added that automation would be key for supply chain risk management in the future. It will be the only way to scale. “Being able to analyze your data services and applications continuously against your organizationally accepted zero-trust architecture will help identify new threats quickly and understand the priority in which those should be addressed,” he said. “It will also drive better outcomes for security operations and engineering by ensuring they know at all times why they are doing what they are doing.” "
13,578
2,022
"How zero trust closes security gaps in multicloud tech stacks | VentureBeat"
"https://venturebeat.com/security/how-zero-trust-closes-security-gaps-in-multicloud-tech-stacks"
"How zero trust closes security gaps in multicloud tech stacks
This article is part of a VB special issue. Read the full series here: Zero trust: The new security paradigm. Mergers, acquisitions and private equity roll-ups combine companies to create new businesses, leading to more multicloud tech stacks and increased urgency to get zero trust right. Acquisitions nearly always lead to tech stacks being integrated and consolidated, especially in cybersecurity. As a result, nearly all CISOs have consolidation plans on their roadmaps, up from 61% in 2021. Ninety-six percent of CISOs also plan to consolidate their security platforms, believing that consolidating their tech stacks will help them avoid missing threats (57%) and reduce the need to find qualified security specialists (56%), while streamlining the process of correlating and visualizing findings across their threat landscape (46%). 
Cybersecurity vendors, including CrowdStrike, are achieving revenue growth by providing customers with a clear path to consolidating their tech stacks.
Why enterprises choose multicloud
Multicloud is the de facto standard for cloud infrastructure, with 89% of enterprises adopting multicloud configurations, according to Flexera’s 2022 State of the Cloud Report. The most common motivations for enterprises to take a multicloud approach include improved availability; best-of-market innovations; compliance requirements; bargaining parity in cloud provider negotiations; and avoiding vendor lock-in. Large-scale enterprises also look to gain greater geographical coverage of their global operations. CIOs tell VentureBeat that it’s necessary today to build a business case that shows how multicloud infrastructure spending will increase cloud adoption, improve cost savings and contribute to revenue gains. Boards of directors and C-level governance teams want to understand how spending on multicloud strategies will be secure, make economic sense and help improve the business’s resiliency and responsiveness.
Defining multicloud
Gartner’s definition says, “a multicloud strategy is the deliberate use of cloud services from multiple public cloud providers for the same general class of IT solutions or workloads — almost always IaaS and/or PaaS, not SaaS. Many organizations become ‘accidentally’ multicloud (through inadequate governance, M&A, or the like), rather than deliberately adopting a multicloud strategy.” Hyperscalers, including Amazon AWS, Microsoft Azure and Google Cloud Platform, offer full-stack support for Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS), as well as extensive developer support and future roadmaps reflecting AI and machine learning (ML) expertise. 
As a result, enterprises adopt and stay with multicloud infrastructure strategies in order to have access to the innovations hyperscalers are working on today. Developing the core set of skills needed to manage each hyperscaler is a continual challenge for many IT departments, however, as are the increased costs of a multicloud strategy resulting from reduced discounts.
Getting started with zero trust for multicloud tech stacks
CISOs tell VentureBeat that one of the best ways to assure the success of a zero-trust network access (ZTNA) framework is to first clarify for senior management and the board of directors where the boundaries of implementation are. Defining which hyperscaler partner will have responsibility for which area of the tech stack is table stakes. One of the best ways to accomplish this is using the Shared Responsibility Model. Many organizations rely on Amazon because of its clear approach to defining identity and access management (IAM). To create a ZTNA framework, organizations need to find IAM, PAM, microsegmentation and multifactor authentication (MFA) solutions that can traverse each hyperscaler’s cloud platform.
Zero trust must be baked in to deliver results
“Zero Trust requires protection everywhere — ensuring that some of the biggest vulnerabilities like endpoints and cloud environments are automatically and always protected,” Kapil Raina, vice president of zero trust, identity and data security marketing at CrowdStrike, told VentureBeat during a recent interview. “Since most threats will enter into an enterprise environment either via the endpoint or via a workload, protection must start there and then mature to protecting the rest of the IT stack.” Raina’s comments reflect how organizations can best approach securing multicloud tech stacks as part of a ZTNA framework. 
Initial steps include the following: Define the core requirements for an Identity Access Management (IAM) and Privileged Access Management (PAM) system that can span multiple hyperscalers. Don’t settle for the IAM and PAM each hyperscaler vendor provides, even if they promise it can close gaps in multicloud configurations. Cyberattackers innovate faster than enterprises and, in many cases, faster than cybersecurity vendors. Take advantage of the pressure CISOs are putting on vendors to consolidate IAM, PAM and other core apps on a common platform. The cloud has won the PAM market and is the fastest-growing platform for IAM systems. The majority, 70%, of new access management, governance, administration and privileged access deployments will be on converged IAM and PAM platforms by 2025. Reduce and eliminate emergency security projects to fix broken and inaccurate multicloud configurations. Acquired IT teams often get pulled into fire drills because integrations of multicloud tech stacks rarely go smoothly. Security misconfigurations can expose thousands of endpoints and lead to intrusions and breaches. Recent announcements by CrowdStrike, Google Cloud’s integration with Lacework and other developments underscore why cloud native application protection platforms (CNAPP) are needed today. Scott Fanning, senior director of product management, cloud security at CrowdStrike, told VentureBeat that the company’s approach to Cloud Infrastructure Entitlement Management (CIEM) enables enterprises to prevent identity-based threats from turning into breaches caused by improperly configured cloud entitlements across public cloud service providers. One of the key design goals is to enforce least-privileged access to clouds and provide continuous detection and remediation of identity threats. Consider expanding beyond the logging and monitoring apps each hyperscaler offers so you can get a 360-degree view of all network activity. 
On AWS, there are AWS CloudTrail and Amazon CloudWatch, which monitor all API activity. On Microsoft Azure, there’s Azure security logging and auditing and Azure Monitor. Leaders in cloud monitoring tools include AppDynamics, Datadog, New Relic, Dynatrace, Sumo Logic, PagerDuty and several others. Identify how an efficient audit can be performed on the multicloud tech stack early in the ZTNA roadmap. The more regulated the business, the more audits look at how well data is secured, especially in multicloud configurations. The Health Insurance Portability and Accountability Act (HIPAA), General Data Protection Regulation (GDPR) and the Payment Card Industry Data Security Standard (PCI DSS) all require ongoing audits, for example. Providing the reporting and audit histories required by these and other regulatory agencies needs to start with understanding how multicloud integration plans are defined. Building compliance in right at the start of a multicloud integration effort saves millions of dollars and thousands of hours of manual reporting effort by automating each regulatory agency’s unique reporting requirements. Multicloud tech stacks that include AWS instances don’t need an entirely new identity infrastructure. Quite the contrary. Creating duplicate identities increases cost, risk, overhead and the burden of needing additional licenses. Existing Active Directory infrastructures can be extended through various deployment options, each with its strengths and weaknesses. And while AWS provides key pairs for access to Amazon Elastic Compute Cloud (Amazon EC2) instances, its security best practices recommend using Active Directory or LDAP instead. Multicloud tech stacks are ‘in’ Multicloud tech stacks are becoming more commonplace as mergers, acquisitions and private equity roll-ups create new businesses by merging existing ones. 
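The API-activity monitoring described above, whether it comes from CloudTrail, Azure Monitor or a third-party tool, ultimately reduces to scanning event records for calls worth investigating. A minimal sketch over CloudTrail-style JSON records; the watchlist of "risky" calls and the sample events are assumptions for illustration:

```python
# Illustrative scan of CloudTrail-style event records for API calls that
# commonly signal misconfiguration or privilege abuse. The event shapes
# mimic CloudTrail's JSON fields; the risky-call list is an assumption.

RISKY_CALLS = {"PutBucketAcl", "AttachUserPolicy", "AuthorizeSecurityGroupIngress"}

def flag_risky(events):
    """Return (eventName, userName) for each event on the watchlist."""
    return [
        (e["eventName"], e.get("userIdentity", {}).get("userName", "unknown"))
        for e in events
        if e["eventName"] in RISKY_CALLS
    ]

events = [
    {"eventName": "GetObject", "userIdentity": {"userName": "app"}},
    {"eventName": "PutBucketAcl", "userIdentity": {"userName": "dev-1"}},
]
```

A real deployment would stream events from each cloud's logging API into one pipeline, which is the 360-degree view the article recommends.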
New businesses resulting from mergers, acquisitions and private equity roll-ups must enable smooth and rapid communication between departments to keep revenue moving. That’s why integrating tech stacks becomes a high priority. Closing the gaps between tech stacks needs to start with a solid ZTNA framework that delivers least privileged access to resources, treats every identity as a new security perimeter, and stops intrusion attempts without slowing down the company’s ability to get work done. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13579
2022
"How zero-trust methods thwart malicious hackers | VentureBeat"
"https://venturebeat.com/security/how-zero-trust-methods-thwart-malicious-hackers"
"How zero-trust methods thwart malicious hackers This article is part of a VB special issue. Read the full series here: Zero trust: The new security paradigm. The term “zero trust” has been around for more than a decade — but it’s a misnomer, many security experts say. “It implies that an organization does not trust their people,” said Heath Mullins, Forrester senior analyst. “It’s far from the case, it’s not the case at all. It’s about securing against malicious actors, period.” Rather, experts say, it should be referred to as “trust enough,” “trusting the right amount,” or “least privilege” — particularly when it comes to thwarting malicious insiders. “It’s giving people the right amount of trust and no more,” said Charlie Winckless, senior director analyst for Gartner — who goes so far as to call “zero trust” a “terrible name”. 
Ultimately, “it’s important that organizations look at the capability and not the buzzword that’s wrapped around it,” said Winckless. The increasing malicious insider threat There’s no question that insider threats are increasing: According to the Ponemon Institute , incidents have risen 44% over the past two years, with costs per incident up more than a third to $15.4 million. Furthermore, the time to contain an insider threat incident increased from 77 days to 85 days, leading organizations to spend the most on containment. Still, the term “malicious insider” — not unlike “zero trust” — is very often misunderstood. As Winckless explained, a malicious insider is anyone inside an organization who has access — or can easily get access — to information and then improperly use it. In the case of insider threats , this could be unintentional, he pointed out. In the first scenario, a user has access to an enormous amount of data simply because they need it to do their job. “They have the potential to abuse that for many reasons,” said Winckless. “That’s the hard case for a malicious insider.” The ability to get access, meanwhile, means that that access has been given even though a user doesn’t need it. Because, Winckless noted, from an organization perspective, it’s just easier to give access than to figure out what access a particular user needs. There are an enormous number of instances of “semi-malicious insiders,” said Winckless — that is, an employee taking proprietary data or other information with them when they leave, then using it for something else. Mullins agreed that “’malicious’ implies that it’s done on purpose,” whereas sometimes it can be more “benign.” Taking sales contacts or records, for instance because the user cultivated them and built up those relationships. “It’s not just what the threat is, but the motivation behind it,” said Mullins. 
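The gap Winckless describes, access granted because granting is easier than scoping, can be made visible by diffing the entitlements an identity holds against those it has actually exercised. A toy sketch with synthetic permission names:

```python
# Sketch of an "access creep" report: compare what an identity has been
# granted with what it has actually used, and surface never-used
# permissions as candidates for revocation. All data here is synthetic.

def unused_permissions(granted: set, used: set) -> set:
    """Permissions granted but never exercised: the least-privilege gap."""
    return granted - used

granted = {"crm:read", "crm:export", "hr:read", "billing:admin"}
used = {"crm:read"}
```

Running this over real access logs on a schedule is one way to catch the semi-malicious-insider scenario before data walks out the door.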
A delicate balance of privilege and restriction Combating malicious insiders is more a matter of strategy than technology, said Winckless: Providing the right trust to an individual based on identity and context. Zero trust, or least privilege, is best for those getting access to things they don’t need to get access to, he said. They can’t use a new password or force their way onto a system; they only see the things they need to do the job. The case of users having access to information they need to do their jobs is a little more complicated, he said. Thwarting them involves monitoring and looking for anomalies. For instance, all of a sudden, a user begins behaving differently: downloading things they normally don’t, looking at things they otherwise don’t, or storing certain data or large amounts of it. “It’s a reason to say ‘Hey, what’s going on?’ and start to do further investigation into what could be happening,” said Winckless. Doing this right means balancing complexity with security, he said. There’s a fine line to be walked when it comes to culture. “You’ve got to be granular enough to give people the right access without making it so that it’s unmanageably complicated,” he said. Organizations should implement controls that limit users to applications, and ensure that those controls are consistent and easy to implement wherever a user sits (whether in an office, at home or while in limbo at the airport). Network access control, he pointed out, while useful, only works in the office. When looking at tools, Winckless advised, organizations should ask questions such as: Does it help provide the right trust? Open up more trust? Have nothing to do with trust? Does it just have a zero-trust name on it? Mullins also underscored the importance of finding platform-agnostic third parties. The zero-trust phrasing has been “hijacked by vendors,” he said, so don’t just blindly implement a tool from vendor X. 
There are a lot of vendors out there, a lot of competition, and some will have most of what an organization needs, or “be adjacent with a slight overlap.” Also, don’t base least privilege on vendor definition: Create your own definition and identify what the most important aspects are for your organization, said Mullins. Implement tools and best practices — don’t throw up roadblocks In crafting and implementing a strategy and associated tools, the very first thing should be to perform an assessment, said Mullins. The lowest-hanging fruit is often privileged access management (PAM). This restricts what users can do because they have to go through a single port, “basically a man in the middle.” This is particularly critical with the C-suite, as they are a top target, he said. Also, organizations shouldn’t overlook their HR heads or local admins. “They’re running the business, they’re not always worried about security on their endpoint,” said Mullins. Another important tool is just-in-time access, which limits users’ access to predetermined periods of time, on an as-needed basis, he said. Other useful tools include session tracing and time-outs, and step-up authentication, which requires additional levels of authentication. Still, the no. 1 rule is transparency. “You’re not trying to create a roadblock,” said Mullins. When users have to do things too many times, it becomes a burden. They may create IT help desk tickets that backlog the department, or “they start to take shortcuts, find other ways to get around those verification prompts, or stay logged on for longer,” said Mullins. Are they who they say they are? An increasing conundrum with malicious insiders is today’s work-from-home landscape. Organizations are often hiring people that they’ve never met in person, Mullins pointed out, or that they’ve only corresponded with on Zoom calls. That person, or entity, could simply be onboarding to get a nation-state or a collective the information that they are paid to acquire, he said. 
It’s critical to vet and verify. Look for unique identifiers, he said. For instance, if someone is doing an interview and you’re hearing very scripted responses, ask off-script questions as simple as, “do you have pets?” or “what do you do for fun?” “If it doesn’t feel right, it’s probably not right,” Mullins said. He also pointed to the practice of requiring users to log in, have their faces scanned, then, with subsequent logins, applying artificial intelligence (AI) to compare features. Also, in the U.S., employees have Social Security cards or passports, but that could be entirely different if they’re from a different country. It’s a gray area, said Mullins, and the question that organizations should ask is: “What constitutes enough of a verification?” Culture: The best way to thwart malicious insiders Organizations have given a lot of privileges to a lot of users, whether they need them or not, said Winckless. “Taking away something that a user already had is always painful,” he said. Addressing that culture and avoiding the “zero trust” phrase can be a less threatening and more friendly approach. Because, frankly, people want to avoid working at a place where they don’t feel trusted, he said. Mullins agreed that it all comes down to the culture piece. Simply put, “If you treat people well, you’re less likely to have a malicious insider.” Organizations should reinforce to employees that it’s not about them not being trusted, but rather, “this is my stuff, you can’t touch my stuff unless you are vetted and verified.” And, it’s important to get the message out that it’s not just about protecting their own assets. “The organization that you work for has all kinds of info on you,” said Mullins. “Wouldn’t you want to protect that? I would.” 
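The just-in-time access Mullins recommends can be sketched as a grant that carries its own expiry and re-checks the clock on every use, so access disappears without an explicit revocation step. The names and the time window below are illustrative:

```python
# Sketch of just-in-time (JIT) access: a grant carries an expiry, and
# every check re-validates the clock, so access silently lapses when
# the window closes. Identity and resource names are invented.
from datetime import datetime, timedelta

class JITGrant:
    """A time-boxed access grant: valid only until its expiry passes."""

    def __init__(self, identity, resource, minutes):
        self.identity = identity
        self.resource = resource
        self.expires_at = datetime.utcnow() + timedelta(minutes=minutes)

    def is_active(self, now=None):
        # Re-check the clock on every access decision; no revocation
        # step is needed -- the grant simply stops matching.
        return (now or datetime.utcnow()) < self.expires_at
```

A PAM system built this way never leaves standing privileges behind: a contractor's 30-minute grant to a build server expires on its own.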
"
13580
2022
"The manufacturing industry's security epidemic needs a zero-trust cure | VentureBeat"
"https://venturebeat.com/security/the-manufacturing-industrys-security-epidemic-needs-a-zero-trust-cure"
"The manufacturing industry’s security epidemic needs a zero-trust cure This article is part of a VB special issue. Read the full series here: Zero trust: The new security paradigm. Manufacturers’ tech stacks and industrial control systems (ICS) were designed to deliver speed and transaction efficiency first, with security as a secondary goal. Nearly one in four attacks targeted manufacturers in the last year. Ransomware is the most popular attack strategy, and 61% of breaches targeted operational technology (OT)–connected organizations. IBM Security’s X-Force Threat Intelligence Index 2022 states that, “Vulnerability exploitation was the top initial attack vector in manufacturing, an industry grappling with the effects of supply chain pressures and delays.” Cyberattacks are a digital epidemic sweeping manufacturing, costing businesses millions in revenue and hours of lost production time. Manufacturing accounted for 68% of all industrial ransomware incidents in the third quarter of this year. 
On top of that, Dragos discovered that manufacturers suffered seven times more industrial ransomware incidents than the food and beverage industry. Forty-four percent of manufacturers had to temporarily shut their production lines down due to a cyberattack earlier this year. Why manufacturing is the top target Threat actors see supply chain attacks as ransom multipliers that can generate millions of dollars in just days. That’s because disrupting manufacturing supply chains strikes at the heart of a manufacturer’s ability to meet customer orders and grow revenue. Many manufacturers quietly pay the ransom because they have no other choice. Another reason manufacturers are a top target is that their tech stacks are often built on legacy ICS, OT and IT systems that were streamlined for production speed, shop floor efficiency and process control — with security often a secondary priority. Limited visibility across OT, IT, supply chain and partner networks is another primary reason manufacturers are getting breached so often. Trend Micro found that 86% of manufacturers have limited visibility into their ICS environments, making them an easy target for a wide variety of cyberattacks. A typical ICS is designed for process optimization, visibility and control. As a result, many have limited security in place. Most ICS systems rely on air gaps as the first line of defense. Ransomware attackers are using USB drives to deliver malware, jumping the air gaps that industrial distributors, manufacturers and utilities rely on for that first line. Additionally, 79% of USB attacks can potentially disrupt the operational technologies (OT) that power industrial processing plants, according to Honeywell’s Industrial Cybersecurity USB Threat Report, 2021. 
The Cybersecurity and Infrastructure Security Agency (CISA) issued an alert earlier this year warning of attacks targeting ICS and SCADA devices. The average damage from a manufacturing breach is $2.8 million. Eighty-nine percent of manufacturers who have suffered a ransomware attack or breach have had their supply chains disrupted. Many manufacturers targeted by ransomware attacks have either had to temporarily cease operations to restore data from backup, or chosen to pay the ransom. They include Aebi Schmidt, ASCO, COSCO, Eurofins Scientific, Norsk Hydro, Titan Manufacturing and Distributing, and many others who decide to remain anonymous. A ransomware attack on A.P. Møller-Maersk, one of the world’s largest shipping networks, is considered the most devastating cyberattack in history. Pursuing zero trust: A must for manufacturers The manufacturing industry must overcome the misconception that Zero Trust Network Access (ZTNA) frameworks are expensive, time-consuming and technologically challenging to implement. Overcoming it starts with creating a business case for zero trust, with multicloud configurations factored in. When choosing a solution, IT must be aware that cybersecurity vendors sometimes misrepresent their zero-trust capabilities, often confusing potential clients about what’s needed and what the vendor’s offering can do. NIST provides a series of cybersecurity resources for manufacturers. Start with multifactor authentication (MFA) across every endpoint Improving endpoint security is crucial for manufacturers, as every transaction they rely on to receive and fulfill orders passes through endpoints. Forrester’s report The Future of Endpoint Management defines the six characteristics of modern endpoint management challenges. Andrew Hewitt, the report’s author, told VentureBeat that when clients ask what’s the best first step they can take to secure endpoints, he tells them that “the best place to start is always around enforcing multifactor authentication. 
This can go a long way toward ensuring that enterprise data is safe. From there, it’s enrolling devices and maintaining a solid compliance standard with the UEM tool.” ZTNA frameworks need to start with endpoints Unfortunately, most mid-tier manufacturers’ IT staffs are already short-handed, making defining and implementing a ZTNA framework a challenge. A business case to pursue ZTNA-based endpoint security must be based on measurable, quantifiable outcomes. Cloud-based endpoint protection platforms (EPPs) provide an efficient on-ramp for enterprises looking to get started quickly. EPPs also increasingly support self-healing endpoints. Self-healing endpoints shut themselves off; re-check all OS and application versioning, including patch updates; and reset themselves to an optimized, secure configuration. All these activities happen without human intervention. Absolute Software, Akamai, CrowdStrike, Ivanti, McAfee, Microsoft 365, Qualys, SentinelOne, Tanium, Trend Micro and Webroot have delivered self-healing endpoints to enterprises today. A manufacturer’s security perimeter is identities and data Every identity is a new security perimeter in the supply chain, across sourcing networks, service centers and distribution channels. Manufacturers need to adopt a ZTNA mindset that sees every human and machine identity outside their firewalls as a potential threat surface. That’s why, for manufacturers just starting with a ZTNA framework, finding a solution with Identity and Access Management (IAM) integrated as a core part of the platform is a good idea, and it’s essential to get IAM right early. Leading cybersecurity providers that offer an integrated platform include Akamai, Fortinet, Ericom, Ivanti and Palo Alto Networks. Ericom’s ZTEdge platform combines ML-enabled identity and access management, ZTNA, micro-segmentation and secure web gateway (SWG) with remote browser isolation (RBI). 
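Microsegmentation, one of the capabilities bundled into platforms like these, can be pictured as an explicit allowlist of flows between workload segments: anything not listed, including lateral movement from a compromised segment, is denied. A toy model with invented segment names typical of a plant network:

```python
# Toy model of microsegmentation: traffic between workload segments is
# denied unless the (source, destination) pair appears in an explicit
# allowlist, blocking lateral movement from a compromised segment.
# Segment names are invented for illustration.

ALLOWED_FLOWS = {
    ("hmi", "plc-controllers"),   # operator stations may reach PLCs
    ("historian", "erp"),         # process data may flow to the ERP system
}

def may_connect(src: str, dst: str) -> bool:
    """Deny by default; direction matters, so flows are not symmetric."""
    return (src, dst) in ALLOWED_FLOWS
```

Because flows are directional and deny-by-default, a foothold on, say, a guest network cannot reach the PLC segment even though operator stations can.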
Remote browser isolation (RBI) solves manufacturers’ challenges in securing internet access RBI is a perfect solution for manufacturers pursuing a ZTNA-based approach to protecting every browser session from intrusions and breach attempts. RBI doesn’t force an overhaul of tech stacks; it protects them, taking a zero-trust security approach to browsing by assuming no web content is safe. Leaders in RBI include Broadcom, Forcepoint, Ericom, Iboss, Lookout, NetSkope, Palo Alto Networks and Zscaler. Ericom is noteworthy for its approach to zero-trust RBI by preserving the native browser’s performance and user experience while hardening security and extending web and cloud application support. The future of zero trust in manufacturing Cyberattackers have learned to target manufacturing businesses for maximum impact, asking for millions of dollars in ransom payments to return data and operable systems. Locking up a supply chain with ransomware is the payout multiplier attackers want because manufacturers often pay up to keep their businesses operating. That’s why the manufacturing industry needs to consider how to move quickly on zero trust. Implementing a ZTNA framework doesn’t have to be expensive or require an entire staff. The resources listed in this article are an excellent place to start. Gartner’s 2022 Market Guide for Zero Trust Network Access is another valuable reference that can help define guardrails for any ZTNA framework. With every identity now a new security perimeter, manufacturers must make ZTNA a priority going into 2023. 
Resources mentioned in this article: NIST’s series of cybersecurity resources for manufacturers Gartner’s 2022 Market Guide for Zero Trust Network Access Forrester’s The Future of Endpoint Management (defines modern endpoint management challenges) CISA’s alert on attacks targeting ICS and SCADA devices "
13581
2022
"Why enterprises are getting zero trust wrong | VentureBeat"
"https://venturebeat.com/security/why-enterprises-are-getting-zero-trust-wrong"
"Why enterprises are getting zero trust wrong This article is part of a VB special issue. Read the full series here: Zero trust: The new security paradigm. With remote work exploding amid the COVID-19 pandemic, zero trust has become a security process that enterprises depend on to protect hybrid working environments. Yet while so many organizations are looking to embrace zero-trust networking, many are getting it wrong, implementing limited access controls or turning to “zero trust in a box” solutions. Research shows that, according to one report, 84% of enterprises are implementing a zero-trust strategy — but 59% say they don’t have the ability to authenticate users and devices on an ongoing basis and are struggling to monitor users post-authentication. 
In addition, Microsoft notes that while (according to another report) 76% of organizations have started implementing a zero-trust strategy, and 35% claim to have it fully implemented, those claiming to have achieved full implementation admit they haven’t finished implementing zero trust steadily across all security risk areas and components. Although these may seem like small oversights, they can increase an organization’s exposure to risk significantly. A recent IBM report found that 80% of critical infrastructure organizations don’t adopt zero-trust strategies, which increased their average data breach costs by $1.17 million compared to those enterprises that do. False zero-trust promises and vendor lingo One of the most significant reasons that enterprises are getting zero trust wrong is that many software vendors use marketing that misleads them, not just about what zero trust is, but how to apply it, and whether certain products can implement zero trust. All too often, these marketing practices trick CISOs and security leaders into thinking zero trust can be purchased. “There’s a couple of mistakes a lot of people make in zero trust. First, and probably most common too, is approaching zero trust as something you can buy, a situation abetted by many vendors using the term in their marketing whether it applies to the product or not,” said Charlie Winckless, a senior analyst at Gartner. That being said, Winckless does note that there are legitimate solutions you can buy to lay the foundation for zero-trust architecture, such as zero-trust network access (ZTNA) and microsegmentation products. At the same time, Winckless warns enterprises about falling into the trap of trying to apply zero trust at too granular a level at the behest of software vendors. 
“Second (and again, I think a lot of the way vendors are latching onto the term) is trying to push too much security into zero trust. Fundamentally, Gartner thinks of zero trust as replacing implicit trust with adaptive explicit trust. If you push too much into it, then it becomes impossible to achieve well,” Winckless said. Getting away from a quick-fix mentality The reality of zero-trust adoption is that it’s a journey and not a destination. There’s no quick fix for implementing zero trust because it’s a security methodology designed to be continuously applied throughout the environment to control user access. “Organizations that get zero trust wrong are the ones looking for a quick fix or silver bullet. They also tend to look to a set of products to get them zero trust. They fail to understand or don’t want to acknowledge that zero trust is a strategy, it is an information security model,” said Baber Amin, COO of Veridium. Amin added, “Products can and do help achieve zero trust, but they need to be applied correctly. It’s just like purchasing the most expensive lock, which does not do anything if the door itself is not properly reinforced.” Amin also noted some of the most common mistakes organizations make besides confusing zero-trust strategy with product offerings. These mistakes include: failure to define proper access control policies to enforce the principle of least privilege (PoLP); failure to monitor access creep; failure to implement multifactor authentication; failure to classify and segment data; lack of transparency over “shadow IT”; and overlooking the user’s experience. To build a successful zero-trust strategy, security teams must be able to do more than continually authenticate users and devices. They must also monitor those users and devices post-authentication; microsegment their networks; and implement controls across on-premise and cloud environments to secure access to data at the application level. 
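The post-authentication monitoring these teams need can be thought of as a per-request trust decision: each request is re-scored against risk signals and may be allowed, forced through step-up authentication, or denied mid-session. A simplified sketch; the signals and thresholds are assumptions, not any product's logic:

```python
# Sketch of continuous, post-authentication evaluation: instead of
# trusting a session once at login, each request is re-scored against
# risk signals and can trigger step-up authentication or a denial
# mid-session. Signals and thresholds are illustrative assumptions.

def evaluate_request(signals: dict) -> str:
    """Return 'allow', 'step_up' (re-authenticate) or 'deny' per request."""
    score = 0
    if not signals.get("device_compliant", False):
        score += 2   # non-compliant device posture
    if signals.get("new_location", False):
        score += 1   # session moved to an unfamiliar location
    if signals.get("bulk_download", False):
        score += 2   # unusually large data pull
    if score >= 4:
        return "deny"
    if score >= 2:
        return "step_up"
    return "allow"
```

The point is architectural: authentication is not a gate passed once but a decision repeated on every request, which is exactly the ongoing monitoring the surveyed organizations say they lack.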
Over-reliance on legacy infrastructure Making the zero-trust journey is often easier said than done, since many enterprises are operating in environments with outdated and inflexible legacy infrastructure. This makes it more difficult to manage user access at speed. Over-reliance on legacy infrastructure is a well-recognized barrier to zero-trust adoption. For instance, a survey of 300 federal IT and program managers found that 58% said the biggest challenge to implementing zero trust is rebuilding or replacing existing legacy infrastructure. As a result, adopting zero trust is as much about undergoing digital transformation and replacing legacy infrastructure as it is about implementing new security controls and applying the principle of least privilege throughout the environment. “Traditionally organizations have always been behind the ball when it comes to adopting a ‘security first’ environment, and have purposely stuck with legacy models in order to cut costs on CIAM/IAM infrastructure [and] ensure users are not ‘burdened’ with extra authentication when accessing sites, files, etc., which may cause bad [user] experience or slow down overall productivity,” said Charles Medina, security engineer at Token. Organizations that need to deploy new tools to enable their zero-trust journeys also need to make sure that they’re training employees how to use the new solutions effectively. “The worst is when an organization deploys great tools that help with pushing a zero-trust model, but either aren’t trained in a proper deployment due to cost or simply don’t take the environment seriously,” Medina said. Lack of executive alignment Finally, achieving the buy-in necessary to undergo effective digital transformation rests on the ability of CISOs and security leaders to present zero-trust adoption as not just a security issue, but a business issue. CISOs need buy-in from other key stakeholders if they are to replace underlying legacy infrastructure and applications. 
After all, without significant investment in digital transformation, security teams won’t have the tools to implement basic access control and authentication models to manage and monitor user access. “Deployment is a step-by-step process which starts with developing and socializing a strategy with the business and establishing a governance framework which engages stakeholders in the change initiative — not just the CIO and CISO teams, but those business units who may be impacted by the implementation,” said Akhilesh Tuteja, global cybersecurity practice leader at KPMG. It’s critical that CISOs highlight the potential cost savings of going zero trust. They might, for instance, highlight Forrester research that illustrates how organizations that adopt Microsoft’s zero-trust solutions can generate a 92% return on investment (ROI) and a 50% lower chance of a data breach. This could help make the business case for investing in zero-trust controls. However, even with the support of other key stakeholders, zero trust isn’t a one-time effort, but an ongoing process. “At every stage in the process, there is potential for missteps and many surprises. Few businesses understand their IT estate, and quite how the various systems and applications interact. As you implement segregation and new access controls, things will break. Unexpected dependencies will be discovered, with surprising data flows and long-forgotten applications,” Tuteja said. Continuous improvement No matter how far along an enterprise is in its zero-trust journey, CISOs and security leaders can reduce the chance of making mistakes by viewing zero trust as a continual process, and committing to making incremental improvements to this process. 
Taking simple steps like making an inventory of assets that need to be protected, then deploying identity and access management (IAM) and privileged access management (PAM), can help to build zero trust from the ground up and develop a cultural mindset of continuous improvement. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13582
2022
"Why Kubernetes security challenges call for a zero-trust strategy | VentureBeat"
"https://venturebeat.com/security/why-kubernetes-security-challenges-call-for-a-zero-trust-strategy"
"This article is part of a VB special issue. Read the full series here: Zero trust: The new security paradigm. Zero trust is a trending security paradigm being adopted by some of the world’s biggest and most technically advanced organizations, including Google, Microsoft and Amazon Web Services (AWS). The approach finds its fit in virtually every technology platform and infrastructure, and Kubernetes is no exception. Across industries, there’s omnipresent pressure to deliver software that can perform faster, more efficiently and at a grander scale. Looking into robust portability and flexibility, many IT organizations have turned to Kubernetes to help them efficiently meet constantly evolving market demands. The Kubernetes community has been actively discussing zero trust for several years as a vital component of an end-to-end encryption strategy. 
Service mesh providers are promoting essential practices (such as mTLS and certificate key rotation) to make it easier to implement zero-trust architectures. As a result, organizations today are working towards implementing robust zero trust in applications at scale. Although using Kubernetes is an excellent option for enterprises that want to move more effectively and offer contemporary apps at scale, its relative newness and dynamic operating paradigm make it a potential target for security vulnerabilities if suitable measures are not implemented. Furthermore, with malicious parties continuously on the hunt for security flaws, even firms with extensive Kubernetes knowledge have faced data breaches. This also presents significant security challenges to teams who need to know how Kubernetes networking and security differ from traditional IT and infrastructure systems. Security challenges in Kubernetes While Kubernetes is a powerful solution for IT organizations to deliver their software efficiently and at scale, it is not without its security challenges and vulnerabilities. For one, Kubernetes is a relatively new system, which makes it attractive prey for cyberattackers. This is compounded by its operating model’s dynamic nature, which can easily leave room for bad actors to infiltrate if proper security measures are not taken. According to a recent report by the Shadowserver Foundation, 380,000 open Kubernetes API servers were found exposed on the internet this year alone. While these servers were only identified as exposed and not attacked, the figures indicate the severity of the vulnerability and its potential danger to API servers. Salt Security’s 2022 State of API Security revealed that 34% of examined enterprises have no API security strategy, even though 95% had their API security compromised in the last 12 months. 
“As more teams rely on Kubernetes to manage and deploy their applications, the risk of insecure access controls and segmentation increases,” Sam Rhea, VP of product at Cloudflare, told VentureBeat. Rhea said that attackers who gain access to the workloads being managed in a Kubernetes deployment can either take down entire services and applications or, in a worst-case scenario, use their privileged access to elevate their own permissions and reach sensitive data that the Kubernetes workloads can access. “Everything from how the management interfaces are accessed, where authentication and authorization in service-to-service communications occur, to the default-deny controls that must be put in place for east-west traffic within the environment, zero-trust principles are essential to secure Kubernetes deployments,” he said. The essence of combining zero trust with Kubernetes Container-based cloud deployments have recently shown rapid growth and adoption in production environments. According to a report by Markets and Markets , the global application container market is expected to grow from $1.2 billion in 2018 to $4.98 billion by 2023, at a compound annual growth rate of 32.9% during the forecast period. This growth is due to their ease of use in deploying streamlined and secure infrastructure, likely to be fueled by the increasing number of container orchestration and container security services deployed in enterprises globally. Kubernetes is one of the management systems leading the way, thanks to its flexibility, scalability and automation. 
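Rhea's point about default-deny controls for east-west traffic maps directly onto Kubernetes NetworkPolicy objects. A minimal sketch, assuming a hypothetical `payments` namespace and app labels: first deny all ingress and egress for every pod in the namespace, then explicitly allow one flow:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments          # hypothetical namespace
spec:
  podSelector: {}              # empty selector = every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-api        # hypothetical workload label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only this workload may connect
      ports:
        - protocol: TCP
          port: 8443
```

Note that NetworkPolicy is enforced by the cluster's CNI plugin, so a plugin that supports it (e.g., Calico or Cilium) must be installed for the deny to take effect.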
In August 2020, the National Institute of Standards and Technology (NIST) released a whitepaper defining zero-trust architecture (ZTA) and exploring “deployment models and use cases where zero trust could improve an enterprise’s overall information technology security posture.” Since then, various government agencies, including the Cybersecurity and Infrastructure Security Agency (CISA), have released several documents to guide zero-trust implementation, including a maturity model to help developers understand the journey to full zero-trust deployment. In a zero-trust model, nothing and no one is trusted. Instead, each element at each layer is tested and authenticated separately. When technological assets, apps or services connect and exchange data, the connection is routed through a specific agent that authenticates all parties and grants them access through policy-based rights. Zero-trust systems operate at every level by adhering to a least-privilege rule: denying access to all parties save those explicitly authorized for a particular resource. Such a system is particularly crucial for cloud-native apps and infrastructure, as constantly validating privilege and identity is not only helpful but a security necessity. U.S. government on board with zero-trust security The zero-trust security model has grown in importance to the point where even the United States federal government has taken notice. The White House recently issued a memorandum outlining a national zero-trust strategy that requires all U.S. federal agencies to meet a specific zero-trust security standard by the end of fiscal year 2024. The Department of Defense established a zero-trust reference architecture. The National Security Agency also published a hardening guide that describes best practices for Kubernetes. 
Zero trust can help strengthen Kubernetes’ security posture and prevent attacks from internal and external threats by instituting the requirements above for users, programs and process requests to access pods. Arun Chandrasekaran, a VP analyst at Gartner, says that augmenting the native security mechanisms of Kubernetes distributions and public cloud Kubernetes services with container security tooling is highly critical for today’s work processes. “Kubernetes’ inherent complexity often leads to outdated versions and misconfiguration by organizations, making clusters susceptible to compromise,” said Chandrasekaran. “Hence, a zero-trust architecture that incorporates many aspects, such as adjustments for distribution and managed-provider uniqueness, continuous delivery considerations, cluster controls and augmentations with third-party tooling such as image scanning and workload protection, is critical to use.” The power of the service mesh A service mesh is one of the most straightforward approaches to addressing zero-trust networking in Kubernetes. The service mesh harnesses Kubernetes’ strong “sidecar” paradigm, in which platform containers can be dynamically deployed alongside application containers at deployment time as a late binding of operational functions. Service meshes use this sidecar strategy to infuse proxies into an application pod at runtime and connect these proxies to handle all incoming and outgoing traffic. This enables the service mesh to offer capabilities independent of the application code. “Implementing a service mesh (e.g., Istio) is a vital key to implementing zero trust in Kubernetes,” Abhay Salpekar, vice president of cloud operations and platform at Anomali, told VentureBeat. Salpekar said that service meshes can now deliver features outside of the application, and this decoupling allows security staff to work independently of developers. 
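To make Salpekar's Istio example concrete, here is a sketch of what "defining the auth policies" for a mesh might look like: one resource enforcing strict mTLS for every workload in a namespace, and one authorizing only a single workload identity to call the API. The namespace, labels and service-account name are hypothetical:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments          # hypothetical namespace
spec:
  mtls:
    mode: STRICT               # reject any plaintext (non-mTLS) traffic
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: api-allow-frontend
  namespace: payments
spec:
  selector:
    matchLabels:
      app: payments-api        # hypothetical workload label
  action: ALLOW
  rules:
    - from:
        - source:
            # SPIFFE-style workload identity of the caller's service account
            principals: ["cluster.local/ns/payments/sa/frontend"]
      to:
        - operation:
            methods: ["GET", "POST"]
```

Because an ALLOW policy exists on the workload, any request not matching a rule is denied, which is the deny-by-default behavior zero trust calls for.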
According to him, this separation is a best practice, as both groups will still be working towards a common goal of a secure but feature-rich app. “Once installed and active, the auth policies for the service mesh must be defined, updated and evaluated for proper operation,” he said. “To leverage Kubernetes in a zero-trust environment, you can also consider using the secure production identity framework for everyone (SPIFFE), which provides authentication capabilities for workloads. Kubernetes also offers native tools that allow you to monitor your network and automate the creation of rules and policies.” Other best practices and key pillars Another advantage of using zero trust for Kubernetes architectures is that all microservices are separately validated for static and dynamic security and utilize zero-trust principles to protect themselves and each other. “Zero trust can aid in controlling access of users and external applications to the microservices when included in Kubernetes,” said Chalan Aras, risk and financial advisory managing director, cyber product and services at Deloitte. “This access is structured as a set of application programming interfaces (API) and user gateways that employ zero-trust principles around identity and continuous authorization to ensure the long-term security of the microservices within the Kubernetes cluster,” he said. Aras believes adhering to fundamental zero-trust principles should be the key practice for establishing and maintaining end-to-end zero trust in Kubernetes. The zero-trust chain starts from each microservice and extends to the individual user or external application API boundary. In his opinion, key practice elements should include the following:
- Building a secure service mesh for microservice communications while blocking all other communications for microservices. This ensures that all network flows are monitored and access to services is managed via proxies and access gateways. 
- Utilizing user, API and application-assigned identities that can be verified and continuously authorized based on behavioral analysis to control access.
- Implementing controls for policy checking through tools such as cloud security posture management and orchestration to ensure that policies applicable to the cluster of microservices are consistently implemented as microservices are added, modified or removed over the lifecycle of the application.
Future challenges and opportunities Daniel Thanos, head of Arctic Wolf Labs, said that all containers need to advertise and enforce a security posture attestation policy that can be verified by appropriate tooling before any access is granted. “As with all cloud/devops-oriented systems, the key challenge is automating these practices/tooling and shifting them left while making them a first-order artifact of how developers are creating the software/system,” Thanos told VentureBeat. “The current biggest challenge to implementing such architectures is that there are no easy off-the-shelf solutions. There is also a lack of standards to allow for the interoperability of disparate systems in this area,” he said. “Zero trust is still a largely proprietary domain in this area and only tends to practically work in closed ecosystems, which defeats the purpose of building loosely coupled distributed systems/web service-based applications over the internet.” “Organizations often tend to ignore the use of monitoring and alerting systems capable of understanding the difference between what is permitted to occur and what is actually occurring,” said Ryan Berg, engineering fellow at Alert Logic. “I find that the challenge is not often in the platform — Kubernetes, Serverless, [software-as-a-service] SaaS etc. — but in an organization’s ability to analyze requirements regardless of platform. If you can correctly understand what is essentially needed, the foundation of a Kubernetes deployment is a realistic objective,” he said. 
Likewise, Aras feels that future challenges for zero trust-based Kubernetes architectures include establishing controls that apply to well-established environments such as hyperscaler clouds and highly-distributed edge computing, where the cost of additional infrastructure and potentially less-reliable networks may create gaps that need to be addressed through new solutions. “As greater volumes of edge computing are required for real-time services and IoT, the power of Kubernetes in highly distributed environments is going to have to scale to meet the demands of cooperating services,” he said. “Zero trust-based services in Kubernetes today, scaled and optimized for large deployments, are going to be essential for application environments of the future.” "
13583
2022
"Why zero trust needs to live on the edge | VentureBeat"
"https://venturebeat.com/security/why-zero-trust-needs-to-live-on-the-edge"
"This article is part of a VB special issue. Read the full series here: Zero trust: The new security paradigm. Edge computing’s diverse platforms defy easy consolidation into a single security stack. This leaves networks vulnerable to endpoint attacks they never see coming. Yet, edge and IoT platform providers have only recently moved away from the “trust but verify” philosophy and begun instead “designing in” technology that treats every endpoint and identity as a new security perimeter. The truth is, most edge and IoT platforms used today weren’t designed with enough security to withstand endpoint attacks. CISOs struggle to integrate these platforms into a single security stack because legacy edge and IoT platforms are designed to lean on server and operating system security. Interdomain trust relationships that don’t enforce least privileged access by account or resource leave wide swaths of endpoints vulnerable to intrusion and breach attempts. 
To avert devastating breaches, CISOs need to secure edge computing and IoT platforms across the full stack they rely on. Hardware, operating system, app platform, data, network security — enterprises need to look at how zero trust can meet the challenge of securing complete tech stacks for edge computing and IoT networks. Hyperscalers are competing to secure edge and IoT computing Amazon Web Services (AWS) for the Edge, Microsoft Azure Stack Edge and Google Cloud Platform (GCP) Distributed Cloud are each focusing R&D on helping enterprises solve edge computing, IoT and cybersecurity challenges. Of the three, AWS leads the market in defining how IoT can contribute to a zero-trust network access (ZTNA) framework by prioritizing machine identities as a core part of any organization’s zero-trust security strategy. At AWS re:Invent 2022 last year, AWS launched IoT ExpressLink. AWS designed this noteworthy cloud service to fast-track new IoT devices through devops cycles, then release them with AWS IoT Device Defender integrated. AWS also continues to make improvements to AWS IoT Greengrass, adding features asked for by customers who want to automate patch management at scale across fleets of IoT and network devices. AWS contends that standardizing its cloud platform for edge and IoT device management and security gets CISOs and security teams closer to their single-stack goal of securing all devices. One of the main reasons AWS has such a strong leadership position securing edge and IoT devices is how complementary Amazon’s zero-trust vision is to the NIST 800-207 architecture standard. As a result, AWS customers who use ExpressLink and Greengrass as part of their ZTNA framework can secure machine identities of each edge, IoT and IIoT sensor to the operating system and, if needed, the kernel level. 
Getting started designing zero trust into edge and IoT networks “Zero trust is being considered or deployed by most enterprises, so the debate on the need for zero trust is over; however, well over half will fail to see the benefits,” Kapil Raina, vice president of zero trust, identity and data security marketing at CrowdStrike told VentureBeat in a recent interview. “To overcome these challenges, enterprises must operationalize and make zero trust frictionless with a single platform and single sensor architecture — and that means endpoints, workloads and other technology areas.” Gartner’s 2022 Market Guide for Zero-Trust Network Access is a valuable reference for learning about zero-trust security and what considerations go into creating a ZTNA framework. Hyperscalers have the advantage of providing an integrated platform that includes edge, IoT and zero-trust security apps and tools. However, many organizations still face the challenge of securing edge and IoT endpoints on legacy tech stacks. The following are areas where organizations grappling with multiple diverse edge and IoT tech stacks can start. Make IAM and PAM priorities on the ZTNA roadmap Most, if not all, legacy edge and IoT platforms were not designed to support identity access management (IAM) and privileged access management (PAM) systems, including securing credentials and administrative passwords. As a result, there was a 34% increase in security vulnerabilities for IoT in the second half of last year alone. With cyberattackers focusing on how to take control of IAM and PAM servers, securing these two systems needs to be a priority. Edge and IoT sensor identities: Moving targets to protect As edge, IoT and IIoT sensors and their supporting networks grow more complex, it’s increasingly challenging to have a unified IAM strategy across all human and machine identities. 25% of security leaders say the number of identities they’re managing has increased by a factor of 10 or more in the last year. 
Furthermore, 84% of security leaders say the scope of identities they’re managing has doubled in the last year. Forrester’s estimation is that machine identities (including bots, robots and IoT) grow twice as fast as human identities on organizational networks. Design zero-trust frameworks to authenticate mobile edge, IoT and IIoT devices Mobile endpoints that are essential in logistics, supply chains, warehouse management and strategic sourcing are one of the fastest-growing threat vectors. Gaining visibility and control across mobile devices needs to start with a unified endpoint management (UEM) platform capable of delivering device management capabilities that can support location-agnostic requirements. These requirements include cloud-first OS delivery, peer-to-peer patch management and remote support. CISOs are looking at how a UEM platform can help solve their tech stack challenges while improving users’ experiences with endpoint detection and response (EDR). Gartner’s latest Magic Quadrant for Unified Endpoint Management Tools defines IBM, Ivanti and VMware as market leaders. Gartner observed, “Ivanti Neurons for Unified Endpoint Management is the only solution in this research that provides active and passive discovery of all devices on the network, using multiple advanced techniques to uncover and inventory unmanaged devices. It also applies machine learning (ML) to the collected data and produces actionable insights that can inform or be used to automate the remediation of anomalies.” “Designing in” zero trust needs to be continuous to succeed Amazon continues to set a quick pace of innovation in extending its AWS platform into edge and IoT management, zero-trust security and device monitoring. For enterprises looking to migrate workloads to the cloud and launch edge- and IoT-based strategies, hyperscalers are making convincing cases that their approaches provide the necessary visibility and control. 
For enterprises that are not ready to move to an entirely cloud-based platform, or are deeply invested in their current tech stacks, pursuing a zero-trust strategy needs to start with IAM and PAM securing endpoints. Getting IAM and PAM right early when creating a ZTNA framework is key to enforcing least privileged access at the device and resource levels. One more point to note: Edge and IoT networks are becoming self-healing, further extending their ability to enforce least privileged access. Srinivas Mukkamala, chief product officer of Ivanti, told VentureBeat that “automation and self-healing improve employee productivity, simplify device management and improve security posture by providing complete visibility into an organization’s entire asset estate and delivering automation across a broad range of devices.” "
13584
2022
"Zero trust is too trusting: Why ZTNA 2.0 won't be | VentureBeat"
"https://venturebeat.com/security/zero-trust-is-too-trusting-why-ztna-2-0-wont-be"
"This article is part of a VB special issue. Read the full series here: Zero trust: The new security paradigm. The concept of zero trust dates back to 2009, when Forrester analyst John Kindervag popularized the term and eliminated the concept of implicit trust. But it wasn’t until the COVID-19 pandemic that adoption began to pick up steam. Okta research finds that the percentage of companies with a defined zero-trust initiative more than doubled from 24% in 2021 to 55% in 2022, coinciding with the increase in remote and hybrid working environments during the pandemic. But what is zero trust, exactly? 
According to Kindervag in a blog post, zero trust “is framed around the principle that no network user, packet, interface, or device — whether internal or external to the network — should be trusted.” Under this approach, “every user, packet, network interface, and device is granted the same default trust level: zero.” Zero trust effectively means that all users have to authenticate before they can access enterprise apps, services, resources or data. It’s a concept designed to prevent unauthorized threat actors and malicious insiders from exploiting implicit trust to gain access to sensitive information. However, there are some who believe that the concept of zero trust is incomplete and requires a new iteration in the form of zero-trust network access 2.0 (ZTNA 2.0). Defining ZTNA 2.0 In a nutshell, ZTNA 2.0 is an approach to zero trust that applies least privileged access at the application layer without relying on IP addresses and port numbers, and implements continuous trust verification, monitoring user and app behavior, to ensure the connection isn’t compromised over time. “ZTNA 1.0 uses an ‘allow and ignore’ model. What we mean by that is, once access to an application is granted, there is no further monitoring of changes in user, application or device behavior,” said Kumar Ramachandran, SVP of product and GTM at Palo Alto Networks. Under ZTNA 1.0, once a user connects to an app once, the solution assumes implicit trust from that point onward. In effect, the lack of additional security inspection and user behavior monitoring means these solutions can’t detect compromise, leaving them vulnerable to credential theft and data exfiltration attacks. For Ramachandran, this is a critical oversight that ruins the underlying integrity of least-privileged access. 
“This might sound shocking, but the ZTNA 1.0 solutions implemented by vendors actually violate the principle of least privileged access, which is a fundamental tenet of zero trust. ZTNA 1.0 solutions rely on outdated constructs to identify applications, like IP addresses and port numbers,” Ramachandran said. On the other hand, ZTNA 2.0 continuously authorizes and monitors user access based on contextual signals, giving it the ability to withdraw access from users in real time if they start behaving maliciously. Is this a legitimate iteration of zero trust or a buzzword? Outside of Palo Alto Networks’ perspective, analysts are divided on whether ZTNA 2.0 stands on its own as an iteration of zero trust, or whether it’s a buzzword. “Zero Trust 2.0 is nothing but marketing, really driven from one vendor. It’s not really an evolution of the technology. This means that there really isn’t a fundamental difference; zero trust is and has been about reducing access to what is required to do a job and no more, and to enforce this based on identity and context,” said Charlie Winckless, senior analyst at Gartner. “Much of the language around ZTNA 2.0 is simply catching up to innovators in the space and what their products already offered. Not all the capabilities will be needed by all clients, and selecting a vendor is more than about a fake marketing term. It’s the 2.0 release for the vendor, not of the technology,” Winckless said. However, there are others who believe that ZTNA 2.0 does make some limited tweaks to traditional zero trust. “ZTNA 2.0 was coined in 2020 by a vendor in response to the NIST 800-207 publication. The only real differences are the addition of continuous monitoring and step-up authentication via privilege assessment, based on the resource being accessed, some form of DLP [data-loss prevention] capabilities, and additional CASB [cloud access security broker] coverage,” said Heath Mullins, senior Forrester analyst. So why does ZTNA 2.0 matter? 
Fundamentally, ZTNA 2.0 doesn’t challenge the underlying assumptions of zero trust, but seeks to reevaluate the approaches that ZTNA 1.0 solutions take to applying access controls, which are open to compromise. “In more modern ZTNA 2.0 technologies, authorization not only occurs upon the initiation of a session, but continuously and dynamically throughout a connected session,” said Andrew Rafla, principal at Deloitte & Touche LLP, and member of the cyber and strategic risk practice of Deloitte Risk and Financial Advisory. “This feature helps alleviate the risk of compromised credentials and session hijacking attacks,” Rafla said. Given that stolen credentials contribute to almost 50% of data breaches, organizations can’t afford to assume that user accounts are unlikely to be compromised. Thus, when looking at building a zero-trust strategy, ZTNA 2.0 solutions have a role to play in helping apply more effective controls at the application level that are responsive to account takeover attempts. That said, zero trust remains an iterative approach to securing user access, and implementing a ZTNA 2.0 solution won’t give an organization zero-trust access controls out of the box. Moving forward on the zero-trust journey Whether an organization decides to use ZTNA 1.0 or ZTNA 2.0 solutions to enable its zero-trust journey, the end goal is the same: eliminating implicit trust, implementing the principle of least privilege and preventing unauthorized access to critical data assets. It’s important to emphasize that, while ZTNA 2.0 provides a useful component in the zero-trust journey for applying the principle of least privilege more effectively at the application level and making security teams more responsive to compromise, it’s not a shortcut to implementing zero trust.
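Rafla's point about continuous, dynamic authorization can be sketched as a per-request risk check rather than a one-time gate at session start. The signal names, weights and threshold below are illustrative assumptions, not any product's actual scoring logic:

```python
# Sketch of continuous trust verification: every request in a session is
# re-scored against contextual signals, and access is withdrawn mid-session
# once risk crosses a threshold. Weights and signals are illustrative.
RISK_THRESHOLD = 0.7

def risk_score(signals: dict) -> float:
    score = 0.0
    if signals.get("new_device"):
        score += 0.4
    if signals.get("impossible_travel"):
        score += 0.5
    if signals.get("bulk_download"):
        score += 0.3  # possible data exfiltration in progress
    return min(score, 1.0)

def evaluate_session(events: list[dict]) -> list[str]:
    """Unlike an 'allow and ignore' model, trust is re-checked on every
    event in the session, not just at connection time."""
    return ["revoke" if risk_score(signals) >= RISK_THRESHOLD else "allow"
            for signals in events]

session = [
    {},                                           # normal request
    {"new_device": True},                         # suspicious, below threshold
    {"new_device": True, "bulk_download": True},  # risk crosses the threshold
]
print(evaluate_session(session))  # ['allow', 'allow', 'revoke']
```

The contrast with ZTNA 1.0 is in the loop itself: the same user, already authenticated, loses access the moment the contextual signals change.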
The only way to fully implement zero trust is to create an inventory of resources and data throughout the enterprise environment and systematically implement access controls to ensure that unauthorized access is prevented. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,585
2,023
"Securing generative AI starts with sustainable data centers | VentureBeat"
"https://venturebeat.com/security/securing-generative-ai-starts-with-sustainable-data-centers"
"Securing generative AI starts with sustainable data centers Illustration by: Leandro Stavorengo Enterprises are increasingly experiencing attacks on their artificial intelligence (AI) infrastructure, with 41% having experienced an AI privacy breach, according to an August 2022 Gartner report. Twenty-five percent have experienced malicious, intentional attacks on their AI systems and infrastructure. Cyberattacks aimed at AI infrastructure most commonly focus on data poisoning (42%), adversarial samples (22%) and model stealing (20%). Despite the growing number of cyberattacks aimed at their AI infrastructures, enterprises are becoming more prolific in designing, testing and deploying models. Seventy-three percent have deployed hundreds of models into production, and large-scale enterprises have thousands of models today. 
CIOs and CISOs, especially in banking, finance, infrastructure, manufacturing and professional services — where models are increasing the fastest — tell VentureBeat they have concerns about keeping up from a security standpoint with the proliferation of models in development and actively deployed. Generative AI and machine learning (ML) model security and risk management is a board-level discussion across all industries. The senior management teams of infrastructure, manufacturing, and professional services are focused on gaining greater insight into risks using AI and machine learning. “Understanding vulnerabilities and gaining insight at both the site and enterprise level will help enable faster and more informed decisions to better defend against cyberattacks, reduce potential downtime and create a safer environment for our employees,” Chase Carpenter, Honeywell chief security officer, told VentureBeat. Data centers are a high-value AI target Too much focus on cost reduction alone without sustainability designed into data center infrastructure leaves them vulnerable to cyberattacks that capitalize on weak points in infrastructure. Reducing energy costs without a sustainable long-term plan delivers short-term cost savings, but leaves a data center vulnerable to attacks that can shut an entire facility down. Examples include attacking cooling systems, disabling air flow, and damaging servers, CPUs, and GPUs. Another is assuming web servers, VPN appliances and endpoints are protected without investing in microsegmentation or endpoint security to protect them. “Cyberattacks from Advanced Persistent Threat (APT) groups that are state-sponsored are ramping up this year; we can see it in our monitoring data,” confided the CISO of a utility provider doing extensive generative AI and ML model development. 
“We used to see our data centers get attacked sporadically, but now it’s a steady stream of state-sponsored attacks looking to penetrate data centers and see what new AI-based monitoring technologies we have under development.” The utilities CISO says the Chinese cyberattacker group APT41 is active across global utility power grids and is actively looking to gain new generative AI and ML technologies. Their attack strategies concentrate on using phishing emails and malware to gain access to the networks of power companies and grid operators. They’re most known in the utility industry for their 2019 cyberattack on data center providers in Asia and the U.S. APT41 hackers exploited unpatched vulnerabilities in VPN devices, unprotected endpoints and web servers that weren’t protected with basic cybersecurity or zero trust hygiene. APT41 exfiltrated data, including intellectual property, AI and ML model development underway, and patents under development with Asia-based research institutes. Sustainability needs to deliver stronger cybersecurity With data centers under attack for the valuable generative AI and ML models under development and deployed, a one-and-done mentality never works. CISOs of banking and financial services firms whose data centers see regular state-sponsored attacks say it’s possible to improve sustainability and cybersecurity simultaneously. “We’re taking a holistic approach to the challenges of becoming more sustainable and hardening our data centers and their many integration points back to DevOps and engineering,” said the CISO of a professional consulting firm whose clients are in banking. Staying in compliance with broader sustainability initiatives is essential to continually win new business in the years ahead. So is keeping a data center hardened enough so its physical infrastructure can’t be attacked. 
Here are the four strategies learned from CISOs and CIOs who have experienced data center breaches aimed at their generative AI and ML model development: Gain greater visibility across every data center asset, starting with energy usage. It’s common knowledge that most enterprises don’t know where 40% of their endpoints are at any given time. In a data center, that’s a breach waiting to happen. CISOs tell VentureBeat that getting real-time visibility of every endpoint and its specific asset management profile is invaluable in helping to mitigate a breach. Tracking the energy consumption of an asset, including segments of server blocks across data center floors, helps provide insight into unusually high activity, which could signal the need to upgrade, repair or replace servers. Microsegment every physical system the data centers rely on – and optimize their energy spend. APT41 is known for its expertise in attacking data center cooling systems and driving temperatures so high that CPUs, GPUs and server silicon risk being destroyed. In retrospect, CISOs tell VentureBeat that microsegmenting the industrial control systems (ICS) that control heating, cooling, environmental conditions, fault-tolerant batteries and backup systems is a must-have. Assuming a breach has already happened, and that HVAC, environmental and power systems are compromised, is key to hardening a data center enough to withstand another attack. From a sustainability standpoint, every CIO and data center team VentureBeat interviewed for this article says they are advanced in using AI- and ML-based tools to analyze energy usage by asset type and group. What’s missing are insights into how all assets across a data center can be better orchestrated to reduce carbon footprints and how all data centers can be viewed in aggregate to reduce their environmental impact. 
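The per-asset energy tracking described in the first strategy can be approximated with a simple statistical check: flag any asset whose power draw deviates sharply from the fleet baseline. A minimal sketch with made-up readings; the asset names and the sigma threshold are illustrative assumptions:

```python
import statistics

def flag_anomalous_assets(readings: dict[str, float], threshold: float = 3.0) -> list[str]:
    """Flag assets whose energy draw is more than `threshold` standard
    deviations from the fleet mean -- a crude proxy for the 'unusually high
    activity' signal described above."""
    values = list(readings.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []  # all assets drawing identically: nothing stands out
    return [asset for asset, watts in readings.items()
            if abs(watts - mean) / stdev > threshold]

# Illustrative per-server power readings in watts; one server is running hot.
watts = {"rack1-srv01": 410.0, "rack1-srv02": 395.0, "rack1-srv03": 405.0,
         "rack2-srv01": 400.0, "rack2-srv02": 980.0}
# A low 1.5-sigma cutoff suits this tiny sample; a real fleet would use more
# data and a higher threshold.
print(flag_anomalous_assets(watts, threshold=1.5))  # ['rack2-srv02']
```

In production this check would run against streaming telemetry rather than a snapshot, but the principle is the same: a baseline plus a deviation rule turns raw energy data into an actionable alert.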
Boards of directors want the roll-up view of how data centers are progressing toward sustainability and environmental, social and governance (ESG) targets, and often CIOs have their teams doing this manually every quarter. Real-time monitoring is table stakes for making progress on sustainability and cybersecurity. What was once considered optional, and often postponed because of its expense, is now the core of an effective sustainability and cybersecurity strategy. CISOs whose data centers have been hacked say that if they had real-time monitoring on every server, asset, endpoint and power source, they could have identified the intrusion faster and had a chance to stop the breach. The more accurate the telemetry data real-time monitoring provides, the better the threat modeling and models to identify anomalous activity that could indicate an intrusion. Real-time data is the lifeblood of sustainable and secure data centers. Consolidate data center tech stacks to gain greater efficacy and sustainability. Data centers that get hacked have complex security tech stacks that experienced cyberattackers know how to find gaps in. It’s common to hear a CISO with a breached data center say that the cyberattackers seemed to know their network better than the admins managing it. VentureBeat has learned that more banking, financial services and professional services firms are basing their consolidation strategies around extended detection and response (XDR). Ninety-six percent of CISOs plan to consolidate their security platforms, with 63% saying XDR is their top solution choice. Gartner predicts that by year-end 2027, XDR will be used by up to 40% of enterprises to reduce the number of security vendors they have in place, up from less than 5% today. An attribute all XDR leaders have is deep talent density in AI and ML across their teams. 
Leading XDR platform providers include Broadcom, Cisco, CrowdStrike, Fortinet, Microsoft, Palo Alto Networks, SentinelOne, Sophos, TEHTRIS, Trend Micro and VMware. By consolidating tech stacks, XDR also contributes to data centers achieving their sustainability goals. Reducing data centers’ energy consumption and carbon footprints by eliminating redundant security tools and streamlining security operations is key to a successful tech stack consolidation. XDR’s use in data centers is proving effective in improving resilience and reliability by providing faster and more accurate threat detection and response. XDR is helping data centers save up to 50% of energy costs and reduce CO2 emissions by up to 85%. Additionally, XDR can improve the performance and availability of data center applications by minimizing downtime and disruption caused by cyberattacks. Hardening data centers is core to generative AI’s future. Four strategies deliver the most practical value in securing data centers immediately, according to CISOs who have lived through an intrusion and breach attempt. For the utilities CISO being routinely scanned and probed by state-sponsored actors, the need to be vigilant and make the four strategies core to their operations is key. Real-time data and XDR are helping keep intrusion attempts out, and microsegmentation protects HVAC, power and related subsystems. Data centers whose enterprises are known for generative AI and ML expertise are targets today. From the interviews VentureBeat has had recently, nation-state attacks are ramping up with a primary focus on power grids and related technologies. 
"
13,586
2,023
"The cyber risks of overheating data centers | VentureBeat"
"https://venturebeat.com/security/the-cyber-risks-of-overheating-data-centers"
"The cyber risks of overheating data centers Illustration by: Leandro Stavorengo The heat is on. Climate change creates new challenges for data centers while exposing a new vulnerability that attackers can quickly weaponize. The burning problem of overheated servers caused by record heat waves has melted down data centers from Los Angeles to London. Many data center cooling systems weren’t designed to withstand the heat waves the world is experiencing today. Cooling systems are failing under the strain, allowing servers to overheat, leading to many of the world’s most popular websites and applications crashing. Attackers want to weaponize heat Companies that trade off lower energy costs for running a slightly hotter data center are inviting a breach or, at the least, a data center meltdown. No one cost-reduced their way into a secure data center. 
Sustainability is the path away from spiraling energy costs. Attackers aim to weaponize heat and exfiltrate billions of dollars in data from data centers by attacking cooling systems. From cybercrime groups to sophisticated Advanced Persistent Threat (APT) attack teams, many funded by nation-states, expect more data center attacks where heat is the attacker’s weapon. Don’t invite cyber risk by overheating data centers Data center costs continue to spiral to record levels for many companies, with energy costs outpacing all other expense categories. Making cooling as efficient as possible is critical to data center profitability. Cooling accounts for approximately 40 percent of a data center’s energy consumption. While data centers continue to make strides in improving energy efficiency by phasing in sustainability, starting with improved cooling methods, many are introducing greater cyber risk by stopping short of how far they could go with sustainability. “Data centers are big energy consumers—a hyperscaler’s data center can use as much power as 80,000 households do. Pressure to make data centers sustainable is therefore high, and some regulators and governments (including Singapore and the Netherlands) are imposing sustainability standards on newly built data centers,” according to McKinsey. Despite record levels of capital investment in sustainability, data centers still see overheated servers prone to failure, leading to outages. Newer cooling technologies, including outside air cooling, are cost-effective, yet they can introduce contaminants into a data center infrastructure and potentially damage hardware. Another approach data centers take to reduce cooling costs is raising server inlet temperatures. It’s a calculated risk that the cost savings will be worth the increased risk of potentially causing server CPUs to fail. 
It’s well-known in data centers that servers are the single greatest cause of outages, making the cost savings of allowing temperatures to rise questionable. Server outages cause 30% of all data center interruptions and outages. Heat-induced server failures drive unplanned outages that disrupt data center operations and can cause websites, apps and online storage to fail unpredictably, costing billions of dollars in lost productivity. VentureBeat interviewed several data center recovery specialists who spoke on condition of anonymity regarding how chronic data center overheating is. They affirmed that data centers are redlining to save on costs, with many struggling to keep server inlet temperatures below 80°F, the consensus standard for server cooling. Cost savings are winning out over reducing cyber risk. “Data centers are in for a wake-up call if climate change continues to deliver triple-digit heat waves and they don’t get serious about long-term, more sustainable and affordable cooling that doesn’t invite more risk,” a leading data center recovery specialist told VentureBeat. Twitter’s Sacramento data center going offline due to extreme heat in 2022 was prescient of how extreme heat could affect data center performance in the future. In an internal memo to engineers, Carrie Fernandez, Twitter’s vice president of engineering, wrote, “On September 5th, Twitter experienced the loss of its Sacramento (SMF) data center region due to extreme weather. The unprecedented event resulted in the total shutdown of physical equipment in SMF.” Fernandez said the company’s data center was in a “non-redundant state” after extreme heat caused an outage at its Sacramento data center. She called the incident “unprecedented” and said the heat wave led to “the total shutdown of physical equipment.” The Twitter outage originated due to an extreme heat wave. 
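The 80°F inlet ceiling cited above lends itself to a simple monitoring rule: warn as a sensor approaches the limit, and treat anything at or above it as redlining. A minimal sketch; the sensor names and warning margin are illustrative assumptions, not an industry standard:

```python
# Classify server inlet temperatures against the 80°F consensus ceiling
# mentioned above. The 3°F warning margin is an assumed early-alert buffer.
INLET_LIMIT_F = 80.0
WARN_MARGIN_F = 3.0

def classify_inlet_temps(temps_f: dict[str, float]) -> dict[str, str]:
    status = {}
    for sensor, temp in temps_f.items():
        if temp >= INLET_LIMIT_F:
            status[sensor] = "critical"  # redlining: heat-induced failure risk
        elif temp >= INLET_LIMIT_F - WARN_MARGIN_F:
            status[sensor] = "warning"   # approaching the ceiling
        else:
            status[sensor] = "ok"
    return status

print(classify_inlet_temps({"row1-inlet": 72.5, "row2-inlet": 78.4, "row3-inlet": 81.2}))
# {'row1-inlet': 'ok', 'row2-inlet': 'warning', 'row3-inlet': 'critical'}
```

The warning tier is the point of the design: a data center that only alerts at the hard limit has already lost the time it needed to shed load or bring redundant cooling online.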
Cyberattackers noticed this and other extreme heat-based outages and continue to fine-tune their tradecraft to attack HVAC, electricity and redundant power systems. Specialists cite an incident in 2021 as a cautionary tale of redlining server heat to save on costs. A data center operator in Singapore raised temperatures to borderline unsafe levels to save on cooling costs, leading to the data center servers melting down and widespread server failures. The meltdown lasted nearly a week, leading to thousands of customers experiencing outages. Data center attacks that weaponized heat Attackers are fine-tuning their tradecraft and creating malware that attacks cooling systems to force a data center meltdown to get their ransomware demands met or make a political statement. A data center in Atlanta, Georgia, was hit with a cyberattack in 2018 that led to the shutdown of several city services, including the municipal court, the police department and the Hartsfield Atlanta airport. Cyberattackers used a variant of SamSam ransomware designed to encrypt data on every available server. Attackers also penetrated the data center’s cooling system, causing temperatures to rise above 100 degrees, damaging server CPUs and related silicon-based equipment. Cyberattackers demanded $51,000 in Bitcoin to unlock servers and release control of the cooling system. An Iranian data center was the victim of a cyberattack in 2019 that disrupted its power supply and cooling systems, causing servers and supporting systems to overheat quickly. An adversarial nation opposing Iran’s nuclear program took responsibility for the attack, using the malware program Stuxnet designed to target and bring down industrial control systems. Iranian data center operators say the malware caused the centrifuges at the data center to spin out of control and break down. A data center in Singapore was attacked in July 2022, disrupting several government agencies, banks and media outlets’ online servers. 
Attackers exploited a firewall vulnerability, causing servers to malfunction due to overheating. An Indonesian hacking group took responsibility for the attack, claiming it was in retaliation for Singapore’s ongoing support of Myanmar’s military junta. Striking a balance between security and sustainability Data centers face the challenging paradox of continually increasing storage volume, reducing access latency, controlling costs and finding new ways to harden themselves from cyberattacks. Adding to the challenges is the pressure data centers are under to reduce their environmental impact and energy consumption, as data centers account for about 1% of global electricity use and about 0.3% of global greenhouse gas emissions. Data center operators are creating innovative new strategies to achieve these challenging goals. They include relying more on renewable energy sources, water-efficient cooling systems and waste heat recovery technologies to improve sustainability. VentureBeat has learned that the following strategies are paying off the most for data center owners and recovery experts implementing these programs: Get in the habit of conducting detailed thermal mapping to identify hot spots and optimize cooling. Data center recovery specialists say this is a blind spot for many data center operators who procrastinate getting thermal mapping done periodically. Given how quickly servers can degrade over time when exposed to extreme temperatures, it’s a good idea for this task to become part of any data center’s muscle memory. Consider how AI can help improve power consumption, strengthened with eco-friendly chillers and evaporative cooling. The benefits AI can bring to the data center are just beginning, according to the experts and data center operators VentureBeat spoke with. One considered AI optimization critical to their success in meeting sustainability benchmarks needed to achieve internal and regulatory standards. 
Cautious of exceeding server inlet temperature limits, more data centers are also using AI to interpret and trigger alerts and actions in real time, adjusting dynamically to prevent overheating while maximizing efficiency. Redundant cooling systems with fault-tolerant power sources are the future of data center cooling. It’s undeniable that the upsurge in heat waves and the data center failures across Europe, the United States, and the major one in London last summer are leading indicators of an entirely new type of temperature challenge data centers must take on. Using AI to optimize data center asset inventories is gaining traction. It’s a perfect use case for AI and machine learning (ML) algorithms that can be trained to optimize hardware and system configurations for an increasingly complex series of constraints that data centers need to operate within. AI-based optimization techniques can factor in sustainability requirements, resource loads and cooling requirements by server CPU, all focused on creating the optimal environmental conditions for a data center to perform at peak performance. Data centers are in a race to improve cybersecurity and sustainability. As the data center industry strives to reduce its environmental footprint, it must balance sustainability and cyber-resilience goals. Sustainable solutions like outside air cooling, for example, that deliver energy savings, can amplify security risks if not managed as part of a broader data center cybersecurity plan. In the race to improve data center sustainability, operators and the companies behind them can’t sacrifice the security of cooling systems and infrastructure for cost savings. It’s time to embrace sustainability over risk. 
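The thermal-mapping habit recommended in the first strategy can be prototyped even without specialized tooling. A minimal sketch, assuming a simple grid of temperature sensor readings across the floor; the grid values and the hot-spot margin are illustrative assumptions:

```python
# Locate hot spots in a grid of floor temperature readings by comparing each
# cell to the floor-wide average. The +10°F margin is an assumed cutoff.
def find_hot_spots(grid: list[list[float]], margin_f: float = 10.0) -> list[tuple[int, int]]:
    cells = [t for row in grid for t in row]
    avg = sum(cells) / len(cells)
    return [(r, c) for r, row in enumerate(grid)
            for c, temp in enumerate(row) if temp - avg > margin_f]

# One reading per aisle position (°F): the rack at row 1, column 2 runs hot.
floor = [
    [68.0, 69.5, 70.0],
    [69.0, 71.0, 88.0],
    [68.5, 70.0, 69.0],
]
print(find_hot_spots(floor))  # [(1, 2)]
```

Run periodically, even a crude map like this turns thermal drift into a trend line: a coordinate that keeps appearing in the output is a cooling problem, a failing server, or, per the attack scenarios above, possibly something worse.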
"
13,587
2,023
"VB in Conversation - eBay doubles down on data efficiency | VentureBeat"
"https://venturebeat.com/vb-in-conversation-ebay-doubles-down-on-data-efficiency"
"VB in Conversation – eBay doubles down on data efficiency VentureBeat Editor-in-Chief Matt Marshall talks with Mazen Rawashdeh, SVP & CTO of eBay, about how one of the world’s largest online marketplaces is using innovative technologies to power its data centers and reduce its environmental impact while meeting the growing demand for data processing in the age of generative AI. Rawashdeh explores the importance of fungibility between CPUs and GPUs, hybrid cloud, open source, running their data centers on 85% utilization and more. "
13,588
2,023
"VB in Conversation - Payment giant Paypal tackles the data sustainability challenge | VentureBeat"
"https://venturebeat.com/vb-in-conversation-payment-giant-paypal-tackles-the-data-sustainability-challenge"
"VB in Conversation – Payment giant Paypal tackles the data sustainability challenge VentureBeat Editor-in-Chief Matt Marshall talks with Archana Deskus, EVP & CIO of Paypal, on how the payment giant, with more than 400 million users, is meeting the need for more computation, more power and more storage, particularly during peak periods — all while committing to sustainability and greater efficiency. Deskus dives into bursting capacity, data center consolidation, asset utilization, efficient code and more. "
13,589
2,023
"Aflac takes on claims challenges to scale AI efforts | VentureBeat"
"https://venturebeat.com/ai/applying-ai-at-scale--insurance-customer-story-aflac"
"Aflac takes on claims challenges to scale AI efforts This article is part of a VB special issue. Read the full series here: The quest for Nirvana: Applying AI at scale. For Aflac, which provides supplemental insurance to more than 50 million people worldwide (and is well-known for its duck mascot), delivering AI at scale across the organization has become a top priority since the pandemic. Aflac has been forced to accelerate its digital transformation, including artificial intelligence (AI), as the pandemic severely challenged the company’s traditional in-person, independent agent/franchise business. The trick, however, has been choosing the best AI use cases among competing priorities, says Shelia Anderson, who joined Aflac as CIO last July. “We’re thinking about the business challenge and outcomes we’re looking for,” she told VentureBeat. 
When it comes to AI and machine learning, that includes focusing on the overall viability and desirability of the company’s models, and asking questions such as: Is the model needed by the business? Is it solving a specific business need? Does the company have the technical solutions it needs? How long will it take for the model to bring value to the business?

A clear opportunity to automate claims

Aflac has long had a focus on big data. Today, Aflac supports agents and brokers with AI and ML models that aid in suggestive selling, flagging at-risk accounts and identifying dormant accounts that are candidates for reactivation. But developing a solution that could scale AI across the organization has been a high priority since 2020, said Anderson. Just last year, the company rolled out what Anderson calls its first significant AI-driven platform, which uses AI and ML to transform how Aflac processes claims. The platform consists of a set of models trained on business rules tailored to the company’s various product lines. The goal is to automate routine processes, allowing the company to pay claims more quickly. There are three main components to the platform:

- An AI-based document digitization pipeline to automatically extract, classify, annotate and index proof-of-loss documents.
- Knowledge graphs to map extracted information from documents for better context on processed data.
- An end-to-end, AI-based claims processing workflow for adjudication across different lines of business, allowing for fully automated or assisted, error-free, human-in-the-loop processing.

“This helps our customers to be adjudicated faster and with more accuracy,” Anderson said, pointing out that before the AI solution was implemented, about 46% of Aflac claims were not fully automated. 
Aflac has many different claim types, she explained, but one of the first clear opportunities to scale AI was around the company’s wellness benefits. These are included in most of its accident, hospital indemnity and cancer insurance policies. Essentially, Aflac pays customers money for getting yearly checkups and medical screenings such as physicals, dental exams and eye tests. It turned out there was a high volume of lower-dollar payout claims requiring time-consuming customer interactions. “For simple claims that don’t require proof of loss, like wellness claims, we want to pay out quickly,” said Anderson. This “allows our customer care specialists to take care of our policyholders [who have] more complex situations.”

Scaling the AI platform

Now, Aflac is working to scale its claims automation platform to other types of claims. “The benefits that the business case has proven are improved customer ease, reducing our pain points through the journey, and increasing our touchless claims, which was a benefit to our internal workforce as well as our claimants,” Anderson said. “Streamlining with a rules-based AI reduces error rates and frees up our resources so they can focus on more critical claims where people may actually need to hear a voice on the other end of the phone, maybe dealing with more severe health-related issues where that personal touch is needed.” Anderson said she believes Aflac has only just hit the “tip of the iceberg” when it comes to implementing the platform. She has plans to expand the same capability across the organization in 2023. That, she pointed out, is the value of getting a model that works well, one that solves a basic challenge and takes advantage of an opportunity in the marketplace. “You can take that and stamp it across your other lines of business with a similar problem,” she said. 
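Anderson’s description of wellness claims (low-dollar, no proof of loss required, pay out automatically; everything else routed to a specialist) amounts to a simple triage rule. A minimal sketch follows; the class fields, the $200 auto-pay limit and the routing logic are all invented for illustration and do not reflect Aflac’s actual system:

```python
# Hypothetical claims-triage sketch. The Claim fields, the auto-pay
# limit and the routing rules are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_type: str          # e.g. "wellness", "accident", "cancer"
    amount: float            # requested payout in dollars
    has_proof_of_loss: bool  # were supporting documents submitted?

def adjudicate(claim: Claim, auto_limit: float = 200.0) -> str:
    """Auto-pay simple low-dollar wellness claims; route the rest to a person."""
    if claim.claim_type == "wellness" and claim.amount <= auto_limit:
        return "auto_pay"        # no proof of loss required for wellness claims
    return "human_review"        # complex claims keep a human in the loop

print(adjudicate(Claim("wellness", 75.0, False)))   # auto_pay
print(adjudicate(Claim("accident", 1500.0, True)))  # human_review
```

In a real platform the claim type and amount would come from the document-digitization step, and the rule set would be far richer, but the shape is the same: automate the routine path, escalate the rest.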
“So we’re taking this and expanding it in our accident and hospital lines of business, and we’re also adding other capabilities in the future around cancer, dental and vision.” In addition, she added, there is an opportunity to extend these AI capabilities beyond the claims process, to any use case that needs to be automated based on prediction.

Aflac’s biggest AI scaling challenges

Besides prioritization, one of the biggest challenges in scaling any AI effort across Aflac is getting participation from various organizational entities, Anderson said. “For example, our partner that runs the analytics side of our business has a front-end team,” she explained. “We have a back-end data team and then we have business teams that we work with as well. So managing and prioritizing across that ecosystem, whether it’s AI or whether it’s another business initiative, that’s always going to be something that is a challenge for us.” In addition, in a high-demand space like AI and machine learning, attracting and retaining talent with the right skill set is a major challenge. “It’s something we all have to stay laser-focused on,” she said.

Applying AI to improving customer retention

Overall, Aflac’s claims automation platform has helped with customer service and customer retention, Anderson said. It’s about “how we spend the time that we need for those highest-priority customers and claims while automating others,” she said. “I think that customer service is going to be key in leveraging AI in the future.” That said, she added that she believes allowing some AI capabilities to mature has been an important part of Aflac’s journey — taking time to make sure it doesn’t take needless risks with customer interactions. “If you want to be first to market with something, of course, that’s just a risk you’re going to have to take,” she said. 
“But for Aflac, I believe that allowing some of these capabilities to mature was definitely part of the journey.” "
13,590
2023
"Can healthcare show the way forward for scaling AI? | VentureBeat"
"https://venturebeat.com/ai/can-healthcare-show-the-way-forward-for-scaling-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Insight Can healthcare show the way forward for scaling AI? Share on Facebook Share on X Share on LinkedIn This article is part of a VB Lab Insights series on AI sponsored by Microsoft and Nvidia. Don’t miss additional articles in this series providing new industry insights, trends and analysis on how AI is transforming organizations. Find them all here Scaling artificial intelligence (AI) is tough in any industry. And healthcare ranks among the toughest, thanks to highly complex applications, scattered stakeholder networks, stringent licensing and regulations, data privacy and security — and the life-and-death nature of the industry. “If you mis-forecast an inventory level because your AI doesn’t work, that’s not great, but you’ll recover,” says Peter Durlach, Executive Vice President and Chief Strategy Officer of Nuance Communications , a conversational AI company specializing in healthcare. 
“If your clinical AI makes a mistake, like missing a cancerous nodule on an X-ray, that can have more serious consequences.” Even with the current willingness of many organizations to fund AI initiatives, many healthcare organizations lack the skilled staff, technical know-how and bandwidth to deploy and scale AI into clinical workflows. In fact, healthcare’s adoption rate is far lower than the average of around 54% for all industries combined. Despite the difficulties, machine learning (ML) and other forms of AI have impacted a wide range of clinical domains and use cases in hospitals, R&D centers, laboratories and diagnostic centers. In particular, deep learning and computer vision have helped improve accuracy, accelerate interpretation and reduce repetition for radiologists for X-ray, CT, MR, 3D ultrasound and other imaging. With global shortages of radiologists and physicians looming, AI assistance could be a “game-changer.” After slow growth that has trailed nearly every industry, many analysts forecast that healthcare AI will boom in 2023 and beyond. The global market is expected to exceed $187 billion by 2030, reflecting fast-growing demand. To take advantage of investments, enterprises and industry vendors must overcome several technical obstacles to adoption of clinical AI. Chief among them: lack of standardized, healthcare-specific platforms and integrated development and run-time environments (IDEs and RTEs). Moreover, current infrastructure often lacks the functionality, workflows and governance to easily create, validate, deploy, monitor and scale — up, down and out. That makes it difficult to scale up during a morning clinic, then scale down during the evening when demand is lower, for example. Or to easily expand deployment of AI systems and models across organizations. Yet despite (and perhaps because of) these challenges, some of today’s most innovative and effective approaches for moving AI into production come from healthcare. 
What follows are conversations VB had separately with two global leaders about leading-edge, cloud-based approaches that might offer blueprints for other industries struggling with scaling automation.

1. Nuance: ‘From Bench to Bedside,’ deploying for impact

Accelerating creation and deployment of trained models at scale with a secure cloud network service — a conversation with Peter Durlach, Executive Vice President and Chief Strategy Officer at Nuance.

Good news: The growing popularity of foundation and large language approaches is making it easier to create AI models, says Durlach. But the difficulty of deploying and scaling AI models and applications into healthcare workflows continues to present a formidable challenge. “About 95% of all models built in-house or by commercial vendors never get deployed, because getting them into clinical workflow is impossible,” Durlach said. “If I’m a client building a model just for myself, it’s one set of challenges to get that deployed in my own company. But if I’m a commercial vendor trying to deploy across multiple settings, it’s a nightmare to integrate from the outside.” Making it easier for hospitals, AI developers and others to overcome these obstacles is the goal of a new partnership between Nuance, Nvidia and Microsoft. The aim is to simplify and speed the translation of trained AI imaging models into deployable clinical applications at scale by combining the nationwide Nuance Precision Imaging Network, an AI-powered Azure cloud platform, and MONAI, an open-source and domain-specialized medical-imaging AI framework cofounded and accelerated by Nvidia. The latest solution builds on two decades of work by Burlington, Mass.-based Nuance to deploy AI applications at scale. “We are a commercial AI company,” Durlach explains. 
“If it doesn’t scale, it has no value.” In these interview highlights, he explains the value of an AI development and deployment service and suggests what to look for in a provider of AI delivery networks and cloud infrastructure.

Underestimating complexity

“People underestimate the complexity of closing the gap from development to deployment to where people actually use the AI application. They think, I put a website up, I have my model, I have a mobile app. Not so much. The activities involved in implementing an AI stretch from R&D through deployment to after-market monitoring and maintenance. In life science, they talk about getting a clinical invention from the bench to the bedside. This is a similar problem.”

[Figure: Key steps in developing and using AI for medical imaging]

The value of specialized cloud-based delivery and development

“If I’m a healthcare organization, I want to use AI to drive very specific outcomes. I don’t want to have to build anything. I just want to deploy an application that solves a specific business problem. Nuance is bringing end-to-end development, from low-level infrastructure and AI tools all the way up to specific deployable applications, so you don’t have to stitch components together or build anything on top.

“The Nuance Precision Imaging Network runs on Azure and is accessible at more than 12,000 connected facilities across the country. A health system or a commercial vendor can deploy from development to runtime with a single click and be already integrated with 80% of the infrastructure in U.S. hospital systems today. The new partnership with Nvidia brings specialized ML development frameworks for medical imaging into clinical translation workflows for the first time, which really accelerates innovation and clinical impact. Mass General Brigham is one of the first major medical centers to use the new offering. 
They’re defining a unique workflow that links medical-imaging model development, application packaging, deployment and clinical feedback for model refinement.”

Choosing a cloud infrastructure vendor

“When Nuance was looking for cloud and AI in healthcare, one of the first things we asked was: What’s the company’s stance on data security and privacy? What are they going to do with the data? The large cloud companies are all great. But if you look closely, there are many questions about what’s going to happen to the data. One’s core business is monetizing data in various ways. Another one often uses data to go up the stack and compete with their partners and clients.

“On the technical side, each cloud company has their strengths and weaknesses. If you look at the breadth of the infrastructure, Microsoft is basically a developer platform company that provides tools and resources to third parties to build solutions on top of. They’re not a search company. They’re not a pure infrastructure company or a retail company. For us, they have a whole set of tools — Azure, Azure ML, a bunch of governance models — and all the development environments around .NET, Visual Studio, and all these things that make it easier, not trivial, to build and deploy AI products. Once you’re running, you need to look closely at scalability, reliability and global footprint.

“For data security, privacy and comfort with the business model, Microsoft stood out for us. Those were major differentiators.

“Nuance was acquired by Microsoft about 10 months ago. But we were a customer long before that for all these reasons. We continue running and building atop Microsoft, both on-premises and in Azure, with a wide array of Nvidia GPU infrastructure for optimized training and model building.”

Focus on value, not technology

“AI technology is only as good as the value it creates. The value it creates is only tied to the impact it drives. The impact only happens if it gets deployed and adopted by the users. 
Great technical people look at the end-to-end workflow and the metrics.

“Don’t get lost in the technology weeds. Don’t just get caught up in looking at one tool set or one annotation tool or one inferencing thing. Instead, ask: What is the use case? What are the metrics that the use case is trying to move around cost and revenue? What is required to actually get the model deployed? Get super rigorous around that; don’t underestimate deployment or fall in love with building the model. It has almost no value if it doesn’t end up in the workflow and drive impact.”

Bottom line: Taking advantage of an established commercial delivery network and cloud ecosystem lets you focus on developing and refining AI models and applications that deliver clear value and help drive key organizational goals. When choosing a network and cloud provider, look closely at three key areas: how their business models impact data privacy, the completeness of their AI development and delivery environment, and their ability to easily scale as widely as you require.

2. Elekta: Collaborate to ‘dream bigger’ and speed innovation of products and AI

Scaling global R&D infrastructure in the cloud helps make next-gen, AI-powered radiation therapy more accessible and personalized — a conversation with Rui Lopes, Director of New Technology Assessment at Elekta.

In 2017, Rui Lopes visited a major radiology conference and noticed a big change. Instead of “big iron and big software,” which usually took up most of the floor space, almost half of the trade show was now dedicated to AI. To Lopes, the potential value of AI for cancer diagnosis and for cancer treatment was undeniably clear. 
“For clinicians, AI offers an opportunity to spend more time with a patient, to be more care-centric rather than just being the person in the darkroom who looks at a radiograph and tries to figure out if there’s a disease or not,” says Lopes, Director of New Technology Assessment for Elekta, a global innovator of precision radiation therapy devices. “But when you recognize that a computer can eventually do that better at a pixel scale, the physician starts to question, what is my real value in this operation?” Today, the growing openness of healthcare professionals worldwide to asking that question and embracing the opportunity of AI-driven cancer care is due in no small part to Elekta. Founded in 1972 by a Swedish neurosurgeon, the company gained international renown for its revolutionary Gamma Knife, used in non-invasive radiosurgery for brain disorders and cancer, and most recently for its groundbreaking Unity integrated MR and linac (linear accelerator) device. For much of the last decade, Elekta has been developing and commercializing ML-powered systems for radiology and radiation therapy. Recently, the Stockholm-based company even created a dedicated radiotherapy AI center in Amsterdam called the POP-AART lab. The company is focusing on harnessing the power of AI to provide more advanced and personalized radiation treatments that can be quickly adapted to accommodate any change in the patient during cancer treatments. At the same time, Elekta recently launched its “Access 2025” initiative, which aims to increase radiotherapy access by 20% worldwide, including in underserved regions. Elekta hopes that by integrating more intelligence into its systems it can help overcome common treatment bottlenecks such as shortages of clinician time, equipment and trained operators, and as a result, ease the strain on patients and healthcare providers. 
Along the way, Elekta has learned valuable lessons about AI and scaling, Lopes says, even as company expertise and practices continue to evolve. In these interview highlights, Lopes shares his experience and key learnings about moving to on-demand cloud infrastructure and services.

Wanted: Smarter collaboration and data sharing

“We’re a global organization, 4,700 employees in over 120 countries, with R&D centers spread across more than a dozen regional hubs. Each center might have a different priority for improving a particular product or business line. These disparate groups all do great work, but traditionally they each did it in a bit of isolation.

“As we considered how to ramp up the speed of our AI innovations, we recognized that a common scalable data infrastructure was key to increasing collaboration across teams. That meant understanding data pipelines and how to manage data in a secure and distributed fashion. We also had to understand the development and operational environment for machine learning and AI activities, and how to scale that.”

Costly on-premises servers, ‘small puddles of data’

“As a company, we have traditionally been very physics-based in our research in radiotherapy. Our data and research scientists were all very on-prem-centric for data management and compute. We invested in large servers through large capital purchases and did data preparation and massaging and other work on these local machines.

“AI has a voracious appetite for data, but because of privacy concerns, it’s a challenge to get access to large volumes of medical data and medical equipment data required to drive AI development. Luckily, we have very good, very precious partner research relationships around the world, and we employ different techniques to respect and maintain strict privacy requirements. But typically, these were small puddles of data being used to try to drive AI initiatives, which is not the ideal formula. 
“One thing we did early was establish a larger-scale pipeline of anonymized medical data that we could use to drive some of these activities. We didn’t want replication of this data lake across all our distributed global research centers. That would mean people would have different copies and different ways of managing, accessing and potentially even securing this data, which we wanted to keep consistent across the organization. Not to mention that we’d be paying for duplicate infrastructure for no reason. So, a very big part of the AI infrastructure puzzle for us was the warehousing and the management of data.”

Budgeting mind shift: Focus on cloud’s new capabilities

“As we delved more and more into ML and AI, we evaluated the shift from on-prem compute to cloud compute. You do a couple of back-of-the-envelope calculations first: Where are you regionally? What are you paying now? What type of GPUs are you using? As you’re starting this journey, you’re not quite sure what you’re going to do. You’re basing the decision on your current internal capacity, and what it would cost to replicate that in the cloud. Almost invariably, you end up thinking the cloud is more expensive.

“You need to take a step back and shift your perspective on the problem to realize that it’s only more expensive if I use [cloud] the way I use my on-prem capacity today. If instead you consider the things you can do in cloud that you can’t do onsite – like run parallel experiments and multiple scenarios at the same time or scale GPU capacity – the calculus is different. It really is a mind shift you have to make.

“As you think of growth, it becomes obvious that migrating to cloud infrastructure can be extremely advantageous. Like with any migration, you have a learning curve to becoming efficient and managing that infrastructure properly. We may have forgotten to ‘turn off the lights’ on capacity a couple of times, 
but you learn to automate much of the management as well.”

Aha moment: Leverage smart partners

“I mentioned the challenges of accessing medical data. But another part of the challenge is that often the data you need to access is a mix of types and standards, or consists of proprietary formats that can change over time. You want any infrastructure you build to have flexibility and growth capabilities to accommodate this.

“When we looked around, there was no off-the-shelf product for this, which was surprising and a big ‘aha moment’ for us. We quickly recognized this was not a core competence for us – you really need to work with trusted partners to build, design and scale out to the right level.

“We were fortunate to have a global partnership with Microsoft, who really helped us understand how best to create an infrastructure and design it for future scaling. One that would let us internally catalog data the right way, and allow our researchers to peruse and select the data they needed for developing AI-based solutions – all in a way that is consistent with the access speed and latency we were expecting, the distributed nature of our worldwide research teams and our security policies.”

Starting smart and small

“We started limited pilots around 2018 and 2019. Rather than betting the bank on a massive and ambitious project, we started small. We continued our current activities and way of working with the on-premises and non-scalable systems, setting aside a little bit of capacity to do limited experiments and pilots.

“Setting up a small Azure environment allowed us to create virtual compute as well, doing a redundant run of a smaller experiment and asking, ‘What was that experience?’ This meant getting faster, more frequent small wins instead of risking large-project fatigue with no short-term tangible benefits. These, in turn, provided the confidence to migrate more and more of our AI activities to the cloud. 
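The back-of-the-envelope calculus Lopes describes can be made concrete with a toy model: fixed on-prem capacity is paid for whether it is busy or idle, while elastic cloud capacity is paid per hour actually used. All prices and utilization figures below are invented for illustration:

```python
# Toy cloud-versus-on-prem cost comparison. Every number here is an
# invented assumption, not a real price from any provider.

def onprem_monthly_cost(num_gpus: int, amortized_gpu_month: float) -> float:
    # On-prem: you pay for every GPU whether it is busy or idle.
    return num_gpus * amortized_gpu_month

def cloud_monthly_cost(gpu_hours_used: float, price_per_gpu_hour: float) -> float:
    # Cloud: you pay only for the hours actually consumed.
    return gpu_hours_used * price_per_gpu_hour

# 8 GPUs on-prem at an assumed $1,000/month each, but only 20% utilized:
fixed = onprem_monthly_cost(8, 1000.0)             # 8000.0
# The same useful work (20% of 8 GPUs * 730 h/month = 1168 GPU-hours)
# bought elastically at an assumed $3/GPU-hour:
elastic = cloud_monthly_cost(0.20 * 8 * 730, 3.0)  # 3504.0
print(fixed, elastic)
```

The point is not the numbers but the comparison: at low utilization the elastic bill undercuts the fixed one, and it also buys capabilities (parallel experiments, burst GPU capacity) a fixed fleet cannot offer.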
“With COVID and everybody holed up at home, the distributed virtual Azure environment was very practical, with a level of facility and convenience we didn’t have before.”

Learning new ways and discipline

“We recognized that we needed to learn as an organization before really jumping into [cloud-based AI]. Learning from it, too, so that parts of the team were getting exposed, understanding how to operate in the environment, how to use and properly leverage the virtual compute capacity. There’s operational and knowledge inertia to overcome. People say: ‘There’s my server. That’s my data.’ You have to bring them over to a new way of doing things.

“Now, we’re in a different space, where the opportunity is much bigger. You can dream bigger in terms of the scale of the experiment that you might want to do. You might be tempted to try to run a really large learning on a massive dataset or a more complex model. But you must have a bit of discipline, walking before you run.”

Help wanted: Developing new products, not models

“Rather than going out and recruiting boatloads of AI experts and throwing them in there and hoping for the best, we recognized we needed a mix of people with domain knowledge of the physics and radiotherapy.

“We did a few experiments where we brought in some real hardcore AI people. Great people, but they’re interested in developing the next great model architecture, while we’re more interested in applying solid architectures to create products to treat patients. For us, the application is more important than the novelty of the tech. At least for now, we feel there has to be organic growth, rather than trying to throw an entire new organization or a new research group at the problem. But it’s a challenge; we’re still in the process.”

IT as trusted partner and guide

“I’m in the R&D department, but we interact with the IT department very closely. We interact with the sales and commercial side very closely, too. 
Our Head of Cloud, Adam Moore, and I have more and more discussions about sharing learnings across corporate initiatives, including data management and strategy and cloud. Those are strands of the DNA of the company that are going to be intertwined as we move forward, that will stay in lockstep.

“If you’re lucky, IT is a red thread that can help through all of that. But that’s not always the case for many companies or entire IT departments. There’s a competence buildup that needs to happen within an organization, and a maturity level within IT. They’re the sherpa on this journey that hopefully helps you get to the summit. The better the partner, the better the experience.”

Toward more universal treatment and ‘adaptation’

“More centers and physicians are embracing the belief that (AI-assisted radiology) can have a positive impact and allow them to get closer to what’s most important — providing the best, personalized care to patients that’s more than just cookie-cutter care because there’s no time to do anything but that.

“AI is not only helping with the productivity bottleneck, but with what we call adaptation. Even while the patient is on the table, about to be treated, we can take clinical decisions and dynamically modify things on the fly with really fast algorithms. It can make these hour- or day-long processes happen in minutes. It’s beyond personalization, and it’s really exciting.”

Bottom line: Focus early on data pipelines and infrastructure. Start small, with smart partners and close partnership between IT and the groups developing AI. Don’t get sidetracked by “apples-to-oranges” cost comparisons between cloud and on-premises environments. Instead, expand your vision to include new capabilities like on-demand parallel processing and HPC. And be prepared to patiently overcome organizational inertia and build up new competencies and attitudes toward data sharing. 
"
13,591
2023
"For Wells Fargo, solving for AI at scale is an iterative process | VentureBeat"
"https://venturebeat.com/ai/for-wells-fargo-solving-for-ai-at-scale-is-an-iterative-process"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages For Wells Fargo, solving for AI at scale is an iterative process Share on Facebook Share on X Share on LinkedIn This article is part of a VB special issue. Read the full series here: The quest for Nirvana: Applying AI at scale. Wells Fargo, the 170-year-old multinational financial services giant, knows what it needs to do to scale AI across the organization. But that, according to Chintan Mehta, EVP and group CIO, is really just the beginning of the journey. Implementing AI at scale is about artificial intelligence becoming a core component of any go-to-market product, he explained. “It means there is no notion of a bolt-on AI,” he said, “which by definition is not a behavior of AI at scale because in that context AI is not fundamental to the proposition you are building.” Wells Fargo is not quite there yet, he emphasized. But Mehta believes that the company is at a point where it knows what it needs to do. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
“We know how to go about it,” he explained. “But it’s a function of time and capital, working through it and getting it to the point where it’s embedded, transparent and available.”

The three key elements to solving for AI at scale

These days, AI is no longer just about developing AI models. Instead, in order to scale AI across the enterprise, companies have to solve for three independent elements that have to converge. “There are these three chunks, then you can iterate independently on each of them so that you can get better overall,” Mehta said. The first is enterprise data strategy. That is, the signals that the company needs to use, whether for visualization or for model development. “Data needs to be thought of as a product by itself,” he said, “[as] data products that data science teams can consume.” Next are the AI capabilities themselves, whether it’s large language models, neural networks or statistical models. The third is the independent verification and mitigation structure that operates organizationally, operationally and technically. This element allows organizations to create guardrails around how AI goes to market and how it is used for or on behalf of customers. Wells Fargo has put all three elements into place, said Mehta. Now it’s about powering them at scale. “We’re trying to expand them and make them faster. The faster it becomes, the more effective it is in bringing things to market,” he said.

Two examples of scaling AI at Wells Fargo

It’s no surprise that processing documents is an important internal use case at Wells Fargo. So analyzing documents and streamlining processes was a prime candidate for implementing AI at scale. “You have to understand what the artifact uploaded is, whether it is the right artifact, what it represents, what is the data underneath it, and so on,” said Mehta. Wells Fargo built a capability for document processing which creates a semantic understanding of a document and provides a summary. 
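A document-processing capability of this kind is typically a classify-then-summarize pipeline. The toy version below uses keyword counts in place of a real classifier and a first-sentences summary in place of a real summarizer; every name and label in it is illustrative, and none of it describes Wells Fargo’s actual implementation:

```python
# Illustrative classify-then-summarize sketch. The keyword table, labels
# and summarization strategy are stand-ins for real trained models.
import re
from collections import Counter

KEYWORDS = {"loan": "loan_application", "deposit": "deposit_slip"}

def classify(text: str) -> str:
    """Guess the document type from keyword counts (a stand-in for a model)."""
    counts = Counter(w for w in re.findall(r"[a-z]+", text.lower()) if w in KEYWORDS)
    return KEYWORDS[counts.most_common(1)[0][0]] if counts else "unknown"

def summarize(text: str, max_sentences: int = 2) -> str:
    """Naive extractive summary: keep the first few sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:max_sentences])

doc = "Loan application received. The applicant requests a 30-year loan. Income verified."
print(classify(doc))   # loan_application
print(summarize(doc))  # Loan application received. The applicant requests a 30-year loan.
```

A production system would replace both functions with trained models, but the workflow shape (identify the artifact, then condense it for a human reviewer) matches the augmentation pattern Mehta describes.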
“It’s not 100% automated, but we can augment human beings quite a bit,” said Mehta. A key customer-facing use case for scaling AI is Wells Fargo’s soon-to-launch virtual assistant, Fargo. “We started with the experiential requirements and then said, ‘What will be the best solution for the natural language ask?’” said Mehta. “Should it be a chat? Voice? Should we use a recurrent neural network? How do we manage privacy? Tokenization?” Mehta’s teams built the scaffolding for Fargo up front, testing it with a small neural network. Then, to get a deeper language understanding, they used a Google large language model. “This is going to be an ongoing thing where you keep iterating,” Mehta explained. “It’s not a one-directional flow; sometimes you find you are a few steps back because an approach doesn’t work. But that’s the journey.” There’s no magic to scaling AI There may be hype around scaling AI, but there’s no magic, Mehta emphasized. “Everybody thinks that if they just put AI in there, it will do something magical,” he said. “But everybody learns there is no box which says ‘insert magic here.’ You have to work through what you’re actually trying to do and define the problem, and then think of AI in the context of solving that problem.” Wells Fargo, he added, doesn’t have the luxury of simply building models even if they don’t solve problems. Two or three years ago it took a median of 65 weeks to develop an AI model and take it to market, and even now it still takes roughly 21 weeks. “We don’t have unlimited resources to deploy, so you’re trying to constantly fight the efficiency barrier — there’s a lot of interest, a lot of appetite, but at the same time you want to keep AI efforts safe and efficient.” That means, he said, you “have to pick the right problems to deal with in terms of where you deploy AI.” Wells Fargo’s 2023 priorities for AI at scale Mehta said there are three things he is focused on when it comes to implementing AI at scale in 2023. 
“These are the ones I’m focused on in an immediate, practical way, because I think those will be force amplifiers for what we can do at scale later on,” he said. The first is creating a foundational model library. “Some of these models are going to become impractical for any single group or a single entity to build out, because they become very, very large and very complex very quickly,” he said. “So our first tactical goal for this year is to build a foundational library of these kinds of models which can form the baseline for the next specialized set of models people want to build.” Next, Mehta said, Wells Fargo is trying to automate the entire AI pipeline, so “more citizen data scientists can also build on top of the models, instead of somebody who has a Ph.D. and has a Python library on their machine and knows Python.” Finally, it’s important to embed explainability into every AI step. “If you can explain along the way instead of at the end, it speeds up a lot of the other conversations later,” he said. The future of AI at scale In a few years, we may not even be talking about AI “at scale,” because it will be everywhere, Mehta predicted. “We will be hard-pressed to say, ‘Is there anything we use today which doesn’t have aspects of AI built into it?’” he said. “It’s going to be less about scale, and more about whether you know something is happening with AI at that exact moment and if it’s done safely.” Wells Fargo, he added, will continue iterating on that journey. “We know the standards, we know the goals, we are very clear on how to do it,” he said. “Now it’s a function of making sure we work through all of it.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,592
2,023
"How synthetic data is boosting AI at scale | VentureBeat"
"https://venturebeat.com/ai/synthetic-data-to-boost-ai-at-scale"
"How synthetic data is boosting AI at scale This article is part of a VB special issue. Read the full series here: The quest for Nirvana: Applying AI at scale. Artificial intelligence (AI) relies heavily on large, diverse and meticulously labeled datasets to train machine learning (ML) algorithms. In the modern era, data has become the lifeblood of AI, and obtaining the right data is considered the most critical and challenging aspect of developing robust AI systems. However, collecting and labeling vast datasets with millions of elements sourced from the real world is time-consuming and expensive. As a result, those training ML models have started to rely heavily on synthetic data, or data that is artificially generated rather than produced by real-world events. Synthetic data has soared in popularity in recent years, presenting a viable solution to the data-quality problem and offering the potential to reshape large-scale ML deployments.
According to a Gartner study, synthetic data is expected to account for 60% of all data used in the development of AI by 2024. Turbocharging AI/ML with synthetic data The concept is elegantly simple. It allows practitioners to generate the data they need digitally, on demand, and in any desired volume, tailored to their precise specifications. Researchers can now even turn to synthetic datasets that were created using 3D models of scenes, objects and humans to produce action clips quickly — without encountering copyright issues or ethical concerns associated with real data. “Using synthetic data for machine learning training allows companies to build models for scenarios that were previously out of reach due to the needed data being private, too low-quality or simply not existing at all,” Forrester analyst Rowan Curran told VentureBeat. “Creating synthetic datasets uses techniques like generative adversarial networks (GANs) to take a dataset of a few thousand individuals and transform it into a dataset that performs the same when training the ML model — but doesn’t have any of the personally identifiable information (PII) of the original dataset.” Proponents point to a variety of benefits to choosing synthetic datasets. For one thing, using synthetic data can significantly reduce the cost of generating training data. It can also address privacy concerns related to potentially sensitive data obtained from the real world. Synthetic data can help mitigate bias, as compared to real data, which may not accurately represent the full range of information about the real world. Greater diversity may also be accounted for in synthetic datasets by incorporating rare cases that represent realistic possibilities but are difficult to obtain from genuine data.
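The GAN approach Curran describes is too heavy for a short snippet, but the core idea — fit a generative model to real records, then sample fresh rows that preserve aggregate statistics without copying any individual — can be sketched with a per-column Gaussian sampler. This is illustrative only: real synthesizers such as GANs or copulas also capture cross-column correlations, which this toy deliberately ignores.

```python
import random
import statistics

def fit(rows):
    """Fit a per-column mean/stdev model on real numeric records."""
    cols = list(zip(*rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in cols]

def sample(params, n, seed=0):
    """Draw n synthetic rows; none is copied from the original data."""
    rng = random.Random(seed)
    return [[rng.gauss(mu, sd) for mu, sd in params] for _ in range(n)]

# Toy "real" records: (height_cm, weight_kg)
real = [[170.0, 65.0], [160.0, 55.0], [180.0, 80.0], [175.0, 72.0]]
synthetic = sample(fit(real), 1000)
```

The synthetic rows match the real columns' means and spreads, yet no synthetic row reproduces an original record, which is the privacy property the article's proponents point to.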
Curran explained that synthetic datasets are used to create data for models in situations where the needed data does not exist because the data collection scenario occurs too infrequently. “A healthcare provider wanted to do a better job catching early-stage lung cancer, but little imagery data was available. So to build their model, they created a synthetic dataset that used healthy lung imagery combined with early-stage tumors to build a new training dataset that would function as if it were the same data collected from the real world,” said Curran. He said synthetic data is also finding traction in other secure industries, such as financial services. These companies have significant restrictions on how they can use and move their data, particularly to the cloud. Synthetic data has the potential to enhance software development, accelerate research and development, facilitate the training of ML models, enable organizations to gain a deeper understanding of their internal data and products, and improve business processes. These benefits, in turn, can promote the growth of AI on a large scale. How does it function in the real world of AI? But the question remains: Can artificially generated data be as effective as real data? How well does a model trained with synthetic data perform when classifying real actions? Yashar Behzadi, CEO and founder of synthetic data platform Synthesis AI, says that companies often use synthetic and real-world data in conjunction to train their models and ensure they are optimized for the best performance. “Synthetic data is often used to augment and extend real-world data, ensuring more robust and performant models,” he told VentureBeat. For example, he said Synthesis AI is working with a handful of tier 1 auto manufacturers and software companies.
“We keep hearing that the available training data is either too low-res or there isn’t enough of it — and they don’t have their customers’ consent to train computer vision models with it either way,” he said. “Synthetic data solves all three challenges — quality, quantity and privacy.” Companies also turn to synthetic data when they cannot obtain certain annotations from human labelers, such as depth maps, surface normals, 3D landmarks, detailed segmentation maps and material properties, he explained. “Bias in AI models is well documented, and related to incomplete training data that lack the necessary diversity related to ethnicity, skin tone or other demographics,” he said. “As a result, AI bias disproportionately impacts underrepresented demographics and leads to less inclusive applications and products.” Using synthetic data, he continued, companies can explicitly define the training dataset to minimize bias and ensure more inclusive, human-centered models without breaching consumer privacy. Replacing even a small portion of real-world training data with synthetic data makes it possible to accelerate and streamline the training and deployment of AI models of all scales. At IBM, for instance, researchers have used the ThreeDWorld simulator and its corresponding Task2Sim platform to generate simulated images of realistic scenes and objects, which can be used to pretrain image classifiers. These synthetic images reduce the amount of genuine training data required, and they have been found to be equally effective in pretraining models for tasks such as detecting cancer in medical scans. In addition, supplementing authentic data with artificially generated data can mitigate the risk that a model pretrained on raw data scraped from the internet will exhibit racist or sexist tendencies. Custom-made artificial data is pre-vetted to minimize the presence of biases, reducing the risk of such unwanted behaviors in models.
“Doing as much as we can with synthetic data before we start using real-world data has the potential to clean up that Wild West mode we’re in,” said David Cox, codirector of the MIT-IBM Watson AI Lab and head of exploratory AI research. Synthetic data and model quality Alp Kucukelbir, cofounder and chief scientist of factory optimization platform Fero Labs and an adjunct professor at Columbia University, said that although synthetic data can complement real-world data for training AI models, it comes with a big caveat: You need to know what gap you’re plugging in your real-world dataset. “Say you are using AI to decarbonize a steel mill. You want to use AI to unravel and expose the specific operation of that mill (e.g., precisely how machines at a specific factory work together) and not to rediscover the basic metallurgy you can find in a textbook. In this case, to use synthetic data, you would have to simulate the precise operation of a steel mill beyond our knowledge of textbook metallurgy,” explained Kucukelbir. “If you had such a simulator, you wouldn’t need AI to begin with.” Machine learning is good at interpolating within its training data but weaker at extrapolating beyond it. However, artificially generated data allows researchers and practitioners to provide “corner-case” data to an algorithm, and could eventually accelerate R&D efforts, added Julian Sanchez, director of emerging technologies at John Deere. “We have tried synthetic data in an experimental fashion at John Deere, and it shows some promise. The general set of examples involve agriculture, where you are likely to have a very low occurrence rate of specific corner cases,” Sanchez told VentureBeat. “Synthetic data provides AI/ML algorithms with the required reference points through data and gives researchers a chance to understand how the trained [model] could handle the different use cases.
It will be an important aspect of how AI/ML scales.” Likewise, Sebastian Thrun, ex-Google VP and current chairman and cofounder of online learning platform Udacity, says that this kind of data is usually unrealistic along some dimensions. Simulations through synthetic data are a quick and safe way to accelerate learning, but they typically have known shortcomings. “This is specifically the case for data in perception (camera images, speech, etc.). But the right strategy is usually to combine real-world data with synthetic data,” Thrun told VentureBeat. “During my time at Google’s self-driving car project Waymo, we used a combination of both. Synthetic data will play a big role in situations we never want to experience in the real world.” Challenges of using synthetic data for AI Michael Rinehart, VP of AI at multicloud data security platform Securiti AI, says that there’s a tradeoff between synthetic data’s usefulness and the privacy it affords. “Finding the appropriate tradeoff is a challenge because it is company-dependent, much like any risk-reward assessment,” said Rinehart. “This challenge is further compounded by the fact that quantitative estimates of privacy are imperfect, and more privacy may actually be afforded by the synthetic dataset than the estimate suggests.” He explained that consequently, looser controls or processes might be applied to this kind of data. For instance, companies may skip known synthetic data files during sensitive data scans, losing visibility into their proliferation. Data science teams may even train large models on them, ones capable of memorizing and regenerating the synthetic data, and then disseminate them. “If synthetic data or any of its derivatives are meant to be shared or exposed, companies should ensure it protects the privacy of any customers it represents by, for example, leveraging differential privacy with it,” advised Rinehart.
“High-quality differentially-private synthetic data ensures that teams can run experiments with realistic data that does not expose sensitive information.” Fernando Lucini, global lead for data science and machine learning engineering at Accenture, adds that generating synthetic data is a highly complex process, requiring people with specialized skills and truly advanced knowledge of AI. “A company needs very specific and sophisticated frameworks and metrics to validate that it created what it intended,” he explained. What’s next for synthetic data in AI? Lucini believes synthetic data is a boon for researchers and will soon become a standard tool in every organization’s tech stack for scaling their AI/ML models’ prowess. “Utilizing synthetic data provides not only an opportunity to work on more interesting problems for researchers and accelerate solutions, but also has the potential to develop far more innovative algorithms that may unlock new use cases we hadn’t previously thought possible,” Lucini added. “I expect synthetic data to become a part of every machine learning, AI and data science workflow and thereby of any company’s data solution.” For his part, Synthesis AI’s Behzadi predicts that the generative AI boom has been and will continue to be a huge catalyst for synthetic data. “There has been explosive growth in just the past few months, and pairing generative AI with synthetic data will only further adoption,” he said. Coupling generative AI with visual effects pipelines, the diversity and quality of synthetic data will drastically improve, he said. “This will further drive the rapid adoption of synthetic data across industries. In the coming years, every computer vision team will leverage synthetic data.”
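Rinehart's advice to pair synthetic data with differential privacy can be made concrete with the Laplace mechanism, the textbook way to bound how much any single record can shift a released statistic. A minimal sketch follows; the epsilon value and the age query are illustrative choices, not drawn from the article:

```python
import math
import random

def dp_count(values, predicate, epsilon, rng):
    """Differentially private count. A counting query has sensitivity 1
    (adding or removing one record changes the count by at most 1), so
    Laplace noise with scale 1/epsilon gives epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    # Inverse-CDF sampling of a Laplace(0, 1/epsilon) variate.
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

rng = random.Random(7)
ages = [23, 35, 41, 29, 52, 63, 37, 44]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
```

A smaller epsilon means more noise and stronger privacy, which is exactly the usefulness-versus-privacy tradeoff Rinehart describes.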
"
13,593
2,023
"The power of MLOps to scale AI across the enterprise | VentureBeat"
"https://venturebeat.com/ai/the-power-of-mlops-to-scale-ai-across-the-enterprise"
"The power of MLOps to scale AI across the enterprise This article is part of a VB special issue. Read the full series here: The quest for Nirvana: Applying AI at scale. To say that it’s challenging to achieve AI at scale across the enterprise would be an understatement. An estimated 54% to 90% of machine learning (ML) models don’t make it into production from initial pilots for reasons ranging from data and algorithm issues, to defining the business case, to getting executive buy-in, to change-management challenges. In fact, promoting an ML model into production is a significant accomplishment for even the most advanced enterprise that’s staffed with ML and artificial intelligence (AI) specialists and data scientists. Enterprise DevOps and IT teams have tried modifying legacy IT workflows and tools to increase the odds that a model will be promoted into production, but have met limited success.
One of the primary challenges is that ML developers need new process workflows and tools that better fit their iterative approach to coding models, testing and relaunching them. The power of MLOps That’s where MLOps comes in: The strategy emerged as a set of best practices less than a decade ago to address one of the primary roadblocks preventing the enterprise from putting AI into action — the transition from development and training to production environments. Gartner defines MLOps as a comprehensive process that “aims to streamline the end-to-end development, testing, validation, deployment, operationalization and instantiation of ML models. It supports the release, activation, monitoring, experiment and performance tracking, management, reuse, update, maintenance, version control, risk and compliance management, and governance of ML models.” Managing models right to gain scale Verta AI cofounder and CEO Manasi Vartak, an MIT graduate who led mechanical engineering undergraduates at MIT CSAIL to build ModelDB, cofounded her company to simplify AI and ML model delivery across enterprises at scale. Her dissertation, Infrastructure for model management and model diagnosis, proposes ModelDB, a system to track ML-based workflows’ provenance and performance. “While the tools to develop production-ready code are well-developed, scalable and robust, the tools and processes to develop ML models are nascent and brittle,” she said. “Between the difficulty of managing model versions, rewriting research models for production and streamlining data ingestion, the development and deployment of production-ready models is a massive battle for small and large companies alike.” Model management systems are core to getting MLOps up and running at scale in enterprises, she explained, increasing the probability that modeling efforts succeed.
Iterations of models can easily get lost, and it’s surprising how many enterprises don’t do model versioning despite having large teams of AI and ML specialists and data scientists on staff. Getting a scalable model management system in place is core to scaling AI across an enterprise. AI and ML model developers and data scientists tell VentureBeat that the potential to achieve DevOps-level yields from MLOps is there; the challenge is iterating models and managing them more efficiently, capitalizing on the lessons learned from each iteration. VentureBeat is seeing strong demand on the part of enterprises experimenting with MLOps. That observation is supported by IDC’s prediction that 60% of enterprises will have operationalized their ML workflows using MLOps by 2024. And Deloitte predicts that the market for MLOps solutions will grow from $350 million in 2019 to $4 billion by 2025. Increasing the power of MLOps Supporting MLOps development with new tools and workflows is essential for scaling models across an enterprise and gaining business value from them. For one thing, improving model management version control is crucial to enterprise growth. MLOps teams need model management systems that integrate with, or scale out to cover, model staging, packaging, deployment and operation in production. What’s needed are platforms that can provide extensibility across ML models’ life cycles at scale. Also, organizations need a more consistent operationalization process for models. How an MLOps team and business unit work together to operationalize a model varies by use case and team, reducing how many models an organization can promote into production. The lack of consistency drives MLOps teams to adopt a more standardized approach to MLOps that capitalizes on continuous integration and delivery (CI/CD). The goal is to gain greater visibility across the life cycle of every ML model by having a more thorough, consistent operationalization process.
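The version-and-stage bookkeeping described above can be sketched as a minimal in-memory model registry: every iteration is recorded so versions are never lost, and exactly one version per model holds the production stage. Class and method names here are illustrative, not ModelDB's or any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class ModelVersion:
    version: int
    artifact: bytes
    metrics: dict
    stage: str = "staging"  # staging -> production -> archived

class ModelRegistry:
    """Records every iteration of every model so versions are never lost."""
    def __init__(self):
        self._models = {}

    def register(self, name, artifact, metrics):
        versions = self._models.setdefault(name, [])
        versions.append(ModelVersion(len(versions) + 1, artifact, metrics))
        return versions[-1].version

    def promote(self, name, version):
        """Move one version to production, archiving the previous one."""
        for v in self._models[name]:
            if v.stage == "production":
                v.stage = "archived"
        self._models[name][version - 1].stage = "production"

    def production(self, name):
        return next(v for v in self._models[name] if v.stage == "production")

registry = ModelRegistry()
registry.register("churn", b"weights-v1", {"auc": 0.81})
registry.register("churn", b"weights-v2", {"auc": 0.86})
registry.promote("churn", 2)
```

Even this toy shows why a registry pays off: promoting version 2 automatically archives whatever was in production, so the life cycle of every version stays visible.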
Finally, enterprises need to automate model maintenance to increase yield rates. The more automated model maintenance can become, the more efficient the entire MLOps process will be, and there will be higher probability that a model will make it into production. MLOps platform and data management vendors need to accelerate their persona-based support for a wider variety of roles to provide customers with a more effective management and governance framework. MLOps vendors include public cloud-platform providers, ML platforms and data management vendors. Public cloud providers AWS, Google Cloud and Microsoft Azure all provide MLOps platform support. DataRobot, Dataiku, Iguazio, Cloudera and Databricks are leading vendors competing in the data management market. How LeadCrunch uses ML modeling to drive more client leads Cloud-based lead generation company LeadCrunch uses AI and a patented ML methodology to analyze B2B data to identify prospects with the highest probability of becoming high-value clients. However, ML model updates and revisions were slow, and the company needed a more efficient approach to regularly updating models to provide customers with better prospect recommendations. LeadCrunch’s data science team regularly updates and refines ML models, but with 10-plus submodels and an ever-evolving stack, implementation was slow. Deployment of new models only occurred a few times a year. It was also challenging to get an overview of experiments. Each model was managed differently, which was inefficient. Data scientists had difficulty gaining a holistic view of all the experiments being run. This lack of insight further slowed the development of new models. Deploying and maintaining models often required large amounts of time and effort from LeadCrunch’s engineering team. But LeadCrunch is a small company, and those hours often weren’t available. LeadCrunch evaluated a series of MLOps platforms while also seeing how they could streamline model management.
After an extensive search, they chose Verta AI to streamline every phase of ML model development, versioning, production and ongoing maintenance. Verta AI freed LeadCrunch’s data scientists up from tracking versioning and keeping so many models organized. This allowed data scientists to do more exploratory modeling. During the initial deployment, LeadCrunch also had 21 pain points that needed to be addressed, with Verta AI resolving 20 immediately following implementation. Most importantly, Verta AI increased model production speed by 5X and helped LeadCrunch achieve one deployment a month, improving from two a year. The powerful potential of MLOps The potential of MLOps to deliver models at the scale and the speed of DevOps is the main motivator for enterprises who continue to invest in this process. Improving model yield rates starts with an improved model management system that can “learn” from each retraining of a model. There needs to be greater standardization of the operationalization process, and the CI/CD model needs to be applied not as a constraint, but as a support framework for MLOps to achieve its potential."
13,594
2,023
"How the quest for AI at scale is gaining momentum in the enterprise | VentureBeat"
"https://venturebeat.com/ai/the-quest-for-nirvana-applying-ai-at-scale"
"How the quest for AI at scale is gaining momentum in the enterprise This article is part of a VB special issue. Read the full series here: The quest for Nirvana: Applying AI at scale. Enterprise companies have experimented with artificial intelligence (AI) for years — a pilot here, a use case there. But company leaders have long dreamed of going bigger, better and faster when it comes to AI. That is, applying AI at scale. The goals of this quest may vary. Maybe the hope is to boost customer engagement, improve operational efficiencies and unify AI and data workloads. Perhaps the goal is higher growth, more revenue streams and real-time insights. But the quest for AI Nirvana has never been just about AI. It’s about going beyond harnessing it in specific applications to implementing it at scale, generating value across the organization.
The trend toward AI at scale has gained significant momentum over the past year. Last July, for example, Gartner research analyst Whit Andrews told VentureBeat that the “colossal” AI trend underlying all other AI trends today is the increased scale of artificial intelligence in organizations. “More and more are entering an era where AI is an aspect of every new project,” he said. That’s because technology tools are better and cheaper, the talent with the right AI skills exists, and it’s easier to get access to the right data, he explained. According to a January article from Boston Consulting Group, leaders in scaling and generating value from AI do three things better than other companies: They prioritize the highest-impact use cases and scale them quickly to maximize value; they make data and technology accessible across the organization, avoiding siloed and incompatible tech stacks that impede scaling; and they recognize the importance of aligning leadership and the employees who build and use AI. But the article also maintained that even though scaling use cases is key to generating and sustaining value from AI, most companies do not yet take advantage of the full potential of this approach. In this special issue from VentureBeat, we’ll be examining the opportunities and the challenges of applying AI at scale and how organizations can get closer to AI Nirvana. It includes a look at how some enterprises are harnessing the power of MLOps to scale AI across the organization, and how experts say organizations can scale AI responsibly. We also take a deep dive into how companies are using synthetic data to boost their efforts to implement AI at scale. Finally, this issue highlights how several end-user companies were able to launch AI at scale by implementing technology, processes, governance and strategy across the organization. What does it really mean to apply AI at scale? 
Arsalan Tavakoli, SVP of field engineering and a cofounder of data lakehouse platform Databricks, told VentureBeat that applying AI at scale is all about whether AI has become essential to all the company’s business lines. “It’s whether AI is core to helping you drive new customer experience or product development or operational efficiency,” he said — “[whether] it has become an intrinsic part of your organization’s ability to transform.” Many Databricks clients, he pointed out, are doing experiments with AI but have no idea how to scale up. Others are farther along, with models in production, but they realize it’s not efficient. Having the right data with the right technology powering the right models is also essential, said Justin Hotard, executive vice president and general manager for HPE’s HPC and AI business group. “We’re seeing a much broader interest in AI at scale, not just because of LLMs and generative AI, but because there’s now this recognition of the power and the potential of what you can do with your data if you build the right models,” he said. Kjell Carlsson, head of data science strategy and evangelism at MLOps platform Domino Data Lab, agrees that figuring out how to make use of more data for ever larger models is certainly part of the AI-at-scale conversation. However, he added that most of the business value comes not from embedding models into applications in individual parts of the business, but from doing that in other parts of the organization. “You’re going to need to figure out how to do both of those things,” he said. Where companies are now The good news is that organizations are maturing in their efforts to implement AI at scale, said Carlsson. The question is, how much and how fast are companies maturing? 
The best indicator of AI maturity, he suggested, is the increasing prevalence of chief data analytics officers and other C-suite roles that have an explicit mandate to implement data science and machine learning in their organization. In addition, these executives have control over the data assets that you need in order to be able to execute. “I think previously there was this massive lack of leadership within the organization, [leadership] that actually was able to take an active role in driving AI-based transformation initiatives,” he said. The rise of ChatGPT and other generative AI solutions has certainly given companies a kick in the pants over the past few months, added Tavakoli. “I don’t remember the last time I was in a meeting where somebody did not use the word ‘ChatGPT’ in some form or another.” A year ago, AI and ML were more aspirational for many organizations, he said. “They talked about it, somebody would jokingly say it was great, investors love to hear about it, it’s the way the world is going. But it was tomorrow’s challenge, not today’s.” Now, he said, leaders are worried about falling behind in an era of fierce competition. “Every CEO’s earnings call is about AI and ML embedded in the business,” he said. “And I’m not just talking about the Netflixes and the Ubers of the world. You’re talking about the Disneys of the world, the banks of the world, the T-Mobiles of the world, the Walmarts of the world — they’re all saying AI and ML is our key to our focus area.” However, as organizations get deeper into the work, they realize that the most difficult part of implementing AI and ML is not the algorithm. “It’s all the other stuff behind it,” he said, “like ‘How do I actually figure out how to get good quality data, especially in real time? 
How do I actually figure out how to develop it and get my data scientists productive, put it in production, iterate on it, and understand when I have data quality issues?’” One of the biggest challenges, Tavakoli added, is that many organizations felt liberated when they moved their data away from on-premises into the cloud, because they could get “best-of-breed” solutions for everything. But that has led to a “smorgasbord” of tools that all need to be connected. “What people are realizing is they don’t really have an AI problem, they have a customer-360 problem,” he said. “When they start trying to stitch it all together, it becomes incredibly hard — and then [there’s] dealing with the data and governance around it.” What companies need to do to scale AI HPE’s Hotard says that the first thing companies should do to begin applying AI at scale is consider the places where AI can have a positive impact on their business — and whether it is playing offense in the industry, or playing defense (if you don’t do it, someone else will). Next, if there isn’t already someone in place, appoint someone to lead AI efforts at a senior level. “That’s someone engaged with the C-suite and facilitating these discussions across the business,” he said. Finally, in terms of AI tools and capabilities, consider enterprise risk and auditability. “It’s going to become important to have the ability to go back and say how you got to the decision,” he said. The good news is, there are several verticals that have already made significant headway in their quest towards applying AI at scale, said Domino’s Carlsson. “We’ve already hit the tipping point in verticals like pharmaceuticals and insurance, and I would think banking and financial services are there already [too],” he said. Pain points are still everywhere, he cautioned, from the need to break down technology and data silos to a shortage of high-skilled talent. 
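The "good quality data" problem Tavakoli describes is often handled with a validation gate in front of training and serving pipelines: records are checked as they arrive, and anything that fails a rule is quarantined instead of silently polluting the model. A minimal sketch (the field names and rules below are hypothetical, not from any vendor's stack):

```python
# Hypothetical record schema and rules: gate incoming data before it reaches
# training or inference, quarantining anything that fails a check.
RULES = {
    "customer_id": lambda v: isinstance(v, str) and v != "",
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
}

def failed_fields(record):
    # names of fields that are missing or violate their rule
    return [f for f, ok in RULES.items() if f not in record or not ok(record[f])]

def split_batch(batch):
    clean, quarantined = [], []
    for rec in batch:
        (quarantined if failed_fields(rec) else clean).append(rec)
    return clean, quarantined

clean, bad = split_batch([
    {"customer_id": "c-1", "amount": 42.0},
    {"customer_id": "", "amount": -5},      # fails both rules
])
print(len(clean), len(bad))  # -> 1 1
```

In a real pipeline the quarantined records would feed a data-quality dashboard, which is exactly the kind of "other stuff behind the algorithm" the article refers to.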
But today, with the latest technology tools, increased compute and advanced data solutions, the quest for AI at scale can be tackled in powerful new ways that have never been available before. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,595
2,023
"How high-performance computing at the edge fuels AI, AR/VR, cybersecurity and more | VentureBeat"
"https://venturebeat.com/ai/how-high-performance-computing-at-the-edge-is-reshaping-data-center-intelligence"
"How high-performance computing at the edge fuels AI, AR/VR, cybersecurity and more This article is part of a VB special issue. Read the full series here: The future of the data center: Handling greater and greater demands. In recent years many data centers have transitioned from a centralized model to an edge computing approach. This brings computing closer to the point of use, reducing latency and improving performance. The impetus for this change has been a surge in AI-driven software and the widespread adoption of 5G technologies for more efficient data packet delivery. Enterprises, cloud service providers and telecom companies are investing heavily in digital edge data centers to optimize applications like streaming video, telemedicine and factory automation. 
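The latency point is easy to quantify with a back-of-the-envelope propagation model: signals in optical fiber travel at roughly two-thirds the speed of light, about 200 km per millisecond, so distance alone sets a floor on round-trip time. The numbers below are illustrative and ignore routing, queuing and processing delays:

```python
# Illustrative physics-only model: signals in optical fiber travel at roughly
# two-thirds the speed of light, i.e. about 200 km per millisecond one way.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    # propagation delay only; routing, queuing and processing add more on top
    return 2 * distance_km / FIBER_KM_PER_MS

# a distant centralized region vs. a nearby metro edge site
print(round_trip_ms(1500))  # -> 15.0
print(round_trip_ms(50))    # -> 0.5
```

Even before any real-world overhead, moving the compute from a region 1,500 km away to a metro edge site cuts the propagation floor by an order of magnitude, which is why latency-sensitive applications gravitate to the edge.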
Additionally, edge data centers facilitate implementation of advanced technologies such as augmented and virtual reality (AR/VR) and autonomous vehicles. This paradigm shift entails a redefinition of data centers and a complete overhaul of their architecture. Shift to the edge, gain an edge To gain a competitive edge in today’s fast-paced and fiercely competitive economy, organizations are combining edge computing with infrastructure modernization and other IT innovations to meet the demands of the modern digital age. Specifically, enterprises are shifting their data center strategy toward high-performance edge computing to meet AI’s compute requirements. “A high-performance edge infrastructure allows AI computation to happen near the data center at the edge of a network instead of a private data center,” Utpal Mangla, general manager of distributed edge, sovereign cloud and partnerships at IBM, told VentureBeat. “Since the data is being analyzed locally, there is potentially better availability and more real-time analytics. It also helps to reduce networking costs,” he added. Incorporating high-performance computing (HPC) capabilities at the edge helps organizations respond promptly to evolving market conditions and analyze data sources at greater speed and scale. HPC at the edge provides not only the necessary computational power and speed but also scalability. “When relying solely on traditional [centralized] data centers, a company risks data breaches and lagging [behind] the competition by not being able to support real-time applications,” Erik Pounds, senior director for enterprise AI at Nvidia, told VentureBeat. 
“AI applications are increasing at an incredible rate, which is why businesses are now quickly moving to high-performance edge computing at data centers that can support multiple AI applications running at any given time.” Leveraging 5G for high-performance edge networks and data centers Edge computing has gained much momentum thanks to its effectiveness at tackling the challenges of data-centric workloads. By retaining sensitive data on-premises, edge computing minimizes data flow to and from distant data centers, optimizing resource use and enhancing efficiency. Industry experts believe enterprises must carefully consider the data processing requirements in the data center cloud and at the network’s edge to successfully scale AI. This is particularly crucial for next-gen use cases such as generative AI, which demand substantial computing power. Gartner has predicted that by 2025, more than 50% of enterprise-managed data will originate and undergo processing outside the data center or cloud. This underscores the necessity of embracing high-performance edge computing, which uses a distributed architecture to process data and deliver services close to the users. Pounds said that shifting computing resources closer to data collection and consumption points is crucial for next-generation devices and applications. It reduces latency, particularly for real-time processing, and speeds operations for many devices, from wearables to autonomous machines. According to Pounds, 5G technology has enhanced network capabilities by reducing latency, increasing bandwidth and supporting software-defined networking. 5G’s expanded networking capacity facilitates the development of new real-time applications. “For consumers, this may mean immersive technologies such as virtual and augmented reality and greater automation through autonomous machines and robotics for enterprises. 
Low latency is critical to ensure [that] the desired outcomes of these applications, from user experience to functional safety, are accomplished,” Pounds told VentureBeat. “This is only possible by distributing high-performance computing to the edge and closer to 5G networks.” Likewise, IBM’s Mangla told VentureBeat that enterprises need to harness data where it is located to benefit from edge computing and 5G speeds. He cited IBM’s recent collaboration with Bharti Airtel as an illustration of this approach. In this partnership, IBM is working with the telecom company to offer secure edge cloud services to enterprises. “We designed the platform to enable large enterprises across multiple industries, including manufacturing and automotive, to accelerate innovative solutions securely at the edge while helping data centers reduce latency and transmission costs,” said Mangla. Deploying high-performance edge at data centers for AI/ML workload management Scalability is another critical consideration. Edge computing in data centers enables an increase in connected devices by reducing the strain on centralized infrastructure. This shift toward localized processing opens doors to better device connectivity. Today, retailers are using edge AI to improve customer experience and inventory management and to analyze customer behavior for optimal product placement in aisles. With edge computing’s rapid data processing, retailers can introduce voice ordering or product-search functionalities to enhance the shopping experience. Similarly, edge-based AI is being employed in industrial settings to ensure functional safety, including collision detection and avoidance for AMRs (autonomous mobile robots); robot-human interaction; and detecting breaches in barriers or incorrect use of personal protective equipment. “High-performance edge computing allows AI applications to be tested and experimented [with] within controlled environments without affecting the rest of an organization’s operations. 
Once success is achieved, it can be further scaled and distributed by combining the power of data center architectures,” said Nvidia’s Pounds. “This can lead to iterative improvements, which can add up to major improvements in efficiency and effectiveness.” According to Rosa Guntrip, senior manager of product marketing at Red Hat, a synergistic relationship between high-performance edge and the data center is essential for AI/ML deployments. “Once the models have been tuned and are ready for production, the AI-powered intelligent application can be deployed (and automatically updated as needed). The intelligent AI-powered application running at the edge can now help make real-time decisions based on the data it is processing,” Guntrip told VentureBeat. She explained that each edge device maintains a local model, enabling data analysis and decision-making at the point of data generation. Additionally, it can periodically share newly collected data from sensors, cameras and other sources with the global model at the core data center. This ensures that the AI/ML model remains accurate by incorporating the latest data updates as necessary. Hand in hand “We expect centralized AI and distributed AI to co-exist. You already likely have AI running inside your phone, and use centralized AI like ChatGPT. These will ultimately lead to the deployment of AI onto large networks. We call that distributed AI,” John Graham-Cumming, chief technology officer at Cloudflare, told VentureBeat. Graham-Cumming said that distributed AI allows for the rapid iteration of AI models, providing end users with the most up-to-date versions. Additionally, this approach enables low-power devices to access AI capabilities by connecting to a nearby “supercloud” data center for inference. “With large distributed networks, code and data can be brought close to the 5G network, close to the end user, ensuring there’s no penalty no matter where the end user is,” he said. 
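The loop Guntrip describes, with each edge device inferring and updating a local model and periodically synchronizing with a global model at the core, can be sketched in the spirit of federated averaging. The class and function names below are illustrative, not any vendor's API:

```python
# Illustrative names throughout: each edge site keeps a local copy of the model
# for real-time decisions, trains on locally collected data, and the core
# periodically averages the edge models back into the global one.

def average_weights(edge_weight_lists):
    # federated-averaging-style combine: element-wise mean across edge models
    n = len(edge_weight_lists)
    return [sum(ws) / n for ws in zip(*edge_weight_lists)]

class EdgeNode:
    def __init__(self, global_weights):
        self.weights = list(global_weights)  # local model copy

    def local_update(self, gradient, lr=0.1):
        # incremental training step on data from local sensors/cameras
        self.weights = [w - lr * g for w, g in zip(self.weights, gradient)]

global_weights = [0.0, 0.0]                  # model held at the core data center
edges = [EdgeNode(global_weights) for _ in range(3)]
for edge, grad in zip(edges, [[1.0, -1.0], [0.5, 0.5], [-0.5, 1.5]]):
    edge.local_update(grad)
# periodic sync: the core averages the edge models and redistributes the result
global_weights = average_weights([edge.weights for edge in edges])
print(global_weights)
```

The point of the pattern is that decisions happen at the point of data generation, while the core still accumulates what every site has learned on each sync cycle.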
“At Cloudflare, we now have 300 distributed computing system-based data centers worldwide linked through the edge that operate as a massive system capable of moving data and code around for the highest performance.” Madhav Srinath, CEO/CTO of cloud analytics firm NexusLeap, pointed out that AI practitioners consistently grapple with balancing accuracy and feasibility. Conducting additional model training directly at the edge eliminates unnecessary data transfer. Srinath explained that since AI models often have a substantial data center footprint, transmitting the entire model over traditional data center networks carries more risk than incrementally training a model already deployed at the edge. He said that if the end user’s device lacks the computational capabilities to perform inference, it must communicate with the centralized infrastructure to overcome its computational limitations. “Such methods inevitably introduce data transfer costs and amplify the risk of operational failure,” Srinath told VentureBeat. “However, if edge devices are inculcated within data center infrastructures to handle AI processes demanding high computational power during the inference stage, it significantly bolsters the overall application’s reliability.” The essence of security in edge data centers Maintaining data privacy and security is crucial for highly regulated industries such as financial services and healthcare, where handling sensitive customer data is a trust-based responsibility. Operating within an edge computing framework allows data to be processed closer to its origin, minimizing the need for data transmission across multiple channels and reducing the potential for attacks. “By keeping most of the data on the edge device, rather than transmitting it over the internet to servers, the attack surface for a threat actor to get that data is smaller,” Nancy Wang, GM of data protection at AWS, told VentureBeat. 
Wang said that integrating edge computing into data centers also facilitates compliance with data residency and sovereignty requirements. While public cloud services are progressively expanding their reach, specific countries and sub-national governments are enforcing regulations mandating that certain data remain within their borders. “If you do not have access to a data center or cloud provider within that country, then using an edge computing device would be a compliant way to do business in that country,” she added. However, regardless of the data’s location, comprehensive management of security postures remains essential to ensure data protection. Guntrip from Red Hat emphasized that although the potential attack surface may expand with increased locations offered through the edge, containing a breach to a single site is easier than addressing a problem through the entire architecture. She said that implementing an edge computing strategy enables companies to streamline operations and enhance security across data center environments. This can be achieved through automated provisioning and hardening, efficient management, predefined configurations, and orchestration. “The key is to make security for the full supply chain a priority from the outset — from the underlying operating software to the network connectivity, and edge applications across physical and virtual elements, from mobile endpoints to business applications hosted in the public cloud,” Guntrip told VentureBeat. “Measures like role-based access control for application users, and SSL (secure sockets layer) to encrypt the application as it is created can help keep data secure along its journey.” “Data center security can be enhanced through edge computing by reducing data breaches during transfer to a data center, as well as the ability to compute and analyze data offline,” added Nvidia’s Pounds. 
“With AI at the edge, data can now be pre-processed, and protected information can be obscured before it is ever seen by humans or sent to a data center. Additionally, real-time decision-making means real-time safety or security measures when a risk is detected.” IBM’s Mangla emphasized the importance of companies having a unified view across all their environments, including private cloud, public cloud, on-premises, and the edge, to fully gain the benefits of processing data on-premises at the edge. “With a single point of control, organizations can seamlessly manage complexities and differences within their infrastructure, ultimately making it easier to keep data secured and compliant with regulations,” he said. The future of data centers and high-performance computing Nvidia’s Pounds said that HPC at the edge will continue to offer definitive advantages to organizations. By bringing computing power closer to the point of data collection and consumption — the data center — it empowers real-time applications needed for today’s generative AI-driven software. “It comes with challenges, namely, ensuring data is accurate and consistent across edge devices. Investments in infrastructure can also be challenging, though the return in the form of greater efficiency typically justifies the investment,” said Pounds. “Orchestration is the key.” Red Hat’s Guntrip said that edge computing for data centers allows organizations to deploy latency-sensitive applications, ensuring a seamless user experience regardless of location. This approach also ensures compliance with regulatory requirements by keeping data within specific geographic boundaries as it is stored and processed. Guntrip further highlighted that edge computing helps organizations reduce the amount of data sent to the cloud for processing, resulting in improved site resilience and optimized resource utilization and costs. This proves beneficial in addressing specific use cases or problems as they arise. 
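The edge pre-processing Pounds describes can be sketched as a filter-and-redact step that obscures protected information before anything is forwarded to the core data center. The field names and confidence threshold below are hypothetical:

```python
import re

# Hypothetical schema: drop fields that must never leave the site, mask email
# addresses in free text, and forward only high-confidence detections upstream.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(record):
    out = dict(record)
    out.pop("ssn", None)  # never transmitted to the core
    if "notes" in out:
        out["notes"] = EMAIL.sub("[REDACTED]", out["notes"])
    return out

def forward_to_core(records, threshold=0.8):
    # low-confidence events stay local; everything forwarded is already redacted
    return [redact(r) for r in records if r.get("score", 0) >= threshold]

batch = [
    {"score": 0.95, "ssn": "000-00-0000", "notes": "contact jane@example.com"},
    {"score": 0.40, "notes": "low-confidence event, kept local"},
]
print(forward_to_core(batch))
```

Filtering and masking at the edge shrinks both the volume of data in transit and the window in which protected information is exposed, which is the security argument the article's sources are making.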
“Organizations now really need to think about how to manage and maintain these highly scaled-out deployments with their existing teams and established tools and processes, with a relentless approach to automation and security to accelerate adoption whilst reducing operational costs,” she said. "
13,596
2,023
"Unlocking the power of data centers: High-performance computing for everyone | VentureBeat"
"https://venturebeat.com/data-infrastructure/the-data-center-in-2023-high-performance-computing-for-everyone"
"Unlocking the power of data centers: High-performance computing for everyone This article is part of a VB special issue. Read the full series here: The future of the data center: Handling greater and greater demands. The launch of ChatGPT seven months ago was a watershed moment, shifting the world’s attention to generative artificial intelligence (AI). Built on OpenAI’s GPT series, the all-encompassing chatbot displayed an aptitude for dynamic conversations, offering individuals the opportunity to delve into the realm of machine intelligence and discover its potential in enhancing both their professional and personal queries. For enterprises, ChatGPT was far from the first instance of AI exposure. 
Before the generative tool came to the fore, companies across sectors were already using AI and machine learning (ML) for different aspects of their work — in the form of computer vision, recommendation systems, predictive analytics and a lot more. If anything, the OpenAI bot only made sure that they doubled down on these efforts to remain competitive. Today, enterprises are betting big on all sorts of next-gen workloads. However, this is no piece of cake. Take GPT-3, one of the models behind ChatGPT. The 175 billion-parameter technology needed about 3,640 petaflop-days in computing power for training. That is roughly one quadrillion calculations per second for a continuous period of 3,640 days. How can data centers meet these extensive computing demands? To handle the calculations demanded by next-gen workloads quickly and effectively, enterprises need massively parallel processing (MPP) in their data centers. MPP is a technique used in high-performance computing (HPC) that takes a complex task (like querying a complex database) and breaks it down into many smaller tasks, which then run on separate nodes working simultaneously. The results are combined to get the final output. Many data centers run on general-purpose processors, which can handle traditional workloads but are not fast enough to run several complex calculations, like multiplying large matrices and adding complex vectors, at the same time. This drawback is pushing enterprises to rethink their data centers and focus on specialized processors such as GPUs. “One of the most notable shifts is a trend towards offloading workloads to specialized hardware,” said Brandon Wardlaw, associate director for technology consulting at consulting firm Protiviti. 
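For scale, a petaflop-day is 10^15 operations per second sustained for 86,400 seconds, so GPT-3's 3,640 petaflop-days comes to roughly 3.1 × 10^23 floating-point operations in total. The split-run-combine pattern behind MPP can be shown in miniature with worker processes standing in for nodes; this is a toy sketch, not an HPC scheduler:

```python
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    # each "node" computes its shard of the work independently
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # break one big task into smaller ones, run them simultaneously, combine
    shards = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum_of_squares, shards))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # -> 332833500
```

Real MPP systems apply the same decompose-and-combine idea across thousands of nodes or GPU cores rather than a handful of local processes.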
“General-purpose compute nodes with heavy CPU capacity simply aren’t sufficient nor cost-effective for these next-gen workloads, and there’s been massive innovation among GPU OEMs and providers of more specialized FPGA (field programmable gate array) and ASIC (application-specific integrated circuit) hardware to support the highly parallel computation necessary to train models.” One of the companies driving the shift towards specialized hardware-accelerated data centers is Nvidia. Its data center platform provides a diverse range of GPU options — from the highest-performing H100 to the entry-level A30 — to meet the intense computing demands of modern workloads, from scientific computing and research to large language model (LLM) training, real-time analysis of machine efficiency and generation of legal material. In one case, Ecuador-based telecom company Telconet is using Nvidia’s DGX H100, a system of eight H100 GPUs combined, to build intelligent video analytics for safe cities and language services to support customers across Spanish dialects. Similarly, in Japan, these high-performance GPUs are being used by CyberAgent, an internet services company, to create smart digital ads and celebrity avatars. Mitsui & Co., one of Japan’s largest business conglomerates, is also leveraging DGX H100, using as many as 16 instances of this system (128 GPUs) to run high-resolution molecular dynamics simulations and generative AI models aimed at accelerating drug discovery. GPU-based acceleration comes with challenges While GPU-based acceleration meets workload demands across various sectors, it can’t be fully effective unless certain limitations are addressed. The problem is two-fold. First, implementing these add-on cards brings a major physical challenge as traditional one or two-rack unit “pizza-box” servers simply do not have the space to accommodate them. 
Second, this kind of dense computing hardware also results in high power draw (DGX H100 has a projected consumption of about 10.2 kW max) and thermal output, creating operational bottlenecks and increasing the total cost of ownership of the data center. To address this, Wardlaw suggested making compensatory accommodations elsewhere, like increasing compute density with high-core count x64 chipsets and migrating general-purpose workloads to these platforms. He also emphasized taking a more proactive approach to thermal management and optimizing data center layouts to increase cooling efficacy and efficiency. According to Steve Conner, vice president of sales and solutions engineering at Vantage Data Centers, the key to supporting HPC will be getting away from an air-cooled footprint. That’s because one has to control the temperatures on the CPUs and GPUs, and the only way to do that is to go to some sort of medium that has a much better heat exchange profile than air — e.g. liquid-assisted cooling. “What we’ve seen working with other platforms from an HPC standpoint [is that] the only way to get that maximum performance is to deliver that liquid to the heat sink, both on the GPU and CPU side of the house,” he told VentureBeat. Additional choices Along with specialized hardware, enterprises can consider emerging workarounds like software-based acceleration to support some next-gen workloads in their data centers. For instance, Texas-based ThirdAI offers a hash-based algorithmic engine that reduces computations and enables commodity x86 CPUs to train deep learning models while matching the performance of certain GPUs. This can not only be more affordable (depending on the workload) but also create fewer operational and physical roadblocks. There’s also the option of optimization, using techniques like knowledge distillation to reduce a model’s size and make it easier to support it. Such methods can result in some accuracy loss. 
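Knowledge distillation, mentioned above, trains a compact "student" model to mimic a larger "teacher". A minimal pure-Python sketch of the standard loss, blending temperature-softened teacher targets with hard labels (the temperature and weighting values are illustrative):

```python
import math

def softmax(logits, T=1.0):
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]  # stable: shift by max first
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    """Blend of soft-target KL divergence and hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL(teacher || student) on temperature-softened distributions, scaled by T^2
    soft = T * T * sum(
        pt * (math.log(pt) - math.log(ps)) for pt, ps in zip(p_teacher, p_student)
    )
    # ordinary cross-entropy against the ground-truth label
    hard = -math.log(softmax(student_logits)[label])
    return alpha * soft + (1 - alpha) * hard

# a student that matches the teacher exactly contributes zero soft loss
print(distillation_loss([2.0, 0.0, 0.0], [2.0, 0.0, 0.0], 0, alpha=1.0))  # -> 0.0
```

Training the smaller student against this loss is what trades a little accuracy for a much smaller serving footprint, the tradeoff the article discusses next.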
But Bars Juhasz, CTO and cofounder of content generator Undetectable AI, said the company’s distilled model was 65% faster than the base one, while retaining 90% of the accuracy — a worthwhile tradeoff. “Scaling the performance of models can be thought of in a similar manner to existing technology stacks, i.e. horizontally and vertically. Adding more GPUs would be akin to horizontal scaling, whereas optimizing the model and using accelerated software is akin to vertical scaling. The key to revamping performance is understanding the technical specifics of the model [workload] and choosing the right acceleration option to match,” Juhasz noted. According to Wardlaw, if the AI/ML workloads are an “always-on” operation for the business, owning and managing the specialized hardware locally in the data center would be cost-effective at scale. However, if these workloads aren’t an “always-on” operation and the business may not run these workloads at the scale or frequency required to justify the investment, it would be better to go for alternative acceleration methods or AI/ML-optimized hardware offered by a dedicated provider or cloud hyperscaler on an Infrastructure-as-a-Service (IaaS) model. "
13,597
2,023
"Why -- and how -- high-performance computing technology is coming to your data center | VentureBeat"
"https://venturebeat.com/data-infrastructure/why-and-how-high-performance-computing-technology-is-coming-to-your-data-center"
"Sponsored: Why — and how — high-performance computing technology is coming to your data center

Presented by AMD

Increasing pressure on performance has been a fact of life in the data center environment for several years now. Compute-intensive workloads have become more entrenched and more demanding for data centers to handle. They are built on complex, constantly evolving models that ask a lot of today's infrastructures. Servers, storage capacity, memory, bandwidth — all must be at peak performance, all the time, while also functioning with minimal power consumption and within the tightest footprints. IT leaders face requirements to provide increasing levels of performance with tighter resources. To power the compute-intensive needs of AI, 94.4% of companies with more than 2,500 employees said optimized server solutions were required.
The growing demand from business leaders for these high-performance data centers was recently revealed in an AMD-sponsored IDC white paper, High-Performance Computing Drives Critical New Capabilities for Mainstream Organizations. IDC's survey of IT decision-makers at companies with over 2,500 employees found that 94.4% of respondents required optimized server solutions for performance-intensive computing applications such as artificial intelligence.

High-performance computing in the enterprise

Few organizations are unaffected by the need for high-performance computing. Not long ago, conventional thinking was that high-performance computing was only required for exceptionally data-intensive applications within select industries — aerospace, oil and gas, and pharmaceuticals, for example — in addition to supercomputing centers dedicated to solving large, complex problems. This is no longer the case. As data volumes have exploded, many organizations are tapping into these technologies and techniques to perform essential functions. In a relatively short timeframe, they've gone from believing they would never need anything beyond routine compute performance capabilities to depending on high-performance computing to fuel their business success. Examples include keeping application logins running smoothly, running algorithms that keep systems secure, or even monitoring retail environments with computer vision to prevent shoplifting. No longer the domain of a few select fields, industries from financial trading to the advertising ecosystem rely on large-scale, mathematically intensive computations. In conjunction with AI and data analytics, high-performance computing powers these industries' needs for faster business insights and results that drive improved decision-making.
Financial trading is increasingly based on algorithms and machine learning (ML), and real-time data on customer interactions drives online advertising. Whenever businesses design and simulate products, the computer-aided engineering applications they use also need high-performance, high-efficiency computing.

New workloads call for new solutions

Depending on the organization and its IT requirements, high performance in the data center can be required to perform and support certain key functions:

Online transaction processing (OLTP) is at the core of many enterprise applications. Fast application performance — with more customer transactions per server — ultimately delivers improved sales.

Query performance is essential to tapping into the power of relational database management systems for analytics and decision support. Faster time-to-insights means the organization can change direction quickly to adjust to shifting market and customer requirements.

Virtualization performance is another critical factor in HPC infrastructure design. By running on a layer of virtualization, applications make more effective use of hardware resources.

To support IT organizations as they grapple with managing and processing massive amounts of data to accelerate business results, technology providers are bringing innovative products and services to market. But nothing is more central to driving high data center performance than processor choice. Taking transaction processing as an example, a server powered by a 2P (dual-processor) 4th Gen AMD EPYC 9654 processor delivers approximately a 2.71x performance advantage compared to the Intel Xeon 8380 2P solution. 1 On query performance, a 2P AMD EPYC 9654 processor-driven server delivers about a 2.7x median performance improvement over the Intel 8380 2P offering. 2 Similar advantages are apparent in virtualization performance.
For example, in a head-to-head between two 2P servers, an EPYC 9654 processor solution outscored the Intel Xeon 8480H-based solution by 1.7x in the VMmark matched-pair benchmark, an industry-standard measure of the performance, scalability, and power consumption of virtualization platforms. 3

How Emirates NBD modernized for HPC

High-performance solutions like these are in demand for companies seeking to modernize their data centers, build out private or hybrid clouds, or drive ultimate performance even at the edge. A case in point is Dubai-based Emirates NBD Bank. In 2018, as part of a thorough overhaul of its IT infrastructure, the bank subjected a number of proposals to intense evaluation. Ali Rey, senior vice president for Emirates NBD's technology platform, recalls: "We tested our database performance, our core banking APIs, the memory bandwidth and how the CPUs accessed the memory. And we did a lot of web-based user testing. Eventually, we went with building out a private cloud environment powered by 3rd Gen AMD EPYC CPUs, which we found to be on average 42% faster than the alternative solution — 51% faster, in some cases." The servers powered by the AMD CPUs are also more effective at the ongoing, processor-intensive maintenance routines all businesses depend on. According to Rey, "We process terabytes of logs every day. Before we had to wait over 30 seconds to minutes, but now they're almost instantaneous." He adds that the multicore processors are hugely more efficient, delivering a lot more processing for the same infrastructure density with fewer servers.
"There aren't many banks that can match our level of efficiency."

Efficient data center computing

Many enterprises have concerns that achieving results like those at Emirates NBD will require larger data centers — increasing costs and energy usage. However, today's newer CPUs can allow IT leaders and enterprises to meet their performance needs with even greater efficiency. For example, the server capacity required to run 2,000 virtual machines would call for 17 2P Intel Platinum 8490H-based servers, but only 11 servers based on 2P 96-core 4th Gen AMD EPYC processors. That's the same amount of work with 35% fewer servers and an estimated 36% less power consumption annually. 4 Efficiency gains at this scale are vital to enabling enterprises to achieve their data center performance goals while also meeting business goals such as lowering energy costs and advancing broader corporate sustainability initiatives. Taking advantage of the latest CPUs now in market can allow IT leaders to meet the performance needs their organizations and customers require, all while achieving greater cost and energy efficiency.

Ravi Kuppuswamy is AMD's corporate vice president for the Server Solutions Group.

1. https://www.amd.com/en/claims/epyc4#SP5-071A
2. https://www.amd.com/en/claims/epyc4#SP5-070
3. https://www.amd.com/en/claims/epyc4#SP5-049B
4. https://www.amd.com/en/claims/epyc4#SP5TCO-036A
"
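The consolidation claim above follows from simple arithmetic on the quoted server counts, which is worth sketching when sizing a refresh (only the 17 and 11 server counts come from the text; the helper is an illustrative calculation, not an AMD tool):

```python
def reduction_pct(before, after):
    """Percentage reduction going from `before` units to `after` units."""
    return (before - after) / before * 100

# Server counts quoted above for running 2,000 virtual machines.
intel_servers = 17
amd_servers = 11

print(f"{reduction_pct(intel_servers, amd_servers):.0f}% fewer servers")
# (17 - 11) / 17 ≈ 35.3%, matching the ~35% figure in the text.
```

The ~36% annual power saving cited alongside it would require per-server power draw figures, which the article does not give, so it cannot be rederived the same way.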
13,598
2,023
"Reimagining the data center for the age of generative AI | VentureBeat"
"https://venturebeat.com/data-infrastructure/with-all-of-the-focus-on-chatgpt-what-impact-if-any-does-it-have-on-the-data-center"
"Reimagining the data center for the age of generative AI

This article is part of a VB special issue. Read the full series here: The future of the data center: Handling greater and greater demands.

Today, any conversation about artificial intelligence is bound to include the rise of ChatGPT, the ubiquitous chatbot built on OpenAI's GPT series of large language models (LLMs). But how can you feed the demands of this kind of generative AI technology in your data center? The chatbot launched late last year and is making waves with its content-generation capabilities. People are using ChatGPT and competing bots from other vendors to get complex questions answered as well as to automate tasks such as writing software code and producing marketing copy.
But with all the possibilities inherent in this generative AI technology, using foundational models to their full potential has been difficult. Most of the models out there have been trained on publicly available data, which makes them less than ideal for specific enterprise applications like querying sensitive internal documents. Enterprises want these models to work on internal corporate data. But does that mean they have to go all in and build them from scratch? Let's dive in.

Building large language models: A costly affair within data centers

The task of building an LLM, such as GPT-3 or GPT-4, requires multiple steps, starting with compute-heavy training that demands hundreds, if not thousands, of expensive GPUs clustered together in data center servers for several weeks or months. "The initial training requires a very significant amount of computing power. For example, the BLOOM model, a 176-billion-parameter open-source alternative to GPT-3, required 117 days of training on a 384-GPU cluster. This is roughly equivalent to 120 GPU years," Julien Simon, chief evangelist at Hugging Face, told VentureBeat. As the size of the model increases, the number of GPUs required to train and retrain it increases. Google, for instance, had to plug in 6,144 chips to train its 540-billion-parameter PaLM model. The process also demands expertise in advanced training techniques and tools (such as Microsoft DeepSpeed and Nvidia MegaTron-LM), which may not be readily available in the organization. Once the training is done, these chips are then needed to run inference on the model on an ongoing basis, further adding to the cost. To put it into perspective, using just 500 of Nvidia's DGX A100 multi-GPU servers, which are commonly used for LLM training and inference, at $199,000 apiece would mean spending about $100 million on the project.
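Both figures quoted above are back-of-the-envelope arithmetic, and it is a useful exercise to reproduce them when sizing a training cluster of your own:

```python
# BLOOM training effort quoted above: 117 days on a 384-GPU cluster.
gpu_days = 117 * 384
gpu_years = gpu_days / 365
print(f"{gpu_years:.0f} GPU-years")  # ~123, i.e. "roughly 120 GPU years"

# Capital outlay quoted above: 500 DGX A100 servers at $199,000 apiece.
capex = 500 * 199_000
print(f"${capex:,}")  # $99,500,000, i.e. "about $100 million"
```

The same two lines of arithmetic — GPU count times wall-clock time, and unit count times unit price — are the starting point for any train-versus-buy comparison, before power, networking and staffing costs are layered on.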
On top of this, the additional power draw and thermal output stemming from the servers will add to the total cost of ownership. That's a lot of investment in data center infrastructure, especially for companies that are not dedicated AI organizations and are only looking to LLMs to accelerate certain business use cases.

The ideal approach toward a data center for the age of AI

Unless a company has unique high-quality datasets that could create a model with a solid competitive advantage that would be worth the investment, the best way to go ahead is fine-tuning existing open-source LLMs for specific use cases on the organization's own data — corporate documents, customer emails, etc. "A good counterexample is the BloombergGPT model, a 50-billion-parameter [model] trained by Bloomberg from scratch … How many organizations can confidently claim that they have the same amount of unique high-quality data? Not so many," Hugging Face's Simon said. "Fine-tuning, on the other hand, is a much more lightweight process that will require only a fraction of the time, budget and effort. The Hugging Face hub currently hosts over 250,000 open-source models for a wide range of natural language processing, computer vision and audio tasks. Chances are you'll find one that is a good starting point for your project," he said. If an enterprise does see value in building an LLM from scratch, it should start small and use managed cloud infrastructure and machine learning (ML) services instead of buying expensive GPUs for on-site deployment right away. "We initially used cloud-hosted MLOps infrastructure, which enabled us to spend more time developing the technology as opposed to worrying about hardware. As we have grown and the architecture of our solution has settled down from the early rapid research and development days, it has now made sense to tackle local hosting [of] the models," Bars Juhasz, CTO and cofounder of content generator Undetectable AI, told VentureBeat.
The cloud also provides more training options to choose from, going beyond Nvidia GPUs to those from AMD and Intel as well as custom accelerators such as Google TPU and AWS Trainium. On the other hand, in cases where local laws or regulations mandate staying away from the cloud, on-site deployment with accelerated hardware such as GPUs will be the default first choice.

Planning remains key

Before rushing to invest in GPUs, skills, or cloud partners for domain-specific LLMs and applications based on them, it is important for technical decision-makers to define a clear strategy by collaborating with other leaders in the enterprise and with subject matter experts. It is helpful to focus on the business case for the decision, and to have an approximate idea of what the current and future demands of such workloads would be. With this kind of planning, enterprises can make informed decisions about when and how to invest in training an LLM. This includes aspects like what kind of hardware to choose, where they can use pre-existing models developed by others, and who might be the right partners on their AI journeys. "The landscape of AI/ML is moving incredibly quickly … If the inclusion of these new technologies is treated with the traditional mindset of future-proofing, it is likely the solution will be antiquated relatively quickly. The specialized nature of the technologies and hardware in question means a better choice may be to first develop the solution outlook, and upgrade their data centers accordingly," Juhasz said. "It can be easy to buy into the hype and trend of adopting new technology without dignified reason, but this will undoubtedly lead to disappointment and dismissal of real use cases that the business could benefit from in the future," he said.
"A better approach may be to remain level-headed, invest time in understanding the technologies in question, and work with stakeholders to assess where the benefits could be reaped from integration."
"
13,599
2,023
"VB in Conversation - Handling Greater Demands in the Data Center with Kamran Ziaee, Verizon | VentureBeat"
"https://venturebeat.com/vb-in-conversation-handling-greater-demands-in-the-data-center-with-kamran-ziaee-verizon"
"VB in Conversation – Handling Greater Demands in the Data Center with Kamran Ziaee, Verizon

Kamran Ziaee, SVP, Technology Strategy & Global Infrastructure at Verizon sits down with VB Editor-in-Chief Matt Marshall to discuss how data centers now must handle greater and greater demands. While high-performance computing used to be reserved for exceptionally data-intensive applications, like gene sequencing or self-driving cars, every industry is seeing real use cases for adopting HPC. Verizon is an example of a company that says HPC is now table stakes for many applications it runs in its data centers. While Verizon has consolidated down to three data centers from nine, it has also invested in multi-access edge computing (MEC) to ensure rapid response time for customers. We explore his unique approach to an optimized hybrid cloud, where many critical applications stay on-prem and highly elastic applications migrate to the public cloud to enjoy performance gains there.
We touch on the Gen AI craze and the proof of concepts the company is running, which also point to a hybrid future. This video is part of our VB in Conversation series. View all videos here. "
13,600
2,023
"VB in Conversation - Handling Greater Demands in the Data Center with Ken Spangler, FedEx | VentureBeat"
"https://venturebeat.com/vb-in-conversation-handling-greater-demands-in-the-data-center-with-ken-spangler-fedex"
"VB in Conversation – Handling Greater Demands in the Data Center with Ken Spangler, FedEx

FedEx handles 100 billion daily transactions, from tracking packages to routing flights. To keep up with this massive data demand, the company has invested in data center performance, especially in high-performance computing. Ken Spangler, EVP of IT and CIO of Global Operations Technology, FedEx, talks with VB Editor-in-Chief Matt Marshall about how the company has adopted the co-location model for the data center, which allows it to leverage the benefits of cloud computing while maintaining control of the technology stack. He also reveals how FedEx is partnering with edge computing providers to deploy "mini data centers" that can reduce latency and enhance security for its customers – and, of course, AI and automation. This video is part of our VB in Conversation series. View all videos here.
"
13,601
2,023
"Accelerating AI for growth: The key role of infrastructure | VentureBeat"
"https://venturebeat.com/ai/accelerating-ai-for-growth-the-key-role-of-infrastructure"
"VB Lab Insights: Accelerating AI for growth: The key role of infrastructure

This article is part of a VB special issue. Read the full series here: The CIO agenda: The 2023 roadmap for IT leaders. And don't miss additional articles providing new industry insights, trends, and analysis on how AI is transforming organizations. Find them all here.

Enterprises everywhere have recognized the central role of artificial intelligence (AI) in driving transformation and business growth. In 2023, many CIOs will shift from the "why" of AI to the "how." More specifically: "What's the best way to quickly and economically grow AI production at scale that creates value and business growth?" It's a high-stakes balancing act: CIOs must enable rapid, wider development, deployment, and maintenance of impactful AI workloads. At the same time, enterprise IT leaders also need to more closely manage spending, including costly "shadow AI," so they can better focus and maximize strategic investments in the technology.
That, in turn, can help fund ongoing, profitable AI innovation, creating a virtuous cycle. High-performance AI infrastructure — purpose-built platforms and clouds with optimized processors, accelerators, networks, storage and software — offers CIOs and their enterprises a powerful way to successfully balance these seemingly competing demands, enabling them to cost-effectively manage and accelerate orderly growth and "industrialization" of production AI. In particular, standardizing on a public cloud-based, accelerated "AI-first" platform provides on-demand services that can be used to quickly build and deploy muscular, high-performing AI applications. This end-to-end environment can help enterprises manage related expenses, lower the barrier to AI, reuse valuable IP and, crucially, keep precious internal resources focused on data science and AI, not infrastructure.

Three major requirements for accelerating AI growth

A major benefit of focusing on AI infrastructure as a core enabler of AI and business growth is its ability to help enterprises successfully meet three major requirements. We and others have observed these in our own pioneering work in the area and, more broadly, in technology development and adoption over the last 20 years. They are: standardization, cost management and governance. Let's briefly look at each.

1. AI standardization: Enabling orderly, fast, cost-effective development and deployment

Like big data, cloud, mobile and PCs before it, AI is a transformative game-changer — with even greater potential impact, both inside and outside the organization. As with these earlier innovations — including virtualization, big data and databases, SaaS and many others — smart enterprises, after careful evaluation, will want to standardize on accelerated AI platforms and cloud infrastructure. Doing so brings a raft of well-understood benefits to this newest set of universal tools.
Large banks, for example, owe much of their vaunted ability to quickly expand and grow to standardized, global platforms that enable fast development and deployment. With AI, standardizing on optimized stacks, pre-integrated platforms and cloud environments helps enterprises avoid the host of negatives that often result from fielding a chaotic variety of products and services. Chief among them: unmanaged procurement, suboptimal development and model performance, duplicated efforts, inefficient workflows, pilots not easily replicated or scaled, more costly and complex support, and lack of specialist personnel. Perhaps most serious is the excessive time and expense associated with selecting, building, integrating, tuning, deploying and maintaining a complex stack of hardware, software, platforms and infrastructures. To be clear: enterprise standardization of AI platform and cloud does not mean one-size-fits-all, exclusivity with one or two vendors, or a return to strictly centralized IT control. To the contrary, modern AI cloud environments should offer tiered services optimized for a diverse range of use cases. The "standardized" AI platform and infrastructure should be purpose-built for different AI workloads, offering appropriate scalability, performance, software, networking and other capabilities. A cloud marketplace, familiar to many enterprise users, gives AI developers a variety of approved choices. As for portability: containerization, Kubernetes and other open, cloud-native approaches offer easy movement across providers and multiclouds, easing concerns about lock-in. And while enterprise standardization restores a CIO's overall visibility and control, it can overlay on existing procurement policies and procedures, including decentralized approaches — a win-win.

2. AI cost management: Focusing and freeing funds for ongoing innovation and value

By various estimates, unauthorized spending, often by business groups, adds 30-50% to technology budgets.
While specific figures for such "shadow AI" are hard to come by, surveys of enterprise IT priorities for 2023 show it's a good bet that hidden investments in products and services will consume a good chunk of AI infrastructure costs. The good news is that centralized procurement and provisioning of enterprise-standard AI services restores institutional control and discipline, while providing flexibility for organizational consumers. With AI, as with any workload, cost is a function of how much infrastructure you must buy or rent. CIOs want to help groups developing AI avoid both over-provisioning (often with expensive but underutilized on-premises infrastructure) and under-provisioning (which can slow model development and deployment, and lead to unplanned capital purchases or overages of cloud services). To avoid these extremes, it's wise to think of AI costs in a new way. Accelerated processing for inference or training may (or may not) initially cost more by using a powerful, optimized platform. Yet the work can be done more quickly, which means renting less infrastructure for less time, reducing the bill. And, importantly, the model can be deployed sooner, which can provide a competitive advantage. This accelerated time-to-value is analogous to the difference between driving from Chicago to Dallas (15 hours) and flying non-stop (5 hours). One might cost less (or with current gas prices, more); the other gets you there much faster. Which is more "valuable"? In AI, reviewing development costs from a total cost of ownership standpoint can help you avoid the common mistake of looking just at raw expenses. As this analysis shows, the advantage of arriving more quickly, with less wear and tear and fewer possibilities for detours, accidents, traffic jams or wrong turns, makes flying the smarter choice for our road trip. So it is with fast, optimized AI processing.
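The road-trip analogy translates directly into a rent-by-the-hour calculation: a pricier accelerated instance can still produce a smaller bill if it finishes the job enough faster. A quick sketch with illustrative hourly rates and runtimes (placeholders for the comparison's sake, not real cloud prices):

```python
def job_cost(hourly_rate, hours):
    """Total rental cost for a job that occupies the instance for `hours`."""
    return hourly_rate * hours

# Illustrative numbers: the accelerated instance costs 3x more per hour
# but finishes the same training job 5x faster.
standard = job_cost(hourly_rate=10.0, hours=100)    # $1,000 total
accelerated = job_cost(hourly_rate=30.0, hours=20)  # $600 total

savings_pct = (standard - accelerated) / standard * 100
print(f"{savings_pct:.0f}% cheaper, and the model ships 80 hours sooner")
```

With these placeholder figures the accelerated option is 40% cheaper despite the higher rate, because the speedup (5x) exceeds the price premium (3x); whenever that inequality holds, acceleration wins on both cost and time-to-value.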
Faster training times speed time to insight, maximizing the productivity of an organization's data science teams and getting the trained network deployed sooner. There's also another important benefit: lower costs. Customers often experience a 40-60% cost reduction vs. a non-accelerated approach. Training a sophisticated large language model (LLM) on thousands of GPUs? Optimizing an existing model on a handful of GPUs? Doing real-time inferencing across the globe for inventory? As we noted above, understanding and budgeting AI workloads beforehand helps ensure provisioning that's well-matched to the job and budget.

3. AI governance: Ensuring accountability, measurability, transparency

The term AI governance lately has acquired varied meanings, from ethics to explainability. Here it refers to the ability to measure cost, value, auditability and compliance with regulatory standards, especially around data and customer information. As AI expands, the ability of enterprises to easily and transparently ensure ongoing accountability will be more crucial than ever. Here again, a standardized AI cloud infrastructure can provide automations and metrics to support this crucial requirement. Moreover, multiple security mechanisms built into various layers of purpose-built infrastructure services — from GPUs, to networks, databases, developer kits and more, soon to include confidential computing — help provide defense in depth and vital secrecy for AI models and sensitive data. A final reminder about roles and responsibilities: Achieving profitable, compliant AI growth and maximum value and TCO quickly using advanced, AI-first infrastructure cannot be a solo act for the CIO. As with other AI initiatives, it requires close collaboration with the chief data officer (or equivalent), the data science leader and, in some organizations, the chief architect.

Bottom line: Focus on how. Now.

Most CIOs today know the "why" of AI. It's time to make "how" a strategic priority.
Enterprises that master this crucial capability — accelerating easy development and deployment of AI — will be far better positioned to maximize the impact of their AI investments. That can mean speeding up innovation and development of new applications, enabling easier and wider AI adoption across the enterprise or generally accelerating time-to-production-value. Technology leaders who fail to do so risk creating AI that sprouts wildly in expensive patches, slowing development and adoption and losing advantage to faster, better-managed competitors. Where do you want to be at the end of 2023? Visit the Make AI Your Reality hub for more AI insights. #MakeAIYourReality #AzureHPCAI #NVIDIAonAzure Nidhi Chappell is general manager of Azure HPC, AI, SAP, and confidential computing at Microsoft. Manuvir Das is VP of enterprise computing at Nvidia. © 2023 VentureBeat. All rights reserved. "
13,602
2,023
"How robotic process automation (RPA) can drive enterprise productivity | VentureBeat"
"https://venturebeat.com/automation/how-robotic-process-automation-rpa-can-drive-enterprise-productivity"
"How robotic process automation (RPA) can drive enterprise productivity
This article is part of a VB special issue. Read the full series here: The CIO agenda: The 2023 roadmap for IT leaders. The COVID pandemic of 2020 pulled the world 10 years into the future, pushing companies to streamline their processes through automation, according to Hikari Senju, founder and CEO of ad-tech platform provider Omneky. A widely anticipated recession will push RPA further into the spotlight, he maintains. “The pandemic revealed that certain processes were unproductive and could be better automated through technology. Now, the impending recession of 2023 has forced companies into cost-cutting mode, resulting in an accelerated adoption of RPAs,” he told VentureBeat. In Senju’s view, RPA and automation in the creative services industry will enable businesses to stay ahead of the competition and adapt to ever-changing market demands. Of course, that’s if RPA is done right. 
Robotic process automation, when done right, performs low-level and repetitive manual tasks that consume workers’ time. It reduces data errors, resulting in higher quality, and lowers overhead in the form of fewer people needed to perform the tasks. It frees employees to focus on higher-level, strategic work and, by eliminating the need for human intervention, increases productivity and efficiency. Best of all, it delivers speed.

The automation advantage
While traditional IT automation relies solely on programmatic interfaces, such as APIs, to automate specific tasks within a business process, RPA utilizes both programmatic interfaces and user interfaces to automate the entire business process from start to finish. This allows RPA to combine the capabilities of human users with those of software robots, resulting in a broader range of service capabilities. For example, RPA software can streamline invoice handling from start to finish by combining interface-programming techniques with a thorough understanding of user screen interaction. This allows the software to automatically retrieve invoices from accounting systems, enter data into the invoice processing portal and route the invoice for approval. Furthermore, the integration of artificial intelligence (AI) improves the speed and efficiency of automation in a consumption-based pricing model, promoting the digitization of business processes. RPA has seen significant growth in the last three to four years. The global robotic process automation market is projected to grow to $43.5 billion by 2029 from $10 billion in 2022, an annual growth rate of 23.4%, according to a report by Fortune Business Insights. Costly business operations are a prime target of RPA, as practical CIOs seek new efficiencies, Senju said. 
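The invoice-handling flow described above can be sketched as a toy pipeline. Everything here is a hypothetical stand-in: the function names and in-memory "systems" are illustrative, not any RPA vendor's API:

```python
# Toy sketch of an RPA-style invoice flow: fetch, enter, route for approval.
# Function names and data structures are hypothetical stand-ins for real
# RPA connectors (API calls or screen interactions), not a vendor product.

def fetch_invoices(accounting_system):
    """Stand-in for pulling new invoices via an API or screen scrape."""
    return [inv for inv in accounting_system if inv["status"] == "new"]

def enter_into_portal(invoice, portal):
    """Stand-in for keying invoice data into the processing portal."""
    portal.append({"id": invoice["id"], "amount": invoice["amount"]})
    invoice["status"] = "entered"

def route_for_approval(invoice, threshold=1000):
    """Simple rule: large invoices go to a manager, small ones auto-approve."""
    return "manager" if invoice["amount"] > threshold else "auto-approved"

accounting = [{"id": 1, "amount": 250, "status": "new"},
              {"id": 2, "amount": 4800, "status": "new"}]
portal = []
for inv in fetch_invoices(accounting):
    enter_into_portal(inv, portal)
    print(inv["id"], route_for_approval(inv))
# 1 auto-approved
# 2 manager
```

The point of the sketch is the end-to-end shape: each step a human would do by hand becomes a callable unit, which is what lets the whole process run unattended.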
That’s particularly true for companies like his, which seek to add AI-driven capabilities to users’ software portfolios. He said RPA is significantly transforming the creative services sector by enabling better automation of costly business operations. By implementing an omnichannel automation approach to scaling creative A/B testing, Omneky has been able to deliver cost reductions along with performance increases. Implementing RPA Although RPA can increase efficiency and reduce costs, implementing it can be a daunting task. Enterprises must devise a comprehensive plan complemented by adequate funding and continual monitoring of performance and outcomes. Rajesh Raheja, chief engineering officer at cloud-based automation platform Boomi , talked about the current challenges of integrating RPA into enterprise architectures. These architectures, he said, need more flexibility to incorporate new capabilities into their design. “Traditional architectures are not very open to [being] expanded with new functionalities needed to support the automation that RPA brings,” Raheja told VentureBeat. This makes it difficult to integrate with the systems needed. For its part, traditional RPA may employ brittle connections that break as those systems change or are updated, he added. To prevent such hazards, the first step in implementing RPA is identifying which processes are best suited for automation. This includes tasks that are repetitive, time-consuming and prone to errors, in areas such as finance, human resources, customer service and others. Next, implementers should carefully develop plans to improve the selected business processes. “The challenge today is that oftentimes RPA requires reorganizing [a] company’s [existing] structure. CIOs should always consider their decisions based on switching costs,” said Omneky’s Senju. 
Senju said the main aspects to be considered are: What is the cost to the organization of switching to the new system, and the potential return on investment (ROI)? And what is the cost of staying with the current system and losing competitiveness or market share? A CIO can determine if an RPA solution is appropriate for their organization by evaluating key aspects including:
- Business case: Assess whether the proposed RPA solution will provide a clear and compelling ROI. This includes analyzing the solution’s potential cost savings and efficiency improvements.
- Technical feasibility: Evaluate whether the proposed RPA solution can be integrated into the existing IT infrastructure, and whether it can handle the volume of data and processes the organization needs to automate.
- Scalability: Consider whether the RPA solution is designed to scale in line with the organization’s growing numbers of processes and users. The RPA solution should be able to adapt to changing business requirements and procedures over time.
- Data security: Ensure the RPA solution protects sensitive data and complies with relevant regulations and standards.
- Total cost of ownership: Take into account the total cost of ownership of the RPA solution, including the initial cost, any ongoing maintenance costs and any hidden costs.

Generating automations
Senju said 2023 will be seminal for the field of RPA and the automation of the creative services sector. Like others, he sees the dawning of generative AI as a growth driver. “Generative AI automates the human ability to recognize patterns from vast amounts of data and use those learnings to create content — whether text, imagery or soon-to-be video,” he said. “Whatever changes we saw over the past couple of years should see an acceleration.” Will advancements in technology allow for the automation of tasks previously thought to be the exclusive domain of human creativity, such as content generation, design and video editing? Yes, Senju said. 
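As a back-of-envelope check, the Fortune Business Insights projection quoted earlier ($10 billion in 2022 growing 23.4% annually) is consistent with its $43.5 billion figure for 2029:

```python
# Sanity-check the market projection: $10B in 2022 compounding at 23.4%
# per year over the seven years to 2029.
start, cagr, years = 10.0, 0.234, 2029 - 2022
projected = start * (1 + cagr) ** years
print(f"${projected:.1f}B")  # $43.6B, consistent with the $43.5B figure
```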
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13,603
2,023
"Buy or build? How to make the best decision in an economic downturn | VentureBeat"
"https://venturebeat.com/data-infrastructure/buy-or-build-how-to-make-the-best-decision-in-an-economic-downturn"
"Buy or build? How to make the best decision in an economic downturn
This article is part of a VB special issue. Read the full series here: The CIO agenda: The 2023 roadmap for IT leaders. The economic downturn is here. From Alphabet to Meta and Amazon, Big Tech companies have led the news with substantial layoffs. “In this new environment, we need to become more capital efficient,” Mark Zuckerberg said when announcing the plan to let go of 11,000 employees in November 2022. “We’ve cut costs across our business, including scaling back budgets, reducing perks and shrinking our real estate footprint.” But enterprises across sectors are responding to the uncertain environment with a renewed focus on efficiency, productivity and resiliency. They’re cutting costs, laying off employees and narrowing in on priority projects. With much of the attention on reducing cash burn, questions about technology investments have come to the fore. 
Some think IT spending will take a hit (just like everything else), while others predict it will be recession-proof, as technology is essential to every business and executives may increase spending on digital business initiatives to drive growth. Gartner, for instance, estimates IT spending will grow 5.1% to $4.6 trillion in 2023. “Economic turbulence will (only) change the context for technology investments, increasing spending in some areas and accelerating declines in others, but it is not projected to materially impact the overall level of enterprise technology spending,” John-David Lovelock, distinguished VP analyst at Gartner, notes. Part of this change will be how teams approach the modernization of their tech stacks.

The big dilemma
As long as software has been commercially available, CIOs and IT leaders have had to choose whether to build out solutions that meet their functional needs or buy from a third party. Often the pendulum has swung between the two, as early adopters of new technology have typically needed to do more in-house before a full ecosystem developed in support of the innovation. But there has generally been an upward spiral in technology sophistication that has blended the two. Open-source solutions are now more reliable and fully functional as components for more environments, and low-code and no-code artificial intelligence (AI) helps make system modifications practical for more employees. Still, the choice can be a difficult one and, now, with the economic downturn, it has gotten tougher as everyone wants to prevent overspending. 
Currently, when a company builds its own solution from scratch, it gets owner-specific flexibility and control, including the ability to make changes as and when needed, but has to bear all the responsibilities associated with the move, starting from the development of the software to deployment, support and maintenance. This becomes a resource-intensive effort. “Building gives the CIO total control over data and functionality. But there are huge cost implications of taking on development and the risk that you’re just reinventing the wheel. Building also means you are responsible for planning the deployment, hosting and maintenance of the solution, which often dwarfs the initial development effort,” Matt McLarty, Salesforce MuleSoft’s global field CTO and VP of digital transformation office, told VentureBeat. Buying a solution that already exists in the market and has been battle-tested, on the other hand, transfers the entire task of upgrading the stack to a third party. This eliminates implementation and operational complexity and is comparatively faster and more affordable. According to Kevin Gordon, VP of AI technologies at AI imaging company NexOptic, with buying, teams spend 10 times more in the first couple of years and then begin to save money. However, in this case, they get less control over the functionality of the solution (changes would be difficult) and have to deal with interoperability between components. “The benefits of ‘buy’ are that you’ll see faster, if not immediate, changes to your stack. You avoid a design phase. It can be installed and running in five minutes, and there’s already a support team in place. But, the drawback is you don’t own the ‘build,’” Gordon told VentureBeat.

Buy or build: How to pick the best option
While buying is the more affordable option and can easily be the preferred choice in the current economic conditions, teams should not make the decision solely on the basis of cost. 
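The buy-versus-build cost trade-off ultimately reduces to a breakeven calculation: building typically carries a larger upfront cost but lower recurring fees, while buying is cheap to start and accrues subscription costs. A minimal sketch with purely hypothetical dollar figures:

```python
# Cumulative-cost sketch of the buy-vs-build trade-off. All dollar figures
# are hypothetical; the point is finding the year the two curves cross.

def cumulative_cost(upfront: float, annual: float, years: int) -> list[float]:
    """Cumulative spend at the end of each year: upfront + annual * year."""
    return [upfront + annual * y for y in range(1, years + 1)]

build = cumulative_cost(upfront=500_000, annual=120_000, years=8)  # dev + maintenance
buy = cumulative_cost(upfront=50_000, annual=200_000, years=8)     # setup + subscription

# First year in which building becomes cheaper on a cumulative basis.
breakeven = next((y + 1 for y, (b, s) in enumerate(zip(build, buy)) if b < s), None)
print(breakeven)  # year 6: build's upfront cost is amortized, buy's fees keep accruing
```

With these assumed numbers the curves cross in year six; plugging in a real quote and real engineering estimates is what turns the dilemma into an answer.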
The ideal approach, experts suggested, is to decide from a value-creation or ROI perspective. Look at the project at hand and ask how core the capability is to the main business or to customer needs. If the answer is “very” and the capability is critical to the business product or model, build efforts should be directed toward it. “If you’re a bank, should you spend time trying to build better software hosting infrastructure than major cloud providers? Focus your build efforts on the core capabilities in your organization that create, deliver and capture the value that drives your business, especially those that give you a competitive advantage,” McLarty noted. Investing time, capital and resources to build core capabilities from scratch – even in current times – can enable companies to create something unique, ultimately proving more rewarding. A good example is the case of Uber and Lyft. Both benefited from building their own routing recommendation systems to support unique features that weren’t available with off-the-shelf solutions. That said, if the capability in question is not central to the core business, it should be seen as a commodity and treated as such – bought and managed via a third party. “I’m a big proponent of buying as opposed to building, especially for companies that don’t have technology as its core business,” EY’s chief technology officer Nicola Morini Bianzino told VentureBeat. “Often, I see companies that are trying to build the perfect solution and think that, to accomplish it, they must build it themselves. However, in the long term, they may be better off having a solution that does 70 to 80% of what is needed but is easier to manage in their existing ecosystem. It is extremely expensive to build technology internally – you need to hire and retain talent who can keep up with rapidly evolving technologies. 
When you buy, you don’t have to spend as much money on upkeep, and you can avoid technical debt. Overall, it’s much simpler to manage and offers better value.” When buying, businesses must also make sure the capability in question can evolve as the company grows, and that the cost of each piece needed to bring the capability to life makes sense. If either condition isn’t met, that vendor may not be the right one to go with. Alternatively, teams can go for a third option – open source. These libraries and applications are free to use and can be modified to suit business needs in some, if not all, cases. “Some come with certain license agreements, but with the right groundwork, your company may just find the perfect tool to modernize your tech stack by leveraging these high-quality solutions at a fraction of the cost,” Gordon said.

Smaller, agile projects are key to success
Regardless of building or buying, companies should make it a practice to invest in technology projects and programs with a shorter runway to business results. This, according to Bianzino, will be the key to success in the current economic climate. “When we’re looking at our financial vitality in the short term, a two-year time frame is too long. Investing in many smaller, agile projects is smarter as opposed to long-term projects. Companies need to be looking at areas where a project can make a big difference for their bottom line and be effective in a shorter time frame. Data and AI are going to be two important areas for incremental investment with higher returns this year,” he said. 
"
13,604
2,023
"The CIO agenda in 2023: Driving growth and transformation | VentureBeat"
"https://venturebeat.com/data-infrastructure/the-cio-agenda-in-2023-driving-growth-and-transformation"
"The CIO agenda in 2023: Driving growth and transformation
This article is part of a VB special issue. Read the full series here: The CIO agenda: The 2023 roadmap for IT leaders. The pressure is on for CIOs in 2023, experts say, as chief information officers are called upon to drive growth and transformation, not just keep the data center humming and enterprise software running. “It’s about ‘show me the money,’” Janelle Hill, chief of research for Gartner’s CIO practice, told VentureBeat. After a decade of investing in digital, she explained, organizations want to know the value of their investments, while at the same time accelerating digital initiatives such as artificial intelligence and hyperautomation — and ensuring security and privacy across an expanding attack surface. According to Gartner’s 2023 CIO and Technology Executive Agenda, released in October, CIOs expect IT budgets to increase 5.1% on average this year — lower than the projected 6.5% global inflation rate. 
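Those two figures imply that IT budgets are set to shrink in real terms; deflating the 5.1% nominal increase by 6.5% inflation gives roughly a 1.3% decline:

```python
# Real-terms change implied by the Gartner figures above: 5.1% nominal
# budget growth against 6.5% projected inflation.
nominal_growth, inflation = 0.051, 0.065
real_change = (1 + nominal_growth) / (1 + inflation) - 1
print(f"{real_change:.1%}")  # -1.3% in real terms
```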
That creates a triple squeeze — economic pressure, scarce and expensive talent, and ongoing supply challenges — heightening the desire and urgency to realize time to value. This special issue from VentureBeat kicks off the new year by tackling these issues head-on. It includes deep dives into how CIOs can impress the board, and stakeholders, by getting the most value from data analytics, cloud platforms and even the metaverse. We’ll also focus on hiring and retaining IT talent to accelerate digital initiatives while building brands, and explore how AI and automation can help achieve sustainability, overcome biases, address supply chain challenges and enforce security through the tech stack.

The CIO role has ‘never been more strategic or close to the business’
“I remember a time when IT was a fortress — the big IT tank that did all the technology,” recalled Penelope Prett, chief information, data and analytics officer for Accenture. “What’s really been interesting to watch is how fast the role has accelerated as an ombudsman and a catalytic change agent for technology impacting business performance.” Those changes were accelerated during the COVID-19 pandemic, said Fletcher Previn, SVP and CIO at Cisco. “The CIO and the whole IT team, to a large degree, is in the business of meeting unmet needs,” he said. “The pandemic created a sudden, urgent, long list of unmet needs — the CIO was really at the center of what needed to be done for businesses to weather that storm.” As a result, he said, the CIO role “has never been more strategic or close to the business.” The CIO is now seen as more of a team leader executing a company’s transformation, rather than just a necessary cost to keep the company’s infrastructure humming. 
“It’s a misconception, or at least a miscalculation, to think of the CIO as a kind of traditional back-office role and this expense that you wish you didn’t have,” said Previn. “A good CIO needs to understand agile ways of working, devops software development, automation, user experience and design, analytics and data, and then the business objectives. And then, at the same time, be able to lead an organization, drive a talent strategy and attract the best people.” For Juan Perez, CIO of Salesforce, a key element is establishing “business intimacy” in order to increase the CIO’s relevance and ability to deliver business value to the C-suite and board, among other stakeholders. Business intimacy enables the CIO and their team to drive results and increase productivity and efficiency at scale, because it leads to a greater understanding of business needs, he explained. “CIOs have a dotted line to every business leader — including the CEO, CFO, CMO and others — each of whom is being asked to do more with less,” he pointed out. “By closely partnering with key business leaders, CIOs can help prioritize amid budget constraints and deliver the technology needed to increase efficiency, improve results and lower costs.” At John Deere, the CIO role has evolved to be a deep influencer and key enabler of an organization’s strategy, said CIO Raj Kalathur. Kalathur detailed the company’s transition from manufacturing heavy machinery to applying data, automation and autonomy to develop new products and services and create more personalized experiences for customers, dealers, suppliers and Deere employees. “For example, we published a financial ambition to achieve a 20% operating return on sales for all equipment operations by 2030,” he said. “In support of this ambition, John Deere is advancing our core manufacturing technology stack to unlock significant economic value across our operations. 
We are positioned to leverage industry-leading technologies (5G connectivity, IoT platforms, digital twin/digital thread) to optimize labor, reduce assets/inventory and improve quality and customer satisfaction.”

The adaptable CIO
Jane Zhu, CIO of Veritas Technologies, leads a combined organization that includes IT, facility, finance shared service, data analytics and program management functions. These responsibilities, she said, give her a unique opportunity to see how the business functions from end to end and apply that knowledge to make the CIO role more effective. “I see the role of the CIO as being adaptable and forward-thinking as digital transformation continues to be top-of-mind,” she explained. As many organizations strive to constantly be more efficient, more flexible and therefore more profitable, she added, CIOs must lead the charge in helping companies adapt to advances in technology and the complexities that come along with them, especially in regard to new cloud technologies such as collaboration tools to support an increasingly distributed workforce. That doesn’t mean that CIOs only care about technology and cannot be people-oriented leaders within an organization, Zhu emphasized. “The most successful CIOs are going to be able to balance and collaborate with other members of the C-suite to drive meaningful outcomes for the entire organization,” she said. “This also means that we have a major hand in guiding the IT culture within our organizations, which is especially important to foster given the growing impact of the cyberskills shortage on employees within IT.”

CIOs turn to AI and automation to deliver immediate value
Forrester Research VP and senior research director Matt Guarini recently predicted that 80% of companies will pivot their innovation efforts “from creativity to resilience.” That is, fewer moonshot investments, and more focus on short-term gains to make firms more productive and adaptive in uncertain times. 
That means turning to technologies that deliver immediate value, like AI, automation and machine learning. “I think it’s clear that the path to delivering more from our backlog at a higher-quality level, with lower risk and fewer audit findings and so on, is automation, AI and reusing things that we’ve already built,” said Cisco’s Previn. “Automating the entire build, test and deploy stages of what we do is a huge focus, including all the controls.” Salesforce’s Perez added that businesses will look for ways to make it easier for employees to get their jobs done. This will include infusing artificial intelligence and automation across every line of business — to save time, and to increase employee satisfaction by automating repetitive tasks to help every team focus on the most impactful work. “Automation will increasingly become ubiquitous because it reduces the work that humans have to do on repetitive or monotonous tasks, which means more time to offer a better experience for customers and lower stress for internal teams,” he said. “In fact, 79% of automation users say these tools fuel productivity.” AI can also help increase revenue by delivering deep insights about every individual customer based on past interactions, he explained. “For example, financial solutions companies can use Salesforce’s Einstein AI technology to increase sales win rates by simply providing its sales reps with AI-powered insights in real time, or use AI to increase the accuracy of a cash receivables forecast, or enable customer service teams to scale by automatically delivering AI-based product or support recommendations,” he said.

Cybersecurity and the CIO
Cisco’s Previn recalled that when the pandemic hit, the initial focus was on having enough VPN capacity and operating remotely. But the focus quickly shifted to security. “What do we need to do to shore up our security and transform our network?” he said. 
“The focus became: Clearly this is going to be an enduring way of working — what are the long term consequences of this?” That attitude is reflected in a Gartner report that found that two-thirds of respondents said that cyber- and information security would be a top area of increased investment for 2023. As companies look to manage the business risk posed by escalating threats, Gartner forecasts that worldwide information security and risk-management spending by end users will reach $188.3 billion in 2023, up 11.3% from 2022. Gartner estimated that spending on security would grow 7.2% in 2022 compared with 2021. Overall, CIOs must foster a culture of trust across their organizations to enhance employee and customer experiences by making security seamless and frictionless across people, processes, strategy and platforms, said Milind Wagle, CIO at Equinix. “A thorough security framework ensures that innovation and bold strategies are grounded with the right risk-appetite.”

The race for tech talent
According to the Gartner report, many CIOs continue to struggle to hire and retain IT talent as they aim to accelerate digital initiatives. However, the survey identified numerous untapped sources of technology talent. For example, only 12% of enterprises use students (through internships and relationships with schools) to help develop technological capabilities and only 23% use gig workers. “The business is just so incredibly dependent on technology, and everybody’s struggling to get technology talent — they’ve lost talent through the Great Resignation,” said Gartner’s Hill. “Skill areas that are very hot, like cybersecurity, AI, any kind of advanced analytics, even cloud migration — they just can’t find enough people to do that.” There may be an opportunity to pick up some talent from the Big Tech layoffs, she said, but added that one of Gartner’s biggest messages to CIOs is to stop using the same conventional techniques everyone else is using to attract talent. 
For example, “Break the barriers that your company artificially imposes that will limit your talent, like one person, one job, one manager,” she said. “If I’m a data scientist working for the CMO and I have a slow period, why can’t I work somewhere else and use my talent elsewhere?”

Biggest opportunities for the CIO
Overall, the CIO has a tremendous opportunity, said Equinix’s Wagle, who explained that the CIO role comes with “an extraordinary vantage point of visibility into every single aspect of an entire organization’s operations.” CIOs can help break down silos, he said, because they can understand what is going on in all divisions of the business. “We can identify where technology can help increase productivity or bring in efficiencies to drive the company’s overall success.” Today’s CIO, he emphasized, should “claim their rightful place” in the leadership. “Adopt a strategic partner and in-service mentality,” he said, “to make every other C-suite leader successful.” The opportunity right now, added Accenture’s Prett, is what she calls the most “open-minded time in the history of the world” when it comes to technology. “People who had previously inflexible stances about what they thought of technology or its use, they had all those paradigms broken during the pandemic,” she said. “They are open to listening to stories that they might not have contemplated before, so the opportunity is to start talking.” It’s a moment in time, added Gartner’s Hill, when a CIO can really demonstrate their business executive leadership competency. “In 2020, at the start of COVID, there was a need to go [to] remote working practically overnight,” she said. “And boy did the CIO step up; they gained tremendous credibility.” But now it’s another moment, she added, when CIOs need to deliver and demonstrate how technology can help the business thrive and grow. “Now’s the time to really bring it forward in a business context,” she said. 
“That’s the biggest opportunity.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,605
2,023
"Automation News | VentureBeat"
"https://venturebeat.com/programming-development/how-automation-low-code-no-code-can-fight-the-talent-crunch"
""
13,606
2,023
"Securing a dynamic future for APIs and enterprise integration | VentureBeat"
"https://venturebeat.com/programming-development/securing-a-dynamic-future-for-apis-and-enterprise-integration"
"Securing a dynamic future for APIs and enterprise integration This article is part of a VB special issue. Read the full series here: The CIO agenda: The 2023 roadmap for IT leaders. APIs are the cornerstones of digital business, and they define the future of enterprise integration. By enabling different systems and software to communicate with each other, APIs allow enterprises to create new digital initiatives and transform themselves. Gartner reports that 98% of enterprises use or are planning to use internal APIs, up from 88% in 2019. By 2025, less than 50% of enterprise APIs will be managed, as explosive growth in APIs surpasses the capabilities of API management tools. The rapid pace of innovation in API technology, products, platforms and security is redefining what tech stacks will look like for years to come. CIOs and devops leaders say that the leading factor in deciding whether to consume and produce an API is how well it integrates with internal apps and systems. 
This high priority placed on integration shows that enterprises now consider APIs essential to their infrastructure. CIOs are using these technologies to create new digital-first business initiatives that attract, sell and serve entirely new customers. Security must be at the center of API integration Security is core to APIs’ current and future contributions to enterprise integration. The Twitter data breach is a cautionary example of why getting API security right at the platform level is critical for protecting customers’ data. Enterprises are also suffering from API sprawl. Without enough controls to discover, track and manage APIs, they leave open entry points for attackers to take control of code and apps and potentially gain access to networks. API breaches have become so severe that they are delaying new product launches. Nearly every devops leader (95%) says their teams have suffered an API security incident in the last 12 months. API protection and security innovations help to harden web APIs against exploits, abuse, unauthorized access and denial of service attacks. These solutions often protect internally developed APIs that are publicly available and connected to enterprise applications. They work by examining the content and parameters of APIs, managing traffic and, at a minimum, analyzing traffic for unusual activity. “API security, like application security overall, must be addressed at every stage of the SDLC [software development life cycle],” Sandy Carielli, principal analyst at Forrester, told VentureBeat in an interview. “As organizations develop and deploy APIs, they must define and build APIs securely, put proper authentication and authorization controls in place (this is a common issue in API-related breaches), and analyze API traffic [so as] to only allow calls in line with the API definitions.” Carielli continued, “A common issue with organizations is inventory. 
Owing to the sheer number of APIs in place and the tendency to deploy rogue APIs (or deploy and forget), many security teams are not fully aware of what APIs might be allowing external calls into their environment. API discovery has become table stakes for a lot of API security offerings for just this reason.” CIOs need to partner with CISOs and start by taking a least privileged access approach that aligns with their zero-trust framework and helps to prevent sprawl. This approach should be integrated into devops and CI/CD processes, rather than treated as a separate entity. “When considering API strategy, work with the dev team to understand the overall API strategy first,” Carielli said. “Get API discovery in place. Understand how existing appsec tools are or are not supporting API use cases. You will likely find overlaps and gaps. But it’s important to assess your environment for what you already have in place before running out to buy a bunch of new tools.” API protection rules should be flexible and able to change based on the specific needs of the API. One-size-fits-all approaches like static rate limits or IP allow/block lists are ineffective in production environments or when the API is being used at a large scale. A system that can adapt to the API’s usage patterns, and implement protection measures accordingly, is essential. Graph APIs are defining the future of enterprise integration A graph API is a way for a developer to access and manipulate data organized into a graph structure, which includes both objects and the relationships between them. Graph APIs are different from REST APIs, which represent data as isolated resources without relationships. GraphQL is one way to define a graph API. It lets devops teams and developers query and change the data by following and filtering the connections between objects. Some graph APIs, such as the Facebook Graph API, provide a unified interface for accessing multiple data sources and APIs. 
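The field-selection behavior that distinguishes graph APIs from REST can be sketched in a few lines of Python. This is an illustrative sketch only; the data, field names, and the `select_fields` helper are invented for the example rather than drawn from any real GraphQL implementation:

```python
# Minimal sketch of the idea behind GraphQL-style queries: the client names
# exactly the fields (and nested relationship fields) it wants, and the server
# returns only those, instead of a fixed REST resource shape.

def select_fields(obj: dict, selection: dict) -> dict:
    """Return only the fields named in `selection`; recurse into relationships."""
    result = {}
    for field, sub_selection in selection.items():
        value = obj[field]
        if sub_selection and isinstance(value, dict):
            result[field] = select_fields(value, sub_selection)
        elif sub_selection and isinstance(value, list):
            result[field] = [select_fields(item, sub_selection) for item in value]
        else:
            result[field] = value
    return result

# Hypothetical customer record with a nested relationship to its orders.
customer = {
    "id": 42,
    "name": "Acme Corp",
    "billing_email": "billing@example.com",
    "orders": [
        {"id": 1, "total": 99.0, "status": "shipped"},
        {"id": 2, "total": 150.0, "status": "open"},
    ],
}

# Analogous to the GraphQL query: { name orders { id status } }
selection = {"name": None, "orders": {"id": None, "status": None}}
trimmed = select_fields(customer, selection)
print(trimmed)
# {'name': 'Acme Corp', 'orders': [{'id': 1, 'status': 'shipped'}, {'id': 2, 'status': 'open'}]}
```

The response contains only what the client asked for, which is why graph APIs reduce over-fetching compared with fixed REST resource shapes.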
GraphQL’s adoption has soared from 6% of developers in 2016 to 47% in 2020, according to the State of GraphQL 2022 survey. Graph APIs are becoming more popular because they allow developers to quickly access data as they build modern front-end enterprise applications. GraphQL federation is also gaining traction. Here, larger enterprises including Airbnb and Netflix use devops teams and platform providers to combine multiple independently managed subgraphs into a larger graph schema. Graph APIs also enable organizations to model, expose and use the valuable metadata associated with the relationships between data entities. One key factor contributing to graph APIs’ growth is that they enable developers to easily access data independently, without assistance. Graph APIs also allow API users to specify the exact data they want to be returned in the API response; this provides more flexibility and control than REST APIs, which follow a more rigid structure. In addition, several companies offer graph APIs that allow for data access across a range of different applications. Examples include Microsoft’s Microsoft Graph API , which can be used to access various Microsoft applications such as Azure AD and Exchange Online; and SAP’s SAP Graph API , which provides access to various SAP applications, including SuccessFactors and S4/HANA. There is a growing trend for API management products to support GraphQL. API life cycle management is table stakes API life cycle management platforms are indispensable for enterprise devops teams that need to manage and govern APIs at scale — and these APIs are essential for building multi-experience applications and enabling digital transformation. API life cycle management allows for increased reliance on API products to generate new revenue streams. All API life cycle management platforms also provide security measures to protect against API breaches and the associated business risks. 
The API management market is projected to grow by $6.7 billion between 2021 and 2026, attaining a 20.6% compound annual growth rate (CAGR). One factor contributing to API life cycle management’s central role in defining the future of enterprise integration is that API adoption is skyrocketing in enterprises, growing over 200% as CIOs implement them to connect systems, applications, devices and other businesses. Another is that enterprises’ large-scale adoption of cloud-native architectures, particularly in microservices, service mesh and serverless computing, is leading to increased use of APIs in devops and across software engineering. These approaches rely heavily on APIs to facilitate communication and integration between different components and services. Two API innovations to watch in 2023 One key API innovation to watch this year is event-driven APIs. These are proving effective in enabling faster response to streaming analytics, which many enterprises use to create new business models and digital transformation projects. Event-driven APIs also enable push notifications, which are more efficient and cost-effective in terms of time and networking resources than polling. The OpenAPI Specification (OAS) version 3, introduced in 2017, has become a widely accepted standard for publishing APIs. It includes a feature called callbacks for describing event-driven APIs. OAS version 3.1, released in February 2021, added support for webhooks, a popular method for implementing event-driven APIs accessible via the internet. However, it’s important to note that while webhooks can be used to implement event-driven APIs, they only support a one-to-one communication pattern rather than the many-to-many pattern possible with event-driven architecture (EDA). A second API innovation to watch this year is API security testing, which is gaining rapid adoption for identifying vulnerabilities in APIs. 
It involves checking for general application vulnerabilities, such as injection attacks, as well as API-specific issues, such as broken object-level authorization. API-based discovery technologies are used to identify unknown APIs exposed to the outside world. "
13,607
2,022
"Enterprise News | VentureBeat"
"https://venturebeat.com/security/how-cios-can-drive-identity-based-security-awareness-across-the-enterprise"
""
13,608
2,023
"Making security invisible with adaptive access management | VentureBeat"
"https://venturebeat.com/security/making-security-invisible-with-adaptive-access-management"
"Making security invisible with adaptive access management This article is part of a VB special issue. Read the full series here: The CIO agenda: The 2023 roadmap for IT leaders. The more invisible cybersecurity safeguards are, the more they help improve adoption and stop breaches. With every organization obsessed with speed as a competitive differentiator, it is no wonder that CIOs are tasked with streamlining login and system access user experiences. When security measures are fast and seamless, users are much more likely to embrace them, contributing to, rather than detracting from, speed and accelerated response. CIOs and CISOs tell VentureBeat that improving mobile security user experiences across managed and unmanaged devices is the highest priority. In 2022, many enterprises were hacked from mobile and IoT devices. Verizon’s Mobile Security Index (MSI) for 2022 discovered a 22% increase in cyberattacks involving mobile and IoT devices in the last year. 
The study also found that the attack severity is at levels Verizon’s research team hasn’t seen since they began the security index years ago. Enterprises still sacrifice security for speed Verizon’s study found that 82% of enterprises have set aside a budget for mobile security, but 52% have prioritized meeting deadlines and boosting productivity over the security of their mobile and IoT devices, even if that means compromising security. “During the last two years specifically, many organizations sacrificed security controls to support productivity and ensure business continuity,” Shridhar Mittal, CEO of Zimperium, said in the company’s 2022 Global Mobile Threat Report. Enterprises’ willingness to prioritize speed and productivity over security highlights how cybersecurity budgets affect every aspect of a company’s operations and employees’ personal information. This shows how cybersecurity budgeting and investment need to be treated as a business decision first. “For businesses — regardless of industry, size or location on a map — downtime is money lost,” said Sampath Sowmyanarayan, chief executive at Verizon Business. “Compromised data is trust lost, and those moments are tough to rebound from, although not impossible. As a result, companies need to dedicate time and budget to their security architecture, especially on off-premises devices. Otherwise, they are leaving themselves vulnerable to cyberthreat actors.” Adaptive access management is designed to be transparent and non-intrusive, protecting enterprises’ systems and data without disrupting normal business operations. By adopting an adaptive security approach, enterprises can better balance the need for security with the need for speed and productivity, removing what would otherwise be security roadblocks that get in the way of increased productivity. 
The more adaptive security is, the more invisible it becomes Adaptive access management is a security approach that continuously monitors and adjusts access controls based on changing user and system behaviors. One example of an adaptive access management solution is risk-based authentication, which uses machine learning (ML) algorithms to analyze user behavior and assign a relative risk score to each request for access. Enterprises are prioritizing the purchase of adaptive access management technology to secure remote access for hybrid workforces, create more secure collaboration platforms and bring zero trust to supplier and customer sites and portals. Additional technologies used as part of adaptive access management platforms include context-aware access control and anomaly detection. The latter technique uses ML-based algorithms to identify unusual or suspicious behavior, such as a sudden increase in login attempts from a particular location or an unusual pattern of access requests. If the system detects an anomaly, it may trigger additional authentication measures or block access to the resource. All adaptive access platforms are making strides in improving access policies that automatically adjust access controls based on changing risk levels or other factors, including the sensitivity of data being accessed. According to Gartner, by 2024, 50% of all workforce access management (AM) implementations will use native, real-time user and entity behavior analytics (UEBA) and other controls. By 2026, 90% of organizations will be using some embedded identity threat detection and response function from access management tools as their primary way to mitigate identity attacks, up from less than 20% today. Forrester has found that using Azure AD’s adaptive risk-based policies and multifactor authentication can help organizations reduce the risk of a data breach, saving them an estimated $2.2 million over three years. 
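As a rough illustration of how risk-based authentication can keep security invisible for routine requests, the following Python sketch scores each access request from contextual signals and adapts the response. The signals, weights, and thresholds here are invented for illustration; a production system would learn them from real behavioral data, as the ML-driven and UEBA tools described above do:

```python
# Illustrative sketch of risk-based (adaptive) authentication: each access
# request gets a risk score from contextual signals; low risk passes silently,
# medium risk triggers step-up MFA, and high risk is blocked.

def risk_score(request: dict, profile: dict) -> float:
    """Score a request in [0, 1] from simple, hand-picked contextual signals."""
    score = 0.0
    if request["country"] not in profile["usual_countries"]:
        score += 0.4                      # unfamiliar location
    if request["device_id"] not in profile["known_devices"]:
        score += 0.3                      # new or unmanaged device
    if request["failed_logins_last_hour"] >= 3:
        score += 0.2                      # possible credential stuffing
    if request["resource_sensitivity"] == "high":
        score += 0.1                      # sensitive data raises the bar
    return min(score, 1.0)

def access_decision(score: float) -> str:
    """Adapt the control to the risk instead of applying one static rule."""
    if score < 0.3:
        return "allow"           # invisible to the user
    if score < 0.7:
        return "step_up_mfa"     # extra verification only when warranted
    return "block"

profile = {"usual_countries": {"US"}, "known_devices": {"laptop-1"}}
routine = {"country": "US", "device_id": "laptop-1",
           "failed_logins_last_hour": 0, "resource_sensitivity": "low"}
suspicious = {"country": "KP", "device_id": "unknown-7",
              "failed_logins_last_hour": 5, "resource_sensitivity": "high"}

print(access_decision(risk_score(routine, profile)))     # allow
print(access_decision(risk_score(suspicious, profile)))  # block
```

The point of the design is that the routine request encounters no friction at all, which is what makes adaptive controls feel invisible compared with a blanket MFA prompt on every login.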
And a study by the Ponemon Institute found that organizations that adopted an adaptive security approach had a significantly lower total cost of ownership (TCO) and a faster time to value compared to those that relied on traditional, static security measures. Microsoft Defender for Cloud uses ML to analyze the applications running on machines and create a list of the known-safe software. Allow lists are based on specific Azure workloads, and organizations can further customize the recommendations. Building a case for paying for adaptive access out of the zero-trust budget CIOs tell VentureBeat that when they can deliver measurable outcomes and quick wins as part of their zero-trust frameworks and initiatives, they can better defend their budgets with CEOs and boards. Zero trust is a security approach that assumes that all users, devices and networks inside and outside an organization’s perimeter are potentially compromised and must be continuously verified before being granted access to resources. By increasing the accuracy and strength of identity verification to match the context of each request, adaptive access management platforms quantify and track the perceived risk of every query. For example, a request for access to sensitive financial data might require more robust identity verification than a request for access to a public website. Workflows that can ensure least privileged access while eliminating implicit trust are critical for enterprises’ reaching their zero-trust strategic goals. “In the more dynamic digital world where attacks happen at cloud speed, zero-trust architecture recommends continuous risk assessment — each request shall be intercepted and verified explicitly by analyzing signals on user, location, device compliance, data sensitivity, and application type,” Microsoft’s Abbas Kudrati and Jingyi Xia wrote in a blog post. 
They continued: “In addition, rich intelligence and analytics can be leveraged to detect and respond to anomalies in real time, enabling effective risk management at the request level.” "
13,609
2,022
"What the end of third-party cookies means for personalization | VentureBeat"
"https://venturebeat.com/business/what-the-end-of-third-party-cookies-means-for-personalization"
"What the end of third-party cookies means for personalization This article is part of a VB special issue. Read the full series here: How Data Privacy Is Transforming Marketing. We’ve been shaking the crystal ball on the cookieless future, and it’s still cloudy — we know it’s coming, but we’re not sure when, or how exactly it will play out. Still, now is the time for organizations to prepare, lest their marketing methods become obsolete. It is imperative, experts say, that enterprises be proactive in balancing the dual consumer demand for privacy and personalization. How can they achieve this? By harnessing lower-level types of data — including second-party, first-party and zero-party — and leveraging artificial intelligence (AI) in a way that is both ethical and accurate. “Moving forward, brands have to think about how to collect data transparently and use it in a way that delivers value to the customer,” said Stephanie Liu, privacy and marketing analyst at Forrester. 
“That’s a relatively new mindset for marketers, and many are struggling today because for decades they’ve prioritized benefits to the business while neglecting the customer.” Comparing first-party data and third-party data Essentially, first-party data is “data that customers and companies share ownership of,” said Andrew Frank, VP analyst at Gartner. This lets a brand tailor experiences in the way of loyalty programs and incentives. Putting it in human terms: First-party data is like being friends with someone and sharing information directly, said Liu. “You know each other well and your friendship can deepen over time,” she said. Third-party data, by contrast, is akin to having an acquaintance who you’ve mostly heard things about “through the grapevine” — and not all that is accurate. “Personalization has turned into an amorphous catch-all, but when it comes to asking customers for data, brands need to think about what data they need, how they’ll use it to benefit the customer and how they’ll encourage customers to actually share that data,” Liu said. With changes occurring and more afoot, “marketers are facing data deprecation,” she added. Cross-site tracking is becoming more difficult, privacy regulations are adding new consent requirements, consumers are more protective of their data and walled gardens are limiting data access and use. “It’s not just the death of third-party cookies,” said Liu. “There are multiple significant forces impacting marketers’ ability to collect and use customer data.” The power of AI Organizations are increasingly leveraging AI to fill in this gap. AI and machine learning (ML) models can categorize and segment third-party data to correlate, segment and make predictions. Liu pointed to one common use case of lookalike modeling. 
When a customer hasn’t shared “a plethora of information about themselves,” a brand can take what it does know about them and try to match them with customers who look similar, she explained. “It’s a way of filling in the gaps for customers whose profiles are data scarce,” said Liu. Unsurprisingly, there are risks. If someone has chosen not to share much about themselves, it’s probably because they don’t know the brand well or don’t see value in sharing data, she pointed out. Brands can nail it pretty accurately and personalize based on data a customer hasn’t explicitly shared, but this can be perceived as “creepy and invasive,” she said. Case in point: The infamous example of Target recognizing a customer was pregnant before she’d even broken the news to her own father. On the other hand, if a brand gets it wrong, it risks personalizing off faulty assumptions. “So, marketers need to think about what benefit the customer will get from this type of modeling and if the benefits (to marketers) are worth the risks (to customers),” said Liu. ‘Small data’ trend The conventional wisdom is that the most cutting-edge AI is dependent on large volumes of data. However, other approaches do not require massive labeled datasets — a few examples are transfer learning, data labeling, artificial data, reinforcement learning and Bayesian methods, according to the Center for Security and Emerging Technology. This is what’s known as “small data.” “Behaviors have changed so much in so many different ways in society around the world, that the data you collect is less indicative of the future than it used to be,” said Erick Brethenoux, VP analyst at Gartner. Organizations may have a lot in terms of volume, but not quality, he said. And, when there isn’t enough quality data or data is fragmented, that’s when model accuracy decreases. This is prompting the use of additional AI techniques in the background to “enhance or complement” data, said Brethenoux. 
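The lookalike modeling Liu describes is, at heart, a nearest-neighbor search over first-party profiles: find the rich profiles most similar to a data-scarce one and borrow their segment. A minimal sketch, in which every feature vector, customer name and segment label is invented for illustration:

```python
import math

# Invented first-party feature vectors (e.g., category affinities) for
# customers with rich profiles, plus their known segment labels.
rich_profiles = {
    "customer_a": ([0.9, 0.1, 0.0, 0.7], "outdoor"),
    "customer_b": ([0.1, 0.8, 0.6, 0.0], "home"),
    "customer_c": ([0.8, 0.2, 0.1, 0.9], "outdoor"),
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def lookalike_segment(sparse_profile, k=2):
    """Assign a segment to a data-scarce customer by majority vote
    among the k most similar rich profiles."""
    ranked = sorted(rich_profiles.values(),
                    key=lambda rec: cosine(rec[0], sparse_profile),
                    reverse=True)
    votes = [segment for _, segment in ranked[:k]]
    return max(set(votes), key=votes.count)

print(lookalike_segment([0.85, 0.15, 0.05, 0.8]))  # "outdoor"
```

Production lookalike models use far richer features and learned similarity, but the shape of the inference, and the risk of guessing wrong from thin data, is the same.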
Brethenoux pointed to insurance, for example, where knowledge graphs can be applied to provide more context and better accuracy. “The people who say they have too much data don’t know what is in their data,” said Brethenoux. Other types of data collection But as the third-party cookie data that fuels AI models dwindles, brands can increasingly rely on another tool: “zero-party data.” Forrester coined the term in 2017 to describe data that a customer proactively and intentionally volunteers — product preferences, purchase intent and content preferences, Liu said. For example, they can specify, “I have a cat.” A brand can then use this information to show them cat products on their site or app — and stay away from hawking dog products. “This is data customers are choosing to share with a brand because they like the brand and are getting some benefit or value in return,” said Liu. It is much more transparent and straightforward than buying from a data broker, she said, and helps reduce the creepiness factor of “why do you know that about me?” Right now, it’s still just a concept, contended Frank. He does see it evolving into something “more substantial,” and potentially used with distributed or decentralized ledgers. Still, he pointed out that first- and zero-party data, where there is “incentivized consent,” are not always permitted — or even possible. More generic categories that don’t sell directly — say, a tissue paper supplier — don’t have that ability, and the cost of losing access to third-party data is higher. Second-party data Another emerging method for procuring data? Second-party data via data clean rooms. This is a collaboration between brands with direct relationships to consumers and brands that don’t have them, explained Frank. Data clean rooms allow companies to leverage intelligence extracted from personal data without exposing the personal data itself to any party, he explained. 
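The “I have a cat” example amounts to a simple filter over declared preferences: no inference, no third-party lookup, just acting on what the customer volunteered. A minimal sketch, with an invented profile schema and catalog:

```python
# Zero-party data: preferences a customer volunteered directly. All field
# names and catalog entries here are invented for illustration.
zero_party_profile = {"pets": ["cat"]}

catalog = [
    {"name": "Feather wand toy", "audience": "cat"},
    {"name": "Chew bone", "audience": "dog"},
    {"name": "Litter box", "audience": "cat"},
]

def personalize(profile, items):
    """Show only items matching what the customer explicitly told us,
    so no guessing (and no creepiness) is involved."""
    declared = set(profile.get("pets", []))
    return [item["name"] for item in items if item["audience"] in declared]

print(personalize(zero_party_profile, catalog))  # ['Feather wand toy', 'Litter box']
```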
A new Interactive Advertising Bureau standard is “seller defined audiences,” which allows companies with large amounts of data to define an audience that an advertiser could buy without revealing specifics, he said. Then there are concepts such as Unified ID 2.0, a hashed alphanumeric identifier created from emails or phone numbers. This method allows advertisers to target specific consumers without compromising their privacy. Responsible AI — and marketing The key to all this is getting the right kind of consent, and making sure that consent is always honored and enforced in different contexts. Then, of course, there’s the imperative that AI models be responsible, ethical and trustworthy — undoubtedly one of the most pressing discussions occurring in tech right now. With respect to third-party data, organizations must be cautious and seek advice on how to use it, said Brethenoux. “It is the responsibility of the organizations getting that data to do that work,” he said. The future of procuring data, said Frank, could either be a “walled garden” concept, where a few large companies have a great wealth of data and sell that data; a “consent economy” controlled by consumers; or a decentralized, self-sovereign identity model in which people would control their own identity. In any case, “we’re heading for a world where people do have more control over their personal data and can make more intelligent decisions with how they share it with brands,” said Frank. Ultimately, third-party data isn’t going to go away, said Frank. Brands just have to get smarter about how they use all the other types of available data — whether that’s zero-, first- or second-party data, or creative procurement of third-party data that respects privacy. In the meantime, continue to keep an eye on that cookieless crystal ball. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,610
2,022
"Why privacy-enhancing technologies may be the future of adtech | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-privacy-enhancing-technology-is-securing-first-party-data"
"Why privacy-enhancing technologies may be the future of adtech This article is part of a VB special issue. Read the full series here: How Data Privacy Is Transforming Marketing. Marketers are feeling the pressure — from consumers, regulators and security teams — to protect current and potential customers’ data privacy. They are also on the hunt for solutions. So, when Gartner added privacy-enhancing computation, also known as privacy-enhancing technologies (PETs), to its list of 2022 strategic technology trends, it was clear that these measures were inching up the Hype Cycle as a way to solve the consumer privacy conundrum. According to the IAB Tech Lab, a non-profit consortium created to develop foundational technology and standards that enable growth and trust in the digital media ecosystem, “PETs” is a broad umbrella term that covers a range of technologies focusing on protecting personal information, born out of the disciplines of encryption, machine learning, de-identification and cryptography. 
“This is a means by which not only can we solve for consumer privacy, but also data security,” said Anthony Katsur, CEO of IAB Tech Lab. “Given the direction of the privacy landscape at the moment from a regulatory point of view, and [a] technology point of view (think crumbling cookies and device IDs), it is becoming more apparent that PETs, which are ultimately technologies focused on maximizing data security to protect consumer privacy and minimizing the amount of data being processed, are going to form the future foundations of the ad-funded internet.” PETs address mounting challenges for marketers Between legislation like GDPR and CCPA, Apple’s new iOS standards and Google’s pending deprecation of third-party cookies for Chrome, privacy issues have reached a tipping point, explained Rich Sobel, founder and CEO at marketing consultancy Marcato Solutions. That has led to mounting challenges for marketers. “Historically, understanding customers and working with first, second and third-party data sets on those customers has been handled somewhat ‘upstream,’ where the data was applied and modeled in advance of buying ads,” Sobel explained, adding that digital media allows advertisers to determine the value of an ad at the moment, now applying — in relatively real-time, at the point of buying the ad — all the data previously used for modeling. “That methodology, along with an opt-out tracker, was always going to run into issues,” he said. 
“As a result, moving and addressing privacy through PETs has become one of the most, if not the most, important actions advertisers and publishers will take over the next 12 months.” Companies want to share data and collaborate Kansas City, Missouri-based startup TripleBlind provides an API that “allows your data to remain behind your firewall while it is made discoverable and computable by third parties for analysis and ML training,” said Chris Barnett, vice president of marketing at TripleBlind. Barnett explained that there are several different taxonomies for PETs, but they typically include areas like differential privacy, federated learning, synthetic data, secure multi-party computation, secure enclaves, homomorphic encryption and tokenization/data masking/data hashing. “Generally, companies are going down this road because they are moving to public cloud infrastructure and plan to share data and collaborate with different people,” he told VentureBeat. “Our technology is available right in the AWS or Azure store, for example.” Marketers are trying to use data collaboratively to understand their customers’ journeys from browsing to check out. “The customer journey is the double-bullseye use case for this,” Barnett said. “The classic example is if I’ve got a product that I’m marketing on my website and I advertise on social media, search and other internet properties, how do I understand where my customers are being influenced and generated? It’s getting harder and harder to do that.” That’s where there is typically an impasse — where one party doesn’t want to take on the risk of giving all of their data to another party. PETs, he explained, allow different parties to share and collaborate around sensitive information while preserving privacy and ensuring compliance. 
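Of the PET families Barnett lists, differential privacy is the easiest to illustrate: calibrated noise is added to an aggregate answer so that no single customer's record can be inferred from it. A minimal Laplace-mechanism sketch (the count and epsilon below are illustrative, not a tuned deployment):

```python
import random

def dp_count(true_count, epsilon=1.0):
    """Release a count with Laplace noise of scale 1/epsilon, the standard
    calibration for a counting query (sensitivity 1). Smaller epsilon means
    stronger privacy and a noisier answer."""
    # The difference of two Exponential(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

random.seed(42)
# e.g., "how many customers bought product X?" answered privately
print(dp_count(1000, epsilon=0.5))  # a noisy value, typically within ~10 of 1000
```

Each party sees only the noisy aggregate, never the row-level data, which is the core bargain behind clean-room-style collaboration.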
Jonathan Moran, head of martech solutions at SAS, said no one data alternative has yet emerged as “bulletproof,” but “a combination of PET practices like Universal ID, device fingerprinting and Digitrust will attempt to fill the gap and allow brands to prosper in the post-third-party cookie world.” Of those three, he says Universal ID — an identifying cookie that is stored in the HTML5 storage space of a user’s browser and is limited to first-party use only — is perhaps the most viable alternative. “But it will take some time and work before being rolled out more widely,” he said. The future of marketing and PETs There has certainly been plenty of movement on the PETs front over the past year. In August 2021, Meta said it was “investing in a multi-year effort to build a portfolio of privacy-enhancing technologies and collaborate with the industry on these and other standards that will support the next era.” In February, IAB Tech Lab announced a new working group on privacy-enhancing technologies, saying it “invites developers working on advanced cryptography, data scientists, privacy and security systems engineers, and others in the digital advertising community to come together to develop privacy-enhancing standards and software tools for the digital advertising industry.” For brands and publishers with strong, direct customer relationships from which to build first-party datasets, the adoption and application of PETs is fairly straightforward and the path is clear, said Sobel. But brands and publishers that have less direct customer relationships and less deterministic data on their customers will need proxy tools and partnerships to build better datasets through data clean rooms — places where “walled gardens” like Google, Facebook and Amazon, for example, share aggregated rather than customer-level data. 
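The hashing and tokenization techniques on Barnett's list, and email-based identifier schemes such as the Unified ID 2.0 concept mentioned earlier, boil down to deriving a shared pseudonymous ID from a normalized address. A minimal sketch; real systems layer salting, encryption and opt-out handling on top of this basic step:

```python
import hashlib

def email_to_id(email):
    """Derive a pseudonymous ID by normalizing an email address and
    hashing it with SHA-256. This is only the core idea: production
    identity systems add salting, encryption and opt-out checks."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Two parties hashing the same normalized address derive the same ID,
# so records can be matched without exchanging the raw email.
print(email_to_id("Jane.Doe@example.com") == email_to_id(" jane.doe@example.com "))  # True
```

Note that a bare hash of a known email is reversible by brute force, which is exactly why the production schemes do not stop here.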
“PETs and data clean rooms are incredibly powerful tools, so their adoption needs to be aligned with business use cases and not just technology for technology’s sake,” he explained. “They’re too expensive to ‘just have them.’” But according to IAB Tech Lab’s Katsur, while it is still early days for the use of PETs to tackle marketing’s most thorny privacy challenges, maturity is coming. “We’re currently still defining practical use cases within and outside the advertising ecosystem for PETs,” he said. “Next year, you will see more practical applications as we move from theory and education to real-world usage.” "
13,611
2,022
"Is privacy only for the elite? Why Apple's approach is a marketing advantage | VentureBeat"
"https://venturebeat.com/data-infrastructure/is-privacy-only-for-the-elite-why-apples-approach-is-a-marketing-advantage"
"Is privacy only for the elite? Why Apple’s approach is a marketing advantage This article is part of a VB special issue. Read the full series here: How Data Privacy Is Transforming Marketing. Shortly after GDPR went into effect in 2018, Apple began running privacy-focused advertisements and since then, has released several more along the same line — coming out with unique angles to showcase its enhanced security features. Using privacy as a marketing asset was viewed as a smart marketing move by Estelle Masse, Europe legislative manager and global data protection lead at Access Now, a data privacy advocacy organization that defends the digital rights of users worldwide. “Privacy is actually a commercial advantage,” Masse said. “Companies need to move beyond thinking it’s part of an annoying compliance checklist. 
It can be a competitive advantage for you and build trust for your users.” As other companies clamored to navigate compliance with enhanced privacy regulations while maintaining their marketing data strategies, Apple embraced privacy issues as a key point for its marketing. The company proved privacy could be an asset, rather than the liability it became for its Silicon Valley neighbor, Facebook (now Meta), which spent 2018 navigating the Cambridge Analytica data privacy scandal. Meanwhile, for other tech companies, privacy became a downfall instead of a key feature. Yet the divergent approaches to privacy by these two tech giants may have foreshadowed another problem: A privacy divide that’s only widening between consumers who can afford the products and devices that include strong privacy protections and those who cannot. Accessing data privacy comes at a cost Between the high cost of Apple devices and Facebook’s free model, where its users are the commodity sold — the differences paint a picture of the price consumers pay to protect their data and what it costs them if they cannot afford it. “Privacy should not be a luxury,” said Masse. “We need to see a lot of the privacy features created by Apple, or similar tools, replicated in more affordable products and devices.” Apple is making products that contribute to protecting privacy, in particular by limiting what other companies can know about us, she explained, but cautioned that Apple doesn’t always apply these standards to itself. “Apple has made it extremely easy for us as customers to reject ads from other apps and services, and with it, they help us protect our privacy,” Masse said. “Apple should not try to benefit from this feature to then serve us with their own ad services or tracking. 
Those should be turned off by default in all Apple products and apps.” Expecting consumers to spend more time and money to have autonomy over their own data isn’t a great way to treat customers, argues Daniel Weitzner, director of MIT’s Internet Policy Research Initiative and principal research scientist at its Computer Science and Artificial Intelligence Lab (CSAIL). “I give Apple a huge amount of credit for setting high expectations for the apps in their app store and the third-party devices that they interact with,” Weitzner said, “… But I worry that what we’ve done is put a lot more burden on the user to have a sense of privacy protection. Some of the costs are very direct. You have to pay more for a more privacy-protective smartphone, or you have to deny yourself access to certain kinds of savings for free services.” Data privacy for the powerful? Masse’s point raises the question: Can robust privacy protections become a luxury instead of a basic option for consumers? It’s a question, in fact, that has been asked for years. In 2017, Amanda Hess, internet and pop culture journalist, wrote in The New York Times: “Now that our privacy is worth something, every side of it is being monetized. We can either trade it for cheap services or shell out cash to protect it. It is increasingly seen not as a right, but as a luxury good.” A Morgan Stanley research report released in 2021 found that 81% of individuals feel they have little or no control over the data collected. Just as with the digital divide, those of lower socioeconomic backgrounds may not have the resources to take advantage of privacy protections from every angle and may be less likely to shell out extra cash for advanced privacy protections. 
Some experts argue that individuals can either pay more to have privacy protections built into the services they use, or educate themselves for free on how to take control of their privacy online by turning off cookies, asking apps not to track, scanning lengthy terms and conditions documents or using a VPN. Others disagree and say that socioeconomic factors contribute to issues around data privacy. “I think privacy in terms of data should be a fundamental right,” said Rafal Los, head of services and GTM at security solutions company ExtraHop. However, he admitted that it can be hard to advocate for a right that, at times, few people seem to actually care about. “It seems like people are willing to trade their passwords for a Snickers,” he said. Los added that he has a difficult time agreeing that there is a widening privacy gap where protections are more accessible to those who are more affluent. “Kim Kardashian is just as dumb with her privacy as anybody, as, like, the barista at Starbucks. It’s just not something people think about unless they’ve had a problem with it,” he said. “… Maybe I’m wrong, but I don’t think there’s a correlation between being wealthier or more affluent, or being better educated and caring about your privacy more … In practice, I just don’t see it.” Either way, others say it simply isn’t fair to put the responsibility to manage individual privacy on consumers alone. Consumers do care, they say, but often feel powerless in the face of the large companies whose services they need — having to just be okay with clicking through to be able to interface with whatever app or website they need at the moment. “I’ve done some of the empirical work that supports the argument that people do care,” said Jennifer King, Ph.D., privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. 
“I certainly think there can be educational holes there.” Bearing the burden of data privacy King pointed out that low-socioeconomic-status individuals, like many of us, may have access to technology, but may not have the knowledge to take advantage of protecting their privacy from every possible angle. They may use location services, for example, or click “agree” without fully knowing what is at stake. “My own research and others’ has demonstrated that people fundamentally don’t understand the trade-offs in many cases,” she said. Weitzner agreed, pointing out that the burden on the everyday person to control their privacy is too much. He noted that consumers have to agree to give up data to participate in everyday life, such as getting a credit card, taking out a mortgage or applying for a job. “Most people are in a position where they’re forced to trade their personal data for things that they want or even need,” Weitzner said. “So I think it’s true, if you’re prepared to spend a lot of time and effort and extra money, you can put some distance between yourself and the whole kind of profiling process that goes on — but I think it’s really hard for most people in any practical sense… we have to work too hard to get privacy today, and that’s not right.” Tough challenges for marketers Companies that aren’t Apple, of course, can’t simply incorporate robust privacy protections without figuring out how to still market to potential customers. Shoppers send a mixed message: As much as consumers do want privacy protections, further research from BCG and Google shows that two-thirds of consumers also want customized ad content — while half report that they are still uncomfortable sharing their data to receive such personalization. 
Still, with many regulations already in place and more on the way, no marketing organization will be able to ignore data privacy – whether or not their customers have the ability to pay for more privacy-centric products and tools. So, where do marketing teams go from there? Just as privacy comes with a price for consumers, companies are shelling out money as well, as they work to get up to speed on compliance with laws like GDPR or CCPA. In fact, a report from McKinsey predicts that companies that don’t figure out privacy solutions and rework their marketing strategies to comply can expect to spend as much as 10-20% more on marketing and sales just to see the same returns. Enterprise organizations governed by GDPR have had to make hard pivots in their strategies, and it hasn’t been easy. Preparing ahead of legislation as much as possible is ideal, according to Dan Peden, strategy director at performance marketing agency Journey Further. “We’ve seen marketing efficiencies drop … they’re being asked by their businesses to get more for less budget, or get more for the same budget or hit aggressive targets that came out of COVID,” he said. Without a lot of data, he said, that gets harder and harder to do, because marketers end up being more general with their targeting and fall back on more traditional marketing methods. “We use a lot of holdout testing, which is then designed to look at masses and whether we’re improving marketing — or whether the limited data that we have is actually the right people that we’re trying to reach,” Peden said. What marketers need to do — now To remain successful, the same McKinsey report recommends marketers stay vigilant about what’s coming next regarding privacy regulations and work now to demonstrate that privacy protections are a priority. The report notes that trust is key: When a consumer trusts a company, they are twice as willing to share their data as when they don’t. 
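The holdout testing Peden mentions can be sketched as a comparison between customers exposed to a campaign and a randomly withheld control group. The counts below are invented, and the z-score is only a rough significance check, not a full experimental design:

```python
import math

def holdout_lift(exposed_conv, exposed_n, holdout_conv, holdout_n):
    """Estimate campaign lift, plus a rough two-proportion z-score, from
    conversions in the exposed group vs. a randomly withheld holdout."""
    p_exp = exposed_conv / exposed_n
    p_hold = holdout_conv / holdout_n
    lift = (p_exp - p_hold) / p_hold
    # Pooled standard error for the difference in conversion rates.
    p_pool = (exposed_conv + holdout_conv) / (exposed_n + holdout_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / exposed_n + 1 / holdout_n))
    return lift, (p_exp - p_hold) / se

lift, z = holdout_lift(exposed_conv=540, exposed_n=10_000,
                       holdout_conv=450, holdout_n=10_000)
print(f"lift={lift:.1%}, z={z:.2f}")  # lift=20.0%, z≈2.93
```

Because it needs no individual-level tracking across sites, only random assignment and aggregate counts, this kind of measurement survives the loss of third-party data.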
On top of that, hurdles for organizations may depend on what sector they’re in. McKinsey found that highly regulated industries like healthcare and financial services are already trusted by consumers. Companies in those sectors already have privacy regulations baked in and won’t have to work as hard to build that trust. However, companies in technology, travel, transportation, media and entertainment have to work harder, as these are the industries consumers report trusting the least with their data. As privacy regulations continue to evolve in the U.S., an investment of time, resources and capital should be expected for enterprises in any sector. That means marketers need to get prepared. “The biggest thing you need to get ready for is your auditing,” Peden said. “Understanding what data you hold, where it comes from, how you store it and how long you store it for — that was a big undertaking for a lot of businesses in the EU.” For the time being, he added that marketers can prepare by “getting used to not having as much data and it being less personalized, less trackable, and then moving back towards more traditional methods for tracking — so, surveys, polls, pre- and post-surveys, and holdout testing.” While it’s difficult to predict what is ahead, Weitzner hopes companies will see the growing need to aid consumers in protecting their privacy and will make it easier to do so. He suggests looking back may actually help marketers as regulations continue to unfold. “In the early days of the Internet, we had to face the challenge of figuring out how to provide people assurance that you could safely use your credit card numbers, for example, online and it was far from a foregone conclusion in the late 90s,” he said. “But it worked out because we figured out the right kind of trust equation. 
I think now we have to kind of do it again, looking at much more intensive use of personal data, and providing more detailed accountability while giving people a sense of trust.” "
13,612
2,022
"Marketing in the era of data growth and privacy | VentureBeat"
"https://venturebeat.com/data-infrastructure/marketing-in-the-era-of-data-growth-and-privacy"
"Marketing in the era of data growth and privacy This article is part of a VB special issue. Read the full series here: How Data Privacy Is Transforming Marketing. For more than two decades, the holy grail of marketing has been focused on one-on-one connections between brands and shoppers. Companies that previously used television commercials to target the masses raced to take advantage of technologies like third-party cookies that tracked consumers across the internet — sweeping up vast swaths of easy-access data in order to serve precise ads to potential customers who might be interested in that very thing at that very moment. Now, the marketing landscape is in the midst of another near-total transformation, thanks to a growing focus — by consumers, regulators and Big Tech companies — on data privacy. By 2023, 65% of the world’s population will have modern privacy regulations protecting personal data, according to Gartner, while only 10% had those protections in 2010. The EU’s GDPR and California’s CCPA have led the way. 
Meanwhile, in late June, the Energy and Commerce Committee formally introduced the American Data Privacy and Protection Act (ADPPA) to the U.S. House, marking a major step forward for congressional data privacy negotiations. It’s getting real “I think the wake-up call is here,” Anthony Katsur, CEO of IAB Tech Lab, told VentureBeat. IAB Tech Lab is a nonprofit consortium with a global member community, created to develop foundational digital media technology and standards. “The industry is starting to react to the fact that this is real, and it’s going to become more real with real penalties, real fines, real ramifications for your business,” he said. Beauty retailer Sephora is one company that is already feeling the heat, with a $1.2 million settlement with the State of California announced last month. Meanwhile, third-party cookies have been almost completely phased out. Chrome, the most popular browser, remains the last major holdout, as Google recently announced it won’t get rid of third-party cookies in Chrome until the second half of 2024. But while this gives marketers a reprieve, advertisers see the writing on the wall about the deprecation of third-party cookies and most major brands have long been testing other options. In addition, some Big Tech companies have changed their privacy policies. Tim Cook, Apple CEO, has called protecting privacy “the most essential battle of our time” and Apple released App Tracking Transparency in its April 2021 mobile software update. Meanwhile, Google announced a multiyear plan to update Android privacy policies in February 2022, in order to catch up to Apple in limiting third-party data sharing on its devices. 
Marketers are second-guessing individual targeting All of this has led to a dizzying sea change for marketers, experts say, who have to adjust to a new age of marketing in a world focused on data privacy. “I think for the first time in 10 years, we see marketers second-guessing whether or not one-to-one communication and personalization is actually what they should strive for any longer,” said Samrat Sharma, global marketing transformation leader at PwC. “The reality is it’s not clear that will be possible or necessary.” Audience-based communication will still be the norm, he emphasized: “The question will be how to do that in a way that’s still personalized because we do know people don’t want to feel individually targeted.” The trick is, consumers want it all, which means marketers have to walk an increasingly treacherous tightrope to meet their expectations. According to Boston Consulting Group (BCG) research, two-thirds of consumers want ads that are personalized to their interests, yet nearly half are uncomfortable sharing data to create personalized ads. “At the core of it, the consumer is getting more aware of their privacy and demanding more from the value exchange around providing more privileged access to the brands,” said Sharma. “That’s what’s driving regulators to act, but then, in turn, manufacturers and publishers can respond to that,” he explained. A new marketing direction In a shifting marketing universe where shoppers crave personalization but also want privacy protection, what is a marketer to do? The answer, experts say, is to market smarter, with different formats and even newer technologies that help maximize conversion while keeping data privacy at the forefront. “It’s not going to be as simple as just targeting people based on third-party data,” said Andrew Frank, VP analyst at Gartner. 
“If you are a retailer or financial services company and you have a direct relationship with your customers, you have a lot more opportunities to solicit consent for personalized services.” Companies that have an indirect relationship with consumers, such as in consumer packaged goods, will have to start looking at more subtle efforts, like contextual targeting, an emphasis on tailored creative, and advanced artificial intelligence (AI) and analytics capabilities that can optimize based on non-personal signals, he explained. That has led to many efforts to replace third-party cookies with privacy-focused alternatives. Frank says he’s bullish on recent innovations such as IAB Tech Lab’s seller-defined audiences, in which, rather than publishers sharing person-specific identifiers such as a cookie-based ID or an email address with advertisers, audiences are grouped into categories based on demographics, interests and purchase intent using IAB Tech Lab’s Audience Taxonomy standard. “This enables publishers and retailers to define audiences for brands in a constructive way that doesn’t violate privacy,” Frank said. “I think they’re still working on modifications to the transparency and consent framework that would enable some kind of secure market for conceptual data in the advertising space.” Zero-party data, which goes beyond first-party data to focus on data that consumers voluntarily and deliberately share through website activity, messages, profiles and quizzes, is becoming an essential trend, according to Vivek Sharma, CEO and cofounder of Movable Ink, which uses AI to personalize marketing content. “If you fill out a wedding registry, that’s an example of zero-party data — you’re actively telling them what your preferences are and what you’re interested in,” said Vivek Sharma (no relation to PwC’s Samrat Sharma). “But this whole world of third-party data — where your information is broadcast — is over and done. 
No credible company is betting on that in the future.” Still, Katsur says he doesn’t think there will ever be a single solution to the future of addressability — the ability to target specific individuals — at scale for marketing purposes. “It’s going to be a portfolio solution, whether that be first-party identifiers or seller-defined audiences,” he said, adding that IAB Tech Lab also recently formed a working group to advance privacy enhancing technologies (PETs). This working group brings together developers working on advanced cryptography, data science and privacy, as well as security systems engineers, to develop privacy-enhancing standards and software tools using encryption, de-identification and machine learning. “That said, I think we’re on the cusp, perhaps, of a third act in digital marketing where I think there’s an opportunity for a renaissance in the ecosystem,” Katsur said. “There will be pain and turbulence, but I will not count out this industry in their ability to innovate to solve for the needs of marketers, media companies and consumers.” Things marketers should do 1. Evaluate your investments “I think there is a need to invest in new technologies and reevaluate the investments you may have made three or four years ago because the landscape has changed and it will probably continue to change,” said Gartner’s Frank. This is a volatile period, he explained, with big shifts in regulatory constraints, technology and what marketers can expect to deliver in terms of data access. “For some retailers and publishers, this looks like an opportunity because they have the capacity to capture data and use it as part of their relationship building, such as through a loyalty program,” he said. Others will need to invest in emerging technologies such as data clean rooms, which enable the secure collaboration of organizations around consumer data without leaking personal data to counterparties. 
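The privacy-enhancing technologies Katsur's working group is pursuing, and the data clean rooms just mentioned, share one core move: no raw identifier crosses an organizational boundary. Below is a toy illustration of the matching step, in which two parties blind their identifiers with a keyed hash and learn only the size of their audience overlap. The shared salt, sample emails and function names are illustrative assumptions, not any vendor's actual protocol; real clean rooms add access controls, query restrictions and often statistical noise on top.

```python
import hashlib

# Toy sketch of a clean-room-style overlap count: both parties hash
# identifiers with a shared secret salt, so only blinded values are
# compared and only the overlap size -- never the identifiers -- is
# reported. Illustrative only; not any vendor's actual protocol.

SHARED_SALT = b"per-collaboration-secret"  # agreed out of band (assumption)

def blind(identifier: str) -> str:
    """Return a keyed hash of an identifier."""
    return hashlib.sha256(SHARED_SALT + identifier.encode()).hexdigest()

def overlap_count(party_a_ids, party_b_ids) -> int:
    """Count how many customers two parties share, without revealing who."""
    blinded_a = {blind(i) for i in party_a_ids}
    blinded_b = {blind(i) for i in party_b_ids}
    return len(blinded_a & blinded_b)

brand_customers = ["ann@example.com", "bob@example.com", "cho@example.com"]
publisher_users = ["bob@example.com", "cho@example.com", "dee@example.com"]
shared = overlap_count(brand_customers, publisher_users)  # 2
```

Here the brand and the publisher learn only that two customers overlap, not which ones; resisting dictionary attacks takes more than a salted hash, which is part of what PET standards work addresses.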
“I think those technologies hold a lot of promise and clearly require some investment and experimentation to get right,” he added. 2. Work in partnership across the organization If consumers value privacy, and that trend is growing, technology solutions have to be in service to customers, said PwC’s Samrat Sharma, adding that while it is easier said than done, it has to start with partnering across the organization. “It’s about what you are trying to achieve,” he said. “If you don’t do it in partnership with IT or transformational teams, then you might stand up a DMP replacement, for example, but it won’t address broader business goals.” That means areas including analytics, IT, marketing and transformation need to come together so that everyone knows what the ultimate goals are, then ask “How are the technology and solutions we’re deploying in service of those goals?” 3. Ask the right question Frank added that one of the biggest questions he gets from marketers is, “How can we continue to target and measure our advertising in a way that keeps us accountable to the business under these increasingly restrictive constraints?” However, that may not be the right question, he explains. “I think the question that they should be asking is, how can we design a future that both respects consumer privacy and interests in general, and still enables us to deliver the best possible experience to our customers in a way that enhances the value of our brand, as opposed to genericizing it?” he said. Of course, these questions don’t have easy answers: “I think this is a problem that has a solution,” he said. “I think the road to the solution is very complicated and thorny.” Marketers won’t wait to take action Today’s consumers, of course, can easily vote with their feet — or with a website click. 
“So, it’s incumbent upon the marketing and advertising industries to figure out how to give consumers the privacy and data security they want, as well as the personalization they crave,” said Katsur. “If they’re going to be served ads, they might as well be relevant,” he added. “And let’s be clear: advertising isn’t going away. I think we all realize that.” As the holiday season approaches, complying with the new world of data privacy is becoming table stakes, added Movable Ink’s Vivek Sharma. “Marketers have to put on their thinking caps and go back to the drawing board about fundamentally creating value for their customers and earning their customers,” he said. Still, experts agree it is early days when it comes to solving issues related to marketing and data privacy. “I think I’m somewhat optimistic in the long term, but I think it’s one of those situations where you have to be careful not to confuse a clear view for a short distance,” said Frank. But marketers aren’t just going to wait for the final nail in the third-party cookie coffin to take action, emphasized PwC’s Samrat Sharma. “There’s still uncertainty, but they know they need to do something,” he said. “Everyone’s sick of kicking the can down the road. They’re moving forward with solutions.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,613
2,022
"The new meaning of PII — can you ever be anonymous? | VentureBeat"
"https://venturebeat.com/data-infrastructure/the-new-meaning-of-pii-can-you-ever-be-anonymous"
"The new meaning of PII — can you ever be anonymous? This article is part of a VB special issue. Read the full series here: How Data Privacy Is Transforming Marketing. Personal data doesn’t have to identify you to be personal. With the right technologies and artificial intelligence (AI), even a string of random numbers can be combined with other information to discover your identity, and become personally identifiable information (PII). This raises significant data collection challenges for organizations that need to collect data to generate insights and optimize customers’ experience, without leaving PII exposed to mismanagement or unauthorized third parties. At the same time, regulations like the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) expect enterprises to be much more proactive and transparent about how they process PII. 
Any mistake in collecting or processing this data can result in costly regulatory penalties and legal action, as Facebook parent company Meta found out last month after it received a $400 million fine for exposing children’s personal data on Instagram. But what is PII exactly? What data counts as PII? Before an organization can protect PII, it needs to identify what type of data falls under this classification. This is difficult because there is no universal definition of PII. According to Gartner VP analyst Bart Willemsen, what counts as PII “depends on who you ask.” Regulators in the U.S. and EU, for example, have different opinions on what constitutes PII. “In the U.S., with their fragmented and mostly absent privacy legislation, PII historically refers to two or three dozen identifiers like name, address, SSN, driver’s license or credit card number and such,” Willemsen said. However, in regions like the EU, and jurisdictions including China, Brazil, and states like California and Virginia, PII can be “anything that directly or indirectly identifies or assists in the identifiability of an individual,” Willemsen said. These core differences result in a different perception of PII under each regulator. For instance, categories of data that are PII under the GDPR but not under the CCPA include racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, sexual orientation and others. Willemsen also highlights that personal data is anything that says something about a person that can be used to single out an individual, even if they “remain nameless at first sight.” As a result, any information that carries a privacy risk, when processed outside of its original purpose, can be considered personal data. 
Generally, if this personal data can be combined with other information to re-identify an individual, then it becomes PII. However, organizations need to remember that developments in AI and machine learning are constantly changing how much data is needed to re-identify an individual. As AI solutions become more complex, it will take less data to tie a user to their online identity. In short, user anonymity could become a myth. Case study: Is GPS data personal data? There’s an argument to be made that even GPS data can be considered PII. “Location data has long been considered personal information, when tied to personal identifiers, like a name or phone number or device ID,” said Cobun Zweifel-Keegan, managing director of the International Association of Privacy Professionals (IAPP). So if GPS data can tie to a specific individual or device, it can be considered PII. However, Zweifel-Keegan notes that it’s not classified as personal data when it’s de-identified and stripped of identifying information that can be tied back to an individual, or if the location data gathered isn’t precise enough. In August of this year, the FTC announced it was suing Kochava for allegedly selling the personal GPS data of customers who’ve visited reproductive health clinics, places of worship, homeless and domestic violence shelters and addiction recovery facilities. It is important to note, however, that Kochava maintains it does not collect GPS data with its SDK and doesn’t sell any of the data it collects on behalf of its customers to third parties. “The FTC and this lawsuit wrongly assert that the Collective data marketplace provides real-time identifiable information about consumer activity that crosses locations that could be considered sensitive locations,” said Kochava founder and CEO Charles Manning in a statement. 
While this is a single case with its own nuances — where the FTC suggests that Kochava wasn’t just collecting GPS data, but also selling it to third parties — it still highlights that this is something regulators are paying close attention to. The FTC argues that Kochava’s data processes took information from mobile devices and packaged it into customized data feeds that matched unique mobile device identification numbers with time-stamped latitude and longitude locations, which it sold to third parties who could potentially use the information to re-identify the users. In this instance, according to the FTC’s press release, the agency “alleges that by selling data tracking people, Kochava is enabling others to identify individuals and exposing them to threats of stigma, stalking, discrimination, job loss, and even physical violence.” The regulatory crackdown on PII blunders Another problem surrounding the management of PII is that regulators are constantly developing new expectations for how organizations should manage and process it. Peter Hoff, vice president, security and risk at IT consulting company Wursta, suggests there is an ongoing crackdown on the mismanagement of PII across the U.S. “In 2023, five states are cracking down on what information companies gather about their customers and how that information is used and shared,” Hoff said. “At the same time, information security will become more of a concern at the federal level, with the U.S. government focusing on preventing American companies from knowingly or unknowingly putting customer PII and intellectual property into the hands of foreign governments,” he added. It does appear that regulators are being much less forgiving in how they assess data protection practices. Just last month, Morgan Stanley Smith Barney received a fine of $35 million for failing to properly dispose of approximately 15 million customers’ PII. 
The financial giant received this fine for hiring a moving and storage company that failed to conduct adequate data destruction and decommissioning devices before selling servers and hard drives containing PII to unauthorized third parties. How to make PII anonymous: Reverse engineering While managing PII in a way that’s compliant with international and domestic data protection regulations can be challenging, enterprises can mitigate the risks by periodically testing whether their users’ personal data can be re-identified. “Privacy should be integrated into the design of new products and services while trying to balance legitimate business interests,” said Criss Bradbury, principal and U.S. cyber data risk leader at Deloitte Advisory. “It’s important for organizations to select a data privacy rationalized framework that aligns well with their organizational strategic objectives — and to regularly test to confirm that data cannot be reverse-engineered to identify an individual,” Bradbury said. Without testing whether data can be reverse-engineered, an organization has no guarantee that PII can’t be recompiled to identify the end user. At the same time, Bradbury highlights that solutions offering encryption (at rest, in transit, in use), role-based access control (RBAC) and identity and access management (IAM) can mitigate some challenges surrounding managing personal data. Although de-identifying personal data is an option, it’s easier said than done, particularly when considering that some privacy regulations mandate that de-identified data be confirmed as such by an “expert determination” and a statistical analysis that confirms it can’t be traced back to an individual. Organizations that do attempt to de-identify PII should consider the risk that anonymous data can be combined with other information to discover a user’s identity. 
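One concrete, widely used way to follow Bradbury's advice to regularly test whether data can be reverse-engineered is a k-anonymity check: count how many records share each combination of quasi-identifiers, and flag any combination held by fewer than k people as a re-identification risk. The sketch below is a minimal illustration under assumed field names (zip3, age_band, gender), not a compliance-grade expert determination.

```python
from collections import Counter

# Re-identification smoke test: a record whose combination of
# quasi-identifiers is (near-)unique in the dataset can potentially
# be linked back to one individual by joining with outside data.

def k_anonymity_violations(records, quasi_identifiers, k=2):
    """Return quasi-identifier combinations shared by fewer than k records."""
    combos = Counter(
        tuple(record[field] for field in quasi_identifiers)
        for record in records
    )
    return [combo for combo, count in combos.items() if count < k]

records = [
    {"zip3": "941", "age_band": "30-39", "gender": "F"},
    {"zip3": "941", "age_band": "30-39", "gender": "F"},
    {"zip3": "100", "age_band": "60-69", "gender": "M"},  # unique -> risky
]
risky = k_anonymity_violations(records, ["zip3", "age_band", "gender"])
# risky == [("100", "60-69", "M")]
```

Records behind a flagged combination would need generalizing (a coarser ZIP prefix, a wider age band) or suppressing before the dataset could reasonably be called de-identified.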
"
13,614
2,022
"Data privacy is expensive — here's how to manage costs | VentureBeat"
"https://venturebeat.com/security/data-privacy-is-expensive-how-to-manage-costs"
"Data privacy is expensive — here’s how to manage costs This article is part of a VB special issue. Read the full series here: How Data Privacy Is Transforming Marketing. Data privacy has always been a top priority in both consumer and business circles. Individuals, including company employees, demand more control over how their personal data is used and greater transparency into how businesses manage customer information. If data is the currency of the future, then ensuring data privacy is the key to gaining user trust. In light of high-profile breaches and data leakage incidents such as the Sunburst SolarWinds attack, the Estée Lauder customer database leak, the discovery of Facebook and MGM Resorts confidential data on the dark web, and the resurgence of WannaCry, REvil and other ransomware attacks, companies have realized the need for robust data privacy strategies and processes. 
Solutions should focus on how personal data is collected, processed, stored, shared, retained and destroyed, while ensuring data availability and integrity and safeguarding assets from unauthorized access. This should also cover consenting to, blocking and disabling online cookies. In cases where organizations share data with each other, including with third-party vendors, the above practices also apply. Executives need to collaborate to balance risk, transparency, customer and stakeholder satisfaction, and compliance. Needless to say, privacy policies must strike a balance between risk, prioritization and the cost of failure or breach, as well as management commitment and operational and reporting costs. According to Gartner research, 75% of all organizations will restructure risk and security governance for digital transformation as a result of imploding cybersecurity threats, insider activity, and an increase in attack surfaces and vulnerabilities. Some companies have even appointed chief privacy officers, who serve as custodians responsible for this important function. Enlisting the services of privacy and compliance consultants, versus full or partial insourcing, is also an active and ongoing consideration for management. Non-compliance costs Data privacy often comes at a huge price — one that can’t be quantified in certain terms because the implications are vast. “It’s easy to see that data breaches can be costly for companies of all sizes. Companies should be investing in data protection at all levels like encryption, access control and incident response to prevent dangerous and expensive attacks,” said Soumendra Mohanty, chief innovation officer and chief strategy officer of data analytics company Tredence. “The costs of non-compliance are massive from both a financial and reputational perspective. 
It can cost companies up to nearly $31 million to maintain compliance, depending on the industry, yet non-compliance can quickly double those numbers,” Mohanty said. Fines, legal fees, and the loss of business are all potential consequences of failing to meet regulatory requirements. In some cases, companies may even be forced to shut down if they cannot comply with regulations. According to a HelpSystems report, the costs of non-compliance continue to grow annually, increasing by 45% over the past decade. These costs incorporate fines and penalties, the indirect costs of reputational harm, revenue and time lost, and business interruptions. Data privacy losses go beyond dollar value “The true cost of data privacy, broadly, is their trust with their customers,” said Akbar Mohammed, lead data scientist at Fractal AI. “In this era of customers increasingly becoming tech-savvy, as soon as they realize that their data isn’t secure, the company will risk loss of trust from consumers. This eventually results in a lot of business disruption.” Almost all companies that need to collect data for their operations should have a data privacy infrastructure in place. Companies should also set up dedicated security and compliance teams surveying data and technology assets, along with maintaining an aggressive threat detection policy. It’s imperative for companies today to have a data strategy, with policy and procedures governed by a data governance entity. “For large organizations, it’s best to have regular audits or assessments and get privacy-related certifications,” Mohammed said. 
“Lastly, train your people and make the entire organization aware of your activities, your policies.” Data privacy compliance regulations that matter To help project costs and financial implications, companies should be mindful of existing legislation and regulations like GDPR, the CCPA, HIPAA, the FTC Act and the GLB Act — alongside those on the horizon to address the pressing privacy and data challenges facing business operations everywhere. Navigating data privacy management According to Dan Garcia, CISO of EnterpriseDB, a provider of software and services based on the open-source database PostgreSQL, organizations should prioritize the security of their data, which starts with discovery within their systems. Having controls mapped to a data classification policy helps ensure appropriate protections against cybercriminals and other cyber threats. It’s a conscious effort within and across the business to support more secure practices. Organizations lacking internal resources, employee education, appropriate encryption and firewalls, or adopting poor password and privacy practices could experience a serious breach and resulting lawsuits that could cripple their business. It’s imperative that organizations invest in a strong backup solution, as backing up important files and information is essential for data security. With reliable backups in place, an organization can withstand common occurrences like system failures, hard disk failures, corruption and ransomware scenarios. “Cybercriminals have become skilled at identifying where backups are stored and purging them during ransomware attacks, so organizations should pay extra attention to how backups are protected, storing them in offsite locations, and ensuring they are securely managed,” he said. Developers and business leaders alike seek data ownership and control, and they simply don’t have time — or money — to waste. 
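Garcia's point about backups surviving corruption and tampering can be illustrated with a small integrity check: record a cryptographic checksum when a backup is written, store it separately, and verify it before any restore. This is a minimal sketch of that one narrow aspect; encryption, offsite copies and access control sit on top of it, and the function names and file handling here are illustrative.

```python
import hashlib
import pathlib

# Sketch: detect backup corruption or tampering by comparing a stored
# SHA-256 checksum (kept separately from the backup itself) against a
# fresh one computed before restore. Illustrative, not a full solution.

def checksum(path: pathlib.Path) -> str:
    """Stream a file through SHA-256 so large backups fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def safe_to_restore(path: pathlib.Path, recorded_checksum: str) -> bool:
    """Restore only if the backup still matches the checksum taken at write time."""
    return checksum(path) == recorded_checksum
```

A backup whose checksum no longer matches the recorded value has been corrupted or altered since it was written, which is exactly the scenario ransomware operators who purge or poison backups create.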
As enterprises adopt a cloud-first approach to their data management, they should invest wisely in technology providers that ensure robust privacy measures — without sacrificing ownership and access to their data. Data privacy checklist There is no one-size-fits-all checklist for data privacy management, as the specific requirements will vary depending on the type and size of the company, as well as the industry sector. Nonetheless, Evalueserve’s VP and global head of data and analytics Swapnil Srivastava shared the following cost overheads, in order of importance and cost, and why each is important:
Data protection initiatives: country-specific laws mandate strong governance and control of customer personal data.
Investments in specialized technologies to protect data and IT infrastructure assets: implementing compliance solutions requires investments in specialized software.
Compliance audits: companies are mandated to report to regulatory authorities and demonstrate proof of staying compliant.
Compliance policy development: clear policies with roles, responsibilities and ownership must be implemented regarding compliance activities.
Incident response ecosystem: to respond to a breach of compliance, companies must invest in an incident response solution.
Staff certification: mandated by regulatory authorities.
Communications and training: to ensure organizations have trained officials to engage, roll out and implement a compliance strategy.
Redress activities: to give companies standard operating procedures to deal with and settle issues arising from a compliance violation.
Sridhar Damala, CTO of Acuity Knowledge Partners, recommends that companies build in privacy by design rather than treating it as an afterthought if they wish to spend less than most companies. “Privacy by design ensures that you have the foundation built for scalability,” he said. 
“If you have the right set of tools, processes and automation in place from day 1, your spend on data privacy will be incremental rather than linear.” "
13,615
2,022
"How to navigate marketing with a focus on data privacy and compliance | VentureBeat"
"https://venturebeat.com/security/navigating-marketing-with-a-focus-on-data-privacy-and-compliance"
"How to navigate marketing with a focus on data privacy and compliance This article is part of a VB special issue. Read the full series here: How Data Privacy Is Transforming Marketing. After driving online tracking and personalization for years, third-party cookies are on their way out. The privacy concerns associated with tracking software have led internet giants to discontinue them. Apple and Mozilla have already got the ball rolling with their respective browsers, while Google is on track to follow suit sometime in 2024. While this is great news for privacy enthusiasts, marketers are not thrilled with the development. They are now exploring alternative ways of driving their marketing and personalization efforts, such as bringing artificial intelligence (AI) into the loop. “When marketers think about AI, they broadly think about three separate types of capabilities. 
First is generative-AI stuff, which is the ability of AI to create text or images (like GPT-3 or Dall-E ) for content production,” Andrew Frank , distinguished VP analyst for Gartner, told VentureBeat. “The other area is the analytics side , which covers things such as emotion sensing or eye tracking to infer attention and response to an ad. Finally, the third capability is AI-based decisioning, where an AI decides how to personalize a site or what offer to send you next.” Bringing AI and data into the mix With these AI-driven capabilities, marketers can quickly scale to reach more customers with relevant content and campaigns. However, when the talk is about AI, data becomes an inherent part of the conversation. It is the “fuel” that powers the models, but also one that should be used responsibly to ensure regulatory compliance and the trust of customers. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “The increase in data privacy regulations is forcing brands to take greater control of how they understand their customers,” said Raj De Datta, CEO of ML-driven personalization company, Bloomreach. “…They must truly understand who their customers are (instead of using cookies) to personalize and scale marketing efforts. The challenge this presents for marketers is that it forces them to take stock of how they capture and organize first and zero-party data.” First-party data is information gathered from customers’ web activity, while zero-party is the data consolidated through surveys and interviews. Both are equally important for marketing, which is why companies have already started looking at their whole data infrastructure to figure out how it has to change to store information in accordance with laws and technical restrictions. The matter, as Frank said, has escalated up to the highest level across organizations. 
The ideal approach: Minimize use of personal data To use first-party information, which is often a point of contention, with a focus on privacy, organizations need to focus on a few key elements, starting with minimizing the collection and use of personally identifiable information (PII). “A lot of algorithms today don’t necessarily rely on personal data. In fact, there’s maybe a misconception that in order to do personalization well, you really have to have a lot of personal data and you really have to have intimate knowledge about people’s personal behaviors and habits from history,” Frank said. “This isn’t the case. It’s possible to infer things about people’s intent and priorities, just based on their behavior in a session, without necessarily capturing any data about who they are or any data that would be considered personal data.” Marketers, to begin with, should try to make the most of technical information at their disposal – like device, language or operating systems being used — as well as contextual data that’s available through multiple tech touchpoints. The latter could include anything from general location or time of customer interaction to content-based contexts, like what the person was looking at or what path they took to arrive at a certain location or experience. Then, marketers can build on it by bringing synthetic or artificially generated data into use. 
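Frank's point about session-only personalization can be illustrated with a small sketch: infer a visitor's likely intent purely from non-personal session context (device, pages viewed), with nothing stored about who they are. The intent categories, URL patterns and rule weights below are hypothetical illustrations, not drawn from the article:

```python
# Sketch: infer a visitor's likely intent from session-only signals
# (device type, path taken this session) without capturing any PII.
# Category names and weights are hypothetical.

def infer_session_intent(session):
    """Score likely intent categories from non-personal session context."""
    scores = {"research": 0.0, "purchase": 0.0, "support": 0.0}
    # Path-based signals: what the visitor looked at in this session.
    for page in session.get("pages_viewed", []):
        if "/pricing" in page or "/cart" in page:
            scores["purchase"] += 1.0
        elif "/docs" in page or "/blog" in page:
            scores["research"] += 1.0
        elif "/help" in page:
            scores["support"] += 1.0
    # Technical context: a short mobile visit often signals early research.
    if session.get("device") == "mobile" and len(session.get("pages_viewed", [])) < 3:
        scores["research"] += 0.5
    return max(scores, key=scores.get)

session = {
    "device": "desktop",
    "pages_viewed": ["/blog/intro", "/pricing", "/cart"],
}
print(infer_session_intent(session))  # prints "purchase"
```

Nothing here identifies the visitor; the signals can be discarded when the session ends.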
Getting to know customers without relying on cookies “In the cookieless world, privacy-respecting, AI-based marketing will be able to learn from broad, unlabeled signals and generate engagement and sales from a much larger set of people than today,” said Nadia Gonzalez, CMO at Scibids. “The best practice here is to only utilize signals that do not require user tracking and profiling for targeting.” If all these elements are not enough and the company still sees clear consumer value from gathering some level of personal data, then it should clarify the value proposition and seek consent for the data. This way, customers could weigh the benefits against the level of intrusion, and make a rational decision on whether they want to opt into sharing or not. “In case you’re a specialist retailer or a mono-brand retailer, you might have a good case for wanting someone to personalize their experience because they’re intimately involved with the products and you have a transactional relationship that you can use to recommend new products or remind them when it’s time to get service,” Frank said. “However, if you’re a CPG manufacturer that sells mostly through the retail channel, then maybe you don’t have such a good case for why someone would want to share data with you.” “For the most part, you’re going to have to think about how you can acquire aggregate data through collaborative strategies, like data clean rooms , to understand patterns and trends and help inform your marketing strategy, without getting involved in the acquisition of personal data,” he added. Transparency is critical However, whenever one resorts to collecting personal data, just making the value proposition clear will not be enough. The organization should also ensure transparency by fully disclosing and explaining how the data sought will be put to use — and that it will be deleted or modified if the user chooses so. 
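The consent-first approach described above can be sketched as a default-deny gate in front of any personal-data collection: no recorded, purpose-specific grant means no collection. The purpose names and store design are illustrative assumptions, not a real CDP API:

```python
# Sketch: gate personal-data collection behind explicit, per-purpose
# consent. Purpose names and structure are hypothetical.

class ConsentStore:
    def __init__(self):
        self._grants = {}  # (user_id, purpose) -> bool

    def record(self, user_id, purpose, granted):
        self._grants[(user_id, purpose)] = granted

    def allows(self, user_id, purpose):
        # Default deny: no recorded grant means no collection.
        return self._grants.get((user_id, purpose), False)

def collect(store, user_id, purpose, value, sink):
    """Store a personal data point only if consent exists for this purpose."""
    if store.allows(user_id, purpose):
        sink.append({"user": user_id, "purpose": purpose, "value": value})
        return True
    return False

store = ConsentStore()
store.record("u1", "personalization", True)
sink = []
print(collect(store, "u1", "personalization", "likes hiking", sink))  # True
print(collect(store, "u2", "personalization", "likes hiking", sink))  # False
```

The default-deny design matches the article's framing: the burden is on the brand to earn an opt-in, not on the customer to opt out.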
This way, marketers can gradually build trust with their customers, ultimately giving them enough confidence to be open to sharing their information. “You can’t assume you’re too small to get caught or that certain regulations don’t apply to you. Data privacy regulations will only continue to grow, and businesses that are building their practices (marketing or otherwise) with their customers’ privacy at the forefront today will be ready for whatever may be down the road,” De Datta said. “When it comes to your marketing, be transparent about how you are using people’s data. Simple things such as ‘These items are here because you marked them as favorites’ help to build trust with your customers.” Yes, at times, going into the details of data use can prove tricky, owing to the complex nature of AI algorithms. However, in these cases, companies should focus on creating a pathway that could provide the required information while not becoming burdensome or disruptive at the same time. Prepare for conflicting scenarios In addition to this, enterprise marketers should prepare to deal with certain conflicting imperatives owing to regulatory restrictions. Many privacy laws require the deletion of data when requested by users or when it has outlived its purpose. And at the same time, there are also regulations that require certain data to be retained for different periods depending on the category of data and the region in which it was collected. Alternatively, there can be conflicts between wanting more data for reasons such as eliminating bias in algorithms or keeping less of it for privacy. While this can be complicated and become worse if states continue to pursue the enforcement of local laws outside their geographical boundaries, Frank suggests planning for different scenarios instead of settling on a single strategy. “You can’t settle on one choice and have to create enough flexibility and adaptability in your solution. 
This way, if things take a turn one way or the other, you’re able to adapt quickly and effectively to the way that the situation unfolds because it’s very unpredictable and unstable right now,” he said. The big opportunity More than 80% of industry experts already integrate some form of AI in their marketing activities, and the adoption is expected to grow thanks to the obvious benefits around targeting, personalization and analysis. According to Statista , the market for AI in marketing was estimated at $15.8 billion a year ago and will likely surpass $107 billion by 2028. Clearly, there is much innovation to come by all players in the industry that will make marketing more effective. But, as Gonzalez pointed out, driving ROI will become increasingly dependent on AI solutions that do not rely on personal data or cross-site behavioral analysis. “We all have a great opportunity today to grow the category and significance of digital marketing while reducing friction between consumers, brands and the economic engine of the Web. “The future of ad tech and post-cookie digital marketing and the business growth it enables is bright. We have the technology to leave burdensome, intrusive, legacy technology from another age behind,” she said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,616
2,023
"Data de-identification: Best practices in the new age of regulation | VentureBeat"
"https://venturebeat.com/data-infrastructure/best-practices-on-data-collection-and-consumer-protection"
"Data de-identification: Best practices in the new age of regulation Illustration by: Leandro Stavorengo This article is part of a VB special issue. Read the full series here: Building the foundation for customer data quality. As organizations work to glean as much knowledge as possible from the massive trove of customer information available today, it is becoming increasingly important to de-identify that data as it moves around and between an organization, the third parties it works with and the various applications that consume the data — particularly those in the cloud. Of course, healthcare professionals in the U.S. have been well aware of this imperative for years, having labored under the Health Insurance Portability and Accountability Act (HIPAA) privacy standards since the mid-’90s. 
More recently, similar privacy concerns over personally identifiable information (PII) have become a top priority for regulators, consumers and companies around the globe. IT research firm Gartner estimates that by the end of 2024, 75% of all consumer information globally will fall under some type of regulation. For its part, California has recently passed the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA), both dealing with consumer data and privacy. And the EU’s General Data Protection Regulation (GDPR) is starting to be enforced with vigor. Facebook’s recent $1.3B fine for moving data from the EU to the U.S. is a painful reminder that regulators are taking the issue seriously. Had that data been de-identified, the fine might never have been levied, said Joseph Williams, a partner in the cybersecurity practices at Infosys Consulting. And then there is the reputational threat to organizations that do not at least give the appearance of protecting their customers’ personal information should the company ever be breached and the information end up in the hands of cyber-criminals. Cybersecurity professionals believe that most consumers have been the unwitting victims of a data breach in the last 10 years. Much of that data is for sale on the Dark Web. Some would argue that any data de-identification work is really just an exercise in virtue signaling, given the ease with which individuals can be identified today by cross-correlating publicly available data, said Williams. “When you start to blend the processing power of AI with what’s out on the Dark Web and social media … and open datasets, suddenly they can put everything into automatic discovery mode and come up with all kinds of interesting things,” he said. 
“And so the de-identification of data as a technology approach is a way [for regulators] to say, ‘We have imposed these burdens on these companies in order to protect your privacy.’” Data de-identification techniques and practices Virtue signaling or not, there are a lot of ways to de-identify data today, said Sameer Ansari, managing director and practice lead for the data privacy team at business consulting company Protiviti. The main challenge isn’t necessarily technical (although given the large volumes of structured and unstructured data to be de-identified, the task is challenging); it’s using the least disruptive technique to achieve the required results. “Some of it depends on what the problem actually is,” Ansari said. “So, starting at why are you looking for a solution and what industry are you in, there might be use cases where you’re saying, ‘Listen, masking [for example] is not an option.’ That’s going to be the challenge. It’s going to depend a lot on the use case, unfortunately.” One technique being deployed today is redacting. This is where PII such as social security numbers, addresses and email addresses are either masked with symbols such as an asterisk or replaced with synthetic or fake data, explained Anshumali Shrivastava, an associate professor of computer science at Rice University and founder of ThirdAI Corp. Aggregation, where datasets are generalized into groups such as by age range, is also popular and effective, he said. Tokenization is a method that replaces the sensitive data with consistent replacement strings that have no meaningful value if breached, said Kayne McGladrey, a senior member of the IEEE, the world’s largest technical professional organization. “One of the most common standards in the United States is the HIPAA Safe Harbor method , which requires the removal of all 18 identifiers of individuals, relatives, employers and household members,” he said. 
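Two of the techniques named here — masking (redaction with symbols) and tokenization (consistent, meaningless replacement strings) — can be sketched in a few lines. The salt and token format are hypothetical; a production system would use a managed secret and a vetted tokenization library rather than a bare hash:

```python
import hashlib

# Sketch of two de-identification techniques: masking (redact the
# sensitive part with asterisks) and tokenization (replace a value
# with a consistent string that is meaningless if breached).
SALT = b"example-secret-salt"  # hypothetical; use a managed secret in practice

def mask_email(email):
    """Redact the local part of an email address, keeping the domain."""
    local, _, domain = email.partition("@")
    return "*" * len(local) + "@" + domain

def tokenize(value):
    """Map a sensitive value to a consistent, non-reversible token."""
    digest = hashlib.sha256(SALT + value.encode()).hexdigest()
    return "tok_" + digest[:12]

print(mask_email("alice@example.com"))                      # *****@example.com
print(tokenize("123-45-6789") == tokenize("123-45-6789"))   # True: same token every time
```

The consistency property is what keeps tokenized data useful: the same customer maps to the same token across systems, so joins and analytics still work without exposing the raw identifier.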
Emerging trends in de-identification The privacy vault method, where data passes through a “vault” to be de-identified, is gaining in popularity, said Infosys’ Williams. A vault can apply various de-identification techniques and relies on encryption keys to keep records from being re-identified after they pass through the vault. “It wouldn’t do that for all the data, but … the privacy vault would mask [PII] to the customer support person [for example] who would be looking at my record. [The real data] would still be there, still be useful to the company, but … there’s no reason why the customer support person in the state of Washington needs to know my date of birth.” Confidential computing also is an emerging technology meant to protect data in use, said McGladrey of the IEEE. “Confidential computing can allow the processing of data from multiple parties without sharing the input data with those other parties,” he said. “For example, if an organization wants to perform processing on a large set of healthcare data collected from multiple third-party organizations, properly configured confidential computing potentially permits those third parties to provide their data for processing in aggregate. In this scenario, not even the cloud provider can see the cleartext data provided by the third parties, or the results.” Another area of interest for de-identification advocates is synthetic data generation for research purposes, said Shrivastava. In this approach data is generated to mimic the real data it is replacing. Because the data retains the same statistical characteristics and patterns of the original information, data quality isn’t compromised. This method reduces the risks of exposing sensitive information when sharing datasets for scientific studies and research. 
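The synthetic-data idea described above can be illustrated with a toy example: fit simple statistics on a real column, then sample fake values that preserve its distribution. Real generators model joint distributions across many columns; this single-column sketch only shows the principle:

```python
import random
import statistics

# Toy sketch of synthetic data generation: learn per-column statistics
# from real records, then sample fake records that preserve them.

def fit(real_ages):
    """Summarize the real column as (mean, stdev)."""
    return statistics.mean(real_ages), statistics.stdev(real_ages)

def sample_synthetic(mean, stdev, n, seed=0):
    """Draw n fake ages from the fitted distribution (floor of 18)."""
    rng = random.Random(seed)
    return [max(18, round(rng.gauss(mean, stdev))) for _ in range(n)]

real = [23, 35, 41, 29, 52, 38, 44, 31]
mean, stdev = fit(real)
synthetic = sample_synthetic(mean, stdev, 1000)
# The synthetic mean tracks the real mean without exposing any real record.
print(abs(statistics.mean(synthetic) - mean) < 3)  # True
```

Because no synthetic row corresponds to a real person, the dataset can be shared for research with far lower re-identification risk, while aggregate analyses remain approximately valid.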
The challenge ahead For most organizations today, data de-identification isn’t going to protect them from the fallout of a serious data breach, but it will help them ensure that the customer data they share during the normal course of business is protected from casual or uninformed misuse and exposure. Fortunately, from a technical perspective there are many ways to do this, including using a service from an organization’s existing software vendor, such as Salesforce or Snowflake. The main challenge most organizations will face is understanding where and when it is needed and, when it is, what method of de-identification will serve the purpose at hand without causing a ripple effect that breaks other business processes along the way. "
13,617
2,023
"Navigating the personalization minefield: How businesses can improve customer experiences and loyalty | VentureBeat"
"https://venturebeat.com/data-infrastructure/personalization-is-harder-than-it-seems"
"Navigating the personalization minefield: How businesses can improve customer experiences and loyalty Illustration by: Leandro Stavorengo This article is part of a VB special issue. Read the full series here: Building the foundation for customer data quality. From Netflix to Amazon, almost every company today is racing to grow its business through personalization. They are using data; they are pushing out ads, recommendations and messages; and they are expecting customers to interact with those communications for increased sales and long-term relationships. The idea is to treat each prospect as a unique individual, bringing a much-needed human touch to the way businesses are run. Even end users are on board with this approach, with as many as 66% saying in a Twilio survey that they are likely to stop using a brand and switch to alternatives if their experience is not tailor-made. 
But here’s the thing: Even as brands and consumers are all in favor of personalization in concept, the execution still appears to be going wrong. In the same Twilio poll, which involved nearly 5,000 B2C leaders and 6,000 customers, 91% of brands said they often or always personalize engagement with consumers, but just 56% of consumers agreed that this was the case, highlighting a major gap that needs to be addressed. “Consumers today are bombarded with marketing messages across several channels from brands. And more often than not, what they’re receiving is irrelevant and only serves to frustrate,” Robin Grochol, VP of product management at Twilio , told VentureBeat. Simply put, either companies are not delivering the right content or they are delivering the right content at the wrong time — leading to poor customer experience, missed opportunities and a waste of valuable marketing dollars. Building the right data foundation While most businesses realize gathering data is the key to successful personalization, what many miss out on is understanding the type of data they need to capture to build a strong customer profile, and the right approach to activate that profile. Even if cookies are going away , about 80% of companies continue to rely on third-party data to drive personalization. This is a major roadblock as most of this information comes from outside sources and is not that accurate for building customer profiles, even when one employs AI and ML models to correlate and fill in the blanks. As an alternative, Grochol suggests companies should focus on acquiring zero- and first-party data, which comes directly from customers, is more accurate and better reflects current demand. Zero-party data is information proactively shared by customers, while first-party data stems directly from their interactions. 
This data provides an easy way to gain insights into user behavior (like how they work, travel, dine and spend) and related context. “That’s why companies track user ‘events,’ i.e. every significant action a user takes within the digital product, like retaining, churning or transacting,” Amir Movafaghi, CEO of product analytics platform Mixpanel , told VentureBeat. “User event data can be segmented to help understand what different groups of users have in common or how they’re different. This allows companies to build user cohorts and understand what those groups value during each stage of the customer experience. This understanding is crucial to building a personalization strategy that delivers long-term value to customers.” Once a team knows what it wants to track, the next step is to break down silos and bring every relevant piece of information (covering data from all touchpoints and interactions) into a single customer data platform — while also maintaining the quality and accuracy of that data through master data management (MDM) or the use of data clean rooms. This is critical, as most companies today maintain dozens of internal systems, keeping data scattered across not only various databases and formats (separated by product line, geography, etc.) but also different departments. Such fragmentation makes it difficult to build a unified 360-degree view of the customer. The gaps can also mean the data might have missing fields or is outdated. “If data is siloed or gets stuck in one system of record, a brand effectively misses a piece of its customer puzzle. Businesses need to have the tools to not only merge customer data from different sources into a single, unified customer profile but to then easily move that profile to whichever downstream tools they’re using to engage customers,” Twilio’s Grochol noted. 
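Movafaghi's point about segmenting tracked user events into cohorts can be sketched as follows; the event names and cohort rules here are hypothetical placeholders, not Mixpanel's actual model:

```python
from collections import defaultdict

# Sketch: group tracked user events into behavioral cohorts.
# Event names and cohort rules are hypothetical.

def build_cohorts(events):
    """Assign each user to a cohort based on the events they fired."""
    by_user = defaultdict(set)
    for event in events:
        by_user[event["user_id"]].add(event["name"])
    cohorts = defaultdict(list)
    for user, names in by_user.items():
        if "purchase" in names:
            cohorts["converted"].append(user)
        elif "add_to_cart" in names:
            cohorts["considering"].append(user)
        else:
            cohorts["browsing"].append(user)
    return cohorts

events = [
    {"user_id": "u1", "name": "page_view"},
    {"user_id": "u1", "name": "add_to_cart"},
    {"user_id": "u2", "name": "page_view"},
    {"user_id": "u3", "name": "purchase"},
]
print(sorted(build_cohorts(events)))  # ['browsing', 'considering', 'converted']
```

Each cohort can then be targeted with stage-appropriate messaging — which is the "understand what those groups value during each stage" idea from the quote above.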
Activating customer profiles Once relevant customer profiles are ready, they have to be activated in real time via the customer data platform (CDP) to serve personalized messaging just when customers need it. To do this, the platform groups customers with similar characteristics or behaviors and targets those segments with tailor-made marketing campaigns, offers and experiences, delivered via integrated marketing and advertising tools. For instance, if a customer goes through the process of buying a SaaS product subscription but pulls back at the last minute, the data from that interaction would go into their CDP profile in real time and the system could email them a personalized discount code. This not only increases the chance of new signups but also helps build loyalty and customer lifetime value (LTV). “Ideally, these crucial aspects (customer profiling and activation) should be utilized on one unified platform, enabling businesses to act on their data across multiple channels, while also measuring results,” Anil Mathews, CEO of data intelligence platform Near , told VentureBeat. “Without a unified approach, using multiple vendors can lead to data silos and confusion about governance, creating difficulties for the organization — before even attempting to reach customers. Having a solid foundation and a clear data strategy with the right elements in place will result in a clear and accurate view of your customers, further resulting in hyper-personalization and business success,” Mathews said. Crawl, walk, then run While deploying the right data and technology can help companies deliver individualized experiences and accelerate revenue growth, it’s important to note that personalization may not be needed everywhere. To make sure leaders do not lose sight of what’s needed amid the growing pursuit of personalization, Arun Kumar, EVP, data and insights at digital experience company Hero Digital , recommends a crawl, walk and run approach. 
He says businesses should first be crystal-clear on why they want to deliver personalized experiences and understand where these will create value in the customer journey. Only when it’s clear what customers want, and that this can be addressed with personalization, can the business align expectations and articulate its value proposition to specific customer segments — moving on to the next (“walk”) stage. Finally, when the personalization efforts are up and running, teams should keep an eye on the ground to see what’s working and optimize accordingly, Kumar says. “When you personalize, you get a lot of customer signals back. If you’re personalizing well, it will show up in your data. People will sign up more. They will buy more. This means you need to listen to customer behavior and be very proactive with data analytics to understand how people are responding,” he said. "
13,618
2,023
"Turning Customer Data into Gold: The Key to Exceptional Experiences and Business Growth | VentureBeat"
"https://venturebeat.com/data-infrastructure/unleashing-power-data-transforming-customer-experience-business-success"
"Turning Customer Data into Gold: The Key to Exceptional Experiences and Business Growth Illustration by: Leandro Stavorengo This article is part of a VB special issue. Read the full series here: Building the foundation for customer data quality. In today’s fast-paced business landscape, where competition is fierce and customers are more demanding than ever, data has become the ultimate game-changer for enterprise leaders. Data collection holds the key to unlocking the secrets of customer behavior and preferences, paving the way for exceptional customer experiences and unparalleled business success. But what exactly is data collection? Collecting data is more than just a mundane process of gathering scattered bits of information. 
It’s a meticulous art form that involves capturing the right data points from the right sources, organizing them in a structured manner, and transforming them into actionable insights. It’s about understanding that data comes in various forms, from individual customers’ personal details to the behavioral patterns of entire markets. Enterprise leaders who embrace the transformative potential of data collection position themselves as pioneers in their industries. They recognize that every piece of data holds a hidden treasure waiting to be unearthed. With the right tools and technologies at their disposal, they can delve deep into the vast ocean of data, uncovering invaluable gems that empower them to make informed decisions, shape exceptional customer experiences and ultimately stay ahead of the competition. Why is it important for your business? For enterprise leadership, data collection isn’t just a tool — it’s the catalyst for unprecedented growth, customer-centricity and strategic decision-making. Here’s why: Unveiling the customer’s mind: Delve deep into the hearts and minds of your customers by harnessing the power of data. By capturing and analyzing insights on their preferences, purchasing habits and demographics, you gain an intimate understanding of your target audience. Armed with this knowledge, you can tailor your products, services and marketing campaigns to exceed their expectations. Crafting unique experiences: Data collection empowers you to create extraordinary experiences for your customers. By sifting through customer data, you uncover hidden patterns and trends that enable you to offer personalized recommendations, exclusive deals and bespoke marketing messages. This level of personalization not only delights your customers but builds unwavering loyalty. 
Guiding the way with intelligent decisions: The treasure trove of data you accumulate provides invaluable insights into market dynamics, customer behavior and operational performance. Armed with this knowledge, you can make informed decisions that shape the destiny of your enterprise. Whether it’s developing groundbreaking products, devising pricing strategies or unearthing untapped markets, data-driven decision-making becomes your secret weapon. Mastering the art of efficiency: In embracing data collection and analysis, you embark on a journey toward operational excellence. Discover bottlenecks, streamline processes and boost efficiency by meticulously monitoring your operational data. Armed with real-time key performance indicators (KPIs), you can make agile adjustments that lead to cost savings and remarkable gains in productivity. Unlocking the future: Embrace the power of data collection and propel your enterprise toward predictive analytics and forecasting. Using historical data, unveil future trends and anticipate customer needs like never before. Be a visionary leader, foresee potential challenges, and take proactive measures to prevent churn, improve customer retention and secure your competitive edge. Types of data collected By harnessing a diverse range of customer data, enterprise leaders can uncover hidden patterns, decode customer preferences and make informed strategic choices. Such data intelligence empowers enterprises to customize their offerings, provide personalized experiences and design effective marketing strategies, resulting in heightened customer engagement and loyalty. Personal data Personal information: This category encompasses data such as names, addresses, contact details and other personally identifiable information (PII) that customers willingly provide during their interactions with a business. Personal information enables the identification of individual customers and facilitates effective communication. 
Demographic data: Demographic data includes characteristics such as age, gender, income level, occupation, education and marital status. Collecting demographic data allows businesses to segment their customer base, understand their target audience and develop targeted marketing strategies. Purchase history: Gathering data on customers’ past purchases provides insights into their buying behavior, preferences and product interests. This enables businesses to personalize product recommendations, offer relevant promotions and enhance the overall customer experience. Customer preferences: Obtaining data on customer preferences, such as preferred communication channels, product features or delivery options, helps businesses tailor their offerings to individual preferences. This data facilitates the provision of personalized experiences and the cultivation of long-term customer loyalty. According to Soumendra Mohanty, chief strategy officer at data science company Tredence, personal data plays a crucial role in helping businesses understand customer behavior, preferences and needs. This understanding enables businesses to personalize their services and products, leading to improved customer experiences. Additionally, personal data allows for targeted marketing campaigns, increasing conversion rates and sales. Analyzing personal data also provides insights into customer churn, allowing businesses to take proactive measures to retain customers. It is important, however, to use personal data in compliance with privacy regulations to maintain trust. In summary, when used responsibly and effectively, personal data can significantly drive business growth and return on investment (ROI). Profiling customers across various digital platforms, physical stores and other touchpoints can be challenging because of the scattered nature of first-party customer data. Moreover, the utility of third-party data is decreasing because of stricter privacy regulations. 
Therefore, businesses need to develop intelligent strategies to consolidate and interpret customer data while respecting privacy norms. By using first-party customer data, third-party data partnerships, and machine learning algorithms, businesses can accelerate their journey towards customer personalization. Implementing personalization strategies tailored to an enterprise’s maturity level is key, ranging from channel-specific recommendations to cutting-edge next best actions. By structuring and streamlining customer data, businesses can integrate valuable customer insights into all their operations and applications. This approach empowers businesses to achieve a high level of customer-centricity, resulting in increased engagement and customer satisfaction. Behavioral data Website analytics: Website analytics tools track and collect data on user interactions with a business’s website. This includes information on page views, click-through rates, bounce rates and conversion rates. Analyzing website analytics data helps optimize website design, navigation and user experience. Clickstream data: Clickstream data refers to the sequence of web pages a user visits and the actions they take on a website. It encompasses data on the duration of visits, specific links clicked and interactions within the website. Analyzing clickstream data provides insights into user behavior and interests as well as areas for potential website improvement. Social media interactions: Data collected from social media platforms includes user engagement metrics such as likes, comments, shares and followers. It offers businesses insights into customer sentiment, brand perception and trends. Social media data assists in social media marketing, content creation and reputation management. App usage patterns: For businesses with mobile applications, collecting app usage data is crucial. It involves tracking user activities within the app, including time spent, features used and user flows. 
App usage data assists businesses in improving app functionality, optimizing user experience and identifying opportunities for engagement. Operational data Sales and transaction data: This category encompasses information on purchases, order history, payment methods and revenue generated. Analyzing sales data helps identify popular products, revenue trends and customer buying patterns, aiding in inventory management and forecasting. Inventory management data: Data related to inventory management includes stock levels, replenishment rates and supply chain information. By collecting and analyzing this data, businesses can optimize inventory levels, avoid stockouts and improve supply chain efficiency. Supply chain data: Supply chain data includes information on suppliers, logistics, transportation and production processes. Collecting and analyzing supply chain data enables businesses to streamline operations, identify inefficiencies and optimize the supply chain for cost savings and improved customer satisfaction. Customer support interactions: Data collected from customer support interactions, such as emails, live chats and phone calls, provides insights into customer issues, inquiries and feedback. Analyzing customer support data helps identify recurring problems, improve support processes and enhance overall customer satisfaction. Data collection methods Surveys and questionnaires Surveys and questionnaires are widely used for collecting customer data. Businesses can design and distribute surveys to gather specific information from customers, such as their preferences, satisfaction levels and feedback on products and services. Surveys can be conducted through various channels, including online platforms, email and in-person interactions. The collected data can be analyzed to gain insights into customer opinions and preferences. 
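Behavioral metrics like the bounce rates and session durations described above can be derived from a raw event log in a few lines. A minimal sketch in Python, where the event schema and numbers are purely illustrative:

```python
from collections import defaultdict

# Hypothetical raw clickstream events: (user_id, timestamp_seconds, page).
# The schema is illustrative, not taken from any specific analytics tool.
events = [
    ("u1", 0, "/home"), ("u1", 40, "/pricing"), ("u1", 95, "/signup"),
    ("u2", 10, "/home"),                        # single-page visit -> bounce
    ("u3", 5, "/blog"), ("u3", 80, "/home"),
]

# Group events into one session per user, ordered by timestamp.
sessions = defaultdict(list)
for user, ts, page in sorted(events, key=lambda e: (e[0], e[1])):
    sessions[user].append((ts, page))

bounces = sum(1 for visits in sessions.values() if len(visits) == 1)
bounce_rate = bounces / len(sessions)
avg_duration = sum(v[-1][0] - v[0][0] for v in sessions.values()) / len(sessions)

print(f"bounce rate: {bounce_rate:.0%}")         # prints "bounce rate: 33%"
print(f"avg session duration: {avg_duration:.0f}s")
```

A production pipeline would also split one user's events into separate sessions after long inactivity gaps; here each user is treated as a single session for brevity.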
Online tracking tools Online tracking tools, such as web analytics software, are used to collect data on customer behavior and interactions with a business’s online platforms. These tools capture data on website visits, page views, click-through rates, conversion rates and other relevant metrics. By implementing tracking codes or cookies, businesses can track user activities and analyze the collected data to understand user behavior, optimize website design and improve the user experience. Loyalty programs and customer accounts Loyalty programs and customer accounts provide businesses with valuable data about their customers. By incentivizing customers to create accounts or enroll in loyalty programs, businesses can gather information such as customer demographics, purchase history, preferences and contact details. This data helps businesses personalize offerings, track customer loyalty and develop targeted marketing strategies. Additionally, customer accounts enable ongoing data collection and allow customers to manage their preferences and interactions with the business. Social media monitoring Social media monitoring involves tracking and analyzing customer interactions and mentions across social media platforms. By monitoring social media conversations, businesses can gather data on customer sentiment, brand perception and trends related to their products or services. Social media monitoring tools enable businesses to collect data on likes, shares, comments and other engagement metrics, providing insights into customer preferences and enabling proactive engagement and reputation management. Data partnerships and third-party sources Businesses can also collect data through partnerships with external data providers or third-party sources. These sources may include market research firms, data aggregators and industry-specific databases. 
Through data partnerships, businesses can access additional demographic data, market research insights and industry trends that complement their existing data. These partnerships allow businesses to expand their data collection capabilities and gain a more comprehensive view of their target audience or market. It is important for businesses to ensure compliance with data protection regulations and ethical guidelines when collecting data through these methods. Using various data collection methods, businesses can gather valuable information to better understand their customers, improve their offerings and make data-driven decisions to enhance customer experiences and overall business performance. Uses of data in customer service Personalization and Customization Tailoring recommendations and offers: By analyzing customer data, businesses can personalize product recommendations and offers based on individual preferences and purchasing history. This level of personalization enhances the customer’s shopping experience, increases the relevance of recommendations and improves the likelihood of conversion and customer satisfaction. Enhancing customer experience: Customer data allows businesses to understand customer preferences, behaviors and interaction patterns. With this information, businesses can tailor the customer experience across various touchpoints, such as websites, mobile apps and customer service interactions, providing a personalized and seamless experience that meets individual needs and preferences. Improving customer satisfaction: Using customer data, businesses can identify pain points, address customer concerns and provide proactive solutions. Understanding customer preferences and past interactions helps businesses deliver more personalized and responsive customer service, resulting in higher levels of customer satisfaction and loyalty. 
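As a toy illustration of the personalized recommendations described above, co-occurrence counting over purchase history is one of the simplest approaches. Real systems use far richer models, and every product name below is made up:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories; in practice these come from the
# sales and transaction data a business already collects.
orders = [
    {"coffee", "mug"},
    {"coffee", "grinder"},
    {"coffee", "mug", "filter"},
    {"tea", "mug"},
]

# Count how often each pair of products is bought together.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

def recommend(product, top_n=2):
    """Rank products most often co-purchased with `product`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("coffee"))  # "mug" ranks first: bought with coffee twice
```

The same idea scales up with smarter weighting (recency, purchase value) and is the intuition behind "customers who bought X also bought Y" features.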
Predictive Analytics Forecasting customer behavior: Using historical customer data, businesses can use predictive analytics models to forecast customer behavior. This enables businesses to anticipate customer needs, predict future purchasing patterns and adjust their strategies accordingly. Forecasting customer behavior allows businesses to proactively meet customer demands and enhance their overall experience. Anticipating customer needs: Customer data provides insights into customer preferences, purchasing habits and product usage patterns. With this information, businesses can identify emerging trends and anticipate customer needs. By understanding what customers may require in the future, businesses can develop new products, services and features that meet those needs, staying ahead of the competition. Preventing churn: Churn refers to the loss of customers. By analyzing customer data, businesses can identify early warning signs of potential churn, such as reduced engagement or declining satisfaction levels. Predictive analytics models can help identify at-risk customers and allow businesses to implement targeted retention strategies, such as personalized offers, proactive customer support or loyalty programs, to prevent customer churn. Customer Segmentation Identifying target customer groups: Customer data enables businesses to segment their customer base into distinct groups based on demographics, behaviors, preferences or purchase history. Once they understand different customer segments, businesses can tailor their marketing efforts, product development and customer service strategies to effectively target specific groups and maximize customer engagement and satisfaction. Developing targeted marketing strategies: With customer segmentation, businesses can create more targeted and relevant marketing campaigns. 
By understanding the unique needs, preferences and motivations of different customer segments, businesses can craft personalized messages, choose appropriate marketing channels and deliver content that resonates with each group. This leads to improved campaign effectiveness and higher conversion rates. Optimizing product development and pricing: Customer segmentation helps businesses identify specific customer groups that may have distinct product preferences or price sensitivities. By analyzing customer data, businesses can gain insights into what features or pricing models are most appealing to different segments. This information guides product development decisions, allows for targeted product enhancements and enables optimized pricing strategies that meet the needs of each customer segment. Uses of data in business improvement Performance monitoring and KPIs Tracking sales and revenue: Data collection allows businesses to monitor sales and revenue trends, identify top-performing products or services and evaluate the effectiveness of marketing campaigns. By analyzing sales data, businesses can make data-driven decisions to optimize sales strategies, improve pricing and identify areas for revenue growth. Analyzing operational efficiency: Businesses can use data to monitor and analyze operational metrics such as production output, resource utilization and cycle times. By tracking these key performance indicators (KPIs), businesses can identify areas of improvement, streamline processes, reduce costs and enhance operational efficiency. Monitoring customer satisfaction metrics: Customer feedback and satisfaction metrics, such as net promoter score (NPS) and customer satisfaction surveys, provide valuable insights into the customer experience. Analyzing this data helps businesses identify areas for improvement, address customer concerns and enhance overall customer satisfaction, leading to increased loyalty and repeat business. 
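The net promoter score mentioned above has a simple, standard formula: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A quick sketch with made-up survey responses:

```python
# Hypothetical answers to "How likely are you to recommend us?" (0-10).
scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5]

promoters = sum(1 for s in scores if s >= 9)   # scores 9-10
detractors = sum(1 for s in scores if s <= 6)  # scores 0-6 (7-8 are passives)
nps = 100 * (promoters - detractors) / len(scores)

print(f"NPS: {nps:+.0f}")  # prints "NPS: +20"
```

NPS ranges from -100 (all detractors) to +100 (all promoters), which is why it is typically reported with a sign.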
Process optimization Streamlining operations: Data collection and analysis allows businesses to identify and eliminate inefficiencies in their processes. By examining operational data, businesses can streamline workflows, automate tasks and reduce manual errors, resulting in improved productivity and cost savings. Identifying bottlenecks and inefficiencies: Data helps identify bottlenecks or areas of low efficiency in a business’s operations. By analyzing data on process flow, resource allocation and cycle times, businesses can pinpoint areas that require improvement, optimize resource allocation and enhance overall operational performance. Improving supply chain management: Data collection and analysis play a crucial role in supply chain management. By analyzing data related to inventory levels, lead times, supplier performance and demand patterns, businesses can optimize their supply chain processes, minimize stockouts, reduce costs and improve delivery timelines. Decision-making and strategy development Data-driven decision-making: By leveraging data, businesses can make informed decisions based on insights and trends rather than relying on intuition alone. Data analysis provides businesses with a factual basis for decision-making, enabling them to mitigate risks, seize opportunities and make strategic choices that align with customer needs and market trends. Identifying new market opportunities: Data collection allows businesses to identify emerging market trends, customer preferences and unmet needs. By analyzing market data and consumer behavior, businesses can uncover new market opportunities, develop innovative products or services, and stay ahead of the competition. Assessing the competitive landscape: Data analysis helps businesses understand their competitive landscape by examining market share, pricing strategies, customer reviews and other relevant data. 
By analyzing competitive data, businesses can identify areas where they can differentiate themselves, refine their marketing strategies and gain a competitive edge in the market. Using data for performance monitoring, process optimization and decision-making, businesses can drive continuous improvement, increase operational efficiency and develop effective strategies to stay competitive in the market. Data-driven insights enable businesses to make more informed decisions and maximize their potential for growth and success. Need of the hour: Data privacy According to a Gartner survey, 60% of marketing leaders anticipate difficulties in collecting customer data while maintaining a delicate balance between privacy and value in 2023. Despite 85% of respondents having implemented formal data management policies, privacy remains a persistent challenge. However, there is a notable increase in the adoption of personalized messaging, with 42% of marketers embracing this approach. The survey also highlights that 78% of marketing leaders empower customers to control their own data, with 82% prioritizing the use of first-party data to deliver immediate value. Significantly, concerns about trust and privacy have led almost one-third of respondents to sever partnerships with agencies or channel partners. Furthermore, proactive marketers who prioritize first-party data consistently exceed customer retention expectations. Interestingly, organizations managing 11 or more marketing channels have shown more significant growth in their first-party customer data collection compared to those managing fewer channels. Data privacy remains a critical and ongoing concern, requiring digital marketing leaders to develop strategies that effectively secure the data they need while placing customer needs at the forefront. The road ahead According to Tredence’s Mohanty, the future of data collection holds immense potential for personalized experiences and real-time decision-making in business. 
Advancements in AI and machine learning will enable companies to customize products and services based on individual customer preferences, leading to increased satisfaction and loyalty. Moreover, businesses will be able to respond quickly to market changes and customer needs. As concerns around data privacy grow, evolving technologies will ensure that data collection maintains user anonymity, strengthening customer loyalty and retention. AI and ML technologies are expected to automate data collection and analysis, providing predictive insights for proactive business strategies. The Internet of Things (IoT) will further expand data collection opportunities, granting access to real-time, detailed data that can enhance operational efficiency and customer understanding. However, evolving data privacy regulations are reshaping how enterprises handle customer data, from capture and security to distribution and analysis. Businesses must prepare for increased legal responsibilities regarding consumer data protection, driven by growing demands for privacy rights. Despite regulatory changes, data collection by private corporations will remain a fundamental practice, albeit with changes in its nature. Companies need to adapt their data management strategies to meet the requirements of new legislative landscapes, all while upholding their commitment to customer privacy. Data collection is moving toward a cookie-less model due to mounting privacy concerns. This shift emphasizes the importance of developing strategies that respect user privacy while still delivering personalized customer experiences. Although this change presents certain challenges, it also offers businesses an opportunity to innovate and cultivate deeper, more meaningful relationships with customers. 
While businesses have extensively utilized “data at rest,” the future lies in harnessing “data in motion.” Real-time data can unlock new opportunities for customer engagement, quick responses to market changes and faster and more precise decision-making, providing a competitive edge. One significant challenge in effective data utilization has been the fragmentation of enterprise data into isolated data “islands.” Technologies like data mesh and zero-ETL data integration are emerging as breakthrough solutions to overcome these barriers, empowering businesses to fully leverage data for tangible results. These innovations make data more accessible and impactful for organizations. Lastly, with the increasing prominence of large language models (LLMs), businesses can explore innovative ways to use unstructured data. Concepts like multi-mode data management techniques will further revolutionize how businesses can tap into structured and unstructured data. © 2023 VentureBeat. All rights reserved.
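The shift from "data at rest" to "data in motion" comes down to computing metrics incrementally as events arrive rather than in batches after the fact. A minimal sliding-window sketch, with illustrative timestamps and window size:

```python
from collections import deque

class SlidingWindowCounter:
    """Count events over the last `window` seconds of a stream."""

    def __init__(self, window):
        self.window = window
        self.events = deque()  # timestamps, oldest first

    def record(self, ts):
        self.events.append(ts)
        # Evict timestamps that have fallen out of the window.
        while self.events and self.events[0] <= ts - self.window:
            self.events.popleft()

    def count(self):
        return len(self.events)

# Simulated stream of event timestamps (seconds).
counter = SlidingWindowCounter(window=60)
for ts in [0, 10, 20, 65, 70]:
    counter.record(ts)

print(counter.count())  # events seen in the last 60 seconds: 3
```

Production streaming systems apply the same windowing idea at scale, but the core contrast with batch analytics is visible even in this toy: the metric is always current as of the latest event.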
The true cost of dirty data — and how to address it head on | VentureBeat
https://venturebeat.com/enterprise-analytics/the-true-cost-of-dirty-data-and-how-to-address-it-head-on
Sponsored The true cost of dirty data — and how to address it head on Presented by Treasure Data This article is part of a VB special issue. Read the full series here: Building the foundation for customer data quality. Customer data has become the lifeblood of delivering connected customer experiences. But without the right customer data, it’s practically impossible to deliver personalized experiences at scale. In the rush to deliver these personalized digital experiences, it’s easy to overlook one simple fact: your customer data must first be accurate. If it’s not, the results can be worse than you think. Without the right data, your customers could end up with a personalized experience — but one that belongs to someone else. With so much data to manage, it can be hard, or even impossible, to cleanse your data manually. A customer data platform (CDP) can help automate the process, and create a system that continuously improves your data quality over time. 
It can also help marketing and IT teams make better decisions through consistent insights, which improves efficiency and delivers higher ROI. The true cost of bad data In order to ensure the data you’re activating is useful, actionable and accurate, it has to be clean. Otherwise, you risk making bad decisions based on bad data — and that can be costly. According to global research from Treasure Data, while a majority of marketers say they have access to customer data, around one-fifth say that data isn’t accurate or high-quality. This poor-quality data resulted in inaccurate targeting (30%), lost customers (29%), lost leads (28%), reduced productivity (27%) and wasted marketing spend (28%). Wasted marketing spend adds up over time. According to our research, marketers in the U.S. predicted budget wastage of up to 38% — or nearly $6 million on average over a period of six months. The principles of data hygiene Improving data quality starts with good data hygiene. Data cleansing, also known as data scrubbing or data cleaning, is the process of fixing or removing incorrect, incomplete, duplicate or corrupted data to ensure your data is accurate, trustworthy and consistent across your organization. There are several core elements that make up quality data. During the data cleansing process, organizations should consider these factors: Accuracy: Does your data accurately reflect the correct information? Completeness: Are all needed attributes included in the dataset? Consistency: How consistent is data across the organization? Timeliness: Is the data relevant, and up-to-date? Uniqueness: Does data only appear once in the ecosystem, without duplicates? Validity: Is your data usable, or presented in a usable format? Here’s how data cleansing works, how a customer data platform can help, and what it ultimately means for the customer experience: 1. 
Inspect and audit your data A data hygiene audit consists of assessing all of the entry points your organization uses for customer data collection, along with how that data is collected, and when. Understanding how data is collected can help you tailor your first-party data strategy to create a fair value exchange of data at the right touchpoints along the customer journey. A CDP unifies data from across sources and systems, eliminates data silos and gives you a clear view of all your data within a centralized system. With this visibility, you can pinpoint different areas of the customer journey, and tailor your data collection strategy to make the experience more transparent for customers. Asking for the right data at the right moments can prevent dirty data from being collected at all, because you’re only asking consumers to give you data that’s relevant and contextual to the interaction they’re having with your brand. Nearly one-fourth of U.S. consumers admit to providing false data about themselves to organizations. 2. Merge duplicate customer profiles Consolidating duplicate data into one unified customer profile is a critical step for maintaining accurate records. A CDP with AI-powered identity resolution can automatically rectify duplicative data and tie it back to an individual profile. A CDP can also continuously update customer profiles in near-real time, ensuring data remains accurate and up-to-date. Rectifying duplicate profiles makes the customer experience seamless. For example, say a shopper has two email addresses that they’ve registered with across two different brands in your brand portfolio. In a siloed system, this may seem like two different people, which means one customer would receive duplicates with each marketing effort. With a CDP, this data is combined into one customer record that can be used across brands for more effective marketing outreach, and less wasted spend. 3. 
Manage old or outdated leads As you cleanse your data, be sure to remove leads that are inaccurate, and set up thresholds within your marketing solutions that can help you identify leads that have the highest propensity to convert. With a CDP, analytics and AI-powered next-best action allow you to set up triggers to re-engage customers who have gone quiet, or have performed an action that signals new intent. You can also suppress customers from marketing activity if they’re not interested in a certain product or service at any given time. This ensures that customers only receive marketing that is relevant to them — and are not inundated with messaging. Tailoring outreach also improves marketing spend, since you’re only engaging with consumers who have a high likelihood of conversion. 4. Use your data for actionable insights Once your data is cleansed and stored in your CDP, marketing and data teams can use it to inform strategy, optimize campaigns and use machine learning to segment and analyze highly targeted audiences. With a CDP, marketers can activate campaigns for specific segments automatically, making processes more efficient. Continuous insights allow for swift optimization, which improves return on investment and reduces wasted spend. The power of clean data with a CDP Data is a powerful tool. But if it isn’t accurate, it won’t help you get to where you need to go. A CDP helps marketing and data teams make sense of customer profiles at scale, and serves as a foundation for data cleansing, enrichment and activation. It’s time to ditch the dirty data and make better business decisions. Investing in the right data solutions is the first step. Learn more about how to optimize your data strategy with our guide here. Jim Skeffington is Senior Technical Product Marketing Manager at Treasure Data.
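Step 2 above, merging duplicate profiles, hinges on normalizing identifiers before matching. A toy sketch of the idea follows; a real CDP's identity resolution is far more sophisticated, and every record here is fabricated:

```python
from collections import defaultdict

# Hypothetical raw records from two different brand databases.
records = [
    {"email": "Ana.Silva@Example.com", "brand": "A", "orders": 3},
    {"email": "ana.silva@example.com ", "brand": "B", "orders": 1},
    {"email": "bo@example.com", "brand": "A", "orders": 2},
]

def normalize(email):
    # Lowercase and trim; real pipelines also handle plus-addressing,
    # typo correction and matching across phone, address and device IDs.
    return email.strip().lower()

# Fold raw records into unified profiles keyed by normalized email.
profiles = defaultdict(lambda: {"brands": set(), "orders": 0})
for rec in records:
    key = normalize(rec["email"])
    profiles[key]["brands"].add(rec["brand"])
    profiles[key]["orders"] += rec["orders"]

print(len(profiles))  # 2 unified profiles instead of 3 raw records
```

The payoff in the article's example follows directly: once the two Ana Silva records collapse into one profile, she receives one campaign message instead of duplicates.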
The Future of Software: Building Products with Privacy at the Core | VentureBeat
https://venturebeat.com/security/building-for-privacy
The Future of Software: Building Products with Privacy at the Core Illustration: Leandro Stavorengo This article is part of a VB special issue. Read the full series here: Building the foundation for customer data quality. Like cybersecurity, privacy often gets rushed into a product release instead of being integral to every platform refresh. And like cybersecurity DevOps and testing, which often get bolted on at the end of a system development life cycle (SDLC), privacy too often reflects how rushed it’s been instead of being planned as a core part of each release. The result is that the vision of what privacy could provide is not achieved, and a mediocre customer experience is delivered instead. Developers must make privacy an essential part of the SDLC if they are to deliver the full scope of what customers want regarding data integrity, quality and control. 
“Privacy starts with account security. If a criminal can access your accounts, they have complete access to your life and your assets. FIDO Authentication, from the FIDO Alliance, protects accounts from phishing and other attacks,” Dennis Moore, CEO of Presidio Identity, told VentureBeat in a recent interview. Moore advises organizations “to really limit liability and protect customers, reduce the amount of data collected, improve data access policies to limit who can access data, use polymorphic encryption to protect data, and strengthen account security.” Privacy needs to shift left in the SDLC Getting privacy right must be a high priority in DevOps cycles, starting with integration into the SDLC. Baking in privacy early and taking a more shift-left mindset when creating new, innovative privacy safeguards and features must be the goal. DJ Patil, mathematician and former U.S. chief data scientist, shared his insights on privacy in a LinkedIn Learning segment called “How can people fight for data privacy?” “If you’re a developer or designer, you have a responsibility,” Patil said. “[J]ust like someone who’s an architect of the ability to make sure that you’re building it (an app or system) in a responsible way, you have the responsibility to say, here’s how we should do it.” That responsibility includes treating customer data like it’s your own family’s data, according to Patil. Privacy starts by giving users more control over their data A leading indicator of how important control over their data is to users appeared when Apple released iOS 14.5. That release was the first to enforce a policy called app tracking transparency. iPhone, iPad and Apple TV apps were required to request users’ permission to use techniques like IDFA (I.D. 
for Advertisers) to track users’ activity across every app they used for data collection and ad targeting purposes. Nearly every user in the U.S., 96%, opted out of app tracking in iOS 14.5. Worldwide, users want more control over their data than ever before, including the right to be forgotten, a central element of Europe’s General Data Protection Regulation (GDPR) and Brazil’s General Data Protection Law (LGPD). California was the first U.S. state to pass a data privacy law modeled after the GDPR. In 2020, the California Privacy Rights Act (CPRA) amended the California Consumer Privacy Act (CCPA) and included GDPR-like rights. On January 1, 2023, most CPRA provisions took effect, and on July 1, 2023, they will become enforceable. The Utah Consumer Privacy Act (UCPA) takes effect on December 31, 2023. The UCPA is modeled after the Virginia Consumer Data Protection Act as well as consumer privacy laws in California and Colorado. With GDPR, LGPD, CCPA and future laws going into effect to protect customers’ privacy, the seven foundational principles of Privacy by Design (PbD) as defined by former Ontario information and privacy commissioner Ann Cavoukian have served as guardrails to keep DevOps teams on track to integrate privacy into their development processes. Privacy by engineering is the future “Privacy by design is all about intention. What you really want is privacy by engineering,” Anshu Sharma, cofounder and CEO of Skyflow, told VentureBeat during a recent interview. “Or privacy by architecture. What that means is there is a specific way of building applications, data systems and technology, such that privacy is engineered in and is built right into your architecture.” Skyflow is the leading provider of data privacy vaults. It counts among its customers IBM (drug discovery AI), Nomi Health (payments and patient data), Science37 (clinical trials) and many others. 
Sharma referenced IEEE’s insightful article “Privacy Engineering,” which makes a compelling case for moving beyond the “by-design” phase of privacy to engineering privacy into the core architecture of infrastructure. “We think privacy by engineering is the next iteration of privacy by design,” Sharma said. The IEEE article makes several excellent points about the importance of integrating privacy engineering into any technology provider’s SDLC processes. One of the most compelling is the cost of shortcomings in privacy engineering. For example, the article notes that European businesses were fined $1.2 billion in 2021 for violating GDPR privacy regulations. Fulfilling legal and policy mandates in a scalable platform requires privacy engineering to ensure that any technologies being developed support the goals, direction and objectives of chief privacy officers (CPOs) and data protection officers (DPOs). Skyflow’s GPT Privacy Vault, launched last month, reflects Sharma’s and the Skyflow team’s commitment to privacy by engineering. “We ended up creating a new way of using encryption called polymorphic data encryption. You can actually keep this data encrypted while still using it,” Sharma said. The Skyflow GPT Privacy Vault gives enterprises granular control over sensitive data throughout the lifecycle of large language models (LLMs) like GPT, ensuring that only authorized users can access specific datasets or functionalities in those systems. Skyflow’s GPT Privacy Vault also supports data collection, model training, and redacted and anonymized interactions to maximize AI capabilities without compromising privacy. It enables global companies to use AI while meeting data residency requirements such as GDPR and LGPD across the global regions where they operate today. 
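Skyflow's polymorphic encryption is proprietary, but the data-flow idea described above, in which sensitive values never reach the model in the clear, can be sketched with simple placeholder tokenization. This is a toy illustration under stated assumptions (regex-detected PII, an in-memory vault), not Skyflow's API:

```python
import re

# Hypothetical redactor, NOT Skyflow's API: swap detected PII for
# placeholder tokens before text leaves your boundary, keep a local
# mapping so authorized readers can restore values afterward.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str):
    """Replace detected PII with tokens; return (redacted text, vault)."""
    vault = {}
    counter = 0
    for label, pattern in PATTERNS.items():
        def _sub(match):
            nonlocal counter
            token = f"<{label}_{counter}>"
            counter += 1
            vault[token] = match.group(0)
            return token
        text = pattern.sub(_sub, text)
    return text, vault

def restore(text: str, vault: dict) -> str:
    """Re-insert original values, e.g. into an LLM response, if authorized."""
    for token, value in vault.items():
        text = text.replace(token, value)
    return text
```

In a real system the detection step would use a proper PII classifier and the vault would be an access-controlled service, but the shape is the same: redact before the prompt leaves your boundary, restore only for authorized readers.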
Five privacy questions organizations must ask themselves “You have to engineer a system such that your social security number will never ever get into a large language model,” Sharma warns. “The right way to think about it is to architect your systems such that you minimize how much sensitive data makes its way into your systems.” Sharma advises customers and the industry that there’s no “delete” button in LLMs, so once personally identifiable information (PII) is part of an LLM there’s no reversing the potential for damage. “If you don’t engineer it correctly, you’re never going to … unscramble the egg. Privacy can only decrease; it can’t be put back together.” Sharma advises organizations to consider five questions when implementing privacy by engineering: Do you know how much PII data your organization has and how it’s managed today? Who has access to what PII data today across your organization and why? Where is the data stored? Which countries and locations have PII data, and can you differentiate by location what type of data is stored? Can you write and implement a policy and show that the policy is getting enforced? Sharma observed that organizations that can answer these five questions have a better-than-average chance of protecting the privacy of their data. For enterprise software companies whose development approach hasn’t centered on privacy, these five questions should guide continuous improvement of their SDLC so that privacy engineering becomes part of how software is developed and released. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
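The five questions above can be turned into a lightweight, automatable inventory audit. A minimal sketch follows; the field names are illustrative assumptions, not a standard schema:

```python
# Toy PII inventory audit inspired by the five questions: every dataset
# should record an owner, who may access it, where it lives, and whether
# an access policy is actually enforced. Missing answers become findings.
REQUIRED = ("owner", "allowed_roles", "country", "policy_enforced")

def audit_pii_inventory(inventory):
    """Return (dataset name, missing fields) for records that cannot
    answer the five questions."""
    violations = []
    for name, record in inventory.items():
        missing = [field for field in REQUIRED if not record.get(field)]
        if missing:
            violations.append((name, missing))
    return violations
```

A check like this can run in CI so that new data stores cannot ship without the answers on file.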
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,621
2,023
"AI and ML: The new frontier for data center innovation and optimization | VentureBeat"
"https://venturebeat.com/data-infrastructure/ai-and-ml-the-new-frontier-for-data-center-innovation-and-optimization"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI and ML: The new frontier for data center innovation and optimization Share on Facebook Share on X Share on LinkedIn This article is part of a VB special issue. Read the full series here: Data centers in 2023: How to do more with less. As the demand for data processing and storage continues to surge, data centers are grappling with the challenge of evolving and expanding. The changing landscape of platforms, equipment design, topologies, power density requirements and cooling demands all underscore the pressing need for new architectural designs. Data center infrastructures often struggle to align current and projected IT loads with their critical infrastructure, resulting in a mismatch that threatens their ability to meet escalating requirements. Against this backdrop, traditional data center approaches must be revised. Data centers are now integrating artificial intelligence (AI) and machine learning (ML) technologies into their infrastructure to remain competitive. 
By implementing an AI-driven layer within traditional data center architectures, companies can create autonomous data centers that can optimize and perform generic data engineering tasks without human intervention. Turbocharging traditional architectures with AI The proliferation of AI and ML technologies within data centers has been notable in recent years. AI is driving efficiency and performance across various use cases. “AI-driven data centers can help organizations gain a competitive advantage by optimizing application performance and availability, which in turn helps increase customer satisfaction and loyalty,” said Sajid Mohamedy, EVP of Silicon Valley-based technology consulting firm Nisum. “Adding AI to the mix aids optimized resource allocation, which improves data center efficiency and reduces costs.” Fast failure detection and prediction, root cause analysis, power usage optimization and resource capacity allocation optimization are just a few examples where data- and algorithm-driven technologies are being deployed to maximize data center efficiency. Incorporating AI into the data center is becoming increasingly necessary for every data-driven business, as outages are becoming more frequent and expensive. AI-driven data centers offer an array of benefits, chief among them the potential to slash downtime and enhance overall system reliability, ultimately translating into massive cost savings for organizations. Increased fault detection and prediction abilities According to Ellen Campana, leader of enterprise AI at KPMG U.S., AI has historically been employed to enhance data storage optimization, energy utilization and accessibility. However, in recent years, there has been a discernible trend in expanding AI’s utility to encompass fault detection and prediction, which can trigger self-healing mechanisms. 
“The key to streamlining automated detection is providing the AI with a window into the details of hardware and software operations, including network traffic,” Campana told VentureBeat. “If traffic within a certain node is slowing, AI can detect that pattern and trigger a restart of a process or the entire node.” Pratik Gupta, chief technology officer at IBM Automation, posits that AI has transformative potential across the data center and hybrid cloud environments. By bolstering user experiences in applications, streamlining operations, and empowering CIOs and business decision-makers to glean insights from an array of data, AI catalyzes innovation and optimization. A clear picture of app resourcing levels IBM expects data center energy consumption to increase by 12% (or more) by 2030, due to the end of Moore’s Law and an explosion of data volume, velocity and energy-intensive workloads, said Gupta. “Simply put, AI can reduce the amount of hardware to purchase, maintain, manage and monitor,” he said. Data center managers must maintain a clear picture of their organization’s application resourcing levels, allowing for nimble scaling to meet demand in real time, said Gupta. AI-powered automation can play a key role in this process, mitigating the risk of resource congestion and latency while ensuring that hardware workloads remain safe and performance standards are upheld. IBM’s Turbonomic, for instance, can automatically optimize application resourcing levels and scale with business needs. “This enables IT managers to have a single dashboard to oversee resourcing levels, make decisions in real-time and brings efficiency as they ensure none of their apps get over-provisioned,” said Gupta. Maximizing the benefits of AI-driven data centers AI and ML use cases in data centers continue to grow, but organizations must consider some key factors before implementing them. 
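The self-healing pattern Campana describes (watch a node's recent behavior, restart when it deviates sharply) can be sketched as a simple z-score check. This is a toy illustration, not any vendor's implementation; real AIOps systems use far richer models:

```python
import statistics

# Minimal self-healing sketch: flag a node whose newest latency sample
# deviates sharply from its recent history, then let the caller trigger
# a restart of the process or node.
def needs_restart(samples, threshold=3.0):
    """True if the newest sample is more than `threshold` standard
    deviations above the mean of the preceding samples."""
    history, latest = samples[:-1], samples[-1]
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
    return (latest - mean) / stdev > threshold
```

In practice the check would run per node over a sliding window, and the restart action would be rate-limited so a flapping node cannot restart itself in a loop.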
While pre-packaged AI and ML solutions are increasingly available, they still require integration beyond individual point solutions. DIY AI deployments are possible but require investment in sensors to collect data and expertise to convert that data into usable insights. “Many organizations choose to implement their own data centers precisely because they can be sure that data will not be pooled with others’ data or used in ways they cannot control,” said KPMG’s Campana. “While this is true, organizations must then accept the responsibility of maintaining security and privacy.” With the right resources, data centers can become smarter and more efficient, but achieving this goal requires optimal planning. “Planning should be a key pillar of implementing AI-driven data centers,” said IBM’s Gupta. “Successful deployments don’t happen overnight, and need a significant amount of iteration and thought before being rolled out. IT leaders need to consider factors such as understanding what hardware they can and should keep and what workloads they need to move to the cloud.” Flexibility critical The key to success for AI-driven data centers is to take a strategic approach. This means identifying the right use cases for AI and ML, investing in the necessary infrastructure and tools and developing a skilled workforce to effectively manage and maintain systems. “Companies often maintain sprawling infrastructure — from distributed data center locations to various cloud deployments,” said Gupta. “IT leaders need to consider whether they need to build a lake for all data sources to converge…or bring the data preparation, ML and AI tools to each location. As companies transform their IT infrastructure, they must not only consider the value being delivered but also the vulnerabilities being created.” He noted that best-laid plans can go awry. “The same can be true for technology rollouts, and the nimble organization that can adjust course quickly will be more successful,” he added. 
Four emerging strategies for improving IT and data center performance AIOps, MLOps, DevOps and SecOps each have unique strengths. When combined, they are optimizing data center operations and broader IT performance, reducing costs and enabling service improvements. AIOps automates and scales corporate-wide data center and IT workflows AIOps is becoming core to enterprises’ sustainability and carbon reduction efforts in data centers and has proved effective in identifying why performance gaps occur. Core to this technology is its ability to interpret and suggest actions based on real-time performance data (causal analysis). For example, Walmart is using AIOps to streamline e-commerce operations. AIOps relies on a combination of ML models and natural language processing (NLP) to discover new process workflows that can improve the accuracy, cost-effectiveness and efficiency of data center operations. Retailers also use AIOps to detect and resolve inefficient and disconnected processes in real time while also automating tech stacks and broader infrastructure management. AIOps enables more accurate real-time anomaly detection within e-commerce platforms. The technology also excels at correlating data from all available sources across a data center to provide a 360-degree view of operations and identify where availability, cost control and performance can be improved. Retailers rely on DevOps to accelerate app development Retailers rely on DevOps to stay competitive and shorten time-to-market for new apps and features. DevOps is based on a software development methodology that emphasizes collaboration and communication between software developers and IT operations teams. It’s proven effective in streamlining software delivery and development for new mobile apps, website features and customer experience-based enhancements. Amazon, Target, Nordstrom, Walmart and other leading retailers have adopted DevOps as their main software development process. 
Retail CIOs tell VentureBeat that the higher the quality of the DevOps code base, the more efficiently data centers run the latest app releases delivered to customers worldwide. MLOps offers a lifecycle-based approach As retailers recruit more data scientists, MLOps becomes just as important as DevOps for keeping models current and usable. MLOps applies DevOps principles to ML models and algorithms. Leading retailers use MLOps to design, test and release new models to improve customer segmentation, demand forecasting and inventory management. MLOps is proving effective in solving the most costly and challenging problems in retail, starting with inventory management and optimization. Supply chain uncertainty, chronic labor shortages and spiraling inflationary costs are making inventory management a make-or-break area for retailers. Macy’s, Walmart and others are using MLOps to optimize pricing and inventory management, helping retailers make decisions that reduce costs and protect themselves from the downside risk of holding too much inventory. SecOps relies on AI and ML to secure every identity and threat surface SecOps ensures data centers and the broader IT infrastructure stay secure and compliant. Zero trust security, which assumes no user or device can be trusted and every identity must be verified, is the foundation of any successful SecOps implementation. The goal is to reduce the attack surface and the risks of increasingly sophisticated cyberattacks. SecOps optimizes data center security by combining the most proven techniques for reducing intrusions and breaches. Adopting zero trust security measures helps retailers protect the identities of their customers, employees and suppliers, and microsegmentation can limit the blast radius of any attack. AI and the future of data center technology Edge computing is emerging as one of the most promising technologies for developing AI-driven data centers. 
By processing data closer to the source, edge computing reduces latency and improves overall performance. When combined with AI, this technology offers the potential to achieve real-time analysis and decision-making capabilities, making data centers capable of handling mission-critical applications in the future. “The move to 5G was a major step in this transition and is fueling a wave of innovation in AI-based software infrastructure,” said KPMG’s Campana. “For businesses beginning new data centers, it is worthwhile to consider their timeline for adopting 5G and making other updates of end-user hardware.” For his part, IBM’s Gupta sees intelligent data automation as a way to continue making inroads into heavily regulated industries, as AI and data center tools will be designed to automatically meet compliance requirements. “As AI and automation get embedded further into data centers, they will be able to meet the most stringent compliance protocols,” he said. "
13,622
2,023
"Case study: How a credit union leveraged data analytics to improve member service | VentureBeat"
"https://venturebeat.com/data-infrastructure/case-study-how-a-credit-union-leveraged-data-analytics-to-improve-member-service"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Case study: How a credit union leveraged data analytics to improve member service Share on Facebook Share on X Share on LinkedIn This article is part of a VB special issue. Read the full series here: Data centers in 2023: How to do more with less. The world of financial services is hurtling toward digital transformation at an unprecedented speed, and industry insiders are understandably concerned. In fact, 81% of banking CEOs expressed concern about the speed of technological change. With big banks, community banks, and emerging fintechs all vying for a piece of the pie, credit unions must adapt to meet the needs of their members. However, credit unions face unique challenges in this digital transformation. Many are struggling to keep up with their larger counterparts, with limited budgets for investing in new digital channels. In today’s competitive environment, advancing digital transformation isn’t just a good idea — it’s essential. Credit unions that fail to modernize risk becoming irrelevant and losing relationships and revenue. 
Delta Community Credit Union (DCCU) understands this: The largest credit union in Georgia, with an asset size of $9.1 billion, is meeting digital transformation head-on. Nailing the basics DCCU first implemented a comprehensive full-service model that provides products and services and seamless communication and interaction channels for its members, whether they bank in person or online. The credit union has built a self-service model that allows members to access their accounts, view transactions and perform several other functions through an online portal or mobile application. This self-service model has been instrumental in reducing the operational workload and increasing member satisfaction. Sujatha (Su) Rayburn, VP at DCCU, explained that the institution had begun transitioning to self-service models for members before the pandemic. Investment in online and mobile banking applications and systems enabled them to continue providing services to members when in-person transactions became challenging. In the wake of the COVID-19 pandemic, DCCU tapped into data analytics, gaining a comprehensive view of members’ preferences and needs. By understanding what members value and require, the credit union has been able to offer the right products and services to maximize its value proposition. The power of data analytics Rayburn explained that DCCU has automated processes and reduced operational inefficiencies. The credit union uses data streams to match and resolve exceptions, reducing manual efforts and increasing efficiency. This focus on automation has helped DCCU not only reduce costs but also eliminate the need for additional headcount. 
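The exception-matching workflow described above (using data streams to match records and surface only the mismatches for review) can be sketched in a few lines. The feed names and record shapes here are illustrative assumptions, not DCCU's actual systems:

```python
# Hypothetical reconciliation sketch: match transactions from two feeds
# by ID and amount; anything unmatched or mismatched becomes an
# exception for review instead of a manual line-by-line comparison.
def reconcile(core_feed, network_feed):
    core = {t["id"]: t["amount"] for t in core_feed}
    network = {t["id"]: t["amount"] for t in network_feed}
    exceptions = []
    for tid in core.keys() | network.keys():
        a, b = core.get(tid), network.get(tid)
        if a != b:
            exceptions.append({"id": tid, "core": a, "network": b})
    return sorted(exceptions, key=lambda e: e["id"])
```

The payoff is that staff review only the short exceptions list rather than both full feeds, which is where the headcount savings come from.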
Rayburn explained that the credit union understands that automation is a chain that spans from the front office — which is the member experience — to the back office where processes are streamlined to make the overall experience more efficient. Unlike many of its peers that have closed their doors or limited access to their branches, DCCU has kept its doors open while also allowing members to use apps for specific transactions. This has made it easier for members to manage their finances during the pandemic and has contributed to better member experiences, said Rayburn. The need for a data strategy Although DCCU weathered the storm, Rayburn wanted to ensure that it was planning for the future, especially with the rise of AI and machine learning (ML). “I realized that while DCCU was performing well, there was room for improvement in terms of thinking more expansively about what these technological advancements mean for the business and how they will impact it,” she said. To develop the data strategy, Rayburn conducted assessments of the institution’s data maturity and worked with partners to understand the current state of the business. She looked at the top 10 credit unions in the marketplace and surveyed them to get a better understanding of the industry. She then looked at DCCU’s business strategy and identified what differentiated it from other credit unions. She realized that there was a significant gap in terms of developing member relationships in a holistic way. In her words, “Delta Community was doing things like building models to identify fraud and next-best offers, but these efforts were not cohesive or well-thought-out.” To address these gaps, Rayburn developed a member-centric data analytics strategy. Delta Community is in the process of creating a strategic analytics center of excellence that will focus on building member centricity by bringing together data from all its digital channels. 
This can help ensure that the data is oriented toward serving members and supporting member service in a more robust way. “Delta Community wanted to meet members’ needs at the right time, at the right place and at the right level,” said Rayburn. “The strategic analytics center of excellence will help Delta Community focus on member needs and use analytics to support member service; it will also modernize our tech stack and help us scale and perform better.” Data governance takes center stage DCCU has been protecting its data for several years by tokenizing personally identifiable information (PII) data in its data warehouse, making it inaccessible to those without the need to see it in the clear. To take its data governance strategy to the next level, DCCU partnered with OvalEdge, a data catalog and business glossary platform that helps organizations manage and classify data more efficiently. The software enables DCCU to tag and categorize data, show what rules were applied during data transformations, and help business users serve themselves. One of the game-changers for DCCU has been the ability to use its employees to build upon the stewardship model, ensuring customers have visibility into reports and data elements. With OvalEdge, business users can now understand how data sources were built, where data is coming from and what rules were used to create metrics. DCCU has started crowdsourcing to the stewards within the business, curating content within the OvalEdge platform. This approach has enabled employees to gather at a ‘water cooler’ and chat about anything related to data. A “data as an asset” organization DCCU is ultimately shifting to a “data as an asset” organization. To support this shift, OvalEdge worked closely with DCCU to understand its drivers, priorities and goals, and assisted with developing a deployment plan that included term curation, training, lineage building and more. 
The company established a crawl plan to build lineage and configured the tool while designing a roadmap for a successful implementation. OvalEdge has helped DCCU establish trust in its data via its data catalog and business glossary, operationalize meaningful data stewardship, enable report governance through micro/macro lineage tracing and automate data profiling. As DCCU continues to mature, it plans to leverage OvalEdge’s data quality score-carding capabilities, enforce least-privilege security for NPI/PII data across the enterprise, and conform to privacy regulations. “DCCU’s data governance strategy rollout is still in its early stages, but OvalEdge has helped (us) establish the foundations necessary to scale (our) analytics operations and remain competitive in a rapidly changing market,” said Rayburn. A recession on the horizon? Recession risks have been re-ignited by the recent banking collapses and rescue deals, and there are now concerns that global growth will weaken as the crisis heralds the end of the “easy-cash era” and the arrival of a credit crunch. Rayburn explained that the institution performs a monthly asset-liability management exercise aimed at stress-testing its asset and deposit portfolios. It also has a conservative approach to managing finances, which includes a test-and-try approach. “We don’t want to be the first to try something new and prefer to wait for others to test the waters before we take the plunge,” said Rayburn. “This way, when we do roll out something new, we do so in a complete and thorough manner to avoid any potential disruptions to our customers.” Ultimately, having a smart data strategy is more and more important, especially as black swan events like the pandemic become more common, said Rayburn. “The retail banking industry is at a critical juncture,” she said. “Having a smart data strategy will not only help DCCU weather any future economic storms, but will help improve performance and better address customer needs. 
A smart data strategy can also help us cultivate our brand and stand out in a crowded market.” "
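The warehouse tokenization DCCU describes can be illustrated with deterministic keyed hashing: the same input always yields the same token, so analysts can still join and count on tokenized columns without ever seeing raw values. This is a toy sketch, not DCCU's actual implementation; in a real deployment the key would live in a secrets manager and the raw-to-token mapping in an access-controlled vault:

```python
import hashlib
import hmac

# Toy illustration of warehouse tokenization: a keyed hash maps each PII
# value to a stable token. The key below is a placeholder assumption.
SECRET = b"demo-key-kept-in-a-secrets-manager-in-real-life"

def tokenize(value: str) -> str:
    """Deterministically map a PII value to an opaque token."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]
```

Determinism is the design choice that preserves analytics: two rows holding the same member ID tokenize identically, so joins and distinct counts still work, while only the vault holding the reverse mapping can recover the original value.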
13,623
2,023
"Case study: How Interstates rehauled its aging data center infrastructure | VentureBeat"
"https://venturebeat.com/data-infrastructure/case-study-how-interstates-rehauled-its-aging-data-center-infrastructure"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Case study: How Interstates rehauled its aging data center infrastructure Share on Facebook Share on X Share on LinkedIn This article is part of a VB special issue. Read the full series here: Data centers in 2023: How to do more with less. Cloud adoption is growing, but data centers still remain the driving force for most enterprises worldwide. The reason is obvious: You get more control and flexibility over the servers and associated components (like networking equipment and storage). But, when you have a lean IT team at the helm, keeping such infrastructure ready for complex and modern workloads can also be quite a task. Case in point: Iowa-based Interstates , a 60-year-old provider of electrical, construction and engineering solutions, faced the challenge of frequently upgrading data center hardware to meet its business needs. We dive into its digital transformation story Interstates’ VM-heavy workload Interstates focuses on design-build electrical projects, plant floor automation and operational technology support for manufacturing and industrial environments. 
To provide these services, the company emulates its customers’ environments within virtualization stacks hosted in two data centers — one primary and the other secondary. “On average, we run between 500 to 700 customer-specific virtual machines as we develop applications to run their environments and simulate that,” Nathan Bullock, the company’s IT operations manager, told VentureBeat. “It’s a pretty heavy workload for a virtualization stack. Plus, there is traditional Windows server architecture within manufacturing environments and traditional-type workloads.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Bullock leads a team of about 20 IT experts who, among other things, make sure that data center infrastructure meets the demands of these business-critical applications and virtual machines. However, with core infrastructure aging, his team often struggled to spin up required systems quickly. They had to update the hardware footprint now and then, which demanded major upgrades of the main chassis and took a lot of time. “We had to overhaul the hardware from time to time,” Bullock explained. “We had to look at upgrading server midplanes and backplanes (circuit boards) to be able to sustain new blade infrastructure and maintain our growth. It wasn’t allowing us to be very agile.” He noted that the issue was too disruptive and costly for the business and was happening too often, thus creating clear scalability and management bottlenecks for their tight-knit group. The overhaul In search of a simpler and easily scalable compute infrastructure to deliver 3D models in real-time and enable customers to review projects virtually from construction trailers, Interstates landed on Cisco’s UCS X-Series servers and Intersight operations platform. 
“We looked at Dell, IBM and others, but they (Cisco) came in easily 20% less than what we were getting from some of these other vendors,” said Bullock. “That allowed us to invest in Intersight and potentially other tools.” But pricing is not the hero here; modularity is. According to Bullock, with its latest UCS platform, Cisco was able to showcase how within one chassis, one could have not only compute nodes but also need-specific memory nodes and GPU nodes — something very relevant to their line of work. The vendor has partnered with Nvidia and Intel for GPU footprints, giving them the flexibility to pick and choose based on the workload at hand. Meanwhile, the cloud-based Intersight offering brought a single pane of glass to the table, giving cross visibility to manage not only compute, memory and GPU but also other infrastructure in the data centers, like storage and networks. This, again, was very attractive to the company. It purchased the first UCS-X platform for its secondary (disaster recovery) data center about 12 months ago. The company purchased the next set three months ago for its primary data center. Data center footprint reduced by half While Interstates is still in the process of migrating workloads from its production environment to the new platform, it already claims to have witnessed notable benefits, including a reduction of more than 50% in the server footprint of its two data centers. “We’ve significantly reduced our rack space, we’re using less power and cooling, and we can upgrade components over time instead of full rip-and-replace overhauls,” said Bullock. His team can now tune each box for specific workload needs, independently scaling memory, storage, CPUs and GPUs on a server-by-server basis. No more need to constantly overprovision or buy new servers. 
This has helped Interstates reduce the frequency of full-fledged server infrastructure overhauls, improve systems integration, automation and scalability and simplify and accelerate data center operations. “Every time you upgrade, there’s greater performance,” said Bullock. “We’re seeing at least two terabytes of RAM in each system so we were able to reduce our blade count or node count from five to three, and still have the growth capacity we need.” Plus, the reduced consumption of power and cooling as well as the 10 to 15-year shelf-life of the new platform (as claimed by Cisco) is helping with cost savings, he added (without sharing the exact numbers). Spinning up systems 75% faster With this modernization, Interstates is also leveraging automation support in the deployment experience, spinning up physical and virtual machines 75% faster than it could with the former infrastructure. “The installation of (Cisco UCS X server with Intersight) is extremely faster than what we’re accustomed to… it would usually take us a day to set up a basic VMware host and it was able to do that within an hour and create automated profiles, which allowed us to have our entire environment set up within a day,” said Bullock. “Traditionally, it would probably take us a couple (days), at least.” “Because of the simplicity of the platform, we can do more on our own than ever before,” he added. “And we can do it all through a single pane of glass.” Moving ahead, the company plans to extend its new computing environment to the cloud and align its internal cloud and data center network with the public cloud to manage it all as a single environment. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13624
2023
"Data center modernization: The heavy -- and rising cost -- of doing nothing | VentureBeat"
"https://venturebeat.com/data-infrastructure/data-center-modernization-the-heavy----and-rising-cost----of-doing-nothing"
"Sponsored Data center modernization: The heavy — and rising cost — of doing nothing Presented by AMD Competitive advantage today rests on an enterprise’s ability to deliver exceptional time-to-results for business-critical applications or consumer-facing services, ever faster and ever more reliably. In the background — or in the data center, to be more precise — this digital transformation and the accompanying explosion in company data have thrust CIOs and IT leaders into an era of relentless scaling. Even as organizations grapple with inflation and economic uncertainty, they are facing calls to provide a high-performance foundational compute infrastructure across the enterprise to develop new delivery models and handle new use cases. These include: Streamlining operations and reducing costs while enhancing sustainability (by lowering energy expenses and emissions). Enabling permanent remote and hybrid working, often with virtualized desktop infrastructure (VDI). 
Supporting AI, machine learning and database analytics, plus new deployment models — notably containerization and cloud native. Mining data effectively to deliver insights that drive revenue growth and increase customer “stickiness.” Responding and adapting quickly and flexibly to rapid business changes and evolution to enable ongoing transformation. New demands on the data center, economic uncertainty, inflation running well above recent levels (principally due to much higher energy costs) — these challenges are outside the control of CIOs and infrastructure decision-makers. And, with CAPEX and OPEX budgets under strain, a tempting option in these circumstances might be to hold fire and postpone investment in data center infrastructure — even if that investment would deliver higher performance within a shrinking power, cost and space envelope. This can be an especially seductive argument when the data center’s servers are already paid for, as is often the case, given that the average age of these servers is 3–5 years. Surely, the argument goes, it’s better to wait a while, reducing CAPEX and avoiding the effort and cost of upgrades. The older the infrastructure, the greater the cost The trouble is, those aging servers are not cost-free. The performance of older equipment declines over time, while the time, cost and space needed to keep it running rises. Older servers are more likely to crash, causing unplanned downtime and higher maintenance costs. They are far more vulnerable to sophisticated, targeted attacks. Data centers that push the limits of existing power, cooling and space face increased power costs. Over time they will become increasingly unable to keep pace with growing and changing business demands. So, eking out a few more years from aging infrastructure may save on CAPEX, but at the expense of rising OPEX. It also risks loss of revenue and competitive advantage. 
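The "cost of doing nothing" argument above boils down to break-even arithmetic: rising OPEX on a paid-for fleet eventually crosses the upfront-plus-flatter-OPEX curve of a refresh. A minimal sketch of that comparison, with entirely made-up cost figures (none of the numbers below come from the article):

```python
# Hypothetical TCO sketch: cumulative cost of keeping aging servers vs. refreshing.
# All dollar figures and growth rates are illustrative assumptions.

def cumulative_cost(years, annual_opex, opex_growth, upfront_capex=0.0):
    """Total cost over `years`, with OPEX growing by `opex_growth` per year."""
    total = upfront_capex
    opex = annual_opex
    for _ in range(years):
        total += opex
        opex *= 1 + opex_growth
    return total

# Aging fleet: no CAPEX, but power/maintenance/downtime costs rise ~15%/yr.
# Refreshed fleet: $500k upfront, lower and flatter OPEX (~3%/yr growth).
for year in range(1, 8):
    keep = cumulative_cost(year, annual_opex=300_000, opex_growth=0.15)
    refresh = cumulative_cost(year, annual_opex=180_000, opex_growth=0.03,
                              upfront_capex=500_000)
    marker = "<- refresh now cheaper" if refresh < keep else ""
    print(f"year {year}: keep ${keep:,.0f}  refresh ${refresh:,.0f} {marker}")
```

With these illustrative inputs the refresh overtakes the status quo in year 4; the point is the shape of the two curves, not the specific figures.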
The fact is, when it comes to data centers, kicking IT infrastructure refreshment down the road is not an option. To serve modern customers, the modern enterprise needs modernized data centers that can support simpler, software-defined environments that improve operations, agility, flexibility and scalability with a lower TCO. The three pillars of data center modernization CIOs looking to modernize their data centers need to focus on three main pillars. The first is the requirement to harness all the data in the enterprise to deliver real-time, actionable insights the business thrives on. This calls for systems with the highest bandwidth, lowest latency and fastest throughput. Second is driving savings through infrastructure consolidation. And third is reduced energy consumption and a smaller carbon footprint to meet the sustainability targets that are increasingly a component of corporate stewardship. The good news for CIOs who still have an eye on their CAPEX and OPEX budgets is that all three objectives are achievable using the latest-generation CPUs. These feature huge improvements in the key performance areas of core density and per-core performance, which in turn deliver record-breaking reductions in the number of servers, power consumed and savings in CAPEX and OPEX. As an example, take the performance of AMD’s 4th Gen EPYC processors on each of those pillars of data center modernization. Optimized for different workloads and segments — cloud, performance enterprise and mainstream enterprise — 4th Gen EPYC processors significantly outperform competitive processors in key tasks such as server-side Java applications operations/second for commerce (by 2.1 times)[1] and supporting 2 times the number of ERP users.[2] Meeting the virtualization challenge A key factor in reducing TCO is virtualization efficiency. Increased virtualization performance is an enabler of infrastructure consolidation, which translates into the ability to deploy hundreds more VMs. 
With space- and power-critical challenges, it’s imperative to fit the maximum compute into the smallest footprint, and here 4th Gen EPYC processors deliver quite remarkable performance figures. In a typical deployment scenario of 2,000 VMs, enterprises can replace a rack of 17 Intel Platinum 8490H servers with just 11 AMD EPYC 9654 servers. In hard figures, this adds up to 35% fewer servers, consuming 29% less energy annually, and cutting the enterprise’s annual CAPEX by up to 46%.[3] High performance like this not only enables CIOs to reduce both CAPEX and OPEX, it also plays directly into an enterprise’s drive for greater sustainability and lower carbon footprint. The cost of waiting is increasing all the time Updating to the newest generation of CPUs can improve a data center’s TCO. The latest CPUs are more efficient, allowing IT leaders to provide the same, or greater, level of performance with fewer servers, resulting in lower costs overall. As much as IT leaders are concerned about increasing CAPEX costs from servers that are already paid for, and adding to the problem with upgrades, the cost of doing nothing will very soon overtake the cost of modernizing. That means the cost of waiting is steadily increasing. Robert Hormuth is CVP, Architecture and Strategy at the AMD Data Center Solutions Group. Footnotes: SP5-104: https://www.amd.com/en/claims/epyc4#SP5-104 SP5-056A: https://www.amd.com/en/claims/epyc4#SP5-056-A SP5TCO-036: https://www.amd.com/en/claims/epyc4#SP5TCO-036 "
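The consolidation figures quoted above are easy to sanity-check. The server counts (2,000 VMs, 17 servers vs. 11 servers) come from the text; everything derived below is just arithmetic:

```python
# Sanity-check the quoted consolidation math: 2,000 VMs on 17 older servers
# vs. 11 newer ones. Server and VM counts are from the article; no power or
# cost figures are modeled here.
vms = 2000
old_servers, new_servers = 17, 11

server_reduction = 1 - new_servers / old_servers
print(f"server reduction: {server_reduction:.0%}")  # ~35%, matching the claim
print(f"VM density: {vms / old_servers:.0f} -> {vms / new_servers:.0f} VMs/server")
```

The "35% fewer servers" headline is simply 1 − 11/17 ≈ 35.3%; the implied density jump is roughly 118 to 182 VMs per server.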
13625
2023
"Data center ops: How AI and ML are boosting efficiency and resilience | VentureBeat"
"https://venturebeat.com/data-infrastructure/data-center-ops-how-ai-and-ml-are-boosting-efficiency-and-resilience"
"Data center ops: How AI and ML are boosting efficiency and resilience This article is part of a VB special issue. Read the full series here: Data centers in 2023: How to do more with less. Data centers must deliver more granular, real-time data to keep retailers’ operations resilient, responsive and online despite potential security and outage threats. Unpredictable supply chains, chronic labor shortages, spiraling inflation and energy costs are just a few of the challenges that retail CIOs and senior management teams face when optimizing their data centers. AI and machine learning (ML) can help identify how existing data centers can be redesigned to make them less rigid and siloed, and more reliable. One of the key goals of using AI and ML is to troubleshoot why so many outages are occurring on-premises and in the cloud. 
Add to that spiraling electricity and energy costs and the need to optimize data center performance for aggressive sustainability targets, and data centers become a perfect use case for solving complex problems with AI and ML. “Workload volumes are set to continue growing at around 20% a year between now and 2025. Traditional data center approaches are struggling to meet these escalating requirements,” writes Tracy Collins, VP of Americas at EkkoSense. According to Brons Larson, AI strategy lead at Dell, “data centers can leverage AI/ML to improve performance and optimize configuration and deployments.” Wendy Zhao, senior director and principal engineer at Alibaba Cloud Intelligence, adds that “AI and ML continue to make great strides in their evolution, and they are now having a tangible impact on data center operations and IT management.” And according to IDC, 50% of IT assets in data centers will run autonomously due to embedded AI functionality. For enterprises investing in AI to automate their IT infrastructure, the firm says, improving customer satisfaction and automating decision-making and repetitive tasks are the top organization-wide benefits. AI and ML gaining adoption Last year, more than half (57%) of data center operators said they would trust AI to make routine operational decisions, up from 49% in 2021. Given how manually intensive many tasks are in data centers, AI and ML could significantly reduce costs and increase efficiency. CIOs tell VentureBeat that taking on the challenging problems of reducing outages, strengthening multisite resiliency, optimizing direct liquid cooling (DLC) and improving capacity planning and security are areas where they’re interested in applying AI and ML-based solutions. 
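One concrete flavor of the outage-troubleshooting work described above is baseline anomaly detection on telemetry streams. Below is a deliberately simple z-score detector over a rolling window; all sensor values are invented, and production systems would use far richer models:

```python
# Minimal anomaly-detection sketch: flag readings that deviate sharply from
# a metric's recent baseline. Illustration only, with invented data.
from statistics import mean, stdev

def anomalies(readings, window=10, threshold=3.0):
    """Return (index, value) pairs sitting more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append((i, readings[i]))
    return flagged

# e.g. server inlet temperatures (deg C) with one sudden cooling fault
temps = [22.1, 22.3, 21.9, 22.0, 22.2, 22.1, 21.8, 22.0, 22.3, 22.1, 29.5, 22.2]
print(anomalies(temps))  # the 29.5 reading at index 10 is flagged
```

The same pattern applies to power draw, fan speeds or error rates; the hard part in practice is choosing baselines that track normal daily and seasonal variation.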
Energy costs are soaring, which means that operating data centers under budget is more challenging. CIOs and data center operators are concentrating on evaluating how software-defined power and AI can help exponentially reduce energy and cooling costs. Equinix is a global provider of data center services and network infrastructure for many of the world’s leading enterprises. Its CIO, Milind Wagle, says that the company operates a fleet of more than 220 data centers in 26 countries. It uses AI to tune up its internet ‘engine room’ by estimating how much power and space will be consumed in its data centers. Where AI can help optimize data center performance Utilizing AI, CIOs and data center operators can optimize power consumption and improve power usage effectiveness (PUE) for future efficiency gains. As sustainability pressures increase industry-wide, many operators are unprepared to meet carbon emissions reporting requirements. In addition, outages continue to be costly and frequent, with cloud applications especially susceptible. AI has the potential to aid in the resolution of a number of these issues by enhancing efficiency, decreasing outages and streamlining operations. The following are key areas where AI can help optimize data center performance: Improve capacity planning and resource allocation Real-time data is critical to capacity planning and resource allocation across any data center. Real-time data holds insights into where, how and what needs to be optimized to improve performance. One critical area is identifying any bottlenecks in capacity planning and load balancing. These are constraint-based problems that supervised ML algorithms excel at solving. Getting capacity planning and resource allocation right is critical to running a thriving data center under budget. 
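Since the section above leans on PUE, here is the definition in executable form: PUE is total facility energy divided by the energy delivered to IT equipment (a perfect facility scores 1.0). The kWh figures below are invented for illustration:

```python
# PUE (power usage effectiveness) sketch. Energy figures are hypothetical.
def pue(total_facility_kwh, it_equipment_kwh):
    """Total facility energy / IT equipment energy; 1.0 is the ideal floor."""
    return total_facility_kwh / it_equipment_kwh

before = pue(total_facility_kwh=1_800_000, it_equipment_kwh=1_000_000)  # 1.8
after = pue(total_facility_kwh=1_400_000, it_equipment_kwh=1_000_000)   # 1.4
print(f"PUE improved from {before:.2f} to {after:.2f}; "
      f"overhead energy cut by {(before - after) / (before - 1):.0%}")
```

Note that the overhead reduction is computed against the non-IT portion (everything above 1.0), which is the part cooling and power-delivery optimizations can actually touch.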
AI and ML can help improve data center security By learning a network’s normal behavior and detecting anomalies and deviations, AI can help prevent massive data breaches and hacks. AI cybersecurity tools can thoroughly screen and analyze all incoming and outgoing data for security threats. “Never trust; always verify” underpins zero trust enterprise security. This approach trusts no user, application or device unless explicitly allowed by a security policy. Organizations can improve hybrid environment visibility, security, and compliance while reducing costs by adopting a zero trust mindset. Get in front of carbon footprint reduction and reporting AI excels at identifying diverse data patterns and helps form-fit models for how data changes over time. Supervised ML has proven effective in solving complex constraint-based carbon reduction problems that involve hundreds of potential variables and factors that impact emissions. Getting sustainability right means combining the strengths of AI and ML to excel at carbon footprint reduction. It’s too important to leave it to chance, and it has a significant impact on any retail brand in the future. CIOs say they are seeing their peers’ compensation plans indexed to ESG targets, making sustainability a high priority with carbon footprint reduction and reporting core to the effort. Improve uptime maintenance levels and benchmark data center performance over time Knowing why a given type of server needs rebuilding more than others, identifying what’s causing interruptions to power management systems and troubleshooting why resource balancing isn’t working are all the types of problems that ML can help solve. The key is getting real-time data monitoring in place and building a data set that can track all available variables to troubleshoot performance bottlenecks. Supervised ML models excel at predictive accuracy. 
Mining machine data and building models that predict when a given server will need preventative maintenance can save thousands of dollars and hours of lost availability. Think of the real-time data generated by every asset in a data center as the intelligence needed to track performance over time and find new ways to improve. Combine the strengths of AI and ML to automate cooling, electricity, power and security systems The goal is to have a data center that can operate autonomously. It’s possible to accomplish that by capturing real-time data that tracks air temperature, cooling, power loads, internal air pressure, resource loads and server performance. What motivates CIOs and data center operators to collaborate to accomplish this is the need to measure data center performance against sustainability and ESG goals set by senior management. Using ML to interpret and create models based on environmental monitoring and control is essential for measuring progress to ESG targets. It’s a given that AI and ML need to be extensively used for tracking power and cooling consumption, two of the most expensive areas of running a data center. Identifying AI use cases in data centers Identifying where AI can make the most significant contribution to securing and optimizing a data center must start where risks to operating costs and security are the greatest. CIOs tell VentureBeat that taking on the challenge of finding new ways to reduce energy consumption to meet carbon reduction and sustainability goals needs to be balanced against the staff shortage they continue to experience. Getting cooling, space, power and server optimization right is core to keeping a data center running under budget and averting potential outages. It’s estimated that 35% of the energy used in a data center is consumed through cooling infrastructure alone. 
Optimizing data center cooling, implementing more renewable energy options and improving IT utilization can improve sustainability by 57%, based on the Uptime Institute’s Global Data Center Survey, 2022. Nascent use cases for AI in the data center include efficiency risk analysis, capacity planning, security and budget impact forecasting. In cybersecurity, using AI to close the gap between IT and OT systems is a given, as is defining least privileged access and identity management for every data center and system. "
13626
2023
"Data center dilemma: Retail CIOs seek ways to balance cost and value in 2023 | VentureBeat"
"https://venturebeat.com/data-infrastructure/data-centers-cios-seek-ways-to-balance-cost-value-2023"
"Data center dilemma: Retail CIOs seek ways to balance cost and value in 2023 This article is part of a VB special issue. Read the full series here: Data centers in 2023: How to do more with less. Retail CIOs and their teams face complex challenges in reducing data center costs and increasing the value their data centers deliver. 2023 is turning out to be a more challenging year than many expected, thanks to the need to support an expanding scope and variety of digital-first revenue initiatives, defend infrastructure against a spike in cyberattacks, deal with supply chain disruptions and keep sharpening their competitive edge with AI. “The pressure on CIOs to deliver digital dividends is higher than ever,” said Daniel Sanchez-Reina, VP Analyst at Gartner, during his keynote at the Gartner IT Symposium/Xpo in Orlando. “CEOs and boards anticipated that investments in digital assets, channels, and digital business capabilities would accelerate growth beyond what was previously possible. 
Now, business leadership expects to see these digital-driven improvements reflected in enterprise financials.” Summing it up, he said that “a triple squeeze of economic pressure, scarce and expensive talent, and ongoing supply challenges is heightening the desire and urgency to realize time to value.” Retail CIOs’ challenges are intensifying “While the rules of the game have changed, it’s the speed of the game — driven by the accelerating pace of technology adoption — that is the primary source of disruption,” writes EY America’s research team. Retail CIOs face the challenge of getting more projects done with less budget and little new equipment, starting with their data centers. Retail CIOs must define a scalable strategy to secure new infrastructure, including edge devices, non-x86 architectures, content delivery networks, service meshes, and 5G mobile service, while controlling data center costs. CIOs need to demonstrate the business value data centers deliver To hold onto their budgets and gain new funding, retail CIOs must demonstrate the business value of data centers in the context of current and future digital-first revenue initiatives. CIOs tell VentureBeat that their boards of directors want to see progress on new digital transformation initiatives that drive new revenue without increasing data center capital expense (CAPEX) spending. That’s forcing CIOs to do more with less and devise workarounds with existing data center hardware, systems and software. Many rely on operating expense (OPEX) budgets to support business growth with data center spending. For example, one CIO provided monthly data showing how data center upgrades reduced order cancellations and increased customer satisfaction. 
Scenarios like this, which show data centers’ direct contribution to reducing costs, increasing customer loyalty, and driving revenue, are invaluable. Constant pressure to reduce IT costs and optimize performance Retail CIOs face the daunting paradox of cutting data center and infrastructure costs without compromising support, services, security or responsiveness to customers and internal operations. Every retail brand is impacted by its stance on sustainability, whether the retailer acknowledges it or not. More than ever, consumers decide whom they buy from based on how sustainable a business is. With C-level executives and boards defining progressively more ambitious environmental, social and governance (ESG) targets, CIOs managing data center consolidations are getting significant support. CIOs tell VentureBeat that sustainability is part of the lean data center. Sustainability programs and data center infrastructure parameters that prioritize energy efficiency and asset utilization are among the most prominent ways leaders can reduce IT costs while increasing data center performance. In a recent Gartner survey of business leaders, 80% stated that sustainability reduces costs, and even more — 86% of these executives — believe that sustainability investments protect them from disruptive impacts. Retail data centers are a prime target for attacks Because of the valuable information they store and process, data centers are a prime target for cyberattacks. Cybercriminals and nation-state-backed hackers frequently target data centers to steal or destroy data. Target’s 2013 data breach is an example. Credit card and personal information for millions of customers was compromised. The company’s expenses related to the breach totaled $162 million. Cybersecurity, data protection and privacy are forcing CIOs to work towards consolidating tech stacks to make them more effective at identifying intrusion attempts, threats and endpoint breaches. 
Compliance and security aren’t the same thing — especially in retail Retailers’ data centers need to excel at meeting a growing base of global compliance laws while continually hardening their security. Yet while improving compliance with customer privacy rules is paramount, it doesn’t solve the problem of keeping data centers more secure. Zero-trust security is proving effective at achieving compliance while hardening endpoints and data centers. In a recent interview with VentureBeat, John Kindervag, creator of zero trust, said that “the biggest and best-unintended consequence of zero trust was how much it improves the ability to deal with compliance, auditors, and things like that.” Retail CIOs tell VentureBeat their zero-trust security initiatives to secure data centers need to start with validating identities and roles. It’s imperative to move beyond passwords to newer, more robust authentication technologies, including passwordless authentication. Ivanti, Microsoft Azure Active Directory (Azure AD), OneLogin Workforce Identity, Thales SafeNet Trusted Access and Windows Hello for Business are the leading providers. Ivanti’s approach of combining zero sign-on (ZSO) with passwordless authentication and zero trust on its Unified Endpoint Management (UEM) platform reflects how vendors are responding to the need to improve every aspect of identity security. Ivanti ZSO replaces passwords with mobile devices as the user’s primary identifier and authentication factor, and relies on FIDO2 authentication to eliminate passwords. CIOs tell VentureBeat that Ivanti ZSO is successfully gaining user awareness and adoption because it can secure any device, centrally managed or not. The board’s ask: Making digital transformation happen with existing data centers and budget Retail CIOs are being asked to do more with less, delivering digital transformation initiatives using existing data centers with no incremental CAPEX spend. 
"CIOs must prioritize digital initiatives with market-facing, growth impact," said Janelle Hill, distinguished VP analyst at Gartner, during her keynote at the Gartner IT Symposium/Xpo. "For some CIOs, this means stepping out of their comfort zone of internal back-office automation to instead focus on customer or constituent-facing initiatives." VentureBeat has learned that CIOs who work to connect data center contributions to revenue are more likely to get the support they need from the board when new systems and technologies are needed. Taking ownership of the revenue contributions data centers make is a smart career move: 89% of CIOs expect to have revenue-generating responsibilities in their careers today. "Business equals technology," said David Gledhill, CIO and group head of technology and operations for DBS Bank, a global bank based in Singapore. "My job is not just providing information technology but delivering on business outcomes and customer satisfaction."

Driving more value from data centers while reducing costs

Retail CIOs face the dual challenge of increasing data center value while reducing costs. The goal is to achieve significant cost savings without sacrificing performance or quality of service. Ways that data centers are delivering more value at lower cost include:

- Using AI-based techniques to optimize resource utilization
- Selectively adopting automation while consolidating infrastructure
- Adopting energy-efficient technologies

Here are some of the strategies that are delivering more value from data centers while reducing costs.

Going all-in on optimizing resource allocation and utilization

Retail CIOs told VentureBeat that they start with monitoring and analytics to set a performance baseline. First, they use real-time monitoring to immediately uncover bottlenecks, inefficiencies and immediate opportunities to improve performance.
Second, they commit to fine-tuning their infrastructure and optimizing it with real-time data. That's the lifeblood of getting more value out of existing data center assets. Third, they're using virtualization and containerization to consolidate workloads onto fewer physical servers. This is an area where AI and machine learning are helping to greatly optimize performance today. AI-based tools, CIOs have told us, directly reduce hardware, energy and maintenance costs by optimizing workload placements for performance, security and other factors. Nearly every CIO VentureBeat interviewed also said that optimization must span on-premises, cloud and edge computing to maximize resource utilization throughout the data center. Capacity planning and forecasting, automating routine tasks and measuring performance gains are essential to foster continuous improvement.

Consolidating data center infrastructure to reduce costs

Streamlining hardware environments reduces energy, maintenance and hardware costs and can improve data center performance. Using AI and machine learning-based tools, CIOs and their teams are finding new ways to make infrastructure leaner. Relying on AI and machine learning to improve resource allocation, workload density and computing-power efficiency is proving effective in reducing costs and improving performance.

Integrating sustainability to reduce costs and increase ESG compliance

Retail CIOs tell VentureBeat that sustainability principles are now core to their long-term strategies — and, for many, a significant part of their compensation plans. CEO, CIO and senior management financial incentives are increasingly tied to ESG performance targets. Pursuing sustainability is quickly becoming a core part of every CIO's cost-reduction strategy for data centers as they respond to rising energy costs, supply constraints and uncertain economic conditions.
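The consolidation step described above, packing workloads onto fewer physical servers, is essentially a bin-packing problem. A minimal sketch of one common heuristic, first-fit decreasing, illustrates the idea; the workload names and CPU demands below are invented for illustration, and real schedulers weigh memory, I/O and affinity as well:

```python
def consolidate(workloads: dict[str, float], host_capacity: float = 1.0) -> list[list[str]]:
    """Pack workloads (name -> CPU demand as a fraction of one host)
    onto as few hosts as possible, using first-fit decreasing."""
    hosts: list[list[str]] = []   # workloads assigned to each host
    free: list[float] = []        # remaining capacity of each host
    # Place the largest workloads first; they are hardest to fit later.
    for name, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for i, slack in enumerate(free):
            if demand <= slack:           # first host with room wins
                hosts[i].append(name)
                free[i] -= demand
                break
        else:                             # no host had room: add one
            hosts.append([name])
            free.append(host_capacity - demand)
    return hosts
```

For example, `consolidate({"erp": 0.6, "web": 0.5, "batch": 0.3, "cache": 0.2})` fits four workloads onto two hosts instead of four, which is the hardware, energy and maintenance saving the article describes.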
Reducing excess power, investing in clean energy and delaying replacement cycles are crucial to attaining this goal. Using cloud or colocation services is also helping CIOs consolidate data centers and close facilities that are no longer needed. Public cloud and colocation providers, knowing the pressure CIOs are under to consolidate and get more done with less, are prioritizing sustainable computing and clean energy to attract new data center business.

Transitioning workloads to cloud services

Data centers are no longer defined by physical facilities alone. They're a core part of an infrastructure that needs to be hybrid and adaptive enough to flex as business needs change. That's why so many retail CIOs are moving workloads into the cloud and creating a hybrid infrastructure: to meet rising market requirements for speed, scale and responsiveness to customers. Cloud services' pay-as-you-go model helps businesses cut capital and operational costs and better allocate resources. By moving workloads to the cloud, retailers can focus on their core business, streamline operations and improve the customer experience while saving money.

Conclusion

Data centers need to be smartly managed to maximize digital dividends, defend against cyberattacks and absorb supply chain disruptions while minimizing costs. Retail CIOs can meet these challenges by concentrating on optimizing resource utilization, automating processes, consolidating infrastructure, investing in energy-efficient technologies, using cloud services, implementing more predictive, proactive maintenance and outsourcing non-core functions. Retail CIOs and their teams will then have more time and resources to devote to customer-facing initiatives while improving compliance and security to reduce risks to revenue and operations.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
© 2023 VentureBeat. All rights reserved.
"Economies of scale: How and why enterprises are outsourcing their data centers" (VentureBeat, 2023)
https://venturebeat.com/data-infrastructure/economies-of-scale-how-and-why-enterprises-are-outsourcing-their-data-centers
This article is part of a VB special issue. Read the full series here: Data centers in 2023: How to do more with less. The data center is the bedrock of the insight economy. To be competitive, enterprises need IT infrastructure that can process data at scale, on a cost-effective basis. However, many organizations don't have access to the internal expertise and resources they need to keep the data center performing at the highest level. This leads many organizations, like Netflix, to look to outsourcing as a force multiplier. In fact, Forrester research finds that nearly 60% of organizations are looking to outsource or partner with a provider to transform their operations. This way, organizations can focus on daily innovation without getting sidetracked by the complexity of managing increasingly complex infrastructure. Many services can be outsourced to third-party providers.
These can include hardware installation and maintenance, management of computing and storage resources, systems configuration, uptime management, monitoring of application and service performance, database administration, backup and disaster recovery services, physical access control and incident response.

Making data center insights cost-effective

The main reason most organizations outsource the management of the data center is cost. Building and maintaining a data center is extremely costly, particularly for larger organizations that need to invest not only in physical space but in servers, infrastructure, power, security access controls and specialist IT staff. As a result, many organizations turn to outsourcing as a way to lighten these costs without sacrificing operational excellence. "Outsourcing is a very common business activity for enterprises of all sizes, across industries," said Naveen Chhabra, Forrester principal analyst. "Tech leaders outsource for many reasons including staff augmentation, cost efficiency, strategic direction/alignment, ability to focus on its own strengths and innovation."

The right partner, the right balance

The core financial argument for outsourcing management of the data center is that "outsourcing provides the ability to minimize upfront capital expenses, and benefit from the economies of scale that an experienced data center provider will offer," said Leo Casusol, managing director of Forgepoint Capital. Part of that involves renting secure and resilient infrastructure on an ongoing subscription basis at a predictable price that requires less initial investment, although it's important to note that this can come at the cost of control.
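The build-versus-subscribe tradeoff Casusol describes can be made concrete with a toy break-even calculation. All figures here are invented placeholders, not market data; real comparisons would also account for staffing, depreciation and refresh cycles:

```python
def total_cost(years: int, build: float = 0.0, ops_per_year: float = 0.0,
               subscription_per_year: float = 0.0) -> float:
    """Cumulative cost in $M: one-time build spend plus yearly outlays."""
    return build + years * (ops_per_year + subscription_per_year)

def breakeven_years(build: float, ops_per_year: float,
                    subscription_per_year: float) -> float:
    """Years until owning a facility becomes cheaper than subscribing.
    Assumes the subscription costs more per year than owned operations."""
    return build / (subscription_per_year - ops_per_year)
```

With a hypothetical $5M build, $1M/year of owned operations and a $2M/year outsourced subscription, the break-even point is 5 years; shorter planning horizons favor the subscription, which is the "minimize upfront capital expenses" argument in miniature.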
"The right decision — whether to outsource or not — normally comes down to the tradeoff between control over direct hardware and customization capabilities versus the economic and operational benefits of relying upon a third-party provider," said Casusol. "If an organization finds the right partner, the right balance can be achieved."

Types of data center outsourcing

At the same time, there are many different forms that outsourcing can take. There are a few primary types of data centers, according to Brian Lewis, managing director of advisory at KPMG. These include individual client, managed service provider, cloud-based colocation and hyperscale computing. "Each type of data center aligns with different business priorities and outcomes which adhere to one of four availability tiers; Tier 4 with 99.995% uptime to Tier 1 with 99.5% uptime," said Lewis. The organization can choose which type, tier and associated services meet its priorities, outcomes and budget. Facilities management, smart hands, IT operations, asset management and monitoring are all services that enterprises can pick and choose to outsource. That said, there's no one-size-fits-all solution, and organizations have the option to build services around their own distinct use cases. "There is a high-level shift in outsourcing trending toward transformation-enabled deals that bring the right value at the right time," said Lewis. "Traditional activity-based, one-size-fits-all managed service models still exist in the market; however, organizations are increasingly looking for flexible, product- and consumer-centric contracts to support the varying needs of their business."

Colocation as the go-to choice

One of the most widely used examples of data center outsourcing is colocation. Data center colocation is where an organization rents physical space from a third-party provider to store its hardware.
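Those tier percentages translate directly into allowed downtime per year, which is a useful sanity check before committing to a tier. A quick illustrative sketch:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def downtime_hours(uptime_pct: float) -> float:
    """Maximum yearly downtime implied by an uptime percentage."""
    return (1 - uptime_pct / 100) * HOURS_PER_YEAR

# Tier 4 at 99.995% allows roughly 0.44 hours (about 26 minutes) of
# downtime a year; 99.5% uptime allows roughly 43.8 hours.
```

The two orders of magnitude between those numbers are what buyers are paying for as they move up the tier ladder.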
Colocation enables organizations to "leverage existing data center facilities to rent space for closer proximity to major fiber routes," said Ernest Lefner, chief product officer of network automation provider Gluware. "Colocation frees up the reliance on personal data centers from having to manage critical infrastructure, which can be some of the most difficult infrastructure to manage, move or relocate in the event of data center issues or closures," said Lefner. Researchers valued the data center colocation market at $50.3 billion in 2021 and anticipate it will reach $159.8 billion by 2030. Key vendors in the market include Alphabet, Amazon, DXC Technology Co. and Equinix. Typically, colocated data center offerings provide temperature-controlled, secure, highly monitored environments staffed 24/7/365, populated by servers offering guaranteed performance and uptime. They are also a good option for organizations looking to reduce risk, because they increase resilience against power outages or natural disasters: if a company's primary site of operation goes down, there is still another site with IT infrastructure online.

The limits of outsourcing

Of course, outsourcing management of the data center does create new risks that security teams need to be prepared to confront. "As with any outsourcing, data center infrastructure comes with increased supply chain risk," said Claude Mandy, chief evangelist for data security at Symmetry Systems. "This includes the risk of unauthorized access to data and the challenge of securing data regardless of where it is in a hybrid cloud — including public cloud or on-premises. This has driven the focus on data encryption at rest, but more importantly requires organizations to monitor more carefully who is accessing their data." Ultimately, these risks are manageable if organizations commit to differentiating the security responsibilities of the provider from their own and implement adequate security controls and disaster recovery processes.
The key to making this happen is to maintain active communication with third-party vendors. "Disaster recovery and business continuity plans need to be tested to include data center partners where the management has been outsourced," said Timothy Morris, chief security advisor at Tanium. "Plans should be tested frequently and included in incident response plans for both cyber and disaster (fire, flood, tornado, hurricane, pandemic) contingencies."
"Why some companies are forging ahead with cloud investments" (VentureBeat, 2023)
https://venturebeat.com/data-infrastructure/why-some-companies-are-forging-ahead-with-cloud-investments
This article is part of a VB special issue. Read the full series here: Data centers in 2023: How to do more with less. The retail industry is changing so quickly that no CIO can afford to miss an opportunity to gain a competitive edge on costs, insights and speed. A typical retailer generates tens of thousands of real-time digital transactions across various channels. Legacy infrastructure and applications and dated on-premises data centers struggle to scale and secure all this traffic. Retailers can't let data centers become roadblocks to revenue growth, especially with more CIOs expected to make greater revenue contributions. While many CIOs' roles were once limited to prioritizing cost-cutting and efficiency, this is rapidly changing. For any CIO to grow their career, they must demonstrate that they and their teams can generate revenue by combining business strategy and IT.
They need to be strategists first and technologists second, with a clear sense of where customers' digital experiences need to go for their companies to stay relevant.

The best business cases balance cost reduction and revenue growth

A successful business case for cloud migration balances cost reduction, revenue growth from new digital initiatives, and faster access to innovations and vertical expertise. Revenue contributions from data centers help CIOs achieve better results and faster payback. Finding innovative ways to get greater value out of customer data is a growth catalyst for data center spending, over and above cost reduction alone. Another growth driver is the scalability of cloud platforms, which can flex and adapt to changing operational efficiency needs. The Home Depot, for example, is taking advantage of this adaptability for its supply chain operations. Because of these factors, the global data center market, estimated at $220 billion in 2021, is expected to grow at a compound annual growth rate (CAGR) of 5.1%, reaching $343.6 billion by 2030. For its part, the data center services market, valued at $48.9 billion in 2020, is expected to reach $105.6 billion by 2026, growing at a CAGR of 13.69%. What's behind that forecast? Factors include innovations that increase data center performance and scale; advances in hybrid cloud; solid-state storage; data center infrastructure management (DCIM); and sustainability-based improvements that will further reduce costs and increase efficiency. McKinsey found that by 2024, the average company intends its cloud spending to account for 80% of its total IT-hosting budget.

No one wants to lose a step on competitors

Many retailers' data centers are designed to support predictable customer behaviors, known buying cycles and long-standing customer preferences.
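The market projections above are easy to sanity-check with the standard compound-annual-growth-rate formula. A brief illustrative sketch using the article's own figures:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start and end value."""
    return (end / start) ** (1 / years) - 1

def project(start: float, rate: float, years: int) -> float:
    """Value after compounding `start` at `rate` for `years` years."""
    return start * (1 + rate) ** years

# $220B in 2021 -> $343.6B in 2030 implies about 5.1% CAGR over 9 years,
# and $48.9B compounded at 13.69% for 6 years lands near $105.6B,
# matching the forecasts cited above.
```

Running the numbers both ways confirms the two forecasts are internally consistent.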
These data centers were built on the assumption that customers wouldn't change, wouldn't want personalized products or buying experiences, and wouldn't expect the freedom to buy through any channel at any time while getting the excellent service, scale and speed they expect. Amazon's online buying experience is still considered the gold standard, the model for all retailers pursuing an ecommerce strategy. VentureBeat has been in over a dozen virtual meetings and conference calls with leading retailers and distributors, and they have explained how challenging it is, for example, to get one-click ordering to work for a single product when a retailer's data centers and systems are designed for bulk-load electronic data interchange (EDI) transactions that stream in nightly. Data centers can't afford to run at bulk-load speed in real time. CIOs tell VentureBeat that that's one of the main factors driving them to double down on cloud platform investments. No one has ever cost-reduced themselves into market leadership. Retail is one of the most inventory-dependent businesses, so finding new ways to make every aspect of supply chain management, warehousing, fulfillment and returns more efficient directly contributes to gross margin growth. That's what's driving the most successful business cases for migrating data centers to the cloud. It's essential to attain the scale and speed to drive efficiency so well that it positively affects inventory. Migrating workloads to the cloud for cost alone is a challenging and time-consuming process, and it's difficult to get any appreciable ROI that way. The primary goal needs to be revenue upside driven by greater efficiency.

On the fast track to vertical expertise

Data centers are not adapting fast enough to keep up with how customers are changing how, where and why they buy. CIOs understand that moving workloads to the cloud, combined with the necessary vertical expertise, can turn this potential liability into a strength.
"We have most workloads running on AWS, and we talk to them about creating fit-for-purpose offerings for financial services," said Ken Meyer, Truist Financial Corp. chief information officer for consumer technology. "We work with Microsoft in the insurance space, and their financial services offerings go to multiple verticals. And we've leveraged Google's Apigee platform because they're leaders in open banking." Public cloud providers know that if they're only known for storage and compute, they'll become commoditized, and every cloud platform provider will compete on price and availability. The race is on to differentiate on vertical expertise. That's a valuable service most CIOs look for as they continue moving workloads to the cloud and building out hybrid cloud infrastructure. AWS offers 19 industry-specific solutions, Microsoft's Azure platform offers nine, and Google offers 21 industry solutions.

Migrating the data center to the cloud to be more competitive

The race is on to innovate faster than competitors and keep customers, who have more choices of brands and buying experiences than ever before, brand-loyal. Data centers are now part of the competitive mix for any retailer, as they are crucial for keeping the heartbeat of any retailer robust. Real-time data is crucial to keeping a revenue stream moving, especially for new digital business initiatives. CIOs who build business cases that set up their teams to deliver revenue growth are managing their careers for growth. CIOs' old role of risk and cost reduction is being replaced with a strategic role that's core to driving the business forward. It's time to look at workload migration from data centers not just through a cost lens but through an opportunity lens. Migrating workloads can free up resources internally while ensuring retailers can adapt as their customers' preferences for what, where and how to buy evolve in the years ahead.
"Everywhere and nowhere: Metaverse leaders plan for data centers on a whole new scale" (VentureBeat, 2023)
https://venturebeat.com/metaverse/everywhere-and-nowhere-metaverse-leaders-plan-for-data-centers-on-a-whole-new-scale
This article is part of a VB special issue. Read the full series here: Data centers in 2023: How to do more with less. The metaverse was once pure science fiction, an idea of a sprawling online universe born 30 years ago in Neal Stephenson's Snow Crash novel. But now it's gone through a rebirth as a realistic destination for many industries. And so I asked some people how the metaverse will change data centers in the future. First, it helps to reach an understanding of what the metaverse will be. Some see the metaverse as the next version of the internet, or the spatial web, or the 3D web, with a 3D animated foundation that resembles sci-fi movies like Steven Spielberg's Ready Player One. In the last few years, the metaverse went through a hype cycle, spurred by Mark Zuckerberg's decision to rename Facebook as Meta in a bid to make virtual reality and mixed reality headsets into the windows of the metaverse. Others see it extending far beyond that to smartphones, PCs and just about any gadget. While games such as World of Warcraft and virtual worlds like Second Life have taken a step in the direction of the metaverse, others say that Fortnite, Minecraft and Roblox are the true forerunners of the metaverse with their emphasis on user-generated content and daily online multiplayer gaming.
And still more believe it will be something much bigger than that. Matthew Ball, author of The Metaverse: And How It Will Revolutionize Everything, refers to it as a persistent and interconnected network of 3D virtual worlds that will eventually serve as the gateway to most online experiences, and also underpin much of the physical world. "When they're having discussions about the metaverse, people always focus on upper levels of the stack," said Rev Lebaredian, VP of Omniverse and Simulation at Nvidia, in an interview with VentureBeat. "There has been almost no discussion about what the infrastructure underneath is, and we care a lot about that. We're building that stuff." More precisely, he said it is a "massively scaled and interoperable network of real-time rendered 3D virtual worlds that can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications and payments." I think of it as a real-time internet where we can have experiences like in the Star Trek holodeck, where you can immerse yourself in a world that can be indistinguishable from reality. And you should be able to use this metaverse to switch to another such world instantaneously. When you get to that kind of definition of the metaverse, it's clear that it isn't here yet. And the question becomes: How long will it take to get there, and will technologists ever be able to build it? Raja Koduri, the chief architect at Intel, predicted in 2021 that the metaverse will need 1,000 times more computing power than was available back then. That's obviously going to take a while.
Lower down the stack

Lebaredian said that there has been a lot of discussion about the applications everyone wants to see for the metaverse. But he noted there has been very little discussion of the technology lower down in the stack, in the data centers. He wants to see more attention to what it will take to really build the metaverse and the data centers that support it. He said, "I believe that the infrastructure that we need to build out is going to be different for the metaverse as a whole compared to the data centers we have today. There are going to be differences even within these two classes of the metaverse, the consumer one and the industrial metaverse." When I asked Jon Peddie, president of Jon Peddie Research and author of a three-book series dubbed The History of the GPU, he answered with a question about what the metaverse is. He believes it will be a single entity that ties together all of the networks — something like today's internet, but evolved. And he said it will be a good question where it actually resides. "Should a metaverse ever be created, where will it be? It will be everywhere and nowhere," Peddie said in an email to VentureBeat. "It won't have a central location at NSA's headquarters or Google. One of the basic tenets of a metaverse is that all tokens from any subverse will be frictionlessly exchangeable with any other subverse via the metaverse." He added, "If I buy a dress in Ubisoft's subverse and sell it in Nvidia's Omniverse, the tokens exchanged (Bitcoin or Euros) will flow to my digital wallet without me having to do anything more than let the computer look at my beautiful blue eyes. I may or may not be wearing a suffocating VR headset, and I may or may not be almost fainting at the enthralling aspects of blockchain transactions on Web3 or Web4, it will just happen — that is a metaverse, and for that to work, it has to be on every machine, much like a browser is on your laptop, tablet, TV, smartphone and car.
So don't look for a zip code for the metaverse." Omid Rahmat is an analyst who works with Peddie, and he will be a speaker at our GamesBeat Summit 2023 event. He noted that the notion of digital twins, or building a copy of something in the real world in order to simulate it — like a BMW factory — could be a big part of the metaverse. Under that notion, he said, "the metaverse is the total sum of digital data that mirrors the real world as well as extending into new ones that connect to this one." He also believes that the revolution in generative AI will lead to a vast expansion in conversational man-machine interfaces. Rahmat thinks that "younger generations are going to be happy to move to conversational man-machine interfaces because they're probably going to be fed up [with] getting neck-aches constantly looking down at their phones." He said these younger generations are going to then be more amenable to the use of heads-up displays because they are mobile, can manage their digital environments with conversational AI, and will probably demand mixed-reality experiences. "All of these assumptions will extend the amount of data and computational demands placed on data centers because no matter how powerful mobiles become, they will never be powerful enough to handle the vast amounts of computing power needed to support this sea change in user behavior," Rahmat said. "Into this mix, you add the vast amount of data that is going to be used when we move to the internet of sensors, a natural extension of our need to model, simulate, measure and interact with the metaverse to mitigate costly behaviors in the real world," he added. "Just the sheer volume of data and servers that will be needed to enable a semi-autonomous automotive experience, not even [fully] autonomous, is beyond existing data center capacities." This isn't just about fun and avatars, either.
"The companies that want to control the metaverse, the big tech giants that are investing in all of the above, will need to own vast amounts of data and will have to devote ever more resources to adapt that information into viable products for businesses and consumers. All of this is happening today, but we've only scratched the surface of demand," Rahmat said. He believes the metaverse is ultimately going to be a reinvention of our shared reality, a way to create a digital transformation of real-world interactions. That means we are going to need to create infrastructure for a 10 billion-user client-server model by 2050. We are nowhere near having the resources in place to support that kind of expansion, and we are only at the beginning of the road to finding more energy-efficient, recyclable approaches to building out the infrastructure, he said. These are different strategic views of the metaverse that I've come across, and there are counterarguments being made by folks who want the metaverse to be open and decentralized. We'll see what others foresee as well over the course of this story.

Roblox's view of data centers of the future

Roblox prefers to run its own data centers, with only a small amount [of data] handled by outsiders, said Dan Sturman, CTO of the company, which has 67 million daily active users. As such, it's one of the leading companies in the metaverse today, with its focus on user-generated content. Running its own data centers allows the company to save money and give more money back to creators. For the metaverse, Sturman sees changes ahead for data centers. To enable that, the company has to create custom solutions that deliver the functions that Roblox and its developers need. It also has to deploy data centers around the world while respecting local networking requirements and national restrictions on storing private user data. Roblox keeps its data in its core data centers and is also pushing a lot of processing out to the edge of the network.
Gamers who have good computers can do a lot more of the processing required for running Roblox on the computers at the edge of the network. But those who have older computers do less of that work at the edge. In that sense, Roblox takes advantage of infrastructure at the edge. “Pushing compute to the edge is even more important. The frames per second for interactivity is just a whole other level compared to what we’ve had on traditional web apps,” Sturman said. “If you start looking at it, we want to do at least 30 frames a second. Interactivity is really important. We share the load with the client devices.” By contrast, with virtual reality, much of the computing load is handled by a standalone VR device. “One thing we’re learning is we need to be ready to kind of shift [computing] work based on the client device we’re talking to,” Sturman said. “That’s the direction we’re heading. And I think it’s important. So that takes me into GPUs because with most devices out there today, most graphics can be done on the end device.” For voice processing, Roblox does a lot of that in the cloud using GPUs. Roblox is also doing a lot more machine learning inference processing with more algorithms running. Some of that happens on the CPU, but GPUs are also likely to be used in the data center for that purpose. “Interpreting voice into our facial expressions is something we want to do at the edge, not at the core data center,” Sturman said. “We want to take what you’re saying and put that on your avatar. So your lips move accordingly. I think all good data center design comes down to total cost of ownership. What is my workload? And how do I assemble the tech available to execute that workload as efficiently as possible?” Roblox is also exploring large experiences, like rock concerts with tens of thousands of people. Those could benefit from advances in networking technology like Nvidia’s Mellanox technology. Sturman thinks it would be “incredible” to do a 50,000-person concert. 
But that’s likely to require changes in both software and hardware architecture, he said. It’s hard to imagine that networking between servers will ever be faster than the memory bus within a server, he said. But it’s worth looking at. Sturman said his company uses the Lua programming language because it makes it easy to run an app on any device. And it has to run anywhere in the world. To make that happen and build the data centers for all of that, it takes a lot of focus on the game engine, data centers and infrastructure support. “It doesn’t just happen by itself,” Sturman said. Generative AI will be a revolution for many industries, and in the case of gaming it will lead to better user-generated content. Creators will be able to craft things much faster and with less help. Roblox has already launched a generative AI coding-assist feature. Over time, it could lead to a lot more user-generated content, and, as a result, the need for more data center infrastructure.

Pushing the problem to the cloud

Lisa Orlandi, CEO of 8agora, said in a message to VentureBeat that we’ll see an early push of metaverse processing and applications into the cloud. “If you look at metaverse companies today, the heavy compute requirements and rendering are downloaded onto the user device, but this model does not scale well to billions of people across the globe,” Orlandi said. “This will need to be pushed to a multicloud infrastructure (similar to what Amazon or Netflix are doing to stream in the cloud). This also means that data centers will need to ramp up their compute capacity and continue to build out their infrastructure to support the high-speed, high-compute requirements.” But she noted it will be a challenge to do this kind of processing in the cloud in a sustainable way as power consumption for these heavy compute-intensive environments and bandwidth requirements to the user will increase significantly. 
“Even when you look at Nvidia’s streaming, where they render in the cloud, the user still needs to download an app (it’s not web-based),” Orlandi said. “They don’t support bidirectional audio and the bandwidth requirements are high. For instance, GeForce Now requires at least 15Mbps for 720p at 60FPS.” That goes up to 25Mbps for 1080p at 60FPS and 35Mbps for streaming up to 2560×1440/2560×1600/3480×1800 at 120FPS. These higher bandwidths equate to a higher power requirement and higher cost to the consumer and will require data centers to increase their capacity exponentially while maintaining sustainability, Orlandi said. “In Europe, they’ve adopted green energy requirements in their data centers and we believe that this will soon be the case in the U.S. as well,” she added. “This means that new technologies will need to be implemented that can scale to billions of people across the globe. It won’t be just a matter of lowering the component power, but a new strategy to enable an end-to-end multicloud solution that can handle the increase in users, lower their cost and power footprint, and also lower the cost and power footprint for the data centers.” This is really the problem that 8agora focused on, which is moving the client app to the cloud to integrate it with the streaming app and thereby allowing the use of green energy data centers and high-quality rendering that can scale across any use case, Orlandi said. “Multiple sessions can be rendered (20) with a single GPU card rated at 70 watts while encapsulating the audio/video data stream back to the user down to 1Mbps,” Orlandi said. “This allows the optimizations needed by the data centers to build out and scale high-performance environments at low power. 
Because bandwidth requirements are lowered, this means they can support much higher capacity across a multicloud infrastructure in a sustainable way across billions of people.”

The industrial metaverse will drive datacenters

The thing about the metaverse, as Peddie noted, is that processing will take place in the cloud for some applications, like real-time games with massive numbers of players. But much of the processing will also take place at the edge, Nvidia’s Lebaredian said. You may need to access the metaverse with your smartphone if you’re at a location where you can capture data on the scene but not have access to a supercomputer. Some people are legitimately wondering if the metaverse craze, rising out of the pandemic when we were forced to communicate digitally, has waned as the hype cycle has moved on to AI and as the effects of the pandemic have lessened and enabled more people to go out publicly. It’s natural for some of the interest on the consumer side to subside, as mixed reality technology is still a long way from fruition as a consumer product. But Nvidia sees a huge amount of metaverse activity on the industrial and enterprise side, said Lebaredian. “On the industrial side, the parts we’ve been focused on, the metaverse is alive and kicking. And everybody wants it from all the customers that we’re working with,” Lebaredian said. “The enterprise is more like the lead horse of the metaverse.” The industrial metaverse is a business-to-business ecosystem. The parts of the metaverse, or virtual environments, that connect back to the real world are where Nvidia is focused. That means things like digital twins, where a company designs a factory and makes it perfect in the digital world before it builds the real thing in the physical world. And it will outfit that physical factory with sensors that can feed data back to the digital twin so the company can have a data loop that improves the design over time. 
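That sensor-to-twin data loop can be sketched in a few lines. All class, station and field names below are hypothetical, purely for illustration; a real system would involve streaming pipelines and full physics simulation rather than a toy class:

```python
# Minimal sketch of a digital-twin data loop: sensor readings from the
# physical factory update the virtual model, and measurements are compared
# against the design assumptions to flag where reality diverges from plan.
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    # expected throughput per station in units/hour (the design assumption)
    expected: dict[str, float]
    # latest measured throughput fed back from physical sensors
    measured: dict[str, float] = field(default_factory=dict)

    def ingest(self, station: str, units_per_hour: float) -> None:
        """Feed one sensor reading from the physical factory into the twin."""
        self.measured[station] = units_per_hour

    def divergences(self, tolerance: float = 0.10) -> list[str]:
        """Stations where reality differs from the design by more than tolerance."""
        out = []
        for station, planned in self.expected.items():
            actual = self.measured.get(station)
            if actual is not None and abs(actual - planned) / planned > tolerance:
                out.append(station)
        return out

twin = DigitalTwin(expected={"weld": 120.0, "paint": 90.0})
twin.ingest("weld", 118.0)   # within 10% of plan
twin.ingest("paint", 60.0)   # well below plan -> design needs revisiting
print(twin.divergences())    # ['paint']
```

The point of the loop is the last line: the twin tells the designers which part of the physical build to revisit, closing the design-improvement cycle described above.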
It follows that enterprise data centers are going to be the ones that will evolve to serve the customers of the metaverse. New technologies like the metaverse will start out expensive — note the $3,300 cost of the Magic Leap 2 mixed reality headset — and only enterprises will be able to afford it. “When you get to the market, you have to have the perfect confluence of conditions” like low costs and seamless user experiences, Lebaredian said. “In industry, you have less of those constraints. If you simulate things that help you design products in the metaverse for things that cost billions of dollars and then can save you millions of dollars, then your price sensitivity is different. We are building systems that let you scale at high fidelity with extreme scale.” The result is likely to be that data centers will adapt to meet the needs of enterprise metaverses first, Lebaredian said. We will likely see an explosion of technologies to serve the metaverse and its infrastructure, just like search engines and accompanying businesses like Akamai served the needs of the fledgling internet. “Once people figured out a business model around the internet, then that’s what it took to make the internet really grow,” he said. “Then look what happened. Google made search work. They built a business model around it. We’ve seen this before.” Lebaredian isn’t sure the metaverse term itself will stick. He remembered how Al Gore referred to the internet as the information superhighway, but that buzzword didn’t last. But he thinks the technology itself will absolutely be necessary and useful in the long run. “Somebody has to keep the ball moving forward, and it makes sense that Nvidia would be one of those companies investing in this particular set of technologies,” Lebaredian said. “We’ve done computer graphics. We’ve done gaming. We continue to do supercomputing, AI — all of this stuff. 
It all comes together right here in the metaverse.”

The hardware underneath

Right now, Nvidia’s lead system for datacenters running metaverse applications is the Nvidia OVX system, which is a SuperPod architecture that offers scalable performance for operating real-time simulations and AI-enabled digital twins on a factory, city or planetary scale. “OVX systems are designed for the industrial metaverse for digital twins and designed to scale into data centers, networking, low latency and high bandwidth,” Lebaredian said. “That is the foundation we are building Omniverse Cloud on. And Microsoft Azure is about to stand up a whole bunch of OVX systems.” BMW demonstrated how factory planners from around the world can come together in a digital twin — a factory that is ready in the virtual sense now and will be built physically in 2025 — and walk through it together virtually to figure out what is right or wrong about the design. In Japan, this is known as a “gemba walk.” Those people have to see what the others are modifying in real time as they interact with a factory that has something like 20,000 robots. During the walk, they can make agile decisions. “Getting that factory to simulate in real time, that’s a major challenge,” said Lebaredian. “Gaming systems like a PlayStation can’t do that. But the OVX has GPUs, CPUs and enough memory to handle something like that. The physical factory for BMW will be in Hungary and be miles long. It’s so big the curvature of the earth matters in the design. But it will exist as a simulation in the Omniverse.” BMW and Nvidia have to make thousands of GPUs available to run that digital-twin simulation. That’s essentially going to be running in a Microsoft Azure datacenter. With this infrastructure in place, an engineer can make a change in one part of the factory and it can immediately be visible to everyone. 
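That “change once, everyone sees it” behavior comes down to keeping a single authoritative copy of the scene and fanning each edit out to every subscriber. Here is a minimal sketch of the idea with hypothetical names; Omniverse itself is built on USD layers and a networked sync service, not anything like this toy class:

```python
# Sketch of one authoritative scene store broadcasting edits to all viewers.
# Illustrative only: every participant sees the same state because there is
# exactly one copy of it, and edits are pushed out rather than polled.
from typing import Callable

class SharedScene:
    def __init__(self):
        self.objects: dict[str, dict] = {}     # the single authoritative state
        self.subscribers: list[Callable] = []  # every connected participant

    def subscribe(self, callback: Callable[[str, dict], None]) -> None:
        self.subscribers.append(callback)

    def edit(self, obj_id: str, **props) -> None:
        """Apply an edit to the authoritative copy, then fan it out."""
        self.objects.setdefault(obj_id, {}).update(props)
        for notify in self.subscribers:
            notify(obj_id, self.objects[obj_id])

scene = SharedScene()
seen_by_tokyo, seen_by_munich = [], []
scene.subscribe(lambda oid, state: seen_by_tokyo.append((oid, dict(state))))
scene.subscribe(lambda oid, state: seen_by_munich.append((oid, dict(state))))

# One engineer moves a robot; every participant in the walk-through sees it.
scene.edit("robot_17", x=4.0, y=2.5)
print(seen_by_tokyo == seen_by_munich)  # True
```

The hard part at BMW scale is not the fan-out logic but doing it for tens of thousands of robots at interactive rates, which is where the GPUs and datacenter networking come in.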
The problems that enterprise designers run into — and the need for access to massive amounts of real-time data — have some parallels in the game world. You could make a game that is so realistic that a building can collapse and produce a pile of rubble. That rubble has to be calculated with care since there are so many pieces of data that must be accessed in real time by different players in the game. If one player’s PC at the edge calculates the rubble faster than another’s, the rubble will look different to different players based on how fast their machines are. That doesn’t work. But if you put the game in the cloud and do all of the calculations in hardware inside a data center, then the calculations can be done quickly and shared among all of the users whose game data is in the data center itself. “If we change it up, instead of doing computation at the edge, so that it all happens in the same data center, like with GeForce Now, we can ensure almost zero latency between the players,” Lebaredian said.

The sniper and the metaverse

Another problem was summarized as the “sniper and the metaverse.” Most players in a multiplayer game are limited to participating in a single server, with maybe something like a maximum of 100 players in a server. But if you’re a sniper on high ground, you might be able to see beyond the borders of a single server. You might see another soldier that you can snipe a mile or two away. But if the bullet crosses a server border, then it might slow down and the simulation might be out of sync. And so most of the time, the game makers have to limit such games so the sniper can’t see that far. Inside the data center, one of the key differences compared to the edge is the speed of networking. Nvidia’s Mellanox acquisition gave it networking technology for supercomputers. That is so fast and the latency is so low that communication can be faster between different nodes on a network than it is within nodes. 
That means server-to-server communications can be faster than communication within a server. “When that happens, it becomes blurry what is the actual computer, and there is no limit to the scaling that you can do,” Lebaredian said. “The problem of snipers and distance goes away.” That is, when the networking is so fast within the data center, the notion of shards — or different servers with distinct borders between them — can go away. “Supercomputing, it’s all about how fast you can move data between the nodes. Because once you have that interconnect, it’s actually kind of blurry what is the computer. What we’re doing at Nvidia in general is that we are democratizing supercomputing,” Lebaredian said.

The same problem with industrial robots

It turns out the sniper and the metaverse are not so different from the robot and the metaverse. “In a factory, there are thousands and thousands of robots, and they all are doing their own thing, and they need to run their simulations locally,” said Lebaredian. “And it needs to be done in one unified space. As far as the infrastructure to do this goes, one of the things we believe is a key technology [is] to allow more players to be in the same space and be more interactive together in a physically consistent way.” He added, “The big problem you have on the internet is not bandwidth. It’s latency. If I have a 200-millisecond round trip time from where I am to the game server, and yours is only 10 milliseconds round trip time, then the world you and I are experiencing is slightly different. If we are in a shooter, and you’re going to shoot me and I duck behind a wall, then what you see when you are targeting me is not in actual physical time. I’m not behind the wall yet.” Generally, in that situation, the players see different things but the game usually favors the shooter. But the person who is shot probably thinks there was something wrong with the timing. 
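The standard trick engines use to make the game favor the shooter is server-side lag compensation: the authoritative server keeps a short history of player positions and rewinds to where the target was at the moment the shooter actually saw it. A stripped-down one-dimensional sketch, with made-up numbers and names:

```python
# Sketch of server-side lag compensation: the server rewinds the target's
# position history by the shooter's latency before judging a hit.
# Purely illustrative; real engines interpolate between history samples
# and work in 3D with full hitboxes.

def was_hit(history: list[tuple[float, float]], shot_time: float,
            shooter_latency: float, aim_x: float, radius: float = 0.5) -> bool:
    """history: (timestamp, x_position) samples for the target.
    The shot is judged against where the target was when the shooter
    saw it: shot_time minus the shooter's one-way latency."""
    rewound_time = shot_time - shooter_latency
    # pick the latest sample at or before the rewound time
    past = [(t, x) for t, x in history if t <= rewound_time]
    if not past:
        return False
    _, x_then = max(past)
    return abs(x_then - aim_x) <= radius

# Target stood at x=10.0, then ducked behind a wall at x=12.0 at t=1.0.
history = [(0.0, 10.0), (1.0, 12.0)]

# Shooter with 200 ms one-way latency fires at t=1.1 aiming at x=10.0:
# the server rewinds to t=0.9, where the target was still exposed -> hit.
print(was_hit(history, shot_time=1.1, shooter_latency=0.2, aim_x=10.0))  # True

# With zero latency, the server judges at t=1.1; the target has moved -> miss.
print(was_hit(history, shot_time=1.1, shooter_latency=0.0, aim_x=10.0))  # False
```

This is exactly the asymmetry the person behind the wall experiences: from their point of view they made it to cover, but the server ruled on the rewound past.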
If the calculation happens inside the data center, via cloud gaming, then that calculation of both player locations can be done at the same time by the same computer, resulting in a better adjudication on whether one player shot the other or the other player hid behind a wall. “Once we move all of this computing to the cloud, a lot of those problems become easier,” Lebaredian said. “You can do a lot of computation in a distributed manner, across different nodes and computers. But between those computers, you have enough bandwidth, and the stability of connections and latency is low enough. And the actual players can be in different parts of the world. They’re just further from the computer doing the calculation. They experience the latency of the video and the clicking, but not the massive problem of distributing the simulation across the globe. That’s what we’re essentially doing right now when you play Fortnite.”

Earth 2 and The Lord of the Rings

Nvidia’s big project for the leading edge of supercomputers is Earth 2. Jensen Huang, CEO of Nvidia, said that Earth 2 is a simulation, a digital twin, of the entire planet. Nvidia will simulate all of the weather formations on Earth and make it accurate to a meter level. If it can pull this off, using all the supercomputers of the world, then it could predict climate change for decades to come. But the only way to tackle that is to break the problem down into local parts, like calculating the weather in one region and then passing along the impacts to other regions through fast interconnections such as the Mellanox technology. If the network is really slow, and the network becomes a bottleneck, then the computing can be subdivided on a local level with an attempt to get the processing done in one region. Supercomputers concentrate on computing the fluid dynamics of water in the clouds and the environment and how they change second to second. 
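The subdivide-and-exchange idea can be illustrated with a one-dimensional toy: two workers each smooth half of a grid and swap only their boundary (halo) cells each step, producing the same answer as one machine doing the whole grid. Everything here is illustrative; real climate codes use MPI over 3D grids with far richer physics:

```python
# Sketch of spatial domain decomposition: split a 1D "weather" grid between
# two workers, let each update its own cells, and exchange only the single
# boundary cell between steps instead of shipping the whole grid around.

def step(cells, left_ghost, right_ghost):
    """One smoothing step: each cell becomes the average of itself
    and its two neighbors (ghost values stand in at the edges)."""
    padded = [left_ghost] + cells + [right_ghost]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(padded) - 1)]

def run_single(grid, steps):
    """Reference: one machine computes the whole grid."""
    for _ in range(steps):
        grid = step(grid, grid[0], grid[-1])
    return grid

def run_split(grid, steps):
    """Two workers each own half the grid and exchange halos each step."""
    mid = len(grid) // 2
    left, right = grid[:mid], grid[mid:]
    for _ in range(steps):
        # halo exchange: each worker needs one boundary cell from the other
        halo_for_left = right[0]
        halo_for_right = left[-1]
        left = step(left, left[0], halo_for_left)
        right = step(right, halo_for_right, right[-1])
    return left + right

grid = [0.0, 0.0, 10.0, 10.0, 0.0, 0.0]
print(run_single(grid, 3) == run_split(grid, 3))  # True: same answer, split work
```

The communication cost per step is one cell per boundary, however large each worker's region is, which is why the approach scales as long as the interconnect keeps the halo exchange cheap.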
“You spatially subdivide the problem and chop it up so each computing unit can communicate on an island of its own and communicate quickly when it needs to do that,” he said. Some of this is not so different from what occurs in a giant Lord of the Rings battle, like the Ride of the Rohirrim in the Battle of the Pelennor Fields in Peter Jackson’s The Lord of the Rings: Return of the King film. If you tried to replicate this battle in 3D in the metaverse, it would be a huge undertaking. But rather than try to render it all at once in edge-based player machines, you could do it in the cloud and subdivide the problem. A supercomputer could handle a chunk of the battle where the Orcs and Riders are swirling around in combat. Getting tens of thousands of humans and Orcs into the same scene isn’t so different from getting tens of thousands of robots into an industrial factory, as far as computing problems go, Lebaredian said. “Instead of calculating molecules in fluid simulation for water, it’s simulating humans and Orcs,” Lebaredian said.

Will the metaverse be decentralized?

Not every metaverse application is going to require massive centralized cloud computers. In fact, much of the blockchain gaming world may want the computation to happen in a decentralized way via blockchain and Web3 technologies. Ultimately, as the metaverse grows, assuming it grows in a decentralized fashion, its dependence on centralized hosting and data centers should decrease as it is pushed across more peer-to-peer networks enabled by blockchain-based ecosystems, said Jason Brink, president of blockchain at game maker Gala Games, in an email to GamesBeat. “At its core, ‘metaverse’ is indistinguishable from ‘gaming’ in form, if not function. Decentralized hardware, hosting and image processing can revolutionize the gaming industry,” Brink said. 
“By leveraging the power of distributed networks and edge computing, these technologies can bring about transformative changes in performance, accessibility and innovation within the gaming landscape. This goal of driving innovation and decentralization is central to what we do at Gala Games.” One of the most significant advantages of decentralized hardware and hosting is the enhanced performance it offers for both gaming and metaverse applications, he said. Traditional gaming relies heavily on centralized servers and high-powered personal computers or consoles. However, decentralized systems utilize a vast network of devices, distributing the computational load across multiple nodes, Brink said. The nodes of a computing network could be run by external companies that are incentivized to supply the computing power for the decentralized network, said Robby Yung, CEO of Animoca Brands, in an email to GamesBeat. “This broader distribution enables superior processing power, resulting in faster load times and smoother gameplay experiences, even for the most graphically-intensive titles,” Brink said. “Furthermore, the inherently scalable nature of decentralized networks allows for accommodating ever-evolving technological demands.” Amy LaMeyer, managing director of the WXR Fund, said in an email to GamesBeat, “With the amount of data needed for high-resolution, real-time multiplayer games, I think caching content near the end user will continue to be necessary, as well as more optimized streaming capabilities and edge computing. I expect the visibility on eco-computing to rise with data centers looking at more climate-friendly ways to compute.” Another critical aspect of decentralization is its potential to improve accessibility to the metaverse. By shifting the computational burden from individual devices to a distributed network, players with lower-end hardware or mobile devices can also enjoy high-quality gaming experiences. 
This democratization of access removes barriers to entry and encourages a more diverse and inclusive community of users and players. Of everything that the decentralized future of the metaverse brings, this is probably the most important from both a social and a business perspective, Brink said. Not every person in Web3 expects decentralization to be so sweeping. Ryan Mullins, CEO of Aglet, believes that the metaverse “would require even more expansive data center growth and strategic proximity to end users.” Bitcoin and Ethereum took ideological jabs and uppercuts for the amount of energy consumption they demand given the intensity of proof-of-work. “My hypothesis would be that, should the metaverse emerge as anything more than a pandemic trend, something similar will happen to the metaverse,” Mullins said. “For example, Meta’s proposed data center in the Netherlands (to serve “the metaverse” to European users) would consume half as much energy (1,380 gigawatt-hours per year) as all other data centers in the Netherlands combined.” Mullins believes the metaverse will be accessible mainly by people using mobile devices, and that would be a lot more energy-efficient than jacking into centralized data centers. Of course, mobile devices can use a lot of data center technology as well, but Mullins thinks that we’re going to need more data centers and more ways to enable computing and movement at the edge as well. As for blockchain, Roblox’s Sturman said he isn’t saying no yet but he is waiting for someone to show him the user benefits of blockchain technology, except perhaps when you’re coupling the purchase of virtual Nike shoes with the purchase of Nike shoes in the real world. Sturman said Roblox’s North Star is really that game platforms of the future will focus on user-generated content. “It is very important to have as many options open to allow ideas, flowers to bloom that you never expected,” he said. 
As for the infrastructure for that, Sturman said, “I have to be ready for anything. And I always want to make it as easy for [users] as possible to do what they want to do.”

Everything Everywhere All at Once

All of this means that the technology used to build the metaverse will likely involve the cloud, and it will be the same technology across enterprises and game worlds. It will be a new kind of distributed computing for the metaverse, and the configurations of data centers will be very dependent on the kind of processing they’re doing with their applications. We don’t know exactly what architecture will evolve to meet the demands of the metaverse, but we know it will be different from today’s architecture. At a time when Moore’s Law has stalled and data centers are consuming a lot of the planet’s energy, there are huge problems to solve with the metaverse. Let’s hope these problems will be solved. As Lebaredian said, if networking technology comes along that can make server-to-server communication faster than communication within servers, we can solve a lot of the problems of huge battles in a tight space. Then it becomes easier to do massive real-time simulations of games that involve a lot of people and a lot of territories all at once, regardless of whether there are snipers trying to draw a bead on one individual within a huge crowd. This definitely makes me think of the film Everything Everywhere All at Once. If you’re Nvidia, you see the solution as bigger GPUs and more memory. But companies like Intel could see the solution in a much different way. “With 600 million daily players, we’re already in the metaverse in some form every day,” Lebaredian said. “That’s a lot of people. If the metaverse has to be one planet, where all seven billion people on Earth can go crowded into a stadium together, I guess that’s one definition of it. I’m not sure how useful that would be. 
But every day, there are people having multiplayer experiences in virtual worlds that are meaningful to them. It’s just going to get bigger, more complex, and have more players in the same place at the same time.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. © 2023 VentureBeat. All rights reserved. "
2023
"VB in Conversation - Reimagining The Data Center in Today's Environment - JoAnn Stonier - Mastercard | VentureBeat"
"https://venturebeat.com/vb-in-conversation-reimagining-the-data-center-in-todays-environment"
"Reimagining The Data Center in Today’s Environment with JoAnn Stonier, Mastercard As the quantity of data snowballs into uncharted territory, companies continue to evaluate and adjust their strategies to manage this avalanche of data – especially as tech leaders are being asked to do more with less. In this VB in Conversation, VentureBeat’s Editor-in-Chief Matt Marshall speaks with JoAnn Stonier, Chief Data Officer at Mastercard. They cover Mastercard’s data strategy, including the company’s urgent need for speed and heightened security in their data stack, supporting over 125 billion worldwide transactions every year. Stonier also talks about Mastercard’s move to a hybrid structure including private cloud and modernized data centers to ensure their customers’ transactions are processed safely in milliseconds around the globe. This video is part of a VB special issue. Read the full series here: Data centers in 2023: How to do more with less. 
"
2023
"VB in Conversation - Reimagining The Data Center in Today's Environment - Promiti Dutta - Citi | VentureBeat"
"https://venturebeat.com/vb-in-conversation-reimagining-the-data-center-in-todays-environment-promiti-dutta-citi"
"Reimagining The Data Center in Today’s Environment with Promiti Dutta, Citi As the complexity of data continues to spiral, data and tech leaders are addressing a wide range of issues to optimize their approach, particularly against an economic backdrop that is anything but certain. In this VB in Conversation, VentureBeat’s Editor-in-Chief Matt Marshall speaks with Promiti Dutta, Head of Analytics Technology and Innovation at Citi. From data sprawl to build-versus-buy, Dutta explains how Citi is developing a hybrid approach to their data and tech stack in a highly regulated industry – and how radical empathy is critical to cultivating change within an organization. This article is part of a VB special issue. Read the full series here: Data centers in 2023: How to do more with less. 
"
2022
"7 ways the metaverse will change the enterprise | VentureBeat"
"https://venturebeat.com/2022/01/26/7-ways-the-metaverse-will-change-the-enterprise"
"7 ways the metaverse will change the enterprise This article is part of a VB special issue. Read the full series here: The metaverse - How close are we? A buzzword among tech buffs, the metaverse can be summarized as a natural evolution of the internet where a network of persistent digital spaces will be populated by people (as digital avatars) hanging out, making a living, and claiming ownership of their digital belongings. It is expected to take shape over the next decade with technologies such as AR, VR, IoT, 5G, blockchain, and cloud computing coming together and interacting with one another. A few companies are already contributing to the emergence of this virtual space, and they’re big and small and varied in what they do. On one hand, for instance, tech giants such as Facebook (now Meta) and Microsoft are pumping money into various aspects of the metaverse, starting from AR/VR headsets to communication platforms. 
Meanwhile, smaller players are working on specific elements, like VirBELA for virtual collaboration in work settings, and Second Life for playing around as an avatar in a virtual world. In the long run, these (and many other) collaborative systems and technologies would be implemented for consumer or enterprise use cases, allowing people to interact in a way that is far more personal and engaging than Zoom calls. The exact impact on enterprises will take some time to develop. Take a look at the smartphone revolution that started 15 years ago. At the start, a fart app was briefly the most popular app on the iPhone. People had no actual clue what was being built, but they were sure that something big was on the cards. The smartphone didn’t have an app store at the beginning. 3G networks were terrible. And the batteries were awful. A similar evolution will happen with the metaverse. Here are some functional areas within the enterprise that are likely to be transformed by the metaverse. Customer support and experience Organizations that rely on telephone calls and digital-first channels (automated chats) for customer service will be able to leverage the metaverse to pivot to a fully virtual-first experience, Nancy Pekala, VP for content marketing and strategy at HGS Digital, said in a blog post. In this experience, digital twins of customer service agents could assist customers in an immersive, shared digital space, helping them assemble, repair, or exchange their products. This would not only allow customer service agents to do their job better than ever but also build long-term trust between the organization/brand and the customer. 
For instance, if a customer gets stuck while assembling a piece of furniture, a virtual help desk could walk the customer through the process using a manipulable virtual copy of the furniture, giving them a clear path to finish assembling their purchase. On the customers’ side, this would be a lot like getting help from a service agent in person. Sales and marketing Just like customer service agents, sales and marketing executives within organizations can leverage the metaverse’s ability to recreate the fidelity of real-life communication and visualizations and pitch their products/services to prospective targets, Michael Pryor, head of Trello at Atlassian, told VentureBeat. According to Pryor, platforms like Zoom do help with sales, but the metaverse will make the whole process more lifelike, ensuring effective communication with nuances of body language, tone, and visual cues – while also allowing some level of choice with anonymity at the same time. This would allow customers to be in the virtual outlet and see/experience the product – without actually being there. Good examples of this are Ralph Lauren’s virtual ski store as well as Drest, an interactive game that allows people to try on different outfits to decide what looks best on them and then connects to an ecommerce platform to buy those looks in the real world. Various automakers, including Nissan and Mercedes, have also created virtual showrooms to give prospective buyers a good look at their vehicles inside and out and drive sales. Advertising Organizations, especially those in the retail industry, will also be able to leverage the metaverse for product placement and advertising. It’s hard to envision what these advertisements would look like in the future, but a number of fashion labels are already working in this space, giving a fair idea. 
Fashion giant Balenciaga, for instance, promoted its real-world collections as digital skins in Epic Games’ Fortnite and launched its own game on Unreal Engine to showcase its Fall 2021 collection. Similarly, Burberry launched in-game outfits for Chinese strategy game Honor of Kings, and fashion-tech startup BigThinx organized a virtual fashion show featuring designs from Rebecca Minkoff, Alivia, and multiple other brands. Many retail players, including Gucci and OTB Group (parent of Diesel), have also set up metaverse divisions to make the most of this opportunity. Large events and conferences From press conferences to massive trade shows, the metaverse will play a crucial role in making large events truly immersive and virtual for enterprises. This is because, when the technology matures, tens of thousands of people – maybe even millions – might be able to see and interact with each other in the virtual world. Imagine the Travis Scott concert Epic Games held within Fortnite, only bigger and better. Furthermore, the metaverse won’t be bound by the physical constraints of the real world. Enterprises could easily have an event venue with as many meeting rooms and stages as they want, no questions asked. Engineering and architecture Enterprises could also leverage the metaverse to create digital twins that could help with engineering and development efforts. For instance, Boeing plans to leverage Microsoft HoloLens and 3D digital avatars to strengthen aircraft engineering and prevent manufacturing flaws. Airbus, too, has partnered with Microsoft to use Azure Mixed Reality and HoloLens to reduce design validation time by 80% and accelerate complex aircraft assembly tasks by 30%. Beyond this, enterprises could also create immersive digital simulations of their production line and identify bottlenecks that could potentially affect the quality or delivery of their product. 
This way, they could address the issues – be it a machinery fault or human error – well before they affect the real production process. Automaker BMW is using Nvidia’s Omniverse to simulate every aspect of its manufacturing operations. According to TrendForce, the metaverse could propel global “smart” manufacturing revenue to $540 billion by 2025. Workforce training Another important application of the metaverse will be in the area of skill development. Instead of gathering employees and training them on actual pieces of machinery, which could instead be used for production, organizations could set up virtual plants where trainees could learn to perform all essential operations from startup to shutdown. They could also use the virtual environment to simulate accidents/emergencies and train workers on safety response measures more effectively. Currently, organizations have to rely on monotonous training manuals or videos for such tasks. “Immersive learning and training have been common use cases for the enterprise metaverse,” a Deloitte spokesperson told VentureBeat. “During the pandemic, we virtualized Deloitte University and built an immersive space where colleagues from all over the world met and collaborated in a natural way: we held 50+ events just in the first three months. We built a HoloLens 2 experience that brings William Deloitte to life and an AR experience that showcases 3D and 2D art on a dedicated wall in the new Deloitte University in India. “Additionally, through Deloitte Studio, we built immersive onboarding experiences for our new hires,” the spokesperson said. Scenario planning Since there are no physical constraints in the virtual world, enterprises could also use the metaverse as a way to ensure effective scenario planning and problem management. 
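The core of this kind of scenario planning is a what-if simulation: run production under a baseline and under a stress case, and compare the outcomes. A minimal sketch of the idea, with a component shortage as the stress case — the demand, stock, and resupply numbers are all invented for illustration, not taken from any vendor's tooling:

```python
def simulate_shortage(days, daily_demand, stock, resupply_day, resupply_qty):
    """Toy scenario model: units shipped per day are capped by component stock."""
    shipped, backlog = [], 0
    for day in range(days):
        if day == resupply_day:
            stock += resupply_qty           # late bulk delivery arrives
        want = daily_demand + backlog       # today's demand plus unmet orders
        out = min(want, stock)
        stock -= out
        backlog = want - out
        shipped.append(out)
    return shipped, backlog

# Baseline: ample stock, shipments track demand exactly.
base, _ = simulate_shortage(10, 100, 2000, resupply_day=5, resupply_qty=0)
# Stress case: only 300 components on hand; resupply lands on day 5.
short, left = simulate_shortage(10, 100, 300, resupply_day=5, resupply_qty=2000)
# short == [100, 100, 100, 0, 0, 300, 100, 100, 100, 100]: two dead days, then a catch-up spike.
```

Real digital-twin platforms model far richer physics and logistics, but the planning loop is the same: vary one input, rerun, and see where the bottleneck bites.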
For instance, an electric scooter company could simulate operational issues stemming from supplier-side bottlenecks (like a shortage of a particular component) and develop strategies to tackle those problems well in advance. “Solving highly complex and mission-critical operational improvement challenges are areas where metaverses can offer superior solutions. The convergence of artificial intelligence, digital twin, and 3D design creates groundbreaking use cases,” the Deloitte spokesperson said. “Deloitte helped some of the world’s busiest airports optimize airline landing and taking off through 3D digital twin applications,” they noted. A digital twin has also been created for the Hong Kong International Airport to help authorities streamline the design review for new construction projects and deploy resources effectively for maintenance. More metaverse to come We don’t know what exactly the metaverse will look like in the next five or six years, but these applications might be a good place to start. As systems evolve, the use cases and applications within the enterprise could also become more comprehensive. Tuong Huy Nguyen, a senior principal analyst at Gartner, notes these applications are based on what we know today, but in the future, it will all come down to how enterprises are able to utilize the key features of the metaverse. “How would persistent data be useful for an enterprise? For example, maybe a digital twin overlay onto a machine for employees to view and interact with (for example for repair and maintenance). Or how would collaborative and interoperable content be used? Maybe to allow multiple parties (employees, customers, vendors, etc) to crowdsource information and knowledge and interact with the digital twin, or other data available to the enterprise. Or how would decentralization be useful? For example, having locally stored and processed information to improve on speed and latency,” he said. 
“This is how organizations should be evaluating the metaverse – in terms of these (persistent, decentralized, collaborative, interoperable) aspects and how they impact their business,” he said. Read more from this VB Special Report: The metaverse: Where we are and where we’re headed Why the metaverse must be open but regulated How the metaverse will let you simulate everything 7 ways the metaverse will change the enterprise Identity and authentication in the metaverse Understanding the 7 layers of the metaverse Can this triple-A game usher in the promise of the metaverse? (sponsored by Star Atlas) How the metaverse could transform upskilling in the enterprise Why the fate of the metaverse could hang on its security Gaming will lead us to the metaverse The potential environmental harms of the growing metaverse VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,633
2,022
"Can this triple-A game usher in the promise of the metaverse? | VentureBeat"
"https://venturebeat.com/2022/01/26/can-this-triple-a-game-usher-in-the-promise-of-the-metaverse"
"VB Lab Insights Can this triple-A game usher in the promise of the metaverse? This article is part of the VB Special Issue — The metaverse: How close are we? A fully realized metaverse is, maybe first and foremost, a digital representation of the physical world, with all of the real world’s essential trappings, like socializing, commerce, and, yes, spaceship battles. And blockchain games look like they can step up to the challenge of making the vision of the metaverse a reality. For one, blockchain functionality means these games have true ownership at their heart. Along with that comes a play-to-earn model that’s powered in part by NFTs, which, for good or for ill, are shaking out as a viable way to establish and verify ownership in a way that bridges the virtual and real worlds. Games like Star Atlas are aiming to add the other elements required to build a real, live metaverse: an immersive VR environment, the interconnectivity of social media, and a persistent online world. 
Star Atlas CEO Michael Wagner calls the property a next-gen gaming metaverse, offering both traditional core game and blockchain mechanics. A triple-A-quality title, the game is centered on space exploration, territory control, and political domination. Players still have to work in this metaverse if they want to participate, so before them lies a series of career paths, activities, and industries they can tackle, with the hope of reaping big rewards and maybe bigger starships. It’s also the spiritual successor to Eve Online, the space-based MMORPG that launched nearly two decades ago. The famously difficult, often toxic, frequently tedious game wasn’t the first to introduce the idea of an online virtual world, but it pioneered the idea of a single shared virtual world that any player could impact (usually violently) with their actions. It also introduced the idea of object permanence, which breaks hearts but makes a game feel undeniably real. Players can spend massive amounts of in-game currency to build enormous spaceships to explore in – or more often go to war in. But when a ship hits zero hit points, it’s gone forever. The makers of Star Atlas, while carrying on the storied history of MMORPGs, are looking toward the future of gaming, Wagner says. The metaverse they’re imagining is one that opens up opportunities for people all over the world to become virtual entrepreneurs in any number of ways, including content creation and experiences to sell to friends and enemies alike. “We’re building this out as both a gaming platform and a metaverse,” he explains. 
“The metaverse component of it is the true platform, with gaming being an application that lives on top of or within the metaverse itself.” How Star Atlas fits into the metaverse timeline Wagner describes Star Atlas’s three unique value propositions as follows: 1) an extraordinary gameplay experience, 2) the play-to-win and play-to-earn possibilities, and 3) the opportunity for future creators to contribute to the ultimate vision of the metaverse. By building on blockchain with Solana as its core protocol, they’re hoping to develop a robust and sustainable open economy that anybody with a wallet can participate in. “We want to provide opportunities for people to build content around Star Atlas that isn’t directed solely from us, but rather opening up the sandbox and the tool set,” he says. “We want to enable individuals to create experiences that attract the users from our world into theirs, and at some point become essentially synonymous as a unified, singular universe where all of these different planets and experiences can exist.” Earnings through the game, facilitated through the distribution of one of two native digital tokens, could transcend the Star Atlas universe, or games entirely, and be used across the entire crypto ecosystem, converted into domestic currency, or enable users to participate in various decentralized finance primitives such as liquidity pools or automated market-making. “The idea is that players can not only generate wealth through earned income, but then they can compound that through exposure to all of these new tools and tech options that are available because of decentralized finance,” he says. 
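The automated market-making mentioned here usually means constant-product liquidity pools of the kind popularized by Uniswap-style exchanges. The sketch below shows that general x·y = k mechanism, not Star Atlas's or Serum's actual implementation, and the reserve numbers and fee are purely illustrative:

```python
def swap(reserve_a, reserve_b, amount_in, fee=0.003):
    """Constant-product AMM: price moves so reserve_a * reserve_b stays constant.

    Returns (tokens of B paid out, new reserve_a, new reserve_b).
    """
    effective_in = amount_in * (1 - fee)   # fee stays in the pool for liquidity providers
    k = reserve_a * reserve_b
    new_a = reserve_a + effective_in
    new_b = k / new_a
    return reserve_b - new_b, new_a, new_b

# Swapping 10,000 of token A into a 1M/1M pool (fee set to zero for clarity):
out, a, b = swap(1_000_000, 1_000_000, 10_000, fee=0.0)
# `out` comes back a little under 10,000 — the price slips as the pool rebalances.
```

The slippage is the point of the design: large trades move the price against the trader, which keeps the pool solvent without an order book or a central market maker.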
Creating player self-governance with blockchain Star Atlas’s vision of an open metaverse not only includes monetization, commercialization, and socialization but also, ultimately, full decentralization, wherein the developers are kicked off the throne, losing sole possession of the Star Atlas universe, allowing the players to run free and unfettered, deciding who lives and who dies. “We’re legitimately creating a product that we believe will be a public utility that anybody can participate in, with or without our permission,” Wagner says. That will happen with the game’s decentralized autonomous organization (DAO) and its own POLIS token. It’s literally pay-to-play: Through ownership and staking of the token, users buy themselves some rights and governance across the platform, which includes the ability to direct the creative team on feature design and development as well as make modifications to things like the economy itself and decisions around community governance. “This, I think, is extremely transformative, since it’s so different from the way that most businesses operate — most businesses create IP and then want to retain that IP,” Wagner says. “We just have a unique model through which we’re able to retain value through our own ownership of that POLIS token.” A look at the Star Atlas economy “First and foremost, I think of the economy as the bedrock for everything we’re doing,” Wagner says. “It’s the foundation for the platform itself.” Star Atlas uses ATLAS and POLIS as its blockchain-based in-game currencies, but also offers decentralized NFT asset ownership, with an NFT marketplace for peer-to-peer sales of assets. 
Built on the Solana blockchain protocol with decentralized finance (DeFi) directly integrated into the game interface via Serum, it also offers a decentralized digital currency exchange and automated market making (AMM) in-game, with an on-chain governance model that’s intended to give players complete independence and the ability to decide who lives and who dies, if they’ve got enough currency. To create balance around not only gameplay mechanics but also the game’s economy, the company has appointed a chief economist with comprehensive knowledge of the way the global economy functions. Beneath this role are a head of game economy and a head of token economy, who specifically manage the crypto elements, as well as a head of monetization, who creates the unique revenue streams for the company as an entity. Virtually every activity inside the game can be considered an independent business. Space explorers create maps of territories they can sell to other players, data runners sell information, and miners and farmers create a supply chain of the materials required to craft new NFTs. Each of these jobs creates revenue streams, whether that’s in ATLAS, in components or structures, the ability to mint NFTs, or payments from other players. But each also has a cost center. A ship needs to be refueled and repaired, a crew member’s got to eat. Even the metaverse can’t escape the second law of thermodynamics, so mining equipment degrades over time and needs to be replaced, and so on. “What’s novel about us is that we’ve isolated those cost centers not as revenue to us as a studio, but rather revenue that will flow directly into the Star Atlas DAO,” Wagner says. “This is further incentive for the governance participants across the network.” Not only are these participants earning POLIS in emissions through staking to those contracts, but they’re capturing a part of the total GDP of the economy in the metaverse. 
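Stake-weighted governance of this kind is simple to model: each voter's weight on a proposal is the balance they have staked. The following is a generic sketch of that idea, not Star Atlas's actual on-chain program; the voters, balances, and proposal names are invented for illustration:

```python
from collections import defaultdict

def tally(votes, staked):
    """Token-weighted vote: each choice's total is the sum of its voters' stakes."""
    totals = defaultdict(int)
    for voter, choice in votes.items():
        totals[choice] += staked.get(voter, 0)   # unknown voters carry zero weight
    winner = max(totals, key=totals.get)
    return winner, dict(totals)

staked = {"alice": 5_000, "bob": 1_200, "carol": 300}
votes = {"alice": "fund-new-feature", "bob": "burn-fees", "carol": "burn-fees"}
winner, totals = tally(votes, staked)
# winner == "fund-new-feature": one large staker outweighs two smaller ones combined.
```

The example also shows the standard critique of token voting: because weight follows holdings, influence concentrates with the largest stakers rather than the largest number of players.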
As these expenses are incurred, they flow into a separate treasury, and not only does the DAO get to decide who lives and who dies, it can decide how to use those resources too. “We think a lot about game balance and gameplay mechanic balance, but also how to ensure that there are avenues to use proceeds that are being earned as you play the game,” he says. “If everyone was simply earning ATLAS and there was no further utility for it except to sell it on the market, then you enter into what’s prominently called Ponzi mechanics. For us, it’s important that it’s necessary to spend and hold and retain ATLAS to operate within the universe itself.” Reshaping the gaming marketplace Star Atlas is in a league of its own, with a vanishingly small likelihood of any major triple-A studios being able to throw their hats in the ring by building a metaverse or introducing NFTs, Wagner says. “I don’t see that as being significant competition for us in the near term, simply because there’s a lot of complexity to understanding how cryptocurrency works, how the technology works, how to integrate that into your game,” he says. “It radically changes the revenue models and revenue streams that can be derived from the individuals and from the players. Again, we’re looking at more of this circular, robust, sustainable economy owned by the players, and not necessarily an extractive system where all the revenue flows directly to us as a company.” But there’s also the thorny issue of regulatory uncertainty around operating in cryptocurrency, especially in the U.S., with no real legislation or policy framework in place to guide the ultimate outcome of the cryptocurrency surge. Unsurprisingly, that uncertainty will lead to a lot of apprehension from traditional game studios about incorporating many of the same elements Star Atlas is bravely promising to bring to the market. The reception has been overwhelmingly positive, Wagner says, and their growth has been explosive. 
Virtually all of the inventory of assets they’ve released to the market has already been purchased. They have nearly 300,000 Twitter followers and 145,000 members in their Discord channel, and thousands of people join their Town Halls. “We think that the metaverse is going to be completely transformative to the way the world operates,” Wagner says. “That’s across governments, across economics, across social interaction. We think we’ll create something that will not only be fun, but truly empowering. That’s what’s most important to me.” "
13,634
2,022
"Gaming will lead us to the metaverse | VentureBeat"
"https://venturebeat.com/2022/01/26/gaming-will-lead-us-to-the-metaverse"
"Gaming will lead us to the metaverse When CEO Mark Zuckerberg changed Facebook’s name to Meta and committed to the metaverse, he said that gaming would lead the way. That’s an interesting comment, considering that Facebook has billions of social media users, while game companies like Roblox and Microsoft (Minecraft) have amassed hundreds of millions of gamers. In fact, the entire game industry’s reach is about as big as Facebook’s alone, at around three billion people. The number of users isn’t the only thing that matters, though, when it comes to building a metaverse that people actually want to go to. It’s challenging to predict who will win, as the incumbents in the space — game companies — may not have an advantage if another party enters the space and creates an ambitious next-generation metaverse. 
But it is important to note that Zuckerberg isn’t alone in believing in gaming. “Gaming is the most dynamic and exciting category in entertainment across all platforms today and will play a key role in the development of metaverse platforms,” Microsoft chairman and CEO Satya Nadella said in a statement. Nadella’s statement came alongside Microsoft’s announcement last week that it would purchase game publisher Activision Blizzard for $68.7 billion. One thing that the game companies are good at is keeping audiences engaged for a long time, from many hours a day to many years. Randy Pitchford, CEO of Gearbox Entertainment, calls this the strategy of “games as a hobby,” where game developers can craft work that makes lifelong fans of specific games. This in-depth engagement is vital to the metaverse because the focus is about getting people to come back, every day, and getting people to have the desire to stay in your realm of the metaverse. App Annie studied gamer behavior during the pandemic on mobile devices, and it found that people spent more hours in the day on their phones and more hours playing games than they did before the pandemic. And while social media commands more hours of time, games generate more revenue from a smaller but dedicated audience. Leaders in the space include Epic Games, Unity, Roblox, and the new combination (deal pending) of Microsoft and Activision Blizzard. All of them have tools that enable game developers to create engaging content for gamers, and they all come at the challenge of building the metaverse from different directions. Roblox learned early on that a platform for user-generated content could turn its players into creators, and that enabled it to get far more content made by millions of people on its platform. 
While some of it is amateur, the best content generates millions of dollars in revenue for player-creators and billions of play sessions. Roblox and Epic Games also appreciate the fact that the universe of game players still has its limitations. Those who are not comfortable with game controllers or the steep learning curve for skillful play may be too intimidated to try. They also might prefer more passive entertainment such as music or videos. That’s why Epic Games and Roblox have branched out to get more users beyond gamers. Those companies as well as Improbable have staged massive concerts with music stars inside gaming platforms. Adding music as an adjacent entertainment market is appealing to many gamers, but it also expands the appetite for the gaming platform to non-gamers as well. Rival companies and industries have noticed. Reed Hastings, CEO of Netflix, noted that the biggest competition for its movies and TV shows is time, and games like Epic’s Fortnite are commanding a lot of time. Who wins in this battle for attention spans? A lot of it depends on what users will like. If users want ultra-realistic characters and environments, that will favor one type of company. If people prefer fanciful cartoon graphics, it would favor someone else. If they want to immerse themselves in virtual reality, Facebook will have an advantage. “Gaming is definitely a core part of it. I mean, pulling games out of it for a second, I don’t think any of this could exist without the game engine. And gaming created the game engine, right?” said Jason Rubin, head of content for Meta, in a session at one of VB’s recent events. “For a long time, the game engine was the only major tool for real-time 3D graphics. Obviously, there was simulation and other things universities were doing, but for consumers for the most part, it was gaming.” He noted that game engines are sometimes used to make movies as well. 
So, the game engine is fundamental to what everyone is building for the metaverse. He noted that much of mobile phone revenue is generated by gaming, and gaming is generating a lot of revenue for VR headsets. “People like to play. We were born as children and we love to play. We never give it up,” Rubin said. “And honestly, I think more gaming would make a happier world. I think play is a good thing. And so gaming becomes a great way to get people to spend a significant amount of time together. And it gives people a way to build an identity system and do other things and invest in things together. And then I think adjacencies start forming because the people are there, and the possibilities are there.” The Omniverse Above: A scene from an Ericsson Omniverse environment. But there is also room for cooperation. Marc Petit, general manager of Unreal Engine, said in a panel that his company is making efforts to provide its game engine tools to non-game industries, such as moviemakers, commercial production firms, and industrial designers such as carmakers. The interests of these companies intersect in the creation of 3D animated assets. These companies likely don’t want to reinvent the wheel. For instance, if Porsche is using something like game tools to design cars, then the game makers don’t have to design cars for their own purposes. The companies could trade assets, particularly if there are common standards. That’s why Nvidia came up with the Omniverse, a simulation environment originally designed for testing robots. Nvidia backed Pixar’s Universal Scene Description (USD) 3D data standard, which Pixar developed for its movies and gave to its animation partners out of weariness of reinventing 3D tools with every new tech generation. With USD, companies can re-use assets created by others and trade them around (soon) in a marketplace. 
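Asset reuse in USD works through layering and references: one layer can pull in a prim defined in another studio's file, so a carmaker's model can appear inside a game studio's scene without being copied. The snippet below writes two minimal `.usda` text layers using only the standard library; the file names are hypothetical, the syntax follows Pixar's USD text format, and actually composing the scene would require the USD runtime (`pxr`), which isn't shown:

```python
from pathlib import Path

# An "asset" layer a carmaker might publish.
car_asset = """#usda 1.0
def Xform "Car"
{
    def Mesh "Body"
    {
    }
}
"""

# A "scene" layer a game studio might author, referencing the carmaker's Car prim.
scene = """#usda 1.0
def "HeroCar" (
    prepend references = @car_asset.usda@</Car>
)
{
}
"""

Path("car_asset.usda").write_text(car_asset)
Path("scene.usda").write_text(scene)
```

The `@car_asset.usda@</Car>` arc is what makes trading assets practical: the scene stays a lightweight pointer, and if the carmaker publishes an updated `car_asset.usda`, every referencing scene picks it up on the next compose.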
Nvidia has drummed up more than 700 enterprise customers for the Omniverse, and it is giving individual versions away free. Richard Kerris, vice president of Omniverse development, said in a panel that the would-be metaverse makers in games and industrial design have different aims. Game makers care about anything that might be viewed on a screen, meaning the exterior of objects such as cars. But they don’t care as much about designing the complete car, and instead they focus on making sure it moves extremely fast through a computer-designed landscape. By contrast, the designers of automobiles have to care about animating every single part of the car, and they’re not as concerned with making the environment look pretty or running the cars at extreme speeds. In this way, the cars designed by game makers aren’t interchangeable with those designed by carmakers (though Porsche did design a car to run only in Sony’s Gran Turismo PlayStation 5 game). Above: BMW Group is using Nvidia’s Omniverse to build a digital twin factory that will mirror a real-world place. It’s interesting to see who gamers think will win. American gamers see the gaming industry and traditional “big tech” corporations as equally likely to come out ahead, while in the U.K., 29% think that the odds favor the gaming industry, compared to only 18% for large technology companies such as Apple and Microsoft, according to a survey of 2,000 gamers and 800 game developers by Improbable. The Omniverse is trying to standardize things so that object trading becomes a lot more popular and so designers can focus on the things they need to get done, rather than reinvent something created by someone else. In this way, the Omniverse could ultimately enable any party to create assets for a metaverse and launch such a universe of worlds. 
Game developers are the leading candidates to come up with the metaverse because they’re the ones that are skillful at duplicating reality and creating simulations so engaging that people come back to them over and over again. Kim Libreri, chief technology officer of Epic Games, said in a session that we’re very close to being able to reproduce reality in 3D computer animations that could run on a game console or game computer. Gaming’s best competition on this front for creating the reality simulation is Hollywood, whose special effects artists have long been able to create lifelike humans and realities that you can’t tell apart from real life. But Hollywood’s creations aren’t real time. They’re pre-rendered and played back to us as linear films. By contrast, games run in real time and they’re interactive. But eventually, it’s quite possible that the non-game companies could bring out bigger guns. BMW is busy building a digital twin of a factory in the Omniverse, and when it’s done, it will build the factory in the physical world. And Nvidia recently promised that it would create a digital twin of the Earth. It would marshal the talent of the AI experts of the world and the graphics tools of the Omniverse to create a simulation of the Earth with meter-level accuracy, said Jensen Huang, CEO of Nvidia, in a recent speech. Running on the best supercomputers, this digital twin would enable experts to create a climate change model that could predict the Earth’s fate for decades to come. Above: Jensen Huang, CEO of Nvidia, introduces Omniverse Avatar. Such a considerable effort could probably dwarf the efforts of any individual game company. These companies will square off against big tech firms and non-games application makers in the race to build the metaverse. But the question is whether any one company or sector can truly populate a metaverse with enough content. 
The hope is that a combination of game design, user-generated content, and AI will be the way to flesh out the metaverse. That’s the plan for Brendan Greene, director of PlayerUnknown’s Productions. Greene is the game designer who pioneered PlayerUnknown’s Battlegrounds, the hugely successful battle royale game that has sold more than 70 million copies. Greene recently founded his own studio to build Prologue, a single-player game world that is 64 kilometers on a side. Over several years, he hopes to build Project Artemis, a planet-scale game world. Of course, if the metaverse comes from the game developers or Hollywood, it will probably be dystopian. Those who want the metaverse to mirror more of an ordinary world may need to be prepared to hijack the gamer’s metaverse and refashion it as they like. That way, it becomes a digital twin of the world as we know it now and mirrors all the things we can do in the physical world — but perhaps isn’t quite as dreadful. Read more from this VB Special Report : The metaverse: Where we are and where we’re headed Why the metaverse must be open but regulated How the metaverse will let you simulate everything 7 ways the metaverse will change the enterprise Identity and authentication in the metaverse Understanding the 7 layers of the metaverse Can this triple-A game usher in the promise of the metaverse? (sponsored by Star Atlas) How the metaverse could transform upskilling in the enterprise Why the fate of the metaverse could hang on its security Gaming will lead us to the metaverse The potential environmental harms of the growing metaverse GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. 
Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,635
2,022
"The environmental impact of the metaverse | VentureBeat"
"https://venturebeat.com/2022/01/26/the-environmental-impact-of-the-metaverse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The environmental impact of the metaverse Share on Facebook Share on X Share on LinkedIn This article is part of a VB special issue. Read the full series here: The metaverse - How close are we? Some companies believe that the metaverse — a yet-to-be-realized, internet-like series of connected worlds — has enormous potential in the enterprise. For example, it could be used to improve work productivity by allowing employees to train or collaborate in workplace-like virtual environments. Or it could host home and office tours, a boon for a real estate market contending with pandemic travel restrictions. One source anticipates that the metaverse will be a $10 trillion to $30 trillion opportunity within the next 10 to 15 years. But this projection ignores a massive potential downside of the technology: its environmental impact. The datacenters required to run the metaverse’s persistent worlds aren’t cheap from an energy consumption standpoint. 
And at least one vendor, Intel, figures that a 1,000-fold increase over our current collective compute capacity is needed to power the metaverse — which could grow its carbon footprint even further. Energy-hungry technologies The technologies that’ll enable the metaverse of the future vary depending on whom you ask, but it’s commonly understood that virtual reality (VR), AI, and blockchain will play roles. Each can consume large amounts of power. For example, a 2020 Greening The Beast study estimated that high-end gamers, who have the hardware required for state-of-the-art VR, will spend as much as $2,200 over the course of five years on electricity and pump as much as 2,000 pounds of carbon emissions into the atmosphere each year. The servers delivering the various elements of the metaverse are costlier still. While it’s difficult to know the environmental impact of every datacenter — many don’t report key metrics — one study posited that datacenters were responsible for about 2% of global greenhouse gas emissions in 2015, which is the same amount generated by the entire aviation industry. Notably, 2015 was before the advent of cloud gaming platforms like Google’s Stadia and Microsoft’s Xbox Cloud Gaming, which studies have shown to be highly compute-resource-intensive. In a 2016 paper, researchers at the Lawrence Berkeley National Laboratory found that the additional energy used in cloud gaming can cause annual electricity use to rise 40% to 60% for desktops, 120% to 300% for laptops, 30% to 200% for consoles, and 130% to 260% for streaming devices. The findings were mirrored in a more recent study by researchers at the U.K.’s University of Bristol. 
It found that, if just 30% of gamers using 720p or 1080p devices were to transition to cloud gaming by 2030, it’d cause a 29.9% increase in carbon emissions. If 90% of gamers moved to the cloud, it’d increase gaming’s overall carbon emissions by 112%. To mitigate latency and the other issues that come along with cloud connectivity, the type of VR rendering required for the metaverse is mostly done locally — at least today. But several companies are developing streaming platforms optimized for VR, akin to Nvidia’s GeForce Now. Google has endorsed Nvidia’s CloudXR technology running on its datacenters, which performs environment rendering in the cloud and streams it to any compatible VR headset. And Meta (formerly Facebook) CEO Mark Zuckerberg has said that he envisions Meta’s cloud gaming service being “useful for VR” down the line. AI Given the envisioned scale of the metaverse, AI has been proposed as a way to quickly fill the virtual worlds within it with content. Among others, Metaphysic — the startup behind the viral series of Tom Cruise deepfakes — aims to apply its technology to generate avatars, plants, animals, and inanimate objects that can be remixed by users to create custom experiences. “There will be markets for trading AI models to generate new content, much like users can purchase custom in-game items today. The combination of AI-generated content and virtual reality will allow for total immersion in alternative realities,” Metaphysic writes in a blog post. But these types of generative AI systems require a lot of compute power to train and run. Take OpenAI’s DALL-E, for example, which can craft an image given a text prompt. While OpenAI hasn’t detailed the compute requirements for DALL-E, the system is a scaled-down version of the text-writing AI system GPT-3 trained on pairs of text and images from the internet. 
GPT-3 used 1,287 megawatt-hours of electricity and produced 552 metric tons of carbon dioxide emissions during training — the same amount emitted by 100 average homes’ electricity usage over a year. Nvidia’s StyleGAN3, which can be used to generate portraits of people that don’t exist, is similarly expensive to train. Nvidia reports that training it consumed roughly 225 megawatt-hours of energy, about a sixth of GPT-3’s total. Aditya Ramesh, a researcher working on the DALL-E team, conceded in a recent interview with VentureBeat that the training process for generative models like DALL-E is “always going to be pretty long and relatively expensive” — especially if the goal is a single model with a diverse set of capabilities. Blockchain Some experts believe that blockchain technologies will — and perhaps already have — become essential to the metaverse. Among other applications, blockchain enables non-fungible tokens (NFTs) — unique pieces of data associated with photos, videos, audio, and other types of media. NFTs come in the form of avatars, artwork, music, digital creatures, and HTML code, as well as plots of land in virtual worlds like Decentraland and The Sandbox. In one vision of the metaverse, users could use NFT avatars as a secure, authenticatable way to enter and hop between different worlds. Because blockchain technology makes NFTs practically impossible to forge, they could provide greater security than that afforded a traditional account. But NFTs are notoriously expensive to produce. The reason is that the mechanism used to “mint” most NFTs — proof-of-work — relies on a computationally costly system called mining. Most NFT minters opt for the Ethereum blockchain, which requires computers — “miners” — to race to guess the answer to an increasingly challenging math problem. The miner that correctly guesses the answer wins a reward in Ether, a cryptocurrency, and a new math problem is generated. 
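The guessing game described above can be sketched in a few lines. This is a toy illustration of hash-based proof-of-work, not Ethereum's actual mining algorithm; the block data and difficulty below are made up, and real networks tune difficulty so that finding a valid nonce takes vast amounts of electricity.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash has `difficulty` leading zero hex digits.

    Each extra zero digit multiplies the expected number of guesses by 16,
    which is why proof-of-work gets more power-hungry as difficulty rises.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# A toy difficulty of 4 hex digits (~65,000 expected guesses); real chains
# use targets that take quadrillions of guesses network-wide.
nonce = mine("mint NFT #42", 4)
digest = hashlib.sha256(f"mint NFT #42{nonce}".encode()).hexdigest()
print(digest.startswith("0000"))
```

The energy cost comes entirely from the brute-force loop: there is no shortcut to a valid nonce, so miners simply burn compute until one of them gets lucky.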
One Cambridge University study suggested that global bitcoin mining, which similarly relies on proof-of-work, consumes more electricity annually than the entire country of Argentina. Potential solutions It’s worth noting that several companies have pledged to take steps to reduce the impact of their metaverse-powering datacenters on the environment. For example, Google has committed to operating on 24/7 carbon-free energy in all its datacenters by 2030. Microsoft intends to be “carbon negative” by 2030, which includes a plan to stop using diesel fuel in its datacenter generators by 2030. And Amazon Web Services (AWS) aims to power its operations with 100% renewable energy by 2025. Advances in technology, too, could decrease the footprints of datacenters in the years to come. A 2020 analysis in the journal Joule found that, while the server, storage, and network workloads hosted by cloud datacenters increased 2,600% from 2010 to 2018, energy consumption for all datacenters rose less than 10%. The coauthors attribute the divergence to a shift of workloads to bigger, more efficient server hardware. On the local compute level, a range of approaches could be taken to reduce energy usage. A 2019 study in the Computer Games Journal found an average 13% energy savings potential from improving power supplies in gaming PCs. Underclocking, or reducing the speed of certain hardware components, subsequently reduced power use by up to 25%, the researchers said. Software innovations could lead to efficiency gains, as well. For example, in recent years, foveated rendering has come into wider use, enabling VR engines to render fewer pixels in a headset wearer’s periphery while maintaining a high resolution in the center of vision. 
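A back-of-envelope sketch shows why rendering fewer peripheral pixels saves work. All of the numbers below (resolution, foveal fraction, periphery shading rate) are illustrative assumptions, not measurements from Tobii or any other vendor.

```python
# Back-of-envelope: shade the foveal region at full rate and the
# periphery at a reduced rate. Every number here is an illustrative
# assumption, not a vendor measurement.

width, height = 2000, 2000          # per-eye render target (assumed)
total_px = width * height

foveal_fraction = 0.15              # share of pixels near the gaze point (assumed)
periphery_rate = 0.5                # periphery shaded at half rate (assumed)

shaded = (total_px * foveal_fraction
          + total_px * (1 - foveal_fraction) * periphery_rate)
savings = 1 - shaded / total_px
print(f"{savings:.0%} of per-pixel shading work avoided")
```

The actual savings depend on how aggressively the periphery is degraded and how well eye tracking keeps the full-resolution region under the gaze, which is why published figures vary.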
Tobii, a provider of eye-tracking technologies, including products that support foveated rendering, estimates that foveated rendering can reduce a key GPU workload by an average of 16%. On the blockchain side, energy-efficient alternatives to proof-of-work are gaining attention. The next generation of Ethereum will move to proof-of-stake, which will allow cryptocurrency owners to stake their assets as collateral in order to validate transactions by consensus on the network. According to the Ethereum Foundation, the organization developing standards for the Ethereum blockchain, proof-of-stake will use roughly 99.95% less energy than the current standard. Game developer Ubisoft plans to use a proof-of-stake blockchain, Tezos, for its forthcoming Quartz NFT platform. Ubisoft blockchain technical director Didier Genevois has been quoted as saying that one transaction on the Tezos network uses the same amount of energy as streaming 30 seconds of video, while the previous generation of blockchain networks can consume the same energy required for one year of non-stop streaming. Beyond proof-of-stake, startups like StarkWare claim to have developed techniques that can reduce the carbon impact of Ethereum mining and transactions by packing more information into each block of the blockchain. “Layer 2” solutions like these let users make transactions outside the blockchain and then batch-process them in one big transaction, saving costs. But as the Motley Fool noted in a recent piece, market demand will ultimately dictate the uptake of these more environmentally friendly technologies. “If the market demands a more environmentally responsible way to buy, sell, and collect NFTs, the industry will deliver … But no other blockchain that supports smart contracts necessary for NFTs has the reliability and reputation of Ethereum,” The Motley Fool’s Adam Levy writes. “So, bigger NFTs … may still want to use the Ethereum blockchain. 
To that end, it’s on the Ethereum network to migrate to Ethereum 2.0 or develop reliable layer 2 solutions for NFTs on the Ethereum blockchain.” Looking ahead The metaverse, even in its proof-of-concept phase, could contribute substantially to emissions. Truly detailed virtual environments will require more powerful — and potentially environmentally ‘unfriendly’ — infrastructure. A combination of hardware, software, and protocol improvements could help combat the worst of the metaverse’s effects. Intel’s Raja Koduri believes that algorithmic solutions could lead to further gains in compute efficiency as well. But it’s not at all clear whether adopters of metaverse technologies will prioritize efficiency over scale — barring regulations that force change, like the rules under consideration by the European Union. As evidenced by Bitcoin and other early blockchain protocols, growth sometimes comes first, creating problems that the community must scramble to reactively fix — if they’re sufficiently motivated to fix them in the first place. It wasn’t until relatively recently that the AI industry started taking a hard look at the environmental impacts of its increasingly large systems. The same may be true for companies investing in the metaverse — unless users demand better. Of course, if the metaverse someday significantly reduces the need for physical offices and commutes, the benefits could be worth the trade-offs. In the U.S., commercial buildings consume 35% of all of the country’s electricity and generate 826 million metric tons of carbon dioxide emissions each year. Commuting to those buildings takes the average American just under one hour each day and 32 miles by gas-powered car, equating to 3.2 tons of CO2 per person annually. It’s not just car commutes that the metaverse promises to reduce — or eliminate. Flights are outsize polluters, responsible for about 11% of all transportation-related emissions in the U.S. 
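The commuting figure above can be sanity-checked with simple arithmetic. The 32 miles per day comes from the text; the number of workdays and the per-mile emission factor (roughly 400 g of CO2 per mile, an often-cited U.S. passenger-car average) are assumptions.

```python
# Rough check of the commute figure: 32 miles/day is from the text;
# workdays per year and grams of CO2 per mile are assumptions
# (~400 g/mile is a frequently cited U.S. passenger-car average).

miles_per_day = 32
workdays_per_year = 250
g_co2_per_mile = 400

tons = miles_per_day * workdays_per_year * g_co2_per_mile / 1_000_000
print(f"≈{tons:.1f} metric tons of CO2 per commuter per year")
```

The product lands almost exactly on the article's 3.2-ton figure, which suggests that figure assumes a roughly 250-day commuting year and an average gas-powered car.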
A single round-trip flight between New York and California generates roughly 20% of the greenhouse gases that a car emits over an entire year. The wastefulness of paper — which the metaverse would digitize — must be considered, too. It’s estimated that U.S. offices use 12.1 trillion sheets of paper a year; paper accounts for 25% of landfill and 33% of municipal waste. One ton of copy paper — 400 reams — requires 11,341 kilowatt-hours of energy to produce (the same amount used by an average household in 10 months) and generates 5,869 pounds of greenhouse gases (the equivalent of six months of car exhaust). But the calculus isn’t that simple. For example, research from WSP U.K. found that remote work in the U.K. may only be more environmentally friendly in the summer, because in winter many individual workers’ homes must be heated instead of one office. This might not hold true in regions that derive energy from more sustainable sources, like Iceland, which uses a significant amount of geothermal power. This much is clear: the metaverse will be costly. But more research must be done on whether the costs can — and will — be offset, and to what degree the impact will be distributed across geographies. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. 
Discover our Briefings. "
13,636
2,022
"Understanding the 7 layers of the metaverse | VentureBeat"
"https://venturebeat.com/2022/01/26/understanding-the-7-layers-of-the-metaverse"
"Understanding the 7 layers of the metaverse This article is part of a VB special issue. Read the full series here: The metaverse - How close are we? When it comes to describing the metaverse, definitions and opinions abound. And while it’s difficult to put something as vast, conceptual, and, frankly, still emerging as the metaverse into quantifiable terms, Jon Radoff, entrepreneur, author and game designer, breaks it down logically and thoroughly in Measuring the Metaverse. He moves up the value chain from infrastructure at the bottom to experience at the top, stopping at human interface, decentralization, spatial computing, creator economy, and discovery along the way. In Radoff’s view of the metaverse, a common framework is necessary. 
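Read bottom-up, the stack described above amounts to a simple ordered list. The ordering is taken directly from the paragraph, not from any code Radoff published; the list is just a notational convenience.

```python
# Radoff's seven layers, from the bottom (closest to hardware) to the
# top (closest to the user), as enumerated in the text above.
layers = [
    "infrastructure",
    "human interface",
    "decentralization",
    "spatial computing",
    "creator economy",
    "discovery",
    "experience",
]
print(" -> ".join(layers))
```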
He writes, “And while there will be many proprietary (and very fun) theme parks in the metaverse, I’m even more excited by the opportunity in the Switzerlands: a metaverse powered by a robust creator-economy enabled through decentralization.” This, of course, isn’t the first seven-layer model to lay out a critical framework. The IT world has long adhered to the seven layers of the OSI Model to organize networking functions into a universal set of rules and requirements to support interoperability among different products and software. Perhaps Radoff’s seven-layer model will become a similar conceptual framework for the metaverse. "
13,637
2,022
"Why the fate of the metaverse could hang on its security | VentureBeat"
"https://venturebeat.com/2022/01/26/why-the-fate-of-the-metaverse-could-hang-on-its-security"
"Why the fate of the metaverse could hang on its security This article is part of a VB special issue. Read the full series here: The metaverse – How close are we? Cyberattacks old and new will inevitably find their way into the metaverse, highlighting a requirement for immersive virtual worlds to provide strong security from their inception. Securing the metaverse will present new challenges in comparison to existing digital platforms, however, according to cybersecurity executives and researchers. Monitoring the metaverse and detecting attacks on these new platforms will “be more complex” than on current platforms, according to Vasu Jakkal, corporate vice president of security, compliance, and identity at Microsoft. The tech giant is a leading proponent of the metaverse and has begun developing immersive virtual platforms for both enterprises and consumers. “With the metaverse, you’re going to have an explosion of devices. 
You’re going to have an explosion of infrastructure. You’re going to have an explosion of apps and data,” Jakkal told VentureBeat. “And so it’s just increased your attack surface by an order of magnitude.” If metaverse platforms fall short on security and privacy, they are almost certain to experience a false start — or worse — as the issues quickly turn into a major barrier to adoption, experts said. On the other hand, metaverse platforms that do focus on enabling security and privacy upfront could find greater traction as a result. “It has a lot to do with brand and with trust,” said Caroline Wong, former senior manager for security at Zynga and now chief strategy officer at cyber firm Cobalt. “If a consumer has a choice of Platform A — which they believe to be secure and private and doing all the right things — and Platform B, which they think will probably lead to getting hacked if they join, then the choice is clear.” While the coming virtual world will no doubt enable “beautiful experiences” for users, acknowledging and addressing the cybersecurity challenge will be essential for the metaverse to succeed, Jakkal said. “My wish list would be, let’s not think of security as an afterthought. Security needs to be designed into the metaverse [from the start],” she said. “We have one chance of getting this right.” Metaverse knowns and unknowns It’s not yet apparent exactly what the attack surface will look like in the metaverse. But there’s still a lot we can know about the potential security risks of the coming virtual world, experts told VentureBeat. Existing issues around web, application, and identity security are expected to crop up quickly on metaverse platforms — as attackers seize opportunities for fraud, theft, and disruption. 
Meanwhile, malicious cyber activity that’s only possible in an immersive virtual setting — such as invisible eavesdropping and manipulating users into actual physical harm — has been pinpointed by researchers as a possible threat in the metaverse as well. Kavya Pearlman, formerly the information security director for Linden Lab and its Second Life online virtual world, said that “extended reality” platforms such as the forthcoming metaverse are a different story when it comes to cybersecurity. Pearlman has been working to raise awareness about the issue as the founder and CEO of the Extended Reality Safety Initiative (XRSI), a nonprofit focused on privacy, security, and safety in virtual worlds. “You can use [this technology] for the greatest good. But you can also use it to really hurt humanity,” Pearlman said. For 2D digital platforms, she said, “The attack surface has remained limited to nodes, networks, and servers.” But with the metaverse, “The attack surface is now our brain.” Securing virtual worlds Platforms such as Second Life and virtual reality (VR) headsets have existed for years, while online games such as Fortnite and Roblox have turned into major virtual universes of their own. But for the metaverse, 2021 served as a turning point. Tech industry giants including Microsoft, Nvidia, and of course, Facebook — which changed its name to Meta — threw their weight behind the concept that year. Suddenly, the idea that an immersive virtual experience really could be the successor to the internet has become more than just a sci-fi notion. The visions for the metaverse do vary, and it’s not yet clear how interoperable the different virtual universes might be with each other. But even with the unknowns, the time to start grappling with the cybersecurity implications of the metaverse is now, a number of experts told VentureBeat. And this effort should begin with the risks that can already be anticipated. 
Josh Yavor, formerly the head of corporate security at Facebook’s Oculus virtual reality business, said the most basic thing to realize about security for the metaverse is that it must start with addressing the existing problems of the current digital landscape. “None of those problems go away,” said Yavor, currently chief information security officer at cyber firm Tessian. “There are new problems, perhaps. But we don’t escape the current or past problems just by going into the metaverse. Those problems come with us, so we have to solve for them.” Because the metaverse has the potential to support all manner of economic activity, opportunistic attackers are sure to follow the money into it. It will no doubt attract threat actors ranging from standard fraudsters, to cryptocurrency and virtual goods thieves, to financially motivated ransomware operators, cybersecurity experts say. And just like on the internet of today, social engineering aimed at acquiring sensitive information will be a certainty in the metaverse. So will impersonation attempts — which could be taken to a new level through assuming fraudulent avatars in virtual worlds. If someone acquires the credentials for your metaverse account and then assumes your avatar, that person could potentially “become you” in the metaverse in a way they never could on the internet, experts said. Focus on identity security All of which means that providing strong identity security should be a top concern for metaverse builders, said Frank Dickson, program vice president for security and trust at research firm IDC. Robust and continuous identity authentication will be critical — especially for enabling transactions in the metaverse. But this might be complicated by the immersive nature of the platforms, Dickson said. Typical forms of multifactor authentication (MFA) won’t necessarily be a good fit. “It will need to be more than just MFA. 
If you’re in the metaverse, you’re not going to want to stop, pull out your phone, and punch in a six-digit code,” he said. “So we’re going to need to make that authentication as invisible and seamless as possible — but without sacrificing security.” The fact that the metaverse will be built on a distributed computing technology, blockchain, does bring some inherent security advantages in this regard. The blockchain has increasingly been seen as an identity security solution because it can offer decentralized stores of identity data. Blockchain is far more resistant to cyberattacks than centralized infrastructure, said Tom Sego, founder and CEO at cyber firm BlastWave. But what blockchain can’t address, of course, is the human element that’s at the heart of threats such as social engineering, he noted. Attacks seeking to exploit exposed web services are expected to be another major issue that carries over into metaverse platforms. Current techniques used in zero-day attacks such as cross-site scripting, SQL injection, and web shells will be just as big of an issue with virtual applications, Sego said. Looking ahead, one of the largest metaverse security risks might involve compromised machine identities and API transactions, according to Kevin Bocek, vice president of security strategy at Venafi, which specializes in this area. But first, all manner of “old-fashioned crime” including fraud, scams, and even robberies can be expected, Bocek said. “I don’t know what muggings in the metaverse look like—but muggings will probably happen,” he said. “We’re humans, and the threats that are likely to arise first are the ones that deal with us.” Perennial threats Along with malicious attacks, metaverse builders will also have to grapple with other types of threats that tend to be perennial issues on digital platforms. For instance, how to protect younger users from adult content. “Early on, what drove the internet was pornography. 
Guess what’s probably going to show up in the metaverse?” IDC’s Dickson said. “If pornography is your thing, great. But let’s make sure that our young children don’t have access to that in the metaverse.” Meanwhile, if the history of social media can teach us anything, it’s that harassment will be another concern that must be addressed for users to feel safe in the metaverse. And the problem could be complicated by factors in the virtual environment itself. In a virtual world, the ability to “get somebody out of your face” is hampered, Yavor said. “You have no sense of bodily autonomy, and there’s no way to put your arm out and literally keep them at arm’s length. How do we solve for that?” The issue, like many others, is “one of the real-world problems that must be sufficiently solved in the metaverse for it to be something that’s an acceptable experience for people,” he said. Thus, while some threats to users in the metaverse won’t be new, many will come with added complexities and the potential for amplified impact in certain cases. Physical safety risks Researchers say a number of novel security risks in the metaverse environment can be anticipated as well, some with a potential for real-world, physical consequences. The arrival of immersive virtual environments changes things a lot for attackers, victims, and defenders, according to researchers. In the metaverse, “a cyberattack isn’t necessarily malicious code,” XRSI’s Pearlman said. “It could be an exploit that disables your safety boundary.” Ibrahim Baggili, a professor of computer science at the University of New Haven, and a board member at XRSI, is among the researchers who have spent years investigating the potential risks of extended reality platforms for users. In a nutshell, what he and his collaborators have found is that “the security and privacy risks are huge,” Baggili said in an email. “Right now, we look at screens. 
With the metaverse, the screens are so close to our eyes that it makes us feel that we are inside of it,” he said. “If we can control the world someone is in, then we can essentially control the person inside of it.” One potential form of attack, identified by Baggili and other University of New Haven researchers, is what they call the “human joystick” attack. Studied using VR systems, the researchers found that it’s possible to “control immersed users and move them to a location in physical space without their knowledge,” according to their 2019 paper on the subject. In the event of a malicious attack of this type, the “chances of physical harm are heightened,” Baggili told VentureBeat. Likewise, a related threat identified by the researchers is the “chaperone attack,” which involves modifying the boundaries of a user’s virtual environment. This could also be used to physically harm a user, the researchers have said. “The whole point of these immersive experiences is that they completely take over what you can see and what you can hear,” said Cobalt’s Wong, who has followed the work of XRSI and security researchers in the XR space. “If that is being controlled by someone, then there’s absolutely the possibility that they could trick you into falling down an actual set of stairs, walking out of an actual door, or walking into an actual fireplace.” Additional potential threats identified by the University of New Haven researchers include an “overlay attack” (which displays undesired content onto a user’s view) and a “disorientation attack” (for confusing/disorienting a user). Spying in the metaverse A different breed of attack, also with potentially serious consequences, involves invisible eavesdropping — or what the university’s researchers have dubbed the “man in the room attack.” In a VR application, the researchers found they were able to listen in on other users inside a virtual room without their knowledge or consent. 
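The chaperone attack described above amounts to tampering with the safety-boundary data a VR runtime trusts. A defensive counterpart is conceptually simple: keep a trusted copy of the play area the user originally drew, and refuse to trust a runtime boundary that has quietly shrunk. The sketch below is purely illustrative; the function names, polygon representation, and 10% tolerance are assumptions, not part of any real VR SDK:

```python
# Hypothetical sketch: detect tampering with a VR safety ("chaperone") boundary.
# The boundary is a polygon of floor points; we compare the live polygon's
# area against the baseline the user originally configured.

def polygon_area(points):
    """Shoelace formula for a simple 2D polygon given as [(x, y), ...]."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def boundary_suspicious(baseline, live, max_shrink=0.10):
    """Flag the live boundary if its area shrank more than max_shrink
    (here 10%) relative to the baseline the user originally drew."""
    base_area = polygon_area(baseline)
    live_area = polygon_area(live)
    if base_area == 0:
        return True
    return (base_area - live_area) / base_area > max_shrink

baseline = [(0, 0), (4, 0), (4, 3), (0, 3)]   # 12 m^2 play area
tampered = [(0, 0), (2, 0), (2, 1), (0, 1)]   # silently shrunk to 2 m^2
assert not boundary_suspicious(baseline, baseline)
assert boundary_suspicious(baseline, tampered)
```

A real runtime would also have to protect the stored baseline itself (for example, by signing it), since an attacker able to rewrite the live boundary may be able to rewrite the cached copy as well.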
An attacker “can be there invisibly watching your every move but also hearing you,” Baggili said. And if researchers are looking at the potential for spying in the metaverse, you can bet that state-sponsored threat actors are, too. All of these attacks are only possible through exploiting vulnerabilities, of course. But in each case, the researchers reported finding that they could do it. “The types of attacks we illustrated in our research are just so that we can showcase, as proof of concept, that these issues are real,” Baggili said. But looking ahead, he believes there’s a need for more study to determine how to develop these platforms “responsibly” from a security and safety perspective. Other researchers have focused on security issues with augmented reality (AR) technologies, which are also expected to play a key role in the metaverse. At the University of Washington, researchers Franziska Roesner and Tadayoshi Kohno wrote in a 2021 paper that forthcoming AR technologies “may explicitly interface with the body and brain, with sophisticated body-sensing and brain-machine interface technologies.” “The immersive nature of AR may create new opportunities for adversarial applications to influence a person’s thoughts, memories, and even physiology,” the researchers wrote. “While we have begun to explore the relationship between AR technologies, neuroscience, security, and privacy, much more work needs to be done to both understand the risks and to mitigate them.” Alerts in the metaverse There are other fundamental things to get right to secure the metaverse as well. One is a need for careful consideration about the design of the user interface. Many of the security and privacy measures that are relied upon in current digital environments “do not exist in a metaverse,” Tessian’s Yavor said. “In fact, the point of the metaverse is to make them not exist.” The web browser is one example. 
If your browser thinks a site you just clicked on might be malicious, it’ll warn you. But there’s no equivalent to that in VR. This raises a key question, Yavor said: In the metaverse, “how do you provide people the necessary context around the security decisions that they need to make?” And further: When is it even safe to interrupt a user who’s physically in motion to let them know they need to make a critical decision for their security? “If you suddenly get a pop-up while you’re playing Beat Saber in VR, that can throw you off balance and actually cause physical harm,” Yavor said. These are unanswered questions right now —and the technical aspects of information security are probably easier by comparison, he said. During his time at Oculus, “the much harder part was, how do we protect people without becoming too much of a custodian or an overbearing parent?” The bottom line: Every metaverse builder will need to strike a balance between implementing security measures on behalf of users and empowering users to make risk-informed decisions on their own. “Again, the technical part isn’t hard,” Yavor said. “The design and the user experience is the incredibly difficult part.” Meta’s take In the late October presentation that unveiled Meta and the company’s vision for the metaverse, CEO Mark Zuckerberg didn’t directly mention potential cybersecurity issues. But he did discuss the related issues of privacy and safety, which he said will be crucial to address as part of building the metaverse responsibly. Meta is “designing for safety and privacy and inclusion, even before the products exist,” Zuckerberg said — later calling these “fundamental building blocks” for metaverse platforms. “Everyone who’s building for the metaverse should be focused on building responsibly from the beginning,” he said. 
“This is one of the lessons I’ve internalized from the last five years — it’s that you really want to emphasize these principles from the start.” In response to questions on how it’s approaching security, privacy, and safety in the metaverse, Meta provided a statement saying that the need to address such issues is a main reason the company has begun discussing the metaverse years before its full realization. “We’re discussing it now to help ensure that any terms of use, privacy controls, or safety features are appropriate to the new technologies and effective in keeping people safe,” a Meta spokesperson said in the statement, which had previously been shared with other media outlets. “This won’t be the job of any one company alone. It will require collaboration across industry and with experts, governments, and regulators to get it right.” Microsoft’s take In early November, Microsoft CEO Satya Nadella revealed the company’s aspirations to develop an “entirely new platform layer, which is the metaverse.” Microsoft’s vision for the metaverse involves leveraging many of the company’s technologies—from its Azure cloud, to its collaboration solutions such as Teams, to its Mesh virtual environment. Likewise, Microsoft’s metaverse offerings will also leverage all of the company’s existing security technologies—from cloud security capabilities to threat protection to identity and access management, Jakkal said. “I think all those foundational core blocks are going to be important for the metaverse,” she said. Establishing trust in the security, privacy, and safety of metaverse platforms should be a top priority for all virtual world builders, Jakkal said. “And it has to be very thoughtful, very comprehensive, and from the get-go. To me, trust is going to be a bigger part of the metaverse than anything else,” she said. “Because if you don’t get that right, then we are going to have so many challenges down the line—and no one’s going to use the metaverse. 

I would not feel safe using the metaverse if [it lacked] the principles of trust.” Given the scope of the challenge, securing the metaverse will indeed require many stakeholders to work together collaboratively—particularly across the cybersecurity industry, Jakkal said. “We need to bring the security community into the metaverse,” she said. Work is underway Some industry firms are already preparing to help make the metaverse work securely. IT services and consulting firm Accenture has already begun development of key security functionality for metaverse platforms, said senior managing director David Treat. For instance, the company is developing a mechanism to enable two avatars to securely exchange “tokens,” which could be either identity credentials or units of value, without taking a headset off, he said. “We invest heavily into R&D to make sure that we know how to make these things work for our clients,” said Treat, who oversees Accenture’s tech incubation group, which includes its blockchain and extended reality businesses. This is one of the ways that the use of blockchain technology as an underpinning for the metaverse will be so powerful. As the metaverse evolves from disparate communities into an interoperable virtual world, blockchain will help to enable new, digitally native identity constructs, Treat said. “We’ll have to redesign authentication in a fully digital world,” he said. For example, if people are meeting socially, you may or may not choose to reveal who you really are. Blockchain will help make it possible to securely share, or withhold, identifying information about yourself, Treat said. New understanding Ultimately, securing the metaverse will not only present new issues, but also new complications to old issues. The metaverse will involve the creation of massive quantities of data that would need to be monitored to detect attacks and proactively protect users, according to Pearlman. 
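Monitoring those data streams for attacks ultimately reduces to classic anomaly detection over user telemetry. As a hypothetical illustration (the event types, history window, and threshold are invented, not drawn from any metaverse platform), flagging a user whose per-minute event rate suddenly departs from their own history might look like:

```python
# Hypothetical sketch: flag anomalous per-user event rates in metaverse
# telemetry (e.g., teleports or asset transfers per minute) using a
# simple mean/standard-deviation threshold over historical counts.
from statistics import mean, stdev

def is_anomalous(history, current, z_threshold=3.0):
    """Return True if `current` deviates from `history` by more than
    z_threshold standard deviations. `history` is a list of past
    per-minute event counts for one user."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

history = [4, 5, 6, 5, 4, 5, 6, 5]     # typical activity for this user
assert not is_anomalous(history, 6)     # a normal minute
assert is_anomalous(history, 60)        # a burst worth investigating
```

Production systems would of course use far richer models, but the core loop is the same: baseline each user, then surface the deviations for investigation.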
“It’s a very complex thing to tackle,” said Pearlman, whose past work has also included advising Facebook about third-party security risk. “We’re definitely going to need a new understanding for how to tackle these cyberattacks in the metaverse.” But unquestionably, it will need to be done, according to experts. “In order for us to actually have secure experiences in the metaverse, we have to be able to figure out some way to establish trust in the content, in the safety of the platform, and in the people that we’re interacting with,” Yavor said. “If we’re creating sufficiently convincing virtual reality, we need to provide the same types of outcomes for security and privacy that exist in real life.” There’s reason to be hopeful, though, Wong said. That’s in part because the industry has at least a few years to address these issues before the metaverse is ready for prime time, she said. With the metaverse, “there is absolutely the potential to create new economies, and to connect people in beautiful and meaningful ways,” Wong said. “Part of doing that successfully, I believe, will be addressing security and privacy issues.” Jakkal agreed. “I’m hopeful that the metaverse brings these beautiful experiences for our businesses and for our people,” she said. “But to do good, we need to be safe.” Read more from this VB Special Report : The metaverse: Where we are and where we’re headed Why the metaverse must be open but regulated How the metaverse will let you simulate everything 7 ways the metaverse will change the enterprise Identity and authentication in the metaverse Understanding the 7 layers of the metaverse Can this triple-A game usher in the promise of the metaverse? 
(sponsored by Star Atlas) How the metaverse could transform upskilling in the enterprise Why the fate of the metaverse could hang on its security Gaming will lead us to the metaverse The potential environmental harms of the growing metaverse VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,638
2,022
"19 ways digital twins improve data center sustainability | VentureBeat"
"https://venturebeat.com/2022/07/07/19-ways-digital-twins-improve-data-center-sustainability"
"19 ways digital twins improve data center sustainability This article is part of a VB special issue. Read the full series here: Intelligent Sustainability. As enterprises look to operate more sustainably, they are demanding more from their digital infrastructure – not only from a cost and efficiency perspective, but from an environmental one, too. Arno van Gennip, vice president of global IBX operations engineering at Equinix , told VentureBeat, “Digital twins are becoming key to improving data center efficiency and reducing our customers’ carbon footprint at every stage – from design to construction to facility management.” Digital twins help to centralize data from across different areas of concern into a shared environment. This allows IT, engineering, finance, procurement and construction teams to explore and simulate the performance, financial and environmental tradeoffs much earlier in the process. Various efficiency gains in equipment and space utilization directly reduce energy and carbon footprint. 
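The cross-team centralization described above can be made concrete with a toy model. The sketch below is a hypothetical data structure, not any vendor's API; it merges two readings that normally live with different teams, total facility draw and IT load, and derives PUE (power usage effectiveness), the standard data center efficiency metric in which 1.0 is the ideal:

```python
# Illustrative sketch (not any vendor's API): a minimal "twin" record that
# merges readings from separate teams -- facility power and IT load -- and
# derives PUE (power usage effectiveness) and cooling/overhead draw.
from dataclasses import dataclass

@dataclass
class FacilityTwin:
    name: str
    total_power_kw: float  # everything the facility draws from the grid
    it_power_kw: float     # power reaching compute/storage/network gear

    def pue(self):
        """PUE = total facility power / IT power; 1.0 is the ideal."""
        if self.it_power_kw <= 0:
            raise ValueError("IT power must be positive")
        return self.total_power_kw / self.it_power_kw

    def overhead_kw(self):
        """Power spent on cooling, conversion losses, lighting, etc."""
        return self.total_power_kw - self.it_power_kw

twin = FacilityTwin("AMS-1", total_power_kw=1200.0, it_power_kw=800.0)
assert twin.pue() == 1.5
assert twin.overhead_kw() == 400.0
```

In practice, a twin would ingest these readings continuously and track PUE over time rather than from a single snapshot.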
Digital twins can also help improve construction and operational efficiency to reduce waste, staffing requirements and the associated environmental footprint of these activities. Enterprises and data center operators like Nvidia may cobble together a digital twin workflow from an assortment of simulation and modeling tools that combine engineering, CAD and data center information management (DCIM) capabilities. Increasingly, DCIM vendors like Schneider Electric are introducing digital twin capabilities directly into their tools. Vendors like Dassault Systèmes and Future Facilities provide more integrated digital twins for data centers. And companies like Nvidia are starting to roll out new tools like Nvidia Air for optimizing data center physical and logical layout. Putting it into operation Equinix works with Future Facilities to build digital twins for several of the company’s data centers. The digital twin helps engineers ensure cooling systems and connected ecosystem components are designed to deliver the required capacity and optimal efficiency. Engineers can compare the expected and actual behavior and energy use of data centers. “This provides us great insight into required maintenance and possibilities to optimize energy efficiency,” van Gennip said. Equinix engineers work with partners to create a 3D model of the physical data center. The data center twin is modeled based on various factors, such as the capacity and density of computing equipment within the data center and cooling system paths. A centralized digital twin platform helps engineers predict proposed changes’ impact on power distribution, space utilization and cooling paths using live data, like power and temperature. 
This real-time data is merged into the existing model for accurate analysis and predictions, allowing the data center’s twin to boost efficiency by forecasting energy needs. Dassault Systèmes works with leading hyperscale data center companies to design and construct next-generation data centers. “Their biggest challenges are how to reduce project standup time to keep up with the growing demand and how to make data centers more sustainable by reducing energy, water consumption and waste during construction and operations,” said Marty Rozmanith, sales strategy director for the architecture, engineering and construction (AEC) industry at Dassault Systèmes. Making it easy Historically, data center management has been split into silos that each focus on one aspect of managing a facility, Kasper Dessing, director of global building management optimization at Digital Realty, a data center real estate investment trust, told VentureBeat. As a result, managers of different areas can miss the bigger picture. This becomes particularly important when looking at facility maintenance, now and in the future. Data centers produce inordinate amounts of data that is impossible for humans to capture, aggregate and manage. And this will only get worse as digital services continue to become more sophisticated. “With digital twins, we’re able to take a virtual representation of the elements and dynamics within our facilities and simulate their actual behavior, in real time, under any operating scenario,” Dessing said. Digital Realty has found that generic models of its data center operations are not good enough because of the amount of data and interdependencies between components. Because of this, Digital Realty integrated digital twins of its facilities with its own proprietary artificial intelligence (AI) and machine learning (ML) platform to analyze thousands of data streams. This allows them to track all the components within facilities and make real-time adjustments. 
It can also help predict behavior in the future for predictive maintenance, saving time and cutting costs. This visibility into facilities and the relationships between individual components also helps improve new facility designs to make them more efficient. Digital Realty also uses digital twins and its AI platform to optimize energy consumption. “Sustainability is a priority for us, and optimizing energy consumption at each of our facilities helps us to cut costs and reduce our impact on the environment simultaneously,” Dessing said. Not every employee has the technical expertise to run a simulation when making a decision. So Digital Realty integrated a recommendation engine into its digital twin platform. “This allows us to make the technology accessible to a much broader range of colleagues, so we don’t have to rely on the experts all the time,” Dessing said. How the pieces fit together The process of designing, building and operating a data center generates a lot of data, which is stored in different formats and disparate systems. Managing and organizing the data with appropriate access control and change management is very challenging, said Rozmanith. Digital twins can bring data from multiple disciplines, levels of development (LOD) and multiple scales. This allows multiple stakeholders to collaborate on a single source of truth in real time. More sophisticated digital twins combine various techniques for simulating thermal, structural, electrical, control and monitoring, manufacturing and assembly using an integrated digital twin. “With a common platform, everyone will be working from a single source of truth, resulting in time savings, quality improvements and overall data center delivery,” Rozmanith explained, “The platform is a change agent.” Enterprises are increasingly investigating how to bring together multiple digital twins. 
“As we’ve started incorporating more data and simulations that connect engineering designs, construction scheduling and operations, interoperability across twins has become a challenge,” said Teresa Tung, cloud first chief technologist at Accenture. Tung’s team is working with data center providers to apply data and domain expertise to analytics to determine the number and configurations of simulations needed to drive what-if predictions. They use domain knowledge graphs, the same technology that underpins internet search, to capture these requirements and map their relationships between elements. Carsten Baumann, director of strategic initiatives and solution architect at Schneider Electric , said that providers are increasingly adding digital twin capabilities to DCIM tools to simulate the impact of infrastructure upgrades before the actual physical deployment. He believes that open standards that lead to simpler integration between data center equipment and management tools make it easier to use digital twins as part of everyday data center workflows. Here are 19 ways digital twins can improve sustainability across design, construction, operations and planning: Design Placing new servers “Perhaps the biggest impact of using digital twins in the data center industry is in airflow management and IT equipment placements,” said Baumann. The rapid and increased demand to deploy compute, storage and network resources comes with significant infrastructure challenges. Just because there is physical space in a particular rack or location does not mean that there is sufficient power, connectivity and heat removal capacity available. Digital twins can help when a seemingly simple installation might require a significant power upgrade and recommend a better alternative. Increasing density Increasing the density of equipment in a data center can reduce the climate impact of constructing new facilities. 
Loren Absher, director of enterprise agility with Information Services Group (ISG), said digital twins can help optimize data center designs to improve all the associated elements of power, cabling, cooling requirements, airflow and even raised floor integrity to prevent catastrophic failure. They can also help plan physical workflow changes required for the increased density. Improving thermal performance Cooling is the second-largest energy consumer in data centers, behind the equipment itself. A modern data center includes cooling systems composed of chillers, piping and HVAC equipment. Digital twins can use thermal simulation to understand the cooling system’s behavior and improve its performance. Rozmanith said teams often combine 1D simulation of the chain of equipment representing variation in the number of chillers and size of pipes with 3D computational fluid dynamics (CFD) analysis of air flows to find the best balance between the cold air generated and the cooling of equipment to optimize energy consumption. Assessing seasonal impact Digital twins can also help data center designers better plan for seasonal climate change, said Dan Kirsch, managing director and cofounder of Techstrong Research. In this case, designers can also plan for the impact of outside seasonal climate variations to reduce overall operational cost and energy footprint. “Digital twins allow a truly customized and optimized design based on the specific needs and on-site conditions of a client without the need for on-the-ground experimentation,” Kirsch said. Creating modular components Dassault works with large-scale data center operators to create modular components that can be reused across different data center designs. Rozmanith said digital twins help enterprises define and configure the properties of these modules to reduce the design, procurement and installation time with a configure-to-order approach. This can help reduce the environmental footprint of new data center builds. 
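Full computational fluid dynamics is well beyond a code snippet, but the simpler "1D" energy-balance side of cooling design mentioned above can be illustrated with the standard sensible-heat relation Q = rho * V * cp * dT. The function and example figures below are illustrative only; the air constants are approximate textbook values:

```python
# Back-of-envelope sketch of the "1D" side of cooling design: how much
# airflow is needed to carry away a given IT heat load, using the sensible
# heat equation Q = rho * V * cp * dT. Constants are approximate values
# for air at room conditions; a real tool would model the full equipment chain.
RHO_AIR = 1.2     # kg/m^3, approximate density of air
CP_AIR = 1005.0   # J/(kg*K), approximate specific heat of air

def required_airflow_m3s(heat_load_kw, delta_t_k):
    """Volumetric airflow (m^3/s) needed to remove heat_load_kw with a
    supply-to-return temperature rise of delta_t_k kelvin."""
    if delta_t_k <= 0:
        raise ValueError("temperature rise must be positive")
    return (heat_load_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_k)

# A 10 kW rack with a 10 K temperature rise needs roughly 0.83 m^3/s of air;
# widening the allowed rise to 15 K cuts the required airflow by a third,
# which is one reason hot/cold-aisle separation saves fan energy.
flow_10k = required_airflow_m3s(10, 10)
assert 0.8 < flow_10k < 0.9
assert required_airflow_m3s(10, 15) < flow_10k
```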
Testing and validating equipment Bruno Berti, senior vice president of product at NTT Global Data Centers Americas , said they are using digital twins to test and validate equipment before deploying it into data centers. These new workflows allow them to build and test electrical and generator modules, so engineers can identify any process failure before the product goes into production. This reduces the environmental impact of waste and improves risk assessment, accelerates new product development and enhances data center reliability and resiliency. The digital twin also helps schedule predictive maintenance to lower maintenance costs. Optimizing battery performance Digital twins can also be used to model and design systems to improve battery health and expected lifespan, according to Greg Ratcliff, chief innovation officer at Vertiv , a company that makes data center equipment. This could help reduce the environmental footprint associated with creating new batteries. In this case, digital twins help teams simulate different design choices using battery health measurements and facility details to predict every battery’s health and service life in a string. “If a single battery in a string fails, the entire string fails, so monitoring the health of each battery is critical,” Ratcliff said. Assessing environmentally friendly alternatives Data center operators can assess the performance, environmental benefits and downsides of new approaches. For example, Kao Data used Future Facilities digital twin tools to virtually test and deploy a refrigerant-free indirect evaporative cooling (IEC) system that uses water evaporation in place of mechanical systems to cool air on hot days. This helped Kao improve its power utilization effectiveness and reduce its environmental footprint. Construction Streamlining construction Digital twins can simulate complex tasks, assembly, equipment usage and human safety. 
They can also improve collaboration across the design and construction ecosystem of suppliers, integrators and contractors to remove process friction. Rozmanith said the combination of better simulation and collaboration could reduce construction time, problems, rework, requests for information and the number of safety incidents. This has helped Dassault customers reduce time to market by an average of 10-15% and reduced the environmental impact associated with longer construction times. Reducing construction waste Data center designers are using digital twins to better plan for construction so that crews can work more efficiently, creating less waste and reducing the time between different stages of construction. “By creating a virtual model of the data center along with the full bill of materials, designers can fine-tune how a construction crew will assemble the data center down to every last detail,” said Kirsch. This planning helps to reduce the need for teams to wait on standby as other teams complete their portion of the build. Reducing waste during the construction of a data center is not trivial. Many of those components cannot be reused or recycled and instead end up in landfills, he said. Operations Recommending maintenance Digital twins can help identify the root causes of issues and make maintenance recommendations for quick fixes that reduce energy. For example, a digital twin model of Equinix’s Amsterdam facility pointed out that they had to clean the cooling towers and tune the fans, which both used more energy than the model expected. This led to a further 10% improvement in energy efficiency in an already efficient data center IBX, said van Gennip. Extending asset life Dassault’s virtual twins can contextualize operations data for AI and ML algorithms to improve predictive maintenance. Rozmanith said this extends the life of equipment, which reduces e-waste. 
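The Equinix example, in which cooling towers and fans drew more energy than the model expected, suggests a simple recurring pattern: diff measured consumption against the twin's prediction and flag the outliers. The sketch below is hypothetical; the component names and 10% tolerance are invented for illustration:

```python
# Hypothetical sketch: compare each component's measured energy use with
# the digital twin's prediction and flag candidates for maintenance when
# the overshoot exceeds a tolerance (here 10%).

def maintenance_candidates(predicted_kwh, measured_kwh, tolerance=0.10):
    """Return component names whose measured use exceeds the prediction
    by more than `tolerance` (fractional). Both arguments are dicts of
    component name -> kWh over the same period."""
    flagged = []
    for name, expected in predicted_kwh.items():
        actual = measured_kwh.get(name)
        if actual is None or expected <= 0:
            continue  # no reading, or nothing sensible to compare against
        if (actual - expected) / expected > tolerance:
            flagged.append(name)
    return sorted(flagged)

predicted = {"cooling_tower": 900.0, "fans": 400.0, "ups": 250.0}
measured = {"cooling_tower": 1040.0, "fans": 470.0, "ups": 252.0}
assert maintenance_candidates(predicted, measured) == ["cooling_tower", "fans"]
```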
Virtual twins can also optimize energy and water use by improving the efficiency of cooling and power systems. More efficient maintenance and repairs Digital twins can simplify access to all the information required to do maintenance, repairs and refurbishments more efficiently, such as documentation, user manuals, maintenance manuals, material supplier information and spare parts lists. Lorenz Hofmann, vice president of custom air handling and modular solutions at Vertiv, said this saves time and work and reduces the CO2 footprint. Automating data center processes Improvements in process mining capabilities can help data center leaders understand how their teams interact with applications and react to changes in the data center environment. Ryan Raiker, senior director of process intelligence at ABBYY, said that the ability to understand and document procedures with digital twins could help data center teams identify automation candidates. They can also implement protocols to take action when failures happen to ensure data center uptime and reduce failure and waste. Improving collaboration between colocation providers and enterprises Colocation data centers allow multiple enterprises to share the same data center. But when an enterprise client decides to install new equipment, this could have a power, thermal and weight impact on nearby equipment owned by others. Thésée DataCenter in France worked with Future Facilities to deploy a digital twin of each facility in the cloud. This digital twin enables customers to explore the impact of proposed changes from their own or nearby equipment through a web service portal. This helps Thésée engineers collaborate with customers to improve their data center space usage and reduce the need for new data center builds. Planning Ensuring compliance with data twins NTT is working on a concept for data twins to help enterprises collect and standardize data relating to all aspects of the business. 
The data twin replicates enterprise data sources and their interrelationships into a standard format to provide a centralized location for analysis and reporting. Bennett Indart, vice president of SMART world solutions at NTT Data Services, said this would help data centers report their progress in complying with sustainability goals. It can also identify new opportunities for improvement. Improving finance decisions NTT’s Berti said that NTT has started integrating financial data into its digital twin. This helps NTT review the cost of materials and labor using real-time data and advanced analytics as part of their planning. It can also help determine whether adjustments to the manufacturing value chain are financially sound and if the projected outcomes reduce data center operational costs. Assessing data center migration impact Accenture worked with Carnegie Mellon University on a digital twin that allows enterprises to measure the sustainability impact of migrations between data centers and cloud providers, called myNav Green Cloud Advisor. Accenture’s Tung said it starts with a twin that baselines the current data center energy consumption, computing requirements and sustainability goals. The tool allows enterprises to plan and compare various cloud solution options that include carbon emissions goals, locations, energy sources and readiness to transition to clean energy. Understanding material impact Kirsch said it’s often difficult to know the actual bill of materials within a data center until construction is complete. During data center construction, teams encounter site-specific conditions that require deviations from the original design. Design teams can plan for all on-site conditions with digital twins and specify the materials needed. 
“By creating an accurate bill of materials, data center creators and end-users have a full understanding of the materials to be used and their impact on overall sustainability goals before construction begins,” Kirsch said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,639
2,022
"Choose wisely -- How technology decisions drive data center efficiency | VentureBeat"
"https://venturebeat.com/2022/07/07/choose-wisely-how-technology-decisions-drive-data-center-efficiency"
"Sponsored: Choose wisely — How technology decisions drive data center efficiency

Presented by AMD

The modern data center hinges on technologies that drive high performance, robust security and ample flexibility. In recent years, however, another key component has started to consume significant mindshare for IT leaders: energy efficiency. As the issue of climate change becomes more pressing, businesses across the globe are understanding their responsibility to reduce their carbon footprints. To address this imperative, business leaders are adopting and acting upon formalized corporate sustainability and efficiency-related goals. The data center, as the computational heart of an organization, presents a unique opportunity to drive efficiencies and propel organizations toward meeting these goals.

Reconsidering energy efficiency in the data center

With the volume of company data swelling and application demands intensifying, computing systems and the processors that underlie these systems must bear the brunt of the workload.
More computational tasks need to be executed, likely necessitating a higher number of servers and more power. Concurrently, as these tasks generate heat, yet more power may be needed to support the cooling mechanisms that keep the data center running smoothly. Because of this, data centers have traditionally been viewed with suspicion when it comes to environmental sustainability; they are often seen as power-hungry. IT organizations have had to fight against this negative perception of the data center as inherently environmentally burdensome. But choosing the right technology can help organizations address some of these environmental concerns.

Power your data center, the power-efficient way

When it comes to choosing the right technologies to power a data center, high performance has typically been the primary objective. Increasingly, however, IT leaders are being tasked with implementing solutions that “have it all” — that is, solutions that drive cost efficiency, energy efficiency and performance. To address this trifecta of (seemingly paradoxical) mandates, choosing the right server processor is crucial. For example, energy costs in the telecom industry can range from 20% to 40% of operating costs. [1] IT organizations need processors that can deliver the necessary amount of performance to enable optimal intra-organizational and customer experiences, while not compromising on costs or energy efficiency. This includes considering the number of servers required to complete tasks, and whether a more powerful processor will allow a data center to provide the same level of compute power with a smaller footprint. Reducing the size of a data center footprint can help lower power and cooling costs and aid in lowering both hardware investment and overall TCO, making it an easy decision that helps meet challenging corporate power-efficiency goals.
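The consolidation arithmetic behind a smaller footprint can be sketched in a few lines. All figures below (VM densities per server class and per-server power draw) are hypothetical placeholders, not vendor measurements; the point is only that fewer, denser servers shrink the power and cooling budget proportionally.

```python
import math

# Hypothetical inputs: VMs hosted per server for two server classes, and an
# assumed average per-server power draw (identical here for simplicity).
vms_needed = 1200
vms_per_server = {"higher-density": 120, "lower-density": 80}
watts_per_server = 500

for name, density in vms_per_server.items():
    servers = math.ceil(vms_needed / density)      # servers required for the fleet
    power_kw = servers * watts_per_server / 1000   # power to supply (and to cool)
    print(f"{name}: {servers} servers, ~{power_kw:.1f} kW")
```

With these placeholder numbers, the denser class needs 10 servers instead of 15 (a third fewer), and the power budget shrinks accordingly; in practice per-server draw differs between classes, which is why measured TCO studies matter.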
Through their infrastructure decisions, IT leaders have a distinctive opportunity to turn sustainability goals into realities and inspire the rest of their organization.

Leaner for your budget and your power consumption

That all sounds rather abstract, so let us turn it into a more relatable scenario. Let’s say your organization needs to run 1,200 virtual machines (VMs). To achieve this number of VMs, you can choose to run on either 10 AMD EPYC 7713 processor-based servers, or 15 servers based on competitive top-of-stack x86 processors. In this example, with AMD, you have a third fewer servers, your solution cost goes down by an estimated 44%, and your hardware TCO over three years is a projected 41% lower compared to the competitive setup. You also might reduce energy consumption by an estimated 32% — a reduction which translates to the equivalent environmental benefit provided by 28 acres of United States forest annually. This means that by simply choosing to run these 1,200 VMs on EPYC in this scenario, your data center is yielding roughly the same amount of greenhouse gas reduction as would 28 acres of U.S. forest, each year across the life of the deployment. Let that sink in for a moment. [2]

Committed to data center efficiency

AMD server processors have been designed with energy efficiency in mind, making them the most energy-efficient x86 processors in the game [3], but the path forward looks even brighter. Last year, AMD announced an aggressive goal of increasing energy efficiency by 30X for server CPUs and GPU accelerators powering servers for HPC and AI-training from 2020-2025. If all global AI and HPC server nodes were to make similar gains, AMD projects up to 51 billion kilowatt-hours (kWh) of electricity could be saved from 2021-2025 relative to baseline trends, amounting to $6.2B USD in electricity savings as well as carbon benefits from 600 million tree seedlings grown for 10 years.
[4] Read more about AMD’s ongoing commitment to driving the future of data center sustainability here.

Stay efficient today, tomorrow and beyond

Sustainability is poised to become an increasingly important part of corporate stewardship, and the data center is becoming a ripe opportunity to drive energy efficiency and enable ambitious corporate sustainability goals. Choosing the right processor is a relatively simple but very important step towards unlocking this opportunity and can give IT organizations the ability to balance a set of challenging mandates: drive high performance, do it cost-effectively and help address the burden on our planet. Ram Peddibhotla is Corporate Vice President, Product Management at AMD.

[1] https://www.gsma.com/futurenetworks/wiki/energy-efficiency-2/, GSMA 2019
[2] MLNTCO-021 – https://www.amd.com/en/claims/epyc3x#faq-MLNXTCO-021
[3] As of 2/2/22, of SPECpower_ssj® 2008 results published on SPEC’s website, the 55 publications with the highest overall efficiency results were all powered by AMD EPYC processors. More information about SPEC® is available at http://www.spec.org. SPEC and SPECpower are registered trademarks of the Standard Performance Evaluation Corporation. https://www.amd.com/en/claims/epyc3x#faq-EPYC-028
[4] Scenario based on all AI and HPC server nodes globally making similar gains to the AMD 30x goal, resulting in cumulative savings of up to 51.4 billion kilowatt-hours of electricity from 2021-2025 relative to baseline 2020 trends. Assumes $0.12 per kWh x 51.4 billion kWh ≈ $6.2 billion USD. Metric tonnes of CO2e emissions, and the equivalent estimate for tree plantings, is based on entering electricity savings into the U.S. EPA Greenhouse Gas Equivalency Calculator on 12/1/2021. https://www.epa.gov/energy/greenhouse-gas-equivalencies-calculator
"
13,640
2,022
"How efficient code increases sustainability in the enterprise | VentureBeat"
"https://venturebeat.com/2022/07/07/how-efficient-code-increases-sustainability-in-the-enterprise"
"How efficient code increases sustainability in the enterprise

This article is part of a VB special issue. Read the full series here: Intelligent Sustainability. Everything counts in large amounts. You don’t have to be Google, or build large AI models, to benefit from writing efficient code. But how do you measure that? It’s complicated, but that’s what Abhishek Gupta and the Green Software Foundation (GSF) are relentlessly working on. The GSF is a nonprofit formed by the Linux Foundation, with 32 organizations and close to 700 individuals participating in various projects to further its mission. Its mission is to build a trusted ecosystem of people, standards, tooling and best practices for creating and building green software, which it defines as “software that is responsible for emitting fewer greenhouse gases.” The likes of Accenture, BCG, GitHub, Intel and Microsoft participate in GSF, and its efforts are organized across four working groups: standards, policy, open source and community.
Gupta, who serves as the chair for the Standards working group at GSF, in addition to his roles as BCG’s senior responsible AI leader and expert and the Montreal AI Ethics Institute founder and principal researcher, shared the group’s current work and roadmap for measuring the impact of software on sustainability.

The first step towards greener code is measuring its impact

The first thing Gupta notes about the GSF is that it focuses on reduction, not neutralization. This means that things like renewable energy credits or power purchase agreements, aiming to offset and neutralize, aren’t part of the GSF’s mission. The focus, Gupta said, is on actual reductions in how you design, develop and deploy software systems. This is a work in progress, and a very complex exercise. But companies at every scale can benefit from more efficient code. Think about what happens to your phone, or laptop, when running apps that involve more or less processing (playing videos versus editing text, say). The difference in battery drain is significant. The larger the scale, the larger the stakes — making large language models more efficient, for example, could result in considerable savings. The first step towards improving is measuring, as the famous adage goes. The focal point of Gupta’s work with the GSF Standards working group is something called the software carbon intensity specification (SCI). The SCI specification defines a methodology for calculating the rate of carbon emissions for a software system. The GSF has adopted the notion of carbon efficiency as a way of thinking about the carbon impacts of software systems. This, Gupta explained, is broken down into three parts: energy efficiency, hardware efficiency and carbon awareness. Energy efficiency is trying to consume as little electricity as possible. Electricity is the main way software consumes energy, and in most parts of the world it’s primarily generated from burning fossil fuels. This is where its carbon impact comes from.
Hardware efficiency is trying to use the least amount of embodied carbon possible. Embodied carbon, Gupta noted, is meant to capture the carbon impact of everything that goes into hardware, such as servers, chips, smartphones, etc. Carbon awareness focuses on trying to do more work when the electricity is “clean,” and less when the electricity is “dirty,” Gupta said. He also referred to the notion of energy proportionality. The idea there is that higher rates of utilization for a piece of hardware mean that electricity is turned into more useful work, rather than idling. When it comes to actually measuring impact, however, things get messy. “Some folks look at Flops. Some look directly at the energy consumed by the systems, and there’s a variety of approaches that lead to quite different results. That’s one of the challenges that we face in the field,” Gupta said. The goal, Gupta said, is to have energy efficiency, hardware efficiency and carbon awareness talked about very explicitly in the calculation. Ultimately, the SCI aims to become an official standard, promoting comparability.

Granularity and transparency are key for a complex undertaking

One of the key points that Gupta made is that “software and hardware are inextricably linked.” The GSF prioritizes reducing carbon emissions in software, but the choice and use of hardware is a very important part of that. Nowadays, the cloud is where the majority of software is produced and deployed. When we talk about software systems deployed in the cloud, a question that Gupta said people often ask is about fractional use. If only a fraction of a piece of hardware is used, and only for a certain amount of time, how should that be accounted for? This is where time-sharing and resource-sharing come into play. These are ways to calculate what part of a hardware system’s embodied emissions should be taken into account when calculating the carbon intensity score for software.
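A toy calculation can make the attribution concrete. The shape below (operational energy times grid intensity, plus a time- and resource-shared slice of embodied carbon, normalized per unit of work) follows the general form of the SCI methodology as described here; every number is an invented placeholder, and the real specification defines each term far more carefully.

```python
def embodied_share(total_embodied_g, reserved_hours, lifespan_hours, resource_fraction):
    """Attribute a slice of hardware's embodied carbon by time- and resource-share."""
    return total_embodied_g * (reserved_hours / lifespan_hours) * resource_fraction

def carbon_per_unit(energy_kwh, grid_gco2_per_kwh, embodied_g, units_of_work):
    """Operational plus embodied carbon, normalized per functional unit of work."""
    return (energy_kwh * grid_gco2_per_kwh + embodied_g) / units_of_work

# Placeholder scenario: one day on a quarter of a server with a 4-year lifespan.
m = embodied_share(total_embodied_g=1_000_000, reserved_hours=24,
                   lifespan_hours=4 * 365 * 24, resource_fraction=0.25)
score = carbon_per_unit(energy_kwh=12.0, grid_gco2_per_kwh=400,
                        embodied_g=m, units_of_work=100_000)  # e.g. API calls served
print(f"~{score:.4f} gCO2e per unit of work")
```

Even this sketch shows why granularity matters: the answer shifts with the lifespan, the resource fraction, and the grid intensity assumed for each hour of operation.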
Scale is also considered, through a parameter Gupta called the functional unit. That can be the number of minutes spent using the software, or the number of API calls served, for example. For hardware, essentially, the entire lifecycle analysis needs to be considered to be able to calculate embodied emissions. That is really complex, so the GSF started an initiative on creating open data sets that will help people calculate embodied emissions. “When you reserve a particular instance on a cloud provider, they’ll give you some information about the performance of that node and its parameters. But then what are the specifics of that piece of hardware that is actually running your software?” Gupta said. “Getting transparency, getting data on that tends to be important as well. And that’s why we’re investing in creating some open data so that you can facilitate those calculations.” Granularity is key, as Gupta emphasized, otherwise it all ends up being rather abstract and vague. Inevitably, this also leads to complexity, and questions about boundaries, i.e., what should be included in software carbon emissions calculations. “You can think about memory, storage, compute, but also some things that we tend to forget. What is the logging infrastructure? Do you have any sort of monitoring in place? Do you have idle machines that are on standby for redundancy? Do you have some sort of build and deploy pipelines?” he said. “Then speaking of machine learning models. You can have an inventory of models that are used. You can have shadow deployments, canary deployments. You have all of these things, backups that are in place, that also end up being part of that boundary.” The other important principle Gupta emphasized is transparency. Transparency about what is included in calculations, but also about how these calculations are done. For example, where direct observability is not possible, the GSF promotes what Gupta called “a lab-based, or model-based approach.”
“When we talk about consumption of third-party modules, APIs, libraries, if you don’t have direct visibility, taking a lab-based or model-based approach where you can approximate and get some directional intelligence on what the carbon impacts are is still useful. And you can use that in your SCI score calculation, with the requirement that you are transparent and [state] that’s what you’ve done,” Gupta said.

From measuring to acting

Ultimately, the SCI with all its intricacies and complexity is a means to an end, and the goal is to make it accessible to everyone. The purpose, the GSF notes, is to help users and developers make informed choices about which tools, approaches, architectures, and services they use in the future. It is a score rather than a total; lower numbers are better than higher numbers, and reaching zero is impossible. It is possible to calculate an SCI score for any software application, from a large distributed cloud system to a small monolithic open-source library, any on-premise application or even a serverless function. The product or service may be running in any environment, whether a personal computer, private data center or a hyperscale cloud. As Gupta noted, there is a panoply of related tools out there: Allen AI Institute’s Beaker, RAPL, Greenframe, Code Carbon and PowDroid, to name a few. The GSF offers a comprehensive list. These tools can help enterprises get a better understanding of the energy consumption of their applications, but because everybody is doing it a bit differently, the results that you get also tend to be different, Gupta said. This is why the GSF promotes adoption of the SCI. An important aspect regardless of the choice of specific tool is actionable feedback. That is, the tool should not only measure the carbon impact of the software, but also offer suggestions for improvement. Some of these tools provide targeted recommendations on what parts of the code are more energy hungry, and where to optimize.
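As an illustration of how crude a model-based estimate can be while still giving directional intelligence: absent RAPL counters or a power meter, one can multiply measured CPU time by an assumed package power. The 15-watt figure below is an invented placeholder, and a real tool would use measured or vendor-published draw; the transparency requirement is simply to say so.

```python
import time

ASSUMED_CPU_WATTS = 15.0  # placeholder assumption, not a measured figure

def estimated_energy_joules(fn, *args):
    """Model-based estimate: CPU seconds consumed by fn times an assumed power."""
    start = time.process_time()
    fn(*args)
    cpu_seconds = time.process_time() - start
    return cpu_seconds * ASSUMED_CPU_WATTS  # watts x seconds = joules

def busy_work(n):
    return sum(i * i for i in range(n))

print(f"~{estimated_energy_joules(busy_work, 1_000_000):.3f} J (estimated, not measured)")
```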
But that’s not all that matters — recommendations about processes and choices are important too, Gupta said. For AI systems, Gupta explained that people should also think about things like system design, training methodology, and model architectures. Quantizing weights, using distilled networks and adopting TinyML approaches can all be quite useful in reducing the carbon impacts of systems. As there is “a tremendous push” for getting AI models to work on resource-constrained devices, that also has the byproduct of mitigating carbon impacts. Making the right hardware choices can also help, according to Gupta. Using fit-for-purpose hardware, e.g., application-specific integrated circuits, or AI chips such as TPUs, may help reduce the amount of energy used to train AI models. The same goes for deploying AI models — there are systems specifically developed for that purpose, Gupta noted. Making tactical choices in terms of where and when models are trained can also provide benefits. At the moment, sustainability reporting on software is at an embryonic stage. It’s rarely done, it’s on a voluntary basis, and it’s not standardized. An example that comes to mind is Google Cloud Model Cards, used to report on AI models. Gupta believes that sustainability should become a first-class citizen everywhere, alongside business and functional considerations. “When you have a product that needs to go out the door, the things that are optional are the first ones that are going to be dropped. If we start to incorporate these as mandatory requirements, then I think people would start paying more attention,” he said. At the same time, Gupta added, as consumers become more savvy, looking at environmental impact scores and making choices based on that, that will also make a difference. If users are only willing to pay for software that is green, it will impact bottom lines, and organizations will be forced to change their practices.
Currently, the GSF is working on releasing the first official version of SCI, which Gupta noted will be “a huge milestone.” It is expected to be unveiled at the 2022 UN Climate Change Conference. As Gupta shared, organizations that are a part of the GSF are considering incorporating SCI into their measurement methodologies and the software systems that they build. The GSF is also working on the awareness-raising front, including by holding summits around the world. “We’re embarking on this mission to raise awareness. It’s not something that people really think about today. So, we’re getting people to become aware that — ‘Hey, green software is a thing, and this is why you should care about it,'” Gupta concluded. "
13,641
2,022
"How green data centers can cut your carbon footprint | VentureBeat"
"https://venturebeat.com/2022/07/07/how-green-data-centers-can-cut-your-carbon-footprint"
"How green data centers can cut your carbon footprint

This article is part of a VB special issue. Read the full series here: Intelligent Sustainability. One of Microsoft’s largest data centers sits near the Columbia River in Quincy, Washington. Everyone loves the scenery, which is breathtaking, and the rural ambiance, which is a welcome respite from Seattle and Bellevue. The accountants, though, love the fact that the local electrical power is cheap because it comes from hydroelectric dams. And there’s one more thing: Hydroelectric power is also considered to be one of the greenest forms of energy with a very low carbon footprint. That’s why some call Microsoft’s compound one of the greenest data centers ever. “This is actually one of my favorite places in the world,” said Brad Smith, the president of Microsoft, when kicking off a video tour during the pandemic lockdown. “Why?
I think it represents the most important infrastructure of the 21st century.” Smith sees the massive data center with close to half a million servers in more than 20 buildings as “an incredible intersection between digital technology, energy technology, environmental science and the need for innovation.” The company started construction in 2006 and has been expanding the footprint ever since. The lure of low prices from hydroelectric power may make the CFOs happy, but the marketing team enjoys celebrating the low carbon footprint.

What makes a data center green?

Building one is a challenging mixture of architecture, network science and heat transfer. Many companies, not just Microsoft, are asking how they can do a good job on environmental questions, too. It’s not hard to understand why. Some believe that a green sales pitch can attract and keep customers. Some just think it’s the right thing to do. They want to build the best green data centers because they feel that the world will demand it. Still, a major challenge is trying to understand just what makes a data center green. Some factors, like carbon footprint, are straightforward, even if they’re not always simple to measure. Other factors are more philosophical, and the companies can make elaborate or sometimes strained arguments about how their strategy is good for some part of the environment. Microsoft’s big data center is one of the easiest to embrace. Hydroelectric power’s low price makes it a popular choice with the CFOs, but it also comes with a nice environmental bonus because no carbon dioxide was emitted into the atmosphere when creating the electricity. The sun evaporates the water, and then, when it falls into the mountains and rushes down the valleys, some of that energy can be captured by the massive dams that the U.S. built along the rivers in the northwest.
Some, though, are starting to point out issues caused largely by the massive size. Data centers turn electricity into heat, and getting rid of the heat isn’t always easy. Some biologists, for example, are protesting that dumping too many kilocalories into the water of the Columbia River is distorting the ecosystem. Further down the Columbia River near The Dalles, Oregon, Google built a data center for many of the same reasons as Microsoft. Now the company and some residents are arguing about how many gallons a day Google should be allowed to use to cool the computers. The dispute has unfolded over a number of years, and much of it revolves around non-environmental concerns like whether the city should disclose the details of any agreement to the public. Google has also found itself in similar disputes in Arizona and Texas. The company is addressing the conflicts both legally and by making public promises. In its Water Stewardship paper, the company sets forth principles for using the water efficiently and responsibly. For instance, some of their data centers recirculate water multiple times, effectively saving water over the past practice of using the water for only one cycle of cooling. Other data centers use air cooling, eliminating water use altogether. The company has also set a goal of being “water positive” by returning even more water than their offices consume. They explicitly promise to “replenish 120% of the water we consume, on average, across our offices and data centers and help restore and improve the quality of water and health of ecosystems in the communities where we operate.” Some supporters of the data centers also put the water use in historical context. They point out that cooling modern electronics usually requires much less water than the aluminum smelters that used to be the dominant industry in the Columbia Gorge.
Extracting aluminum from bauxite is an energy-intensive process, and companies like Alcoa have long been drawn to Washington State for access to the electricity. Lately, though, Silicon Valley companies are driving out the old metal companies by outbidding them for electrical power.

Deploying artificial intelligence

Water use, of course, is only part of the debate. Google is one of the leaders in trying to control electrical use and put a limit on their carbon footprint, not just in their data centers, but also in their offices around the world. They are deploying advanced algorithms, sometimes using artificial intelligence, and creating partnerships with green energy companies. “We aim to operate on carbon-free energy 24/7 by 2030 at all our data centers, cloud regions and campuses across the globe – the first company of our size to set this goal,” explained Corina Standiford, a member of Google’s sustainability team. “While it’s important to eliminate our carbon footprint, it’s best if we can use less energy in the first place.” Google speaks openly about many projects. For example, one of the challenges of using some renewable sources like wind power is that the supply varies with the weather. Google turned around and trained a machine learning algorithm to predict the wind, something that’s proven to be accurate over about 36 hours. The predictions from the AI are then used to schedule electricity purchases from the larger grid when wind power will not be available. Predicting the needs so far in advance gives other power sources the chance to plan, reducing the costs. Now, their cloud data centers can make longer-term commitments to electrical usage and notify non-wind generators in advance if the wind won’t be blowing.
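The scheduling idea reduces to a simple optimization, sketched below with invented forecast values (nothing here reflects the actual DeepMind model): commit deferrable batch work to the windiest forecast hours, so purchases from other generators can be planned for the remaining hours.

```python
# Hypothetical hourly wind-output forecast (MW) for the next eight hours.
forecast_mw = [120, 80, 30, 10, 15, 90, 140, 160]
batch_jobs_to_place = 3

# Commit deferrable work to the hours with the highest forecast wind output.
windy_hours = sorted(range(len(forecast_mw)),
                     key=lambda h: forecast_mw[h], reverse=True)[:batch_jobs_to_place]
other_hours = [h for h in range(len(forecast_mw)) if h not in windy_hours]

print("run batch jobs at hours:", sorted(windy_hours))
print("plan grid purchases for hours:", other_hours)
```

The real value, as the article notes, is in the advance notice: a 36-hour-accurate forecast gives non-wind generators time to plan, which lowers costs for everyone.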
“We can’t eliminate the variability of the wind, but our early results suggest that we can use machine learning to make wind power sufficiently more predictable and valuable,” said Sims Witherspoon, a project lead of the DeepMind AI tool, and Will Fadrhonc, the lead of the Carbon Free Energy Program, in a blog post. One of Google’s solutions is to support other green sources with both technology and purchase agreements. One of their investments is in Fervo, a company that makes geothermal electrical generators that are always on. The output will help sustain one of the company’s data centers in Nevada. The company is also working directly on improving the technology. The engineers have installed a network of fiber-optic cables near the geothermal plant, and Google is training their AI to predict how the heat flows in the ground. They hope to use this data to optimize production and also predict the best times when geothermal energy can replace other renewable energy sources. “This collaboration also sets the stage for next-generation geothermal to play a role as a firm and flexible carbon-free energy source that can increasingly replace carbon-emitting fossil fuels,” said Michael Terrell, the director of energy at Google, in a blog post about the effort. Another important strategy for Google is to calculate useful metrics about their energy profile and set goals around them. When contacted for the article, Google’s spokeswoman rapidly cited a number of facts that revolved around numerical goals: During 2020, all of Google’s data centers delivered “67% round-the-clock carbon free energy”. Five data centers are “operating at or near 90% carbon free energy.” Google tracks power usage throughout the year and makes sure to include all overhead. This allows them to calculate a system-wide “power usage effectiveness” of 1.10. Their goal is to turn these numbers into marketing that can help sell users on switching to Google’s Cloud Platform.
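Both headline metrics are ratios and easy to sanity-check. PUE is total facility energy over IT-equipment energy, so 1.10 means roughly 10% overhead for cooling and power conversion; the hourly carbon-free-energy matching below is one plausible reading of the round-the-clock metric, with all hourly figures invented for illustration.

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total facility energy divided by IT energy."""
    return total_facility_kwh / it_equipment_kwh

def cfe_percent(hourly_use_kwh, hourly_cfe_supply_kwh):
    """Share of consumption matched by carbon-free supply in the same hour."""
    matched = sum(min(use, cfe) for use, cfe in zip(hourly_use_kwh, hourly_cfe_supply_kwh))
    return 100.0 * matched / sum(hourly_use_kwh)

print(f"PUE = {pue(1_100, 1_000):.2f}")  # 1.10: 10% non-IT overhead
print(f"hourly-matched CFE = {cfe_percent([10, 10, 10, 10], [12, 9, 10, 4]):.1f}%")
```

Note that hourly matching is stricter than an annual average: surplus carbon-free supply in one hour cannot paper over a shortfall in another, which is why Google's 24/7 goal is harder than 100% annual offsetting.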
When companies are looking for cloud instances, they can weigh metrics like Power Usage Effectiveness (PUE) or the share of Carbon-Free Energy (CFE) in their choice matrix. The Google Cloud Region Picker is one tool for letting customers choose to move their workloads to the greenest Google data centers. Overall price and latency, two values normally used to help select cloud providers, are placed right next to the carbon footprint. At the time of writing, the Cloud Picker steered me to the servers in Iowa, where the price was $0.021811 per virtual CPU hour while the carbon-free energy score was 93%. The site also calculated that carbon output was 454 grams of CO2 equivalent per kilowatt-hour. Google’s marketing efforts show that its customers are most willing to consider the environmental effects for non-interactive work that would run in the background. “For best-effort workloads like batch jobs or backup, carbon scores were ranked as the top characteristic more than any other factor,” said Chris Talbott, from Cloud Sustainability, and Steren Giannini, a senior product manager, in a blog post. Bitcoin to the rescue? Sometimes the question of carbon footprints and green computing is a bit more complicated. One of the most difficult philosophical knots to unravel is the environmental value of some local Bitcoin mining operations that are parked next to oil wells. Lately, both environmentalists and technologists have complained that Bitcoin mining is incredibly energy-intensive, and some estimates suggest that the blockchain miners consume more electricity than a small European nation. The miners do no useful work beyond running an elaborate mathematical game to establish consensus. But what if the energy powering the mining would otherwise have been wasted – or worse, vented into the atmosphere, where it could sit for decades absorbing heat as a greenhouse gas? 
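A region chooser of this kind reduces to a small ranking problem. In this sketch, the Iowa price and carbon-free energy score come from the article; the other regions and their numbers are invented for illustration, and the selection rule is an assumption, not the Region Picker’s actual algorithm.

```python
# Sketch of a Region Picker-style choice: rank candidate regions by
# carbon-free energy (CFE) score, subject to a price ceiling. Only
# the Iowa figures come from the article; the rest are made up.
REGIONS = {
    "us-central1 (Iowa)": {"price_per_vcpu_hr": 0.021811, "cfe_pct": 93},
    "region-b":           {"price_per_vcpu_hr": 0.020500, "cfe_pct": 41},
    "region-c":           {"price_per_vcpu_hr": 0.026000, "cfe_pct": 97},
}

def greenest_affordable_region(regions, max_price_per_vcpu_hr):
    """Pick the highest-CFE region whose price fits the budget --
    the ranking that batch and backup workloads tend to prioritize."""
    affordable = {name: r for name, r in regions.items()
                  if r["price_per_vcpu_hr"] <= max_price_per_vcpu_hr}
    return max(affordable, key=lambda name: affordable[name]["cfe_pct"])

print(greenest_affordable_region(REGIONS, 0.022))  # "us-central1 (Iowa)"
```

Loosening the price ceiling flips the answer to the greener but pricier region, which mirrors the price-versus-carbon trade-off the picker puts in front of customers.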
A number of companies are starting to build mobile data centers by filling shipping containers with computers and driving them to local sources of low-cost energy. One example, Giga, brings specialized hardware for mining bitcoin and parks it near oil wells, where it runs the mining computations using energy harvested from burning the natural gas in a generator instead of burning it in an open flare. The developers of these data centers argue that their Bitcoin mining solutions are at least greener than the alternative. Many drilling operations end up releasing natural gas as a by-product, and they often burn the gas in open flares because there’s often no easy or economically feasible way to capture it and bring it to market. The mobile data centers divert the flare to a local generator that burns the gas to produce electricity. Instead of being wasted, it’s put to use. Not only that, but the engines powering the generators usually do a better job of burning the gas than the naked flares. Little methane escapes into the atmosphere. This may not be as green as, say, using renewable energy, but the operators of the small, portable data centers still claim to be improving the health of the planet. Bitcoin mining is an easy option for these data centers because the algorithms don’t need much interaction with the larger network. The shipping containers can be moved to remote locations with very low-bandwidth connections from cellular networks or satellites. In the future, these mobile data centers could also handle other compute-intensive work that doesn’t require fat or fast connections to the larger internet. Some scientific tasks, like simulating protein folding, might also be ideal. Far-flung data centers Not all of the distant data centers rely upon natural gas for their electricity. One operation in Kenya relies upon geothermal energy extracted from some of the geological fault lines in the Great Rift Valley. 
The opportunities are great, and some estimate that there may be many thousands of megawatts of untapped power. In one report, the state power company, KenGen, is said to be reaching out to Bitcoin miners to bring their operations to the Rift Valley. They believe that Bitcoin mining is easily moved to Africa so that the untapped geothermal energy can be converted into digital currency. Geothermal power is believed to be very environmentally friendly because it does not release carbon dioxide into the atmosphere. Large versus small An interesting debate is emerging about whether small or large data centers are greener. On one side are the big cloud companies that tout the ability of the large data centers to embrace the latest technologies and exploit all of the economies of scale. Microsoft’s big data center on the Columbia Gorge, for instance, runs on hydroelectricity, but when it needs other power, it uses diesel generators. Microsoft wants to make even this usage greener. They’ve committed to using the best diesel generators that run on biodiesel and release only well-filtered exhaust. They also intend to phase out diesel by 2030. Many smaller data centers, especially the ones that are built by companies in their office buildings, are often afterthoughts, and the local companies often cannot follow the best practices. They may not have very efficient air conditioning, nor can they use the greenest electric options. The fans of the hyperscale data centers point to the smaller facilities’ age and inefficiencies. But sometimes the small data centers have other options. In the wintertime, a local data center can use the extra heat to keep the rest of the office space warm. The extra square footage devoted to the data center may simply be surplus space that the corporation was already paying to support. Simply moving the data center out of the basement of the local office may not save as much as it might appear. 
Fear and financial worries For all the easy excitement, though, it’s important to keep in mind that many data centers do not put green goals or saving the environment at the top of their list. Where the solutions are easy and often come with substantial cost savings at the same time, as they do with adopting hydroelectric power, the companies are quick to embrace green power. But when the cost of alternative energy drives up the price of electricity, many aren’t so openly enthusiastic. Cloud costs remain a major part of IT budgets and, in many cases, they’re already growing because companies are embracing the clouds for other reasons. Adding in still higher prices for greener options is often met with resistance from CFOs in budget-minded firms. This is obvious after a bit of research. While all of the cloud companies are happy to speak about their performance and pricing, many don’t have much to say about any green initiatives. They feel that price is still the most important factor for their customers. Several cloud companies, both big and small, refused to discuss the matter on the record. One said simply through their public relations firm that they wanted to “pass on this for now, as they don’t have much to contribute.” "
13,642
2,022
"What are dual-use data centers and how they drive energy efficiency | VentureBeat"
"https://venturebeat.com/2022/07/07/what-are-dual-use-data-centers-and-how-they-drive-energy-efficiency"
"What are dual-use data centers and how they drive energy efficiency This article is part of a VB special issue. Read the full series here: Intelligent Sustainability. In 1984, global internet traffic was about 15 GB per month. By 2014, the average internet traffic per user was 15 gigabytes per month. Today, the number is even higher, thanks to the rise of mobile devices and digital services that have brought close to 5 billion people online. As more of the world’s population becomes connected, internet protocol (IP) traffic will skyrocket, increasing the utilization of data centers – through which most of the world’s traffic and big data pass – as well as the electricity needed to run them (produced primarily in coal-fired plants). According to the International Energy Agency (IEA), global data centers already consume approximately 200-250 TWh of electricity, contributing to 0.3% of global CO2 emissions every year. 
This is more than the national energy consumption of some countries and around 1% of the global electricity demand. By 2025, with the increase in IP traffic and big data, these data factories are expected to consume one-fifth of the world’s power supply, making the problem much worse. “The majority of the energy demand comes from powering the servers that process the data, but they, in turn, produce heat and need to be cooled,” Michael Strouboulis, business development director for digital infrastructure at Danfoss, told VentureBeat. “This cooling also requires a lot of energy and generates significant excess heat – most of which is currently being emitted into the surrounding environment.” This heat, currently being dissipated into the atmosphere, is the hidden golden opportunity to drive energy efficiency and decarbonize data centers. Heat recovery Currently, most organizations are pushing efforts to offset the increasing data and electricity load by ditching legacy data centers in favor of hyperscale ones, streamlining computing processes, using lower-GWP (global warming potential) refrigerants and switching to energy-efficient measures like varying the speed of motors driving fans, water pumps or refrigerant compressors. Google, for one, claims that its measures have reduced the average power usage effectiveness (PUE) – total data center power divided by the energy used just for computing – for all its data centers to 1.12, which is very close to the ideal score of 1.0. 
“If a data center has a PUE of 1.0 it means that the information technology equipment (ITE) uses 100% of the power and none is wasted in the form of heat … But if the PUE is 1.8 then for every 1.8 watts going into the building, 1 watt is powering the ITE and 0.8 watts are consumed elsewhere as non-ITE power, most of which is rejected in the form of heat out of the building,” Strouboulis explained. With heat that is otherwise considered ‘waste,’ organizations can meet electricity requirements elsewhere, possibly at a nearby location. In addition to the above-mentioned measures, companies can capture the heat being discharged from their data centers and then turn it into steam or electrical energy for use at other sources. If the heat is at too low a temperature, they can use a heat pump to elevate it to 60 degrees Celsius or higher to meet the requirements. “Reusing heat generated by processing data in data centers requires technology such as heat recovery units and energy transfer stations to capture and distribute this energy to consumers that need this heat for their industrial or commercial process or simply for comfort heating,” Strouboulis said. Possible applications The heat from a data center can be used for a myriad of applications, starting from something as simple as servicing swimming pools and laundries to vertical farming or meeting the heat requirements of a hospital. “The heat produced by data centers can serve as a new resource for an energy cluster, an integrated heating source, or as a source for a steam system which are all part of local district energy systems (which collects and generates heat for dispersion to a nearby campus or entire municipality),” said Baruch Labunski, president at RankSecure. “More than 940 district energy systems exist in the U.S. so any of them are suitable for garnering a new energy source like data center heat. 
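The arithmetic behind PUE and the recoverable overhead is simple enough to write down directly. The kilowatt figures here just restate Strouboulis’s 1.8 example; the function names are illustrative.

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power divided by
    the power reaching the IT equipment. 1.0 is the ideal score."""
    return total_facility_kw / it_equipment_kw

def overhead_heat_kw(total_facility_kw, it_equipment_kw):
    """Non-IT power (cooling, conversion losses), most of which
    leaves the building as heat -- the candidate for recovery."""
    return total_facility_kw - it_equipment_kw

# Strouboulis's example: a PUE of 1.8 means 0.8 W of overhead
# for every 1 W of computing.
assert pue(1.8, 1.0) == 1.8
assert abs(overhead_heat_kw(1.8, 1.0) - 0.8) < 1e-9
```

At Google’s reported 1.12, the same arithmetic says only 0.12 W per IT watt is available as overhead heat, which is why heat recovery matters most at less efficient facilities.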
This could help many local communities reduce energy costs and provide for additional energy needs as the city grows without leaving a major footprint on the environment.” For some perspective, a NeRZ white paper notes that more than 13 billion kWh of electricity was converted into heat in Germany’s data centers and released unused into the environment. This, if reused, could have met the energy needs of Berlin. How are enterprises using data center heat? While the possibilities are endless, leading enterprises are keeping their heat recovery efforts focused on specific areas, such as warming up households or their office buildings. Facebook and H&M both have been reusing the heat from their data centers to heat thousands of nearby households and apartments in Denmark. Amazon, meanwhile, has built a setup to save energy at its Seattle headquarters. “It has developed an internal energy and water system using its campus and a nearby building that houses a data center,” Labunski explained. “The headquarters gets waste heat produced by the data center. It moves underground and warms the water sent to the campus… providing hot water to the entire Amazon headquarters, which includes multiple high-rise buildings and a conference center. When the water cools, it is sent back to the data center to keep computer and data equipment cool before the cycle starts again.” Similarly, Danfoss plans to utilize excess heat from its data centers to provide 25% of the overall heat required by its headquarters. Investment and gains While heat recovery and utilization can defray energy use elsewhere, the project has certain hurdles. Firstly, heat doesn’t tend to travel that well, which means the consumer of the captured heat must be located close to the source data center. Secondly, the infrastructure for reusing heat comes with a high upfront investment. 
Labunski estimates that data center improvements such as heat reuse come at $520 to $900 per gross square foot and can almost double the cost of developing the facility. However, once the upfront investment is recouped, a data center reusing heat can also become a profit driver for the business. “By capturing the waste heat and reusing this for neighboring buildings, you can get significant gains. Consider the cost of heating a building from scratch. Data centers, with their 24/7 operation and a constant stream of heat, are ready-made as a de-facto highly consistent and reliable ‘generator.’ Once that paradigm shift is made, the concept can pay for itself and quickly become a profit center based on recent demonstration projects,” Strouboulis added. "
13,643
2,022
"What is the environmental impact of Web3? | VentureBeat"
"https://venturebeat.com/2022/07/07/what-is-the-environmental-impact-of-web3"
"What is the environmental impact of Web3? This article is part of a VB special issue. Read the full series here: Intelligent Sustainability. Assessing the environmental impact of such a broad ecosystem as a digital communication network is merely a guessing game, even for experts in the sector. How many data centers are there that can be identified? How many stealth centers are operating (particularly in the military and government sectors)? How many are operating above or below their capacities? How much power are they taking “from the wall”? The questions go on and on, but when trying to measure the power usage and carbon footprint of the internet, one also has to consider all the transactions that take place on it. Web3, like the conventional web, has layers, so the only way to analyze its sustainability is by segments. The application layer will be the most challenging in terms of how it will affect the environment – and ultimately, climate change. 
A useful evaluation of Web3 must start from this premise: If Web3 represents an evolution of Web2 to the betterment of mankind, it must also be more sustainable. This means less power for more online services that will come with the advent of Web3; as of mid-2022, this does not appear to be a feasible proposition. Web3 defined During the last 40 or so years the internet has been classified into three stages of development: Web1, Web2 and Web3. Web1 came into public usage in the early-to-mid-1990s, prior to smartphones. Websites were mostly static, providing only text and a few images. Web2 came into play in the 2000s with interactive websites; portable devices (smartphones, tablets, smartwatches) allowed users not only to utilize content but also to create it. The cloud, which came into being in 2006 with the introduction of S3 storage from AWS, changed everything, because it made available the complete integration of life in society on computers – from finance to personal life. The 2020s are a jumping-off point to a new internet, Web3, which represents the decentralization of everything. Decentralization means not having a central agent responsible for major decisions; the opposite would be a service such as Google, which singularly manages many kinds of transactions on Web2. On Web3, we can expect to see and use a lot more 3D video, augmented reality applications, faster-moving and more impactful video games, AI/ML-powered applications for business and entertainment, and a number of other things we don’t normally see on today’s Web2. Decentralized finance, or DeFi, will be another central resident of Web3. The first major cryptocurrency, Bitcoin, was also the first Web3 project to succeed in this sector. Bitcoin is decentralized through a distributed architecture (called a blockchain) in which each segment has many agents interacting with each other in search of consensus. 
Yes, it’s labor- and time-intensive, which cuts against the automation that powers so many applications today. Why cryptocurrency takes so much power Here’s an example of the way cryptocurrency works in democratized online transactions: Joe and Diane want to do a sales transaction on the bitcoin network. To do this, the transaction needs to be verified, validated and recorded. The people responsible for validating the transactions are the “miners” who compete among themselves to be selected for this service. Whenever a miner does their job, the miner receives a reward payment in bitcoins. When a miner has created a new block in the network containing valid transactions, other miners will check that everything is indeed correct. If there are any inconsistencies in the information, that block of transactions is rejected and another miner will be selected to redo the job. Sound more complicated than most people want to deal with? Probably, and that’s why this type of transaction may take a long time to become commonplace. Still, the idea of having a universal currency with no ties to governments or other institutions (such as banks and hedge funds) is attractive to a growing number of people globally. At this time, the computational overhead of completing these transactions is alarming, but relatively few people are using the system in mid-2022. However, with multiple millions of people potentially using Bitcoin, Ethereum or another form of cryptocurrency in the future, the power from the wall for handling these high-powered interactions will become a serious problem for Web3’s sustainability goals. The amount of energy it takes to mine a single bitcoin is estimated to be between 86,000 and 286,000 kWh. A kWh is the amount of energy a 1,000-watt appliance uses in one hour. To put that into perspective, even the low estimate works out to roughly eight years’ worth of power consumed by an average U.S. household. 
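Those per-coin estimates can be sanity-checked with a quick conversion. The 29 kWh/day household figure below is an assumption roughly in line with U.S. averages (about 10,600 kWh per year), not a number from the article.

```python
# Back-of-envelope check on the figures above. The household average
# is an assumption, not a figure from the article.
US_HOUSEHOLD_KWH_PER_DAY = 29.0  # roughly 10,600 kWh per year

def household_days(kwh):
    """Express an energy cost as days of average U.S. household use."""
    return kwh / US_HOUSEHOLD_KWH_PER_DAY

# The 86,000-286,000 kWh per-coin range works out to roughly 8 to 27
# years of average household consumption.
low_years = household_days(86_000) / 365    # about 8.1
high_years = household_days(286_000) / 365  # about 27.0
```

Running the same conversion against any updated per-coin estimate is a one-liner, which makes it easy to keep these comparisons honest as the network changes.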
On an average day, 240,000 to 300,000 bitcoin transactions are sent over the network. When these numbers reach into the seven- and eight-figure realms, red lights will be blinking in data centers all over the world, temperatures inside them will rise and environmentalists will be furious. Bitcoin’s network consumes around 128 GWh a day in order to produce 900 bitcoins. This is not a good starting point for trying to control the power and carbon footprint used in the internet – the current version or the one to come. Crypto is good, but it’s power-hungry Tesla CEO Elon Musk recently tweeted his concerns that “cryptocurrency is a good idea on many levels … but this cannot come at great cost to the environment.” Shortly after he wrote that, Bitcoin’s market value tumbled 15%. The growing global pressure on bitcoin miners to use more renewable energy has led to the creation of initiatives such as the Bitcoin Mining Council and pushed thoughtful investors to seek out “greener” cryptocurrencies. However, it is not known at this time if there are such cryptocurrencies in which to invest; they all take loads of electrical power to perform the computations needed for a secure and successful system for billions of daily transactions. Energy demands around Bitcoin have long been a concern, especially now that we have seen network activity quadruple since its last peak in 2017. The network is still maturing; at its present level, Bitcoin consumes 81.51 terawatt hours (TWh) annually. If it were a country, it would rank as number 39 for annual electricity consumption, ahead of Austria and Venezuela. As this trend continues upward, it is clearly an unsustainable progression in terms of being environmentally desirable. Up to now, we’ve only touched on the DeFi that will be used on Web3. 
We haven’t discussed all the 3D video, AI/ML, augmented reality apps and tools, and dozens of other power-hungry applications that will be commonly put to work using Web3 as the facilitator. Alternative power sources sought Does the potential Web3 network using conventional power sources appear to be sustainable for decades to come? Not a chance. Are there alternative ways to provide the additional power that will be needed for the richer, deeper services (3D video, AR, more AI/ML-powered services, etc.) that Web3 will provide for its users? There are indeed. In a concerted effort to shake off the fossil fuels humankind has been using for centuries, other power sources such as hydroelectric, wind, solar, biofuels and geothermal are being developed. The future may reveal other fuel sources yet to come. Processor makers are continuing to design and make cooler-operating chips for all our electronic devices. But we’re still a long way away from having a majority of these new chips in our devices. In 2018, Microsoft sank an experimental data center off the coast of Orkney, Scotland, in an experiment called Project Natick to determine whether placing these units underwater would result in them being more reliable and energy-efficient. In September 2020, the company retrieved it from the ocean floor and called the experiment a success. Could thousands of underwater data centers be the future of data storage? Possibly, but they alone won’t be the answer to all the new power requirements that Web3 will present to its users. When to expect a sustainable Web3 While we’ve touched on a few facts here regarding the sustainability of a network that only exists in part today, there is no way we can confidently point to what is going to happen in the future – even to the end of this decade. It’s much too early, and too much happens quickly in the IT world. 
Major innovations – especially involving reduction of power and carbon footprints – need to be created and operationalized before a sustainable Web3 can be scaled up and deployed on a regular basis for billions of users. "
13,644
2,022
"Why data has a sustainability problem | VentureBeat"
"https://venturebeat.com/2022/07/07/why-data-has-a-sustainability-problem"
"Why data has a sustainability problem This article is part of a VB special issue. Read the full series here: Intelligent Sustainability. Every year in the U.S., it’s estimated that about 5,130 million metric tons of energy-related carbon dioxide is added to the atmosphere. In tech enterprises alone, the explosion of data hasn’t helped matters, as innovation in the sector continues to grow rapidly. Some experts like Sanjay Podder, managing director and global lead of technology sustainability innovation at Accenture, say that if left unchecked, the exponential growth in data could result in increased energy demand and carbon emissions, counteracting progress on climate change. The last two years have only added to the problem. As a result of COVID-19, cloud adoption, AI deployment and, consequently, data all increased exponentially as the demand for accelerated digital transformation heated up. 
Accelerated adoption of these technologies may have helped companies adapt, kept business afloat, allowed employees to keep their jobs during a volatile time and paved the way for future innovation, but what did it do to the environment? Data collection and storage, cloud compute and AI all significantly contribute to carbon emissions, but how much and what can enterprises do to mitigate the impacts while propelling forward with innovation? And if data fuels these innovations, what is being done right, and what could companies do better when it comes to data sustainability? “Hopefully, people move from focusing on data at rest to data in motion,” said Phil Tee, CEO of Moogsoft, an AI-driven observability company. “There’s a sort of a culture that has built up around the idea of throwing away nothing and maintaining every bit of data that you ever received. The trouble is when that gets turned into taking that approach to data that you don’t need to keep. Then what happens is that data is instead of it just being thrown away, or minimally retained, it gets maximally retained. So, in other words, there’s a sort of a knee-jerk reaction that because we don’t throw anything else away, we mustn’t throw that away, even if it’s data that’s purely got real-time significance — like literally six milliseconds — and after receiving that data you’ve got no further use of it. I think that is ultimately if you like the lowest hanging fruit on this tree.” Defining the data sustainability problem Technological innovations aren’t going to slow down, and, in fact, they’re booming. A report by Activate Consulting affirms that data and automation in the enterprise are driving the explosion. And while some of these innovations will likely aim to create a better, more efficient reality, their environmental impacts may not be so pretty. 
A Stanford Magazine article cites that “saving and storing 100 gigabytes of data in the cloud per year would result in a carbon footprint of about 0.2 tons of CO2, based on the usual U.S. electric mix.” However, the cloud and its data centers can come with their own set of environmental issues. MIT reported that “the Cloud now has a greater carbon footprint than the airline industry. A single data center can consume the equivalent electricity of 50,000 homes. At 200 terawatt hours (TWh) annually, data centers collectively devour more energy than some nation-states.” The piece goes on to explain that although power from data centers accounts for 0.3% of overall carbon emissions, if the calculation is broadened to include devices that make these innovations happen like laptops, smartphones and tablets, the total adds up to 2% of carbon emissions worldwide. And AI, which uses vast amounts of data and often leans on the cloud, also has its share of issues — part of which is that the datasets used to train AI are increasingly large and take much energy to run. Researchers from McKinsey confirmed this, stating in an article that “researchers discovered that the environmental costs of training increased in direct proportion to model size.” Similarly, MIT found that “training a single AI model can emit as much carbon as five cars in their lifetimes.” Innovating while mitigating But it’s not all doom and gloom. 
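The Stanford storage figure quoted above scales linearly, which makes the back-of-envelope math easy. The 10 TB example below is hypothetical; only the 0.2-tons-per-100-GB-year rate comes from the cited article.

```python
# Per-gigabyte arithmetic behind the Stanford figure quoted above:
# 0.2 tons of CO2 for 100 GB stored in the cloud for a year,
# based on the usual U.S. electric mix.
TONS_CO2_PER_GB_YEAR = 0.2 / 100  # 0.002 tons per GB-year

def storage_footprint_tons(gigabytes, years=1.0):
    """Estimated CO2 attributable to keeping data in the cloud."""
    return gigabytes * years * TONS_CO2_PER_GB_YEAR

# A hypothetical 10 TB data lake retained for five years:
print(round(storage_footprint_tons(10_000, 5), 1))  # 100.0 tons
```

Numbers like these are what make Tee’s point concrete: data retained "just in case" keeps accruing a footprint every year it sits in storage.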
George Kamiya, an analyst with the International Energy Agency (IEA), asserts that while it’s important to pay attention to the sustainability issues, it’s worth keeping in mind that “tech companies have different types of effects on emissions: 1) direct emissions from operations (i.e., their footprint); 2) positive indirect effects utilizing their technologies to reduce emissions; 3) negative indirect effects where their technologies actually result in a net increase in emissions.”

He argues that while much of the attention so far has fixated on companies’ direct carbon footprints, these emissions are relatively small compared with the effects on emissions from the use of digital technologies, services and platforms.

“We certainly need companies to cut their emissions footprints, but companies and policymakers should not lose sight of the fact that the use of these technologies could have much larger impacts in terms of both reducing emissions and increasing emissions in other sectors and services,” Kamiya said. “For example, videoconferencing could help cut emissions from aviation by ‘substituting’ for some business trips, but some uses of machine learning could promote more consumption or increase the competitiveness of fossil fuels, resulting in higher emissions overall. Focusing only on the ‘footprint’ risks missing opportunities — (and risks) — of larger emissions impacts in other sectors and services.”

Stanford Ph.D. and Juris Doctor candidate Peter Henderson, a researcher on natural language processing, reinforcement learning, machine learning, artificial intelligence, computer vision and AI ethics, agrees that there are reasonable actions execs can take to keep innovation flowing while reducing environmental harm.

“Any generation task requires a lot of data, especially if you don’t have constraints on the topic or the subject matter that the model has to deal with. So, it is true that some areas just need a lot of data.
But when you’re building a model, you have a target task in mind, right? And in those cases, where you have a target task in mind, you don’t need all the data in the world. What you need is sort of shown in ML benchmarks,” said Henderson. “A lot of benchmarks are already close to superhuman accuracy on sentiments, like classification or analysis … and so, in those cases, it’s very clear you don’t need all the data in the world because we’re able to solve those with much less. I think people really need to think about the target tasks they are using, and think about how you can constrain the amount of data you’re using to still get your benefit while reducing the amount of costs. That being said, it’s not clear how that interacts with scale.”

Stanford has taken steps itself, with a tool specifically designed to measure AI and ML’s hidden carbon costs. Additionally, in a paper titled Energy and Policy Considerations for Deep Learning in NLP, researchers Emma Strubell, Ananya Ganesh and Andrew McCallum examined the energy consumed in training four deep learning NLP models – Transformer, ELMo, BERT and GPT-2 – that have been responsible for the most significant recent improvements in performance.

Another way to mitigate the impact of the data explosion is to reconsider how impact is measured. Tools like Microsoft Cloud for Sustainability, SustainLife and Salesforce’s Net Zero Cloud offer ways to measure a company’s carbon footprint and sustainability impacts, and even store the data companies need to visually see and understand potential missteps and opportunities to improve.

“We continuously look for ways to advance our carbon emissions reporting and improve our carbon accounting process to deliver faster, better, and more accurate data,” said Ari Alexander, general manager of Salesforce’s Net Zero Cloud. “The vast majority of an organization’s emissions come from its value chain — also known as scope 3 emissions.
That includes carbon emissions from partners like data and cloud service providers. With Net Zero Cloud, customers can track scope 1, 2 and 3 emissions, and streamline how they track their supply chain carbon footprint data to effectively engage with suppliers to align on sustainability efforts — all in one place.”

Of course, how and where data is ultimately stored, even in the cloud, also makes a difference.

“A lot of the time, the biggest machine learning jobs are run in the cloud, and many times that can be moved around to different parts of the world,” Henderson said. “A lot of the carbon emissions from the energy costs can be mitigated by just moving your jobs to a carbon-friendly region like Montreal, for example, which has a lot of data centers that run on almost all hydroelectricity. So, running all your machine learning jobs there would lower emissions.”

Henderson notes that if everyone moved machine learning jobs to Montreal, it could overwhelm the energy grid in that region. But taking small steps to think about ways to move workloads around can make a difference when it comes to climate impact.

Providers of data centers are also innovating toward green data storage. Equinix in particular — which provides data center services to enterprise customers such as Zoom, Netflix, Salesforce, AT&T and Verizon — focuses on sustainable build and design of its data centers, and has been at this work for more than a decade.

“It seems likely that energy efficiency in data centers will continue to improve, but the key question is whether it can keep pace with the increase in demand for data services – in other words, whether overall data center energy use will continue to stay relatively flat (as we’ve seen over the past 10 years) or if it will start to increase more quickly because energy efficiency can’t keep pace with demand growth,” Kamiya notes.
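Henderson's suggestion about shifting movable jobs to cleaner grids can be sketched in a few lines. The region names and intensity numbers below are illustrative placeholders, not real provider data, though hydro-heavy grids like Quebec's do sit near the bottom of published intensity tables.

```python
# Pick the cleanest grid for a movable batch job. Intensities are assumed
# illustrative values in kg CO2 per kWh, not measured provider figures.
GRID_INTENSITY = {
    "us-east": 0.40,
    "eu-west": 0.30,
    "montreal": 0.03,   # largely hydroelectric, per the point above
}

def greenest_region(intensities: dict[str, float]) -> str:
    """Return the region whose grid has the lowest carbon intensity."""
    return min(intensities, key=intensities.get)

def job_emissions_kg(energy_kwh: float, region: str) -> float:
    """Emissions for a job of a given energy use in a given region."""
    return energy_kwh * GRID_INTENSITY[region]

print(greenest_region(GRID_INTENSITY))     # montreal
# The same 500 kWh job, run in two different regions:
print(job_emissions_kg(500, "us-east"))
print(job_emissions_kg(500, "montreal"))
```

The caveat in the text applies directly: a scheduler like this, naively deployed by everyone, would concentrate load on the cleanest grid, so real placement logic also has to consider capacity.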
“Some of the easier (i.e., low-hanging fruit) efficiency opportunities have already been tapped (notably the shift from less efficient enterprise data centers to more efficient cloud and hyperscale data centers), so it’s possible that we could see a moderate increase in total data center energy use over the next few years. But how much and how quickly is uncertain, as is how quickly these can be powered with low-carbon electricity (to keep emissions flat or decreasing).”

To keep perspective, Kamiya also noted that the development of data centers is not uniform across the world. “Even though globally the energy use has been mostly flat, there have been huge increases in data center hubs such as Ireland, as noted by their central statistics office. Similarly, in the U.S., there will be states where data center energy use will go up a lot, others where it will remain flat and others where it could fall,” he said.

Industry predictions for data and energy efficiency

While the SEC is moving ahead with regulatory ESG reporting requirements for companies, this is a new area for many companies to navigate right now, particularly in the U.S., where this type of reporting has not been mandated before.

“I do think it’s important that the SEC and other regulatory agencies take action in terms of this, but I think companies are already acting without it,” Henderson noted. “At the same time, it’s important to have that baseline level of transparency across the board.
I think there are other regulatory actions that can be taken to help move this process along in terms of making sure that carbon footprints and environmental friendliness are mitigated.”

As for what’s next while regulatory actions are pending, Kamiya suggests companies can also focus efforts on reducing environmental impacts across their supply chains, as well as “utilizing their platforms and tools to inform consumers on how they can reduce emissions (e.g., providing information on the environmental impacts of different products or shipping options; suggesting low-carbon travel options in map apps; and addressing climate misinformation/disinformation on social media). These core services that these companies provide are where important, additional emissions impacts could be realized.”

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. © 2023 VentureBeat. All rights reserved.
2022
Why hyperscale, modular data centers improve efficiency | VentureBeat
https://venturebeat.com/2022/07/07/why-hyperscale-modular-data-centers-improve-efficiency
Why hyperscale, modular data centers improve efficiency

This article is part of a VB special issue. Read the full series here: Intelligent Sustainability.

As the world moves from Web 2.0 to Web3 – which is taking shape for us to deploy later this decade – the power plants that will be providing new and expanded services are undergoing major upgrades in order to handle all that users will require. They will be delivering more bandwidth than we’ve ever seen before, yet they will be using less power from the wall.

How is that possible? It’s because we’re going modular: We can replace the individual parts of a data center much more quickly and efficiently than in previous years. We also don’t see the high number of data bottlenecks that were common in the past. This is because we now have more efficient network pipelines, better/leaner software, more solid-state data storage, newer, faster, cooler-running processors and a score of other improvements.
All of these components can now be slipped in or out of data centers at a moment’s notice when they aren’t doing the job. It used to take weeks or months to make data center hardware upgrades or improvements. This means we’ll always have the best and fastest components working in our data centers at any given time. New super data centers and telecom interconnects are also replacing whole first-generation facilities at an increasing rate.

There are some model data centers that stand out as prescient examples of scalable power usage, lower-tier power draw, low carbon footprint and carefully planned sustainability using natural power sources. Data center builders can learn much from these facilities as examples of how to provide plenty of IT power and still embrace the environment.

Much more power, bandwidth will be needed for Web3

We will need a lot more power and bandwidth to run Web3 and metaverse-type applications that require much higher envelopes of power, including apps involving cryptocurrency, high-end gaming, big data analytics and machine learning, 3D video and images, as well as augmented reality.

AWS, Google, Alibaba, IBM, Microsoft, Dell EMC, Apple, Facebook, VMware, Oracle, AT&T, Verizon and other industry leaders are building new hyperscale, modular data centers around the world that will provide the bulk of the horsepower for the IT requirements of the future. They all are using new federal and state guidelines for power consumption, providing carbon footprint metrics, and incorporating natural power sources (mostly hydroelectric, wind and solar). They all have exemplary PUE (power usage effectiveness) ratings.

PUE is a metric – or score – used to determine the energy efficiency of a data center; it is determined by dividing the total amount of power entering a data center by the power used to run the IT equipment within it.
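The PUE formula just described is simple enough to compute directly. In the sketch below, only the ratio matches published figures; the kilowatt split is an illustrative assumption, since operators report the score, not the raw draw.

```python
# Power usage effectiveness: total facility power divided by the power
# that actually reaches the IT equipment. A PUE of 1.0 would mean zero
# overhead for cooling, power conversion, lighting and so on.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,078 kW to power 1,000 kW of IT gear scores 1.078,
# matching the Prineville figure cited in the text (kW values assumed):
print(round(pue(1078, 1000), 3))   # 1.078
# And roughly the "top of the line" cutoff mentioned in the text:
print(round(pue(1500, 1000), 2))   # 1.5
```

Reading the metric this way makes the comparisons that follow concrete: every watt of overhead above the IT load pushes the ratio, and the facility's emissions, upward.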
For example, Facebook’s data center facility in Prineville, Oregon, has been running an exemplary PUE of 1.078; Google’s numerous data centers average under 1.20 across its global system. Generally, a PUE under 1.50 is considered top of the line.

A conventional data center can take around two years to go from conceptualization to deployment into functional use. In contrast, implementing a modular data center is much faster, often taking 50% to 75% less time – and, as CFOs like to note, that equates to a lot of capital saved.

Facebook’s exemplary Prineville modular data center campus

Being able to install a data center in a shorter amount of time is a major competitive advantage. This is precisely what Meta is now doing. In Prineville, Oregon, a little town at the western edge of the state’s eastern desert 80 miles south of the Columbia River, there are 11 huge buildings on a single sandy-ground campus, comprising a whopping total of 4.6 million square feet of space. Each of these buildings is the size of a couple of large Walmarts, and they appear terribly out of place in an area known more for hunting and cattle ranching than anything else.

Those 11 data centers were all built in a span of 10 years. Each has a single job, such as handling the main Facebook app, the company’s corporate sites, WhatsApp, Instagram, apps for Quest AR and other services; several hold stored images. Some of the data centers contain as many as 15,000 servers, and most of those slide-out units are custom designed and built by Facebook itself. Several staff workers are deployed to do only one thing day after day: look for red lights on the stacks of servers, then pull those units out and replace them with new ones.

Modest Prineville was the location selected for Facebook’s first and largest hyperscale greenfield data center development, and it continues to run operations efficiently on a 24/7 basis, as required by Meta.
The Prineville Data Center is supported by 100% renewable energy, including two solar projects located in Oregon. The facility, one of the most energy-efficient in the world, features an innovative cooling system created for the unique climate characteristics of central Oregon. The buildings are designed to take advantage of the prevailing wind from the south, which blows into them, is cooled through large water-covered screens, is directed down into the central server room and is then blown out of the building through vents on the other side. Little or no air conditioning is required, even when the desert environment runs into the 100-degree-plus range.

These precise design features, plus the use of alternative power sources everywhere on the campus, are what set a modern modular data center apart from first-generation facilities built 10 to 30 years ago – which still comprise about 90% of all data centers in operation. So there’s a long way to go in modernizing the bulk of cloud and enterprise IT, the entirety of which is housed in data centers of some kind.

How can a modular data center enable sustainability?

Modular data centers offer flexibility by letting enterprise customers who are renting colocation space for their servers start with small installations and increase them in size based on need. They can use any type of hardware they need for their use cases: standard servers, storage and networking, or hyper-converged hardware that combines multiple functions inside one device. The latter has been a huge trend for more than a decade; generally, hyper-converged infrastructure (HCI) models have provided more power-efficient performance than separate-footprint server/storage/networking setups because all functions are included in a unit using a single power source.

Speed of deployment, supply chain disruptions and the limited availability of skilled IT workers are three commonly cited reasons enterprises are moving to modular data center solutions.
Colocation facility owners are also influenced by four specific industry trends: edge computing, expanding remote workforces, reducing CapEx and OpEx, and increasing sustainability and eco-friendliness. Gartner Research predicts that by 2025, 75% of enterprise data will be processed at the edge, with many of these new data centers handling the inflow of streaming data from cloud applications. For colocation facilities, this means that now is the time to establish a presence in up-and-coming edge markets by using modular data center components. By 2025, 85% of infrastructure strategies will integrate on-premises, colocation, cloud and edge delivery options into modular data centers, compared with 20% in 2020, according to Gartner.

More IT being processed, less power being used

Industry thought leaders believe that by the end of the decade, about 75% of the world’s data centers will be drawing more than half of their power supply from renewable natural sources such as wind, solar and hydroelectric. Because that number is only about 10% now, the IT industry has a long way to go. However, data center efficiency is improving steadily, thanks largely to modular data centers whose components can be changed out easily and quickly when they don’t perform well.

Currently, industry experts estimate that data storage and transmission in and from data centers use 1% of global electricity. This share has hardly changed since 2010, even though the number of internet users has doubled and global internet traffic has increased 15-fold since then, according to the International Energy Agency.

The goal of the data center industry is that the use of coal, natural gas and petroleum products to power these large providers of IT will be largely a thing of the past by the start of the next decade. And the industry is well on its way to that goal.
2022
10 ways analytics improves endpoint security and asset management | VentureBeat
https://venturebeat.com/2022/03/03/10-ways-analytics-improves-endpoint-security-and-asset-management
10 ways analytics improves endpoint security and asset management

This article is part of a VB special issue. Read the full series here: Intelligent Security.

Achieving greater visibility and control over endpoints is table stakes for any organization pursuing zero-trust security. Human and machine identities are the new security perimeter in any network, and protecting those identities with data-driven insights and intelligence is one of the highest priorities for CISOs today. Knowing the current configuration and condition of every endpoint asset helps to keep patches current and endpoints safe.

To underscore how essential endpoint security is to zero-trust strategies, the White House published the Federal Zero Trust Architecture (ZTA) strategy last month. The strategy states that federal agencies need to ensure that Endpoint Detection and Response (EDR) tools will meet Cybersecurity and Infrastructure Security Agency (CISA) technical requirements and are deployed government-wide.
The strategy provides practical, pragmatic advice for securing endpoints that is applicable to any organization, and it also identifies the need for greater analytics-based visibility across networks.

Analytics improve endpoint visibility and control

Analytics are proving effective in helping enterprises take on these challenges, becoming a growth catalyst for Endpoint Protection Platform (EPP) and Endpoint Detection and Response (EDR) platforms. Enterprises spent $13.3 billion on EPP in 2021, a figure predicted to reach $26.4 billion by 2025, attaining a compound annual growth rate of 18.7%. By the end of 2025, more than 60% of enterprises will have replaced older antivirus products with combined EPP and EDR solutions that supplement prevention with detection and response capabilities, according to Gartner. Overall enterprise spending on the information security and risk management market is projected to reach $233 billion by 2025, attaining an 11.2% compound annual growth rate between 2020 and 2025.

The following are ten ways analytics improves endpoint security, contributing to more effective zero-trust architectures and strategies in the process.

Predictive analytics and AI show the potential to become the primary detection method for identifying and stopping malware attacks. AI-based techniques have long contributed to improving endpoint security by identifying potential malware attack patterns, and more cybersecurity vendors are designing AI into EPP and EDR platforms as the primary detection method and technology for malware. AI-based algorithms can detect file-based malware and learn which files are harmful based on the file’s metadata and content. Broadcom’s Content & Malware Analysis illustrates how machine learning is being used to detect and block malware; its approach combines advanced AI and static code file analysis to detect and analyze threats and stop breach attempts before they can spread.
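As a caricature of the metadata-based detection idea above, here is a deliberately tiny scorer. Production engines learn their weights from millions of labeled samples; the feature names, weights and threshold below are invented for illustration.

```python
# Toy metadata-based file scorer. Real EPP/EDR products learn these
# weights from labeled samples; everything below is an invented sketch.

SUSPICIOUS_WEIGHTS = {
    "unsigned_binary": 0.4,
    "packed_executable": 0.3,
    "writes_to_system_dir": 0.2,
    "spoofed_extension": 0.1,
}

def malware_score(features: set[str]) -> float:
    """Sum the weights of the suspicious traits a file exhibits."""
    return sum(w for name, w in SUSPICIOUS_WEIGHTS.items() if name in features)

def is_flagged(features: set[str], threshold: float = 0.5) -> bool:
    """Flag the file for quarantine or deeper (e.g., static) analysis."""
    return malware_score(features) >= threshold

print(is_flagged({"unsigned_binary", "packed_executable"}))  # True
print(is_flagged({"spoofed_extension"}))                     # False
```

The point of the sketch is the shape of the decision, a learned weighting over observable traits rather than a signature match, which is what lets these systems catch files they have never seen before.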
Analytics and AI-based techniques that derive risk scores from previous behavioral patterns, time of login, location and many other quantifiable factors are proving effective at securing and controlling access to endpoints. Using AI- and machine learning-based techniques to fine-tune risk scores in milliseconds is proving effective in stopping breach attempts that use privileged access credentials. By combining supervised machine learning models that mine historical data to find patterns with unsupervised machine learning that finds new anomalies and interrelationships, cybersecurity vendors integrating AI into their platforms are helping to stop breaches.

There’s a broad spectrum of cybersecurity vendors either working on or delivering solutions with these technologies, with Microsoft Defender for Endpoint being noteworthy. Microsoft has integrated AI into the Defender platform so its customers can initiate threat hunting across networks, get real-time threat monitoring and analysis, detect and respond to advanced attacks with AI-based monitoring, and reduce attack surfaces. Additional vendors providing AI-based endpoint protection include CrowdStrike, Trend Micro, SentinelOne, McAfee, Sophos, VMware Carbon Black, Broadcom, Cybereason, Ivanti, Kaspersky and others.

Integrating predictive analytics, AI and SIEM (security information and event management) into a single platform enables enterprises to predict, detect and respond to anomalous behaviors and events. Predictive analytics are a core part of SIEM platforms today, as they provide automated, real-time correlation and ongoing analysis of all activity observed within a given IT complex. Capturing and analyzing endpoint data in real time using predictive analytics and AI is providing new insights into asset management and endpoint security. LogRhythm continues to be a leading provider of SIEM platforms for enterprises.
The LogRhythm NextGen SIEM Platform relies on predictive analytics and AI-based algorithms to provide automated, real-time analysis and correlation of all activities across an IT environment.

Predictive analytics are also helping to keep every endpoint in compliance with regulatory and internal standards. In highly regulated industries including financial services, healthcare and insurance, predictive analytics are increasingly being relied on to discover, classify and protect sensitive data. This is especially the case with HIPAA (Health Insurance Portability and Accountability Act) compliance in healthcare. Amazon Macie is representative of the latest generation of cloud security services. It is often used in workflows aimed at recognizing sensitive data such as personally identifiable information (PII) or intellectual property, and it provides enterprises with contextual insights that give visibility into how data is being accessed or moved. Amazon Macie monitors data access for any anomalies and creates alerts when it detects the risk of unauthorized access or inadvertent data leaks.

Predictive analytics and AI combined are enabling threat analytics to drive greater precision regarding the risk contexts of privileged users’ behavior, creating notifications of risky activity. Combining predictive analytics and AI is the foundation of the most effective threat analytics engines on the market today. High-risk events are immediately flagged, alerted on and elevated to IT’s attention. Machine learning-based threat analytics also provide new insights into privileged user access activity based on real-time data related to unusual recent privilege changes, commands run, targets accessed and privilege elevations. Leaders in the area include Broadcom, CrowdStrike, Cybereason, Ivanti, Kaspersky, SentinelOne, Microsoft, McAfee, Sophos, VMware Carbon Black and others.
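The risk-scoring pattern described above can be sketched in miniature. Real engines weigh hundreds of contextual factors in milliseconds; the two signals, their weights and the cutoff here are invented for illustration.

```python
# Minimal context-based access risk score: out-of-pattern signals raise
# the score, and the score gates step-up authentication. The signals,
# weights and cutoff are invented for illustration.

def access_risk(login_hour: int, country: str, known_countries: set[str],
                usual_hours: range = range(8, 19)) -> float:
    score = 0.0
    if login_hour not in usual_hours:
        score += 0.4            # off-hours login
    if country not in known_countries:
        score += 0.6            # never-before-seen location
    return score

def require_step_up(score: float, cutoff: float = 0.5) -> bool:
    """Demand MFA (or block) when the context looks risky enough."""
    return score >= cutoff

# 3 a.m. login from an unfamiliar country vs. a routine daytime login:
print(require_step_up(access_risk(3, "XX", {"US", "CA"})))    # True
print(require_step_up(access_risk(10, "US", {"US", "CA"})))   # False
```

Production systems replace the hand-set weights with supervised models trained on historical access data, and the binary signals with continuous anomaly scores, but the gating decision has the same shape.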
Performing real-time endpoint scans and applying predictive analytics helps identify potential threats as they emerge. CISOs are looking for more effective approaches to achieving hunt-and-respond across diverse device networks with a large number of endpoints. Predictive analytics combined with supervised and unsupervised machine learning algorithms are becoming more ingrained in EPP and EDR platforms, helping to identify and resolve potential threats and breach attempts. Predictive analytics are also being used to discover patterns in known or stable processes, where anomalous behavior generates an alert and then pauses a given process in real time.

Predictive analytics are table stakes in Unified Endpoint Management (UEM) platforms today. The goal CISOs want to accomplish when they acquire and install a UEM often centers on consolidating the many diverse, often conflicting security apps and tools across their organizations. Today, UEM platforms rely on predictive analytics, and in some cases AI-based systems, to deliver greater identity, security and remote-access reliability and accuracy. The goal of streamlining UEM apps is to better pursue a zero-trust security strategy for the long term. UEM vendors are concentrating on making the connection between predictive analytics, AI and zero trust, showing how they can support an everywhere workplace. Leading UEM platforms rely on analytics, AI and machine learning to deliver intelligence-driven experience automation that reduces IT overhead and improves employee experience. Leading UEM vendors include Microsoft, VMware, Ivanti, IBM, ManageEngine, BlackBerry, Matrix42 and Citrix.

Privileged access controls down to the API level on endpoints need more analytics-driven adaptive intelligence. Endpoints could benefit from having privileged access controls be more adaptively intelligent.
That’s the goal many EPP and EDR vendors are pursuing by replacing their static approaches to securing machines with session-based API calls from a vault. Knowing the access patterns of machine-based endpoints and identities relative to human ones reduces false positives and better secures endpoints from API-based attacks. Using predictive analytics, AI and machine learning to define privileged access control levels and identify potential breach attempts down to the API level is the fastest-growing area of R&D in endpoint security today.

Predictive analytics combined with AI and machine learning are proving effective in battling ransomware, starting with patch management. CISOs see the potential of using predictive analytics to gain preemptive insights into how they can best identify the start of a potential ransomware attack across any threat surface. As attacks become multifaceted and more complex, the greatest weakness enterprises have today is a lack of solid data on patch management progress. Cybersecurity vendors need to concentrate on the long-standing CVEs that cybercriminals keep coming back to and exploiting, using analytics to better understand how CVE gaps can be closed. As ransomware becomes more weaponized, it’s becoming more urgent for EPP and EDR vendors to improve the depth of analytics insight and the predictive accuracy of CVE-based attack scenarios.

Analytics are proving invaluable for asset management, including track-and-trace of endpoints on or off the network. Every endpoint is another threat surface that needs to be protected. Real-time analytics and a reliable, resilient connection to every endpoint make track-and-trace possible, giving CISOs the visibility and control they need. By combining real-time track-and-trace information with device data, CISOs can find gaps in endpoint security that need to be closed. Having analytics on an asset’s health, current patch levels down to the OS level and hardware configurations is also invaluable.
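A minimal sketch of the asset-health analytics just described: an inventory that flags endpoints that are under-patched or have stopped checking in. The field names, build numbers and policy thresholds below are invented for illustration.

```python
# Tiny endpoint inventory that flags out-of-policy assets. Field names,
# build numbers and thresholds are invented illustrations.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Endpoint:
    host: str
    patch_build: int          # OS patch level as a build number
    last_seen: datetime       # last check-in with the management server

def out_of_policy(fleet, min_build: int, max_silence: timedelta,
                  now: datetime) -> list[str]:
    """Hosts that are under-patched or have gone quiet for too long."""
    return [e.host for e in fleet
            if e.patch_build < min_build or now - e.last_seen > max_silence]

now = datetime(2022, 3, 1)
fleet = [
    Endpoint("laptop-01", 1904, now - timedelta(hours=2)),
    Endpoint("laptop-02", 1802, now - timedelta(hours=1)),  # behind on patches
    Endpoint("kiosk-07", 1904, now - timedelta(days=14)),   # off the network
]
print(out_of_policy(fleet, 1900, timedelta(days=7), now))
# ['laptop-02', 'kiosk-07']
```

The "reliable, resilient connection" point in the text is what makes `last_seen` trustworthy: an endpoint that can silently drop off the inventory can also silently fall out of patch compliance.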
One of the more interesting vendors is Absolute Software, which provides real-time analytics on the current condition of every endpoint on a network. Absolute’s approach of collaborating with 28 different hardware partners to have its endpoint client integrated at the BIOS level in a wide variety of endpoint devices provides asset management data in real time. Endpoint asset management is an area in which private equity and venture capital firms show high interest, given enterprises’ increased reliance on endpoints, driven by the rapid growth of virtual workforces and cloud-first business initiatives.

Analytics in 2022 and beyond

Analytics are defining the future of endpoint protection platforms and are the differentiator, from a technology standpoint, that all vendors are looking to strengthen today. It’s feasible that in 2022 there’s going to be heavy merger, acquisition and private equity activity on the part of leaders in EPP and EDR to address the areas of their product strategies most in need of data-driven insights to remain competitive for the long term. As the cybersecurity arms race continues to escalate, improving contextual intelligence with analytics, AI and machine learning is key.
13,647
2,022
"AI brings greater resilience to self-healing endpoints | VentureBeat"
"https://venturebeat.com/2022/03/03/ai-brings-greater-resilience-to-self-healing-endpoints"
"AI brings greater resilience to self-healing endpoints This article is part of a VB special issue. Read the full series here: Intelligent Security CISOs’ time and teams are stretched too thin, keeping remote and hybrid workforces as well as the fast-growing number of machine-based endpoints secure from new, unpredictable attack patterns. Cybersecurity professionals, including CISOs, are doubtful their existing endpoint security systems can thwart an advanced attack. Fifty-five percent of cybersecurity professionals estimate that more than 75% of endpoint attacks can’t be stopped with their current systems, based on a survey by Tanium. Security teams admit they’re behind on patches and often don’t know if a patch will create a collision at the endpoint, leaving it less secure than before. Only 29% of security teams are very confident that the patches they’re installing will stop a breach. The industries hardest hit by cyberattacks and ransomware last year are also among the slowest to complete endpoint patching. 
Absolute’s 2021 Endpoint Risk Report found that retailers are on average 101 days out of date on patching endpoints, followed by healthcare at 78 days and financial services at 69 days. Self-healing endpoints are a growth catalyst for the endpoint protection platform (EPP) market, which is predicted to grow from $16 billion in 2022 to $26.4 billion in 2025, attaining an 18.1% Compound Annual Growth Rate (CAGR) in just three years. This makes it one of the fastest-growing markets in the cybersecurity industry. Enterprises that procrastinate about patch management give cybercriminals the time to weaponize new endpoint attack strategies. Most IT and security professionals say patching takes a backseat to other tasks. Ivanti’s recent survey found that 71% of IT and security leaders say patching is overly complex, cumbersome, and time-consuming. Fifty-seven percent say remote work and decentralized workspaces make a challenging task even more difficult. 6 ways AI brings greater resilience to endpoints Self-healing endpoints are distinguished by their self-diagnostics, combined with their ability to regenerate their operating system and apps, while using AI and ML to identify suspected or actual breach attempts and thwart them. They’re regenerative by design to achieve greater resilience. Self-healing endpoints shut themselves off, re-check all OS and application versioning, and then reset themselves to their specific configuration. All these activities happen autonomously while providing real-time tracking of events. CISOs tell VentureBeat that building a business case for self-healing endpoints often involves factoring in ITSM cost and time savings, reduced security operations workloads, fewer asset losses, and improved audit and compliance. 
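The diagnose-then-regenerate loop described above — re-check OS and application versioning, then reset anything that drifted back to its known-good configuration — can be sketched simply. The baseline manifest and component names below are hypothetical stand-ins, and the "reinstall" is a dictionary update rather than a real agent action.

```python
# Hedged sketch of a self-healing check (hypothetical data, not a real
# agent): diagnose drift against a known-good baseline, then "regenerate"
# only the components that no longer match it.
EXPECTED = {"os": "10.0.19045", "edr_agent": "6.45.1", "vpn": "2.3.0"}

def diagnose(installed):
    """Return the component names whose version drifted from the baseline."""
    return {name for name, ver in EXPECTED.items()
            if installed.get(name) != ver}

def self_heal(installed):
    """Reset drifted components to the baseline and report what was fixed."""
    drifted = diagnose(installed)
    for name in drifted:
        installed[name] = EXPECTED[name]  # stand-in for a real reinstall
    return sorted(drifted)

endpoint = {"os": "10.0.19045", "edr_agent": "6.40.0", "vpn": "2.3.0"}
print(self_heal(endpoint))  # ['edr_agent'] -- only the drifted component
```

The useful property is that healing is targeted: an out-of-date EDR agent gets reset without touching components that already match the baseline, which is why the process can run autonomously without an ITSM ticket.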
VentureBeat sees the urgent need for endpoint security vendors to deliver greater visibility and control, more efficient workflows for rolling back malicious changes and more flexibility in re-configuring endpoints automatically back to correct configurations. A core part of CISOs’ zero trust security strategies centers on endpoint security, which is pivotal to current and planned digital business initiatives. AI and ML techniques are proving to be effective core technologies for self-healing endpoints due to the following factors: AI-based endpoints can flex faster to stop complex attacks and self-heal afterward. CISOs tell VentureBeat that AI and ML-based endpoints can be trained to identify when attackers attempt to poison their algorithms with deliberately misleading attack data. They’re also able to identify when misleading data attempts to redefine classifications across models – all meant to throw the endpoint off a potential breach. Endpoint algorithms know the sequences of a rebuild to the operating system level, enabling autonomous self-healing – and averting a time drain on ITSM service desks. They’re also able to scale patch management across the entire fleet of devices more efficiently than any manual or previously automated approach could. Three key questions CISOs need to ask potential endpoint vendors. Over 70 cybersecurity vendors are promoting their AI and ML-based self-healing endpoint systems and platforms today. Unfortunately, finding the endpoint vendors who can deliver is difficult. In all fairness, there’s a wide spectrum of AI and ML use cases for self-healing endpoints today. The challenge is to find the approach that works best for your organization. The three questions to ask are: Specifics on data sets used for model training. Ask the vendor to provide an overview of the volume and variety of data sets they’re training their models with. Ask how these data sets are helping to reduce false positives and identify actual breach attempts. 
What is their track record in training models? Is the data from a single industry or cross-industry, and is it global or only from your country? The more diverse the industry coverage in the data set, the greater the chance breach attempts will be caught. How can I retrain classifiers and algorithms at scale? Cloud platforms’ scalability is an advantage on this requirement – and it’s worth checking whether the vendors you’re considering for endpoint security have that capability. They’re harder to evade than rules-based endpoints. IT and cybersecurity teams find that the latest generation of AI-based endpoints is easy to deploy. However, they’re a challenge to fine-tune as synthetic data is a work in progress. Despite their limitations, AI-based endpoints are more resilient than their rules-based counterparts because they’re designed to identify and act on anomalies faster. It helps set a high bar for vendor innovation. Table stakes are for self-healing endpoints that can regenerate themselves after an attack either purely through software or from being embedded in the BIOS. Arguably, being embedded into the firmware of an endpoint is the most reliable approach there is to achieving greater resilience. Absolute’s Resilience platform is factory-embedded in firmware by 28 device manufacturers today, making it the world’s only firmware-embedded endpoint visibility and control platform. Keeping up with the many changes to firmware across its manufacturing partners while providing predictive analytics of endpoint health is innovative. Today, AI- and ML-based future releases are on the roadmaps of more than 70 different software-based self-healing endpoint providers. 2022 will be a pivotal year for innovation in the self-healing endpoint security market. Cloud platforms are proving to be a faster, more secure onramp for self-healing endpoints. 
Microsoft, McAfee, Broadcom, and CrowdStrike dominate the endpoint security market, and each of them has been delivering self-healing endpoint security systems on the cloud for years. When it comes to Endpoint Detection and Response (EDR), CrowdStrike is the market leader. Microsoft leads the broader endpoint protection platform market. Microsoft rebranded ATP to Microsoft Defender for Identity earlier this month, and together with CrowdStrike Falcon , Ivanti Neurons, Symantec Endpoint Protection, Sophos Intercept X, Trend Micro Apex One, ESET Endpoint Security, Kaspersky Endpoint Security, McAfee Endpoint Security, and several others, these vendors are all emphasizing cloud-first deployment strategies today. Each of them relies on AI and ML to differentiate themselves from each other by finding new approaches to reduce attackers’ attempts at misdirecting models with adversarial inputs, using generative adversarial networks and developing new approaches to stop attackers from poisoning data. Reduce ITSM costs and improve compliance at the same time. Self-healing endpoints that include AI and ML eliminate IT help desk backlogs by keeping endpoints up-to-date. Reducing the call volume on IT help desks can save over $45K a year, assuming a typical call takes 10 minutes, for a cumulative time savings of 1,260 hours by the IT help desk annually. The more AI-enabled an endpoint is, the more automated audit and compliance reporting become. The Health Insurance Portability and Accountability Act (HIPAA), General Data Protection Regulation (GDPR), and the Payment Card Industry Data Security Standard (PCI DSS) all require periodic IT audits. The time and cost savings of automating audits vary significantly by organization. It’s a reasonable assumption to budget at least a $67K savings per year in audit preparation costs alone. 
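The help-desk savings figure quoted above can be worked through explicitly. The article gives 1,260 hours saved per year at 10 minutes per call; the hourly cost below is an assumption chosen to reproduce the roughly $45K estimate, not a figure from the article.

```python
# Working through the quoted help-desk savings. HOURS_SAVED and
# MINUTES_PER_CALL come from the article; COST_PER_HOUR is an assumed
# loaded help-desk cost picked to land near the ~$45K/year estimate.
HOURS_SAVED = 1_260
MINUTES_PER_CALL = 10
COST_PER_HOUR = 36  # assumption, $/hour

calls_avoided = HOURS_SAVED * 60 // MINUTES_PER_CALL
annual_savings = HOURS_SAVED * COST_PER_HOUR

print(calls_avoided)   # 7560 calls the help desk never has to take
print(annual_savings)  # 45360 -> roughly the $45K the article cites
```

Framed this way, the business case is easy to re-run with an organization's own loaded labor rate and observed call duration.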
The future of self-healing endpoints With IT and security teams stretched thin already, CISOs and CIOs need to add thousands of new endpoints to secure their growing remote and hybrid workforces. According to Forrester, their workloads are compounded with new machine identities growing twice as fast as the human ones. CISOs tell VentureBeat that the most valuable aspect of AI and ML in endpoint security is how reliable and resilient self-healing endpoints are becoming. CISOs want greater visibility and control, more efficient workflows for rolling back malicious changes and more flexibility in re-configuring endpoints automatically back to correct configurations. Add to that the need for more detailed, real-time asset management data and the future of self-healing endpoints is moving in an AI-driven direction. "
13,648
2,022
"AI-powered intelligent security makes the hybrid enterprise possible | VentureBeat"
"https://venturebeat.com/2022/03/03/ai-powered-intelligent-security-makes-the-hybrid-enterprise-possible"
"AI-powered intelligent security makes the hybrid enterprise possible This article is part of a VB special issue. Read the full series here: Intelligent Security At A.S. Watson Group, a global health and beauty retailer, shifting employees into the home meant that many of the company’s critical cybersecurity tools were no longer effective. Vulnerability scanning and automatic updates for endpoint protection, for instance, were only configured to work on an internal corporate network. And so, in the midst of the pandemic, the company shifted gears and added a new vendor to fill the gap in security coverage created by having remote workers outside the corporate perimeter. The vendor that A.S. Watson selected was Vectra, which specializes in offering artificial intelligence (AI) for threat detection and response across the customer’s environments — regardless of where their users might be located geographically. For a global company with a complex hybrid environment like A.S. 
Watson, an AI-driven security approach was the only way to get the job done. And Vectra’s tool deployed rapidly, since it didn’t require an agent to be deployed onto endpoints such as laptops and desktop PCs. “You simply hook a sensor into your network and almost instantly you see every active host in your network,” said Arjan Hurkmans, a cybersecurity manager at A.S. Watson, in an email. “The AI does an excellent job in figuring out what behavior is legit or not. We currently monitor 50,000 unique IP addresses and only a handful of detections need manual investigation from a SOC [security operations center] analyst.” In that way, “our 24/7 SOC doesn’t waste time on detections that do not matter and can act swiftly to anything that needs to be investigated,” Hurkmans said. “We want to see a cyberthreat as fast as possible and keep our business going.” Why AI is an essential technology For countless companies around the world, being able to keep business going with a remote or hybrid workforce during the pandemic has been essential. And while the cybersecurity challenges of having workers in the home have been massive, the use of advanced AI, machine learning (ML) and deep learning technologies in many security tools has been among the key factors in making this all possible. Put another way, intelligent security is having its moment. AI for security has been an “enabling foundation” that has allowed remote work to function at scale during the pandemic, said Mark Driver, a research vice president at Gartner. Because of all the differences of working in a home versus in an office, the ability for companies to determine what constitutes normal behavior in a remote work setting, from a security perspective, is monumentally more difficult. 
“You end up with significant levels of false positives, which can slow your security systems to a halt,” Driver said. “You have to have a way to cut through that — reduce those false positives, find the signal in that noise — without overly restricting the remote worker.” And while AI for security is not a silver bullet, what it’s very good at is analyzing and watching the behavior in an environment and quickly adapting to the changes. In this case, the AI “understands if there are changes happening, because more employees are accessing [corporate resources] remotely,” Driver said. “It learns to adapt to those changes and can reduce those false positives — understanding what is normal but an outlier, and what is potentially a dangerous attack.” Thus, while intelligent security hasn’t gotten as much attention during the pandemic as has the role of collaboration tools and cloud software, it’s ultimately played a similarly essential part in making the past two years possible for businesses. And as many workforces settle into a permanently hybrid approach, the use of AI/ML in security tools will only become more crucial. The new cybersecurity perimeter Even before the pandemic began, 69% of companies felt they could not effectively defend against cyberattacks without AI capabilities, according to Capgemini research from 2019. It’s a fair assumption that the number is much higher now — if not verging on unanimity — amid the experience of trying to securely enable remote workers during the pandemic. According to numerous findings, threat actors have seized on the shift of workers into the home to escalate attacks including phishing and social engineering — leading to malware deployment, such as ransomware, and data theft. “These bad actors out there have recognized that it’s no longer about attacking the perimeter in the corporation. What they’re doing now is they’re attacking the human — the person,” said Patrick Harr, CEO of AI-powered security vendor SlashNext. 
In other words, with the shift to remote work, “the users are now the new perimeter of security,” Harr said. Email phishing attacks have surged as high as 220% above normal at points during the pandemic, according to F5 — while the total number of ransomware attacks more than doubled in 2021, SonicWall reports. Data leaks related to ransomware jumped 82% last year, CrowdStrike data shows, and 79% of IT teams report an increase in endpoint-related breaches, according to HP Wolf Security. Attackers quickly pinpointed the shift to remote work as an ideal scenario for their ends: Reduced security protections, increased email communications and general stress and confusion. Attackers targeting the remote workforce “know they’re distracted. They know they’re busy,” Harr said. Threat actors also embraced new ways to target workers: Maybe the workers themselves won’t click on a phishing email — but maybe their kid who uses the same computer will. “The threat surface for companies has expanded, because it’s moved into the house — and you have no control over what’s going on in that household,” said Chuck Everette, director of cybersecurity advocacy at Deep Instinct, which offers deep learning technologies for protecting endpoints. “Cyber criminals go after the weakest link in an organization’s defenses — frequently, untrained individuals.” Autonomous security To combat these tactics, customers have turned to intelligent security companies such as Deep Instinct in order to head off malicious cyberthreats before they can even reach their remote workers. The company’s deep learning algorithm is “fully autonomous,” trained on huge sets of raw data samples, and ultimately capable of predicting known and unknown attacks before they take place, Everette said. The technology can do this because it “thinks like a human brain,” he said. 
At financial services firm Equity Trustees, Deep Instinct and its deep learning approach has proven invaluable amid the shift of workers into the home, according to the company’s chief technology officer, Phing Lee. The Deep Instinct technology has brought the ability to actively detect and stop new threats from even entering the environment — across every device used by employees — including sophisticated advanced persistent threats and previously unknown, zero-day attacks, Lee said. Meanwhile, false positives — which previously ate up 40% of the company’s SOC resources — have been “dramatically minimized,” he said. For endpoint protection alerts, false positives have been reduced by 95% for the company using Deep Instinct’s solution. Because the deep learning technology prevents threats from executing, “our security team can dedicate more time to understand where the threats originate from, analyze those threats and take steps to improve our overall security posture,” Lee said in an email. Deep learning also comes into play as one of the AI/ML technologies behind Ivanti’s Neurons solution suite. Ivanti Neurons can be used for securing endpoints with capabilities including anomaly detection and self-healing for issues such as vulnerabilities and configuration drift, said Ivanti president Nayaki Nayyar. The Ivanti Neurons technology can also automatically discover all of a customer’s assets, and deliver intelligence about risks from unpatched devices — both of which have proven extremely useful for businesses with distributed workforces, Nayyar said. At SouthStar Bank, deploying Ivanti Neurons for these use cases has enabled the bank’s IT staff to more easily handle many tasks around securing its remote workers, according to SouthStar Bank IT specialist Jesse Miller. “Without these technologies, it would’ve been extremely difficult to make this all happen,” he said. 
Behavior-based security Another frontier for intelligent security involves using AI/ML technology to assess user behavior, providing new avenues for improving security controls. Darktrace, a provider of self-learning AI for security, has been a pioneer in terms of behavior-based security approaches using artificial intelligence. In January, spurred by the need to better protect distributed workforces, the company for the first time unveiled capabilities for autonomous response on endpoint devices. Using AI/ML, the tool assesses the behavior of users on endpoints and learns what’s normal for them, and what represents a deviation. It then prevents the anomalous activity from taking place, while allowing any normal behavior to continue. All of this is done autonomously on the endpoint, and is tailored to the exact context of the user and device, says Max Heinemeyer, director of threat hunting at Darktrace. For instance, the tool could curtail one specific type of activity that has been deemed abnormal, and do so for just a limited amount of time to give the security team a chance to catch up, Heinemeyer said. This avoids the major pitfalls of many security technologies — which either block too much, and interrupt productivity, or don’t block enough, he said. Instead, the technology is “actually responding in real time, based on the context and situation,” he said. “It’s behavioral containment.” At Groupement Hospitalier Territorial de Dordogne in France, several weeks after deploying Darktrace technology in mid-2021, the hospital system was struck with a ransomware attack, said CISO Vincent Genot. But Darktrace’s autonomous response capability intervened to block the attack before it could cause any interruption to operations. The hospital system was able to “continue working, continue being connected to the internet and continue to care for patients even while under attack,” Genot said in an email. 
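The "behavioral containment" idea Heinemeyer describes — blocking only the one anomalous action type, and only for a limited window so the security team can catch up — can be sketched as a small time-boxed policy. The class and action names below are hypothetical illustrations, not Darktrace's API.

```python
# Illustrative sketch of time-boxed behavioral containment (hypothetical
# policy object, not a vendor API): block one anomalous action type for a
# limited window while all other activity continues normally.
import time

class Containment:
    def __init__(self):
        self.blocks = {}  # action type -> timestamp when the block expires

    def contain(self, action, seconds, now=None):
        """Block a single action type for a limited window."""
        now = time.time() if now is None else now
        self.blocks[action] = now + seconds

    def allowed(self, action, now=None):
        """Anything not currently contained is allowed."""
        now = time.time() if now is None else now
        return now >= self.blocks.get(action, 0.0)

policy = Containment()
# Unusual outbound SMB (a possible lateral move) gets a 5-minute block:
policy.contain("smb_outbound", seconds=300, now=1000.0)
print(policy.allowed("smb_outbound", now=1100.0))  # False: still contained
print(policy.allowed("https_browse", now=1100.0))  # True: normal work continues
print(policy.allowed("smb_outbound", now=1400.0))  # True: window expired
```

The narrow scope is the point: it avoids the two failure modes the article names, blocking too much (and interrupting productivity) or not blocking enough.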
On a more day-to-day level, Darktrace’s AI-driven technology is “clever enough” to know the difference between unusual behavior that is harmless – such as an employee working from a café – and a malicious attack, he said. And with a greater number of employees working remotely, the fact that the vendor’s AI can now defend endpoint devices is a “huge game-changer,” Genot said. As one recent example, Darktrace’s AI spotted that an employee had connected their laptop to a potentially insecure Wi-Fi network. “The AI flagged this immediately, allowing us to act before attackers could compromise our organization,” Genot said. Importantly, because Darktrace offers security for cloud, network, software-as-a-service and email, in addition to endpoint, “the contextual awareness the algorithms gain from other parts of our digital estate is beneficial in stopping endpoint attacks,” Genot said. “Darktrace is our AI-powered eyes looking across the entire digital business.” How AI/ML prevents phishing exploits When it comes to email security, AI/ML has been used for years for automatically quarantining malicious emails. But some vendors, including Darktrace, aim to offer enhanced email security by using AI/ML for analyzing user behavior in email applications — another way that behavior is being factored in for improving cybersecurity. Doing so can unearth and address additional security risks, ranging from unintended errors to insider threats, said Kevin Lynch, CEO at security consultancy Optiv. By correlating and learning the behavior of workers, “you start to see the behavioral tendencies of certain participants in your environment versus others,” Lynch said. Such systems can then add more rules for workers that need them — and fewer rules for those that don’t. For companies that want to address the critical human element of email security, a static policy set will not do the job, according to Lynch. 
But for assessing the evolution of behaviors and actions, he said, “machine learning is perfect for that.” This capability proved itself in a major way when the workforce shifted into the home environment, and email behaviors suddenly changed, said Josh Yavor, chief information security officer at email security vendor Tessian. Rather than depending on a set of rules to govern email security, Tessian’s product uses machine learning to evaluate behavior and adapt to changes — without customers needing to do very much, Yavor said. The technology can “establish new meanings of what normal looks like” automatically, he said. “This type of technological approach has the value of being flexible in situations exactly like this.” Like many companies, Waverton Investment Management saw a rise in phishing campaigns and an increase in the number of impersonation attempts following the shift to remote work, said Mudassar Ulhaq, chief information officer at the investment management firm. “The challenge with remote working is that your people are in isolated environments,” Ulhaq said in an email. “They can’t check with their colleagues if the email is legitimate, and you’re ultimately asking them to make the call, alone, on whether an email is safe or not.” Tessian’s use of ML to understand what normal user behavior looks like has thus been key to detecting and preventing threats during the pandemic, he said. “Manually fine-tuning your security tools to every single individual’s risk profile is incredibly time-consuming,” Ulhaq said. But with Tessian’s ML-driven solution, Waverton has been able to implement a security strategy that is “tailored to protect every employee, without burdening the security team.” Evolving tactics AI/ML technology can enable email security tools to automatically adapt to the ways that attackers are evolving, as well, said Aaron Higbee, chief technology officer at email security vendor Cofense. 
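The learn-what-normal-looks-like approach to email behavior described above can be illustrated with one simple signal: flagging a message to a recipient domain a sender has never written to before. This is a deliberately minimal sketch of the concept, not Tessian's model; the addresses and class names are hypothetical.

```python
# Minimal sketch (hypothetical data, not a vendor model) of one
# behavior-based email signal: learn each sender's usual recipient
# domains, then flag mail to a never-before-seen domain for review.
from collections import defaultdict

class SenderBaseline:
    def __init__(self):
        self.known = defaultdict(set)  # sender -> recipient domains seen

    def observe(self, sender, recipient):
        """Record a legitimate past message to build the baseline."""
        self.known[sender].add(recipient.split("@")[1])

    def is_unusual(self, sender, recipient):
        """True when the recipient domain is new for this sender."""
        return recipient.split("@")[1] not in self.known[sender]

baseline = SenderBaseline()
for rcpt in ["alice@partner.com", "bob@partner.com", "carol@corp.com"]:
    baseline.observe("dana@corp.com", rcpt)

# A look-alike domain the sender has never mailed is flagged; a known
# partner domain is not -- without any manually maintained rules.
print(baseline.is_unusual("dana@corp.com", "eve@parttner-invoices.biz"))  # True
print(baseline.is_unusual("dana@corp.com", "alice@partner.com"))          # False
```

Because the baseline is learned per sender, the check adapts automatically when a workforce's email patterns change, which is the flexibility the article attributes to ML-driven tools.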
“We don’t know how an attacker will evolve their tactic inside of an email to bypass automated filtering technology,” Higbee said. “We just know that they will.” To help counter these changing tactics, Cofense deploys ML-driven computer vision technology that analyzes the visual appearance of both the email body and any web pages the email links out to. And if something looks off, the email is prevented from reaching the user. Thus, while the text that a phishing email uses will inevitably keep changing in order to sneak past traditional filters, Cofense spots the visual clues that point toward malicious intent, Higbee said. At managed IT services firm Rader, the Cofense solution has excelled at identifying impostor emails and purging them from user inboxes during the pandemic, said Rader chief information security officer Tim Fournet. During a time when verifying the authenticity of emails has become more challenging for workers, Cofense’s AI/ML capabilities “have increased the reliability of our email filters,” Fournet said. Beyond email, a growing number of phishing attacks are now taking place in other messaging avenues — including mobile SMS, Facebook Messenger and increasingly even LinkedIn, said SlashNext’s Harr. His company’s solution uses AI/ML technologies including computer vision and natural language processing — trained on massive quantities of data — to understand the behavior and the intent in messages, and detect phishing attempts with high accuracy. Crucially, the solution works across email, browser and mobile, including personal communication channels. With workers in the home and often working on mobile devices, “it’s important to be able to protect the personal channels as well as the corporate channels,” Harr said. AI’s crucial role Ultimately, it’s just one of the many examples of things that businesses now need to rethink, as they look to secure their workers and their corporate data in the age of remote and hybrid work. 
And to make it all happen, intelligent security should play a pivotal role. In the pre-pandemic world, says Oliver Tavakoli, chief technology officer at Vectra, users would sit behind a firewall, and access applications and data that were also behind a firewall. Now, “you sit at home, connect through the internet, and access another thing that is outside of a firewall connected to the internet,” Tavakoli said. “In that world, you just open yourself up to much more threat.” To effectively counter this increased cyberthreat, the key for security teams is to get “very good at separating the signal from the noise,” he said. “And you can’t do that without AI and ML.” "
13,649
2,022
"How AI-powered XDR can secure the hybrid workforce | VentureBeat"
"https://venturebeat.com/2022/03/03/how-ai-powered-xdr-can-secure-the-hybrid-workforce"
"How AI-powered XDR can secure the hybrid workforce This article is part of a VB special issue. Read the full series here: Intelligent Security A year ago, NOV Inc. was in the middle of evaluating a new security product to help with securing its globally distributed workforce, spread across more than 60 countries. The oilfield equipment maker was considering deploying an extended detection and response (XDR) solution from SentinelOne — and as part of the evaluation, NOV deployed the XDR platform across a company it had recently acquired. “Immediately” after deployment, SentinelOne’s Singularity XDR detected and halted a cyberattack in progress against the acquired company, said NOV chief information security officer John McLeod — and then remediated the attack, as well. “This was all done during the pandemic lockdown, in a country on the other side of the globe, where we didn’t speak the same language,” McLeod said in an email. Perhaps unsurprisingly, NOV ended up becoming a customer. 
And the artificial intelligence (AI) and machine learning (ML) capabilities at the heart of the Singularity XDR solution have continued to prove the value of the product for protecting the company and its distributed workers, McLeod said. How behavioral AI stops threats SentinelOne’s XDR platform ingests and correlates data from numerous sources, with the help of distributed AI models that run on every endpoint and cloud workload in the customer’s environment, according to chief product officer Raj Rajamani. The platform uses “behavioral AI” technology that monitors and links behaviors — then autonomously shuts down activities that are deemed a threat, Rajamani said. The AI/ML capabilities bring a clear advantage for the XDR platform over endpoint protection platform (EPP) and endpoint detection and response (EDR) tools — including by making cybersecurity a more autonomous operation than it’s been previously, McLeod said. “Their behavioral AI/ML approach was far superior to our legacy EPP, and the native integration of XDR allowed us to eliminate a separate EDR agent,” he said. “It’s much more effective to secure a remote workforce with technology requiring very little administrative interaction versus our legacy human-powered solutions with inherent delays.” Ultimately, “having technology that can act in real time, without human intervention, is a big step forward in cybersecurity,” McLeod said. While still a relatively nascent category within security, XDR has found its chance to shine during the pandemic — at a time when cyberattacks such as ransomware and data theft have skyrocketed. Ransomware attacks spiked 62% in 2020, then surged 105% in 2021, according to SonicWall. Meanwhile, data leaks related to ransomware jumped 82% last year, CrowdStrike reports. XDR vs. SIEM While capabilities can vary across vendors in XDR, the overall concept is to integrate and correlate data from numerous security tools — and from across varying environments — in order to help customers prioritize the biggest threats. In the process, XDR is capable of addressing many of the biggest challenges facing security teams simultaneously: security tool sprawl, alert fatigue and a shortage of cybersecurity personnel to make sense of all the data flooding in from their systems. While this may sound a lot like what security information and event management (SIEM) was supposed to provide, XDR actually delivers in a way that SIEM was never able to, according to Alex Burinskiy, chief product security officer for the Americas at access solutions firm Assa Abloy. The bottom line, said Burinskiy — a customer of SentinelOne both at his previous company, edtech firm Cengage, and in his current role — is that XDR is “accomplishing what SIEM promised to do.” One key reason for this, experts told VentureBeat, is the use of advanced AI and ML technologies in XDR platforms. Many XDR solutions excel at using ML for detection of anomalies that indicate a new, previously unknown threat, said Forrester analyst Allie Mellen. For instance, ML-driven XDR can reveal malicious behavior by correlating a string of actions that aren’t typical for a user, Mellen said. While SIEM can also use AI/ML, XDR uses the technologies in “more discrete, targeted ways,” she said — such as by correlating data prior to an analyst starting an investigation, or orchestrating response actions. XDR vs. EDR Importantly, many XDR platforms go beyond EDR by bringing in telemetry from more than just endpoints. And the ability to correlate data across all those areas — including email, applications and cloud environments — is how XDR can provide enhanced visibility into malicious activity, Mellen said. 
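Mellen's example of correlating a string of actions that aren't typical for a user can be illustrated with a minimal rarity-scoring sketch. This is not any vendor's actual algorithm; the action names, Laplace smoothing, and threshold are illustrative assumptions:

```python
import math
from collections import Counter

def action_rarity_scores(history, session):
    """Score each action in a session by how rare it is in the user's baseline history."""
    counts = Counter(history)
    total = len(history)
    scores = []
    for action in session:
        # Laplace smoothing: actions never seen in the baseline get the highest rarity
        p = (counts.get(action, 0) + 1) / (total + len(counts) + 1)
        scores.append((action, -math.log(p)))
    return scores

def flag_anomalous_session(history, session, threshold=2.5):
    """Flag a session whose average action rarity exceeds the threshold."""
    scores = action_rarity_scores(history, session)
    avg = sum(s for _, s in scores) / len(scores)
    return avg > threshold, avg
```

A session of routine actions scores low against the user's own history, while a burst of never-before-seen actions (say, disabling antivirus and mass-copying files) scores high, which is the kind of correlated signal the text describes.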
Which, of course, is exactly what businesses with remote workers are really looking for when it comes to security. “That’s where things start to get really interesting — because you get a lot more context about what’s happening in the environment than you can get with just the endpoint alone,” Mellen said. At this point, EDR is now table stakes in cybersecurity. And the complexity of the tools landscape — paired with the challenges of securing a distributed workforce — suggests that it’s worth considering XDR in order to leverage detection and response that can go beyond the endpoint, experts said. While less than 5% of organizations are using XDR today, that’s expected to climb to 40% by 2027, according to a recent report from Gartner. “When you look at your cybersecurity strategy, you need to protect the applications, network, data, email, endpoints, identities — including identities of devices — and of course the cloud,” said Patrick Hevesi, a vice president and analyst at Gartner. “And so XDR — as it plugs into more and more of these different types of assets as part of delivering that detection and response — is going to definitely help any cybersecurity strategy.” AI engine And AI/ML algorithms are pivotal to how XDR platforms make it all happen. Ultimately, XDR is powered by AI/ML as its “engine” and core technology, said Aimei Wei, founder and CTO at Stellar Cyber. The company’s XDR platform uses AI/ML throughout the threat detection process, from normalizing and correlating data that it ingests from different security tools, to analyzing time series and peer groups (using unsupervised ML), to pinpointing attack patterns with supervised ML. The Stellar Cyber XDR platform also uses advanced Graph ML to generate context for security teams around the highest-priority threats. “If we can automatically add context and piece things together for the security analyst, it makes their work much more efficient,” Wei said. 
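The unsupervised peer-group analysis Wei describes can be sketched, in heavily simplified form, as a z-score check against a peer group's baseline. The metric, hosts, and threshold here are hypothetical; production systems use far richer models:

```python
import statistics

def peer_group_outliers(metric_by_host, z_threshold=3.0):
    """Flag hosts whose metric (e.g., outbound bytes in a time window) deviates from peers."""
    values = list(metric_by_host.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all peers identical: nothing stands out
    return [host for host, v in metric_by_host.items()
            if abs(v - mean) / stdev > z_threshold]
```

For example, a fleet of workstations each sending roughly the same volume of outbound traffic, plus one sending orders of magnitude more, yields exactly one flagged host.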
And this is even more essential when many workers are remote, she said. “What we can do is achieve full [security] coverage, regardless of what the customer’s environment is,” Wei said. “It covers the whole attack surface.” One customer that has come to rely on XDR as part of its remote workforce security strategy is EBSCO Industries, a provider of discovery services and databases to libraries. The shift of workers into the home meant the company needed to change the way it looked at external access and devise a better method for securing its devices, said Ryan Loy, chief information officer at EBSCO. “We suspected we had blind spots and areas of our environment where we did not have complete visibility,” Loy said in an email. Native vs. open XDR EBSCO ended up selecting Stellar Cyber as its XDR vendor, in part because the company offers an “open” XDR platform that can ingest data feeds from other vendors’ security tools. Open XDR — sometimes referred to as “hybrid” XDR — is one of the two major varieties of extended detection and response available today. The other is “native” XDR, which relies solely on data feeds from an XDR vendor’s own tools and capabilities. With open XDR, businesses that already use a significant number of cybersecurity tools in their environment can leverage many or all of those. For EBSCO, using Stellar Cyber’s “open” XDR meant the platform “worked with our existing investments,” Loy said. “We did not want to disrupt our toolsets just to do something new.” Customers can then use an open XDR platform to ingest and correlate all of their security data, and prioritize the threats that are uncovered across their current toolset. XDR serves to provide a view of the big picture in terms of security, Loy said. “Each tool’s output is like looking at an individual tree in the forest. But by combining inputs from all of our tools with XDR, we see the entire forest,” he said. 
When it comes to the artificial intelligence capabilities of Stellar Cyber’s XDR platform, “their AI/ML is baked into the user interface. And my team is presented with ‘look here’ types of correlated indications when something is awry,” Loy said. “That is how AI should work.” While EBSCO’s security team still has to perform some analysis on the correlated information, he said, “alert-chasing” and manual correlation tasks “are now history.” AI-powered analysis XDR approaches vary by vendor, not only in terms of whether they are open/hybrid or native, but also when it comes to who they partner with to augment their data analysis. At Cybereason, for instance, the company’s XDR platform is “powered” by the Google Chronicle cloud security analytics service. Among the advantages is that, “unlike other solutions,” the XDR platform is cloud-native, said Eric Sun, director of product marketing at Cybereason. This means that the XDR platform “is built to support diverse, cloud-first remote workforces” and can integrate with key collaboration and identity management solutions such as Microsoft 365, Google Workspace and Okta, Sun said in an email. Key AI/ML capabilities include Cybereason’s MalOp detection engine, which identifies malicious behaviors using conditional probability tables and Markov chain algorithms, in order to predict potential cause-and-effect cyberattack relationships and “stitch together logs that match these predictions,” Sun said. Other AI-driven approaches to XDR include CrowdStrike’s ExPRT.AI model, used in the company’s Falcon XDR platform. ExPRT.AI identifies vulnerabilities that pose the highest risk to an organization and prioritizes them for remediation, said Amol Kulkarni, chief product and engineering officer at CrowdStrike, in an email. Crucially, ExPRT.AI analyzes the evolving threat landscape and produces a daily risk rating for each vulnerability — Critical, High, Medium or Low, Kulkarni said. 
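A conditional-probability and Markov-chain approach of the general kind Sun describes can be sketched as follows: learn transition probabilities between event types from benign telemetry, then flag event chains whose likelihood is unusually low. This is an illustrative toy, not Cybereason's MalOp engine:

```python
import math
from collections import defaultdict

def train_transitions(sequences):
    """Estimate transition probabilities between event types from benign telemetry."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        probs[a] = {b: c / total for b, c in nxt.items()}
    return probs

def sequence_surprise(probs, seq, floor=1e-4):
    """Negative log-likelihood of an event chain; high values suggest an unusual,
    potentially malicious cause-and-effect chain worth stitching together."""
    nll = 0.0
    for a, b in zip(seq, seq[1:]):
        nll += -math.log(probs.get(a, {}).get(b, floor))
    return nll
```

A chain the model has seen thousands of times (document opened, rendered, closed) scores near zero surprise; a chain like "document opened, shell spawned, beacon sent" scores far higher and would be surfaced for correlation.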
The platform’s AI/ML models are trained on massive datasets that enable the CrowdStrike Falcon XDR to “identify attack trends that a human couldn’t unearth,” he said. “This level of comprehensive insight is essential with today’s rapidly evolving remote work environment — as attackers are continually advancing their attack methodologies.” © 2023 VentureBeat. All rights reserved. "
13650
2022
"How AI protects machine identities in a zero-trust world | VentureBeat"
"https://venturebeat.com/2022/03/03/how-ai-protects-machine-identities-in-a-zero-trust-world"
"How AI protects machine identities in a zero-trust world This article is part of a VB special issue. Read the full series here: Intelligent Security Bad actors know all they need to do is find one unprotected machine identity, and they’re into a company’s network. Analyzing their breaches shows they move laterally across systems, departments, and servers, looking for the most valuable data to exfiltrate while often embedding ransomware. By scanning enterprise networks, bad actors often find unprotected machine identities to exploit. These factors are why machine identities are a favorite attack surface today. Why machine identities need zero trust Organizations quickly realize they’re competing in a zero-trust world today, and every endpoint, whether human or machine-based, is their new security perimeter. Virtual workforces are here to stay, creating thousands of new mobility, device, and IoT endpoints. 
Enterprises are also augmenting tech stacks to gain insights from real-time monitoring data captured using edge computing and IoT devices. Forrester estimates that machine identities (including bots, robots, and IoT) grow twice as fast as human identities on organizational networks. These factors combine to drive an economic loss of between $51.5 billion and $71.9 billion attributable to poor machine identity protection. Exposed APIs lead to machine identities also being compromised, contributing to machine identity attacks growing 400% between 2018 and 2019 and by over 700% between 2014 and 2019. Defining machine identities CISOs tell VentureBeat they are selectively applying AI and machine learning to the areas of their endpoint, certificate, and key lifecycle management strategies today that need greater automation and scale. An example is how one financial services organization pursuing a zero-trust strategy uses AI-based Unified Endpoint Management (UEM) that keeps machine-based endpoints current on patches, using AI to analyze each endpoint and deliver the appropriate patch to it. How AI is protecting machine identities It’s common for an organization not to know how many machine identities it has at any given moment, according to a recent conversation VentureBeat had with the CISO of a Fortune 100 company. It’s understandable, given that 25% of security leaders say the number of identities they’re managing has increased by a factor of ten or more in the last year. Eighty-four percent of security leaders say the number of identities they manage has doubled in the last year. All of this translates into a growing workload for already overloaded IT and security teams, 40% of which are still using spreadsheets to manually track digital certificates, combined with 57% of enterprises not having an accurate inventory of SSH keys. 
Certificate outages, key misuse or theft, including granting too much privilege to employees who don’t need it, and audit failures are symptoms of a bigger problem with machine identities and endpoint security. Most CISOs VentureBeat speaks with are pursuing a zero-trust strategy long-term and have their boards of directors supporting them. Boards want to see new digital-first initiatives drive revenue while reducing the risks of cyberattacks. CISOs are struggling with the massive workloads of protecting machine identities while pursuing zero trust. The answer is automating key areas of endpoint lifecycle management with AI and machine learning. The following are five key areas where AI and machine learning (ML) show the potential to protect machine identities in an increasingly zero-trust world. Automating machine governance and policies. Securing machine-to-machine communications successfully starts with consistently applying governance and policies across every endpoint. Unfortunately, this isn’t easy because machine identities in many organizations rely on siloed systems that provide little if any visibility and control for CISOs and their teams. One CISO told VentureBeat recently that it’s frustrating given how much innovation is going on in cybersecurity. Today, there is no single pane of glass that shows all machine identities and their governance, user policies, and endpoint health. Vendors to watch in this area include Ericom with their ZTEdge SASE Platform and their Automatic Policy Builder, which uses machine learning to create and maintain user- or machine-level policies. Their customers say the Policy Builder is proving to be effective at automating repetitive tasks and delivering higher accuracy in policies than could be achieved otherwise. Additional vendors to watch include Delinea, Microsoft Security, Ivanti, SailPoint, Venafi, ZScaler, and others. Automating patch management while improving visibility and control. 
Cybersecurity vendors prioritize patch management, improved visibility, and machine identity control because their results drive funded business cases. Patch management, in particular, is a fascinating area of AI-based innovation for machine identities today. CISOs tell VentureBeat it’s a sure sign of cross-functional teams both within IT and across the organization not communicating with each other when there are wide gaps in asset inventories, including errors in key management databases. Vulnerability scans need to be defined by a given organization’s risk tolerance, compliance requirements, type and taxonomy of asset classes, and available resources. It’s a perfect use case for AI and algorithms to solve complex constraint-based problems, including patching thousands of machines in the shortest time. Taking a data-driven approach to patch management is helping enterprises defeat ransomware attacks. Leaders in this area include BeyondTrust, Delinea, Ivanti, KeyFactor, Microsoft Security, Venafi, ZScaler, and others. Using AI and ML to discover new machine identities. It’s common for cybersecurity and IT teams not to know where up to 40% of their machine endpoints are at any given point in time. Given the various devices and workloads IT infrastructures create, the fact that so many machine identities are unknown amplifies how critical it is to pursue a zero-trust security strategy for all machine identities. Cisco’s approach is unique, relying on machine learning analytics to analyze endpoint data comprised of over 250 attributes. Cisco branded the service AI Endpoint Analytics. The system rule library is a composite of various IT and IoT devices in an enterprise’s market space. Beyond the system rule library, Cisco AI Endpoint Analytics has a machine-learning component that helps build endpoint fingerprints to reduce the net unknown endpoints in your environment when they are not otherwise available. 
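Treating patch rollout as a constraint problem can be sketched with a simple greedy heuristic: rank patches by risk reduction per deployment hour and fill a fixed maintenance window. Real schedulers weigh many more constraints; the fields and numbers here are assumptions for illustration:

```python
def prioritize_patches(patches, hours_available):
    """Greedy patch scheduling: maximize risk reduction within a maintenance window.

    patches: list of dicts with 'id', 'risk' (e.g., CVSS weighted by exposure),
    and 'hours' to test and deploy. Picks patches in descending risk-per-hour
    order until the window is full.
    """
    ranked = sorted(patches, key=lambda p: p["risk"] / p["hours"], reverse=True)
    plan, used = [], 0.0
    for p in ranked:
        if used + p["hours"] <= hours_available:
            plan.append(p["id"])
            used += p["hours"]
    return plan
```

With a two-hour window, a high-risk one-hour patch beats an equally risky three-hour one, and leftover time goes to the next-best candidate that still fits.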
Ivanti Neurons for Discovery is also proving effective in providing IT and security teams with accurate, actionable asset information they can use to discover and map the linkages between key assets and the services and applications that depend on those assets. Additional AI/ML leaders in discovering new machine identities include CyCognito, Delinea, Ivanti, KeyFactor, Microsoft Security, Venafi, ZScaler, and others. Key and digital certificate configuration. Arguably one of the weakest links in machine identity and machine lifecycle management, key and digital certificate configurations are often stored in spreadsheets and rarely updated to their current configurations. CISOs tell VentureBeat that this area suffers because of the lack of resources in their organizations and the chronic cybersecurity and IT talent shortage they’re dealing with. Each machine requires a unique identity to manage and secure machine-to-machine connections and communication across a network. Their digital identities are often assigned via SSL, TLS or authentication tokens, SSH keys, or code-signing certificates. Bad actors target this area often, looking for opportunities to compromise SSH keys, bypass code-signing certificates, or compromise SSL and TLS certificates. AI and machine learning are helping to solve the challenges of getting keys and digital certificates correctly assigned and kept up to date for every machine identity on an organization’s network. Relying on algorithms to ensure the accuracy and integrity of every machine identity with their respective keys and digital certificates is the goal. Leaders in this field include CheckPoint, Delinea, Fortinet, IBM Security, Ivanti, KeyFactor, Microsoft Security, Venafi, ZScaler, and others. UEM for machine identities. AI and ML adoption accelerates the fastest when these core technologies are embedded in endpoint security platforms already in use across enterprises. The same holds for UEM for machine identities. 
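Keeping certificates current is, at its simplest, an inventory-and-expiry problem. A minimal sketch (with a hypothetical inventory format, not any vendor's API) that flags machine identities whose certificates have lapsed or are about to:

```python
from datetime import datetime, timedelta

def expiring_certs(inventory, within_days=30, now=None):
    """Split a certificate inventory into already-expired and expiring-soon hosts.

    inventory: list of dicts with 'host' and 'not_after' (a datetime), which a
    real system would populate from its certificate store rather than a spreadsheet.
    """
    now = now or datetime.utcnow()
    horizon = now + timedelta(days=within_days)
    expired = [c["host"] for c in inventory if c["not_after"] <= now]
    soon = [c["host"] for c in inventory if now < c["not_after"] <= horizon]
    return expired, soon
```

Feeding the two resulting lists into an automated renewal workflow is one way to turn the spreadsheet problem described above into a continuously enforced policy.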
Taking an AI-based approach to managing machine-based endpoints enables real-time OS, patch, and application updates that are most needed to keep each endpoint secure. Leading vendors in this area include Absolute Software’s Resilience, the industry’s first self-healing zero-trust platform; it’s noteworthy for its asset management, device and application control, endpoint intelligence, incident reporting, and compliance, according to G2 Crowd’s crowdsourced ratings. Ivanti Neurons for UEM relies on AI-enabled bots to seek out machine identities and endpoints and automatically update them, unprompted. Their approach to self-healing endpoints is noteworthy for creatively combining AI, ML, and bot technologies to deliver UEM and patch management at scale across their customer base. Additional vendors rated highly by G2 Crowd include CrowdStrike Falcon, VMWare Workspace ONE, and others. A secure future for machine identity Machine identities’ complexity makes them a challenge to secure at scale and over their lifecycles, further complicating CISOs’ efforts to secure them as part of their zero-trust security strategies. It’s the most urgent problem many enterprises need to address, however, as just one compromised machine identity can bring an entire enterprise network down. AI and machine learning’s innate strengths are paying off in five key areas, according to CISOs. First, business cases to spend more on endpoint security need data to substantiate them, especially when reducing risk and assuring uninterrupted operations. AI and ML provide the data techniques and foundation delivering results in five key areas, ranging from automating machine governance and policies to implementing UEM. The worst ransomware attacks and breaches of 2021 started because machine identities and digital certificates were compromised. 
The bottom line is that every organization is competing in a zero-trust world, complete with complex threats aimed at any available, unprotected machine. "
13651
2022
"Tipping the scales: Hardening your cyber defenses by thinking like your attacker | VentureBeat"
"https://venturebeat.com/2022/03/03/tipping-the-scales-hardening-your-cyber-defenses-by-thinking-like-your-attacker"
"Sponsored Tipping the scales: Hardening your cyber defenses by thinking like your attacker For decades, cyber-attackers have had the upper hand. In an ever-changing security risk landscape, organized cybercriminals have leveraged information-sharing more to their benefit. But what could tip the scales in the other direction? What could give the defenders the much-needed advantage? With cyber-attacks against businesses and governments escalating in frequency and severity, it is no longer enough for your organization to understand only your own defenses. You must be able to understand your cyber “enemy,” too. Across the cybersecurity industry, there is an urgent need to make it harder for attackers to succeed. That’s where AI can help. Step 1: Understand yourself The first step in gaining the upper hand is to understand your infrastructure, vulnerabilities, and the obstacles you may face in ratcheting up defenses. Organizations must have visibility into the entire spectrum of their digital assets continuously and in real-time. 
You cannot protect what you do not know. Without an end-to-end understanding of what is going on within your organization, you will not be able to identify when something is amiss. But unfortunately, detection alone is not enough. Historically, the cybersecurity industry has been reactive, only detecting threats it has observed before. When researchers and security professionals identify a new cyber-attack, an automated system issues a pre-programmed action, or a human operator runs a series of pre-planned playbooks to counter the attack step-by-step. This process can take too long or miss parts of an attacker’s movements. Blanket response mechanisms also fail to react to and contain real-world attacks, which creative and determined threat actors are constantly tweaking and improving. At a fundamental level, attackers want to exploit vulnerabilities. Cyber-attackers have always had the upper hand because security teams have not seen these cracks in advance. Attackers can identify novel paths, leveraging security gaps that organizations may not realize exist. They continue innovating, combining new methods to create novel approaches, and exposing dormant software vulnerabilities like Log4Shell. Our adversaries are showing no signs of slowing down. A small proportion of organizations currently conduct no adversarial assessment, meaning they do not actively look for vulnerabilities within their systems. Even those organizations with a mature and well-resourced blue team who complete adversarial assessments still often fall short of defending their critical assets from cyber-attack simulations by the red team. An accomplished red team requires a qualified group of individuals with a honed skillset. The demand for such services is high, while proficient individuals are in short supply — this is true across the entire industry. 
These exercises take time and other resources, meaning organizations often end up testing their defenses irregularly. And frequently, security teams focus on patching vulnerabilities and updating systems between these exercises, only to meet a new laundry list six months later. Step 2: Understand your enemy The next step for defenders to gain the upper hand is to understand the enemy deeply: you need to comprehend the tactics, techniques, and procedures (TTPs) they will use. You cannot prevent future attacks from an enemy you do not know. While identifying your “crown jewels,” or your most critical assets, might be a solid first step to mounting a robust defensive posture, understanding the potential routes a threat actor may take to access that data can help you better defend those assets. Security leaders must understand: What TTPs do attackers commonly use? What paths might they take to cause the most disruption to the business, the most damage to systems, or even the most danger to infrastructure safety? There is an urgent need to take preventive measures against these cyber-attackers with broad-spectrum adversarial simulation across all digital assets. AI has already improved defensive progress, innovating the areas of threat detection, investigation, and response. Despite this advancement, organizations are still reacting to attackers. We need to make it easier for organizations to become more proactive. The fundamental priorities of cybersecurity organizations need to change. We need to leverage AI to emulate attack paths, launch controlled attacks, and test defenses. This “attack path modeling” activity can show organizations the most likely routes an attacker will take to access its “crown jewels” and help organizations discover their cyber risks from the inside. These modeled attack paths based on real-time data from your organization’s environment can help blue teams prioritize mitigations, ranking the allocation of resources to maximize efficacy. 
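Attack path modeling of this kind is often framed as a shortest-path problem over a graph of assets, where edge weights approximate how hard each hop is to compromise. Here is a minimal sketch using Dijkstra's algorithm; the topology and difficulty scores are invented for illustration:

```python
import heapq

def easiest_attack_path(graph, entry, target):
    """Find the lowest-resistance path an attacker could take through the network.

    graph: dict node -> list of (neighbor, difficulty) edges, where difficulty is
    a rough cost of compromising that hop (unpatched = cheap, hardened = expensive).
    Returns (total_difficulty, path) for the easiest route from entry to target.
    """
    dist = {entry: 0.0}
    prev = {}
    heap = [(0.0, entry)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            path = [node]
            while node in prev:  # walk predecessors back to the entry point
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []
```

The intermediate hops on the cheapest route to the "crown jewels" are exactly the choke points the text recommends prioritizing: hardening them raises the cost of every path that runs through them.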
The approach allows organizations to understand priority vulnerabilities continuously rather than depending on irregular exercises, reducing the resources needed to complete these simulations. Organizations can leverage this technology to compare attack paths based on impact and occurrence probability to distinguish which will be most valuable to an adversary. Adversaries want to exploit vulnerabilities across a wide range of domains, both internal and external to an organization. So, sourcing data across those domains is critical to creating a complete, end-to-end model of potential attacks. These capabilities have often been available only to major banks and governments with bigger security budgets, and even then without the power of AI to support human skills. However, attack path modeling can help expand access. This process can aid in neutralizing potential attack paths at the most critical “choke points” — without disrupting business activity. Focusing security spend on the choke points bespoke to your organization, in real time, across multiple domains has a significant impact on preventing adversaries from achieving their objectives. It moves the needle in hardening an organization holistically. By leveraging a combination of technologies to model and simulate attacks, security teams will finally identify their risks proactively rather than reactively. By enabling innovative technology approaches, security teams can understand both themselves and their enemies and tackle risk head-on. In the art of cyber warfare, attack path modeling has the potential to give security teams ways to “future-proof” people and organizations against unknown threats. With forward-looking cybersecurity, we may finally be able to tip the scales in favor of the defenders and give them the power to defeat an aggressive enemy. Max Heinemeyer is Director of Threat Hunting at Darktrace. 
"
13652
2022
"Why analytics are core to any endpoint security business case | VentureBeat"
"https://venturebeat.com/2022/03/03/why-analytics-are-core-to-any-endpoint-security-business-case"
"Why analytics are core to any endpoint security business case This article is part of a VB special issue. Read the full series here: Intelligent Security Taking a rigorous, data-driven analytical approach to creating a business case for endpoint security delivers the added benefit of uncovering glaring weaknesses in an enterprise network. The goal needs to be greater visibility and control of every endpoint as a threat surface and asset. Complicating that challenge is the mercurially changing nature of machine identities, making a 360-degree view of endpoint security elusive to maintain. Endpoints are the attack surface of choice for cybercriminals and nation-states, who often launch Advanced Persistent Threats (APTs) simultaneously at a broad base of endpoints. Their goal is to evade detection, move laterally, install ransomware, exfiltrate valuable customer, employee, and company data, and identify systems with the most valuable data. 
A recent study by Tanium found that 55% of security and risk management leaders estimate that 75% or more of endpoint attacks can’t be stopped. A recent Cybersecurity Insiders report found that 60% of organizations are aware of fewer than 75% of the devices on their network, and only 58% of organizations say they could identify every vulnerable asset in their organization within 24 hours of a critical exploit. And it’s taking the average enterprise 97 days to test and deploy patches to each endpoint.

Benchmark endpoint benefits first

CISOs tell VentureBeat that one of the best actions they took early in the process of creating their business cases for endpoint security was to complete an extensive audit of every endpoint they could locate. There’s a running debate in IT and cybersecurity teams over whether all endpoints in the world’s largest enterprises are accounted for. In reality, they are not. The CISO of one leading consumer packaged goods manufacturer told VentureBeat that up to 35% of endpoints, especially those with machine identities, aren’t known today. A good business case for endpoint security will close that 35% gap and put guardrails in place to ensure it never gets that large again. Quantifying the benefits works best when IT and cybersecurity teams take an audit mindset and delve into each endpoint and the process they’re relying on today to identify them. Taking this approach often uncovers which endpoints are overloaded with agents, so many that software conflicts render the endpoint just as unprotected as if there were no agents at all. Absolute’s recent 2021 Endpoint Risk Report found an average of 11.7 security agents or controls per endpoint, creating potential software conflicts. The more security controls per endpoint, the more frequent the collisions and decay, leaving endpoints more vulnerable than before. 
Endpoint audits using advanced analytics identify over-configured endpoints and other potential areas that put enterprises at risk of a breach. The shift to the cloud for Endpoint Protection Platforms (EPP) is providing a faster onramp for enterprises looking for endpoint data. Combining anonymized data from its customer base and using Tableau to create a cloud-based real-time dashboard, Absolute’s Remote Work and Distance Learning Center provides a broad benchmark of endpoint security health in aggregate today. The dashboard provides insights into device and data security, device health, device type, and device usage and collaboration. It’s a useful reference site for evaluating how the pandemic continues to impact device usage and endpoint security.

Benchmarking the following series of benefits is a good starting point for building a business case:

Quantify the gains that could be made by reducing the IT help desk’s time spent on endpoint configuration management. It’s a fair assumption that reducing IT help desk call volume for endpoint configuration requirements can net out at least $45,000 a year, based on a call taking 10 minutes and a total time savings of around 1,260 hours every year.

Reducing asset loss and device write-offs can conservatively save $300,000 a year in a typical enterprise. A primary factor in getting CISOs to commit the time and resources to an endpoint audit is getting in control of this number: the number of endpoint devices written off every year because they’re lost, stolen or not accounted for. Audits often find up to 40% of endpoints either inoperable, stolen or unallocated over a year. This also becomes a factor driving self-healing endpoints, as they often provide real-time status updates on their configurations down to the OS, BIOS and patch levels. 
Audit and identify the cost savings of not having to put secops through fire drills and rushed emergency endpoint projects, using analytics to track time savings. IT directors say a lack of consistent endpoint security management burns thousands of hours a year and rarely provides the visibility and control of endpoints so badly needed in enterprise networks today. Getting to visibility of every endpoint is the goal in this phase of any audit being done in support of a business case. Fortunately, there’s a significant amount of innovation going on in this area, with a diverse group of vendors offering solutions, including Absolute, CrowdStrike, CyCognito, Ivanti and Microsoft Defender for Endpoint. IT teams tell VentureBeat that, based on their own estimates, approximately 2,500 hours could be saved by replacing the firefighting of emergency endpoint security problems with a proven EPP platform. Assuming a typical enterprise’s cost structure, the 2,500-hour savings would net out to $130,000 a year.

Analytics on endpoint use and condition are table stakes for getting endpoint asset lifecycle planning right. Endpoint platforms need to support analytics down to the endpoint level to deliver the data needed for more accurate asset lifecycle projections and financial models. Asset lifecycles are becoming shorter on all endpoint devices, creating the potential for large, unforeseen cost variances enterprises will have to cover if they don’t forecast lifecycle planning figures accurately. Getting this right with analytics, combined with financial data on how much is invested in endpoints, in turn drives Return on Invested Capital (ROIC) and can conservatively save a typical enterprise approximately $140,000 in amortization and depreciation costs alone.

Analytics improve regulatory and internal audits and can save $67,000 a year in regulatory audit prep time and expense alone. 
A few of the many regulatory audits enterprises need to be prepared to pass down to the endpoint level include the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA) and the Payment Card Industry Data Security Standard (PCI DSS).

How much endpoint security will cost

These are the costs most often included in an endpoint security business case:

Annual and multi-year licensing cost scenarios, depending on the vendor. There’s a wide spectrum of pricing models Endpoint Protection Platform (EPP) providers rely on today. One of the market-leading vendors of cloud-based EPP platforms that promise self-healing, autonomous endpoint technology has licensing costs ranging from $750K to over $1.7M.

ITSM and legacy system integration, customization, implementation and change management costs bundled into professional services are common. Most enterprises want endpoint security integrated across their tech stacks, and CISOs tell VentureBeat the time payoffs of ITSM integration are worth it. Baseline figures VentureBeat received from EPP vendors run between $40K and over $150K to integrate EPP, ITSM and an installed SIEM.

How to define a business case for endpoint security

While the initial goal of creating a business case for investing in endpoint security is to gain funding, the rigor of quantifying the costs and benefits often identifies large gaps in endpoint security coverage. The more insightful and rigorous the analytics used to identify endpoint security costs and benefits, the more accurate the resulting 360-degree view of endpoints. The audit organizations perform to gather the data needed for the following Return on Investment (ROI) calculation provides, for many, the first true, quantified view of which endpoints are actually active and in use. 
It’s also invaluable for capturing the number of lost endpoints: something CISOs admit to VentureBeat few companies have 100% visibility into today. The following ROI calculation defines what an enterprise can reasonably expect to achieve on endpoint security investments: Endpoint Security ROI = (Endpoint Security Benefits – Endpoint Security Costs) / Endpoint Security Costs x 100. An insurance and financial services enterprise recently completed an internal audit; the projected annual benefits of its endpoint security deployment will be $475,000 against a cost of $65,000, yielding a $6.30 net return for every $1 invested.

Lessons learned from enterprises that have successfully created an ROI case for endpoint security include the following:

Start with an endpoint pilot and benchmark costs by phase. Even the most-researched ROI models can vary over time. It’s best to complete an initial pilot on a series of endpoints, then truth-test the ROI model’s assumptions with actual financial data. Pilot programs help identify areas where previous approaches to endpoint security left gaps that leave an enterprise more vulnerable than before.

Analytics are the guardrails every endpoint security strategy needs to stay on track. Selecting an EPP platform or endpoint security solution that includes analytics as part of its baseline is critical to success. It’s a bonus if there are APIs that can be used to gather data, giving greater flexibility in defining custom metrics and key performance indicators (KPIs).

Keep C-level sponsors involved beyond go-live with future plans and wins. Too often, once an endpoint security project is rolled out, C-level sponsors move on to another project. Getting their buy-in and support for future roadmaps is key to getting the most value from endpoint security investments over the long term. 
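The benefit line items and the ROI formula above combine naturally into a small model. Here is a minimal Python sketch using the article's per-item estimates as inputs; the $250K program cost is a hypothetical placeholder, not a quoted figure:

```python
# Annual benefit estimates quoted in the article (illustrative figures).
BENEFITS = {
    "help desk time on endpoint configuration": 45_000,
    "reduced asset loss and device write-offs": 300_000,
    "fewer emergency endpoint fire drills": 130_000,
    "lifecycle planning (amortization/depreciation)": 140_000,
    "regulatory audit prep": 67_000,
}

def endpoint_security_roi(benefits: float, costs: float) -> float:
    """ROI as a percentage: (benefits - costs) / costs * 100."""
    return (benefits - costs) / costs * 100

total_benefits = sum(BENEFITS.values())  # 682,000
print(f"Total annual benefits: ${total_benefits:,}")

# Against a hypothetical all-in program cost of $250K/year:
print(f"ROI: {endpoint_security_roi(total_benefits, 250_000):.0f}%")

# The insurance and financial services example: $475K benefits vs. $65K costs.
print(f"Insurance example ROI: {endpoint_security_roi(475_000, 65_000):.0f}%")
```

Dividing the ROI percentage by 100 recovers the net return per dollar invested that the article cites for the insurance example.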
Endpoint security and its future benefits

Defining a business case for endpoint security needs to quantify as many benefits and costs as possible beforehand if it’s going to succeed. The time savings IT teams can realize from automating patch management and self-healing endpoints alone are significant. Add to that more effective endpoint discovery and asset management data, and the business case becomes an easier decision for C-level executives and, in some cases, the board to support.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. © 2023 VentureBeat. All rights reserved. "
13,653
2,022
"Why zero-trust security tops VPN for remote work — using AI | VentureBeat"
"https://venturebeat.com/2022/03/03/why-zero-trust-security-tops-vpn-for-remote-work-using-ai"
"Why zero-trust security tops VPN for remote work — using AI

This article is part of a VB special issue. Read the full series here: Intelligent Security

When Oklahoma shifted into remote work at the start of the pandemic in 2020, the security issues involved with having employees working from home manifested almost immediately. Like many other organizations in that situation, the state government turned to a VPN — or virtual private network — in an attempt to provide secure remote access to work applications and data. As a technology that first emerged in the early years of the internet, the VPN was built for a very different time. And it showed, as the state’s 30,000 employees tried to use the system from their homes. “Many of our state agencies initially experienced outages as networks were overwhelmed with external logins and service requests,” said Matt Singleton, CISO for the state of Oklahoma, in an email. “Our legacy VPN solutions simply could not meet the increased volume and scalability demands. 
This resulted in a surge in calls to service desks and hundreds of VPN tickets a day, as well as increased cyber risk.” As a result, the state government went looking for a better remote access solution — and ultimately turned to cybersecurity vendor Zscaler, a provider of zero trust network access (ZTNA) technology. Zscaler’s cloud-based platform not only addressed the scalability issues, but also boosted security by ensuring that only authorized users could connect to applications, Singleton said.

The remote access approach is described as “zero trust” because it essentially assumes that users are unauthorized by default — and it requires more proof of their legitimacy than traditional methods. To achieve this, ZTNA vendors such as Zscaler consider additional context factors beyond authentication of identity, such as the security posture of a user’s device and the application or data they’re trying to access. Irregularities and unusual behavior can thus be identified immediately, and malicious actors can be denied access. At the core of Zscaler’s products is its Zero Trust Exchange, which combines a cloud-based secure web gateway with cloud-delivered ZTNA.

Powered by AI/ML

“Having 30,000 remote workers would have meant 30,000 ‘branches’ in a traditional network, each of which poses a potential security risk,” Singleton said. “With the Zscaler Zero Trust Exchange, all users were enabled to securely and productively perform their jobs from any remote location.” And to help make this all possible, Zscaler’s platform leverages advanced artificial intelligence (AI) and machine learning (ML) technology, according to Howie Xu, vice president of machine learning and AI at the company. For ZTNA to work optimally for an organization, the system really requires policies that are personalized, granular, and dynamic. 
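The contextual factors described above (verified identity, device posture, target application sensitivity, behavioral anomaly scores) can be pictured as a deny-by-default policy check. The following is a toy sketch of the idea, not Zscaler's actual engine; all field names and thresholds are invented:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # identity verified (e.g., MFA passed)
    device_risk: float         # 0.0 (healthy) .. 1.0 (compromised), from posture checks
    app_sensitivity: float     # 0.0 (public) .. 1.0 (crown jewels)
    behavior_anomaly: float    # 0.0 (normal) .. 1.0 (highly unusual), from ML scoring

def allow(req: AccessRequest) -> bool:
    """Deny by default; grant only when every contextual signal clears its bar."""
    if not req.user_authenticated:
        return False
    # More sensitive apps tolerate less device risk and less behavioral anomaly.
    risk_budget = 1.0 - req.app_sensitivity
    return req.device_risk <= risk_budget and req.behavior_anomaly <= risk_budget

# A healthy laptop behaving normally reaches a moderately sensitive app...
print(allow(AccessRequest(True, device_risk=0.1, app_sensitivity=0.5, behavior_anomaly=0.2)))
# ...but the same user on a risky device is denied.
print(allow(AccessRequest(True, device_risk=0.7, app_sensitivity=0.5, behavior_anomaly=0.2)))
```

Note that access is decided per application, not per network: a denied request grants no network foothold at all, which is the contrast with VPN drawn later in the article.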
But at a certain scale, that’s an extremely difficult thing for a company to implement manually — and the fact that many workers are remote only adds further complexity, Xu said. With zero trust, “you have to leverage AI machine learning at some point,” he said. “Once the scale reaches a certain level, it’s impossible to write rules anymore.” To manually maintain personalized and dynamic policies for a large organization, you would likely need dozens of staffers devoted to just that, Xu said. Zscaler’s AI/ML, however, can serve as an “assistant” that takes away much of the manual effort required, he said. “You still need to do some work. AI/ML is not a robot that can do anything and everything. We are not there today,” Xu said. “But it alleviates [the manual work] tremendously.” And compared to VPN, the use of AI/ML with ZTNA is a major part of why it’s superior from a security perspective, he noted. Attempting to use VPN to achieve “granular, personalized, dynamic, contextual policies” is “not even possible,” Xu said. “You have to use more intelligent policies for this purpose.”

Granular approach to AI-powered security

The state of Oklahoma is currently in the midst of rolling out AI-powered intelligent policies as part of the Zscaler Private Access (ZPA) product, Singleton said. “ZPA Intelligent Policy will help develop an incredibly granular approach to segmentation of applications — and ultimately users,” he said. “This is huge for improving the cybersecurity posture of organizations with large remote workforces, as enterprise assets must co-exist with consumer/commercial products and environments.” If these advantages weren’t enough for an organization to consider switching from VPN to ZTNA for its hybrid workforce, consider also that VPN has had a hand in enabling some major breaches, such as the Colonial Pipeline ransomware attack in May 2021. 
The attack led to a shutdown of a 5,500-mile fuel pipeline for five days, resulting in a fuel shortage that affected more than 10,000 gas stations across the Southeastern U.S. Without a doubt, breaches such as the Colonial Pipeline ransomware attack have shown that VPNs can be a serious liability, said Jay Chaudhry, founder and CEO of Zscaler. In the Colonial Pipeline breach, the attackers stole VPN credentials, “got on the network, moved laterally, found a high-value billing application – and then encrypted it and stole the data,” Chaudhry said. “It highlighted the notion that VPNs [can be] dangerous – dangerous because they put you on the network, and then you can move laterally.” By contrast, the idea of zero trust is to “connect users to applications – just applications, not to the network,” he said.

ZTNA growth

By many indications, zero trust network access is starting to gain major momentum as many organizations settle into a permanently hybrid approach to their workforce. At least 40% of remote access to corporate resources will be provided “predominantly” through ZTNA by 2024, according to research from Gartner. That’s compared to less than 5% in late 2020, Gartner reported in November during its Security & Risk Management Summit — Americas virtual conference. Because of all the scalability and security benefits of ZTNA — including minimization of lateral movement and personalization of access policies for workers — the zero trust approach brings significant advantages over VPN, according to Thomas Lintemuth, senior director and analyst at Gartner. “ZTNA does push beyond ‘good enough,’ into having a really great product from a security standpoint,” Lintemuth said during a session at the recent Gartner security conference. “When we look at the battle between ZTNA and VPN, the winner of this battle is ZTNA.” That’s not to say there aren’t challenges around moving to a zero trust architecture, he noted. 
For one thing, an organization must have a comprehensive understanding of the applications its users need access to — and many organizations do not, Lintemuth said. For this and other reasons, a gradual approach to phasing in ZTNA is often warranted, said Banyan Security cofounder and CEO Jayanth Gummaraju. The ability for ZTNA and VPN to coexist for some period of time can be necessary to help customers make the shift, Gummaraju said. And so can AI/ML. At identity and access management vendor ForgeRock, for instance, the company’s AI-powered Autonomous Identity platform brings automation to role-based access control (RBAC), a key element of establishing a zero trust architecture.

Achieving ‘least privilege’

AI enables RBAC, which is also a feature of Zscaler’s zero trust platform, to fulfill its potential for determining and enforcing an appropriate level of access for each individual user. This allows an organization to establish “least privilege” access, where users only get access to what they really need, according to the company. By automating role-based access control, “it helps companies use minimal resources to maintain their RBAC environment,” said David Burden, CIO of ForgeRock, in an email. The added complexities of securing the remote workforce have only made automation of RBAC even more essential, Burden said. With a distributed workforce, “it has been difficult to box employees into certain roles or types of access,” he said. “For many employees these days, they’re wearing multiple hats at work and need permission to access all sorts of systems and data that normally would be contained to a single role.” This new reality leads to “massive overhead” in maintaining the proper access for workers, Burden said. “It is extremely time-consuming to manually create, review and approve or remove user access in traditional systems.” That’s where a more autonomous approach can make a big difference, he said. 
As an example, ForgeRock Autonomous Identity enables the automatic approval and certification of high-confidence, low-risk access requests, as well as automatic revocation of stale user access rights and user removal, according to Burden. “This AI-driven analysis reduces operational access request burdens, and accelerates certification campaigns across the organization,” he said.

Tightening up security with AI

Leveraging AI is now essential for achieving accuracy in securing permissions, ForgeRock CEO Fran Rosch said. He cited the example of a recent customer that increased its entitlement rejections by 300% after deploying ForgeRock. “Because it was previously all done by these rules, and people were rubber-stamping these entitlement requests, they were letting these things go that they should never have approved,” Rosch said. “That was increasing the risk to the company. Because there were people who had no business accessing HR data, and no business accessing sales data, that were getting that information. So by leveraging the AI, a 300% increase in request rejections really tightened up the security of the organization.” Crucially, ForgeRock’s AI-driven zero trust system also provides explainability about why rejections take place, including a visual representation, he said. “Companies want to know why. They don’t just want to know that ‘the secret algorithm rejected this.’ Well, why? What was it about this user behavior?” Rosch said. “So having that explainability front and center is really important. Because a lot of times you have to explain that to the user. Why did we reject this? Well, because here’s what was going on with your behavior.” The bottom line is that while AI-powered zero trust is not a silver bullet for all the challenges of securing a remote workforce, it can play an essential part — particularly when used in concert with other cybersecurity technologies, such as detection and response platforms and email security. 
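The automation pattern described above (auto-approve high-confidence, low-risk requests, flag the rest for human review, and revoke stale access) can be sketched as a simple triage rule. All thresholds and field names here are illustrative assumptions, not ForgeRock's actual API:

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)  # hypothetical staleness window

def triage_request(confidence: float, risk: float) -> str:
    """Auto-approve only when the model is confident AND the entitlement is low-risk."""
    if confidence >= 0.9 and risk <= 0.2:
        return "auto-approve"
    if confidence <= 0.1 or risk >= 0.8:
        return "auto-reject"
    return "human-review"  # everything in between gets a person's eyes

def is_stale(last_used: date, today: date) -> bool:
    """Access rights unused past the window are candidates for revocation."""
    return today - last_used > STALE_AFTER

print(triage_request(confidence=0.95, risk=0.1))  # auto-approve
print(triage_request(confidence=0.5, risk=0.5))   # human-review
print(is_stale(date(2022, 1, 1), today=date(2022, 6, 1)))  # True
```

The point of the middle "human-review" band is exactly the explainability concern Rosch raises: borderline decisions stay with people, and automated rejections should come with a reason.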
The AI advantage

With Microsoft’s view into many of the applications and endpoints used by businesses, the company aims to offer customers the full package pertaining to security — across zero trust identity security and threat detection. And the tech giant is making heavy use of AI/ML to accomplish this, said Alex Weinert, partner director of identity security at Microsoft. Thanks to the company’s “massive investments in data science and AI,” Microsoft is able to process tens of billions of logins per day via its Azure Active Directory (AD) identity authentication service, Weinert said. Azure AD enables zero trust security via conditional access, the mechanism used for considering contextual factors in deciding whether to grant a user access. Microsoft then correlates that data with telemetry from endpoints (those that are secured with Microsoft Defender) and from email accounts (in Microsoft Exchange), he said. Bringing all of that together, and using AI/ML technologies such as predictive algorithms, customers are provided with an accurate picture of what is truly happening in their environment, Weinert said. Ultimately, adopting a zero trust approach brings a shift of mindset toward getting “proactive about security,” he said. “Zero trust is about saying, ‘Let’s prepare the ground so that we have the best possible advantage against the attackers.’” "
13,654
2,020
"How AI and remote collaboration tools could help the construction industry get back to work | VentureBeat"
"https://venturebeat.com/2020/05/20/how-ai-and-remote-collaboration-tools-could-help-the-construction-industry-get-back-to-work"
"How AI and remote collaboration tools could help the construction industry get back to work

Architect using a tablet at a construction site

In the two-plus months of global lockdown, much of the digital debate has focused on the pandemic’s long-term impact on the global workforce. Remote working has suddenly become the norm for millions of people, and companies like Twitter have already confirmed that employees will be able to continue working remotely indefinitely. Whether this trend proliferates remains to be seen, but not all industries lend themselves to working from home. As in many other hands-on sectors, builders, electricians, and plumbers can’t very well ply their trades over Zoom. But digital technology could still play a pivotal part in getting the $11 trillion construction industry back on its feet. The construction industry has never been renowned for efficiency — various reports indicate it is among the lowest-performing sectors in terms of productivity, due in part to a lack of digitization. 
This has led to a boom in investments aimed at bringing building sites into the digital era through robotics, AI, and other automated tools. Now the COVID-19 crisis could accelerate that shift.

Virtual tours

OpenSpace uses artificial intelligence (AI) to automatically create navigable 360-degree photos of construction sites. The San Francisco-based company’s software works in tandem with a 360-degree camera, which builders and site managers can strap to their hard hats to document the evolution of a site. OpenSpace captures all the imagery and uploads it to the cloud before tapping computer vision and machine intelligence to organize the photos, stitch them together, and map them to project plans. These visuals allow all stakeholders to monitor progress remotely through virtual site tours, and the digital record can also be used to resolve conflicts at a later date or help managers track multiple projects without being physically present. Ultimately, such technology can reduce the number of people who need to be on-site at a given time.

Above: OpenSpace: Comparing how a construction job has progressed

In the immediate aftermath of the global lockdown, OpenSpace reported a sharp spike in usage, including a 400% rise in “field notes,” which are essentially remote comments on a project image. The company also said it saw a 65% increase in the number of web-based viewing sessions and a 37% uptick in the number of on-site photo captures. OpenSpace also fast-tracked the launch of a new free version of its service called OpenSpace Photo — a basic “manual” offering aimed at smaller projects or subcontractors who only want to focus on a specific area of a build, such as a window installation. 
While the two versions of the OpenSpace platform target slightly different use cases, they are both designed to help construction sites operate with fewer people present. “With either version, you’re getting a purpose-built tool that makes it easier to collaborate with people virtually to monitor site status, make decisions about changes, and communicate any issues,” OpenSpace CEO and cofounder Jeevan Kalanithi told VentureBeat. “We’ve been planning the launch of OpenSpace Photo as a free service for some time now because we believe that photo-documentation is the future of the construction industry. We believe that any tool that can help builders continue their essential work while respecting distancing guidelines is useful, and we wanted to do our part for the industry.” The builders themselves still have to be physically present, but tools like OpenSpace make it easier for construction sites to respect social distancing measures by streamlining on-site teams. As a byproduct of this shift, construction companies are being forced to try new tools, which could lead to permanent changes in the way they operate. “We are definitely seeing that the current situation is accelerating the transition to remote working, as more companies find themselves suddenly reliant on remote workers to keep progress moving forward,” Kalanithi added. “Construction is not by any means a tech-averse industry, but because margins can be slim, many businesses like to see real ROI (return on investment) before investing in a new product. We think that the pandemic situation has made remote work a necessity and will accelerate the adoption of enabling technologies because the ROI is no longer a few extra percentage points of profit, but rather potentially the viability of the entire project.”

Face blurring

Elsewhere, London-based Disperse uses AI to improve the workflow of construction sites. 
Similar to OpenSpace, Disperse captures on-site visuals through a 360-degree camera and helps project managers spot issues before they escalate. The Disperse platform essentially creates what it calls a “digital twin” of a construction site by combining schedules, 3D models, drawings, and weekly snapshots captured on-site.

Above: Disperse in action

Disperse is also creating new tools and services to help the construction industry get back to work safely, including integrations with Microsoft Teams and Slack. Perhaps more interestingly, the company is repurposing some of its existing AI technology to help site managers figure out which tasks can be safely carried out while adhering to social distancing guidelines. Disperse previously used computer vision to blur workers’ faces when capturing imagery — to protect their privacy — but the company realized that historical face-blurring data could be used to measure physical distancing on building sites and map the data to specific tasks. This means construction companies can gauge how many people are typically required to do a particular job and how close those workers usually are to each other. “We’re now offering this functionality to our customers to help them understand things like which critical tasks are inherently safe because they can be performed by one person at a time; which tasks will always require more than one worker and should therefore require extra PPE (personal protective equipment) if [they’re] to continue; and which sort of high-risk, non-critical activities [we have] observed that can be eliminated or delayed altogether,” Disperse CEO Felix Neufeld said. 
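Measuring physical distancing from historical worker detections comes down to pairwise distances between detected positions. Here is a toy sketch of the idea; the site coordinates, the 2-meter threshold, and the data shape are all assumptions, not Disperse's implementation:

```python
from itertools import combinations
from math import dist

MIN_DISTANCE_M = 2.0  # assumed social-distancing threshold

def close_pairs(workers: list[tuple[float, float]]) -> int:
    """Count worker pairs standing closer than the distancing threshold.

    `workers` holds (x, y) site coordinates in meters, e.g. derived from
    where face-blurring detections were mapped onto the floor plan.
    """
    return sum(1 for a, b in combinations(workers, 2) if dist(a, b) < MIN_DISTANCE_M)

def task_is_distancing_safe(snapshots: list[list[tuple[float, float]]]) -> bool:
    """A task looks safe if no historical snapshot shows workers within 2 m."""
    return all(close_pairs(snapshot) == 0 for snapshot in snapshots)

# One worker alone, then two workers 1.5 m apart: the task needs rethinking.
history = [[(0.0, 0.0)], [(0.0, 0.0), (1.5, 0.0)]]
print(task_is_distancing_safe(history))  # False
```

Aggregating such checks per task type is what lets managers sort work into one-person-safe tasks, tasks needing extra PPE, and activities to delay, as Neufeld describes.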
“It’s not a ‘real-time’ solution that keeps tabs on workers at all times, but rather a new way to get an understanding of which tasks are still possible to do safely, and which tasks need to be reconfigured or resequenced.” In the wake of the COVID-19 crisis, Disperse noted a sharp rise in demand for its platform, reporting a 50% increase in “new project engagements.” Neufeld thinks — or perhaps hopes — that COVID-19 will expedite the construction industry’s embrace of digital technologies permanently. “The signs are everywhere,” he said. “Any solution that can ease the pains of a newly distributed workforce will be positioned to help right now. At Disperse, we’re experiencing an uptick in demand from contractors and developers who want to quickly understand how this crisis is impacting production and how to safely maintain a baseline of productivity where possible.” The status of the construction industry during the pandemic has varied wildly from country to country and state to state. In some regions, building work ground to a complete halt, while operations in other areas have been largely unaffected. It’s true the construction industry will never be able to go 100% remote, but the sector carries inherently fewer risks than many other industries that require workers to be physically present. For starters, many construction projects are located in large open spaces — whether indoors or outdoors — and safety guidelines around ventilation and PPE are often already in place. In this context, digital tools may simply serve as the “grease” to get some projects going again. “The new imperative of social distancing is challenging, and there’s no one ‘silver bullet’ that can solve it at once, but the situation is not insurmountable,” Neufeld added. 
“We’re working with our customers to make sure that anyone who can work remotely will at least have virtual access to a project, and to provide objective data that informs about which tasks can be done with fewer people in a physical space. [Having] fewer people on a site makes everybody safer, and to get there we’ve got to reconfigure processes and work smarter.” Procore , the California-based construction management software company that is reportedly shelving its IPO plans to raise more private funds, has also launched a bunch of video conferencing tool integrations over the past couple of months, including one with Zoom that was followed by tie-ups with Microsoft Teams and GoToMeeting last week. Above: Procore and Zoom And while architects, project managers, and engineers can use this integration to communicate remotely, it’s aimed at all stakeholders in the construction process — including on-site workers. “During the early stages of lockdown, customers wanted access to video conferencing tools to help support collaboration between home workers and on-site personnel, which we worked quickly to introduce to the platform,” said Brandon Oliveri-O’Connor, head of Procore in the U.K. and Ireland. “It’s not just about joining group calls to stay connected … construction workers on the ground have been using Zoom to show off-site personnel aspects of the construction site in real time, whether it’s to report a wall with a crack or to query something they’re not sure about.” New features built or accelerated to accommodate COVID-19 are only part of the story here. Most of these digital platforms have been designed from the get-go to help people collaborate without being in the same space. As construction sites gradually resume operations , logging a fault by uploading a photo to the cloud and then assigning the relevant contractor to fix it at a later time makes good sense all around. 
The global economy has already taken a thrashing over the past few months, and we will likely have to learn to live with COVID-19 rather than hunkering down in permanent lockdown. The world still needs new buildings, and people need to work — so construction technology may now have a real chance to shine. “Builders need to build now more than ever — we need more hospitals, more housing, and more care facilities,” Kalanithi added. “And we need the builders to stay healthy.” The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,655
2,020
"How robotics and automation could create new jobs in the new normal | VentureBeat"
"https://venturebeat.com/2020/08/17/how-robotics-and-automation-could-create-new-jobs-in-the-new-normal"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How robotics and automation could create new jobs in the new normal Share on Facebook Share on X Share on LinkedIn Depending on who you ask, AI and automation will either destroy jobs or create new ones. In reality, a greater push toward automation will probably both kill and create jobs — human workers will become redundant in certain spheres, sure, but many new roles will likely crop up. A report last year from PA Consulting, titled “ People and machines: From hype to reality ,” supports this assertion, predicting that AI and automation will lead to a net gain in job numbers. This is pretty much in line with findings from the Organization for Economic Co-operation and Development (OECD), a pan-governmental economic body spanning 36 member countries, which noted that “employment in total may continue to rise” even if automation disrupts specific industries. Automation has gained increased attention amid the great social distancing experiment sparked by COVID-19. But it’s too early to say whether the pandemic will expedite automation across all industries. 
Recent LinkedIn data suggests AI hiring slowed during the crisis, but there are plenty of cases where automation could help people adhere to social distancing protocols — from robot baristas and cleaners to commercial drones. Of course, any discussion about automation invariably raises the question of what it means for jobs. Humans in the loop As we’re still in the early stages of a broader shift to AI and automation, it’s not easy to fully envisage what new jobs could crop up — and which will be lost. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Slamcore is a London-based startup pushing to commercialize AI algorithms that help robots gain situational awareness from sensor data. Slamcore cofounder and CEO Owen Nicholson says we only have to look at some of today’s jobs to realize how difficult it can be to forecast the future. “Contrary to some beliefs, I see robots as creating vast amounts of new jobs in the future,” he said. “Just like 50 years ago a website designer, vlogger, or database architect were not things, over the next 50 years we will see many new types of job emerge.” Nicholson cites robot pilots as an example. “Ubiquitous, truly autonomous robots are still a long way from reality, so with semi-autonomous capabilities with humans in the loop, we can achieve much better performance overall and generate a brand-new job sector,” he added. There’s a growing consensus that humans will work in conjunction with robots, performing complementary roles that play to their respective strengths. San Diego-based Brain Corp recently locked down $36 million to “help meet the growing demand for autonomous mobile robots (AMRs)” across industries affected by the pandemic — from health care to retail. 
Brain Corp is the company behind BrainOS , an operating system that integrates with hardware and sensors and serves as the “brains” for delivery robots used in warehouses, factories, and retail stores. BrainOS also powers self-driving floor cleaners that assist human workers. The machines come equipped with a range of sensors, including lidar and 3D time-of-flight (ToF) sensors, to self-navigate in dynamic environments. Above: Robotic floor cleaners powered by BrainOS. Brain Corp said demand for BrainOS-powered cleaning robots has surged amid the COVID-19 crisis, with retail usage growing 24% in April 2020 alone. “A significant percentage of this uptick — 68% — is occurring during daytime hours, showing that businesses are cleaning more frequently and operating the technology during peak times,” Brain Corp executive Michel Spruijt told VentureBeat. The robots generate a significant amount of performance data, which is automatically compiled into reports that need to be interpreted, assessed, and analyzed to improve operation and fleet performance. While much of this work could be incorporated into existing roles, such tasks may eventually require dedicated employees, leading to the creation of new jobs. “Managers can view the routes being cleaned, take a look at quantitative metrics such as run time and task frequency, and receive notifications around diagnostics and relevant software updates,” Spruijt said. “An understanding of these reports and how to successfully interpret and apply this data will be imperative in order to improve store operations using automated technologies.” The robots are typically trained to follow routes through a “teach and repeat” method, with human workers guiding them along a cleaning route and making adjustments if the environment changes. As Spruijt is quick to point out, this process “proactively includes humans.” “The robot is not a functional robot without the human,” he said. 
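Brain Corp has not published the internals of its “teach and repeat” training, but the basic mechanism, recording waypoints during a human-guided demonstration and replaying them later, can be sketched roughly as follows. The class and method names are invented for illustration; a real robot would localize via lidar SLAM rather than accept hand-fed coordinates.

```python
from math import hypot


class TeachAndRepeatRoute:
    """Toy sketch of a 'teach and repeat' cleaning route."""

    def __init__(self, min_spacing=0.5):
        self.min_spacing = min_spacing  # meters between stored waypoints
        self.waypoints = []

    def teach(self, x, y):
        """Record a point from the human-guided demonstration, skipping
        near-duplicates so small wobbles don't bloat the stored route."""
        if self.waypoints:
            px, py = self.waypoints[-1]
            if hypot(x - px, y - py) < self.min_spacing:
                return
        self.waypoints.append((x, y))

    def repeat(self):
        """Return the stored route for replay; a real controller would
        drive to each waypoint and re-plan locally around new obstacles."""
        return list(self.waypoints)
```

The "adjustment" step Spruijt mentions would amount to re-teaching a segment of the stored waypoint list when the environment changes.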
Above: Cleaning equipment giant Tennant develops machines that use BrainOS. Additional new jobs could include maintenance workers to ensure the AMRs are functioning properly. “The process of physically building a robot and successfully maintaining it in the field requires a set of new or enhanced skills, which are likely to increase alongside adoption of AMRs,” Spruijt said. “As manufacturing lines start to ramp up robot production, [skill sets such as] tooling, light manufacturing, and familiarity with new hardware like touchscreens and lidars will be necessary. Once in the field, service providers with applied knowledge around robot maintenance and deployment are also important in ensuring success.” Humans and robots have distinct strengths and weaknesses, which is why a human-in-the-loop model makes sense for companies embracing automation. Veo Robotics is a Waltham, Massachusetts-based startup that uses computer vision and 3D sensing to give industrial robots greater perception. Its Veo FreeMove system , which is due to launch next year, is designed to help manufacturers coordinate the best attributes of robots and humans, meaning it’s neither completely manual nor completely automated. “What Veo does is enable a middle path, one where human workers with their flexibility, ingenuity, and dexterity can do the parts that humans are good at, while robots with their tirelessness and strength can help them by positioning parts or performing other elements of the process that are hard for the human worker,” Veo Robotics CEO and founder Patrick Sobalvarro explained. 
“It’s much quicker and cheaper to set up a work cell like this than to try to completely automate it because the human worker is able to do exactly the parts that are so hard to do automatically since they involve dexterity, sensing, and judgment.” “This means skilled welders will spend more time welding and less time fixturing, and quality technicians will spend more time measuring and less time moving parts around,” Sobalvarro added. “Everyone will be more comfortable and get more products built.” As industries strive to establish a new normal following the pandemic, human-robot collaboration could prove invaluable. “Manufacturers have to reduce human density in factories to comply with these new [social distancing] rules,” Sobalvarro continued. “The human-robot collaboration that Veo’s system provides can address this, as it means that instead of having two humans working closely together in a work cell, you will have a human and a robot working together.” Miso Robotics has been deploying its burger-flipping bots across the U.S. over the past few years. The company recently unveiled a next-gen robotic assistant called ROAR (robot on a rail) that can move between cooking stations, working the deep fryer and flipping burgers. With widespread lockdowns, restaurants have been among the hardest hit by COVID-19. CEO Buck Jordan naturally believes automation will play a big role in helping the food industry get back on its feet. “Now more than ever, [as] we are facing real new challenges and a whole new normal, navigating it is going to heavily fall on technology. And the restaurant industry is a prime example where automation needs to come in, in order to sustain the industry, drive growth, and create new job opportunities,” he said. “Incorporating automation into commercial food preparation empowers restaurant operators to safely reopen [and] attract customers with enhanced health and safety, as food comes into reduced contact with humans and points of contamination. 
[This] ultimately gives them the tools needed to increase production speeds and meet delivery and takeout demands — even in the face of new social distancing requirements that limit a full staff in the kitchen.” This all sounds like a death knell for traditional kitchen jobs, but if restaurants aren’t able to meet safety guidelines, the reality could be much worse. “Without giving restaurants the solutions they need to reopen and recover, the issue won’t be robots taking jobs. The real issue will be that there’s no jobs to take because restaurants can’t turn the profit they need to stay open, much less hire or create new job opportunities,” Jordan continued. In the field While drones incorporate various facets of automation, most require an operator to manage and oversee their deployment. People are needed to program flight paths and step in when things go wrong, and — as with other industries — perform maintenance. The commercial drone industry was in ascendance before COVID-19 struck, with reports suggesting the market would grow more than fivefold by 2026 from $1.2 billion in 2018. However, the pandemic seems to have increased demand for drone services in areas such as medical supply deliveries and site inspections. Mike Winn, cofounder and CEO of drone data software platform DroneDeploy , told VentureBeat the company had expected demand to grow across more than 10 industries in 2020 — and COVID-19 has only heightened that interest. “We’ve already seen growth in drone operations this year, despite COVID-19,” Winn said. “Our customer data showed 130% year-on-year growth among our active enterprise pilots, with minimal pandemic disruption from April 2019 to April 2020, and we just saw our greatest number of flights ever in May.” The San Francisco-based company’s platform allows commercial drone operators to map, survey, and inspect aerial images. These can be used to harness data in industries ranging from agriculture to mining, construction, and insurance. 
Above: DroneDeploy provides data capture and mapping software for commercial drone operators. Even if automation doesn’t create many jobs off the bat, drones offer a glimpse into the way traditional roles may evolve, with field engineers gradually shifting into a completely new role. “We’ve definitely seen companies training their teams on new skill sets involving drones,” Winn said. “For example, ‘drone operator’ is a quickly growing job title. Many field engineers are becoming drone operators as they have been wrapping drones into their roles more and more. In agriculture, this means capturing live field data, applying chemicals with precision, and more.” Dimitri Onistsuk is the cofounder of Freedom Robotics , a San Francisco-based company that builds software to control and monitor fleets of robots. Onistsuk said he is seeing certain roles evolve, and companies may need to expand their workforce to include dedicated specialists. “Companies need to broaden the skill range of their employees,” Onistsuk said. “For example, we’re seeing a bifurcation of robotics developers into engineers and operators. Because the demand for solutions is so high, it’s no longer a viable business decision to have engineers do operational tasks. Therefore it becomes important to have a higher-skilled individual focus on engineering and developing the robots — their vision, their algorithms, and so on — and have others focus on things like piloting, field operations, maintenance, and servicing.” Companies may have to reevaluate their workflow to ensure people’s skills are being used effectively. “It’s not efficient to send your top engineers across the country to fix a broken robot that is just sitting on the floor of an impatient customer’s site,” Onistsuk added. Like others in the robotics sphere, Onistsuk said Freedom Robotics has seen a “dramatic” increase in demand during the pandemic. 
More specifically, companies that were dabbling in the technology are now accelerating their plans. “Customers have always taken the potential of robotics seriously, but now we’re also seeing a much greater sense of urgency, where they are no longer waiting years to get their prototypes just right and are instead rapidly moving toward deployment,” he said. Onistsuk points to customers for real-world examples of how robots can augment the human workforce. These include teleoperator pilots and managers who supervise robots that are taking over some of the more mundane aspects of their role. “In the case of pilots, we are seeing humans in the loop, where a semi-autonomous delivery vehicle encounters an exception, such as a reflective material inside a warehouse or a bush that is difficult to navigate around near a sidewalk, and a human takes over via GPS, remote control teleoperation, or a script to reset back to autonomy,” he said. “In the case of manufacturing, robots are delivering parts to a particular station where their human counterparts take them and add them to the assembly.” While it may be difficult to pin down the kinds of new roles that could emerge, it can be fun to speculate. Onistsuk considers a scenario in which a robotic snowplow struggles to tell the difference between a snowbank and a car underneath a layer of snow, requiring a remote operator to step in and interpret the scene and manually navigate if required. As new technologies mature in the coming years, we’ll begin to get a clearer picture of the ways AI and automation will impact the workforce. “I think server and edge operations for robotics will be very important, where the necessary infrastructure is managed and assets are tracked,” Onistsuk said. “5G will bring a tidal wave of possibilities and will have a big effect on robotics and will require a new set of technical skills. 
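The human-in-the-loop pattern Onistsuk describes (autonomy by default, escalation to a remote operator on an exception, then a return to autonomy) is essentially a small state machine. The sketch below is illustrative only; the class, states, and method names are invented, not Freedom Robotics' API.

```python
from enum import Enum, auto


class Mode(Enum):
    AUTONOMOUS = auto()
    AWAITING_OPERATOR = auto()  # exception raised, operator console flagged
    TELEOP = auto()             # human driving remotely


class HandoffController:
    """Toy autonomy/teleoperation handoff loop for a delivery robot."""

    def __init__(self):
        self.mode = Mode.AUTONOMOUS
        self.exceptions = []

    def report_exception(self, reason):
        """Robot can't proceed, e.g. a reflective aisle or a snowbank
        that might be a buried car; flag it for a human."""
        self.exceptions.append(reason)
        self.mode = Mode.AWAITING_OPERATOR

    def operator_takes_over(self):
        self.mode = Mode.TELEOP

    def resume_autonomy(self):
        """Operator clears the exception manually or via a reset script."""
        self.mode = Mode.AUTONOMOUS
```

One operator watching many robots only ever sees the ones in `AWAITING_OPERATOR`, which is what makes the economics of semi-autonomy work.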
I think there will also be a new generation of hybrid workers for jobs that are around today but that will be done collaboratively with a robot — sanitation, industrial inspection, surgery, and so on.” Job transitions A 2018 report from PricewaterhouseCoopers (PwC) noted that the industries likely to benefit most from AI are “human” and “highly technical” sectors, such as health care, education, and science. “Teaching requires high levels of interpersonal skills that cannot easily be replaced by AI systems or robots, although they can be complemented by them to meet projected rising demand for education over time,” the report found. “As such, we only expect around 5% of educators to be displaced by AI, more than offset by job creation of 10%.” The report noted that machines could take on some of the more “boring” teaching tasks, such as marking homework or administering multiple choice tests. The report adds that the sectors “more likely” to experience net job losses are those with a “high degree of repetitive and routine tasks.” One example is manufacturing, which PwC estimates will see 25% fewer jobs by 2037 as a direct result of automation. Meanwhile, 40% of existing “transportation and storage” jobs could be displaced by automation — due to a rise in driverless vehicles and automated warehouses — with less than half of those replaced by new types of jobs. A recent report from the International Federation of Robotics (IFR) noted that there will be an “operational stock of almost 4 million industrial robots” in factories globally by 2022 and predicted “high demand for robotics skills” as part of the post-pandemic recovery process. The report added that governments will need to focus on education and training to equip their workforce with the necessary skills. Even if AI and automation lead to net job creation across the board, there will be significant disruption and upheaval as the global workforce adapts to shifting demands. 
Some roles may become obsolete, while others may branch into new directions or lead to the creation of entirely new job titles. What is easier to predict, however, is the emphasis these changes will place on upskilling and retraining in the years ahead. "
13,656
2,020
"Smooth teleoperator: The rise of the remote controller | VentureBeat"
"https://venturebeat.com/2020/08/17/smooth-teleoperator-the-rise-of-the-remote-controller"
"Smooth teleoperator: The rise of the remote controller When an interviewer pressed Postmates VP Ali Kashani last year on whether the company’s use of teleoperation technology was an “admission” that AI alone can’t solve all of the challenges its robots encounter on sidewalks, Kashani swiftly retorted: “That’s a strategy, not an admission.” Postmates, the on-demand delivery platform Uber is acquiring for $2.7 billion, is one of a number of companies developing autonomous sidewalk-traversing robots that deliver goods to homes and offices. Underpinning its service are human teleoperators who can step in and guide the robots when required. While AI-driven job loss has been hotly debated in recent years, mounting evidence suggests AI will also create jobs — like teleoperation — and open up the talent pool. 
San Francisco-based Postmates has employed the services of Phantom Auto, a company founded in 2017 to build remote communication software that integrates with all manner of unmanned vehicles, from robo-taxis and delivery robots to forklifts and yard trucks. Operators can use the software to monitor fleets or draw a path for a robot to follow. When necessary, they can even take over and control the vehicle directly. Excitement about an autonomous revolution has given way to “autonomous disillusionment,” with the prospect of fully self-driving cars retreating further into the future, despite impressive advances made over the past decade. Before driverless cars hit the mainstream, companies will need teleoperators to deal with all the “edge” cases on the roads, such as disorderly parking lots, roadworks, or stray animals. “I’ve been doing autonomous vehicles for a long time, and in 2014 everyone thought that by 2018, 2019, or 2020 that these vehicles were just going to be driving themselves,” Phantom Auto cofounder Elliot Katz told VentureBeat. “And now people realize this is a very complex problem. You need a human in the loop today, and [probably] 40 years from now.” Teleoperation has already been used to explore the world’s oceans and defuse bombs. Amid the COVID-19 crisis, however, teleoperation — whether taking control of a vehicle remotely or offering indirect “remote assistance” — could take on greater importance, as it minimizes social contact. Teleoperation could also open up roles to an aging or less physically mobile workforce. “People who would otherwise not have been able to operate a forklift — say, someone with a physical disability or someone who has gotten to an advanced age where their [physical] skills have atrophied a bit … they can now operate a forklift,” Katz said. 
“That’s something that we didn’t even think of before, but it has been a topic discussed across the board with most of our customers.” Remote work setups could also lead to “labor arbitrage,” with companies taking advantage of cheaper labor costs in other locales. Phantom Auto’s technology allows anyone to control a robot, taxi, or forklift from thousands of miles away, meaning a warehouse in a premium location can access a remote workforce with lower wage expectations. “In Silicon Valley, let’s say that you have to pay $20 an hour to a forklift operator,” Katz continued. “If you can now hire forklift operators in Kansas, or anywhere for that matter, there’s labor savings and you’re still getting the exact same output.” Then there are potential safety benefits — in the U.S. alone, the Centers for Disease Control and Prevention (CDC) estimates that around 100 workers are killed and 20,000 are seriously injured in forklift-related incidents each year. “Operating a forklift involves a lot of risk, as they are picking up and dropping off large pallets, sometimes at great heights — and there can be accidents,” Katz added. “So [by] removing humans from … the warehouse, you’re eliminating that safety risk.” Truck on Einride , a Swedish company developing electric autonomous trucks, started hiring for remote truck operators earlier this year and plans to retrain former truck drivers for the roles — although the training program is still a work in progress. “As this is a brand-new role and the start of a new profession entirely, we are still developing the training regimen for our operators,” said Einride founder and CEO Robert Falck. “Holding a heavy vehicle license is a requirement for the job, however, so the rest is additional training related to the uniqueness of the role and developing the protocols for future remote operators.” The company, which has raised north of $32 million from big-name backers like Ericsson, is still in the hiring phase. 
But it is eager to talk about a future in which teleoperators control multiple autonomous trucks from a single remote station. In autonomous mode, the operator monitors what’s going on to make sure everything is running smoothly. But with the tap of a button, they can take control to ensure the truck is safely maneuvered into a parking bay, for example. Above: Einride teleoperator controlling a truck remotely If any other vehicle in the fleet requires assistance, a message flashes on the operator’s screen, prompting them to switch screens and take control. Above: Einride teleoperator switching to control another truck A few months back, Einride revealed it would start developing human-driven trucks with some of the underlying technology from its fully automated vehicles, including electrification and the telematics hardware that delivers data to its freight mobility platform. While Einride has said “diversification” was always part of its plan, the move away from purely autonomous vehicles highlights some of the hurdles involved. Teleoperation is close to being a viable mainstream technology, but it will rely on the widespread proliferation of autonomous vehicles, mobile robots, and associated networking technology. “We are currently using this technology [teleoperation] at customer sites and on public roads in Sweden, notably at the DB Schenker facility outside Jönköping, so it is already on the market,” Falck said. “The biggest challenge will be scaling the service, as that is dependent on the proliferation of 5G on a much wider scale.” Teleoperation could eventually turn trucking into something approximating a 9-to-5 job. Because drivers would be able to control vehicles from anywhere, they wouldn’t have to put in long days on the road and sleep in motels or truck cabins at night. 
“Teleoperation has the potential to be as widespread in the future as truck driving is today, transforming what it means to be a trucker to … a more hospitable profession with more regular hours,” Falck continued. Third-party service Israeli startup Ottopia has been building out its teleoperation platform since 2018. It completed work on a minimal viable product (MVP) last year before deploying it commercially with a handful of (undisclosed) paying customers, according to founder and CEO Amit Rosenzweig. “Our customers are the organizations who develop all sorts of autonomous ground vehicles — delivery robots, forklifts, AGVs (automated guided vehicles) , yard trucks, excavators, taxis, freight trucks, combines, and so on,” Rosenzweig said. Above: Ottopia CEO Amit Rosenzweig Ottopia develops software that works with most off-the-shelf hardware, including Nvidia, Intel, and ARM architectures. And as with other startups developing teleoperation services, it’s betting many — if not most — companies will prefer to use a third-party teleoperation provider rather than build the infrastructure in-house. “Apparently, those companies have a lot on their plate already, plus it’s extremely difficult to build a reliable product that can really provide the needed teleoperation KPIs (key performance indicators) — for example, sub 100 milliseconds glass-to-glass latency at a 99.999% video availability,” Rosenzweig continued. “[It takes] many millions of R&D dollars spent on the right engineers and methodology to actually build a reliable teleoperation platform. Just like it’s faster and safer for those companies to just buy a camera or a lidar from a third party, it’s also faster and safer to buy a teleoperation platform from a third party.” Cellular connectivity is pivotal to Ottopia’s offering, and to those of others in the space. However, Ottopia has previously stated that it isn’t waiting for 5G to come into its own — instead, it’s going all-in on 4G LTE. 
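Those KPI figures can be made concrete. One simple way to check a video stream against them (illustrative only, not Ottopia's actual method) is to treat every frame that misses the 100 ms glass-to-glass deadline as unavailable and require "five nines" of on-time frames:

```python
def meets_video_kpi(frame_latencies_ms, deadline_ms=100.0, availability=0.99999):
    """Return True if the share of frames arriving under the glass-to-glass
    deadline meets the availability target; frames that miss the deadline
    count as unavailable."""
    if not frame_latencies_ms:
        return False
    on_time = sum(1 for ms in frame_latencies_ms if ms < deadline_ms)
    return on_time / len(frame_latencies_ms) >= availability
```

At five nines, just two late frames in 100,000 (roughly 55 minutes of 30 fps video) already fails the target, which illustrates why Rosenzweig calls the bar so hard to clear on cellular links.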
Although the startup’s team readily admits 5G will enable remote driving that’s “more efficient, at a lower cost” and will “unlock new use cases,” they consider 4G to be good enough for now — though they’ve found this to be a tough sell. Rosenzweig said one of the biggest challenges has been “testing — specifically, proving to ourselves and to our customers and partners that our platform works in a huge variety of cellular network conditions.” But the company remains undaunted. “We follow a very strict methodology. Over the last 20 months, we have recorded, cleaned, and analyzed more than 3,000 hours of high-fidelity cellular data from multiple countries. That data is used to train our machine learning algorithms to provide superior network performance,” Rosenzweig said. The teleoperator job itself is not particularly challenging, beyond the skill set a regular driver or operator would have. To carry those skills over, the remote station is usually designed to replicate a vehicle, with steering wheels, brakes, accelerators, and so on.

Above: Ottopia teleoperation station

The amount of time it takes to train someone depends on what it is they’re controlling — but we’re talking days, rather than weeks. “We’ve trained people, and it is based on experience — [but] around two full days to become fully comfortable,” Rosenzweig said. “That is, assuming you’re starting with a person who used to be a regular driver or forklift operator in their previous job. It’s a bit different when dealing with robo-delivery, because people didn’t have a previous job of driving a robot. Therefore, for delivery robots it could be three to four days to become fully comfortable.” Teleoperation won’t necessarily provide a “bridge” to full autonomy. While humble elevators used to have human operators paid to control them and give passengers peace of mind, those jobs are long gone, replaced by buttons and safety mechanisms that connect to the outside world.
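Ottopia has not published how its learned network models work. As a rough illustration of the underlying adaptation problem on 4G, a sender can pick the video bitrate from recent throughput samples, keeping headroom so a sudden dip does not stall the operator's feed (all values here are illustrative):

```python
def choose_bitrate(throughput_samples_kbps, ladder=(500, 1000, 2500, 5000, 8000),
                   safety_factor=0.7):
    """Pick the highest video bitrate that recent cellular throughput can sustain.

    Uses a conservative near-worst-case recent sample times a safety factor,
    so a sudden dip doesn't immediately stall the operator's video feed.
    Illustrative only; a production system would use a learned predictor.
    """
    if not throughput_samples_kbps:
        return min(ladder)
    samples = sorted(throughput_samples_kbps)
    # Roughly the 10th percentile of recent samples (the minimum for small windows).
    p10 = samples[max(0, len(samples) // 10 - 1)] if len(samples) >= 10 else samples[0]
    budget = p10 * safety_factor
    eligible = [rate for rate in ladder if rate <= budget]
    return max(eligible) if eligible else min(ladder)

print(choose_bitrate([4000, 5200, 6100, 3900, 4500]))  # 2500
```

The safety factor is the key design choice: too aggressive and the feed stalls on a dip; too conservative and the operator loses visual detail they may need.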
Rosenzweig sees parallels to autonomous vehicles, but cars are obviously much more complex than elevators, and lawmakers may always require someone able to take over if needed. “I don’t ever see the regulators saying, ‘We don’t need a backup anymore, this autonomy thing is so solid it will never stop or have an issue,'” he said. “No one will agree to put their family in such an autonomous vehicle if it doesn’t have any human backup whatsoever, even 20 years into the future.”

Connections

Many teleoperation companies have foundations in Israel, including Phantom Auto and Ottopia. According to Rosenzweig, Israel has “strong roots” in all the main technologies needed to build teleoperation technologies, including cybersecurity, video compression, and optimized communication (e.g., forward error correction [FEC], low latency, encryption). Another Israeli startup, DriveU.Auto, recently raised $4 million after spinning out of LiveU, a renowned specialist in HD video transmission. DriveU.Auto is a teleoperation connectivity platform for autonomous vehicles that focuses on ultra-low latency and “high reliability” across networks. DriveU.Auto CEO Alon Podhurst said customers are already using his company’s technology on public programs, but he declined to divulge names. “The main challenge today is getting a reliable low-latency link with high video quality from the vehicle to the remote control center,” Podhurst explained. “This is so hard because standard video does not operate well in dynamic conditions of bandwidth and latency. Operating at latencies of less than 100 milliseconds, any capacity issue has an immediate impact on the video, something you would not even notice in a voice call or when downloading a file.” DriveU.Auto aims to overcome these issues with a dynamic video encoding technology coupled with “cellular bonding,” which achieves higher bandwidth by combining modems.
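The "cellular bonding" idea, spreading one stream across several modems in proportion to each link's measured capacity, can be sketched as a weighted packet scheduler. This is a simplified model, not DriveU.Auto's implementation:

```python
def bond_packets(packet_ids, modem_capacity_kbps):
    """Assign packets to modems in proportion to each modem's measured capacity.

    Returns {modem_name: [packet ids]}. A real bonding stack would also
    duplicate critical packets and re-measure link capacity continuously.
    """
    total = sum(modem_capacity_kbps.values())
    assignment = {name: [] for name in modem_capacity_kbps}
    credits = {name: 0.0 for name in modem_capacity_kbps}
    for pkt in packet_ids:
        # Give each modem "credit" proportional to its share of capacity,
        # then send the packet over the modem with the most accumulated credit.
        for name, cap in modem_capacity_kbps.items():
            credits[name] += cap / total
        best = max(credits, key=credits.get)
        credits[best] -= 1.0
        assignment[best].append(pkt)
    return assignment

links = {"modem_a": 6000, "modem_b": 3000, "modem_c": 1000}
out = bond_packets(range(10), links)
print({k: len(v) for k, v in out.items()})  # {'modem_a': 6, 'modem_b': 3, 'modem_c': 1}
```

The credit scheme is a weighted round-robin: over time each modem carries a share of packets matching its share of total capacity, which is what lets several mid-band links stand in for one fast one.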
This helps it cope with unpredictable and fast-changing network conditions while supporting high-resolution video, audio, sensor data, and more. “The bonding solution maximizes the performance of the networks for the specific needs of the teleoperation service,” Podhurst added. “The dynamic encoding provides the best 4K video quality, yet adapts to lower resolution without losing a frame.”

Assistance

Russian tech titan Yandex has been developing self-driving cars for years. As with others in the space, it has remote capabilities to support the development of driverless vehicles, but its focus is on “remote assistance.” Yandex doesn’t plan to enable full teleoperation capabilities in its vehicles due to the inherent technological restrictions. “We don’t think directly remote-controlling a vehicle in real time can ever be safe enough, as it relies on cellular connection which is almost never 100% stable for long durations,” Yandex self-driving car head Dmitry Polishchuk said. “Thus, we are developing autonomous vehicle technology that will enable a car to safely navigate public roads without the need for a sustained internet connection.” For this, Yandex said it has developed proprietary remote assistance software that’s optimized for its self-driving system. Yandex anticipates using human intervention only for “corner cases,” where the vehicle can’t decide what course of action to take. In such situations, the car will slow to a halt and send a request for backup. This approach is particularly well-suited for environments where network connectivity is limited, given that it doesn’t require the “same stability and bandwidth as actual remote control,” Polishchuk added. “This means providing a vehicle with additional information or instructions on demand remotely so it can continue navigating autonomously,” he said.
“For example, if a lane is blocked as a result of a road accident and the only way to drive around it involves a forbidden behavior, such as crossing a lane marking, we can send the vehicle a permission to cross the marking for this particular event. The vehicle will then analyze the situation and make the maneuver when it is safe to do so.” To support this remote assistance approach, Yandex said it’s developing “special protocols” that enable faster data delivery between the vehicle’s sensors and remote operators, providing the operator with all the information they need to assess the road situation. Polishchuk said he believes self-driving technologies will eventually get better at solving corner cases independently and the need for remote assistance will decrease. But he said the vehicles will likely always need remote intervention capabilities. “The world is very complex and constantly changing,” he said. “We believe autonomous vehicles will always be challenged with new corner cases, which may require some kind of human intervention. They may also experience scenarios in new countries and regions they’ve never dealt with before, which will likely require remote assistance.” Uber has been another prominent player in the burgeoning driverless car industry, and it’s developing in-house teleoperation technology, combining “proprietary and industry-standard protocols,” according to Jon Thomason, VP of software engineering at Uber’s Advanced Technologies Group (ATG). Like Yandex, Uber is taking a lighter approach to teleoperation — keeping humans “in the loop,” rather than in control. “Remote vehicle assistance allows the operator to suggest maneuvers and verify that various actions would be safe and effective and allows the operator to monitor the action as it happens, but it’s the autonomy system driving the car,” Thomason stressed. 
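The assistance pattern Yandex and Uber both describe (the vehicle halts, requests help, a human grants a narrowly scoped permission, and the autonomy system executes the maneuver only when it judges it safe) can be sketched as a small message exchange. All types and names here are hypothetical; neither company has published its protocol:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    CROSS_LANE_MARKING = "cross_lane_marking"
    REVERSE = "reverse"

@dataclass
class AssistanceRequest:
    vehicle_id: str
    reason: str
    proposed_action: Action

@dataclass
class Permission:
    vehicle_id: str
    action: Action
    event_id: int  # valid for this particular event only

class Vehicle:
    def __init__(self, vehicle_id):
        self.vehicle_id = vehicle_id
        self.stopped = False
        self.permissions = []

    def encounter_blocker(self, reason, proposed_action):
        # Corner case: slow to a halt and ask a remote operator for backup.
        self.stopped = True
        return AssistanceRequest(self.vehicle_id, reason, proposed_action)

    def receive(self, permission):
        self.permissions.append(permission)

    def try_maneuver(self, action, path_is_clear):
        # The autonomy system still verifies safety itself before acting.
        allowed = any(p.action == action for p in self.permissions)
        if allowed and path_is_clear:
            self.stopped = False
            return "maneuver executed"
        return "waiting"

car = Vehicle("av-7")
req = car.encounter_blocker("lane blocked by accident", Action.CROSS_LANE_MARKING)
# The remote operator reviews sensor data and grants a one-event permission:
car.receive(Permission(req.vehicle_id, req.proposed_action, event_id=1))
print(car.try_maneuver(Action.CROSS_LANE_MARKING, path_is_clear=True))  # maneuver executed
```

The key property is that the human never steers: the permission only unlocks a specific action for a specific event, and the vehicle still decides when it is safe to act, which is why this needs far less bandwidth than live remote control.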
“Our system allows for monitoring in real time, as well as having the ability to notify the operator when human assistance [is required] — typically to unblock trip progression.” Teleoperators are trained in the autonomous system and remote assistance technology, but they don’t train specifically in steering, braking, or other driving maneuvers. “We believe that teleoperation, also known as full remote driving, is not a good interim step to autonomy, and [we] are not planning to do it,” Thomason added. “The path we’re pursuing is human-in-the-loop. Assisting the autonomy system with long-tail events that the system cannot handle without assistance will probably be around for a long time.”

Smooth (tele)operator

There has been a flurry of activity across the teleoperation realm in the past year. Voyage, which spun out of Udacity and last year raised $31 million to commercialize community-focused autonomous taxis, recently launched Voyage Telessist, a software and workstation offering for remote operators.

Above: Voyage’s Telessist Pod: A custom-built workstation for remote operators

Electric micromobility startups Tortoise and Go X recently kicked off a pilot in Georgia that allows customers to beckon an electric scooter through a mobile app. While the scooters have full autonomy built in, the companies are using teleoperation as a bridge until residents become accustomed to seeing riderless scooters. Postmates rival DoorDash recently snapped up teleoperator startup Scotty Labs, though it has yet to share plans for the acquisition. But given that DoorDash has been piloting autonomous robots for several years already, the possibilities are readily apparent. All this activity suggests we could be on the cusp of a major industrial shift, one that eases geographic restrictions on the labor pool, enhances safety, and widens the job market for aging or less physically mobile workers.
Teleoperation might not be a new concept, but with the proliferation of high-speed internet, it could play a key role in taking autonomous transportation mainstream. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
The meatpacking industry is an incubator for AI, automation, and COVID-19 | VentureBeat
https://venturebeat.com/2020/08/17/the-meatpacking-industry-is-an-incubator-for-ai-automation-and-covid-19
The meatpacking industry is an incubator for AI, automation, and COVID-19

In early spring 2020, Smithfield, Tyson, and other industrial food suppliers warned that upwards of millions of pounds of meat could disappear from the U.S. supply chain as a result of the coronavirus. Although it now appears these fears were overblown or possibly a ploy to bolster exports (excepting pork products like pepperoni), tens of thousands of slaughterhouse workers around the world have tested positive for COVID-19, and more than 90 of them have died from the virus. As the health crisis stretches on, the threat to meatpacking, meat processing, and distribution center employees has researchers hunting for a new production model. Even with physical distancing protocols and personal protective equipment like face shields and masks, plant closures are looming — and the idea of automation is rapidly gaining ground.

The danger zone

The U.S.
meatpacking industry employed nearly 600,000 workers — a large portion of whom are immigrants — at wages averaging $15.92 an hour in 2019. The field has high turnover, and a January 2005 report released by the Government Accountability Office showed that some worksites experience over 100% annual churn. In April 2017, a U.S. Immigration and Customs Enforcement raid exposed slaughterhouses that had knowingly hired — and in some cases trafficked — undocumented employees with the promise of steady income. Meat processing is dangerous work. The rate of cumulative trauma injuries — serious physical injuries from repeated or prolonged activities — is the highest of any U.S. industry, at about 33 times the national average. According to federal statistics, nearly one out of 10 meatpacking workers suffers a cumulative trauma injury every year, down from one in four workers just over 20 years ago. The unrelenting pursuit of speed and profit is likely to blame. The more animals are slaughtered per hour, the less it costs to process each one. In 1976, the typical line speed in a U.S. slaughterhouse was roughly 175 cattle per hour. By 2001, that had climbed to around 400. (One Tyson-owned plant in Holcomb, Kansas reportedly slaughters up to 6,000 head of cattle per day.) The U.S. Department of Agriculture (USDA) imposes line speed restrictions on a per-industry basis, but — citing an internal rule change in 2018 — it granted a record number of waivers to poultry and swine processors earlier this year. Plants owned by Tyson and Wayne Farms, among others, were allowed to operate lines at 175 birds per minute instead of 140. In response, activist groups filed a lawsuit against the USDA, arguing the waivers made conditions more dangerous for workers.
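The economics behind rising line speeds reduce to simple amortization: hourly fixed costs (plant, line labor) are spread over more head per hour. The figures below are illustrative, not from the article:

```python
def cost_per_head(fixed_cost_per_hour, variable_cost_per_head, heads_per_hour):
    """Processing cost per animal: hourly fixed costs amortized over
    throughput, plus per-animal costs. Illustrative numbers only."""
    return fixed_cost_per_hour / heads_per_hour + variable_cost_per_head

# Raising line speed from 175 to 400 head/hour with $10,000/hour fixed costs:
print(round(cost_per_head(10_000, 12.0, 175), 2))  # 69.14
print(round(cost_per_head(10_000, 12.0, 400), 2))  # 37.0
```

More than doubling throughput nearly halves the per-head cost in this toy model, which is the pressure driving both faster lines and the waiver requests described above.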
On the line, hundreds of people stand in close proximity, wielding sharp materials as carcasses hung on hooks from overhead chains move toward them. Boston Consulting Group estimates meat processors employ 3.2 workers per 1,000 square feet of manufacturing space, or 3 times the national average for manufacturers. Lacerations are common — workers stab themselves or someone nearby — as are accidents involving power tools, conveyor belts, falling carcasses, and slippery floors. Repetitive motion injuries can lead to lifelong impairments as workers repeat motions throughout their shift, making the same knife cut 10,000 times a day or lifting the same weight every few seconds. Occupational Safety and Health Administration (OSHA) 2014 data showed that repetitive motion injuries among beef and pork processing workers occurred at 7 times the rate of other industries.

Robots to the rescue?

There’s a limit to what humans — even pushed hard — can physically do. This is certainly true in the meatpacking industry, which began to introduce automated machinery and robots as early as the 1960s. While robots can’t address every task on the meat processing line, they’re increasingly able to perform the bulk of them — from packaging and sealing to cutting thoraxes and extracting viscera. Machines can scan, weigh, and measure carcasses to eviscerate them “intelligently,” with the more sophisticated models planning blade trajectories for cutting, separating meat from carcasses and boning them out. Dunedin, New Zealand-based Scott Automation is one of the largest meat processing robot providers in the world, with operations across five continents and customers in over 80 countries. In collaboration with partners like Meat & Livestock Australia, it develops and supplies machines like the Primal System for lamb, which uses computer vision to create a 3D map of bones within lamb forequarters, middles, and hindquarters for precision cutting that factors in height and angle measurements.
For customers operating at scale, there’s Scott Automation’s automated boning system, which comprises six machines that transfer meat from one to the next in sequence. It separates up to 12 carcasses a minute into three sections using the Primal System for guidance, processing the forequarter, middle, and hindquarter before removing the knuckle tip from the hind leg. A reconfigurable middle system locates the spinal cord holes at either end of the saddle section, using a combination of vacuum and compressed air to remove spinal materials while a chine station bones the rack saddle, adding up to five grams to the yield per carcass. Scott Automation sales director Chris Hopkins told VentureBeat via email that while some of the company’s machines reduce the need for manual labor, none fully replaces human workers. He pointed out that available space in already tight facilities is often a challenge and that the systems’ primary value derives from the accuracy of their cuts, which increases carcass profits. “We … work closely with the processor companies to help equip their staff with the skills required to support and maintain the technology. We do not believe you would ever (or want to) fully replace all workers,” Hopkins said. “There is some thinking that automation and robotics can reduce the reliance on or the total number of people required to operate a processing facility, but any solution that does that … is still some years away.” Tokyo-based Mayekawa agrees with that assessment of meat-processing robots’ capabilities. The meatpacking automation company, which has sold thousands of chicken and turkey leg deboners, neck slitters, neck skin removers, and gizzard openers to companies in over 26 countries, says its products are designed to work alongside workers, rather than stand in for them. Customers who replace manual pork and ham deboning work with its machines realize a 50-60% workforce reduction in deboning-related tasks, Mayekawa claims.
“We think that automation is vital to maintaining the food supply chain,” Mayekawa engineering manager Shinji Shimamura told VentureBeat. “The pandemic has brought to light the need to further automate wherever possible.” Mayekawa’s marquee product — Legdas — separates chicken bone and leg meat at a rate of up to 3,000 legs per hour with results “as good as when done by hand.” The company also offers a poultry cutting machine that segments cuts with a touch pen, eliminating the need for workers to carry carcasses to a workbench or use a knife, and a feather bone extractor that automatically measures carcasses and peels bone from meat. (Together, they process 150 to 200 head per hour.) And Mayekawa recently developed Hamdas, a swine deboning machine that uses X-ray technology and AI to identify left and right legs, pick out femur bones and shank bones, and make slits in up to 500 carcasses per hour with blade-equipped arms. In some cases, automation in the meatpacking industry has the potential to shift rather than reduce the demand for labor. A crab-processing robot developed by the Canadian Centre for Fisheries Innovation (CCFI) that cuts crabs in half and removes their legs is designed as part of a robotic system to extract meat from the crabs’ shells. It’s a process that’s typically done overseas, but the designers assert it can solve some of the workforce problems in rural Newfoundland fish plants that are attributed to changing demographics. “Younger people are not being attracted to the industry … If you talk to operators of fish plants today, everybody needs more people,” Bob Verge, managing director of the CCFI, told the Canadian Broadcasting Corporation. “A large part of the labor force in our processing sector now comes from the baby boomer generation.
We can’t replace those baby boomers with an equal number of younger people.”

The coronavirus problem

At the urging of regulators, a number of processing plants have implemented infection prevention measures, including surgical masks, fever screening, and barriers encouraging six feet of distance between workers. However, companies complain these measures might slow down production. Indeed, U.S. pork processing dipped 6% year-over-year for the week ending May 30. Both Scott Automation and Mayekawa claim that some of their machines reduce workers’ risk of coming into contact with the coronavirus. The Centers for Disease Control and Prevention’s latest report notes that workers who are struggling to keep up breathe harder and might have difficulty keeping masks properly positioned on their faces. Other experts theorize the cold temperatures — and aggressive ventilation systems — required to prevent spoilage could be allowing the coronavirus to stay viable for longer. Scott Automation says most of its robotic systems operate in “exclusion zones” to keep people away from the equipment and maintain safety. There might be as many as two operators, but they’re physically distanced, and machines can be installed with “clean-in-place” technology that automates sterilization. Mayekawa says its Hamdas system optionally sterilizes blades as it processes carcasses. However, the company concedes most of its products require daily maintenance and upkeep, putting the onus on workers to maintain a proper distance and scrub down surfaces. There’s evidence — albeit anecdotal — that automation technologies have prevented (or at least forestalled) some COVID-19 infections among factory workers. Costco’s high-tech chicken processing plant in Nebraska, which employs two shifts of roughly 400 employees, reported only a single COVID-19 case in mid-April.
In Denmark, across all 18 of the Danish Crown’s almost entirely automated meatpacking facilities, fewer than 10 workers have tested positive out of 8,000. And in Michigan, Clemens Food Group’s robotic pork-cutting packaging plant stayed open through May, slowing production only to install new protective equipment. By comparison, a Smithfield-owned facility in Sioux Falls, South Dakota disclosed hundreds of cases in early April, culminating in the facility’s closing. Tyson was also forced to shutter its Columbus Junction plant after dozens of workers contracted the virus, as were meatpacking operators in Canada, Spain, Ireland, Brazil, and Australia.

Business as usual

Meat processors are likely to further embrace automation for all the reasons mentioned — safety, yield, and reduced labor. Last August, Tyson, which has invested more than $215 million in robotics over the past six years, opened a facility near its headquarters in Springdale, Arkansas to develop automation solutions for its production plants. (In July, the Wall Street Journal reported that Tyson engineers and scientists are developing an automated deboning system to help butcher the nearly 40 million chickens processed each week.) In fall 2015, Brazil-based JBS — the world’s largest meatpacker — acquired a controlling share in Scott Automation. And Pilgrim’s Pride invested over $30 million in automation last year, targeting projects it says are helping its plants run efficiently in the midst of the pandemic. “We believe in automation, we believe in robotics, and we’re going to continue to move down that path,” Pilgrim’s Pride CEO Jayson Penn told analysts during an April earnings call. “This is something that [even] pre-COVID we’ve been addressing and doing with our facilities, using more automation and more robotics.” But change is unlikely to occur overnight.
Tyson, JBS, Cargill, and other meat giants say robots can’t yet match humans’ ability to disassemble animal carcasses that subtly differ in size and shape. Finer cutting such as trimming fat largely remains in the hands of human workers. A skilled loin boner can efficiently carve a cut of meat like filet mignon without leaving too many scraps that get turned into lower-value products, such as the finely textured beef used in hamburger meat. And automating even a portion of the line is expensive. While Scott Automation offers sub-$200,000 solutions, it says its customers generally spend in the millions of dollars and don’t anticipate a return on investment for at least a year. “[The pandemic] will raise interest in automation, but I’m unsure if it will accelerate adoption,” Hopkins said. “That will come down to how much the meat processors — our customers — want the equipment and are prepared to invest their resources to achieve a faster adoption.”
The promise of automation — and those who could be left behind | VentureBeat
https://venturebeat.com/2020/08/17/the-promise-of-automation-and-those-who-could-be-left-behind
The promise of automation — and those who could be left behind

If you squint just right, the immediate future of work looks bright. Despite widespread unemployment during the pandemic — up to 16.3 million in the U.S. as of July, according to the Bureau of Labor Statistics — new jobs, and new types of jobs, are surfing the oncoming wave of robots and automation. These positions are showing up in industries ranging from shipping to trucking, construction, transportation, delivery, health care, and manufacturing. But viewing automation solely through the lens of techno-optimism is, at best, myopic. Some people will be left behind, and it’s important to understand who will be most affected — based on race, age, gender, or other factors — and what can be done about it.

A wealth of opportunity

Broadly speaking, AI and automation promise new job opportunities and trillions of dollars in economic growth. While some of these jobs are highly technical, many won’t require an advanced degree or even a background in technology.
Some roles will be filled by upskilling or reskilling workers with existing expertise, like retraining truck drivers to remotely pilot autonomous trucks. Jobs involving remote operators may create unprecedented opportunities for people with physical limitations or sensory issues that prevent them from participating in traditional workplaces. And teleoperation is ideal for the throngs of workers eager to move away from densely populated cities, whether to reduce their potential COVID-19 exposure or to find affordable housing and a better quality of life. Many jobs can be partially automated, with technological advancements sparing workers tedium and physical labor. In work environments like those found in the meatpacking industry, automation can augment productivity while reducing the number of people needed on the line. This allows workers to maintain safe distances from each other and helps keep businesses running — and food on our table — during tough economic times. But these examples cut both ways. A machine that reduces manual labor is likely to displace the person who performed that work. Even if it’s necessary to reduce the number of people standing shoulder to shoulder for health purposes, the net result is fewer people working. Those jobs become casualties of both the pandemic and automation — and are unlikely to return.

Automatable

A 2019 report from McKinsey details who is most at risk of being left behind by automation, which tends to come down to how automatable their job is. A similar report from the Brookings Institution on automation and AI frames the issue by parsing tasks from skills. Citing earlier work from economists David Autor, Frank Levy, and Richard Murnane, the Brookings report says, “A job is a bundle of tasks, to which workers apply skill endowments in exchange for wages.
Some of these tasks may become automated. Others may not. Skills belong to workers, which can be ported to other jobs — even those with a different task composition.” In other words, automation cannot replace people — just some of the tasks they do in the course of performing their job. Of course, that distinction is of little comfort to those who find themselves out of work. The McKinsey report found that the types of jobs most susceptible to automation by 2030 include cashier, food server, retail salesperson, customer service rep, office clerk, janitor, housekeeper, stock clerk, and order filler. Jobs at least risk of displacement include those in education, creative roles, health professions, business and legal professions, and jobs in property management and agriculture. When the report’s authors mapped demographics like race and gender onto the job types, they were able to quantify which groups of people were most at risk. They found that Latinx and Black workers faced the greatest risk of displacement, at rates of 25.5% and 23.1%, respectively. Meanwhile, white workers face a displacement rate of 22.4% and Asian American workers of 21.7%. There’s a great deal of nuance behind those numbers. Breaking things down by gender shows that the situation is more dire for Black men than for Black women, who are more likely to work in less automatable positions, including as health aides and nursing assistants. (Black people, generally, are overrepresented in the jobs most at risk of automation and underrepresented in those that are less vulnerable.) But women also face significant risks. Another McKinsey study found that globally, 40 million to 160 million women will need to shift into different occupations — some requiring additional skills — to keep pace with automation. And the types of jobs women are more likely to hold are at a greater risk of partial automation. 
Location also matters because new automation jobs tend to cluster in geographic hotbeds (remote operator roles notwithstanding). People who live outside growth areas are less likely to grab those jobs — or may need to relocate. Younger workers are more vulnerable than those in the middle of their careers, and those without a college degree are more vulnerable still. There’s also the matter of pay. “Only half of the top 10 occupations that African Americans typically hold pay above the federal poverty guidelines for a family of four ($25,750), and all 10 of those occupations fall below the median salary for a U.S. worker ($52,000),” according to the McKinsey report. As a survey from the Joint Center for Political and Economic Studies noted, people of color will make up the majority of the U.S. population in the next 20 to 30 years. That’s soon — within roughly a generation — and reinforces the need for equitable access to the new jobs promised by automation. The survey also found that “Asian Americans, African Americans, and Latinos were all more likely than whites to be interested in obtaining education or training from all the provided options, including a college degree program, online college, community college, online training, a trade union, and a GED.”

Advice for a new world

Researchers are busy exploring ways to prepare workers for the coming wave of automation. Much of their advice centers on education — in traditional higher education institutions but also through professional certificate programs and two-year associate degrees, as well as general reskilling and retraining. These same researchers urge policymakers, educators, and companies to make reskilling attainable for more workers. The McKinsey report presses higher education institutions to improve retention and completion rates for Black students and advocates decreased enrollment in for-profit schools.
It advises companies to avoid imposing degree requirements that are higher than necessary and urges them to consider hiring skilled workers rather than only those with a university degree. It also suggests public and private sectors work together on targeted programs to increase awareness of shifting job requirements, offer support for higher education, and provide a path to transitioning into higher-paying and more future-resistant jobs. The Brookings report arrives at similar conclusions, suggesting the need for public/private coordination to ease workers’ transitions and reduce hardships (potentially through targeted programs). The report advises measures to “future-proof” local and regional economies and communities from the negative impacts of job displacement and loss.

Like so many things, these studies address a very different world from the one we currently inhabit. Although research-based updates for 2020 are still in the works, VentureBeat spoke with one of the 2019 McKinsey report’s authors, Shelley Stewart III, to understand what, if anything, has changed during the pandemic. Stewart said much has remained the same in terms of the team’s findings and suggestions, despite the relative economic chaos of the past few months. “I don’t think any of those things have changed,” he said. “The folks who are getting furloughed — not all of them, but the majority of them that do these jobs — were already at risk of [being displaced by] automation.” But he noted that the pandemic has accelerated the pace of those changes.

The number of jobs vulnerable to automation certainly appears to have increased in recent months. Stewart said that in April, McKinsey estimated 53 million U.S. jobs were vulnerable, but around four months later that number had ballooned to 57 million. To address accelerated job vulnerability, we need accelerated interventions. “This is not going to be solved by any one industry group, or even only by the private sector,” he said.
People working in private and social sectors will have to work in tandem, and governmental leaders will need to make job displacement a priority. “That’s the thing that we keep trying to push, is [that] this is going to require coordinated effort and it should be a top agenda item for whoever the next [presidential] administration is,” Stewart said.

Stewart also believes we need to shift our cultural orientation to embrace continued learning. Rather than seeing job training or a degree as terminal, people need to be constantly adding to their knowledge and skill sets. “But it requires a completely different way of thinking,” he said. “This notion of moving from ‘You get some formal education, then stop, and then you go to work’ versus ‘You’re on a perpetual learning journey, reskilling yourself based on where the puck is going.'” Authors of the Brookings report agree, citing the need to promote a “constant learning mindset” in workers, as well as in the educational system and within companies.

Keeping up with the times is always a challenge, and it may feel unfair that these changes are happening at greater speeds because of the pandemic. Unfortunately, this acceleration may leave many people out in the cold, at least in the short term. But a cultural shift toward perpetual learning could go a long way toward ensuring more people partake in the spoils of automation. Meanwhile, a concerted, multi-stakeholder effort to identify the jobs and workers most at risk and work aggressively to reduce barriers to their success could make all the difference.

The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles!
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,659
2,020
"Facial recognition is no match for face masks, but things are changing fast | VentureBeat"
"https://venturebeat.com/2020/04/08/facial-recognition-is-no-match-for-face-masks-but-things-are-changing-fast"
"Facial recognition is no match for face masks, but things are changing fast

In a major about-face in public health policy, the Centers for Disease Control (CDC), U.S. Surgeon General Dr. Jerome Adams, and state and local health officials around the country recently began urging people to wear homemade face masks when they’re out in public. The directive is not meant to replace social distancing, but to reduce the spread of infection and ensure the most effective personal protective equipment goes to health care workers on the front line. But it could also throw a wrench in a number of facial recognition applications, including those used to unlock smartphones.
Less than a year old, Google’s facial recognition system on Pixel 4 smartphones is built to recognize a person even if they’ve shaved their beard or are wearing sunglasses, but Face Unlock for Pixel 4 is rendered virtually useless by homemade face masks. A Google spokesperson told VentureBeat that Face Unlock isn’t made to recognize people wearing face masks and declined to say whether the company is working to add that capability to its system.

The Pixel 4 isn’t alone. Apple’s Face ID for iPhones launched in 2017 as one of the first facial recognition systems for smartphones. Some of the initial complaints about face masks rendering facial recognition inoperable were against Apple’s Face ID and came from Californians who kept their faces covered during the 2018 wildfire season, as well as people in parts of Asia, where it’s common for people to wear face masks when they’re sick. Those frustrations have resurfaced with the emergence of COVID-19. As a workaround, in a video published last month, a Tencent security employee demonstrated the ability to train Apple’s Face ID to recognize a smartphone user by doing a new facial scan with half of their face covered by a mask and the other half uncovered.

COVID-19 is expected to change the world in significant ways, from an increase in telehealth and video calls to shifts in economic and public health policy, but it may also lead to more facial recognition technology that’s capable of identifying people in masks. That technology will surely live on well after the pandemic to unlock your phone, enable purchases, and recognize people at protests or political rallies. The new face mask recommendation from public health officials means facial recognition systems for smartphones and other settings must either adapt and grow more robust or be put on hold for a range of applications.
Facial recognition is also in use in some workplaces for clocking in and out and for identity verification, and in parts of China for making purchases. It’s not yet clear whether U.S. public health officials plan to use facial recognition in contact tracing, but questionable companies like Clearview AI are attempting to sell facial recognition to state agencies for the purpose of tracking people infected with COVID-19.

Mapping masked faces

The vast majority of AI systems today are designed for recognizing not just the area around your eyes, but also your nose, mouth, and the curvature of the lower half of your face. Few facial recognition systems today recognize people wearing face masks, but some of the first to do so have emerged in China in recent weeks. To create its face mask data set, the Chinese firm Hanwang asked its employees to share photos of themselves wearing face masks. Based on those photos, the company generated thousands of simulated images of fake people in masks. Now the company says its AI is capable of achieving 95% accuracy, but it’s designed for use in an office setting and for up to 50,000 employees, Hanwang CTO Huang Lei told Ars Technica in March.

Also last month, researchers from Wuhan University released the Real World Masked Face Recognition data set, which they believe is the biggest masked face data set in the world. Using one of three real and simulated data sets, they claim they trained AI to achieve state-of-the-art performance, correctly recognizing people 95% of the time. The portion of the data set that includes real people has 5,000 pictures of 525 different people wearing masks and 90,000 images of the same 525 subjects without masks. By contrast, the simulated data set is orders of magnitude larger and includes 500,000 images of 10,000 fake people.
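Neither Hanwang nor the Wuhan group has published the augmentation pipeline described above, so as a minimal, illustrative sketch only (the function names, the flat-color fill, and the crop ratios are all our assumptions, not anyone's published method), simulating a masked face and restricting matching to the periocular region can be as simple as occluding the lower portion of an aligned face crop:

```python
import numpy as np

def simulate_mask(face: np.ndarray, coverage: float = 0.5) -> np.ndarray:
    """Occlude the lower `coverage` fraction of an aligned face crop with a
    flat pale-blue fill, crudely approximating a surgical mask for training
    data augmentation. (Illustrative stand-in, not a published pipeline.)"""
    masked = face.copy()
    top = int(face.shape[0] * (1 - coverage))
    masked[top:, :] = np.array([200, 220, 235], dtype=face.dtype)
    return masked

def periocular_crop(face: np.ndarray, keep: float = 0.45) -> np.ndarray:
    """Keep only the upper eye/forehead band -- the region masked-face
    recognizers are forced to rely on when the nose and mouth are hidden."""
    return face[: int(face.shape[0] * keep)]
```

In a real system the mask overlay would follow detected facial landmarks rather than a fixed horizontal cut, but the training principle is the same: the recognizer only ever sees features that remain visible above the mask.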
Researchers open-sourced the collection of three data sets for the express purpose of making existing facial recognition systems around the world better at recognizing people in masks in public places like train stations or security checkpoints. They also support using facial recognition to spot people who aren’t wearing face masks in public; going unmasked in public is illegal in China during the coronavirus pandemic. “[I]t is necessary to improve the existing face recognition approaches that heavily rely on all facial feature points so that identity verification can still be performed reliably in the case of incompletely exposed faces,” a preprint paper on arXiv reads. “Our research has contributed scientific and technological power to the prevention and control of coronavirus epidemics and the resumption of production in industry. Furthermore, due to the frequent occurrence of haze weather, people will often wear masks, and the need for face recognition with masks will persist for a long time.” In addition to making stronger facial recognition systems for use in public areas, researchers want their data sets to allow facial recognition to overtake identity authentication methods that require touch, like fingerprint scanners and keypads.

A masked future?

In an op-ed earlier this week, Northeastern University professor Woodrow Hartzog said face masks are a temporary speed bump for facial recognition. While they present a challenge, he believes face masks will not stand in the way of increased facial recognition use in the age of COVID-19. Health officials are asking a majority of U.S. citizens and a sizable percentage of the global population to stay at home right now, but face masks could become more common when people return to work, at least until a suitable vaccine emerges, which could take more than a year. The CDC said its face mask policy is voluntary, but in places like Riverside County, California, sheriff’s deputies will fine or jail people who do not wear face masks.
That’s all to say that as the country with the most confirmed cases of COVID-19 in the world, the United States might be a masked people for a while, and facial recognition that’s capable of recognizing individuals wearing masks may become an expected feature for systems that unlock phones, track COVID-19 outbreaks, or enforce quarantines. Apple’s Face ID and Google’s Face Unlock don’t take face masks into account today, but don’t be surprised if COVID-19 leads to better AI for unlocking smartphones or paying for coffee, as well as for tracking COVID-19 cases and dissidents at protests. Before the novel coronavirus changed all our lives, facial recognition and fights over masks were most closely associated with protests and anti-face mask laws passed in Hong Kong last fall.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13,660
2,020
"MIT announces Bluetooth breakthrough in coronavirus-tracing app for Android and iOS | VentureBeat"
"https://venturebeat.com/2020/04/08/mit-announces-bluetooth-breakthrough-in-coronavirus-tracing-app-for-android-and-ios"
"MIT announces Bluetooth breakthrough in coronavirus-tracing app for Android and iOS

MIT and makers of the app Private Kit: Safe Paths say they’ve overcome an Android and iOS interoperability issue that will make the COVID-19 contact tracking app able to track people in close proximity with others using Bluetooth. MIT’s Lincoln Laboratory says it accomplished the feat last week. Currently, Private Kit logs location history using GPS for 28 days. Bluetooth proximity apps record when two devices running the app are near each other, and when a person tests positive for COVID-19, notifications can be sent to people who crossed their path. Safe Paths will soon be able to share any incidences of contact between two people that have occurred within 14 days.
Project lead and MIT associate professor Ramesh Raskar said the Private Kit: Safe Paths team is currently in talks with over 30 countries, including India, Italy, Germany, and Vietnam. An MIT spokesperson said Private Kit pilots are also underway in a number of countries, including Ethiopia, Haiti, India, Italy, Spain, and the United Kingdom, as well as five locations across the U.S. — from Alaska to Los Angeles and an area outside Boston. The Private Kit: Safe Paths team is also in ongoing negotiations and talks with the World Health Organization and U.S. Department of Health and Human Services. “Safe Paths is a platform to create completely interoperable standards. So we expect most apps to be based on the safe paths repository,” he said. “And in case Brazil creates one and Mexico creates one, and so on, [for] anyone who travels from one country to another, it’s the same base for everyone because we don’t expect Brazil to use an MIT app.” Private Kit is also working with makers of other Bluetooth tracing apps, like COVID Watch, on an open source offering that ensures Bluetooth pings picked up by one app are seen by others. Bluetooth proximity tracing apps must overcome interoperability issues before being considered a viable solution for recording instances when a person may contract COVID-19. Privacy is also a primary concern. “So the healthy people never have to share their data, but for infected people, they can release that data in an anonymized, aggregated, and redacted fashion. The next version will be encrypted as well,” Raskar said. The makers of Bluetooth contact tracing apps say if the solution gains widespread adoption, it could be part of the way some countries or regions allow people to return to normal life in a world without a cure for COVID-19.
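The app's source isn't shown in the article, but the retention policy it describes (keep 28 days of history locally, share at most a redacted recent window when a user tests positive) can be sketched as follows. This is a hedged illustration only: the class, method names, and hour-level timestamp rounding are our assumptions, not Private Kit's implementation.

```python
import time
from dataclasses import dataclass, field

DAY = 86_400  # seconds in a day

@dataclass
class ContactLog:
    """Rolling local log: entries expire after `retention_days`, and only a
    narrower, coarsened window is ever exported for sharing."""
    retention_days: int = 28                       # Private Kit keeps 28 days
    entries: list = field(default_factory=list)    # (timestamp, payload) pairs

    def record(self, payload, now=None):
        now = time.time() if now is None else now
        self.entries.append((now, payload))
        self.prune(now)

    def prune(self, now):
        cutoff = now - self.retention_days * DAY
        self.entries = [(t, p) for t, p in self.entries if t >= cutoff]

    def export(self, window_days=14, now=None):
        # Release only recent contact times, coarsened to the hour and with
        # payloads dropped -- a stand-in for the "anonymized, aggregated,
        # and redacted" release Raskar describes.
        now = time.time() if now is None else now
        cutoff = now - window_days * DAY
        return sorted({int(t // 3600) * 3600 for t, _ in self.entries if t >= cutoff})
```

Healthy users simply never call `export`; the data never leaves the device unless an infected user opts in, which matches the consent model the team describes.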
Private Kit builders include makers of FluPhone, MIT Alliance for Distributed and Private Machine Learning, members of MIT Media Lab, and MIT CSAIL’s Ron Rivest, cocreator of the RSA algorithm and symmetric key encryption algorithms. MIT also worked with Massachusetts General Hospital Center for Global Health, Boston University, Brown University, the Weizmann Institute of Science, and SRI International. In addition to allowing people to follow proximity tracing today, creators say Private Kit: Safe Paths is intended to act as a proof of concept for Apple and Google, dominant controllers of mobile operating systems around the world. “They have a critical role here. The aim of the prototype is to prove to these developers that this is feasible for them to implement,” Rivest said in a statement shared with VentureBeat. In recent days, U.S. senators have questioned Apple and Google over the privacy implications of COVID-19 surveillance. Engineers who built Private Kit were advised by a medical advisory team led by Louise Ivers, an infectious disease expert and executive director of the Massachusetts General Hospital Center for Global Health. Montreal Institute for Learning Algorithms head and deep learning expert Yoshua Bengio also contributed as an advisor. Since the launch of its Android and iOS app roughly one month ago, Safe Paths has been downloaded more than 10,000 times.
"
13,661
2,020
"What privacy-preserving coronavirus tracing apps need to succeed | VentureBeat"
"https://venturebeat.com/2020/04/13/what-privacy-preserving-coronavirus-tracing-apps-need-to-succeed"
"What privacy-preserving coronavirus tracing apps need to succeed

At present, most of the U.S. population is being asked to stay home to flatten the curve of the coronavirus pandemic, but as hundreds of millions of people begin to think about how to return to normal life, the need to trace the spread of the disease becomes crucially important. Public health officials traditionally use manual methods to conduct contact tracing, or mapping out who has a disease and who they come into contact with. But a tech solution could also be an efficient way of reaching people potentially infected with coronavirus. Quick and effective contact tracing is especially important as countries around the world consider lifting quarantines, restarting economies, and returning to normal life; health officials need to be able to quickly spot any outbreaks or clusters of infection so they can curb any potential re-emergence of an outbreak. Dozens of nations are already using some form of surveillance for contact tracing.
In countries including Hungary, leaders used the crisis as an excuse to seize additional powers by enacting emergency laws. In some places, civil liberty organizations have labeled compulsory location tracking apps draconian or warned of the end of privacy. Epidemiologists who are calling for more surveillance are stoking these fears. World Health Organization (WHO) executive director Dr. Michael Ryan recently insisted that surveillance coupled with testing must be part of the return to normal life in many places. Privacy advocates fear that governments will take away personal liberties in the name of fighting COVID-19 and will never give them back. But it doesn’t have to be that way.

Bluetooth contact tracing may provide the necessary tracking with a lower risk of violating civil liberties or handing sensitive data to governments. Epidemiologists, top researchers, major privacy advocates, and now Apple and Google are exploring Bluetooth contact tracing with this in mind. When it’s secured with cryptology, they say, it’s the best way to protect privacy, track movement, and leverage the devices most people around the world already have in their pockets. Additional methods of contact tracing via smartphones use location tracking through cell phone tower triangulation, Wi-Fi triangulation, and GPS. Different groups working on improved contact tracing methods disagree on whether to use all or some of those methods, but they seem to agree that using Bluetooth is a good idea.

Advocates of Bluetooth say it’s an ideal solution because it’s designed to reach short distances, and health authorities have said many COVID-19 infections happen when people are fewer than six feet apart. With distance estimated based on Bluetooth signal strength, interactions that last longer than a few minutes can be stored locally on devices. When a person tests positive, they enter a code so everyone with the app who is at risk of exposure is also alerted.
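The article describes this close-and-sustained heuristic in prose only. A minimal sketch under assumed constants (-59 dBm as the calibrated 1 m signal strength, a free-space path-loss exponent of 2, 1.8 m for "six feet," and five minutes for "a few minutes"; none of these values come from any specific app):

```python
def estimate_distance_m(rssi: float, tx_power_1m: float = -59.0, n: float = 2.0) -> float:
    """Log-distance path-loss model: a rough metres-from-signal-strength
    estimate. The -59 dBm calibration constant is an assumption."""
    return 10 ** ((tx_power_1m - rssi) / (10 * n))

def risky_encounters(pings, max_distance_m=1.8, min_duration_s=300):
    """Keep only close (roughly six feet) and sustained (a few minutes)
    encounters; each ping is (device_token, start_s, end_s, mean_rssi)."""
    return [
        p for p in pings
        if estimate_distance_m(p[3]) <= max_distance_m and (p[2] - p[1]) >= min_duration_s
    ]
```

Real RSSI readings are noisy (bodies, pockets, and walls all attenuate the signal), which is why production systems average many samples per encounter rather than trusting a single reading.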
Beyond alerting coworkers or friends when they’ve been exposed, contact tracking apps also make it possible for public health officials to send notifications to people who perhaps sat near one another on a train or met at a school or religious gathering. In what may be the biggest endorsement yet for the Bluetooth contact tracing method, Apple and Google recently announced that they’re partnering on a solution that combines Bluetooth, cryptology, and location tracking. Apple and Google will release an API in May, followed by a platform for building Bluetooth tracing into software. In the coming months, Android and iOS operating system updates will enable the automatic tracking and storage of Bluetooth Low Energy signals being sent by other devices. That way, a public health authority can instruct a person to download a contact tracing app after they test positive to share contact episodes stored locally on a smartphone. The news promises the potential proliferation of Bluetooth contact tracing apps. “Through close cooperation and collaboration with developers, governments, and public health providers, we hope to harness the power of technology to help countries around the world slow the spread of COVID-19 and accelerate the return of everyday life,” reads a joint statement from Apple and Google. Research shows that accurate contact tracing may help contain COVID-19 in areas without wide community spread, but a lot has to happen before this privacy-preserving approach is truly possible. Lending further credibility to this approach, Democratic presidential candidate Joe Biden shared in a New York Times op-ed on Sunday that privacy-conscious contact tracing and abundant testing capabilities are essential aspects of his plan for restarting the economy.
If everything goes according to plan, such apps carry the promise of, as MIT professor Alex Pentland put it, “restarting the economy and avoiding Big Brother.” Apple and Google’s participation could remove a key hurdle to adoption for privacy-preserving contact tracing apps, but other challenges remain. VentureBeat spoke with the creators of COVID Watch and Private Kit: Safe Paths in order to understand the difference between top offerings — and what needs to change. Both apps are open source and are being made in the U.S. by a mix of cryptographers, privacy advocates, public health experts, and engineers. For widespread adoption, these apps will require cooperation not just from Apple and Google but from governments, public health officials, and the average person.

A private path to contact tracing

Perhaps the furthest along of all Bluetooth tracing apps for COVID-19 is Private Kit: Safe Paths, which is designed to eliminate the risk of government surveillance. Creators of the app hail from MIT, Harvard University, and other East Coast institutions; they say they’re in talks with the WHO and over 30 countries around the world to work with Private Kit and make their own contact tracing apps. Trials are also underway in multiple parts of the U.S., from Alaska to Los Angeles and the Boston area. The team making Private Kit: Safe Paths represents some of the biggest names at the intersection of AI, privacy, and security. Team lead Ramesh Raskar was an executive at Alphabet’s experimental X unit, a member of Apple’s privacy team, and leader of a Facebook team working with Bluetooth. Quick containment, Raskar argued in a white paper accompanying the launch of Private Kit last month, is a key component of ending an outbreak, provided community spread hasn’t already become pervasive. Private Kit: Safe Paths was inspired by Apple’s Find My feature, which locates a lost device via Bluetooth, and it was created to act as a proof of concept for Apple, Google, and Microsoft.
The next version of Private Kit: Safe Paths will include encryption, helped along by MIT professor Ron Rivest, a cryptologist and cocreator of the RSA security algorithm. Collaborators include members of the MIT Alliance for Distributed and Private Machine Learning who have contributed to projects related to differential privacy, federated learning, and other influential privacy-related methods for machine learning. Advisors on the project include the WHO, Harvard Medical School, the Mayo Clinic, and Mila professor and deep learning pioneer Yoshua Bengio — who’s developing another location tracking app being considered by Canadian authorities.

Multinational interoperability

Another key element for Bluetooth tracing app success is interoperability between apps. Created earlier this month by a coalition of 10 organizations, the open source Temporary Contact Number (TCN) protocol is designed to collectively share contact event numbers and ensure Bluetooth signals are received no matter what tracing app a user chooses to download. Makers of the TCN protocol say Apple and Google’s Bluetooth contact tracing plan is virtually identical to their own. Bluetooth proximity tracing apps can offer different levels of privacy and features, but Raskar said they’re designed for international tracking. “Safe Paths is a platform to create completely interoperable standards. So we expect most apps to be based on the safe paths repository,” he said. “And in case Brazil creates one and Mexico creates one, and so on, [for] anyone who travels from one country to another, it’s the same base for everyone because we don’t expect Brazil to use an MIT app.” An open letter signed by roughly 100 researchers, privacy advocates, and public health officials and distributed by COVID Watch privacy advisor Peter Eckersley says ensuring Android and iOS interoperability is one of the best things tech companies can do to fight the pandemic.
Achieving mass adoption

An obvious barrier to any app-based contact tracing is mass adoption. Tina White, the executive director of the COVID Watch group that makes the eponymous app, hopes to launch it with a big PR campaign, which ideally will encourage more people to download it. It would be even easier if Apple and Google pushed out updates to iOS and Android, respectively, that include a choice to add contact tracing apps. No matter how contact tracing apps dependent on widespread adoption roll out, White said Apple needs to fix the foreground-background issue with location tracking apps. This issue means iPhones can’t exchange Bluetooth contact numbers if they’re locked. COVID Watch, TraceTogether (a tracking app launched in Singapore), and other apps have the same problem. The way things work now, a person with an iPhone would have to walk around with their phone on all the time for a contact tracing app to work. “With the current protocol … the app would have to be put on in the foreground on their phone while they go around, which is annoying, and we’re trying to get around that. We’d have to get cooperation from Apple to change the battery policy about foreground versus background, which is another hurdle that’s in the way,” she said. A COVID Watch spokesperson said the organization expects this issue to be addressed as part of its work on an API with Google, but Apple hasn’t confirmed plans to do so yet. VentureBeat reached out to Apple to ask whether Bluetooth interoperability issues currently being addressed include allowing contact tracing apps to operate in the background. Speaking with VentureBeat a week before the Apple-Google partnership was announced, White argued that Apple should only allow apps that follow a decentralized approach to run in the background on iOS devices, and she suggested it would be smart for mobile operating system providers in general to back a decentralized approach to location tracking.
For Android’s part, COVID Watch noted in a white paper last month that the operating system has multiple bugs that cause phones to lock up when they’re trying to connect to too many Bluetooth devices at once. Because your phone’s contact tracing app needs to have Bluetooth always on for the app to work as designed, that could create problems. In addition to the need for continued cooperation from Apple and Google, Private Kit: Safe Paths lead Raskar said organizers may release an app for mobile Windows devices for use in workplace settings.

Cooperation from public health officials

The cooperation of public health officials is essential for a number of reasons, not least of which is the need to assure notifications sent to users are credible and trustworthy. To avoid the potential for false positives or the need to rely on less accurate methods like self-reporting, Bluetooth tracing apps will need public health officials to distribute special codes to people who test positive for COVID-19. The makers of COVID Watch said public health official cooperation is important because people need to trust that the notifications they get from the app are based on credible exposure. As countries around the world move forward with location tracking apps, federal public health authorities in the United States have shared little guidance on their approach. VentureBeat reached out to the Centers for Disease Control and Prevention’s (CDC) COVID-19 team for details about federal guidance on location tracking that balances privacy requirements with the demands of controlling the pandemic, but we had not heard back at the time this story was published.

Centralized vs. decentralized tracing

White told VentureBeat that collaborators started work on the privacy-conscious location tracking app in February after it emerged that China and South Korea were using location tracking apps to slow the spread of COVID-19.
China uses GPS and can assign risk scores based on location history to determine an individual’s freedom of movement. South Korea also used smartphone tracking. Neither government anonymized the personal data, which created privacy concerns. The COVID Watch group, which comprises around 200 Bluetooth experts, developers, privacy advocates, public health officials, and academic researchers, is working to discover the least privacy-invasive way to track people during a pandemic. White says they’ve found it, in the form of decentralized Bluetooth tracing. “I think it’s the best option for privacy,” White said. “This is the method we think minimizes privacy harms.” “Everybody’s trying to stop the virus, so it makes sense that people are making privacy trade-offs. I think that if this Bluetooth method were not available, I would be advocating for making at least a little bit of a trade-off because it’s really important to stop this right now, and I’d probably be advocating for whatever the other minimal trade-off is. But we think this is probably the best one, where you don’t have to provide any identifying information at all.” A COVID Watch spokesperson told VentureBeat the organization is extremely glad Apple sided with decentralization, and since the European Commission also recommended data be decentralized last week, the centralization approach seems dead in the water. COVID Watch says partnering with others is necessary to avoid reinventing the wheel. That spirit is reflected in the group’s willingness to collaborate with other contact tracing apps on the TCN protocol. Made by nearly a dozen organizations, the protocol uses anonymized numbers to represent each device. The protocol is designed so that phones can get notifications without revealing any identifiable tracking information, no matter which app they download. 
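The anonymized, rotating numbers that TCN-style protocols broadcast can be illustrated with a simple hash ratchet: each phone keeps a private seed and derives short-lived identifiers from it, so nearby observers can't link the broadcasts to a person, while anyone who later learns the seed can regenerate the whole chain. This is a hedged sketch of the idea only; the real TCN key schedule and wire format differ:

```python
import hashlib
import secrets

# Illustrative sketch only: not the actual TCN key schedule or wire format,
# just the core idea of unlinkable, regenerable broadcast numbers.

def ephemeral_ids(seed: bytes, count: int) -> list:
    """Derive a chain of short-lived broadcast identifiers from one private
    seed by ratcheting a hash forward (one identifier per ~15-minute slot)."""
    ids, key = [], seed
    for _ in range(count):
        key = hashlib.sha256(key).digest()  # ratchet: can't be run backward
        ids.append(key[:16].hex())          # 128-bit value to broadcast
    return ids

seed = secrets.token_bytes(32)
day = ephemeral_ids(seed, count=96)          # 96 slots x 15 min = 24 hours
assert len(set(day)) == 96                   # every broadcast looks unrelated
assert ephemeral_ids(seed, count=96) == day  # the seed regenerates the chain
```

If a user later tests positive, publishing just the 32-byte seed lets other phones re-check a whole day of broadcasts without the user ever uploading a list of who they met.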
The group also works with Community Epidemiology in Action (CoEpi), Private Kit: Safe Paths, and TraceTogether, and is advised by public health and epidemiological experts from New York University and Stanford University. Despite a willingness to partner with others, COVID Watch draws a sharp contrast between its approach and that of other proximity app makers. Both Private Kit: Safe Paths and COVID Watch offer anonymized Bluetooth contact numbers, use the TCN protocol, and offer cryptographically secure contact tracing, but COVID Watch considers the kind of GPS tracking Private Kit: Safe Paths performs to have significant privacy implications. Future features for COVID Watch may include personalized recommendations, the ability to self-report symptoms, and advice on how to get home testing. The app’s makers may also adapt it to work with inexpensive Bluetooth hardware in countries with low smartphone penetration. Like the makers of Private Kit: Safe Paths, the COVID Watch team planned to create a heat map using epidemiological models, but they found that GPS anonymization is more challenging than Bluetooth anonymization and comes with increased privacy risks. The current version of Private Kit: Safe Paths can use both centralized and decentralized approaches, and Raskar believes an approach that combines Bluetooth with GPS and Wi-Fi network tracking is required in order to provide people with heat maps and supply health officials with data. “So the healthy people never have to share their data, but for infected people, they can release that data in an anonymized, aggregated, and redacted fashion. The next version will be encrypted as well,” Raskar said. Similar debates around centralization versus decentralization are taking place in Europe, where multiple transnational Bluetooth tracking projects are currently underway. 
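Decentralized designs like the TCN protocol move the exposure check onto the phone itself: users who test positive publish the seeds behind their rotating identifiers, and every device re-derives those identifiers and compares them against what it overheard over Bluetooth. The sketch below assumes a simple hash-chain derivation and a 15-minute broadcast interval; the function names are illustrative, not part of any published spec:

```python
import hashlib
import secrets

# Hedged sketch of on-device exposure matching; the derivation scheme and
# 15-minute granularity are assumptions, not a published protocol's spec.

def ids_from_seed(seed: bytes, n: int = 96) -> list:
    """Re-derive a day's rotating identifiers from a published seed."""
    out, key = [], seed
    for _ in range(n):
        key = hashlib.sha256(key).digest()
        out.append(key[:16])
    return out

def local_risk(heard: set, published_seeds: list, minutes_per_id: int = 15) -> int:
    """On the phone: count exposure minutes against identifiers it overheard.
    The server only ever sees the seeds of users who reported infection."""
    minutes = 0
    for seed in published_seeds:
        minutes += sum(minutes_per_id for i in ids_from_seed(seed) if i in heard)
    return minutes

alice_seed = secrets.token_bytes(32)
alice_ids = ids_from_seed(alice_seed)
# Bob's phone overheard 4 of Alice's rotating identifiers (about an hour),
# plus unrelated broadcasts from strangers:
bobs_log = set(alice_ids[10:14]) | {secrets.token_bytes(16) for _ in range(50)}
assert local_risk(bobs_log, [alice_seed]) == 60  # 4 identifiers x 15 minutes
```

Because the log of overheard identifiers never leaves the device and only self-reported positives publish their seeds, the backend never learns the interaction graph, which is the property decentralization advocates emphasize.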
A team of 25 professors and researchers from France, Germany, the Netherlands, and Switzerland released a white paper last week called “Decentralized Privacy-Preserving Proximity Tracing” (DP3T). With DP3T, each person’s tracing data stays on their smartphone, where an algorithm computes a virus contraction risk score locally. That decentralized tracking method, they say, is the best way forward. The Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project also launched recently. The app exchanges anonymous identifier codes like other Bluetooth apps do and is based on the idea that the current health care crisis shouldn’t lead to a backslide in privacy rights. But in contrast to DP3T, it offers a mix of centralized and decentralized approaches. Singapore’s TraceTogether likewise uses a blend of centralized and decentralized methods. “We strongly urge governments, health authorities, and researchers that any deployment of proximity tracing follows a decentralized design similar to our system to avoid the creation of centralized systems that have the potential to become surveillance infrastructures,” the DP3T group said in the paper. “Compared to a central design in which the backend would compute risks and inform users, our design protects interaction graphs from the backend, and only a determined tech-savvy adversary can learn any extra information besides the one made visible by the app. The centralized system, in comparison, leaks a lot of unnecessary information about contacts to the backend and requires large amounts of trust in a central entity.” ACLU surveillance and cybersecurity counsel Jennifer Granick was equally firm about the need for decentralization in a statement she made following the Apple-Google contact tracing news last week. “To their credit, Apple and Google have announced an approach that appears to mitigate the worst privacy and centralization risks, but there is still room for improvement. 
We will remain vigilant moving forward to make sure any contact tracing app remains voluntary and decentralized and used only for public health purposes and only for the duration of this pandemic,” Granick said in a statement provided to VentureBeat. In a post Sunday, however, University of Cambridge security engineering professor Ross Anderson — who is currently advising U.K. government officials considering contact tracing apps — said decentralization is no panacea, and some cryptographers say they see flaws. “[D]ecentralized systems are all very nice in theory but are a complete pain in practice, as they’re too hard to update,” he said. “Relying on cryptography tends to make things even more complex, fragile, and hard to change. In the pandemic, the public health folks may have to tweak all sorts of parameters weekly, or even daily. You can’t do that with apps on 169 different types of phone and with peer-to-peer communications.” Anderson has a number of reservations about contact tracing apps, including the National Health Service’s (NHS) poor record with data protection. And he is concerned a system for lightly anonymized data collection won’t be disassembled when the crisis is over. He also pointed to other practical considerations, like the fact that people with COVID-19 may be too sick to operate a smartphone and the likelihood of false positives, such as when speaking with a neighbor through a closed window or when Bluetooth passes through plaster walls. Efforts should focus foremost on things like making testing and ventilators available before worrying about apps, he said. But decentralization advocates DP3T and COVID Watch believe strong personal privacy protection and high user trust can lead to higher rates of adoption for Bluetooth proximity tracing apps. “A common concern with systems like these is that the data and infrastructure might be used beyond its originally intended purpose,” the DP3T report reads. 
“Such assurances will likely be important to achieve the necessary level of adoption in each country and across Europe, by providing citizens with the confidence and trust that their personal data is protected and used appropriately and carefully.” Higher levels of user trust may also lead to higher rates of self-reported surveillance data and higher levels of opt-in participation. Spokespeople from both Google and Apple stressed that their contact tracing initiative cannot succeed without mass adoption, and that adoption is not possible if people don’t trust the system and choose not to participate because they fear their privacy is at risk.
The need for testing
In order for contact tracing apps to have the greatest impact and give people confidence to return to work, they need to be paired with COVID-19 testing. In the absence of testing, health officials can’t confirm who actually has the disease, which dilutes the accuracy and impact of the tracking. We’re still learning who has immunity and whether people who have recovered can experience a resurgence of the virus; and of course, there’s no vaccine or cure yet. It’s unclear whether widespread testing will be available as lockdown orders begin to lift. Federal health officials are considering solutions like saliva testing, more at-home testing, and Abbott tests that deliver results in minutes. The United States still trails behind nations like Germany and South Korea in per capita testing, and some experts say the U.S. needs to quadruple testing capabilities in order to adequately respond to future needs. In March, President Trump said anyone who needed testing would be able to get it, but on multiple occasions in recent days he has cautioned against a need for widespread testing except in specific cities. In what might be the biggest decision of his presidency, Trump set May 1 as a date to reopen the economy and resume normal life, a date some experts say might prove unrealistic. Dr. 
Anthony Fauci of the National Institute of Allergy and Infectious Diseases said antibody testing will become increasingly available in the coming days. In the absence of testing, contact tracing apps can collect self-reported symptoms from users. Some epidemiologists and public health officials have said increased surveillance or self-reported information that reveals influenza-like illness can be good indicators of COVID-19 outbreaks or show where resources are most needed. Examples include anonymized data from companies like Apple, Facebook, and Google and surveys like COVID Near You, which asks people how they’re feeling. To date, the COVID Near You website, which was created by tech engineers and hospitals, has recorded how more than 400,000 people are feeling across the United States, Canada, and Mexico.
The road ahead
Some argue that because COVID-19’s incubation period is up to two weeks, tracking people’s movements must be a key part of any long-term strategy. And vaccine developers say we shouldn’t expect a vaccine for 18 months or more, making an interim solution essential. When it comes to implementing contact tracing apps, COVID Watch executive director White perhaps best summed up what we need next. “The important thing is buy-in from public health and Apple’s battery policy. Those are the two things we need from the outside. Everything else, the technical challenges, I think we can handle,” she said. Alongside widespread testing, fully anonymized Bluetooth tracing apps could in theory allow people around the world to participate in contact tracing without sacrificing privacy, but initial adoption has been slow. With some of the biggest names in AI and security attached to it, Private Kit: Safe Paths attracted a fair amount of media attention when it launched last month. But in the weeks since, the app has seen only 10,000 downloads. COVID Watch hopes to make a bigger splash with a coordinated PR campaign when its app launches soon. 
Even if location tracking apps with encryption and Bluetooth emerge as top methods for contact tracing — successfully balancing a need for surveillance with privacy protections — they can’t work without buy-in from individuals, help from Apple and Google, and coordination with national public health officials. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,662
2,020
"European scientists and researchers raise privacy concerns over coronavirus contact tracing apps | VentureBeat"
"https://venturebeat.com/2020/04/21/european-scientists-and-researchers-raise-privacy-concerns-over-coronavirus-contact-tracing-apps"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages European scientists and researchers raise privacy concerns over coronavirus contact tracing apps A woman holds a mobile phone while the spread of the coronavirus disease (COVID-19) continues, London, Britain, April 19, 2020. ( Reuters ) — A rift has opened up over the design of smartphone apps to trace people in Europe at risk of coronavirus infection, potentially hindering efforts to curb the pandemic and ease crippling travel restrictions. Scientists and researchers from more than 25 countries published an open letter on Monday urging governments not to abuse such technology to spy on their people and warning of risks in an approach championed by Germany. “We are concerned that some ‘solutions’ to the crisis may, via mission creep, result in systems which would allow unprecedented surveillance of society at large,” said the letter that gathered more than 300 signatures. 
Tech experts are rushing to develop digital methods to fight COVID-19, a flu-like disease caused by the novel coronavirus that has infected 2.4 million people worldwide and been linked to 165,000 deaths. Automating the assessment of who is at risk and telling them to see a doctor, get tested, or self-isolate is seen by advocates as a way to speed up a task that typically entails phone calls and house calls. Contact tracing apps are already in use in Asia, but copying their approach by using location data would violate Europe’s privacy laws. Instead, Bluetooth chatter between devices is seen as a better way to measure person-to-person contacts. The apps should be voluntary and would need to be downloaded by at least 60% of the population to achieve the “digital herd immunity” needed to suppress COVID-19, say researchers from Oxford University’s Big Data Institute. Yet controversy over the best way forward could delay the rollout of apps that could help governments contain any new outbreaks once they have brought the pandemic under control.
Mission creep
The rift has opened up over a German-led initiative, called Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT), which has been criticized for being too centralized and thus prone to governmental mission creep. Its critics back a decentralized contact tracing protocol called DP-3T pioneered by Swiss researchers that is aligned with a technology alliance between Apple and Alphabet’s Google. The details are highly technical but revolve around whether sensitive data would be kept safely on devices or stored on a central server in a way that might allow a bad actor to reconstruct a person’s “social graph” — a record of where and when they meet other people. “Solutions which allow reconstructing invasive information about the population should be rejected without further discussion,” the scientists said in their letter. 
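One intuition behind such high adoption targets: a contact can only be traced if both people carry the app, so under the simplifying assumption of independent, uniform adoption, the fraction of contacts covered falls off with the square of the adoption rate. A rough illustration, not the Oxford group's actual epidemiological model:

```python
# Back-of-envelope arithmetic, assuming adoption is independent and uniform
# across the population (a simplification): a contact between two people is
# traceable only if both of their phones run the app.

def contact_coverage(adoption_rate: float) -> float:
    """Fraction of person-to-person contacts in which both parties have the app."""
    return adoption_rate ** 2

assert round(contact_coverage(0.60), 2) == 0.36  # 60% adoption covers ~36% of contacts
assert round(contact_coverage(0.20), 2) == 0.04  # 20% adoption covers only 4%
```

Even before accounting for phones left at home or apps restricted to the foreground, half the population adopting an app would cover only a quarter of contacts.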
Among the signatories was Michael Backes, head of Germany’s CISPA Helmholtz Center for Information Security, which pulled out of PEPP-PT over the weekend. Swiss researchers have also publicly dissociated themselves from PEPP-PT, citing concerns over centralization and privacy. Critics have also questioned PEPP-PT’s assertion that seven European countries — Austria, Germany, France, Italy, Malta, Spain, and Switzerland — had come on board. Spain and Switzerland now back rival DP-3T, government and research sources said. PEPP-PT said it was committed to guaranteeing the privacy of users and data protection at all times. PEPP-PT also asserted its commitment to privacy in a 25-page document it released at the end of last week on GitHub, a software developer platform. “If the system would leak information about personal behavior, identities, or even reveal who has been infected with Sars-CoV-19, users would quite rightfully refuse to adopt the system,” the document stated. Germany plans to release a contact tracing app within weeks that’s based on the PEPP-PT platform, government sources said last week. The head of France’s INRIA digital research institute has also backed the initiative. The PEPP-PT platform is designed to support national apps that could “talk” to each other across borders — a goal that could become harder to achieve if other European countries back a different standard. “The debate is turning away the focus from what really matters: to build an app that traces the virus, not the human, and to do this as fast as possible,” said Julian Teicke, who is CEO of Berlin insurance tech firm WeFox and involved in a German-based coronavirus app tracing project called Healthy Together. (Reporting by Douglas Busvine, editing by Jane Merriman.) VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
13,663
2,020
"PwC's workplace contact tracing app won't share info with public health officials | VentureBeat"
"https://venturebeat.com/2020/04/30/pwcs-workplace-contact-tracing-app-wont-share-info-with-public-health-officials"
"PwC’s workplace contact tracing app won’t share info with public health officials
PricewaterhouseCoopers automated workplace contact tracing app
As some countries declare the end of coronavirus community spread and plan for a safe return to work, surveillance technology appears to be becoming a much bigger part of the workplace. Already real-time computer vision and thermal cameras are beginning to enter the workplace to ensure social distancing or find people with elevated temperatures, and last week, PricewaterhouseCoopers (PwC) introduced an automated contact tracing service called Check-In for enterprise businesses to track employees. But contact tracing for large offices or businesses might look very different than solutions created for public health authorities such as that from a joint Apple-Google venture. 
Instead of sending a smartphone notification when a person tests positive, HR or company leadership can use the PwC service to generate a list of employees, assess employee contact risk levels, and contact employees considered most at risk of exposure. In this context, your company’s HR or crisis management officials take the role of contact tracers. PwC also has no plans to share information about its solution with local public health officials. And it has no plans to integrate with open source Bluetooth protocols like the one from the TCN Coalition made for sharing contact events across multiple apps and services. Depending on where you work, the solution may also be mandatory for employees, but it only tracks people within the geofenced confines of the workplace, PwC connected solutions and IoT lead Rob Mesirow told VentureBeat in a phone interview. Mesirow argues that if Fortune 1000 companies adopt a mandatory solution, it could be quickly adopted by tens of millions of people. “You really want to have as close to 100% participation as you can possibly get to make contact tracing effective, and how do you do that? You do that inside an enterprise,” he said. “I think it’s a tall order outside the enterprise because of adoption. If it doesn’t work in a small island nation like Singapore that has a lot more influence and power over their population, it’s going to be very difficult for it to work in a country as large and free as the United States.” PwC declined to share the number of clients who are using its solution but said associates are in talks with clients in retail, hospitality, factories, industrial environments, and financial services. Alongside prisons, cruise ships, and nursing homes, some of the largest COVID-19 outbreaks to date in the U.S. 
have occurred in meat processing facilities, and that appears to be contributing to a growing COVID-19 case count. In one Iowa county, 90% of cases can be tied back to a Tyson meat processing plant. Mesirow said he could see contact tracing solutions developing both inside and outside the workplace, but they might need each other in order to be truly effective. The historic joint venture between Apple and Google that created interoperability between Android and iOS devices will not enable employers to track employees, since that data is being made available only to public health officials. A test version of Apple and Google’s solution launched this week, while an API for contact tracing apps is due out in the coming weeks. Consumers’ lack of trust in Apple and Google could be a barrier to adoption, according to a Washington Post poll released today. EU officials last week predicted that in order to be effective, contact tracing apps must reach adoption rates as high as 60%. The Apple-Google solution, and others being developed by multinational coalitions, rely on decentralized Bluetooth contact tracing, while PwC’s solution uses a combination of Wi-Fi and Bluetooth. Businesses may be highly incentivized to put contact tracing apps in place because there’s a lot at stake. A workplace COVID-19 outbreak can cause closures, kill clients or employees, reduce profits, interrupt supply chains, or continue community spread that lengthens shelter-in-place closures. There’s also the prospect of legal liability or negligence claims. For their part, employees who believe a tracking app can help them feel safe — or who feel pressured to adopt tracking tech — may also be more likely to agree to download a dedicated app or one of the more than a dozen PwC apps that are getting an SDK update to enable tracking. PwC’s solution can work on both employees’ personal devices and employer-issued hardware. “Unlike a lot of parts of the world, in the U.S. the employer is the main supplier for health care. 
So there’s some real skin in the game as it relates to protecting the health and well-being of our workforce,” Mesirow said. Under ideal circumstances, a workplace contact tracing app could keep businesses from shutting down unnecessarily as a precaution when there’s a local case. But like contact tracing apps put forward by public health officials, workplace-linked automated contact tracing faces some immediately recognizable challenges and limitations. One office in a skyscraper using the solution, for example, could lead an employer to believe everything is fine while community spread settles in nearby, increasing the likelihood of widespread outbreak. Because many carriers are asymptomatic, the virus can spread without any indication that anything is wrong. Though some states are already reopening establishments like restaurants and bowling alleys, areas like the San Francisco Bay Area will continue shelter-in-place orders for most jobs until June 1. Public health experts say some regions of the country won’t hit their peak number of COVID-19 cases for weeks, and that national testing capacity must quadruple before the United States economy can resume normal activity. Other workplace solutions are also under development: Private Kit: Safe Paths, a contact tracing app out of MIT in conversations with several governments around the world, is being released as a Windows 10 version for the workplace. "
13,664
2,020
"Finding the balance between safety and freedom in the shadow of COVID-19 | VentureBeat"
"https://venturebeat.com/2020/05/18/finding-the-balance-between-safety-and-freedom-in-the-shadow-of-covid-19"
"Finding the balance between safety and freedom in the shadow of COVID-19
Countries around the globe are focusing their collective attention on humanity’s most immediate existential threat. The coronavirus threatens jobs, global economic activity, international relations, the health of our loved ones, and our own lives. To combat this pandemic, epidemiologists require data so they can better understand where and how the coronavirus may be spreading among populations. World leaders from the international level down to local ranks need to be able to track the spread of the virus in order to make informed decisions about how to manage resources, handle shelter-in-place restrictions, and reopen businesses. The technologies politicians are testing, like phone-based contact tracing, thermal scanning, and facial recognition, are all euphemisms for surveillance, and tradeoffs being weighed now could extend well beyond this crisis. 
Before the pandemic, one of the most important — and popular — movements in ethics and social justice was the push against technology-powered surveillance, especially AI technologies like facial recognition. It’s a rich topic centered around power that pits everyday people against the worst parts of big tech, overreaching law enforcement, and potential governmental abuse. “Surveillance capitalism” is as gross as its name implies, and speaking truth to that particular sort of power feels good. But now, with millions suddenly unemployed and some 80,000 deaths from COVID-19 in the U.S. alone, the issue is no longer corporate profits or policing efficacy versus privacy, security, and power. In a global pandemic, the tradeoff may very well be privacy, security, and power versus life itself. The spread of the coronavirus poses an immediate life-and-death threat. No one alive has experienced anything like it on such a scale, and everyone is scrambling to adjust. Against such a dire backdrop, theoretical concerns about data privacy or overreaching facial recognition-powered government surveillance are easily brushed aside. Is it really such a bad thing if our COVID-19-related medical records go into a massive database that helps frontline health care workers battle the disease? Or if that data helps epidemiologists track the virus and understand how and where it spreads? Or aids researchers in developing cures? Who cares if we have to share some of our smartphone data to find out whether we’ve come into contact with a COVID-19 patient? Is it really that onerous to deploy facial recognition surveillance if it prevents super-spreaders from blithely infecting hundreds or thousands of people? Those are legitimate questions, but on the whole it’s a dangerously shallow perspective to take. A similar zeitgeist permeated the United States after 9/11. 
Out of fear — and a strong desire for solidarity — Congress quickly passed the Patriot Act with broad bipartisan support. But the country lacked the foresight to demand and implement guardrails, and the federal government has held onto broad surveillance powers in the nearly two decades since. What we learned — or should have learned, at least — from 9/11 and the Patriot Act is that a proactive approach to threats should not exclude forward-looking protections. Anything less is panic. The dangers posed by a hasty and wholesale surrender of privacy and other freedoms are not theoretical. They’re just perhaps not as immediate and clear as the threat posed by the coronavirus. Giving up your privacy amounts to giving up your power, and it’s important to know who will hold onto all that data. In some cases, it’s tech giants like Apple and Google, which are already not widely trusted, but it could also be AI surveillance tech companies like Palantir, or Clearview or Banjo , which have ties to far right extremists. In other cases, your power flows directly into the government’s hands. Sometimes, as in the case of a tech company the government contracts to perform a task like facial recognition-powered surveillance, you could be giving your data and power to both at the same time. Perhaps worse, some experts and ethicists believe systems built or deployed during the pandemic will not be dismantled. That means if you agree to feed mobile companies your smartphone data now, it’s likely they’ll keep taking it. If you agree to quarantine enforcement measures that include facial recognition systems deployed all over a city, those systems will likely become a standard part of law enforcement after the quarantines are over. And so on. This isn’t to say that the pandemic doesn’t require some tough tradeoffs — the difficult but crucially important part is understanding which concessions are acceptable and necessary and what legal and regulatory safeguards need to be put in place. 
For a start, we can look to some general best practices. The International Principles on the Application of Human Rights to Communication Surveillance, which has been signed by hundreds of organizations worldwide, has for years insisted that any mass surveillance efforts must be necessary, adequate, and proportionate. Health officials, not law enforcement, need to drive the decision-making around data collection. Privacy considerations should be built into tools like contact tracing apps. Any compromises made in the name of public health need to be balanced against the costs to privacy, and if a surveillance system is installed, it needs to be dismantled when the emergent threat of the coronavirus subsides. Data collected during the pandemic must have legal protections, including stringent restrictions on who can access that data, for what purpose, and for how long. In this special issue, we explore the privacy and surveillance tradeoffs lawmakers are working through, outline methods of tracking the coronavirus, and examine France as a case study in the challenges governments face at the intersection of politics, technology, and people’s lives. This is a matter of life and death. But it’s about life and death now and life and death for years to come. © 2023 VentureBeat. All rights reserved. "
13,665
2,020
"France offers a case study in the battle between privacy and coronavirus tracing apps | VentureBeat"
"https://venturebeat.com/2020/05/18/france-offers-a-case-study-in-the-battle-between-privacy-and-coronavirus-tracking-apps"
"France offers a case study in the battle between privacy and coronavirus tracing apps

As countries rush to develop COVID-19 tracing apps, France has become a lightning rod for the technical and ethical debates surrounding attempts to balance public health and mass surveillance. The French government has embraced a framework for its app, called StopCovid, that would centralize the collection of citizens’ data. Privacy groups have blasted this approach and accused the otherwise privacy-obsessed French politicians of being hypocrites. StopCovid has also triggered a confrontation with Apple, which has so far refused to enable such an approach on its devices. The government is not backing down, insisting that a centralized approach can protect privacy by anonymizing data while at the same time offering greater overall security and insights into the virus’ spread. More fundamentally, the French government insists that decisions around the public use of this data need to be made by elected officials rather than private companies. 
With data viewed as a critical tool for combating the pandemic, the fevered arguments in France serve as a microcosm of the global debate over how to strike a balance between public health and privacy. All parties agree that creating trust around these apps is essential to achieving participation rates high enough to be effective. In terms of public buy-in and technical design, these apps will serve as a test run for governments seeking to navigate the tradeoffs necessary to fight not just the coronavirus, but future pandemics as well. “As with any technology, zero risk does not exist,” French digital minister Cédric O wrote in defending his government’s approach to developing an app. “No solution is foolproof, but each type has its own flaws … StopCovid is not a ‘peacetime’ application. Such a project would not exist without the situation created by COVID-19.”

Centralizing data

So far, about a dozen countries have deployed some kind of COVID-19 tracing app. The tools represent wide-ranging approaches to a variety of questions — such as whether to centralize data and whether to track users’ locations. More recently, Apple and Google announced a partnership to develop a contact tracing API that will allow other organizations to create apps that work across Android and iOS devices. In Europe, two competing visions have emerged as possible frameworks for these apps. The first stores data on a central server, where it performs infection-matching. The second keeps the data on users’ smartphones, where the matching happens. Neither would use GPS or other methods of location tracking. The technical details, privacy tradeoffs, and security risks have been front-page news and widely debated on evening news shows in France over the past few weeks — an indication of just how important such issues are to people in the country. 
In France, the government has chosen to adapt the centralized framework developed by a group called Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT). Initially led by German researchers, this effort eventually resulted in the creation of a tracing framework called ROBERT (ROBust and privacy-presERving proximity Tracing protocol). In explaining ROBERT, Bruno Sportisse, CEO of French research institute Inria, wrote in mid-April that any framework involving data tracking will have some privacy and security tradeoffs. He argued that it was a false narrative to label one approach “centralized” and another “decentralized” because all systems would involve some information at the device level and some information passing through a common server. In the case of ROBERT, all users would have to opt in, and the information sent to a central server would be stored using only crypto-identifiers, rather than any actual names or personal information. “This application is not a ‘tracking’ app: It only uses Bluetooth, never GSM or geolocation data,” Sportisse wrote. “Nor is it a surveillance app. To be even clearer: It has been designed in such a way that NOBODY, not even the government, has access to the list of people diagnosed as positive or to the list of social interactions between people.” France’s StopCovid app is being built on the ROBERT framework, with input from a coalition of institutes, universities, and companies. These include Inria, ANSSI, Capgemini, Dassault Systèmes, Inserm, Lunabee Studio, Orange, Withings, and France’s public health agency. A version of the StopCovid app is slated to be ready in late May so it can be put up for debate and approval by France’s National Assembly. Assuming it is approved and tests are successful, it may begin to roll out in early June. Nobody is promoting the app as a silver bullet, but rather as one of the tools France is weighing as it slowly begins reopening this month. 
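Sportisse’s “crypto-identifiers” follow a well-known cryptographic pattern: a server-held secret is used to derive short-lived pseudonyms that look random to an eavesdropper but can be linked back to a registered user by the server alone. The sketch below illustrates the idea with an HMAC construction; the function names, key size, and epoch scheme are illustrative assumptions, not ROBERT’s actual specification.

```python
import hashlib
import hmac

def ephemeral_id(server_key: bytes, user_id: bytes, epoch: int) -> bytes:
    """Derive a per-epoch pseudonym. Only the key holder (the server)
    can recompute it and link it back to the registered user."""
    msg = user_id + epoch.to_bytes(4, "big")
    return hmac.new(server_key, msg, hashlib.sha256).digest()[:16]

# A phone broadcasts a fresh pseudonym each epoch (say, every 15 minutes),
# so a passive listener cannot track one device across epochs.
server_key = b"\x01" * 32
broadcasts = [ephemeral_id(server_key, b"user-42", epoch) for epoch in range(4)]
assert len(set(broadcasts)) == 4  # every epoch yields a distinct token
```

Because linking the tokens back to a person requires the server key, this is the property Sportisse leans on when arguing that a centralized design can still avoid exposing names, locations, or social graphs to bystanders.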
Digital minister O has also stressed that StopCovid is not intended to monitor people and that no one can be forced to download it to their phone and activate it. Any sharing of information would be on a strictly opt-in basis. If someone does opt in, they can declare that they have tested positive for COVID-19, and the app will then notify any users who have been in proximity to the infected individual. From there, it’s up to app users who have been exposed to decide whether to contact health officials. People are not informed who the infected contact was, and the app on the phone would not contain information to let them figure that out. The French model has received a tentative thumbs up from the independent privacy agency Commission Nationale de l’Informatique et des Libertés (CNIL), which felt it provided sufficient privacy measures to meet Europe’s General Data Protection Regulation (GDPR) guidelines. The National Digital Council advisory board also gave preliminary support, but said it could not render a full opinion until it was able to evaluate the actual app. Speaking to broader privacy concerns, O wrote: “The StopCovid project is not a foot in the door. Everything is temporary: The data is erased after a few days, and the application itself is not intended to be used beyond the epidemic period.”

Decentralizing data

The framework competing with ROBERT is a decentralized contact tracing protocol called Decentralized Privacy-Preserving Proximity Tracing (DP-PPT). A coalition of researchers from several European institutions designed this framework, and it syncs up with the API Apple and Google are developing. Prior to that Apple-Google partnership, COVID-19 tracing apps had faced various problems operating on iPhones. For one thing, Apple generally prevents Bluetooth from continually sending out signals to ping other phones. 
More recent versions of Android also place some restrictions on Bluetooth, but it’s Apple phones that are viewed as the biggest hurdle for any contact tracing apps. “You can implement either app just fine on an Android phone,” said James Larus, part of the DP-PPT team and dean of the School of Computer and Communications Science at Switzerland’s École Polytechnique Fédérale de Lausanne (EPFL) technical university. “The problem is Apple phones.” In Singapore, the government developed a workaround to the Apple issue by having their app run in the foreground and keeping the phone unlocked. However, that drained the battery and created privacy concerns that led to an adoption rate too low to be effective. Apple has decided it’s willing to bend on that issue as long as the data regarding contacts is being kept on users’ phones, essentially forcing governments to accept a decentralized solution. In the centralized app, if someone is infected, their contact information would be uploaded to the central server. For the decentralized Apple-Google version, if someone reported to their app that they were infected, a server would then upload their encrypted contacts into a database. On the other end, an app periodically downloads this database to other users’ smartphones. If the app detects a match between a record of infection reports in the database and a user’s recent contact, the user would be notified. The main difference between this approach and the ROBERT framework is that the anonymized IDs would not be continuously stored on the central server. “The real differences come down to this question of where the data is stored and where the matching is done,” Larus said. “And those are true differences. But in the end, the functionality of the apps [is] the same.” Both frameworks pose potential security risks, as each system relies on some form of encryption. 
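The decentralized flow Larus describes can be sketched concretely: an infected user publishes only their daily keys, and every other phone expands those keys into the rolling identifiers it might have heard, doing the intersection locally. The key derivation, 16-byte identifiers, and 96 daily intervals below are illustrative assumptions, not the exact DP-PPT or Apple-Google scheme.

```python
import hashlib
import secrets

def daily_key() -> bytes:
    """Each phone generates a fresh random key per day."""
    return secrets.token_bytes(16)

def rolling_id(day_key: bytes, interval: int) -> bytes:
    """Derive the Bluetooth identifier broadcast during one 15-minute interval."""
    return hashlib.sha256(day_key + interval.to_bytes(2, "big")).digest()[:16]

alice_key = daily_key()
heard_by_bob = {rolling_id(alice_key, 7)}  # Bob passed Alice once that day

# Alice tests positive and publishes her day key; Bob's phone expands it
# into all 96 possible interval IDs and intersects with what it heard.
published_keys = [alice_key]
matches = {rolling_id(k, i) for k in published_keys for i in range(96)} & heard_by_bob
assert matches  # Bob's phone flags the exposure locally
```

The intersection happens on the phone, so the server never learns who was near whom, which is exactly the property that makes the approach “decentralized” in the sense Larus means.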
In France’s version, users must trust that the government agency controlling the system has designed enough security into the app and the network. But with the decentralized approach, users must take the risk of other people’s phones storing their encrypted information if they are diagnosed, making the system only as secure as everyone else’s phones. The French government cites this as one of the reasons for rejecting the decentralized approach. Its own security agency, the National Information Systems Security Agency (ANSSI), labeled the “decentralized” model riskier because the encrypted identifiers would be circulating on people’s phones. “All those applications involve very important risks when it comes to protecting privacy and individual rights,” ANSSI stated in a letter. “This mass surveillance could be done by collecting the interaction graph of individuals — the social graph. It could happen at the operating system level on the phones. Not only could operating system makers reconstruct the social graph, but the state could as well, more or less easily depending on the approaches.”

France versus Apple

With the French group rushing to complete work on the app this month, one of the main logjams remains tension between Apple and the French government. While the United Kingdom has taken a COVID-19 tracing app philosophy similar to that of France, Germany has changed course and opted for a decentralized version. Orange CEO Stéphane Richard, whose company is helping create France’s app, has expressed some optimism that the French StopCovid app consortium can reach a deal with Apple. “There are meetings almost every day. It’s not a done deal yet … but we have a discussion dynamic with Apple that is not bad,” Richard told Reuters. But the French government has expressed continued frustration. “Apple could have helped us make the application work even better on the iPhone. They have not wished to do so,” France’s O told BFM Business TV on May 5. 
He also issued a stern reminder that the dispute with Apple underscores the “oligopolistic nature of the OS market,” which puts nations at the mercy of big companies. “Health policy is, from the point of view of the French government, a sovereign prerogative which is the responsibility of the state,” O wrote. “It is up to the public authorities, with their qualities and their faults, to make the choices they consider to be the best for protecting French women and men. The French government does not refuse the API proposed, as it stands, by these two companies because they are American companies. … It refuses to do so because, in its current format, it constrains the technical choice: Only a ‘decentralized’ solution can work perfectly on phones equipped with iOS.” France, he added, must be able to protect its sovereignty and “not to be constrained by the choices of a big company, as innovative and efficient as it is.” Lost in these technical and political debates is the reality that no one knows whether any of these apps will be truly effective. In part, that’s because the technology is unproven and it’s not clear whether enough people will download them. Epidemiologists have generally estimated that 60% of the population must use the apps for them to provide an effective tracking system. Even then, Switzerland’s Larus said the apps must be connected to the broader health care infrastructure of a country to have an impact. People need to know what specific actions to take if they receive a notification, such as who to call for more information or to make an appointment for testing. Likewise, doctors, hospitals, StopCovid app call centers, and testing facilities must be prepared to follow set policies if they are contacted by someone who has received an exposure notification. Policymakers must decide whether such people should be directed to get immediate testing or told to monitor symptoms. 
“These issues involve large groups of people, and they require political decisions,” Larus said. “These are much more difficult decisions, and they’re very national and specific to each country. There’s not going to be a single app’s back end that you can take from one country and just plop it down in another country.” Still, Larus said, he’s glad to see that the issues surrounding the app, even though they can be quite technical, are being taken so seriously in France and across Europe. Making the right tradeoffs between privacy, security, design, and policy for this generation of contact tracing apps will be critical to limiting damage from the current pandemic. But the decisions made now will also likely form the foundation of future contact tracing apps. If the coming COVID-19 apps are widely embraced and prove their value, many painful and time-consuming policy and technical debates could be avoided when the next pandemic hits. And Larus said we can be sure there will be a next time. “If you needed to do this again, could we do it faster next time?” Larus asked. “Could we have the code for the app sitting there so that it’s easy to do it again quickly? Is the integration into the health system maintained so that next time we don’t have to start from scratch? The expertise we are developing right now, the knowledge, is going to be important even after we are past this crisis.” "
13,666
2,020
"The technologies the world is using to track coronavirus -- and people | VentureBeat"
"https://venturebeat.com/2020/05/18/the-technologies-the-world-is-using-to-track-coronavirus-and-people"
"The technologies the world is using to track coronavirus — and people

Now that the world is in the thick of the coronavirus pandemic, governments are quickly deploying their own cocktails of tracking methods. These include device-based contact tracing, wearables, thermal scanning, drones, and facial recognition technology. It’s important to understand how those tools and technologies work and how governments are using them to track not just the spread of the coronavirus, but the movements of their citizens.

Contact tracing and smartphone data

Contact tracing is one of the fastest-growing means of viral tracking. Although the term entered the common lexicon with the novel coronavirus, it’s not a new practice. 
The Centers for Disease Control and Prevention (CDC) says contact tracing is “a core disease control measure employed by local and state health department personnel for decades.” Traditionally, contact tracing involves a trained public health professional interviewing an ill patient about everyone they’ve been in contact with and then contacting those people to provide education and support, all without revealing the identity of the original patient. But in a global pandemic, that careful manual method cannot keep pace, so a more automated system is needed. That’s where device-based contact tracing (usually via smartphone) comes into play. This involves using an app and data from people’s smartphones to figure out who has been in contact with whom — even if it’s just a casual passing in the street — and alerting everyone who has been exposed to an infected individual. But the devil is in the details. There are obvious concerns about data privacy and abuse if that data is exposed or misused by those who hold it. And the tradeoffs between privacy and measures needed to curb the spread of COVID-19 are a matter of extensive debate. The core of that debate is whether to take a centralized or decentralized approach to data collection and analysis. To oversimplify: In either approach, data is generated when people’s phones come into contact with one another. In a centralized approach, data from the phones gets uploaded into a database, and the database matches a user’s records with others and subsequently sends out alerts. In a decentralized approach, a user’s phone uploads only an anonymized identifier, other users download the list of anonymous IDs, and the matching is done on-device. The advantage of decentralization is that data stays private and essentially unexploitable, and users remain anonymous. 
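The centralized flow described above, in which phones report pseudonymous encounters upward and a server holding the contact graph decides who to alert, can be caricatured in a few lines. The structure and names here are hypothetical, not modeled on any real deployment:

```python
# Server-side contact graph: pseudonym -> set of pseudonyms it reported hearing.
contacts_db = {}

def upload(user: str, heard: set) -> None:
    """A phone periodically reports the pseudonyms it encountered."""
    contacts_db.setdefault(user, set()).update(heard)

def notify_on_positive(patient: str) -> set:
    """The server computes who to alert: everyone the patient heard,
    plus everyone who reported hearing the patient."""
    exposed = set(contacts_db.get(patient, set()))
    exposed |= {u for u, heard in contacts_db.items() if patient in heard}
    return exposed

upload("anon-A", {"anon-B"})
upload("anon-C", {"anon-A"})
assert notify_on_positive("anon-A") == {"anon-B", "anon-C"}
```

The contact graph, the matching, and the alerting all live on the server, which is why centralization yields richer data for epidemiologists and, by the same token, a richer target for abuse.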
Centralization offers richer data, which could help public health officials better understand the disease and its spread and allow government officials to more effectively plan, execute, and enforce quarantines and other measures designed to protect the public. But the potential disadvantages of centralized data are downright dystopian. Governments can exploit the data. Private tech companies may be able to buy or sell it en masse. Hackers could steal it. And even though centralized systems anonymize data, that data can be re-identified in some cases. In South Korea, for example, a failure to keep contact tracing data sufficiently anonymous led to incidents of public shaming. An Israel-based company called the NSO Group provides spyware that could be put to such a task. According to Bloomberg, the company has contracts with a dozen countries and is embroiled in a lawsuit with WhatsApp, accused of delivering spyware via the popular messaging platform. That’s not to mention various technical challenges — notably that Apple doesn’t allow the tracking apps to run in the background, as well as some Android bugs that contact tracing app developers have encountered. To get around some of these issues, Apple and Google forged a historic partnership to create a shared API. But the debate between centralized and decentralized approaches remains riddled with nuance. A deep dive into the situation in France provides a microcosm of the whole issue, from the push/pull between governments and private companies to technical limitations to issues of public trust and the need for mass adoption before contact tracing can be effective. But even with these growing pains, the urgent need to ease lockdowns means various forms of contact tracing have already been employed in countries around the world, and in the U.S. from state to state. Examples include: In the U.S., absent a clear federal contact tracing plan (for now), states have moved forward on their own. 
A multi-state group that includes New York, New Jersey, and Connecticut is creating its own tracing program. South Korea’s Ministry of the Interior and Safety developed a GPS-tracking app that requires citizens who have been ordered to quarantine to stay in touch with a case worker. In China, citizens are required to use an app that color-codes people based on their health level (green, yellow, or red) to dictate where they’re allowed to be. A New York Times report said the app shares data with law enforcement. India’s government mandated that all workers use its Aarogya Setu app (which uses Bluetooth and GPS for contact tracing), ostensibly to maintain social distancing measures as the nation lifts restrictions and sends people back to work. Singapore was early to contact tracing with its TraceTogether app, but low adoption has spurred a push to merge it with a tool called SafeEntry that would force people to check in electronically at businesses and other places. Both Australia and New Zealand have employed contact tracing apps based on Singapore’s TraceTogether. MIT Technology Review is building a database tracker of all the government-backed automated contact tracing apps. Iceland’s Rakning C-19 contact tracing app uses GPS and has achieved 38% adoption, but a government official said it hasn’t made a significant impact on contact tracing efforts. Michigan has chosen to rely on traditional manual contact tracing in lieu of an app. The U.K.’s NHS contact tracing app is rolling out for testing and will be used along with traditional manual contact tracing methods, but the app’s centralized approach has privacy advocates concerned.

Wearables and apps

One method cribbed from law enforcement and the medical field is the use of wristbands or GPS ankle monitors to track specific individuals. 
In some cases, these monitors are paired with smartphone apps that differ from traditional contact tracing apps in that they’re meant to specifically identify a person and track their movements. In health care, patients who are discharged may be given a wristband or other wearable that’s equipped with smart technology to track their vitals. This is ideal for elderly people, especially those who live alone. If they experience a health crisis, an app connected to the wristband can alert their caregivers. In theory, this could help medical professionals keep an eye on the ongoing health of a recovered and discharged COVID-19 patient, monitoring them for any secondary health issues. Ostensibly, this sort of tracking would be kept between the patient and their health care provider. Law enforcement has long used ankle monitors to ensure that people under house arrest abide by court orders. In recent years, mobile apps have seen similar use. It’s not a big jump to apply these same technologies to tracking people under quarantine. A judge in West Virginia allowed law enforcement to put ankle monitors on people who have tested positive for COVID-19 but have refused to quarantine, and a judge in Louisville, Kentucky did the same. According to a Reuters report, Hawaii — which needs to ensure that arriving airline passengers quarantine for 14 days after entering the state — was considering using similar GPS-enabled ankle monitors or smartphone tracking apps but shelved that idea after pushback from the state’s attorney general. Remote monitoring via AI offers a potentially more attractive solution. A group of Stanford researchers proposed a home monitoring system designed for the elderly that would use AI to noninvasively (and with a layer of privacy) track a person’s overall health and well-being. Its potential value during quarantine, when caregivers need to avoid unnecessary contact with vulnerable populations, is obvious. 
Apps can also be used to create a crowdsourced citizen surveillance network. For example, the county of Riverside, California launched an app called RivCoMobile that allows people to anonymously report others they suspect are violating quarantine, hosting a large gathering, or flouting other rules, like not wearing facemasks inside essential businesses. As an opt-in choice for medical purposes, a wearable device and app could allow patients to maintain a lifeline to their care providers while also contributing data that helps medical professionals better understand the disease and its effects. But as an extension of law enforcement, wearables raise a far more ominous specter. Even so, it’s a tradeoff, as people with COVID-19 who willfully ignore stay-at-home orders are putting lives at risk. Examples include: In Poland, the government’s Home Quarantine app let police check that people are abiding by forced quarantines. Users had to check in using a phone number and SMS code, and they had to take a photo that’s verified with facial recognition. Quarantine breakers could receive fines. Those entering Kenya via the Jomo Kenyatta International Airport were required to self-quarantine for 14 days. The government monitored these people’s movements using their phones, and those who broke quarantine could be apprehended by police. The Southern Nevada Health District used an app to track those who have been tested and presumed to have COVID-19. They’re supposed to report daily symptoms, and if they fail to do so, the app will notify a “disease investigator.” Washington state’s Providence St. Joseph Health hospital deployed remote monitoring from Twistle to care for confirmed and suspected COVID-19 patients. In New York and New Orleans, LSU Healthcare Network is leveraging AI to remotely monitor cardiac patients vulnerable to the coronavirus. MIT’s Emerald monitoring device uses Wi-Fi and AI to track patients’ vitals, sleep, and movements. 
Current Health partnered with the Mayo Clinic on remote patient monitoring.

Thermal scanning

Thermal scanning has been used as a simple check at points of entry, like airports, military bases, and businesses of various kinds. The idea is that a thermal scan will catch anyone who is feverish — defined by the CDC as having a temperature of at least 100.4 degrees Fahrenheit — in an effort to flag those potentially stricken with COVID-19. But thermal scanning is not in itself diagnostic. It’s merely a way to spot one of the common symptoms of COVID-19, although anyone flagged by a thermal scan could, of course, be referred to an actual testing facility. Thermal scanners range from small handheld devices to larger and more expensive multi-camera systems. They can and have been installed on drones that fly around an area to hunt for feverish individuals who may need to be hospitalized or quarantined. Unlike facial recognition, thermal scanning is inherently private. Scanner technology doesn’t identify who anyone is or collect other identifying information. But some thermal imaging systems add — or claim to add — AI to the mix, like Kogniz and Feevr. And thermal scanners are highly problematic, mainly because there’s little evidence of their efficacy. Even thermal camera maker Flir, which could cash in on pandemic fears, has a prominent disclaimer on its site about using its technology to screen for COVID-19. But that hasn’t stopped some people from using Flir’s cameras for this purpose anyway. Thermal scanning can only spot people who have COVID-19 and are also symptomatic with a fever. Many people who end up testing positive for the disease are asymptomatic, meaning a thermal scan would show nothing out of the ordinary. And a fever is present in some but by no means all symptomatic cases. Even those who contract COVID-19 and do experience a fever may be infected for days before any symptoms actually appear, and they remain contagious for days after. 
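As a decision rule, fever screening reduces to a threshold check on a noisy reading. The sketch below uses the CDC’s 100.4-degree definition quoted above; the 2-degree tolerance band is an assumed sensor accuracy, included to show why borderline readings warrant confirmation with a clinical thermometer rather than an automatic flag:

```python
FEVER_F = 100.4      # CDC fever threshold, degrees Fahrenheit
TOLERANCE_F = 2.0    # assumed scanner accuracy of +/- 2 degrees

def screen(reading_f: float) -> str:
    """Classify a scanner reading, treating the error band as inconclusive."""
    if reading_f >= FEVER_F + TOLERANCE_F:
        return "flag"    # feverish even if the scanner read high
    if reading_f <= FEVER_F - TOLERANCE_F:
        return "pass"    # afebrile even if the scanner read low
    return "refer"       # within sensor error: needs a real thermometer

assert screen(103.0) == "flag"
assert screen(97.9) == "pass"
assert screen(100.4) == "refer"  # the nominal threshold itself is ambiguous
```

Even this cautious scheme says nothing about asymptomatic carriers, which is the deeper limitation described above.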
Thermal scans are also vulnerable to false positives. Because it merely looks at a person’s body temperature, a thermal scan can’t tell if someone has a fever from a different illness or is perhaps overheated from exertion or experiencing a hot flash. That doesn’t even take into account whether a given thermal scanner is precise enough to be reliable. If its accuracy is, say, +/- 2 degrees, a 100-degree temperature could register as 98 degrees or 102 degrees. Although false negatives are dangerous because they could let a sick person through a checkpoint, false positives could result in people being unfairly detained. That could mean they’re sent home from work, forced into quarantine, or penalized for not abiding by an ordered quarantine, even though they aren’t sick. Tech journalists’ inboxes have been inundated with pitches for various smart thermometers and thermal cameras for weeks. But it’s reasonable to wonder how many of these companies are the equivalent of snake oil peddlers. Allegations have already been made against Athena Security, a company that touted an AI-powered thermal detection system.

Facial recognition and other AI

The most invasive type of tracking involves facial recognition and other forms of AI. There’s an obvious use case there. You can track many, many people all at once and continue tracking their movements as they are scanned again and again, yielding massive amounts of data on who is sick, where they are, where they’ve been, and who they’ve been in contact with. Enforcing a quarantine order becomes a great deal easier, more accurate, and more effective. However, facial recognition is also the technology that’s most ripe for dystopian abuse. Much ink has been spilled over the relative inaccuracy of facial recognition systems on all but white males, the ways governments have already used it to persecute people, and the real and potential dangers of its use within policing. 
That's not to mention the sometimes deeply alarming figures behind the private companies making and selling this technology, and concerns about its use by government agencies like ICE or U.S. Customs and Border Protection. None of these problems will disappear just because of a pandemic. In fact, rhetoric about the urgency of the fight against the coronavirus may provide narrative cover for accelerating the development or deployment of facial recognition systems that may never be dismantled — unless stringent legal guardrails are put in place now.

Russia, Poland, and China are all using facial recognition to enforce quarantines. Companies like CrowdVision and Shapes AI use computer vision, often along with Bluetooth, IR, Wi-Fi, and lidar, to track social distancing in public places like airports, stadiums, and shopping malls. CrowdVision says it has customers in North America, Europe, the Middle East, Asia, and Australia. In an emailed press release, U.K.-based Shapes AI said its camera-based computer vision system "can be utilized by authorities to help monitor and enforce the behaviors in streets and public spaces."

There will also be increased use of AI within workplaces as companies try to figure out how to safely restart operations in a post-quarantine world. Amazon, for example, is currently using AI to track employees' social distancing compliance and potentially flag ill workers for quarantine.

But deploying facial recognition systems during the pandemic raises another issue: these systems tend to struggle with masked faces (at least for now), significantly reducing their efficacy.

The drone problem

Drones fall within a Venn diagram of tracking technology and present their own regulatory problems during the coronavirus pandemic. They're a useful delivery system for things like medical supplies or other goods, and they may be used to spray disinfectants — but they're also deployed for thermal scanning and facial recognition.
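Whether the camera is fixed in a shopping mall or mounted on a drone, the social-distancing monitoring described above ultimately reduces to a pairwise-distance check over detected people. Here is a minimal sketch of that core step, assuming a detector has already produced ground-plane coordinates in meters and using an assumed 2-meter rule; none of this reflects the actual implementations of CrowdVision, Shapes AI, or any other vendor mentioned here:

```python
from itertools import combinations
import math

# Sketch of the core of a camera-based social-distancing monitor:
# given ground-plane coordinates (in meters) for each detected person
# (how people are detected and localized is out of scope here), flag
# every pair closer than a chosen minimum distance. The 2 m threshold
# is an assumption for illustration.

MIN_DISTANCE_M = 2.0

def too_close(people: dict[str, tuple[float, float]],
              min_dist: float = MIN_DISTANCE_M) -> list[tuple[str, str]]:
    """Return ID pairs whose ground-plane distance is under min_dist."""
    violations = []
    for (id_a, pos_a), (id_b, pos_b) in combinations(people.items(), 2):
        if math.dist(pos_a, pos_b) < min_dist:
            violations.append((id_a, id_b))
    return violations

# Hypothetical detector output for one video frame.
people = {"p1": (0.0, 0.0), "p2": (1.5, 0.0), "p3": (5.0, 5.0)}
print(too_close(people))  # -> [('p1', 'p2')]
```

The check is quadratic in the number of detections, which is fine for a single camera view; real deployments layer the harder problems — detection accuracy, camera calibration, and identity handling — on top of this simple geometry.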
Indeed, policing measures — whether they're called surveillance, quarantine enforcement, or something else — are an obvious and natural use case for drones. And this is deeply problematic, particularly when it involves AI casting an eye from the sky, exacerbating existing problems like overpolicing in communities that are predominately home to people of color. The Electronic Frontier Foundation (EFF) is emphatic that there must be guardrails around the use of drones for any kind of coronavirus-related surveillance or tracking, and it has written about the dangers they pose. The EFF isn't alone in its concern, and the ACLU has recently gone so far as to take the issue of aerial surveillance to court.

Drone applications include the following examples:

- UPS subsidiary UPS Flight Forward (UPSFF) and CVS have partnered to use Matternet's M2 drone system to fly medications from the pharmacy to a retirement community in Florida.
- Baltimore, Maryland police are planning to use drones to track the movements of people in the city.
- Zipline will deliver personal protective equipment (such as masks) around the campuses of the Novant Health medical network in Charlotte, North Carolina. The company's drones are also flying COVID-19 test samples from rural areas of Ghana to Accra, the nation's capital.
- Through its Disaster Recovery Program, DJI is conducting remote outreach to homeless populations in Tulsa, Oklahoma and helping enforce social distancing guidelines in Daytona Beach, Florida.
- In China, medical delivery drones supplied by Antwork and others have been used to transport quarantine supplies and medical samples.
- Paris police are facing backlash from privacy groups after using drones to surveil those who break the city's lockdown rules.
- Flytrex launched a small drone delivery deployment in Grand Forks, North Dakota that's designed to deliver medicine, food, and other supplies from businesses to homes.
- Police in Mumbai are using drones in some areas of the city to find and help disperse gatherings that violate social distancing rules.

In some roles, drones can help save lives, or at least reduce the spread of the coronavirus by limiting person-to-person contact. As surveillance mechanisms, they could become part of an oppressive police state. They may even edge close to both at the same time.

In an in-depth look at what happened with Draganfly, VentureBeat's Emil Protalinski unpacked how the drone company went from trying to provide social distancing and health monitoring services from the air to licensing computer vision tech from a company called Vital Intelligence and launching a pilot project in Westport, Connecticut aimed at flattening the curve. Local officials abruptly ended the pilot after blowback from residents, who objected to the surveillance drones and their ties to policing.

This article includes reporting by Kyle Wiggers.