| Column | Type | Range / values |
| --- | --- | --- |
| id | int64 | 3 – 41.8M |
| url | string | lengths 1 – 1.84k |
| title | string | lengths 1 – 9.99k |
| author | string | lengths 1 – 10k |
| markdown | string | lengths 1 – 4.36M |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | string | lengths 1 – 10k |
| filedate | string | 2 values |
| date | string | lengths 9 – 19 |
| image | string | lengths 1 – 10k |
| pagetype | string | 365 values |
| hostname | string | lengths 4 – 84 |
| sitename | string | lengths 1 – 1.6k |
| tags | string | 0 values |
| categories | string | 0 values |
30,523,957
https://bariweiss.substack.com/p/the-wests-green-delusions-empowered
The West’s Green Delusions Empowered Putin
Michael Shellenberger
How has Vladimir Putin—a man ruling a country with an economy smaller than that of Texas, with an average life expectancy 10 years lower than that of France—managed to launch an unprovoked full-scale assault on Ukraine? There is a deep psychological, political and almost civilizational answer to that question: He wants Ukraine to be part of Russia more than the West wants it to be free. He is willing to risk tremendous loss of life and treasure to get it. There are serious limits to how much the U.S. and Europe are willing to do militarily. And Putin knows it. Missing from that explanation, though, is a story about material reality and basic economics—two things that Putin seems to understand far better than his counterparts in the free world and especially in Europe. Putin knows that Europe produces 3.6 million barrels of oil a day but uses 15 million barrels of oil a day. Putin knows that Europe produces 230 billion cubic meters of natural gas a year but uses 560 billion cubic meters. He knows that Europe uses 950 million tons of coal a year but produces half that. The former KGB agent knows Russia produces 11 million barrels of oil per day but only uses 3.4 million. He knows Russia now produces over 700 billion cubic meters of gas a year but only uses around 400 billion. Russia mines 800 million tons of coal each year but uses 300. That’s how Russia ends up supplying about 20 percent of Europe’s oil, 40 percent of its gas, and 20 percent of its coal. The math is simple. A child could do it. The reason Europe didn’t have a muscular deterrent threat to prevent Russian aggression—and in fact prevented the U.S. from getting allies to do more—is that it needs Putin’s oil and gas. The question is why. How is it possible that European countries, Germany especially, allowed themselves to become so dependent on an authoritarian country over the 30 years since the end of the Cold War? Here’s how: These countries are in the grips of a delusional ideology that makes them incapable of understanding the hard realities of energy production. Green ideology insists we don’t need nuclear and that we don’t need fracking. It insists that it’s just a matter of will and money to switch to all-renewables—and fast. It insists that we need “degrowth” of the economy, and that we face looming human “extinction.” (I would know. I myself was once a true believer.) John Kerry, the United States’ climate envoy, perfectly captured the myopia of this view when he said, in the days before the war, that the Russian invasion of Ukraine “could have a profound negative impact on the climate, obviously. You have a war, and obviously you’re going to have massive emissions consequences to the war. But equally importantly, you’re going to lose people’s focus.” But it was the West’s focus on healing the planet with “soft energy” renewables, and moving away from natural gas and nuclear, that allowed Putin to gain a stranglehold over Europe’s energy supply. As the West fell into a hypnotic trance about healing its relationship with nature, averting climate apocalypse and worshiping a teenager named Greta, Vladimir Putin made his moves. While he expanded nuclear energy at home so Russia could export its precious oil and gas to Europe, Western governments spent their time and energy obsessing over “carbon footprints,” a term created by an advertising firm working for British Petroleum. They banned plastic straws because of a 9-year-old Canadian child’s science homework. They paid for hours of “climate anxiety” therapy. 
While Putin expanded Russia’s oil production, expanded natural gas production, and then doubled nuclear energy production to allow more exports of its precious gas, Europe, led by Germany, shut down its nuclear power plants, closed gas fields, and refused to develop more through advanced methods like fracking. The numbers tell the story best. In 2016, 30 percent of the natural gas consumed by the European Union came from Russia. In 2018, that figure jumped to 40 percent. By 2020, it was nearly 44 percent, and by early 2021, it was nearly 47 percent. For all his fawning over Putin, Donald Trump, back in 2018, defied diplomatic protocol to call out Germany publicly for its dependence on Moscow. “Germany, as far as I’m concerned, is captive to Russia because it’s getting so much of its energy from Russia,” Trump said. This prompted Germany’s then-chancellor, Angela Merkel, who had been widely praised in polite circles for being the last serious leader in the West, to say that her country “can make our own policies and make our own decisions.” The result has been the worst global energy crisis since 1973, driving prices for electricity and gasoline higher around the world. It is a crisis, fundamentally, of inadequate supply. But the scarcity is entirely manufactured. Europeans—led by figures like Greta Thunberg and European Green Party leaders, and supported by Americans like John Kerry—believed that a healthy relationship with the Earth requires making energy scarce. By turning to renewables, they would show the world how to live without harming the planet. But this was a pipe dream. You can’t power a whole grid with solar and wind, because the sun and the wind are inconstant, and currently existing batteries aren’t even cheap enough to store large quantities of electricity overnight, much less across whole seasons. In service to green ideology, they made the perfect the enemy of the good—and of Ukraine. Take Germany. Green campaigns have succeeded in destroying German energy independence—they call it *Energiewende*, or “energy turnaround”—by successfully selling policymakers on a peculiar version of environmentalism. It calls climate change a near-term apocalyptic threat to human survival while turning up its nose at the technologies that can help address climate change most and soonest: nuclear and natural gas. At the turn of the millennium, Germany’s electricity was around 30 percent nuclear-powered. But Germany has been sacking its reliable, inexpensive nuclear plants. (Thunberg called nuclear power “extremely dangerous, expensive, and time-consuming” despite the UN’s Intergovernmental Panel on Climate Change deeming it necessary and every major scientific review deeming nuclear the safest way to make reliable power.) By 2020, Germany had reduced its nuclear share from 30 percent to 11 percent. Then, on the last day of 2021, Germany shut down half of its remaining six nuclear reactors. The other three are slated for shutdown at the end of this year. (Compare this to next-door France, which fulfills 70 percent of its electricity needs with carbon-free nuclear plants.) Germany has also spent lavishly on weather-dependent renewables—to the tune of $36 billion a year—mainly solar panels and industrial wind turbines. But those have their problems. Solar panels have to go somewhere, and a solar plant in Europe needs 400 to 800 times more land than natural gas or nuclear plants to make the same amount of power. Farmland has to be cut apart to host solar.
And solar energy is getting cheaper these days mainly because Europe’s supply of solar panels is produced by slave labor in concentration camps as part of China’s genocide against Uighur Muslims. The upshot here is that you can’t spend enough on climate initiatives to fix things if you ignore nuclear and gas. Between 2015 and 2025, Germany’s efforts to green its energy production will have cost $580 billion. Yet despite this enormous investment, German electricity still costs 50 percent more than nuclear-friendly France’s, and generating it produces eight times more carbon emissions per unit. Plus, Germany is getting over a third of its energy from Russia. Germany has trapped itself. It could burn more coal and undermine its commitment to reducing carbon emissions. Or it could use more natural gas, which generates half the carbon emissions of coal, but at the cost of dependence on imported Russian gas. Berlin was faced with a choice between unleashing the wrath of Putin on neighboring countries or inviting the wrath of Greta Thunberg. They chose Putin. Because of these policy choices, Vladimir Putin could turn off the gas flows to Germany, and quickly threaten Germans’ ability to cook or stay warm. He or his successor will hold this power for every foreseeable winter barring big changes. It’s as if you knew that hackers had stolen your banking details, but you won’t change your password. This is why Germany successfully begged the incoming Biden administration not to oppose a contentious new gas pipeline from Russia called Nord Stream 2. This cut against the priorities of green-minded governance: On day one of Biden’s presidency, one of the new administration’s first acts was to shut down the Keystone XL oil pipeline from Canada to the U.S. in service to climate ideology. But Russia’s pipeline was too important to get the same treatment given how dependent Germany is on Russian imports. (Once Russia invaded, Germany was finally dragged into nixing Nord Stream 2, for now.) Naturally, when American sanctions on Russia’s biggest banks were finally announced in concert with European allies last week, they specifically exempted energy products so Russia and Europe can keep doing that dirty business. A few voices called for what would really hit Russia where it hurts: cutting off energy imports. But what actually happened was that European energy utilities jumped to buy *more* contracts for the Russian oil and gas that flows through Ukraine. That’s because they have no other good options right now, after green activism’s attacks on nuclear and on importing fracked gas from America. There’s no current plan for powering Europe that doesn’t involve buying from Putin. We should take Russia’s invasion of Ukraine as a wake-up call. Standing up for Western civilization this time requires cheap, abundant, and reliable energy supplies produced at home or in allied nations. National security, economic growth, and sustainability require greater reliance on nuclear and natural gas, and less on solar panels and wind turbines, which make electricity too expensive. The first and most obvious thing that should be done is for President Biden to call on German Chancellor Scholz to restart the three nuclear reactors that Germany closed in December. A key step in the right direction came on Sunday when Vice-Chancellor Robert Habeck, the economy and climate minister, announced that Germany would at least consider stopping its phaseout of nuclear.
If Germany turns these three on and cancels plans to turn off the three others, those six should produce enough electricity to replace 11 billion cubic meters of natural gas per year—an eighth of Germany’s current needs. Second, we need concerted action led by Biden, Congress, and their Canadian counterparts to significantly expand oil and natural gas output from North America to ensure the energy security of our allies in Europe and Asia. North America is more energy-rich than anyone dreamed. Yes, it will be more expensive than Russian gas sent by pipeline. But it would mean Europe could address Putin’s war on Ukraine, rather than financing it. Exporting gas by ship requires special terminals at ports to liquify (by cooling) natural gas; environmentalists oppose these terminals because of their ideological objection to any combustible fuel. So it’s a good sign that Chancellor Scholz announced plans on Sunday to build two of these terminals to receive North American gas, along with announcing major new military spending to counter Russia. Third, the U.S. must stop shutting down nuclear plants and start building them. Every country should invest in next-generation nuclear fuel technology while recognizing that the current generation of light-water reactors is our best tool for creating energy at home, with no emissions, right now. What you’ve heard about waste is mostly pseudoscience. Storing used fuel rods is a trivial problem, already solved around the world by keeping them in steel and concrete cans. The more nuclear power we generate, the less oil and gas we have to burn. And the less the West will have to buy from Russia. Putin’s relentless focus on energy reality has left him in a stronger position than he should ever have been allowed to find himself. It’s not too late for the rest of the West to save the world from tyrannical regimes that have been empowered by our own energy superstitions.
true
true
true
While we banned plastic straws, Russia drilled and doubled nuclear energy production.
2024-10-12 00:00:00
2022-03-01 00:00:00
https://substackcdn.com/…4_3936x1892.jpeg
article
thefp.com
The Free Press
null
null
21,780,303
https://www.bloomberg.com/news/articles/2019-12-13/delivery-hero-said-to-near-deal-to-buy-woowa-in-4-billion-deal
Bloomberg
null
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
23,087,934
https://en.wikipedia.org/wiki/Vicarious_embarrassment
Vicarious embarrassment - Wikipedia
null
# Vicarious embarrassment

**Vicarious embarrassment** (also known as **secondhand**, **empathetic**, or **third-party embarrassment** and also as **Spanish shame**[1][2] or **Fremdschämen** in German[3][4]) is the feeling of embarrassment from observing the embarrassing actions of another person. Unlike general embarrassment, vicarious embarrassment is not the feeling of embarrassment for yourself or for your own actions, but instead the feeling of embarrassment for somebody else after witnessing (verbally and/or visually) that other person experience an embarrassing event. These emotions can be perceived as pro-social, and some say they can be seen as motives for following socially and culturally acceptable behavior.[5][6] Vicarious embarrassment (German: *Fremdscham*) is often seen as an opposite to *schadenfreude*, which is the feeling of pleasure or satisfaction at misfortune, humiliation or embarrassment of another person.[7][8] Vicarious embarrassment is different from an emotional contagion, which is when a person unconsciously mimics the emotions that others are experiencing.[9] An emotional contagion is experienced by both people, making it a shared emotion. Vicarious embarrassment often occurs even when the individual experiencing the embarrassing event might not be aware of the implications. For an act to be considered an emotional contagion, more than one person must be affected by the emotion, but in vicarious emotions, it is only necessary that the observer experience the emotion.[10] Furthermore, vicarious embarrassment can be experienced even when the observer is completely isolated.[11][12][13] Vicarious embarrassment, like other vicarious emotions, presents symptoms that reflect the original emotion. However, unlike shared emotions, the experience of embarrassment for the observer is dependent on how they normally experience embarrassment. Individuals who experience social anxiety in their own life may experience the familiar symptoms of blushing,[12][14] excess sweating, trembling, palpitations, and nausea.[15][16] Other, less severe symptoms may include cringing, looking away, or general discomfort.

## Psychological basis

### Empathy

Vicarious embarrassment, also known as empathetic embarrassment, is intrinsically linked to empathy. Empathy is the ability to understand the feelings of another and is considered a highly reinforcing emotion to promote selflessness, prosocial behavior,[14] and group emotion, whereas a lack of empathy is related to antisocial behavior.[17][18] During an embarrassing situation, the observer empathizes with the victim of embarrassment, assuming the feeling of embarrassment. People who have more empathy are more likely to be susceptible to vicarious embarrassment.[13] The capacity to recognize emotions is probably innate,[19] as it may be achieved unconsciously. Yet it can be trained and achieved with various degrees of intensity or accuracy.[20]

### Self-projection

Psychological projection is a theory in psychology and psychoanalysis in which humans defend themselves against undesirable emotions by denying their existence in themselves while attributing them to others.[21] Projection is considered a normal and common process in everyday life.[22] Vicarious embarrassment and other vicarious emotions, however, work in the reverse, a process called self-projection.
The undesirable emotion is experienced in another person, and the observer projects what they interpret as the appropriate response onto themselves.[23] For example, someone who lies easily might feel vicariously embarrassed if they self-project the experience of someone getting caught in a bad lie.

## Cultural significance

Embarrassing situations often arise in social situations, as the result of failing to meet a social expectation, and are used to help learn what has been deemed culturally appropriate.[24][17][5][14][22] While embarrassment isolates the victim based on a cultural bias, vicarious embarrassment is used to promote prosocial behavior between the victim and the observer.[13][6]

### Cringe comedy

Embarrassing situations have been used for a long time in situational comedy, sketch comedy, dramatic irony, and practical jokes. Traditionally, laugh tracks were used to help cue the audience to laugh at appropriate times. But as laugh tracks were removed from sitcoms, embarrassing situations on television were now accompanied by silence, creating a genre known as cringe comedy,[25][26][27] which includes many critically acclaimed sitcom television shows, such as the British television series *The Office*.[28][11]

## References

- Gallego, Javier (18 June 2012). "Spanish shame" (in Spanish). RTVE.
- Albertus, Ramón (11 February 2022). "Club Caníbal, «humor negro» y 'spanish shame'". *El Correo* (in Spanish).
- Wedia. "German words expats should know: Fremdschämen". *IamExpat*. Retrieved 2022-11-16.
- "German Word of the Day: Fremdschämen". *The Local Germany*. 2018-10-04. Retrieved 2022-11-16.
- Hoffman, Martin L. (1990-06-01). "Empathy and justice motivation". *Motivation and Emotion*. **14** (2): 151–172. doi:10.1007/BF00991641. ISSN 0146-7239. S2CID 143830768.
- Williams, Kipling D. (2007). "Ostracism". *Annual Review of Psychology*. **58** (1): 425–452. doi:10.1146/annurev.psych.58.110405.085641. PMID 16968209.
- "The Opposite Of Schadenfreude: Vicarious Embarrassment". *NPR.org*. Retrieved 2017-12-04.
- Curiosity. "This is why you don't like cringe comedies". *RedEye Chicago*. Retrieved 2017-12-06.
- Hatfield, Elaine; Cacioppo, John T.; Rapson, Richard L. (2016-06-22). "Emotional Contagion". *Current Directions in Psychological Science*. **2** (3): 96–100. doi:10.1111/1467-8721.ep10770953. S2CID 220533081.
- Barsade, Sigal G. (2002-12-01). "The Ripple Effect: Emotional Contagion and its Influence on Group Behavior". *Administrative Science Quarterly*. **47** (4): 644–675. CiteSeerX 10.1.1.476.4921. doi:10.2307/3094912. ISSN 0001-8392. JSTOR 3094912. S2CID 1397435.
- Hartmann, Margaret. "The Science Behind Your Secondhand Embarrassment". *Jezebel*. Retrieved 2017-12-04.
- Nikolić, Milica; Colonnesi, Cristina; de Vente, Wieke; Drummond, Peter; Bögels, Susan M. (2015-06-01). "Blushing and Social Anxiety: A Meta-Analysis". *Clinical Psychology: Science and Practice*. **22** (2): 177–193. doi:10.1111/cpsp.12102. ISSN 1468-2850.
- Krach, Sören; Cohrs, Jan Christopher; Loebell, Nicole Cruz de Echeverría; Kircher, Tilo; Sommer, Jens; Jansen, Andreas; Paulus, Frieder Michel (2011-04-13). "Your Flaws Are My Pain: Linking Empathy To Vicarious Embarrassment". *PLOS ONE*. **6** (4): e18675. Bibcode:2011PLoSO...618675K. doi:10.1371/journal.pone.0018675. ISSN 1932-6203. PMC 3076433. PMID 21533250.
"Flustered and faithful: embarrassment as a signal of prosociality" (PDF).**c***Journal of Personality and Social Psychology*.**102**(1): 81–97. doi:10.1037/a0025403. ISSN 1939-1315. PMID 21928915. S2CID 14251097. Archived from the original (PDF) on 2019-03-02. **^**Acarturk, C.; de Graaf, Ron; van Straten, A.; Have, M. Ten; Cuijpers, P. (April 2008). "Social phobia and number of social fears, and their association with comorbidity, health-related quality of life and help seeking: a population-based study" (PDF).*Social Psychiatry and Psychiatric Epidemiology*.**43**(4): 273–279. doi:10.1007/s00127-008-0309-1. ISSN 0933-7954. PMID 18219433. S2CID 8450876.**^**"NIMH » Social Anxiety Disorder: More Than Just Shyness".*www.nimh.nih.gov*. Retrieved 2017-12-04.- ^ **a**Parrott, W. Gerrod (2001).**b***Emotions in Social Psychology: Essential Readings*. Psychology Press. ISBN 9780863776823. **^**de Waal, Frans B.M. (2007-12-21). "Putting the Altruism Back into Altruism: The Evolution of Empathy".*Annual Review of Psychology*.**59**(1): 279–300. doi:10.1146/annurev.psych.59.103006.093625. ISSN 0066-4308. PMID 17550343.**^**D., Baird, James (2010).*Unlock the positive potential hidden in your DNA*. Nadel, Laurie, 1948-. Franklin Lakes, NJ: New Page Books. ISBN 9781601631053. OCLC 460061527.`{{cite book}}` : CS1 maint: multiple names: authors list (link)**^**O'Malley, J (1999).*Teaching Empathy*. America. pp. 22–26.`{{cite book}}` : CS1 maint: location missing publisher (link)**^**C. G., JUNG (1969). ADLER, GERHARD; HULL, R. F. C. (eds.).*Collected Works of C. G. Jung, Volume 11: Psychology and Religion: West and East*. Princeton University Press. JSTOR j.ctt5hhr4b.- ^ **a**Wade, Carole; Tavris, Carol (2002).**b***Psychology*. Prentice Hall. ISBN 9780130982636.wade psychology. **^**Mills, Jon (2013-02-01). "Jung's metaphysics".*International Journal of Jungian Studies*.**5**(1): 19–43. doi:10.1080/19409052.2012.671182. ISSN 1940-9052.**^**"The Psychology of Embarrassment".*World of Psychology*. 2012-11-14. Archived from the original on 2016-12-07. Retrieved 2017-12-04.**^**"Funny Business".*tribunedigital-chicagotribune*. Retrieved 2017-12-06.**^**"With 'Office,' NBC Goes Off the Beaten Laugh Track (washingtonpost.com)".*www.washingtonpost.com*. Retrieved 2017-12-04.**^**"Don't Like Cringe Comedies? You Probably Have Fremdscham".*curiosity.com*. Archived from the original on 2017-12-08. Retrieved 2017-12-06.**^**"The Office, "Duel" & 30 Rock, "Flu Shot": Silent but deadly".*NJ.com*. Retrieved 2017-12-04.
true
true
true
null
2024-10-12 00:00:00
2017-10-04 00:00:00
null
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
21,147,976
https://www.quantamagazine.org/cell-bacteria-mergers-offer-clues-to-how-organelles-evolved-20191003/
Cell-Bacteria Mergers Offer Clues to How Organelles Evolved
Viviane Callier
# Cell-Bacteria Mergers Offer Clues to How Organelles Evolved ## Introduction There are few relationships in nature more intimate than those between cells and the symbiotic bacteria, or endosymbionts, that live inside them. In these partnerships, a host cell typically provides protection to its endosymbiont and gives it a way to propagate, while the endosymbiont provides key nutrients to the host. It’s a deeply cooperative arrangement, in which the genomes of the host and the endosymbiont even seem to contribute complementary pieces to each other’s metabolic and biosynthetic pathways. The revealed intricacy of these partnerships continues to hold surprises. In a new study appearing today in *Cell*, scientists show that a complex three-way symbiosis between an insect cell and two species of bacteria — one an endosymbiont of the other — deeply intertwines the organisms’ genomes and physiologies. Those results may illuminate how mitochondria and other organelles arose from ancient endosymbionts in the earliest eukaryotic cells. When cells need to quickly acquire a new metabolic trait to survive, their best option may be to borrow one from other organisms. Horizontal transfers can move a few genes between cells, but the chances of horizontally acquiring the complete suite of genes for a complex metabolic pathway are vanishingly small. So the easiest solution is often for cells with dissimilar abilities and complementary needs to merge, explains John McCutcheon, an endosymbiosis researcher at the University of Montana. These mergers are not uncommon in nature. Secondary and tertiary mergers are even known to have occurred, producing the cellular equivalent of a set of nested Russian dolls. One such Russian-doll merger occurred about 100 million years ago, when the small insect pests called mealybugs acquired a bacterial endosymbiont, *Tremblaya*. Subsequently, *Tremblaya *acquired several other bacteria including *Moranella*. Eventually the others were lost and only *Moranella *remained. It’s not known how long the *Moranella *endosymbiosis has been going on, but it is probably on the order of tens of millions of years. The result is that the mealybug cell contains a bacterium that contains another bacterium – an arrangement discovered back in 2001 by Carol von Dohlen, a biologist at Utah State University. In 2011, von Dohlen and McCutcheon published the sequenced genomes of these two bacteria. Each of the genomes had lost genes, but together they had the full complement of genes coding for enzymes in biosynthetic pathways for essential amino acids. Thus, *Tremblaya *and *Moranella *work together to produce essential amino acids for themselves as well as the ones that the mealybug cannot find in its strict sap diet. But the two bacterial genomes were missing other genes as well, and although they complemented one another for amino acid synthesis, they seemed unable to make enzymes crucial to other metabolic pathways. That led McCutcheon to wonder whether the host insect’s genome contained the genes that filled those holes. In a paper published in 2013, McCutcheon and colleagues showed that this was indeed the case. They also noticed that although those genes sat inside the nuclei of the insect host cells, many of them had clearly not started out as mealybug genes because they coded for synthesizing peptidoglycans, the main components of bacterial cell walls. Those genes had to have been horizontally transferred into the mealybug nuclear genome from bacteria. 
The genomic evidence therefore suggested, but did not prove, that the *Moranella *endosymbiont might rely on gene products from the mealybug’s nuclear genome to make its cell walls. If so, however, it meant that the products of the insect’s genes had to move from the host nucleus through five cell membranes (three in *Tremblaya *and two in *Moranella*) to reach the inside of the most deeply nested bacteria, where the peptidoglycans are made. That unproved proposition seemed highly unlikely. In addition, it would mean that genes from at least three disparate sources — authentic mealybug nuclear genes, various bacterial genes acquired by the mealybug nuclear genome, and *Moranella *genes — were all working together in a complex biosynthetic pathway. That also seemed unlikely. The hypothesis taking shape looked ungainly, even to the researchers. “It’s so complicated, and for all this stuff to work together, it’s almost just ridiculous,” McCutcheon said. Still, they had enough confidence in their genomic data to devise a way to test it. “We wanted to figure out a pathway where we could go in and actually prove that what we were seeing in the genomics actually worked the way we thought it would.” ## Symbionts and Metabolism In a paper published today in *Cell*, McCutcheon, DeAnna Bublitz (a senior scientist in McCutcheon’s laboratory) and their colleagues describe the clever trick that enabled them to accomplish this. They exploited a unique feature of peptidoglycans: that they are made with D-alanine, an amino acid found nowhere else in cell metabolism. In the experiments, the researchers gave growing cultures of mealybug cells versions of D-alanine that were tagged with either a heavy nitrogen isotope or fluorescent compounds, which allowed them to trace its location in the cell and its various metabolic transformations. Their findings confirmed that the complete biochemistry of peptidoglycan synthesis was occurring inside the nested *Moranella *endosymbionts. When the researchers first saw the results of the heavy-isotope experiment, Bublitz recalls, she was in a basement laboratory with other grad students and research scientists. “The screen came up and started to show that the pattern we expected with peptidoglycan was real, but we took a moment to go stand back in the hallway and look at the screen from very, very far away to make sure that what we were seeing was still visible,” she said. “It was that level of disbelief.” The researchers even called in strangers and asked them to describe the image on the screen to see “if it was as obvious to them as blind reviewers as it was to us, and that we weren’t just seeing what we wanted to see.” “The thing that makes me most excited is that this shows that these complicated endosymbioses work,” McCutcheon said. He was also struck that *Moranella *and *Tremblaya *are so integrated into the mealybug cells that they are effectively part of them: “Seeing the genetic complexity that underlies it really erodes all functional distinction between endosymbiont and organelle.” W. Ford Doolittle, an evolutionary and molecular biologist at Dalhousie University in Nova Scotia, says this study represents “a necessary and exciting amount of ground-truthing” because it tested out the biochemistry implied by the genomic data and showed that the cells are actually making what they’re supposed to be making. “I do think this is a very significant paper,” he said. 
“It’s quite a tour de force to be able to actually show that the innermost bacterium makes the cell walls,” said Seemay Chou, a biochemist studying interactions between animals and microbes at the University of California, San Francisco. “It’s a very technically challenging thing to demonstrate.” “What is super wild about this is that the division of labor requires that the host transport something across five different lipid membranes to get to the inner endosymbiont,” she added. It’s unclear at this point whether the symbiotic cells are using the same molecular mechanisms that eukaryotic cells normally use to shuttle proteins across cell membranes or whether they had to invent new ways to move this cargo around. According to McCutcheon, some evidence indicates that *Tremblaya*, the middleman in this three-way symbiosis, actively transports the gene products between the host nucleus and the *Moranella *genome, because none of the heavy isotope-tagged metabolites were ending up in *Tremblaya*. “It is participating somehow, but that ‘how’ is really quite a mystery,” he said. ## The Controlling Partner There is at least one other known example of an endosymbiosis that involves shared genes. *Paulinella*, a protist, evolved a “second chloroplast” or chromatophore about 100 million years ago by acquiring a cyanobacterial endosymbiont. The chromatophore genome works in complement with the host protist genome to make the chromatophore’s peptidoglycan layer. McCutcheon speculated that in both *Paulinella *and the mealybug-*Tremblaya-Moranella *endosymbiosis, the genomic mosaic for making peptidoglycans may enable the eukaryotic host to control its bacterial endosymbionts. If the endosymbiont replicated too quickly, it could kill the host; by limiting the rate at which the bacteria can build cell walls, the host keeps its residents in check. Why *Moranella *continues to make a cell wall remains a mystery, however, because it is already safely encased inside both *Tremblaya *and the host cell. “Clearly, the fact that it has to do all this suggests it’s important for some reason,” Chou said. McCutcheon thinks that the mealybug ménage à trois may hold clues about the evolution of the very oldest, and probably best-known, organelle: mitochondria. Mitochondria evolved from an alphaproteobacteria that was engulfed by a prokaryote (most likely a member of Archaea) between 1.5 billion and 2 billion years ago. Because most living bacteria have peptidoglycan cell walls, it is likely that ancient ones — including the ancestor of mitochondria — did too. Bublitz hypothesizes that the alphaproteobacteria got taken up by another cell or actively invaded it, and that the host cell was able to co-opt the peptidoglycan pathway and control its endosymbiont’s replication. It’s possible the host and the early mitochondria eventually became further integrated, so much so that the host no longer needed the peptidoglycan pathway to control the mitochondria’s division. Ceding control over peptidoglycan synthesis might be “one of those first steps in transitioning from an autonomous bacterium to some kind of functional organelle,” Bublitz speculated. Because mitochondria are so ancient, and because they evolved only once, it is difficult to reconstruct exactly how that endosymbiosis evolved. What we know is that over time mitochondrial genomes lost genes, some of which came to be inserted into the nuclear genome. 
Today, in most animals complicated enough to have bilateral symmetry, the mitochondrial genomes retain only 37 genes; the mitochondria rely on more than 1,000 genes now in the nucleus to function. (In contrast, eukaryotic microbes ordinarily have between three and 69 mitochondrial genes.) But a little-known fact is that many of those nuclear genes didn’t come from the ancient mitochondrial genome; they came from other bacteria in horizontal transfer events, McCutcheon says. The origin of these horizontally transferred genes is a hotly debated issue in mitochondrial biology. One possibility is that the alphaproteobacteria acquired them from other bacteria before the endosymbiotic event that produced the eukaryotic cell, and the genes subsequently moved over to the host nucleus. Another is that the horizontally transferred genes arrived in the nuclear genome directly from various bacteria over time, after the eukaryotic cell evolved. Although we can’t be completely certain how the oldest organelle evolved, the mealybug-*Tremblaya*-*Moranella* symbiosis demonstrates that the second evolutionary scenario can work — that genes from different bacterial infections can accumulate slowly and become integrated into functional pathways in a single entity. What makes mitochondria unique among endosymbionts-turned-organelles is that they are the oldest example, McCutcheon said. “But their antiquity makes them hard to study, and it makes it hard to infer what happened before they became organelles. I think the thing about *Paulinella* is that it gives us a window into what might’ve happened.” Although mitochondria have been uniquely successful, not all endosymbioses end as happily. Earlier work by McCutcheon showed that cicadas have a bacterial endosymbiont, *Hodgkinia cicadicola*, that fragmented into more than two dozen lineages inside the cicada cells. Those lineages each contain only varying subsets of the *Hodgkinia* genome. Together, all of the lineages have the full complement of genes needed to make the essential amino acids the cicadas depend on. But McCutcheon thinks this is an example of a nonadaptive endosymbiont evolution that creates difficulties for the host: The cicada eggs need to pick up a full complement of the endosymbionts to survive, and the variability in what different combinations of endosymbionts provide poses problems. McCutcheon and Bublitz are now working to figure out why some endosymbioses evolve into stable, successful partnerships while others spiral out of control or degrade. “Right now, there’s no smoking gun as to what allows one to persist,” Bublitz said.
true
true
true
Cells in symbiotic partnership, sometimes nested one within the other and functioning like organelles, can borrow from their host’s genes to complete their own metabolic pathways.
2024-10-12 00:00:00
2019-10-03 00:00:00
https://d2r55xnwy6nx47.c…_1200_Social.jpg
article
quantamagazine.org
Quanta Magazine
null
null
13,777,998
https://blogs.nvidia.com/blog/2017/03/02/ai-podcast-how-a-computer-scientist-uses-ai-to-read-lost-literature/
Artificial Intelligence Archives
null
true
true
true
null
2024-10-12 00:00:00
2024-09-01 00:00:00
https://blogs.nvidia.com…/nvidia-logo.jpg
article
nvidia.com
NVIDIA Blog
null
null
18,239,597
http://fsharpforfunandprofit.com/posts/property-based-testing-2/
Choosing properties for property-based testing
null
# Choosing properties for property-based testing

*UPDATE: I did a talk on property-based testing based on these posts. Slides and video here.*

In the previous two posts, I described the basics of property-based testing, and showed how it can save a lot of time by generating random tests. But here’s a common problem. Everyone who sees a property-based testing tool like FsCheck or QuickCheck thinks that it is amazing… but when it comes time to start creating your own properties, the universal complaint is: “what properties should I use? I can’t think of any!” The goal of this post is to show some common patterns that can help you discover the properties that are applicable to your code. In my experience, many properties can be discovered by using one of the seven approaches listed below.

- “Different paths, same destination”
- “There and back again”
- “Some things never change”
- “The more things change, the more they stay the same”
- “Solve a smaller problem first”
- “Hard to prove, easy to verify”
- “The test oracle”

This is by no means a comprehensive list, just the ones that have been most useful to me. For a different perspective, check out the list of patterns that the PEX team at Microsoft have compiled.

## “Different paths, same destination”

These kinds of properties are based on combining operations in different orders, but getting the same result. For example, in the diagram below, doing `X` then `Y` gives the same result as doing `Y` followed by `X`. In category theory, this is called a *commutative diagram*. Addition is an obvious example of this pattern. For example, the result of `add 1` then `add 2` is the same as the result of `add 2` followed by `add 1`. This pattern, generalized, can produce a wide range of useful properties. We’ll see some more uses of this pattern later in this post.

## “There and back again”

These kinds of properties are based on combining an operation with its inverse, ending up with the same value you started with. In the diagram below, doing `X` serializes `ABC` to some kind of binary format, and the inverse of `X` is some sort of deserialization that returns the same `ABC` value again. In addition to serialization/deserialization, other pairs of operations can be checked this way: `addition`/`subtraction`, `write`/`read`, `setProperty`/`getProperty`, and so on. Other pairs of functions fit this pattern too, even though they are not strict inverses, pairs such as `insert`/`contains`, `create`/`exists`, etc.

## “Some things never change”

These kinds of properties are based on an invariant that is preserved after some transformation. In the diagram below, the transform changes the order of the items, but the same four items are still present afterwards. Common invariants include size of a collection (for `map` say), the contents of a collection (for `sort` say), the height or depth of something in proportion to size (e.g. balanced trees).

## “The more things change, the more they stay the same”

These kinds of properties are based on “idempotence” – that is, doing an operation twice is the same as doing it once. In the diagram below, using `distinct` to filter the set returns two items, but doing `distinct` twice returns the same set again. Idempotence properties are very useful, and can be extended to things like database updates and message processing.
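To make the patterns above a little more concrete, here is a minimal sketch of how three of them might be written with FsCheck (mentioned earlier). The property names are my own, and the snippet assumes the FsCheck package is referenced.

```fsharp
open FsCheck

// "Different paths, same destination": adding 1 then 2 equals adding 2 then 1.
let addOrderDoesNotMatter (x: int) =
    ((x + 1) + 2) = ((x + 2) + 1)

// "There and back again": reversing a list twice returns the original list.
let reverseRoundTrips (xs: int list) =
    List.rev (List.rev xs) = xs

// "The more things change, the more they stay the same": distinct is idempotent.
let distinctIsIdempotent (xs: int list) =
    List.distinct (List.distinct xs) = List.distinct xs

// Check.Quick generates random inputs and reports any counterexample it finds.
Check.Quick addOrderDoesNotMatter
Check.Quick reverseRoundTrips
Check.Quick distinctIsIdempotent
```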
## “Solve a smaller problem first”

These kinds of properties are based on “structural induction” – that is, if a large thing can be broken into smaller parts, and some property is true for these smaller parts, then you can often prove that the property is true for a large thing as well. In the diagram below, we can see that the four-item list can be partitioned into an item plus a three-item list, which in turn can be partitioned into an item plus a two-item list. If we can prove the property holds for a two-item list, then we can infer that it holds for the three-item list, and for the four-item list as well. Induction properties are often naturally applicable to recursive structures such as lists and trees.

## “Hard to prove, easy to verify”

Often an algorithm to find a result can be complicated, but verifying the answer is easy. In the diagram below, we can see that finding a route through a maze is hard, but checking that it works is trivial! Many famous problems are of this sort, such as prime number factorization. But this approach can be used for even simple problems. For example, you might check that a string tokenizer works by just concatenating all the tokens again. The resulting string should be the same as what you started with.

## “The test oracle”

In many situations you often have an alternate version of an algorithm or process (a “test oracle”) that you can use to check your results. For example, you might have a high-performance algorithm with optimization tweaks that you want to test. In this case, you might compare it with a brute force algorithm that is much slower but is also much easier to write correctly. Similarly, you might compare the result of a parallel or concurrent algorithm with the result of a linear, single thread version. “Model-based” testing, which we will discuss in more detail in a later post, is a variant on having a test oracle. The way it works is that, in parallel with your (complex) system under test, you create a simplified model. Then, when you do something to the system under test, you do the same (but simplified) thing to your model. At the end, you compare your model’s state with the state of the system under test. If they are the same, you’re done. If not, either your SUT is buggy or your model is wrong and you have to start over! So that covers some of the common ways of thinking about properties. Here are the seven ways again, along with a more formal term, if available.

- “Different paths, same destination” – a diagram that commutes
- “There and back again” – an invertible function
- “Some things never change” – an invariant under transformation
- “The more things change, the more they stay the same” – idempotence
- “Solve a smaller problem first” – structural induction
- “Hard to prove, easy to verify”
- “A test oracle”

So that’s the theory. How might we apply them in practice? In the next post, we’ll look at some simple tasks, such as “sort a list”, “reverse a list”, and so on, and see how we might test their implementations with these various approaches.
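As a taste of the “test oracle” idea applied to the list tasks just mentioned, the sketch below checks a hand-rolled insertion sort (standing in for some optimized implementation) against the built-in `List.sort` as the oracle. The function names are illustrative, not from the post.

```fsharp
open FsCheck

// A deliberately simple insertion sort standing in for the "complex" system under test.
let rec insert x ys =
    match ys with
    | [] -> [ x ]
    | y :: rest -> if x <= y then x :: ys else y :: insert x rest

let insertionSort xs = List.fold (fun acc x -> insert x acc) [] xs

// The oracle property: the custom sort must agree with the trusted List.sort.
let sortMatchesOracle (xs: int list) =
    insertionSort xs = List.sort xs

Check.Quick sortMatchesOracle
```

If the two implementations ever disagree, FsCheck prints the failing input, which is usually enough to locate the bug.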
true
true
true
Or, I want to use PBT, but I can never think of any properties to use
2024-10-12 00:00:00
2014-12-12 00:00:00
null
null
null
ScottWlaschin
null
null
28,319,993
https://github.com/gaul/modernizer-maven-plugin
GitHub - gaul/modernizer-maven-plugin: Detect uses of legacy Java APIs
Gaul
Modernizer Maven Plugin detects uses of legacy APIs which modern Java versions supersede. These modern APIs are often more performant, safer, and idiomatic than the legacy equivalents. For example, Modernizer can detect uses of `Vector` instead of `ArrayList`, `String.getBytes(String)` instead of `String.getBytes(Charset)`, and Guava `Objects.equal` instead of Java 7 `Objects.equals`. The default configuration detects over 200 legacy APIs, including third-party libraries like Apache Commons, Guava, and Joda-Time. To run Modernizer, add the following to the `<plugins>` stanza in your pom.xml then invoke `mvn modernizer:modernizer`:

```
<plugin>
  <groupId>org.gaul</groupId>
  <artifactId>modernizer-maven-plugin</artifactId>
  <version>2.7.0</version>
  <configuration>
    <javaVersion>8</javaVersion>
  </configuration>
</plugin>
```

The `<configuration>` stanza can contain several elements:

- `<javaVersion>` enables violations based on target Java version, e.g., 8. For example, Modernizer will detect uses of `Vector` as violations when targeting Java 1.2 but not when targeting Java 1.1. Required parameter.
- `<failOnViolations>` fail phase if Modernizer detects any violations. Defaults to true.
- `<includeTestClasses>` run Modernizer on test classes. Defaults to true.
- `<violationsFile>` user-specified violation file. Also disables standard violation checks. Can point to classpath using absolute paths, e.g. `classpath:/your/file.xml`.
- `<violationsFiles>` user-specified violations file. The latter files override violations from the former ones, including `violationsFile` and the default violations. Can point to classpath using absolute paths, e.g. `classpath:/your/file.xml`.
- `<exclusionsFile>` disables user-specified violations. This is a text file with one exclusion per line in the javap format: `java/lang/String.getBytes:(Ljava/lang/String;)[B`. Empty lines and lines starting with `#` are ignored.
- `<exclusions>` violations to disable. Each exclusion should be in the javap format: `java/lang/String.getBytes:(Ljava/lang/String;)[B`.
- `<exclusionPatterns>` violation patterns to disable, specified using `<exclusionPattern>` child elements. Each exclusion should be a regular expression that matches the javap format: `java/lang/.*` of a violation.
- `<ignorePackages>` package prefixes to ignore, specified using `<ignorePackage>` child elements. Specifying `foo.bar` subsequently ignores `foo.bar.*`, `foo.bar.baz.*` and so on.
- `<ignoreClassNamePatterns>` fully qualified class names (incl. package) to ignore, specified using `<ignoreClassNamePattern>` child elements. Each exclusion should be a regular expression that matches a package and/or class; the package will be / not . separated (ASM's format).
- `<ignoreGeneratedClasses>` classes annotated with an annotation whose retention policy is `runtime` or `class` and whose simple name contains "Generated" will be ignored. (Note: both javax.annotation.Generated and javax.annotation.processing.Generated have retention policy SOURCE (aka discarded by compiler).)

To run Modernizer during the verify phase of your build, add the following to the modernizer `<plugin>` stanza in your pom.xml:

```
<executions>
  <execution>
    <id>modernizer</id>
    <phase>verify</phase>
    <goals>
      <goal>modernizer</goal>
    </goals>
  </execution>
</executions>
```

Command-line flags can override Modernizer configuration and ModernizerMojo documents all of these.
The most commonly used flags:

- `-Dmodernizer.failOnViolations` - fail phase if violations detected, defaults to true
- `-Dmodernizer.skip` - skip plugin execution, defaults to false

The plugin can output Modernizer violations in one of many formats which can be configured with the `<configuration>` stanza using `<outputFormat>`. The currently supported formats and their respective configuration options are outlined below:

- `CONSOLE` List each violation using Maven's logger. This is the **default** format.
  - `<violationLogLevel>` Specify the log level of the logger: `error`, `warn`, `info` or `debug`. Default is `error`.
- `CODE_CLIMATE` Write the violations according to Code Climate's Spec. GitLab uses this format for its code quality as shown here.
  - `<outputFile>` The full path of the file to output to. Default is `${project.build.directory}/code-quality.json`
  - `<codeClimateSeverity>` Severity of Modernizer violations for CodeClimate: `INFO`, `MINOR`, `MAJOR`, `CRITICAL` or `BLOCKER`. Default is `MINOR`.

Code can suppress violations within a class or method via an annotation. First add the following dependency to your `pom.xml`:

```
<dependencies>
  <dependency>
    <groupId>org.gaul</groupId>
    <artifactId>modernizer-maven-annotations</artifactId>
    <version>2.7.0</version>
  </dependency>
</dependencies>
```

Then add `@SuppressModernizer` to the element to ignore:

```
import org.gaul.modernizer_maven_annotations.SuppressModernizer;

public class Example {
    @SuppressModernizer
    public static void method() { ... }
}
```

- ASM provides Java bytecode introspection which enables Modernizer's checks
- Checkstyle IllegalInstantiation and Regexp checks can mimic some of Modernizer's functionality
- Google Error Prone JdkObsolete can mimic some of Modernizer's functionality
- Gradle Modernizer Plugin provides a Gradle interface to Modernizer
- `javac -Xlint:deprecated` detects uses of interfaces with @Deprecated annotations
- Overstock.com library-detectors detects uses of interfaces with @Beta annotations
- Policeman's Forbidden API Checker provides similar functionality to Modernizer

Copyright (C) 2014-2022 Andrew Gaul Licensed under the Apache License, Version 2.0
true
true
true
Detect uses of legacy Java APIs. Contribute to gaul/modernizer-maven-plugin development by creating an account on GitHub.
2024-10-12 00:00:00
2014-09-18 00:00:00
https://opengraph.githubassets.com/0ca1e2012c7d67f5c193c8f4bd4c8768d671835a0c5aaba06980efe1eefaf8e1/gaul/modernizer-maven-plugin
object
github.com
GitHub
null
null
25,979,572
https://www.psypost.org/2021/01/neuroscience-study-indicates-that-lsd-frees-brain-activity-from-anatomical-constraints-59458
Neuroscience study indicates that LSD "frees" brain activity from anatomical constraints
Eric W Dolan
The psychedelic state induced by LSD appears to weaken the association between anatomical brain structure and functional connectivity, according to new research published in the journal *NeuroImage*. The study also provides evidence that LSD increases the complexity of segregated brain states. The findings provide new insights into the relationship between brain function and consciousness. “My main interest — and the focus of my research — is on understanding the neuroscience of human consciousness,” said study author Andrea I. Luppi (@loopyluppi), a Gates Scholar at the University of Cambridge in the Cognition and Consciousness Imaging Group. “Most studies of consciousness focus on its loss: sleep, anesthesia, or coma. But we think that a complementary way to obtain insights is to study states of altered consciousness, such as the psychedelic state induced by LSD.” Consciousness is believed to involve the integration of multiple segregated brain networks and their subnetworks, and the researchers sought to better understand how these patterns of brain connectivity varied over time under the influence of LSD. Luppi and his colleagues used functional magnetic resonance imaging (fMRI) to examine the structural and functional brain connectivity of 15 healthy volunteers during two separate sessions. During one session, the participants were given a placebo. During the other, they were given an active dose of LSD. Typically, “neurons that fire together, wire together.” But the researchers found that LSD decoupled the relationship between structural and functional connectivity, indicating that brain activity is “less constrained than usual by the presence or absence of an underlying anatomical connection” under the influence of the substance. “We know that brain structure has a large influence on brain function under normal conditions. Our research shows that under the effects of LSD, this relationship becomes weaker: function is less constrained by structure. This is largely the opposite of what happens during anesthesia,” Luppi explained. As the researchers wrote in their study, under the influence of LSD, it appears that “the brain is free to explore a variety of functional connectivity patterns that go beyond those dictated by anatomy – presumably resulting in the unusual beliefs and experiences reported during the psychedelic state, and reflected by increased functional complexity.” “Integration and segregation of information are fundamental properties of brain function: We found that LSD does not affect them equally, but rather it has specific effects on each,” Luppi told PsyPost. In addition, these changes in brain integration and segregation fluctuated over time, and these fluctuations were associated with subjective experiences. For example, the experience of losing one’s sense of self during a psychedelic experience, a phenomenon known as “ego dissolution” or “ego death,” was associated with a state of high global integration. “This is a relatively new area of neuroscience, and research on larger cohorts will be needed to fully understand the effects of LSD and other psychedelics on brain function,” Luppi said.
“A more thorough characterization may also shed light on potential clinical applications — such as the ongoing research at the new Centre for Psychedelic Research in London.” “Studying psychoactive substances offers a unique opportunity for neuroscience: we can study their effects in terms of brain chemistry, but also at the level of brain function and subjective experience,” he added. “In particular, the mind is never static, and neither is the brain: we are increasingly discovering that when it comes to brain function and its evolution over time, the journey matters just as much as the destination.” The study, “LSD alters dynamic integration and segregation in the human brain“, was authored by Andrea I. Luppi, Robin L. Carhart-Harris, Leor Roseman, Ioannis Pappas, David K. Menon, and Emmanuel A. Stamatakis.
true
true
true
The psychedelic state induced by LSD appears to weaken the association between anatomical brain structure and functional connectivity, according to new
2024-10-12 00:00:00
2021-01-30 00:00:00
https://www.psypost.org/…21/01/neuron.jpg
article
psypost.org
PsyPost
null
null
2,870,217
http://blog.zpao.com/post/8746168914/this-is-bad-recruiting
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,993,097
https://www.theguardian.com/world/2016/nov/15/murder-nuns-burundi-man-who-knew-too-much
Murder in Burundi: the man who knew too much | Jessica Hatcher-Moore
Jessica Hatcher-Moore
On a Sunday afternoon in early September 2014, Sister Bernadetta Boggian drove into the compound of the Catholic convent where she lived in Bujumbura, the capital of Burundi, and called out to her fellow nuns. There was no sign of the other elderly sisters who lived in the convent, so Sister Bernadetta went to find Father Mario Pulcini, the head of the mission, to ask if he had seen them. He tried phoning them, but there was no reply. So they walked across the shady compound to the nuns’ quarters, where they found the curtains drawn. They knocked, and called out, but there was no answer. The priest was about to force open the door, but Sister Bernadetta walked around to a side entrance, which was unlocked. Inside she found a horrific scene. Sister Olga Raschietti was lying dead in her bedroom, blood pooling around her head. In the bedroom next door lay the body of Sister Lucia Pulici. Both women had been stabbed, and their throats slit. Sister Lucia would have celebrated her 76th birthday the next day. Sister Olga was 82. Together, these three elderly friends had worked for almost 50 years in South Kivu, an eastern province of the Democratic Republic of Congo that was at the centre of a series of conflicts sometimes known collectively as the Great African War, the deadliest in the continent’s modern history. When the three sisters finally left South Kivu for Burundi, they were looking forward to a more peaceful retirement posting. Father Mario called the local police, and his superiors in Italy. Lorries and pickup trucks arrived quickly, disgorging police and soldiers, and security forces circled the compound. At around 6pm, the congregation poured out of mass in their brightly coloured Sunday best, straight into a crime scene. A papal official stood over the bodies and wept. Outside the convent, young women the sisters had taught to sew wailed with grief. Sister Bernadetta, who remained collected throughout, accompanied the bodies to the morgue, and then returned to the convent. Father Mario wanted to find somewhere else for sister Bernadetta and the other nuns to sleep. But the sisters insisted they wanted to stay together, and sleep at the convent. As night fell, heavily armed police patrolled the compound. When noises woke Sister Bernadetta during the night, she telephoned Father Mario, who was still awake, writing down an account of the previous day. “I think the killer is still here,” sister Bernadetta told him in a shaky voice. The priest hurried to the nuns’ quarters, but he was too late. Sister Bernadetta was already dead. In an act of violence unimaginable to those who knew the small and wiry 79-year-old, the killer had cut off her head. The next morning, shocked and angry locals closed their businesses and gathered outside the convent to protest against the murders. People claimed the killers were being protected by the police. Some protesters saw the notorious head of the state intelligence agency, General Adolphe Nshimirimana, enter the convent. Some time later, Father Mario emerged from the gates and appealed to the protesters to disperse peacefully. Three weeks on, a leaflet was found at the convent urging the mission not to pursue an investigation into the crimes. The murders at the convent horrified Burundians, not just because of their brutality but because they took place almost a decade after the end of the country’s 12-year civil war, in which 300,000 were slaughtered and 1.2 million – a fifth of the population – fled their homes. 
In the wake of that conflict, which divided the nation along ethnic lines, between Hutu and Tutsi, Burundians vowed that their country would never again experience such brutal violence. Church missions to countries riven by long-term civil strife are liable to get caught up in toxic politics. The powerful Catholic church in Burundi, which represents 80% of Burundians, has come under suspicion for providing aid to militant groups during the civil war. But it has also regularly criticised government abuses, and paid a price for it. In 1995, during the civil war, when the majority Hutus rose up against the abusive Tutsi military, gunmen executed two priests and a lay preacher suspected of supporting the rebels. A year later, a moderate Tutsi archbishop was murdered by gunmen. More than 10 Catholic clerics were assassinated in Burundi in the first three years of the civil war. When church leaders have denounced the violence of the country’s warring factions, political leaders have often seen them as a threat – and done whatever was necessary to silence them. **The murders at the convent followed **a ripple of unrest across the region. In April 2014, the United Nations envoy to Burundi had cabled UN headquarters in New York with a warning that the Hutu-led ruling party was distributing weapons and uniforms to its youth wing. In some areas, particularly outside the cities, the group acts “in collusion with local authorities and with total impunity”. It acts like a “militia over and above the police, the army, and the judiciary”, the cable said. The group was described as “one of the major threats to peace in Burundi and to the credibility of the 2015 elections as they are responsible for most politically motivated violence against opposition”. The government of Burundi issued a rebuttal to the UN, vehemently denying that it had been funding, arming or training the youth wing, known as the Imbonerakure. Nonetheless, the cable was received with alarm. A few weeks after the UN cable, Burundi’s most prominent human rights activist, Pierre Claver Mbonimpa, told listeners of the popular radio station Radio Publique Africaine (RPA), that arms and uniforms were being given out to hundreds of young men, who had gone for military training in the neighbouring Democratic Republic of Congo (DRC). Mbonimpa described photographs he had seen of young Burundian fighters training in the DRC, and first-hand accounts he had heard from witnesses and former soldiers. During the civil war, Mbonimpa’s own mother, a Tutsi, had been hacked to death by Hutu youths armed with machetes. A decade later, he pledged to do what he could to stop what appeared to be the preparation of a new youth militia. “In my experience, it is always the youths that do the killing. Everywhere in the region, it’s the youth that are used for violence,” Mbonimpa told me earlier this year. In the hot and humid city of Bujumbura, on the shore of Lake Tanganyika, the poor live in airless, dusty and tight-knit grid systems, clustered around the centre and to the north. These poor neighbourhoods, particularly Kamenge, where the convent stands, had been fertile recruiting grounds for rebel groups looking for young men during the war. Once again, it seemed the youth were being prepared to fight. People were afraid of what the training of this new secret youth army might mean. In a bid to end the civil war, a peace deal had been tabled in 1998 by president Julius Nyerere of Tanzania. 
Talks had continued under Nelson Mandela and his successor, Thabo Mbeki. An agreement was eventually signed in 2000, which laid out rules for equal representation of Hutus and Tutsis, and said that no president could serve more than two terms. A ceasefire finally became law in January 2005. As the 2015 elections approached, critics were afraid that Burundi’s president, the former Hutu rebel leader Pierre Nkurunziza, would try to hold on to power. Government forces and their opponents were preparing to face off in a new round of violence and intimidation. The president’s supporters insisted his rule brought peace to the country, and regeneration to the rural economy - he is passionate about growing avocados. His office maintained that a third term in office would be legitimate, as the first term (the result of an election by parliament, not the people) did not count. **Mbonimpa is a tall and powerful man,** with a gentle manner and an unshakeable determination. Only the dusting of grey stubble on his head gives away his age. At 71 – in a country where life expectancy for men is 58 – he is considered one of the grandfathers of postwar Burundi, celebrated for his tireless work in exposing attacks on human rights workers, opposition politicians and journalists, enforced disappearances, illegal detention and torture. His work has earned him a loyal support base. In September this year, Human Rights Watch honoured his work with the Alison Des Forges award, given to leading campaigners for justice, describing him as “a man of extraordinary courage who has defied repeated threats to defend victims of abuse”. Mbonimpa spent two years in prison during the war. Between 1994 and 1996, he was an inmate at the vast and overcrowded Mpimba prison, accused of a crime he did not commit. There, he was beaten and starved, he witnessed young boys locked up and abused by adults, women raped by guards and children born as a result. He emerged determined to reform the prison and justice systems in Burundi. Soon after his release, he founded the Association for the Protection of Human Rights and Detained Persons, now the most prominent human rights group in the country. He has secured the release of thousands of young Burundians who have been wrongfully imprisoned. The day after Mbonimpa’s radio appearance, police called him in for questioning. They summoned him repeatedly over the next week, each time demanding that he reveal his sources – but each time, he refused. At midnight on 15 May, when he arrived at the airport for a flight to Kenya, a number of police officers were waiting for him. He had just enough time to call his wife before the police bundled him into a waiting car. At the end of the following day, Mbonimpa was indicted for “endangering the internal and external security of the state”, based on his comments in the media, and jailed once again. When Mbonimpa was arrested in May 2014, four months before the nuns were murdered, a weekly protest known as “Green Friday” began in Bujumbura – with Burundians wearing green, the colour of prison uniforms, to show solidarity with Mbonimpa. Activists around the world joined in, tweeting photographs of themselves wearing green. His popular following infuriated the authorities, and radio stations were banned from reporting on the training and arming of militias. But Mbonimpa had a surprising protector. As commander of the ruling party’s military wing, General Adolphe was hated and feared by those outside his circle, but revered by his own men. 
At 50, he looked young for his age, sporting a moustache and wearing a heavy gold chain. A photograph popular with his followers shows him in a white baseball cap embellished with a gold eagle, the symbol of the CNDD-FDD, the Hutu rebel group that now runs Burundi. (The party’s full name translates from the French as The National Council for the Defence of Democracy – Forces for the Defence of Democracy. The repetition, Burundians joke, is just in case people need convincing their country is a democracy.) The general owned a bar in Kamenge called Iwabo W’abantu, meaning “the home of the people”, just a few blocks away from the convent where the nuns were killed. A grandiose stone eagle marks the entrance. Inside, on a high shelf, a nervous monkey twitches in a cage. CNDD-FDD supporters and civil servants sit behind huge bottles of Primus, the local lager, and talk politics while nearby a crocodile basks in a dirty pond. “The home of the people” is rumoured to double as a torture chamber for dissidents – one of many said to be hidden across the country. Adolphe, as he was generally known, was “not an educated man”, a former colleague said, but he was a gifted mobiliser, a man of the people. Generous and gregarious, he would stay up buying rounds of drinks until he was the last man standing. The Imbonerakure were largely made up of former Hutu rebels from the civil war and the general was like a father to them. During the civil war, he was a hero on the streets of Kamenge, the bastion of the Hutu rebellion. After the war, Adolphe forged ties with a violent rebel group in the DRC formed by the remnants of Rwanda’s genocidaires. The UN reports that he used these ties to carry out large-scale gold smuggling from rebel-controlled mines. By 2014, people said that it was not President Pierre Nkurunziza, but Adolphe, his loyal and trusted lieutenant, who held the levers of power. According to Gervais Rufyikiri, the former vice president, who fled Burundi in June 2015, every decision in government first went past the general. After the peace agreement limited the president’s term of office, it was rumoured that Adolphe was training a private army to keep the president, and himself, in power. As a human rights activist, Mbonimpa’s work involved challenging the brutality of Adolphe’s secret police – but the two men had a shared history. Mbonimpa was among the civilians who had supported the armed rebel movement during the war, providing food, information, money, and, occasionally, a place to hide, and General Adolphe was indebted to him for this. Mbonimpa’s own son was a rebel recruit. When Mbonimpa spoke out against the training of the Imbonerakure, he threw himself into direct conflict with the general, but he was confident that the general’s esteem would protect him. Adolphe had once told his son that if anyone killed Mbonimpa, the general would avenge him with his own hand – a vow that gave Mbonimpa courage to continue with his campaigning work. But when Mbonimpa found out that the three nuns had been butchered with machetes in the trademark manner of Adolphe’s intelligence agents, the two men were once again set on a collision course – one that would threaten both their lives. **On 9 September, two days after the murder,** the police arrested a local man named Christian Claude Butoyi and charged him with the nuns’ murder. 
Police told journalists that it was a revenge killing: the church had stolen Butoyi’s family land decades earlier, and bitterness at this injustice had driven Butoyi to rape and murder the nuns. Land disputes are common in Burundi – a tiny country with a barely functioning judicial system and one of the fastest-growing populations in the world – and are commonly resolved with violence. But Butoyi, 33, said nothing as police paraded him, handcuffed and dressed in ripped sports clothes, before the press. He showed no sign of remorse. Indeed, he showed no sign of any emotion at all. Residents of Kamenge recognised Butoyi as one of “*les fous*”, the mentally ill people who lived on the streets, surviving on scraps and charity – they doubted he possessed the physical or mental strength, never mind the motive, to carry out a series of brutal murders. From his prison cell, Mbonimpa watched this farce unfold. It was clear to him that Butoyi was not the killer. The following week, a court freed Mbonimpa after four months in detention, on medical grounds – he is diabetic – but he remained under judicial supervision. A thousand-strong crowd gathered outside the court, singing and dancing in celebration. When the authorities fail to deliver justice, Mbonimpa told me, it falls to journalists and human rights campaigners to discover the truth. To find out who was behind the murder of the three nuns, Mbonimpa approached one of Burundi’s leading investigative journalists, Bob Rugurika, the head of Radio Publique Africaine. The station, which broadcasts under the slogan “The Voice of the Voiceless”, was set up in 2001 with money from the Soros foundation; Samantha Power, US ambassador to the UN, was on the board. RPA’s goal was to heal the ethnic divisions of the civil war by getting Hutus and Tutsis to share airtime. There were one or two presenters who had managed to get members of militant groups to confess to killings on air – and express their remorse. A public confession could lay the foundations for reconciliation. A small, energetic man, Rugurika joined RPA while he was at law school, and had become editor in 2010. His career has been split between revealing corruption and living with the consequences, which have included death threats and periods spent in exile. While Mbonimpa was in prison, he had learned that as many as 1,800 members of the Imbonerakure had returned from the DRC and spread out across the country, ramping up their campaign of violence. On General Adolphe’s orders, the young fighters beat up suspected opposition candidates, disrupted meetings, and spread fear through the population. Mbonimpa found a source close to the investigation who had been assigned to the nuns’ murder case – whose identity remains protected. The source reported that police officers guarding the compound had allegedly confessed to complicity, while four men had carried out the killings. They all claimed to be acting on the direct orders of officers loyal to Adolphe. The police made a number of arrests based on these statements – but a few days later, Adolphe intervened. He ordered the suspects’ release, and the investigation into the nuns’ murder went no further. In October 2014, at a secret meeting, Bob Rugurika was introduced to a former intelligence agent known as Mwarabu, or “the light-skinned one”. Over a number of meetings in darkened bars, parked cars and private houses, Rugurika won the trust of Mwarabu, who eventually confessed to taking part in the murders. 
He even agreed to let Rugurika record his confession, on the condition that it could not be broadcast until he had fled the country. Mwarabu spoke into Rugurika’s microphone, and confessed that he and three others had killed the nuns, on the orders of General Adolphe. According to Mwarabu, the nuns had seen the Imbonerakure training near their former mission in the DRC. When Adolphe had heard Mbonimpa on the radio saying that the youth militia was being re-armed and trained, the general was convinced that the nuns had been Mbonimpa’s secret source. After Mbonimpa’s arrest, Adolphe worried that the nuns would speak out in order to secure his release – and he knew enough about the courage of these elderly missionaries to understand that they, unlike many others, would be unafraid to testify against him. The nuns had some additional inconvenient information, Mwarabu said. The former parish leader had been a legend among Hutus for surviving the 1995 massacre of Catholic priests, and a close ally of Adolphe. The general had used the mission’s health clinic to stock a private hospital he owned in the neighbourhood, to avoid paying import tax on drugs. In mid-2014, the parish priest became seriously ill and left Burundi for treatment in Italy. When the nuns and father Mario found out about the abuse of church resources, they were determined to put a stop to it. (The mission did not wish to comment on these claims.) A month after he won the trust of one alleged killer, Rugurika was introduced to another, Juvent Nduwimana, in a bar. Juvent, who grew up in Kamenge, was a rebel during the war and later became an intelligence agent working for Adolphe. Rugurika recorded his statement at the RPA office, and Juvent stated on the record that the nuns had been silenced to prevent them revealing the existence of an armed militia being trained by Adolphe to keep the president in power. **In January 2015, the country came to a standstill** at 12.30pm every day for a week as Bob Rugurika broadcast Mwarabu’s confession in a series of daily programmes on RPA. The streets fell silent as taxi drivers, traffic police, and street vendors gathered around radios to listen to the shocking admission of guilt. Rugurika had planned to broadcast his interviews with Juvent as corroboration – but before he could do so, Rugurika was arrested on 20 January. Two days later, although he had not been charged, he was transferred to an isolation cell and denied visitors. The arrest outraged human rights groups. Mbonimpa addressed journalists outside his office: “They can imprison us, they can kill us, but they can’t shut us up,” he said. Burundians took to the streets again in protest. Mwarabu, the confessed killer, sent Rugurika a message of solidarity from his hiding place in exile. A month later, a court freed Rugurika on bail, and thousands lined the streets in celebration. Rugurika broadcast Juvent’s confession in full on 30 March 2015. In the recording, Juvent’s voice was quiet – he sounded nervous – but his account was detailed and damning. He had heard from his recruiting officer that Adolphe was concerned that news of the Imbonerakure training in Congo would spread. The Catholic order to which the nuns belonged, the Xaverian Missionary Sisters of Mary, also ran a clinic in Luvungi, South Kivu, close to where the Imbonerakure were training. The soldiers had been getting treatment at the clinic, and the nuns, who travelled frequently between Bujumbura and the eastern DRC, were aware of this. 
Adolphe’s main concern was to stop them talking. The interview with Juvent was even more explosive than Mwarabu’s initial confession – since it corroborated the first confession, and pinned responsibility directly on Adolphe. Rugurika began sleeping in a different safe house every night – informed, by reliable friends, that security forces now had orders to kill him on sight. **In the weeks following Rugurika’s broadcasts,** as fears of a return to civil war grew, more than 10,000 Burundians fled to neighbouring countries, a number that has since risen to around 296,000. There were reports of opponents being intimidated or dragged out of their houses and beaten. On 25 April 2015, the ruling party announced that its candidate would be the incumbent Pierre Nkurunziza. For the third time in 12 months, thousands of men and women took to the streets, but this time they faced loaded guns. On the first day of the street protests, police shot dead at least one civilian. Armed men forced entry into RPA’s green-painted building and shut down all broadcasts. Observers watched with mixed feelings as Tutsis and Hutus united on the streets in defence of their hard-won peace agreement. A week later, Rugurika fled into exile, in fear of his life. When President Nkurunziza flew to Tanzania to discuss the crisis with regional leaders on 13 May, news of a coup led by a senior general was broadcast on independent radio stations. The president was unable to return to the country after rebel soldiers took control of Bujumbura airport. General Adolphe, however, remained in Bujumbura, standing by to launch a counter-attack against the coup plotters. State security forces attacked media buildings, throwing grenades at the headquarters of the Renaissance TV station. Flames licked the facade of the RPA building. Journalists who had stored sensitive evidence at the radio station, believing it to be the safest place, watched helplessly as years of their work went up in smoke. After dark, there was fighting in the streets, and bomb blasts shook the buildings. The next day, dead soldiers lay in the streets. No one knew who was in charge. Later that day, news trickled out that the coup had failed. The government branded the coup’s plotters as terrorists, and launched a violent crackdown. As journalists, politicians, activists, indeed anyone who might be perceived as anti-Nkurunziza, fled, Mbonimpa dug in. “I have no fear,” he told me in his office on 15 June 2015. In the previous six weeks, 94,000 people had fled. Ragged-looking men and women waited outside his office, either to report a crime or beg for news of a lost loved one. Mbonimpa had documented 77 dead (a figure that would later rise to more than 500) and 300 injured, but hundreds more – mainly young men – had disappeared. Soon after RPA had broadcast the testimony of Juvent, the alleged second killer, at the end of March, Juvent was arrested. As police drove Juvent away, Mbonimpa followed in his car, in a dogged attempt to show they could not kill him, nor force him into changing his story about the nuns’ deaths. **President Nkurunziza surprised no one** by winning the presidential election on 21 July. But his victory was far from peaceful. That night, blasts and gunfire resounded through the capital. Less than two weeks later, early on a Sunday morning, General Adolphe was driving through Kamenge, accompanied by his bodyguards, when four men in fatigues ambushed their armoured black SUV. 
The gunmen launched two rockets at the vehicle, fired automatic weapons and lobbed in a grenade to ensure there were no survivors. Neither the rebels nor the ruling party claimed responsibility for killing Adolphe – although it was rumoured that his position as the head of the party’s militia made him a threat to its political leaders. One former ruling party politician in exile in Brussels said, “Adolphe had raised an army, and they became more important than the police, more important than the army itself.” At the time, Mbonimpa was in the capital, Bujumbura, assisting African Union (AU) observers in their investigation of the illegal distribution of arms in Burundi. He remained at work, although he knew that with Adolphe’s assassination, he had now lost his protector. The day after Adolphe’s death, at 5.30 in the afternoon, Mbonimpa said farewell to the AU delegation and got into his car. The driver took the road north through the city towards his home. At 6pm, two men on a motorbike pulled up alongside Mbonimpa’s car, and the passenger fired four times at his head. He lost consciousness for a few minutes. When he came to, he gave clear instructions to his driver, who was unhurt: “Take me home. I want to die with my family around me.” When they got to his house, his wife took one look at him and told the driver: “Take him to hospital.” They raced to the Polyclinique Centrale in Bujumbura. Doctors stabilised Mbonimpa, and after a week he was flown to Brussels and admitted to University Hospital. For four months, his head was clamped in a metal cage, and he was fed by an intravenous drip. But the attackers were not done with Mbonimpa. His daughter was forced to leave Burundi after getting death threats. In late September, her husband was murdered by men on motorbikes as he arrived at the gate of his home in Bujumbura. On 6 November 2015, police in Bujumbura were searching the centre of the city looking for dissidents, when they tracked down Mbonimpa’s 28-year-old son, Welly Fleury Nzitonda. Seeing the police, he tried to run but they caught him. Hours later, neighbours found his body. He had been shot in the head and heart. Mbonimpa, unable to attend his young son’s funeral, sent a note from his hospital bed for a colleague to read out after the burial. “Do not lose courage,” the note said. “The tragedies we face will end with a resolution of the conflict … I maintain hope that it will come soon.” Sitting in a fifth-floor temporary office on a grey day in Brussels on 10 May this year, recalling how he missed his son’s funeral, Mbonimpa wept. On 9 December 2015, after four months in a head brace, Mbonimpa was sent for a scan by his doctor. Afterwards, he took a seat in the doctor’s office. The doctor looked at Mbonimpa for minutes without saying a word, then called in his colleagues. Six of them analysed Mbonimpa’s latest scan, and were amazed at his recovery. After they removed the brace, the doctor asked: “Pierre Claver, what angel is it that walks with you? Even if we don’t believe in God, we have to believe in something because you, sir, have an angel that guides you every day.” Hours later, Mbonimpa walked out of the hospital with his wife, weighing just 56kg, but determined as ever. In March 2016, the government alleged that a member of the opposition had ordered the nuns’ assassinations to tarnish the Burundi government’s reputation. Young rebels who trained in Rwanda carried out the killing, a government spokesperson said. “There was no Imbonerakure trained in South Kivu. 
So there was no need to cover up,” the spokesperson said, adding that Adolphe, a devout Christian, could not have been behind the assassination of members of a religious order. The spokesman blamed the crisis on the meddling of Rwanda in Burundi’s affairs, and on the international media, for fanning the flames. Earlier this year, the international criminal court in the Hague announced that it would conduct a preliminary investigation into the violence that accompanied the re-election of President Nkurunziza in 2015. But the government of Burundi responded in October by announcing that it would simply withdraw itself from the court’s jurisdiction – “so we can really be free,” in the words of Gaston Sindimwo, one of Nkurunziza’s two vice-presidents. Other members of the AU have also withdrawn, claiming the ICC unfairly targets African countries. “The day I feel well, I will return to my country,” Mbonimpa told me in March this year. Last month, the government revoked the licence of Mbonimpa’s human rights organisation, claiming it was responsible for destabilising the state. For now, he remains in exile. *Main picture: Phil Hatcher-Moore*
true
true
true
The Long Read: How the killing of three elderly nuns set the country’s leading human rights activist on a collision course with its most powerful general
2024-10-12 00:00:00
2016-11-15 00:00:00
https://i.guim.co.uk/img…8eb7a380547c451a
article
theguardian.com
The Guardian
null
null
18,723,091
https://www.phoronix.com/scan.php?page=news_item&px=Debian-AH-Archive-Removal
Debian's Anti-Harassment Team Is Removing A Package Over Its Name
Michael Larabel
# Debian's Anti-Harassment Team Is Removing A Package Over Its Name The latest notes from the Debian anti-harassment team on Wednesday caught my attention when reading, "*We were requested to advice on the appropriateness of a certain package in the Debian archive. Our decision resulted in the package pending removal from the archive.*" Curiosity got the best of me... What package was deemed too inappropriate for the Debian archive? When digging further, the package raised to the Debian Anti-Harassment Team was "Weboob." Weboob is short for "Web Outside of Browsers" as it's an open-source collection of software to script and automate the parsing/scraping/gathering-via-API of web data so that it can be consumed by different modules/applications. Weboob.org describes itself as "*Weboob is a collection of applications able to interact with websites, without requiring the user to open them in a browser. It also provides well-defined APIs to talk to websites lacking one.*" Weboob is Python-based and offers Qt-based user interfaces for accessing these different modules for reading data from different web-sites outside of any conventional web browser. Those interested can learn more about the software at Weboob.org. But, yes, the name is juvenile and likely inappropriate in most professional/corporate environments. Also raised were issues with the icons/artwork. Additionally, Weboob has some module/application names with "boob" in the string as well. Silly and juvenile, but is this package, which isn't installed by default on Debian or otherwise featured by the Linux distribution, worth removing from the package repository over the naming convention? Weboob was initially added to Debian back in 2010 and has been maintained since, although briefly removed. A few months back, though, the issue was raised over the name/project having sexual references, which goes against the Debian Diversity Statement and values. During the discussions over the project's name, the following statement was added by the package maintainer: "*Note from the Maintainer: This software, included binaries and maybe other content contain childish references to a specific women's body part. Upstream refused to rename it. There is no diminishing or insulting message so I decided to keep it in the archive. You may nevertheless feel uncomfortable using this tool.*" The Debian Anti-Harassment Team ruled that Weboob is against the Debian Code of Conduct in needing to be respectful. The team called for the program's removal from the Debian archive or to otherwise patch/fork it to remove the name/branding. Should the package not be addressed, they say the Debian FTP master should unilaterally remove the package. Debian Project Leader Chris Lamb has indeed gone ahead with the request to the Debian FTP master to remove Weboob.
true
true
true
The latest notes from the Debian anti-harassment team on Wednesday caught my attention when reading, 'We were requested to advice on the appropriateness of a certain package in the Debian archive
2024-10-12 00:00:00
2018-12-01 00:00:00
null
null
null
Phoronix
null
null
9,038,709
http://www.w3.org/blog/news/archives/4365
WebRTC 1.0: Real-time Communication Between Browsers Draft Published
null
# WebRTC 1.0: Real-time Communication Between Browsers Draft Published The Web Real-Time Communications Working Group has published a Working Draft of WebRTC 1.0: Real-time Communication Between Browsers. This document defines a set of ECMAScript APIs in WebIDL to allow media to be sent to and received from another browser or device implementing the appropriate set of real-time protocols. This specification is being developed in conjunction with a protocol specification developed by the IETF RTCWEB group and an API specification to get access to local media devices developed by the Media Capture Task Force. Learn more about the Ubiquitous Web Applications Activity.
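As a rough, illustrative sketch of the kind of ECMAScript API the draft defines (not an excerpt from the specification itself), the snippet below creates a peer connection in a browser and generates an SDP offer. The STUN server URL and the sendToRemotePeer helper are hypothetical placeholders standing in for an application's own configuration and signalling channel, which the W3C document deliberately leaves to the application and the IETF protocols.

```typescript
// Minimal browser-side WebRTC sketch. The STUN URL and sendToRemotePeer()
// are illustrative placeholders, not defined by the specification text above.

async function startCall(localStream: MediaStream): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.example.org" }], // placeholder server
  });

  // Attach local media tracks (obtained via getUserMedia from the
  // companion Media Capture specification mentioned in the post).
  for (const track of localStream.getTracks()) {
    pc.addTrack(track, localStream);
  }

  // ICE candidates are exchanged over an application-defined signalling
  // channel; the IETF RTCWEB protocols cover the wire formats.
  pc.onicecandidate = (event) => {
    if (event.candidate) {
      sendToRemotePeer({ candidate: event.candidate });
    }
  };

  // Create and apply an SDP offer describing the media we want to send.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToRemotePeer({ sdp: pc.localDescription });

  return pc;
}

// Placeholder signalling helper: in practice this would be a WebSocket or
// HTTP call to the application's own server.
function sendToRemotePeer(message: unknown): void {
  console.log("would send to remote peer:", message);
}
```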
true
true
true
The World Wide Web Consortium (W3C) is an international community where Member organizations, a full-time staff, and the public work together to develop Web standards.
2024-10-12 00:00:00
2015-02-09 00:00:00
https://www.w3.org/asset…ngraph-image.png
website
w3.org
W3C
null
null
19,656,775
https://venturebeat.com/2019/04/13/openai-five-defeats-a-team-of-professional-dota-2-players/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
29,306,456
https://www.reuters.com/investigates/special-report/finance-crypto-sundaresan/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
28,550,447
https://www.theguardian.com/us-news/2021/sep/15/california-recall-election-national-us-politics
‘Study Newsom’s playbook’: what Democrats – and Republicans – can learn from California’s recall
Maanvi Singh
It was anger over Gavin Newsom’s pandemic restrictions that ultimately put a recall vote on the ballot. But the California governor doubled down, placing his coronavirus policies at the heart of his campaign and casting his leading opponent – the anti-mask, anti-vaccine rightwing radio host Larry Elder – as a dangerous proxy for Trump. That winning strategy could have national implications for both Democrats and Republicans already looking ahead to the 2022 midterms. “Democrats running in other parts of the country next year would do well to study Newsom’s playbook very carefully,” said Dan Schnur, a politics lecturer at several universities. “Newsom was able to take the Covid issue, which might have been a fatal weakness for him, and was able to turn it into a considerable strength.” The Republican-led recall’s anti-mask, anti-vaccine stance was undercut by the rise of the Delta variant and a surge of infections that overwhelmed hospitals in California and around the US, said James Lance Taylor, a political scientist at the University of San Francisco. “At least in some states, particularly blue states and some purple states, Newsom’s strategy has offered a model for Democratic candidates,” Taylor added. That Newsom triumphed over the recall by such a large margin also placed him in an ideal position to run for national office in the years to come, Taylor said. The state saw a huge Covid surge last winter, and Newsom has had to live down major missteps including an initially slow vaccine rollout – but overall, the governor could make a national case that his pandemic leadership saved lives. The recall has also exposed the potential limits of Trumpian politics in a post-Trump era, says Mindy Romero, founder of the Center for Inclusive Democracy, a non-partisan research organization. A more moderate candidate might have appealed to Democrats willing to try something new, a strategy that helped the Republican Arnold Schwarzenegger prevail over the Democrat Gray Davis in California’s last recall election, in 2003. “A lot of people voted against the recall because they were fearful of a Larry Elder becoming governor,” she said. “It doesn’t mean they were happy with Newsom.” Indeed, several voters the Guardian spoke with ahead of the election affirmed fears that California, under Elder’s leadership, could go the way of Florida and Texas. “I’m with a lot of people who might like to recall Gavin, but aren’t necessarily in favor of having Larry Elder in there,” said John Friedrich, a retiree living in Stockton, California, about an hour south of the capital, Sacramento. Still, that might not weaken the Republican party’s ties to Trumpism. Elder, who didn’t win the governor’s seat, nonetheless captured the greatest proportion of votes amongst Newsom’s challengers, indicating that while he lacked broad appeal, he did energize the state’s vocal, rightwing minority. Elder, who hinted at a 2022 run in his concession speech on Tuesday, has recycled the former president’s “big lie” conspiracy theory that elections lost by Republicans were rigged against them. “What we’ve learned from the recall is that Republicans aren’t ready for a post-Trump era. They are doubling down on Trump,” said Schnur, who has advised conservative candidates. 
“If they want to retake congressional majorities next year, that has potential to be a really big problem.” Still, the peculiarities of California’s recall process, and the state’s unique political structure, do confound attempts to see it as a broad barometer for national politics. Conservatives who opposed Newsom – a broadly popular governor who won office in 2018 by a historic margin – were able to trigger a recall election by gathering just 1.7m signatures in a state with 22 million registered voters. Democrats outnumber Republicans nearly two to one here, meaning any Democratic candidate already has a significant mathematical advantage, regardless of their strategy. But that the race even appeared close weeks before election day might be a lesson for Democrats in California, and nationally, that they will have to work hard to rally apathetic voters – especially minority voters who have long felt forsaken by their elected leaders. When polls in August found that distracted and disengaged Democratic voters – especially Latino voters, who make up about 32% of eligible voters – could cost the governor his seat, Newsom’s campaign scrambled. “There was a mad dash to the end to speak to as many Latino voters as possible,” said Christian Arana, a vice-president of the Latino Community Foundation. “But what this election really showed was that outreach to Latino voters needs to happen early, and often.” Votes are still being tallied in California and neither the final count nor demographic breakdowns are available yet. But according to calculations from Political Data Inc, only about 30% of ballots mailed to Latino voters were returned early, while ballots mailed to white voters had a 50% return rate. Fewer people tend to vote in special elections than in presidential elections or midterms, but in all cases, “turnout in elections is not representative of the population”, said Romero. “Voters of color have helped make California such a solidly blue state and they were clearly key to Newsom’s victory,” she added. “Now I think Democrats can turn this into an opportunity to get to know the voters better and build a better relationship with voters of color.”
true
true
true
Governor’s strategy could help Democrats in some states but also offers them a warning
2024-10-12 00:00:00
2021-09-16 00:00:00
https://i.guim.co.uk/img…eb0481f40cbeb2b6
article
theguardian.com
The Guardian
null
null
37,726,115
https://variety.com/2023/digital/news/letterboxd-acquired-50-million-deal-valuation-1235740185/
Letterboxd Acquired by Canadian Firm in Deal Valuing It at More Than $50 Million
Todd Spangler
Letterboxd has been acquired by Tiny, a Canadian holding company, in a deal that values the popular social site for film fanatics at over $50 million. Letterboxd was founded in 2011 by two entrepreneurs in New Zealand, Matthew Buchanan and Karl von Randow. It recently topped 10 million registered accounts, after seeing a particularly sizable surge during the COVID pandemic, and has attracted celebrity users including Margot Robbie, Olivia Rodrigo, Ava DuVernay and Christopher McQuarrie. Tiny, based in Victoria, British Columbia, now owns a 60% majority stake in Letterboxd, giving it a valuation of between $50 million and $60 million, a source familiar with the deal told *Variety*. Buchanan and von Randow will retain minority positions in the business and continue to lead the company. Letterboxd, known as the “Goodreads for movies,” plans to capitalize on the new ownership of Tiny to further establish the platform as the leading social network for film buffs worldwide. “Teaming up with Tiny represents a big leap forward for us,” Buchanan and von Randow said in a statement. “We see this as a huge win for our community, enabling us to cement Letterboxd’s future with additional resources without sacrificing the DNA of what makes it special.” Tiny co-founder Andrew Wilkinson commented, “We’ve been huge fans and users of Letterboxd for a long time and could not be more excited to join forces with Matt, Karl and the rest of the team for the long-term. Our aim is to make Letterboxd the ultimate destination for anyone looking to discover or discuss movies online… and we believe the untapped market potential for superior discovery and discussion is a huge opportunity.” Letterboxd’s media arm includes the online magazine Journal; “The Letterboxd Show” podcast; and multiple other routes for “films and their talent to be showcased.”
true
true
true
Letterboxd was acquired by Canadian holding company Tiny in a deal that values the popular social site for film fanatics at over $50 million.
2024-10-12 00:00:00
2023-09-29 00:00:00
https://variety.com/wp-c…960&h=540&crop=1
article
variety.com
Variety
null
null
13,107,535
https://medium.com/@pimterry/testing-your-shell-scripts-with-bats-abfca9bdc5b9
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,504,364
http://www.windowscentral.com/surface-division-track-be-microsofts-next-1-billion-business
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,630,747
https://www.youtube.com/watch?v=uolTUtioIrc
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,662,690
https://medium.com/datasociety-points/the-fragmentation-of-truth-3c766ebb74cf
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
25,372,401
https://www.mojeek.com
Mojeek
null
Looking for different results? Value your right to privacy? Trying to escape big tech? Mojeek is a growing independent search engine which does not track you.
true
true
true
Mojeek is a web search engine that provides unbiased, fast, and relevant search results combined with a no tracking privacy policy.
2024-10-12 00:00:00
null
null
null
mojeek.com
mojeek.com
null
null
38,249,652
https://console.dev
Console Newsletter - The best tools for developers
null
# Console ## Discover the best tools for developers A free weekly email digest of the best tools for developers. 30k+ subscribers Every Thursday. See latest email ### Devtools Podcast #### Getting technical about devtools. Security, infrastructure, encryption, privacy... all the technical details around devtools.
true
true
true
A free weekly email digest of the best tools for developers.
2024-10-12 00:00:00
2023-07-06 00:00:00
https://console.dev/img/console-mark.png
website
console.dev
Console
null
null
12,760,149
http://siliconangle.com/blog/2016/10/21/how-men-and-women-experience-virtual-reality-differently/
How men and women experience virtual reality differently - SiliconANGLE
Pooja Sivaraman
### How men and women experience virtual reality differently Gemma Busoni is a recent high school graduate and co-founder of Discovr, a virtual reality startup based in Los Angeles. While still in school, she started a nonprofit for inner-city students in LA to get resources for coding, hacking and other STEM skills. Earlier this year, she stood alongside Michelle Obama on the cover of Seventeen* *magazine. “That was fun,” she said casually. Exuding confidence as she spoke among several powerful women at a panel called “Entrepreneurs and Intrepreneurs” at the Grace Hopper Celebration of Women in Computing in Houston this week, Busoni talked about how she raised $100,000 for her company and shared tips for other budding female entrepreneurs searching for funding. The talk featured filmmakers, software engineers and coders who had either started their own company or innovated new programs within their company (intrapreneurs). Gemma Busoni is the former, but she thinks of herself primarily as an educator. I sat down with Busoni to learn a little more about her opinions on the VR industry. Immersed in the Grace Hopper atmosphere, our conversations veered towards the implications of virtual reality for women, the potential for VR to be more inclusive, and her experiences as a female gamer and developer in a male-dominated industry. * (Above: GroundTruth reporting fellows Tori Bedford, left, and Karis Hustad try out an International Space Station training module at the Virtual Reality Laboratory at the Johnson Space Center in Houston.)* *You’ve started your career pretty early. What’s your proudest accomplishment so far?* We built all of ancient Rome in VR. That’s how we got funding in the first place. It was sick. *What do you think is the scope for VR***?** Ideally it would just remove barriers to everything. If you’ve ever seen “Star Trek,” there’s the holodeck–that’s what I think of as the ultimate VR experience. *What’s the most inspiring VR experience out there that you wish you had thought of first?* Variable Labs created a 360 experience teaching women to negotiate their salaries. I’ve heard brilliant reviews. *That’s very appropriate to what we’re talking at Grace Hopper, and a great segue. Based on your experience in the field, are there cases where VR products have forgotten about their female consumer base?* Potentially, but I don’t think it’s on purpose. For example, the way that women and men perceive immersion is very different. I distinctly remember seeing an article that said “VR is a sexist industry because women get more motion-sick in VR.” I don’t know a single developer who’s sitting behind his computer like “Ha ha ha, I want this woman to feel sick.” But, as it turns out, “motion parallax” is what makes a male feel immersed and for females its “shape from shading.” If a shadow is off it will ruin the experience for a woman. As opposed to a male, where “motion parallax” is just how things move relative to you. There are fundamental differences in humans that are making these things happen. I don’t think much of it is on purpose. Oculus just wrote out their new avatar system, and as opposed to other video games where you have to choose “male” or “female,” there’s none of that. There’s just face shape. That’s a really big deal and should be an example that’s followed. *Agreed. Do you think more women entering the industry can help with some of these issues?* VR is very unique in that I’ve never seen so many women come together and be accepted. 
I distinctly remember going from being a game developer — and being the only girl — to this VR group where there were tons of women. The more women, the faster this will get solved, but things have been changing pretty quickly. *So, you shifted from gaming to VR. What are you looking to create in VR specifically?* In education, I’m personally passionate about making experiences for disabled students. VR is a great way to teach someone with ADHD because they can *literally* look nowhere else. VR is great for focusing attention. Attention rates go up for all students. It’s also great for people trying to overcome PTSD, people who have muscle tremors, or people who need a little bit of help with empathy and need to see through the eyes of somebody else. There are so many things you can do with VR, it’s endless. *In your opinion, are VR companies on the right track?* Yeah, I would say they’re on the general right track–some could use some nudging now and then. But compared to how other industries have moved I’m very impressed by how progressive the VR industry has been in the last two years. *That optimism is great to hear. And what about gaming, do you think the gaming industry has the potential to be a driver of social change?* Of course, just like anything has the potential. Different mediums reach different crowds. I remember a game where your character was a house mom and you had to somehow do all the chores at once. I remember some boys commenting on it: “How the fuck do moms do this?” *Were the male players annoyed about that?* Not annoyed, they saw it as a challenge. It animated in a way that it didn’t seem girly, but it still presented the issue. *That anecdote reminds me of #GamerGate, where many people were outraged about non-stereotypical game designs. As a gamer and game developer, do you think #GamerGate made it easier to have conversations about harassment?* I remember when #GamerGate came around, a bunch of women in VR were like, “Well, harassment can happen in a virtual space too.” Even if it’s two avatars harassing each other, it can still be just as traumatizing, and even more so in VR. You could get PTSD in VR depending on how immersive the experience is. *Have you had any experiences like that?* I played a lot of “League of Legends” and I distinctly remember this 12-year-old DDoS-ing me because I wouldn’t be his girlfriend. I didn’t know him at all–we just played league games together. Luckily one of my best friends works at “Riot Games”–and wrecked him. He got him banned and everything. It was amazing. *That’s awesome. I’ve never heard a story about harassment end like that. Do you have any advice for other female gamers and game developers who face harassment?* There are really amazing communities. For every person throwing out negativity, there’s somebody there who wants to help. There’s a really awesome Facebook group called “Women in VR” and I would say it is the most popular VR group in general, but it’s called “*Women* in VR.” There are so many men in there, and lots of women. If there’s an event and people see Go-Go Girls, they’ll call out the company. There are people who really push for equality and it’s fantastic. *This story was written as part of a **Women in Tech fellowship** sponsored by the **GroundTruth Project** and SiliconANGLE Media’s **theCUBE**. 
Other stories reported from the Anita Borg Institute’s Grace Hopper Celebration of Women in Computing conference in Houston can be found at SiliconANGLE, the **TechTruth Women in Tech site** and the **GroundTruth Project**.*
true
true
true
How men and women experience virtual reality differently - SiliconANGLE
2024-10-12 00:00:00
2016-10-21 00:00:00
https://d15shllkswkct0.c…rge-1920x722.jpg
article
siliconangle.com
SiliconANGLE
null
null
2,309,849
http://www.cmo.com/social-media/why-twitter-foursquare-are-dying
CMO by Adobe |
null
CMO by Adobe Topics CMO by Adobe Filters Industries
true
true
true
Insights, expertise and inspiration for and by digital leaders
2024-10-12 00:00:00
null
https://blog.adobe.com/e…&optimize=medium
null
adobe.com
blog.adobe.com
null
null
12,386,596
http://www.bbc.com/news/technology-37114313
US ready to 'hand over' the internet's naming system
Dave Lee
# US ready to 'hand over' the internet's naming system **The US has confirmed it is finally ready to cede power of the internet’s naming system, ending the almost 20-year process to hand over a crucial part of the internet's governance.** The Domain Name System, DNS, is one of the internet’s most important components. It pairs the easy-to-remember web addresses - like bbc.com - with their relevant servers. Without DNS, you’d only be able to access websites by typing in their IP address, a series of numbers such as "194.66.82.10". More by circumstance than intention, the US has always had ultimate say over how the DNS is controlled - but not for much longer. It will give up its power fully to Icann - the Internet Corporation for Assigned Names and Numbers - a non-profit organisation. The terms of the change were agreed upon in 2014, but it wasn’t until now that the US said it was finally satisfied that Icann was ready to make the change. Icann will get the “keys to the kingdom”, as one expert put it, on 1 October 2016. From that date, the US will lose its dominant voice - although Icann will remain in Los Angeles. **If anyone can, Icann?** Users of the web will not notice any difference - that’s because Icann has essentially been doing the job for years anyway. But it’s a move that has been fiercely criticised by some US politicians as opening the door to the likes of China and Russia to meddle with a system that has always been “protected” by the US. "The proposal will significantly increase the power of foreign governments over the Internet,” warned a letter signed by several Republican senators, including former Presidential hopeful, Ted Cruz. Whether you think those fears are justified depends on your confidence in the ability of Icann to do its job. It was created in 1998 to take over the task of assigning web addresses. Until that point, that job was handled by one man - Jon Postel. He was known to many as the “god of the internet”, a nod to his power over the internet, as well as his research work in creating some of the systems that underpin networking. Mr Postel, who died not long after Icann was created, was in charge of the Internet Assigned Numbers Authority (IANA). Administration of the IANA was contracted to the newly-formed Icann, but the US's National Telecommunications and Information Administration (NTIA), part of the Department of Commerce, kept its final say over what it was able to do. It’s that final detail that is set to change from October. No longer will the US government - through the NTIA - be able to intervene on matters around internet naming. It rarely intervened. Most famously, it stepped in when Icann wanted to launch a new top-level domain for pornography, “.xxx”. The government wanted Icann to ditch the idea, but it eventually went ahead anyway. From October, the “new” Icann will become an organisation that answers to multiple stakeholders who want a say over the internet. Those stakeholders include countries, businesses and groups offering technical expertise. **Best option** “It's a big change,” remarked Prof Alan Woodward from the University of Surrey. "It marks a transition from an internet effectively governed by one nation to a multi-stakeholder governed internet: a properly global solution for what has become a global asset." Technically, the US is doing this voluntarily - if it wanted to keep power of DNS, it could. 
But the country has long acknowledged that relinquishing its control was a vital act of international diplomacy. Other countries, particularly China and Russia, had put pressure on the UN to call for the DNS to be controlled by the United Nations’ International Telecommunication Union. A treaty to do just that was on the table in 2012 - but the US, along with the UK, Canada and Australia, refused, citing concerns over human rights abuses that may arise if other countries had greater say and control over the internet and its technical foundations. Instead, the US has used its remaining power over DNS to shift control to Icann, not the UN. In response to worries about abuse of the internet by foreign governments, the NTIA said it had consulted corporate governance experts who said the prospect of government interference was “extremely remote”. "The community’s new powers to challenge board decisions and enforce decisions in court protect against any one party or group of interests from inappropriately influencing Icann,” it said in a Q&A section on its website. As for how it will change what happens on the internet, the effects will most likely be minimal for the average user. "This has nothing to do with laws on the internet,” Prof Woodward said. "Those still are the national laws that apply where it touches those countries. "This is more about who officially controls the foundations of the Internet/web addresses and domain names, without which the network wouldn't function."
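As a minimal illustration of the name-to-address pairing the article describes at the outset, the sketch below resolves the article's own example hostname to its IP addresses. It is an illustrative Node.js snippet, not something drawn from the BBC piece or from Icann; the exact addresses printed will vary by network and over time.

```typescript
// Minimal sketch of the lookup DNS performs: hostname -> IP addresses.
// Requires Node.js; the returned addresses depend on where and when you run it.
import { promises as dns } from "node:dns";

async function main() {
  // Resolve the article's example hostname to its IPv4 addresses.
  const addresses = await dns.resolve4("bbc.com");
  console.log(addresses); // prints an array of IPv4 strings; values vary

  // Without DNS, a user would have to type one of these raw addresses
  // (the article's example form is "194.66.82.10") instead of the name.
}

main().catch(console.error);
```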
true
true
true
After almost two decades of deliberation, the US government confirms it is on track give up its control of the internet's naming system by October.
2024-10-12 00:00:00
2016-08-18 00:00:00
https://ichef.bbci.co.uk…_whatsubject.jpg
article
bbc.com
BBC News
null
null
7,590,653
http://www.bbc.com/news/magazine-26937454
Could offices change from sitting to standing?
null
# Could offices change from sitting to standing? **A number of studies have suggested that constantly sitting at work is bad for you. So could workplaces be rejigged around standing up, asks would-be stander Chris Bowlby.** Medical research has been building up for a while now, suggesting constant sitting is harming our health - potentially causing cardiovascular problems or vulnerability to diabetes. We can't simply fix it by heading for the gym. This has big implications not just for homes - usually blamed for "couch potato" lifestyles - but for sedentary workplaces too, especially the modern office. But when it comes to the average office, reducing sitting is a huge challenge. It means rethinking architecture, spending a lot of money, changing the office routine. Adjustable sit-stand desks can cost many hundreds of pounds. The current common arrangement of rigid rows of desks, beloved of businesses wanting to cut down on renting floor space, does not suit employees who want more physical choice in how they work. Advocates say more standing would benefit not only health, but also workers' energy and creativity. And some big organisations and companies are beginning to look seriously at change. US firm General Electric's British plant in Groby, Leicestershire, is considering giving staff a choice. "It's becoming more well known that long periods of sedentary behaviour has an adverse effect on health," says GE engineer Jonathan McGregor, "so we're looking at bringing in standing desks." But the cost must be calculated. Senior management at the site are asking for data on illnesses and time off before making a final decision. Prices vary according to design but they cost more than conventional desks. UK firm Elite Office Furniture manufactures sit/stand desks in the UK and charges £500 per desk for orders of 50 or more. One of its major clients is Google, which has fitted a large number in its London office, although it will not divulge just how many the search giant has bought. Another UK firm, National Office Furniture Supplies, charges a similar amount and would bill £15-£50 to remove each old desk. It tends to sell only two or three desks to clients who have employees with specific needs. Yet another firm, Back Care Solutions, charges just under £400 but this compares with a standard desk which costs £172. For anyone wanting, say, 1,000 desks changed, it's easy to see how cost would be an obstacle. And there's an issue. People have to choose to stand. Forcing offices to stand up might harm morale. Ergonomics expert Alan Hedge is sceptical about how far workers can change. Some will simply want to stay sitting, he points out. And those with adjustable desks don't mix well with the sitters. But he thinks employees should still be encouraged to move around much more. "We need to think of sitting like driving," he says. "Take a break regularly." Small adjustments - abolishing the tea trolley, for instance - can encourage people to move around more. The whole concept of sitting as the norm in workplaces is a recent innovation, points out Jeremy Myerson, professor of design at the Royal College of Art. "If you look at the late 19th Century," he says, Victorian clerks could stand at their desks and "moved around a lot more". "It's possible to look back at the industrial office of the past 100 years or so as some kind of weird aberration in a 1,000-year continuum of work where we've always moved around." 
What changed things in the 20th Century was "Taylorism" - time and motion studies applied to office work. "It's much easier to supervise and control people when they're sitting down," says Myerson. In the US and UK, "there's a tendency to treat workplace design as a cost, not an investment", he suggests. "Denmark has just made it mandatory for employers to offer their staff sit-stand desks." And while offering an option to stand seems a good idea, forcing everybody to give up their swivel chair would have consequences. "A lot of people felt having their own desk and chair was a symbol of job security and status," says Myerson. What might finally change things is if the evidence becomes overwhelming, the health costs rise, and stopping employees from sitting too much becomes part of an employer's legal duty of care. Fred Turok founded the LA Fitness chain of gyms and is now chair of the physical activity network for the Department of Health. "The best way to get the biggest returns," he says, "is to get those people who currently do no exercise to do some exercise. Even 10 minutes a day having elevated your heartbeat will see the biggest financial returns to the economy as well as the emotional and social returns for the individual." But that message, he adds, "has not yet got through to the people who are designing our space". So what happened when I started to cut back on sitting? I found myself standing, but at the far side of my office, at a higher desk not meant for regular work. I had only been able to find one desk fixed at around my height, usually used for specialised technical jobs. The computer connection was bad, and there was no phone. Getting this changed, I was told, would be costly. Design gurus talk a lot about mobile technology liberating workers. But for many, the need for computer and landline is still more like a leash. "If what we are creating are environments where people are not going to be terribly healthy and are suffering from diseases like cardiovascular disease and diabetes," says Prof Alexi Marmot, a specialist on workplace design, "it's highly unlikely the organisation benefits in any way." How did I feel after days of more standing? After some initial aches and pains standing for prolonged periods, I began to get used to it. Sitting back in a chair felt more cramped than before. But when standing, I was quite cut off from my colleagues, most of whom wondered what on earth I was doing. *Follow *@BBCNewsMagazine, external* on Twitter and on *Facebook, external
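The desk-pricing figures quoted in the piece lend themselves to a quick back-of-the-envelope calculation. Below is a minimal Python sketch assuming an office of 1,000 desks, the roughly £500 sit-stand price and £15-£50 removal fee mentioned above, and a £172 conventional desk as the baseline; all inputs are illustrative, not quotes from any supplier.

```python
# Back-of-the-envelope cost of refitting an office with sit-stand desks,
# using the illustrative prices quoted in the article above.

DESKS = 1_000
SIT_STAND_PRICE = 500   # per desk for bulk orders (article figure)
STANDARD_PRICE = 172    # conventional desk baseline (article figure)
REMOVAL_FEE = 30        # assumed midpoint of the quoted £15-£50 range

outlay = DESKS * (SIT_STAND_PRICE + REMOVAL_FEE)
premium = DESKS * (SIT_STAND_PRICE - STANDARD_PRICE)

print(f"Total outlay for {DESKS} desks: £{outlay:,}")
print(f"Premium over standard desks:   £{premium:,}")
# With these assumptions: £530,000 outlay, a £328,000 premium over standard desks.
```

Even under these rough assumptions, the capital cost for a large office runs to hundreds of thousands of pounds, which is exactly the obstacle the article describes.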
true
true
true
A number of studies have suggested that constantly sitting at work is bad for you. So could workplaces be rejigged around standing up, asks would-be stander Chris Bowlby.
2024-10-12 00:00:00
2014-04-14 00:00:00
https://ichef.bbci.co.uk…246315_jodie.jpg
article
bbc.com
BBC News
null
null
20,381,185
https://mozilla-research.forms.fm/mozilla-research-grants-2019h1/forms/6510
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
36,939,213
https://scrapeops.io/web-scraping-playbook/residential-mobile-proxies-economics/
The Crazy Economics of Residential & Mobile Proxies | ScrapeOps
null
# The Crazy Economics of Residential & Mobile Proxies Developers and companies can make a lot of money in the web scraping market, be it providing web scraping services, tooling, or building products on top of web-scraped data. However, there is one area of the web scraping industry that has the craziest economics...**residential & mobile proxies!** Most web scraping proxy providers have good 75-90% gross margins; however, proxy providers who own their own residential & mobile IP networks are on a completely different level. You can get residential & mobile proxies in lots of different ways. But it is when they are acquired using **embedded SDKs in Apps & Chrome Extensions** that they really make the **big money**! Let's explain... ## Embedded App & Chrome Extension SDKs Although there are different ways to build a residential & mobile proxy network, such as building proxy farms or peer-to-peer networks, the most efficient way to build a large and diverse residential & mobile proxy network is by piggybacking on existing **Android**, **Windows** or **macOS apps** and **Chrome Extensions**. Here, instead of an App or Chrome extension developer having to monetise their free product with ads, they can integrate the SDK of one of the large proxy providers into their product and get paid for every monthly active user they have. Two of the most popular include the Bright Data SDK and Infatica SDK. Once embedded, these SDKs allow end users to opt in to having ad-free access to the App or Chrome Extension in exchange for the proxy provider being able to send traffic through their device for the purposes of web scraping, ad verification or website testing. This can be good for developers and end users because: **Opt-In:** End users can explicitly opt in to having an ad-free experience in exchange for their bandwidth being used. **Better UX:** End users have a better user experience if they aren't being interrupted by ads. **Predictable Revenue:** The developers of these Apps and Chrome extensions get predictable revenue of $1,000 - $50,000 per month. However, it is the proxy providers that really **win big**. ## $$$ Insanely Profitable The big winners of this approach to getting residential & mobile proxies are the proxy providers, because this business model is **insanely profitable**! On average, proxy providers will pay the owners of these Apps & Chrome extensions **$0.05 for every MAU** (monthly active user) the App has. However, these proxy providers charge their customers between **$2 and $15 per GB** of traffic sent through their **residential proxy networks**, and as much as **$10 to $40 per GB** for traffic sent through their **mobile proxy networks**. Given that the proxy provider can easily send **10-20 GB of traffic** each month through each App or Chrome Extension their SDK has been installed in, the proxy provider can easily be making **$100-500 for every MAU** while only paying $0.05 to send the traffic. In essence, they acquire their proxy network at a fixed price of $0.05 per MAU, but they charge their customers a much higher rate based on the traffic they send through the proxy network. That is a dream business model!
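The per-MAU arithmetic above is easy to sanity-check. Here is a minimal Python sketch using illustrative numbers drawn from the ranges in the article (a $0.05 payout per MAU, $10 per GB residential pricing, 10 GB of traffic per device per month); these are assumptions for illustration, not figures from any particular provider.

```python
# Rough unit economics of an SDK-sourced residential proxy network,
# using illustrative numbers from the ranges quoted above.

def margin_per_mau(payout_per_mau=0.05,   # paid to the App developer per monthly active user
                   gb_per_mau=10,         # traffic routed through each device per month (assumed)
                   price_per_gb=10.0):    # what the proxy provider charges per GB (assumed)
    revenue = gb_per_mau * price_per_gb
    gross_profit = revenue - payout_per_mau
    return revenue, gross_profit

revenue, gross_profit = margin_per_mau()
print(f"Revenue per MAU:      ${revenue:,.2f}")
print(f"Gross profit per MAU: ${gross_profit:,.2f}")
# With these assumptions: $100.00 of revenue against a $0.05 payout per MAU,
# i.e. a gross margin of roughly 99.95%.
```

Multiplying the same figures by the 100,000 monthly active users in the worked example that follows reproduces the roughly $10 million per month revenue number.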
So if I'm the developer of a popular Android App with **100,000 monthly active users** and I integrate with Bright Data's SDK, then: **Users:** Get an ad-free experience in exchange for their bandwidth. **Developer:** Makes $5,000 per month for installing the SDK into the App ($0.05 per MAU). **Proxy Provider:** Makes $10,000,000 per month from their proxy customers (assuming 10GB bandwidth per MAU & charging proxy users $10 per GB). That is a **$9,995,000 per month profit** on a single App that has integrated the SDK! Obviously, the proxy provider will have other expenses to deduct from operating the proxy network (servers, infrastructure, developers, etc.). But still, they almost certainly will have a **99.9% profit margin on $10 million in revenue**! Little wonder that Bright Data, the pioneer of this strategy, recently announced they surpassed the $100 million mark in annual revenue. ## Proxy Providers Building App Networks This business model is highly profitable, but much harder to execute than simply buying datacenter IPs or access to residential/mobile proxy networks from other providers. As a result, only a few big providers have been able to pull this off: ### Bright Data Bright Data (formerly Luminati) pioneered this business model when they first integrated it into their sister company's product, HolaVPN. HolaVPN offered users a free VPN in exchange for their device being used by Luminati as a proxy. Later, Bright Data started allowing 3rd party App and Chrome extension developers to integrate the SDK into their products. This strategy was a huge competitive edge for Bright Data as they quickly grew to become the largest proxy provider, offering some of the most professional and sophisticated proxy services for developers and companies, and building a very big business in the process. ### OxyLabs For a time, Oxylabs, the second largest proxy provider in the market, also had a 3rd party App and Chrome extension SDK program as they built out their own residential and mobile proxy networks. Their SDK was believed to be included in: - AppAspect Technologies' **EMI Calculator** and **Automatic Call Recorder** - Birrastorming Ideas S.L's **IPTV Manager for VL** - CC Soft's **Followers Tool for Instagram** - Glidesoft Technologies' **Route Finder** - ImaTechInnovations' **3D Wallpaper Parallax 2018** - Softmate a/k/a Toolbarstudio Inc.'s **AppGeyser** and **Toolbarstudio** However, Oxylabs doesn't seem to offer their SDK to 3rd party developers anymore, possibly following the patent infringement litigation Bright Data settled against OxyLabs in 2018. Today, Oxylabs appears to use different methods to build its proxy network, including Honeygain.com. ### Infatica.io A more recent adopter of this approach has been Infatica.io, which also provides App developers with an SDK that they can integrate into their codebase to monetize their active users. They offer developers between $0.04 and $0.06 per monthly active user when the SDK is integrated into their Apps. ## Ethics of Integrated Proxy SDKs When proxy companies and developers first started integrating proxy SDKs into Apps, the practice raised far more ethical questions.
Over the years, there have been lots of users & developers complaining about this practice: **No Consent:** Developers were including the SDKs without informing users. **App Store Bans:** Some developers had their Apps removed from App stores because of the SDKs. **Malware Claims:** Concerns these SDKs might include malware. **Bad Traffic:** Users' devices being used in DDoS and other attacks. However, in the last year or two, proxy companies and developers have taken a much more responsible and ethical approach to how their SDKs are integrated and used. **For example:** Bright Data positions its SDK as an **Ethical SDK**, requiring App users to explicitly opt in to having Bright Data use them as a proxy endpoint. They have also introduced strict **KYC requirements** for all new users of their residential & mobile proxy networks when they sign up, to ensure end-user phones & laptops aren't used for illegal or questionable use cases. ## More Web Scraping Articles This was a deep dive into one of the lesser-known aspects of the web scraping market. If you would like to learn more about proxies or web scraping in general, then be sure to check out The Web Scraping Playbook, or check out one of our other in-depth guides:
true
true
true
Building your own residential or mobile proxy network can be very profitable, but using Embedded App & Chrome Extension SDKs can take your profitability to insane levels.
2024-10-12 00:00:00
2024-01-01 00:00:00
https://assets-scrapeops…zy-economics.jpg
null
scrapeops.io
scrapeops.io
null
null
30,820,235
https://www.bbc.co.uk/news/business-60838192
Electric cars: Five big questions answered
Lexy O'Connor
# Electric cars: Five big questions answered - Published **In less than eight years, the government plans to ban the sale of all new petrol and diesel cars and vans, and as part of this shift is promising to expand the network of public charging points to 300,000.** It's part of a government strategy to help the UK meet its 2050 net zero target, under which electric vehicles (EVs) will soon become the most common option for anyone wanting to buy a brand new car. Among the 35 million cars on UK roads, just 1.3% were EVs in 2020, but that figure is starting to climb. Battery electric and hybrid cars accounted for nearly a third of new cars leaving dealerships last month, according to the Society of Motor Manufacturers and Traders (SMMT). But would-be buyers still have a lot of reservations. BBC Radio 5's The Big Green Money Show asked listeners to send in their questions; here are their top five: ## Why are electric cars so expensive? Electric cars usually cost thousands of pounds more than their petrol, or diesel, counterparts. This is because EV batteries are expensive to make and a high level of investment is needed to transform existing factory production lines to manufacture the new technology. However, costs are expected to come down in the near future: The SMMT forecasts electric and internal combustion engine cars should cost roughly the same "by the end of this decade." Meanwhile, experts say you should also consider the total spend over the car's lifetime. The cost of the electricity used to power your EV has been rising sharply recently and will vary according to your household tariff, but it is still cheaper than petrol or diesel fuel per mile. Melanie Shufflebotham is the co-founder of Zap Map, which maps the UK's charging points. She says if an EV is charged at home "the average price people are paying is roughly 5p per mile". This compares, she says, to a cost of between 15 and 25 pence per mile for petrol or diesel cars. There are other potential savings too. Vehicle tax is based on how much pollution a car emits, so zero emissions vehicles like electric cars are exempt. Meanwhile, says Melanie Shufflebotham, an EV is usually cheaper to maintain because "typically, a petrol or diesel car has hundreds of moving parts, whereas an electric car doesn't." For instance, an electric vehicle does not require oil changes and, because it has fewer moving parts, is likely to suffer less wear and tear. The cost of replacing batteries is high, but many manufacturers offer a guarantee of at least eight years. ## Are there enough public chargers for all the EV drivers who will need them? Right now, the UK has around 30,000 public charging points, of which two thirds are "fast" or "rapid" chargers, according to Zap Map. The government announced on Friday plans to expand this ten-fold to 300,000 by 2030. Last July, the Competition and Markets Authority raised concerns that the on-street charging rollout has been "slow and patchy" and called on the government to set up a national strategy to improve the infrastructure before the 2030 deadline. The current number of public chargers is "nowhere near enough" says Paul Wilcox, the UK managing director of Vauxhall. But he believes the situation will improve vastly by 2030 as EVs become more common, because as demand rises, the number of chargers installed will increase. "I'm absolutely confident … because once you get the volume of cars you'll get the commercialisation of charging."
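The running-cost comparison above (roughly 5p per mile for home charging versus 15-25p per mile for petrol or diesel) can be made concrete with a small sketch. The annual mileage below is an assumed figure for illustration; only the pence-per-mile numbers come from the article.

```python
# Illustrative annual running-cost comparison using the rough per-mile
# figures quoted above. Annual mileage is an assumed example value.

ANNUAL_MILES = 8_000            # assumed typical annual mileage
EV_PENCE_PER_MILE = 5           # home charging, per the article
PETROL_PENCE_PER_MILE = 20      # midpoint of the quoted 15-25p range

ev_cost = ANNUAL_MILES * EV_PENCE_PER_MILE / 100        # pounds per year
petrol_cost = ANNUAL_MILES * PETROL_PENCE_PER_MILE / 100

print(f"EV (home charging): £{ev_cost:,.0f} per year")
print(f"Petrol or diesel:   £{petrol_cost:,.0f} per year")
print(f"Indicative saving:  £{petrol_cost - ev_cost:,.0f} per year")
# With these assumptions: £400 vs £1,600, an indicative saving of about £1,200 a year.
```

The gap narrows for drivers who rely on public chargers, which, as the article goes on to note, cost more per mile than charging at home.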
Only rapid and ultra-rapid chargers are suitable for drivers wishing to recharge on long journeys. According to Zap Map, at the moment around 5,500 of those exist. Of those, just over 800 are Tesla Superchargers, which can only be used by Tesla drivers. ## What about 'range anxiety' - how far can a fully-charged EV travel? The distance a car can be driven on a single battery charge is known as range and varies between models. Peter Rolton, Executive Chairman at Britishvolt, which is building an EV battery factory in Northumberland, says the technology to increase range is improving. "A hatchback [car] currently has around 200-250 miles range." By the end of the decade, he says, better batteries will extend that distance by many miles. EVs are powered by lithium-ion batteries and research is ongoing to improve their range. The game-changer will be if, and when, manufacturers can commercialise the next stage of the technology - known as solid-state batteries. These batteries will be lighter and charge much faster than their lithium-ion counterparts. James Gaade, head of programme management at the Faraday Institution, says "many automakers are targeting the introduction of solid-state batteries" within the next decade. These, he says, "could herald a step change in EV range." In a nutshell, these cars are expected to be able to travel much further on a single charge in the future. ## What if I am not able to charge at home? The majority of households in the UK, some 18 million (65%), either have, or could offer, off-street parking for at least one vehicle, according to data from the RAC. However, according to the Competition and Markets Authority, that leaves more than eight million households without access to home charging, including some people living in flats. There are other alternatives, says Melanie Shufflebotham of Zap Map. "Local authorities are beginning to install on-street chargers…and then apart from that it's about finding a charger at a local supermarket, or a local charging hub, so you can charge up periodically, as you would with a petrol or diesel car." However, people relying on public chargers face higher costs to power up their EV than those able to charge at home, with prices varying depending on which company owns the charging point. Public charge points also attract a higher rate of VAT: 20% compared with the 5% paid by domestic users charging at home. ## Will we all own cars in the future? Maybe not. Paul Wilcox of Vauxhall says "seismic changes are coming". He expects to see "a huge rise in things like subscription models", where customers pay monthly to use a car with other costs like insurance and maintenance included. Another area expected to grow is what's known as 'fractional ownership', or car sharing clubs. Melanie Shufflebotham says car sharing could grow in popularity, until driverless cars eventually become a reality. "Imagine, just as you'd call up an Uber now, you will have an app to call up an autonomous car to take you where you want to go. In the nearer term I think car sharing is a really important solution." *Download and subscribe to *The Big Green Money Show* with Deborah Meaden on BBC Sounds, or listen to a shorter version on BBC Radio 5 Live on Fridays.*
true
true
true
The electric car revolution is speeding up but what do buyers really want to know?
2024-10-12 00:00:00
2022-03-26 00:00:00
https://ichef.bbci.co.uk…s-1344977043.jpg
article
bbc.com
BBC News
null
null
8,305,630
https://blog.squareup.com/townsquare/posts/emv-explained-in-2-minutes
The Bottom Line | A Publication By Square
null
## Emerging Payments The payments landscape is always changing and consumers are finding new ways to transact every day. From buy now, pay later to payment links, discover the latest trends and technologies emerging in payments today. Millions of companies use Square to take payments, manage staff, and conduct business in-store and online. As a business, you need to be where your customers are. That may mean selling on social media, optimizing for mobile shopping, or even opening a pop-up. Learn how you can build your business and sell wherever, and however, you want. Explore curated articles, collections, videos, and more that Square editors have hand-selected to help you navigate as you scale your business, grow and manage your team, organize your finances, and reach new customers. The way people are buying and selling items is changing, and mobile commerce is leading the charge. Mobile commerce is a type of eCommerce that enables people to purchase items directly through their mobile devices, like phones and tablets. And purchases are surging, with 59% of consumers saying they bought items directly from social media, according to our latest Future of Commerce report. Use this collection to learn how to get started with mobile commerce so you can reach new customers where they already are. This downloadable omnichannel selling metrics template makes it easy to identify and track the key performance indicators (KPIs) that will help you grow your business efficiently across channels.
true
true
true
Fuel your business ambitions with expert insights, original research, and in-depth explainers from The Bottom Line, a publication by Square for entrepreneurs succeeding on their own terms.
2024-10-12 00:00:00
2023-01-05 00:00:00
https://images-cdn2.welcomesoftware.com/Zz0wMzY3Y2ZhYzNkZjQxMWVmODM0YmJlNTU2ZWM3ODk2ZA==?width=1200
website
squareup.com
The Bottom Line by Square
null
null
21,648,994
https://medium.com/@byrnehobart/investing-in-bitcoin-the-asset-allocators-perspective-70c4aa4f221c
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,369,788
http://nautil.us/issue/66/clockwork/why-do-taxonomists-write-the-meanest-obituaries-rp2
Why Do Taxonomists Write the Meanest Obituaries?
Ansel Payne
Constantine Rafinesque had only been dead a few months when Asa Gray sat down to eulogize him for the *American Journal of Science*. The year was 1841, and Gray, soon to join both the American Academy and the Harvard faculty, was well on his way to becoming the most respected botanist of his generation. *Grayia*, a new genus of desert shrub, had just been named in his honor. Rafinesque, on the other hand, was *persona non grata*. Described by peers as a “literary madman,” the Turkish-born polymath had died of cancer the previous fall. Among the many works he left behind were rambling discourses on zoology and geology; a catalog of Native American burial mounds; a new interpretation of the Hebrew Bible; a 5,400-line epic poem (with footnotes); and, last but not least, a lengthy series of studies on North American plants. It was these last that had attracted Gray’s attention. That’s because scattered among Rafinesque’s botanical works were descriptions of over 6,000 new plant species, far more than any one person had managed to produce prior to that time—more, in fact, than anyone has produced since. It should have been an amazing accomplishment, and would have been, if only Rafinesque had been a decent botanist. “Our task,” Gray began, “although necessary, as it appears to us, is not altogether pleasing …” While the professor wanted to do “full justice” to Rafinesque’s life, he felt “obliged, at the same time, to protest against all of his later and one of his earlier botanical works … There can, we think, be but one opinion as to the consideration which is due to these new genera and species: They must be regarded as fictitious, and unworthy of the slightest notice.” Through a lifetime of what historians would call a “nervous and appalling industry,” Rafinesque had somehow managed to produce thousands of pages of the worst work the field had ever seen. Full of errors and oversights, punctuated only by the occasional tirade, Rafinesque’s papers were case studies in sloppy work. His descriptions were so vague that readers had trouble attaching them to actual plants, and his talents for misidentification were legendary: Many of his “new” species were actually just well-known weeds. Toward the end, Rafinesque’s “passion for establishing new genera and species,” Gray wrote, “appears to have become a complete *monomania*. This is the most charitable supposition we can entertain.” Now, in the months after his death, the scientific establishment had assessed his work and passed its judgment. Rafinesque’s “absurd” botanical legacy, Gray wrote, amounted to little more than a “curious mass of nonsense.” Gray’s note wouldn’t be the last unkind obituary in the annals of taxonomy, nor would it be the worst. That’s because the rules dictating how taxonomists name and classify living things bind these scientists in a web of influence stretching far back into the 18th century. When an agent of chaos like Rafinesque enters the scene, that web can get sticky fast. In a field haunted by ghosts, someone has to reckon with the dead. Taxonomy, the art and science of classifying life, really should be a civilized pursuit. It encourages solitude, concentration, care. It rewards a meticulous attention to detail. 
And while it might occasionally receive some good-natured ribbing from the popular culture—think of all those butterfly collectors stumbling around in *Far Side* cartoons—it continues to play a vital role at the foundations of modern biology. It can come as a bit of a surprise, then, when that veneer of civilization cracks, and the field reveals itself to be one of the more contentious arenas in science, a place where arguments over names and classifications rage through the literature for decades. This is both a strength, as challenges to current classification keep the field dynamic and relevant, and an expression of its hardwired vulnerabilities. For starters, there’s the problem of classification itself. Ever since Darwin gave us a framework for understanding common descent, the search has been on for a natural classification, an arrangement of nested groups, or *taxa*, that accurately reflects evolutionary relationships. In this scheme, a classification functions as an explicit evolutionary hypothesis—to say that five species form a genus is also to say that those five species share a unique common ancestor. Ditto for families and orders, right up through classes and kingdoms. On another level, though, higher-order classifications are all a bit arbitrary. So long as all the members of a genus share a unique common ancestor and some unifying trait, the size of that genus—the number of related species lumped together under that name—is really up to the classifier. This has created generational fights between two different camps of taxonomists: splitters, who advocate for more and smaller groups, and lumpers, who like their groups big and inclusive. And then there’s the problem of identifying species in the first place. While each of the many different textbook definitions seems straightforward enough in theory, discovering the actual boundaries of a species in nature can prove frustrating. Are subtle variations between two populations meaningful? Or are they just background noise? Working from a limited number of specimens or a dearth of genetic evidence, taxonomists often have to rely on intuition and experience to make that call. The resulting species definitions are provisional, and can be overturned by new evidence in the form of more specimens, novel genetic data, or broader geographic surveys. To accommodate all this hypothesis testing, the field has committed itself to the principle of taxonomic freedom, the right of any individual to change any classification based on new evidence and his or her best judgment. The practical result is that classifications tend to fluctuate in ways that are both healthy and natural, and that ultimately lead to better agreement between those classifications and evolutionary history. It’s a messy process, but it seems to work. To complicate things further, the Earth has an extraordinary abundance of biodiversity. In the years since Linnaeus introduced the modern system of classification and nomenclature, more than 250,000 species of plants and over 1.2 million species of animals have been described by naturalists. Each carries a unique, two-part name, or Latin binomial, that indicates both its genus group (e.g., *Homo*) and its species (*sapiens*). Keeping track of all these names and their shifting histories requires specialists to maintain a near-encyclopedic familiarity with the literature of their group. 
Now imagine if two taxonomists, working in different languages, at different times, or in different parts of the world, independently describe the same species. Unaware of one another’s work, each would produce a different name for the same organism, only one of which can ultimately be correct. A similar problem occurs when taxonomists independently use the same name to describe two different species. To avoid the chaos that would result from either of these situations (known, respectively, as synonymy and homonymy), the field enshrines another principle, taxonomic stability, to serve as a counterbalance to taxonomic freedom. The idea is to find a classification that’s stable enough to be useful, but flexible enough to incorporate necessary changes. This tension between freedom and stability was long ago formalized in two sets of official and binding rules: the International Code of Zoological Nomenclature (ICZN), which deals with animals, and the International Code of Nomenclature for algae, fungi, and plants (ICN). Periodically updated by committees of working taxonomists, these documents set out precise, legalistic frameworks for how to apply names both to species and to higher taxa. (The animal and plant codes operate independently, which means that an animal can share a scientific name with a plant, but not with another animal, and vice versa.) Central to both, and to taxonomic stability itself, is the Principle of Priority, which states that the first valid scientific name applied to a group of animals is *the* valid name. If Linnaeus named a species in 1758, and that species is still recognized today, then Linnaeus’ name stands. Priority forms the backbone of biological nomenclature; without it, classification would degenerate, Babel-style, into a panoply of competing and incompatible systems. One consequence of all this is that taxonomists are constantly combing through the older literature to uncover the proper name for any given species. Another is that current classificatory changes come with a high potential for downstream influence. And there’s the rub. In order to balance freedom with stability, the Codes generally remain silent on the question of *quality*: Any taxonomic proposal, no matter how outlandish, ill-informed, or incompetent, counts so long as it was published according to the barest of requirements set out in the Codes themselves. For the ICN, this means descriptions must be published in printed materials that are distributed to libraries and accessible to botanists. For the ICZN, which recently relaxed its requirements, descriptions can come in either publically accessible printed materials or Internet-based digital publications. In neither case do the Codes require peer review; if you can print it and you can distribute it, then you can describe pretty much whatever you want. While this freedom opens up a valuable space for amateur contributions, it also creates a massive loophole for unscrupulous, incompetent, or fringe characters to wreak havoc. That’s because the Principle of Priority binds all taxonomists into a complicated network of interdependence; just because a species description is wrong, poorly conceived, or otherwise inadequate, doesn’t mean that it isn’t a recognized part of taxonomic history. Whereas in physics, say, “unified theories” scrawled on napkins and mailed in unmarked envelopes end up in trashcans, biologists, regardless of their own opinions, are bound to reckon with the legacy of anyone publishing a new name. 
Taxonomists are more than welcome to deal with (or “revise”) these incorrect names in print, but they can’t really ignore them. And this, finally, is the context for Gray’s posthumous hatchet job. Rafinesque’s manic and reckless work on North American plants was more than just a curious sideshow in the history of American botany; it was a problem that other botanists had to solve. While Gray may have advocated for ignoring Rafinesque’s work wholesale, later taxonomists understood that kind of boycott as a violation, both in letter and in spirit, of the Codes of Nomenclature. In fact, the extensive nature of Rafinesque’s interventions—and the complication that at least some of his new species were actually valid—meant that taxonomists would still be untangling his mess well into the 20th century. As E.D. Merrill, the Harvard botanist who set out to index and correct much of this work in the 1930s and ’40s, put it, “we would have been infinitely better off today had Rafinesque never written or published anything appertaining to the subject.” “More than twenty years too late for his scientific reputation, and after having done an amount of injury to entomology almost inconceivable in its immensity, Francis Walker has passed from among us.” Walker’s two-page obituary, in the November 1874 issue of the *Entomologist’s Monthly Magazine*, sits between a short research note (“*Emmelesia unifasciata* three years in the pupa state”) and some words on the passing of William Lello (“He leaves a considerable collection of Lepidoptera …”). Written anonymously, it pulled no punches when it came to the late taxonomist’s legacy: The vast majority of the tens of thousands of new species he proposed were “objects of derision for all conscientious entomologists.” More than once, the obituarist referred to Walker’s work simply as the “evil.” And yet, the man’s career had begun with promise. His first work, a well-regarded study of the tiny wasps known as chalcidids, had “marked an era in the study of its subject.” Despite considerable inherited wealth, he longed for a permanent position at one of Britain’s major collections. When that position failed to materialize, Walker, “in an unlucky moment,” instead took up the first in a long series of contract appointments cataloging insects for the British Museum. This is where the trouble began. Moving from drawer to drawer through the collection, Walker took it upon himself to describe what he believed to be thousands of new species in virtually all major groups of insects, a task requiring skills far beyond what he, or anyone else, possessed. “The result,” the obituarist wrote, “was what might have been expected. The work was done mechanically: ‘new genera and species’ were erected in the most reckless manner …” Through a Rafinesquean combination of industry and incompetence, the humble Englishman had begun to single-handedly wreak havoc on the classification of the world’s insects. As Walker published more and more dubious names, in wider and wider groups, the entomological establishment grew ever louder in its condemnation. 
By the time he had exhausted most of the major insect orders, his once considerable “entomological reputation [had been] worn to shreds.” Walker, however, “appeared to be utterly indifferent to anything that could be hurled at him … In his social relations he was amiability itself …” When he died, at 65, the entomological community mourned the gentle soul who had walked the halls of the British Museum, but also let out a collective sigh of relief. “We earnestly hope,” the obituarist added, “that never again will it fall to us, nor to our successors in entomological journalism, to have to write such an obituary notice as this.” But Walker’s name wouldn’t be the last to live in taxonomic infamy. In fact, his obituary seems downright tactful beside Claude Morley’s note, from 1913, on the death of Peter Cameron, an infamous describer of Central American insects. “Peter Cameron is dead, as was announced by most of the halfpenny papers on December 4th. What can we say of his life? Nothing; for it concerns us in no way. What shall we say of his work? Much, for it is entirely ours, and will go down to posterity as probably the most prolific and chaotic output of any individual for many years past.” Cameron, a Scottish amateur with a penchant for Central American insects, left a legacy that echoed for decades. Fifty years after his death, Richard Bohart, a taxonomist at the University of California Davis, would reiterate that the entomologist’s “work was careless, his descriptions poor, his locality data were often vague or omitted, his generic assignments were characteristically erroneous and contradictory, and he eschewed illustrations.” Despite all that, or perhaps because of it, Bohart wound up with the thankless task of sorting through Cameron’s North American contributions to a small group of wasps known as the Odynerini. Of the hundred or so names Cameron proposed within the group, almost all, Bohart found, were invalid. Meanwhile, modern taxonomy has its own outliers. In 2006, over 50 scientists signed an open letter to the administration of the University of Utrecht protesting the work of one Dewanand Makhan, an amateur entomologist who frequently listed the university as his institutional affiliation. (Makhan was a contract employee at the university’s herbarium, and not a member of the academic staff; his publications now list a personal address.) “For many years,” they wrote, “Dr. Makhan has been a growing threat to taxonomy and zoological nomenclature, publishing a large number of new genera and species in groups as wide ranging as beetles, spiders, and gastropods. These publications are uniformly poor in quality and scholarship.” A group of ant experts put it more bluntly: A 2007 publication by Makhan, they wrote, was “one of the most inadequate papers that has ever been produced in ant taxonomy.” Makhan’s descriptions are notoriously short on detail. In place of clear scientific diagrams, he illustrates much of his work with blurry, out-of-focus photographs. Most frustrating to fellow entomologists, many of Makhan’s “new” species are instantly recognizable, at least to them, as already described insects. Despite numerous articles and blog posts on the so-called “Makhan problem,” new publications continue to appear, most in a small Australian journal without a traditional peer-review process. (As recently as last year, Makhan described a new species of waterbeetle, *Desmopachria barackobamai*—named, of course, for the 44th president of the United States.) 
The story is a familiar one, but with a modern twist. That’s because the growth of so-called “vanity journals”—publications that look to all appearances like mainstream scientific outlets, but lack rigorous peer-review—has produced new avenues for what some have taken to calling “taxonomic vandalism.” As traditional boundaries between experts and amateurs dissolve in the face of digital publishing, more opportunities than ever exist for novel voices in science, journalism, and politics. Unfortunately, these opportunities come at a cost, as a growing tide of information challenges the discriminatory abilities of scientists and lay readers alike. While discussions underway now could revise the Codes to include stricter controls on which publications count for classificatory changes, many taxonomists are wary of doing anything that might deter amateur contributions. With so many species left to discover, and with existential threats to biodiversity looming, they realize the field needs as much help as it can get. In the struggle to balance its highest ideals with its unruliest practitioners, taxonomy teaches us an enduring lesson about science as a whole. While we like to think of that enterprise as an antidote to fallibility—a way of seeing that seeks, through meticulous care and relentless examination, to minimize our tendency toward error—it remains fundamentally, inescapably human. Somewhere in between is where real progress happens. I was reminded of that when I found another obituary for Peter Cameron, this one from his hometown newspaper, the Glasgow *Herald*: Much regret will be expressed in scientific circles throughout the world at the death of Mr. Peter Cameron, a well-known entomologist and a native of Glasgow. For fifteen years, Mr. Cameron lived as a recluse at New Mills, Derbyshire, but for the last three years he had lodged at the cottage of a labourer named Price … At the inquest, it was stated that Mr. Cameron’s only relative was a sister in Dresden. A doctor expressed the opinion that he had neglected himself, and the jury returned a verdict that death was due to alcoholism … Of late, his circumstances had not been very bright, which prompted a certain section of the Royal Society to get together a sum of money, which was given to him at intervals to meet his needs. In 2014, a team of entomologists from the Universidad de Panamá and Spain’s Museo Nacional de Ciencias Naturales described a new species of gall wasp found in oak trees growing on the slopes of Panama’s Volcán Barú. Less than three millimeters long and with four finely veined wings, the species differs from its relatives in the United States by an additional antennal segment and a minor difference in the shape of its thorax. Its name? *Callirhytis cameroni*. *Ansel Payne is a writer and naturalist in Tuscaloosa, Alabama.* *This article was originally published in our “Boundaries” issue in April, 2016.*
true
true
true
The open nature of the science of classification virtually guarantees fights.
2024-10-12 00:00:00
2016-03-25 00:00:00
https://assets.nautil.us…&ixlib=php-3.3.1
article
nautil.us
Nautilus
null
null
5,429,383
http://www.bloomberg.com/news/2013-03-23/blackstone-group-icahn-said-to-submit-proposals-to-dell.html
Bloomberg
null
To continue, please click the box below to let us know you're not a robot. Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy. For inquiries related to this message please contact our support team and provide the reference ID below.
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
19,601,412
https://www.bloomberg.com/news/articles/2019-04-07/gene-therapy-was-hailed-as-a-revolution-then-came-the-bill
Bloomberg
null
To continue, please click the box below to let us know you're not a robot. Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy. For inquiries related to this message please contact our support team and provide the reference ID below.
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
39,771,717
https://github.com/slsa-framework/slsa-github-generator/pull/3401/files
chore: Update changelog for #3350 by laurentsimon · Pull Request #3401 · slsa-framework/slsa-github-generator
Slsa-Framework
## chore: Update changelog for #3350 #3401 Changes from all commits: `0354c9e` `32f4b91` `57af8f9` `6c3aedd`
true
true
true
Summary Update changelog for #3350 Testing Process ... Checklist Review the contributing guidelines Add a reference to related issues in the PR description. Update documentation if applicable. ...
2024-10-12 00:00:00
2024-03-20 00:00:00
https://avatars.githubusercontent.com/u/64505099?s=400&v=4
object
github.com
GitHub
null
null
24,978,883
https://phys.org/news/2020-11-future.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
14,401,827
https://www.twilio.com/blog/2017/05/announcing-programmable-video-group-rooms.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,585,842
https://www.debian.org/News/2023/2023120902
Debian 12.3 image release delayed
null
# Debian 12.3 image release delayed **December 9th, 2023** Due to an issue in the ext4 file system with data corruption in kernel 6.1.64-1, we are pausing the planned Debian 12.3 point release images for today while we attend to fixes. Please do not upgrade any systems at this time; we urge caution for users with UnattendedUpgrades configured. For more information please refer to Debian bug report #1057843. ## About Debian The Debian Project is an association of Free Software developers who volunteer their time and effort in order to produce a completely free operating system known as Debian. ## Contact Information For further information, please visit the Debian web pages at https://www.debian.org/ or send mail to <[email protected]>.
true
true
true
null
2024-10-12 00:00:00
2024-09-24 00:00:00
null
null
null
null
null
null
1,380,279
http://www.theregister.co.uk/2010/05/25/74_democrats_defy_genachowski/
74 Democrats defy Obama man's net neut plans
Rik Myslewski
# 74 Democrats defy Obama man's net neut plans ## The magic word: 'jobs' Seventy-four Democratic members of the US House of Representatives have sided with telcos in the ongoing dust-up over the Federal Communications Commission's efforts to preserve net neutrality. "We urge you not to move forward with a proposal that undermines critically important investment in broadband and the jobs that come with it," reads a letter from Houston, Texas Rep Gene Green, signed by a total of 74 members of Obama's own party. The proposal that the letter derides is FCC chairman Julius Genachowski's "Third Way", his self-described "narrow and tailored approach" to internet regulation floated earlier this month in response to the April decision by a federal appeals court that sharply restricted the FCC's internet-regulatory powers by overthrowing the commission's sanctioning of Comcast for choking BitTorrent traffic. Based on a legal framework created by the FCC's general counsel, the Third Way is Genachowski's tightrope-walking attempt to satisfy both the supporters and opponents of net neutrality by having the FCC assume regulatory powers over only the transmission component of broadband-access service, but to steer clear of any controls over content, services, e-commerce, apps, and the like. The telcos — fervid opponents of FCC regulation — are having none of Genachowski's attempt at a middle ground. They're opposing any extension of telecommunications-style regulation into the broadband marketplace. One result of what must be a swarm of lobbyists crowding Capitol Hill is the Gene Green letter, which has been published in full by both anti–Third Way organizations such as Americans for Prosperity and pro–Third Way entities such as ColorOfChange.org. (Interestingly, ColorOfChange.org, an organization dedicated to "strengthening Black America's political voice," is balanced by the minority-infused, AT&T-supported Alliance for Digital Equality, which takes a decidedly anti–Third Way stance). In the letter, Green and his troops argue that the Third Way would stifle broadband investment, and that doing so would impede job growth. "The uncertainty this proposal creates will jeopardize jobs and deter needed investment for years to come," Green writes. Although Green refers to the Third Way as being "expanded FCC jurisdiction over broadband" that's "unprecedented", it's only fair to point out that Genachowski's proposal would essentially restore the powers that the FCC exercised in the years before the Comcast decision — years during which internet expansion was quite vibrant. Pro–Third Way congressmen have yet to issue a public letter like the one floated by Gene Green and his co-signers. But thirteen corporate members of the pro-net-neut OpenInternetCoalition — whose supporters include Amazon, eBay, Facebook, Google, and Skype — recently sent a letter to Genachowski that said: "we applaud the middle ground approach that you have proposed. We share your belief that this course will create a legally sound, light-touch regulatory framework that benefits consumers, technology companies, and broadband Internet access providers." Pro–net neutrality org *Free Press* has also issued a line-by-line deconstruction of the Green letter, calling it "so full of misinformation that no member of Congress should in good conscience put his or her name on it." 
All of this letter-writing, invoking of the sacred word "jobs", argument deconstruction, and name-calling is being done to sway Genachowski's decision on implementing the Third Way, and to jockey for position in preparation for possible Congressional action on net neutrality. Earlier this month, the chairmen of the House Committee on Energy and Commerce and the Senate Committee on Science, Commerce, and Transportation, Henry Waxman of southern California and Jay Rockefeller of West Virginia, sent — what else? — a letter to Genachowski in support of FCC regulation. In that letter, Waxman and Rockefeller also noted that there might be "a need to rewrite the law to provide consumers, the Commission, and industry with a new framework for telecommunications policy." But with 74 Democrats enlisting in the anti–Third Way army while waving the highly electable banner of job-protection, with mid-term elections looming, and with other battles such as financial reform, oil-spill face-saving, and Supreme Court–candidate ratification eating up the political bandwidth, it's unlikely that Congress will be diving into the broadband battle anytime soon. So Genachowski has a choice: he can stick his neck out, introduce the Third Way regs, and face not only accusations that the Obama administration is a job-killer, but also possible legal challenges to the FCC's right to strengthen its regulatory powers without specific legislation — or he can fold. Perhaps the next step will be for the pro-Third Way lobbyists, funded by Google, Amazon, eBay, and the rest, to shop their own letter around Capitol Hill. After all, there are 255 Democrats in the House, 181 of whom didn't sign the Green letter. ® ### Bootnote Thirty-seven Republican representatives sent their own letter to Genachowski on Monday, denouncing "the heavy-handed 19th century regulations you seek to impose on a highly competitive 21st century communications marketplace." However, reporting that the Republican party opposes an Obama administration proposal is akin to posting that age-old definition of a non-news headline: "Dog Bites Man."
true
true
true
The magic word: 'jobs'
2024-10-12 00:00:00
2010-05-25 00:00:00
null
article
theregister.com
The Register
null
null
24,339,102
https://thl.fi/en/web/thlfi-en/-/koronavilkku-has-now-been-published-download-the-app-to-your-phone-
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
27,162,879
https://developer.exogress.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
2,602,752
http://www.in-the-box.org
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,223,741
http://www.psychologytoday.com/blog/dont-delay/201403/procrastination-why-mindfulness-is-crucial
Procrastination: Why Mindfulness Is Crucial
Timothy A Pychyl Ph D
###### Procrastination # Procrastination: Why Mindfulness Is Crucial ## Mindful awareness & nonjudgmental acceptance—the path to emotional regulation Posted March 12, 2014 Why do we procrastinate? Why do we procrastinate? The short answer is quite simple, I think. When we face some tasks, we have a negative emotional reaction. These can be transient emotions, even preconscious—meaning that we’re not really aware of these feelings at a conscious level—but we do react to them. We cope by avoiding the task that we perceive is causing these negative emotions or is, at the very least, associated with these negative emotions. This avoidant coping response is procrastination. Long-time readers of this blog will understand this avoidant coping as the “giving in to feel good” instance. We seek short-term mood repair through task avoidance. This task avoidance is procrastination when we have intended to do a task because, all things considered, it’s in our best interest to do the task at that time, but we needlessly avoid it instead. We’ve become our own worst enemy with a self-defeating delay, but we can learn a better approach. I have argued here in this blog and in my research (e.g., Sirois & Pychyl, 2013) and book (Solving the Procrastination Puzzle) that procrastination is a self-regulation failure. We fail to regulate ourselves to act as intended. When we face a task to which we have a negative visceral reaction and think “I don’t feel like it” or “I don’t want to” we require self-regulation to proceed despite these feelings. What I want to emphasize and make clear in this post is that **effective self-regulation relies on emotion regulation, and this emotion regulation in turn relies on mindfulness.** There is clear evidence that mindfulness is related to less procrastination (including a thesis that is just wrapping up in my research group now). So, we need to begin with the connection between mindfulness and emotion regulation. **Mindfulness and Emotion Regulation** Although I focus on procrastination as an emotion-regulation process, I turn to a recent publication by Rimma Teper, Zindel Segal and Michael Inzlicht (University of Toronto) for an excellent summary of the connection between mindfulness and emotion regulation. This is, I argue, the missing piece, the most essential piece, of what I have come to call the “procrastination puzzle.” Let’s begin with emotions. Emotions provide information. They are adaptive, for the most part. And, based on these two properties, we have come to understand that emotions have motivational properties. For example, we typically acknowledge that fear motivates a “fight or flight” response. These responses are what evolutionary psychologists consider our “best guesses” about what to do in situations that evoke a fear emotion. Although it’s possible that a task might evoke fear (e.g., fear of failure), there are other more subtle and perhaps transient emotions that define task aversiveness such as boredom or frustration. These emotions motivate us too. They can motivate the avoidance we label as procrastination. The crux of the issue then is that despite these emotions our successful goal pursuit will depend on task engagement. **Despite negative emotions, we need to regulate our behavior towards the task, not away from it. How do we do this?** First, we have to be aware of our emotions. 
I have noted already that our emotional reaction to a task may not be a “jump up in your face” sort of emotional response that you might have if a poisonous snake just slithered out from under your desk or if you were to hear someone kick in your door just now (I’ll bet both of these examples evoked emotions for some readers; they do for me). In any case, I digress. **The thing is we have to be aware of emotions so that this awareness might signal the need for effortful self-regulation.** This is where my colleagues from the University of Toronto fill a gap in the story. Their work, led by Michael Inzlicht, focuses on the link between the executive function resources we require to self-regulate and the mindfulness that we require to call executive function into action. **We have to know when to control ourselves.** Their argument in a nutshell is this: - Mindfulness as a practice cultivates the ability to maintain focus on the present moment. This present-moment awareness provides sensitivity to sensory cues—like that negative emotional “pang” we might feel when facing an aversive task. - When we are aware of these sensory cues, and, in particular for procrastination, the negative emotions we’re experiencing related to the task, we can instigate control. Here I would argue that one key feature of executive function is inhibition—and the first step for successful goal pursuit with an aversive task might simply be inhibiting the prepotent avoidance response. Put more simply, **mindful awareness lets me recognize that I’m freaking out about this task or bored stiff by this task and this awareness can signal the need to inhibit my habit of procrastinating**. **If I can be aware of my emotions, I can exert control and stay put.** **Mindfulness is more than awareness, however.** Mindfulness also includes the key feature of nonjudgmental acceptance of our emotions (or thoughts). In some fascinating experimental work, Rimma Teper and Michael Inzlicht demonstrated that more experienced meditators—those participants in their study who scored higher in mindfulness—committed fewer errors in their experimental task that measured executive control. They concluded that these mindful participants or any people “…who are able to accept the ‘pang’ of making an error may experience this quick affective state more keenly and may thus be more likely to attend to their errors and prevent them from happening in future trials. *These people may be better able to control their behavior because they are more accepting of their errors and associated conflict*” (p. 451, emphasis added). Interestingly, they also examined the neuroaffective correlates of this executive control where their research revealed that more experienced meditators showed greater evoked brain potentials generated in the anterior cingulate cortex (an area of the brain associated with executive function). All this is to say that this research by Michael Inzlicht and his colleagues indicates that when people are accepting of their emotions (an attribute developed through mindfulness), the potential is that they may also be able to use these transient emotions as signals to do something about them, not just rely on a habitual coping response such as avoidance. 
Although research will continue to investigate these links, I think it is intuitively obvious that **mindful awareness and acceptance are fundamental first steps in the self-regulation necessary to initiate goal action when procrastination may be the habitual response.** **If we can cultivate mindful awareness and acceptance, we can better understand when and why we’re motivated to procrastinate, and this in turn can promote more willful attempts to exercise the control necessary to stay the course until the initial emotions pass.** I would add that research clearly shows that progress on our goals fuels well-being, so this initial "priming the pump" for action may well be the mechanism by which we move from initially negative to more positive task-related emotions.

**Concluding Thoughts**

The take home message, and I think the main route for dealing with procrastination as a habitual coping response, is that **mindfulness meditation is a powerful tool for each of us. In fact, it may be one of the only entry points available to us to break the cycle of avoidance.**

I'll give the final word to Michael Inzlicht and his colleagues: "…mindfulness promotes executive control by enhancing experience of and attention to transient affects [emotions]—the control alarms—that arise from competing goal tendencies…Early awareness and acceptance of these sensations is advantageous, because it allows people to efficiently recruit regulatory resources" (p. 452).

*References*

Flett, A., & Pychyl, T.A. (2014). *Procrastination, rumination, and distress in students: An analysis of the roles of self-compassion and mindfulness*. Poster to be presented at the Canadian Psychological Association Annual Conference.

Pychyl, T.A., & Rotblatt, A. (2007). *Mindfulness meditation as an intervention for academic procrastination*. Paper presented at the biannual conference, Counseling the Procrastinator in Academic Settings, Catholic University of Peru, Lima, Peru.

Sirois, F., & Pychyl, T.A. (2013). Procrastination and the priority of short-term mood regulation: Consequences for future self. *Social and Personality Psychology Compass, 7*, 115-127. DOI: 10.1111/spc3.12011

Sirois, F.M., & Tosti, N. (2012). Lost in the moment? An investigation of procrastination, mindfulness and well-being. *Journal of Rational-Emotive and Cognitive-Behavior Therapy*. DOI: 10.1007/s10942-012-0149-5

Teper, R., Segal, Z.V., & Inzlicht, M. (2013). Inside the mindful mind: How mindfulness enhances emotion regulation through improvements in executive control. *Current Directions in Psychological Science, 22*(6), 449-454. DOI: 10.1177/0963721413495869
true
true
true
In a recent paper, some colleagues wrote, “The connection between mindfulness and improved emotion regulation is certainly an intuitive one…” I agree. What seems less intuitive to many people is how these also connect to our procrastination. In fact, I think understanding this is the central thing we need to understand about procrastination.
2024-10-12 00:00:00
2014-03-12 00:00:00
https://cdn2.psychologyt…pg?itok=Mkc4MF-z
article
psychologytoday.com
Psychology Today
null
null
15,861,752
https://www.w3schools.com/browsers/
W3Schools.com
null
# Browser Statistics Browser Statistics since 2002 ## The Most Popular Browsers W3Schools has over 60 million monthly visits. From the statistics below (collected since 2002) you can read the long term trends of browser usage. Click on the browser names to see detailed browser information: 2024 | Chrome | Edge | Firefox | Safari | Opera | ---|---|---|---|---|---| March | 77.6 % | 10.7 % | 4.6 % | 3.7 % | 2.2 % | February | 77.5 % | 10.5 % | 4.6 % | 3.6 % | 2.0 % | January | 78.1 % | 10.4 % | 4.7 % | 3.8 % | 2.1 % | 2023 | Chrome | Edge | Firefox | Safari | Opera | December | 78.2 % | 10.0 % | 4.6 % | 3.7 % | 2.1 % | November | 77.4 % | 10.6 % | 4.9 % | 3.9 % | 2.4 % | October | 78.0 % | 10.3 % | 4.8 % | 3.9 % | 2.3 % | September | 78.8 % | 10.3 % | 4.6 % | 3.4 % | 2.2 % | August | 80.1 % | 9.8 % | 4.6 % | 3.0 % | 1.8 % | July | 79.9 % | 10.0 % | 4.8 % | 3.0 % | 1.8 % | June | 79.8 % | 9.4 % | 4.3 % | 3.5 % | 2.1 % | May | 79.4 % | 9.3 % | 4.5 % | 3.7 % | 2.3 % | April | 79.5 % | 9.0 % | 4.6 % | 3.8 % | 2.2 % | March | 79.6 % | 8.8 % | 4.7 % | 3.7 % | 2.3 % | February | 79.7 % | 8.6 % | 4.8 % | 3.9 % | 2.2 % | January | 79.7 % | 8.2 % | 5.0 % | 3.9 % | 2.3 % | 2022 | Chrome | Edge | Firefox | Safari | Opera | December | 80.3 % | 7.8 % | 4.9 % | 3.7 % | 2.4 % | November | 79.9 % | 8.1 % | 4.9 % | 3.9 % | 2.2 % | October | 79.9 % | 8.1 % | 5.2 % | 4.2 % | 1.7 % | September | 80.9 % | 7.8 % | 5.2 % | 3.7 % | 1.5 % | August | 81.1 % | 7.6 % | 5.2 % | 3.4 % | 1.7 % | July | 81.1 % | 7.5 % | 5.0 % | 3.4 % | 2.1 % | June | 76.3 % | 7.4 % | 5.1 % | 3.6 % | 2.3 % | May | 79.9 % | 7.3 % | 5.3 % | 3.8 % | 2.4 % | April | 80.3 % | 7.2 % | 5.3 % | 3.8 % | 2.4 % | March | 80.3 % | 7.5 % | 5.3 % | 3.7 % | 2.3 % | February | 79.9 % | 7.5 % | 5.4 % | 4.0 % | 2.3 % | January | 80.1 % | 7.3 % | 5.5 % | 3.9 % | 2.3 % | 2021 | Chrome | Edge | Firefox | Safari | Opera | December | 81.0 % | 6.6 % | 5.5 % | 3.7 % | 2.3 % | November | 80.0 % | 6.8 % | 5.8 % | 3.9 % | 2.4 % | October | 80.3 % | 6.7 % | 5.7 % | 3.9 % | 2.3 % | September | 80.9 % | 6.5 % | 5.6 % | 3.6 % | 2.2 % | August | 81.4 % | 6.1 % | 5.6 % | 3.3 % | 2.1 % | July | 81.6 % | 6.0 % | 5.6 % | 3.3 % | 2.2 % | June | 81.7 % | 5.9 % | 5.6 % | 3.4 % | 2.2 % | May | 81.2 % | 5.8 % | 5.8 % | 3.5 % | 2.4 % | April | 80.7 % | 5.6 % | 6.1 % | 3.7 % | 2.4 % | March | 80.8 % | 5.5 % | 6.3 % | 3.7 % | 2.3 % | February | 80.6 % | 5.4 % | 6.6 % | 3.9 % | 2.3 % | January | 80.3 % | 5.3 % | 6.7 % | 3.8 % | 2.3 % | 2020 | Chrome | Edge/IE | Firefox | Safari | Opera | December | 80.5 % | 5.2 % | 6.7 % | 3.7 % | 2.3 % | November | 80.0 % | 5.3 % | 7.1 % | 3.9 % | 2.3 % | October | 80.4 % | 5.2 % | 7.1 % | 3.7 % | 2.1 % | September | 81.0 % | 4.9 % | 7.2 % | 3.6 % | 2.0 % | August | 81.2 % | 4.6 % | 7.3 % | 3.4 % | 2.0 % | July | 81.3 % | 4.3 % | 7.6 % | 3.4 % | 2.0 % | June | 80.7 % | 3.9 % | 8.1 % | 3.7 % | 2.1 % | May | 80.7 % | 3.5 % | 8.5 % | 4.1 % | 1.6 % | April | 80.7 % | 3.4 % | 8.6 % | 4.2 % | 1.5 % | March | 81.4 % | 3.5 % | 8.7 % | 3.7 % | 1.3 % | February | 82.0 % | 3.4 % | 8.7 % | 3.4 % | 1.2 % | January | 81.9 % | 3.0 % | 9.1 % | 3.3 % | 1.3 % | 2019 | Chrome | Edge/IE | Firefox | Safari | Opera | November | 81.3 % | 3.2 % | 9.2 % | 3.5 % | 1.4 % | September | 81.4 % | 3.3 % | 9.1 % | 3.1 % | 1.6 % | July | 80.9 % | 3.3 % | 9.3 % | 2.7 % | 1.6 % | May | 80.4 % | 3.6 % | 9.5 % | 3.3 % | 1.7 % | March | 80.0 % | 3.8 % | 9.6 % | 3.3 % | 1.7 % | January | 79.5 % | 4.0 % | 10.2 % | 3.3 % | 1.6 % | 2018 | Chrome | IE/Edge | Firefox | Safari | Opera | November | 
79.1 % | 4.1 % | 10.2 % | 3.8 % | 1.6 % | September | 79.6 % | 3.9 % | 10.3 % | 3.3 % | 1.5 % | July | 80.1 % | 3.5 % | 10.8 % | 2.7 % | 1.5 % | May | 79.0 % | 3.9 % | 10.9 % | 3.2 % | 1.6 % | March | 78.1 % | 4.0 % | 11.5 % | 3.3 % | 1.6 % | January | 77.2 % | 4.1 % | 12.4 % | 3.2 % | 1.6 % | 2017 | Chrome | IE/Edge | Firefox | Safari | Opera | November | 76.8 % | 4.3 % | 12.5 % | 3.3 % | 1.6 % | September | 76.5 % | 4.2 % | 12.8 % | 3.2 % | 1.2 % | July | 76.7 % | 4.2 % | 13.3 % | 3.0 % | 1.2 % | May | 75.8 % | 4.6 % | 13.6 % | 3.4 % | 1.1 % | March | 75.1 % | 4.8 % | 14.1 % | 3.6 % | 1.0 % | January | 73.7 % | 4.9 % | 15.4 % | 3.6 % | 1.0 % | 2016 | Chrome | IE/Edge | Firefox | Safari | Opera | November | 73.8 % | 5.2 % | 15.3 % | 3.5 % | 1.1 % | September | 72.5 % | 5.3 % | 16.3 % | 3.5 % | 1.0 % | July | 71.9 % | 5.2 % | 17.1 % | 3.2 % | 1.1 % | May | 71.4 % | 5.7 % | 16.9 % | 3.6 % | 1.2 % | March | 69.9 % | 6.1 % | 17.8 % | 3.6 % | 1.3 % | January | 68.4 % | 6.2 % | 18.8 % | 3.7 % | 1.4 % | Year | Chrome | IE | Firefox | Safari | Opera | ---|---|---|---|---|---| 2015 | 63.3 % | 6.5 % | 21.6 % | 4.9 % | 2.5 % | 2014 | 59.8 % | 8.5 % | 24.9 % | 3.5 % | 1.7 % | 2013 | 52.8 % | 11.8 % | 28.9 % | 3.6 % | 1.6 % | 2012 | 42.9 % | 16.3 % | 33.7 % | 3.9 % | 2.1 % | 2011 | 29.4 % | 22.0 % | 42.0 % | 3.6 % | 2.4 % | 2010 | 16.7 % | 30.4 % | 46.4 % | 3.4 % | 2.3 % | 2009 | 6.5 % | 39.4 % | 47.9 % | 3.3 % | 2.1 % | 2008 | 52.4 % | 42.6 % | 2.5 % | 1.9 % | | 2007 | 58.5 % | 35.9 % | 1.5 % | 1.9 % | | Netscape | ||||| 2006 | 62.4 % | 27.8 % | 0.4 % | 1.4 % | | 2005 | 73.8 % | 22.4 % | 0.5 % | 1.2 % | | Mozilla | ||||| 2004 | 80.4 % | 12.6 % | 2.2 % | 1.6 % | | 2003 | 87.2 % | 5.7 % | 2.7 % | 1.7 % | | 2002 | 84.5 % | 3.5 % | 7.3 % | **Chrome**= Google Chrome**Edge**= Microsoft Edge**IE**= Microsoft Internet Explorer**Firefox**= Mozilla Firefox (identified as Mozilla before 2005)**Mozilla**= The Mozilla Suite (identified as Firefox after 2004)**Safari**= Apple Safari (and Konqueror. Both identified as Mozilla before 2007)**Opera**= Opera (from 2011; Opera Mini is included here)**Netscape**= Netscape Navigator (identified as Mozilla after 2006) ## Statistics Can Be Misleading **W3Schools'** statistics may not be relevant to **your** web site. Different sites attract different audiences. Some web sites attract developers using professional hardware, while other sites attract hobbyists using older computers. Anyway, data collected from W3Schools' log-files over many years clearly shows the long term trends. ## Browsers Developer Tools Browser's developer tools can be used to inspect, edit and debug HTML, CSS, and JavaScript of the curently-loaded page. To learn more, check out the browser's own manual for developer tools: Microsoft Edge Developer Tools ## Other Statistics ## Computer Speed The first electrical computer, Z3 (1941), could do 5 instructions per second. The first electronic digital computer, ENIAC (1945), could do 5000 instructions per second. Today's computers can do 5 billion instructions per second. Computer | Year | Instructions per Second | Bits per Instruction | ---|---|---|---| Z3 | 1941 | 5 | 4 | ENIAC | 1945 | 5.000 | 8 | IBM PC | 1981 | 5.000.000 | 16 | Intel Pentium | 1995 | 100.000.000 | 32 | AMD | 2000 | 1.000.000.000 | 64 | Today | 2020 | 5.000.000.000 | 128 |
true
true
true
W3Schools offers free online tutorials, references and exercises in all the major languages of the web. Covering popular subjects like HTML, CSS, JavaScript, Python, SQL, Java, and many, many more.
2024-10-12 00:00:00
2024-09-10 00:00:00
https://www.w3schools.co…s_logo_436_2.png
null
null
null
null
null
7,686,186
http://jakearchibald.com/2014/visible-undoes-hidden/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
3,408,340
http://herbsutter.com/welcome-to-the-jungle/
Welcome to the Jungle
null
## Or, A Heterogeneous Supercomputer in Every Pocket

*In the twilight of Moore’s Law, the transitions to multicore processors, GPU computing, and HaaS cloud computing are not separate trends, but aspects of a single trend – mainstream computers from desktops to ‘smartphones’ are being permanently transformed into heterogeneous supercomputer clusters. Henceforth, a single compute-intensive application will need to harness different kinds of cores, in immense numbers, to get its job done.*

*The free lunch is over. Now welcome to the hardware jungle.*

From 1975 to 2005, our industry accomplished a phenomenal mission: In 30 years, we put a personal computer on every desk, in every home, and in every pocket. In 2005, however, mainstream computing hit a wall. In **“The Free Lunch Is Over”** (December 2004), I described the reasons for the then-upcoming industry transition from single-core to multi-core CPUs in mainstream machines, why it would require changes throughout the software stack from operating systems to languages to tools, and why it would permanently affect the way we as software developers have to write our code if we want our applications to continue exploiting Moore’s transistor dividend.

In 2005, our industry undertook a new mission: to put a *personal parallel supercomputer* on every desk, in every home, and in every pocket. 2011 was special: it’s the year that we completed the transition to parallel computing in all mainstream form factors, with the arrival of multicore tablets (e.g., iPad 2, Playbook, Kindle Fire, Nook Tablet) and smartphones (e.g., Galaxy S II, Droid X2, iPhone 4S). 2012 will see us continue to build out multicore with mainstream quad- and eight-core tablets (as Windows 8 brings a modern tablet experience to x86 as well as ARM), and the last single-core gaming console holdout will go multicore (as Nintendo’s Wii U replaces Wii). This time it took us just six years to deliver mainstream parallel computing in all popular form factors.

And we know the transition to multicore is permanent, because multicore delivers compute performance that single-core cannot and there will always be mainstream applications that run better on a multi-core machine. There’s no going back. For the first time in the history of computing, mainstream hardware is no longer a single-processor von Neumann machine, and never will be again. That was the first act.

### Overview: Trifecta

It turns out that multicore is just the first of three related permanent transitions that layer on and amplify each other.

**1. Multicore (2005-).** As above.

**2. Heterogeneous cores (2009-).** A single computer already typically includes more than one kind of processor core, as mainstream notebooks, consoles, and tablets all increasingly have both CPUs and compute-capable GPUs. The open question in the industry today is not whether a single application will be spread across different kinds of cores, but only “how different” the cores should be – whether they should be basically the same with similar instruction sets but in a mix of a few big cores that are best at sequential code plus many smaller cores best at running parallel code (the Intel MIC model slated to arrive in 2012-2013, which is easier to program), or cores with different capabilities that may only support subsets of general-purpose languages like C and C++ (the current Cell and GPGPU model, which requires more complexity including language extensions and subsets).
Heterogeneity amplifies the first trend (multicore), because if some of the cores are smaller then we can fit more of them on the same chip. Indeed, 100x and 1,000x parallelism is already available today on many mainstream home machines – for programs that can harness the GPU. We know the transition to heterogeneous cores is permanent, because different kinds of computations naturally run faster and/or use less power on different kinds of cores – including that different parts of the same application will run faster and/or cooler on a machine with several different kinds of cores. **3. Elastic compute cloud cores (2010-).** For our purposes, “cloud” means specifically “hardware (or infrastructure) as a service” (HaaS) – delivering access to more computational hardware as an extension of the mainstream machine. This started to hit the mainstream with commercial compute cloud offerings from Amazon Web Services (AWS), Microsoft Azure, Google App Engine (GAE), and others. Cloud HaaS again amplifies both of the first two trends, because it’s fundamentally about deploying large numbers of nodes where each node is a mainstream machine containing multiple and heterogeneous cores. In the cloud, the number of cores available to a single application is scaling fast (e.g., in summer 2011, Cycle Computing delivered a 30,000-core cloud for under $1,300/hour, using AWS) and the same heterogeneous cores are available in compute nodes (e.g., AWS already offers “Cluster GPU” nodes with dual nVIDIA Tesla M2050 GPU cards, enabling massively parallel and massively distributed CUDA applications). In short, parallelism is not just in full bloom, but increasingly in full variety. This article will develop four key points: **Moore’s End.**We can observe clear evidence that Moore’s Law is ending, because we can point to a pattern that precedes the end of exploiting any kind of resource. But there’s no reason to panic, because Moore’s Law limits only one kind of scaling, and we have already started another kind.**Mapping one trend, not three.**Multicore, heterogeneous cores, and HaaS cloud computing are not three separate trends, but aspects of a single trend: putting a*personal heterogeneous supercomputer cluster*on every desk, in every home, and in every pocket.**The effect on software development.**As software developers, we will be expected to enable a single application to exploit a “jungle” of enormous numbers of cores that are increasingly different in kind (specialized for different tasks) and different in location (from local to very remote; on-die, in-box, on-premises, in-cloud). The jungle of heterogeneity will continue to spur deep and fast evolution of mainstream software development, but we can predict what some of the changes will be.**Three distinct near-term stages of Moore’s End.**And why “smartphones” aren’t, really. Let’s begin with the end… of Moore’s Law. ### Mining Moore’s Law We’ve been hearing breathless “Moore’s Law is ending” announcements for years. That Moore’s Law will end was never news; every exponential progression must. Although it didn’t end when some prognosticators expected, its end is possible to forecast – we just have to know what to look for, and that is *diminishing returns*. A key observation is that exploiting Moore’s Law is like exploiting a gold mine or any other kind of resource. 
Exploiting a gold ore deposit never just stops abruptly; rather, running a mine goes through phases of increasing costs and diminishing returns until finally the gold that’s left in that patch of ground is no longer commercially exploitable and operating the mine is no longer profitable. Mining Moore’s Law has followed the same pattern. Let’s consider its three major phases, where we are now in transition from Phase II to Phase III. And throughout this discussion, never forget that the only reason Moore’s Law is interesting at all is because we can transform its raw resource (more transistors) into a useful form (either greater computational throughput or lower cost). *Phase I, Moore’s Motherlode = Unicore “Free Lunch” (1975-2005)* When you first find an ore deposit and open a mine, you focus your efforts on the motherlode, where everybody gets to enjoy a high yield and a low cost per pound of gold extracted. For 30 years, mainstream processors mined Moore’s motherlode by using their growing transistor budgets to make a single core more and more complex so that it could execute a single thread faster. This was wonderful because it meant the performance was *easily exploitable* – compute-bound software would get faster with relatively little effort. Mining this motherlode in mainstream microprocessors went through two main subphases as the pendulum swung from simpler to increasingly complex cores: - In the 1970s and 1980s, each chip generation could use most of the extra transistors to add One Big Feature (e.g., on-die floating point unit, pipelining, out of order execution) that would make single-threaded code run faster. - In the 1990s and 2000s, each chip generation started using the extra transistors to add or improve two or three smaller features that would make single-threaded code run faster, and then five or six smaller features, and so on. The figure at right illustrates how the pendulum swung toward increasingly complex single cores, with three sample chips: the 80286, 80486, and Pentium Extreme Edition 840. Note that the chips’ boxes are to scale by number of transistors. By 2005, the pendulum had swung about as far as it could go toward the complex single-core model. Although the motherlode has been mostly exhausted, we’re still scraping some ore off its walls in the form of some continued improvement in single-threaded code performance, but no longer at the historically delightful exponential rate. *Phase II, Secondary Veins = Homogeneous Multicore (2005-)* As a motherlode gets used up, miners concentrate on secondary veins that are still profitable but have a more moderate yield and higher cost per pound of extracted gold. So when Moore’s unicore motherlode started getting mined out, we turned to mining Moore’s secondary veins – using the additional transistors to make more cores per chip. Multicore let us continue to deliver exponentially increasing compute throughput in mainstream computers, but in a form that was *less easily exploitable* because it placed a greater burden on software developers who had to write parallel programs that could use the hardware. Moving into Phase II took a lot of work in the software world. We’ve had to learn to write “new free lunch” applications – ones that have lots of latent parallelism and so can once again ride the wave to run the same executable faster on next year’s hardware, hardware that still delivers exponential performance gains but primarily in the form of additional cores. 
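To make “latent parallelism” concrete, here is a minimal sketch in plain C++11 (not tied to any particular library, and with illustrative names and a naive chunking scheme) of the kind of loop such an application is built from: the work is divided across however many hardware threads the machine reports, so the same executable naturally spreads across two cores today and sixteen tomorrow.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Illustrative sketch: sum a large array by giving one chunk to each
// hardware thread. The parallelism is "latent" in that the same code
// scales with whatever core count the machine reports at run time.
double parallel_sum(const std::vector<double>& v) {
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (v.size() + n - 1) / n;
    std::vector<double> partial(n, 0.0);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < n; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end = std::min(v.size(), begin + chunk);
            for (std::size_t i = begin; i < end; ++i)
                partial[t] += v[i];   // each thread writes only its own slot
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}

int main() {
    std::vector<double> data(1000000, 1.0);
    std::cout << parallel_sum(data) << '\n';   // prints 1e+06
}
```

In practice, the parallel runtimes discussed next wrap this pattern, plus load balancing and work distribution, behind a single parallel_for-style call.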
And we’re mostly there – we have parallel runtimes and libraries like Intel Threading Building Blocks (TBB) and Microsoft Parallel Patterns Library (PPL), parallel debuggers and parallel profilers, and updated operating systems to run them all. But this time the phase didn’t last 30 years. We barely have time to catch our breath, because Phase III is already beginning. *Phase III, Tertiary Veins = Heterogeneous Cores (2011-)* As our miners are forced to move into smaller and smaller veins, yields diminish and costs rise. Our intrepid miners are trying harder and harder, but for less reward, by turning to Moore’s tertiary veins: Using Moore’s extra transistors to make, not just more cores, but also different kinds of cores – and in very large numbers, because the different cores are often smaller and swing the pendulum back toward the left. There are two main categories of heterogeneity. **Big/fast vs. small/slow cores.** The smallest amount of heterogeneity is when all the cores are general-purpose cores with the same instruction set, but some cores are beefier than others because they contain more hardware to accelerate execution (notably by hiding memory latency using various forms of internal concurrency). In this model, some cores are big complex ones that are optimized to run the sequential parts of a program really fast, while others are smaller cores that are optimized to get better total throughput for the scalably parallel parts of the program. However, even though they use the same instruction set, the compiler will often want to generate different code; this difference can become visible to the programmer if the programming language must expose ways to control code generation. This is Intel’s approach with Xeon (big/fast) and MIC (small/slow) which both run approximately the x86 instruction set. **General vs. specialized cores.** Beyond that, we see systems with multiple cores having different capabilities, including that some cores may not be able to support all of a mainstream language like C or C++: In 2006-2007, with the arrival of the PlayStation 3, the IBM Cell processor led the way by incorporating different kinds of cores on the same chip, with a single general-purpose core assisted by eight or more special-purpose SPU cores. Since 2009, we have begun to see mainstream use of GPUs to perform computation instead of just graphics. Specialized cores like SPUs and GPUs are attractive when they can run certain kinds of code more efficiently, both faster and more cheaply (e.g., using less power), which is a great bargain if your workload fits it. GPGPU is especially interesting because we already have an *underutilized installed base*: A significant percentage of existing mainstream machines already have compute-capable GPUs just waiting to be exploited. With the June 2011 introduction of AMD Fusion and the November 2011 launch of NVIDIA Tegra 3, systems with CPU and GPU cores on the same chip is becoming a new norm. That installed base is a big carrot, and creates an enormous incentive for compute-intensive mainstream applications to leverage that patiently waiting hardware. To date, a few early adopters have been using technologies like CUDA, OpenCL, and more recently C++ AMP to harness GPUs for computation. Mainstream application developers who care about performance need to learn to do the same. 
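As a taste of what that looks like, here is a rough sketch of the offload pattern using C++ AMP, one of the technologies named above (CUDA and OpenCL express the same idea with different syntax). It assumes the Visual C++ toolchain that ships &lt;amp.h&gt;; treat it as an illustration of the pattern rather than the one canonical way to drive a GPU.

```cpp
#include <amp.h>     // C++ AMP (Visual C++); CUDA and OpenCL express the same pattern differently
#include <vector>

// Illustrative sketch: scale a vector on whatever accelerator (typically
// the GPU) the runtime selects, then copy the results back to the host.
void scale(std::vector<float>& data, float factor) {
    concurrency::array_view<float, 1> av(static_cast<int>(data.size()), data);
    concurrency::parallel_for_each(
        av.extent,
        [=](concurrency::index<1> i) restrict(amp) {   // restrict(amp): only the AMP-legal language subset
            av[i] *= factor;
        });
    av.synchronize();   // bring the results back into the host vector
}
```

Note the restrict(amp) qualifier: it marks the lambda as limited to the subset of C++ that the accelerator can execute, which is exactly the “language subsets” issue that recurs later in this article.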
But that’s pretty much it – we currently know of no other major ways to exploit Moore’s Law for compute performance, and once these veins are exhausted it will be largely mined out. We’re still actively mining for now, but the writing on the wall is clear: “*mene mene* diminishing returns” demonstrate that we’ve entered the endgame.

### On the Charts: Not Three Trends, but One Trend

Next, let’s put all of this in perspective by showing that multicore, hetero-core, and cloud-core are not three trends, but aspects of a single trend. To show that, we have to show that they can be plotted on the same map. Here is an appropriate map that lets us chart out where processor core architectures are going, where memory architectures are going, and visualize just where we’ve been digging around in the mine so far. First we’ll describe each axis, then map out past and current hardware to spot trends, and finally draw some conclusions about where hardware is likely to concentrate.

*Processor Core Types*

The vertical axis shows processor core architectures. From bottom to top, they form a continuum of increasing performance and scalability, but also of increasing restrictions on programs and programmers in the form of additional performance issues (yellow) or correctness issues (red) added at each step.

**Complex cores** are the “big” traditional ones, with the pendulum swung far to the right in the “habitable zone.” These are best at running sequential code, including code limited by Amdahl’s Law.

**Simple cores** are the “small” traditional ones, toward the left of the “habitable zone.” These are best at running parallelizable code that still requires the full expressivity of a mainstream programming language.

**Specialized cores** like those in GPUs, DSPs, and Cell’s SPUs are more limited, and often do not yet fully support all features of mainstream languages (e.g., exception handling). These are best for running highly parallelizable code that can be expressed in a subset of a language like C or C++; for example, Xbox Kinect skeletal tracking requires using the CPU and the GPU cores on the console, and would be impossible otherwise.

The further you move upward on the chart (to the right in the blown-up figure), the better the performance throughput and/or the less power you need, but the more the application code is constrained as it has to be more parallel and/or use only subsets of a mainstream language. Future mainstream hardware will likely contain all three basic kinds of cores, because many applications have all these kinds of code in the same program, and so naturally will run best on a heterogeneous computer that has all these kinds of cores. For example, most PS3 games, all Kinect games, and all CUDA/OpenCL/C++ AMP applications available today could not run well or at all on a homogeneous machine, because they rely on running parts of the same application on the CPU(s) and other parts on specialized cores. Those applications are just the beginning.

*Memory Architectures*

The horizontal axis shows six common memory architectures. From left to right, they form a continuum of increasing performance and scalability, but (except for one important discontinuity) also increasing work for programs and programmers to deal with performance issues (yellow) or correctness issues (red). In the blown-up figure, triangles represent cache and lower boxes represent RAM.
A processor core (ALU) sits at the top of each cache “peak.” **Unified memory is tied to the unicore motherlode **and the memory hierarchy is wonderfully simple – a single mountain with a core sitting on top. This describes essentially all mainstream computers from the dawn of computing until the mid-2000s. This delivers a simple programming model: Every pointer (or object reference) can address every byte, and every byte is equally “far away” from the core. Even here, programmers need to be conscious of at least two basic cache effects: *locality*, or how well “hot” data fits into cache; and *access order*, because modern memory architectures love sequential access patterns. (For more on this, see my Machine Architecture talk.) **NUMA cache** retains a single chunk of RAM, but adds multiple caches. Now instead of a single mountain, we have a mountain range with multiple peaks, each with a core on top. This describes today’s mainstream multi-core devices. Here we still enjoy a single address space and pretty good performance as long as different cores access different memory, but programmers now have to deal with two main additional performance effects: *locality *matters in new ways because some peaks are closer to each other than others (e.g., two cores that share an L2 cache vs. two cores that share only L3 or RAM), and *layout* matters because we have to keep data physically close together if it’s used together (e.g., on the same cache line) and apart if it’s not (e.g., to avoid the ping-pong game of false sharing). **NUMA RAM** further fragments memory into multiple physical chunks of RAM, but still exposes a single logical address space. Now the performance valleys between the cores get deeper, because accessing RAM in a chunk not local to this core incurs a trip across the bus. Examples include bladed servers, symmetric multi-processor (SMP) desktop computers with multiple sockets, and newer GPU architectures that provide a unified address space view of the CPU’s and GPU’s memory but leave some memory physically closer to the CPU and other memory closer to the GPU. Now we add another item to the menu of what a performance-conscious programmer needs to think about: *copying.* Just because we can form a pointer to anything doesn’t mean we always should, if it means reaching across an expensive chasm on every access. **Incoherent and weak memory** makes memory be by default unsynchronized, in the hope that allowing each core to have its own divergent view of the state of memory can make them run faster, at least until memory must inevitably be synchronized again. As of this writing, the only remaining mainstream CPUs with weak memory models are current PowerPC and ARM processors (popular despite their memory models rather than because of them; more on this below). This model still has the simplicity of a single address space, but now the programmer further has to take on the burden of *synchronizing *memory himself. Clarification: By “weak (hardware) memory model” CPUs I mean specifically ones that do not natively support efficient sequentially consistent atomics, because on the software side programming languages have converged on the strong “sequential consistency for data-race-free programs” (SC-DRF, roughly aka DRF0 or RCsc) as the default (C11, C++11) or only (Java 5+) supported software memory model for software. 
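For concreteness, here is a small sketch of what that software default buys a C++11 programmer: as long as the program is free of data races (shared mutable data is accessed through std::atomic or under a lock), plain atomic operations default to sequentially consistent ordering, and the reasoning in the comments holds regardless of how strong or weak the underlying hardware’s memory model is.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                   // ordinary data; only touched before/after the flag is set
std::atomic<bool> ready{false};    // the one variable both threads access concurrently

void producer() {
    payload = 42;                  // 1: write the data
    ready.store(true);             // 2: publish it (defaults to memory_order_seq_cst)
}

void consumer() {
    while (!ready.load()) { }      // spin; the load also defaults to memory_order_seq_cst
    assert(payload == 42);         // guaranteed: the program is data-race-free, so SC reasoning applies
}

int main() {
    std::thread a(producer), b(consumer);
    a.join();
    b.join();
}
```

The cost of those defaults differs by hardware: x86 can implement them relatively cheaply, while ARMv7 and POWER need extra fences, which is the efficiency gap the surrounding discussion is about.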
Hardware that supports weaker memory models than that are permanently disadvantaged and will either become stronger (as ARMv8 is now doing by adding SC acquire/release instructions) or atrophy. The two main hardware architectures with what I called “weak” memory models were ARMv7 and POWER. ARMv8 is upgrading to SC acquire/release, as predicted, and it remains to be seen whether POWER will upgrade or atrophy. I’ve seen some call x86 “weak”, but x86 has always been the poster child for a *strong* hardware memory model in all of our software memory model discussions for Java, C, and C++ during the 2000s. Therefore it’s clear that “weak” and “strong” are not useful terms because they mean different things for software and hardware memory models, and I’ve updated the text to clarify this. **Disjoint (tightly coupled)** memory bites the bullet and lets different cores see different memory, typically over a shared bus, while still running as a tightly-coupled unit that has low latency and whose reliability is still evaluated as a single unit. Now the model turns into a tightly-clustered group of mountainous islands, each with core-tipped mountains of cache overlooking square miles of memory, and connected by bridges with a fleet of trucks expediting goods from point to point – bulk transfer operations, message queues, and similar. In the mainstream, we see this model used by 2009-2011 vintage GPUs whose on-board memory is not shared with the CPU or with each other. True, programmers no longer enjoy having a single address space and the ability to share pointers, but in exchange we have removed the entire set of programmer burdens accumulated so far and replaced them with a single new responsibility: *copying* data between islands of memory. **Disjoint (loosely coupled) is the cloud** where cores spread out-of-box into different rooms and buildings and datacenters. This moves the islands farther apart, and replaces the bus “bridges” with network “speedboats” and “tankers.” In the mainstream, we see this model in HaaS cloud computing offerings; this is the commoditization of the compute cluster. Programmers now have to arrange to deal with two additional concerns, which often can be abstracted away by libraries and runtimes: *reliability* as nodes can come and go, and *latency* as the islands are farther apart. *Charting the Hardware* All three trends are just aspects of a single trend: filling out the chart and enabling heterogeneous parallel computing. The chart wants to be filled out because there are workloads that are naturally suited to each of these boxes, though some boxes are more popular than others. To help visualize the filling-out process more concretely, why not check to see how mainstream hardware has progressed on this chart? The easiest place to start is the long-standing mainstream CPU and more recent GPU: - From the 1970s to the 2000s, CPUs started with simple single cores and then moved downward as the pendulum swung to increasingly complex cores. They hugged the left side of the chart by staying single-core as long as possible, but in 2005 they ran out of room and turned toward multi-core NUMA cache architectures. - Meanwhile, in the late 2000s, mainstream GPUs started to be capable of handling computational workloads. But because they started life in an add-on discrete GPU card format where graphics-specific cores and memory were physically located away from the CPU and system RAM, they started further upward and to the right (Specialized / Disjoint (local)). 
GPUs have been moving leftward to increasingly unified views of memory, and slightly downward to try to support full mainstream languages (e.g., add exception handling support). - Today’s typical mainstream computer includes both a CPU and a discrete or integrated GPU. The dotted line in the graphic denotes cores that are available to a single application because they are in the same device, but not on the same chip. Now we are seeing a trend to use CPU and specialized (currently GPU) cores with very tightly coupled memory, and even on the same die: - In 2005, the Xbox 360 sported a multi-core CPU and GPU that could not only directly access the same RAM, but had the very unusual feature that they could share even L2 cache. - In 2006 and 2007, the Cell-based PS3 console sported a single processor having both a single general-purpose core and eight special-purpose SPU cores. The solid line in the graphic denotes cores that are on the same chip, not just in the same device. - In June 2011 and November 2011, respectively, AMD and NVIDIA launched the Fusion and Tegra 3 architectures, multi-core CPU chips that sported a compute-class GPU (hence extending vertically) on the same die (hence well to the left). - Intel has also shipped the Sandy Bridge line of processors, which includes an integrated GPU that is not yet as compute-capable but continues to grow. Intel’s main focus has been the MIC effort of more than 50 simple general-purpose x86-like cores on the same die, expected to be commercially available in the near future. Finally, we complete the picture with cloud HaaS: - In 2008 and 2009, Amazon, Microsoft, Google, and other vendors began rolling out their cloud compute offerings. AWS, Azure, and GAE support an elastic cloud of nodes each of which is a traditional computer (“big-core” and loosely coupled, therefore on the bottom right corner of the chart) where each node in the cloud has a single core or multiple CPU cores (the two lower-left boxes). As before, the dotted line denotes that all of the cores are available to a single application, and the network is just another bus to more compute cores. - Since November 2010, AWS also supports compute instances that contain both CPU cores and GPU cores, indicated by the H-shaped virtual machine where the application runs on a cloud of loosely-coupled nodes with disjoint memory (right column) each of which contains both CPU and GPU cores (currently not on the same die, so the vertical lines are still dotted). *The Jungle* Putting it all together, we get a noisy profusion of life and color: This may look like a confused mess, so let’s notice two things that help make sense of it. First, every box has a workload that it’s best at, but some boxes (particularly some columns) are more popular than others. Two columns are particularly less interesting: - Fully unified memory models are only applicable to single-core, which is being essentially abandoned in the mainstream. - Incoherent/weak hardware memory models (those that do not efficiently support sequential consistency for data race free programs, roughly aka DRF0 or RCsc) are a performance experiment that is in the process of failing in the marketplace. On the hardware side, the theoretical performance benefits that come from letting caches work less synchronously have already been largely duplicated in other ways by mainstream processors having stronger memory models. 
On the software side, all of the mainstream general-purpose languages and environments (C, C++, Java) have largely rejected weak memory models, and require a coherent model that is technically called “sequential consistency for data race free programs” as either their only supported memory model (Java) or their default memory model (ISO C++11, ISO C11). Nobody is moving toward the middle vertical incoherent/weak memory strip of the chart; at best they’re moving through it to get to the other side, but nobody wants to stay there. (Note: x86 has always been considered a “strong” hardware memory model and supports sequentially consistent atomics efficiently, as does the recently-announced ARMv8 architecture with its new *ldra*and*strl*instructions; POWER and ARMv7 notoriously do not support SC atomics efficiently.) But all other boxes, including all rows (processors), continue to be strongly represented, and we realize why that’s true – because different parts of even the same application naturally want to run on different kinds of cores. Second, let’s clarify the picture by highlighting and labeling the two regions that hardware is migrating toward: Here again we see the first and fourth columns being deemphasized, as hardware trends have begun gradually coalescing around two major areas. Both areas extend vertically across all kinds of cores – and the most important thing to note is that these represent *two mines*, where the area to the left is the Moore’s Law mine. **Mine #1: “Scale in” = Moore’s Law.**Local machines will continue to use large numbers of heterogeneous local cores, either in-box (e.g., CPU with discrete GPU) or on-die (e.g., Sandy Bridge, Fusion, Tegra 3). We’ll see core counts increase until Moore’s Law ends, and then stabilize core counts for individual local devices.**Mine #2: “Scale out” = distributed cloud.**Much more importantly, we will continue to see a cornucopia of cores delivered via compute clouds, either on-premises (e.g., cluster, private cloud) or in public clouds. This is a brand new mine directly enabled by the lower coupling of disjoint memory, especially loosely coupled distributed nodes. The good news is that we can heave a sigh of relief at having found another mine to open. The even better news is that the new mine has a far faster growth rate than even Moore’s Law. Notice the slopes of the lines when we graph the amount of parallelism available to a single application running on various architectures: The bottom three lines are mining Moore’s Law for “scale-in” growth, and their common slope reflects Moore’s wonderful exponent, just shifted upward or downward to account for how many cores of a given size can be packed onto the same die. The top two lines are mining the cloud (with CPUs and GPUs, respectively) for “scale-out” growth – and it’s even better. If hardware designers merely use Moore’s Law to deliver more big fat cores, on-device hardware parallelism will stay in double digits for the next decade, which is very roughly when Moore’s Law is due to sputter, give or take about a half decade. If hardware follows Niagara’s and MIC’s lead to go back to simpler cores, we’ll see a one-time jump and then stay in triple digits. If we all learn to leverage GPUs, we already have 1,500-way parallelism in modern graphics cards (I’ll say “cores” for convenience, though that word means something a little different on GPUs) and likely reach five digits in the decade timeframe. 
But all of that is eclipsed by the scalability of the cloud, whose growth line is already steeper than Moore’s Law because we’re better at quickly deploying and using cost-effective networked machines than we’ve been at quickly jam-packing and harnessing cost-effective transistors. It’s hard to get data on the current largest cloud deployments because many projects are private or secret, but the largest documented public cloud apps (which don’t use GPUs) are already harnessing over 30,000 cores *for a single computation*. I wouldn’t be surprised if undocumented projects are exceeding 100,000 cores today. And that’s general-purpose cores; if you add GPU-capable nodes to the mix, add two more zeroes. Such massive parallelism, already available for rates of under $1,300/hour for a 30,000-core cloud, is game-changing. If you doubt that, here is a boring example that doesn’t involve advanced augmented reality or spook-level technomancery: How long will it take someone who’s stolen a strong password file (which we’ll assume is correctly hashed and salted and contains no dictionary passwords) to retrieve 90% of the passwords by brute force using a publicly available GPU-enabled compute cloud? Hint: An AWS dual-Tesla node can test on the order of 20 billion passwords per second, and clouds of 30,000 nodes are publicly documented (of course, Amazon won’t say if it has that many GPU-enabled nodes for hire; but if it doesn’t now, it will soon). To borrow a tired misquote, 640 trillion affordable attempts per second should be enough for anyone. But if that’s not enough for you, not to worry; just wait a small number of years and it’ll be 640 quadrillion affordable attempts per second. ### What It Means For Us: A Programmer’s View How will all of this change the way we write our software, if we care about harnessing mainstream hardware performance? The basic conclusions echo and expand upon ones that I proposed in “The Free Lunch is Over”: **Applications will need to be at least massively parallel, and ideally able to use non-local cores and heterogeneous cores,**if they want to fully exploit the long-term continued exponential growth in compute throughput being delivered both in-box and in-cloud. After all, soon the vast majority of compute cores available to a mainstream application will be non-local.**Efficiency and performance optimization will get more, not less, important.**We’re being asked to do more (new experiences like sensor-based UIs and augmented reality) with less hardware (constrained mobile form factors and the eventual plateauing of scale-in when Moore’s Law ends). In December 2004 I wrote: “Those languages that already lend themselves to heavy optimization will find new life; those that don’t will need to find ways to compete and become more efficient and optimizable. Expect long-term increased demand for performance-oriented languages and systems.” This is still true; witness the resurgence of interest in C++ in 2011 and onward, primarily because of its expressive flexibility and performance efficiency. 
A program that is twice as efficient has two advantages: it will be able to run twice as well on a local disconnected device especially when Moore’s Law can no longer deliver local performance improvements in any form; and it will always be able to run at half the power and cost on an elastic compute cloud even as those continue to expand for the indefinite future.**Programming languages and systems will increasingly be forced to deal with heterogeneous distributed parallelism.**As previously predicted, just basic homogeneous multicore has proved to be a far bigger event for languages than even object-oriented programming was, because some languages (notably C) could get away with ignoring objects while still remaining commercially relevant for mainstream software development. No mainstream language, including the just-ratified C11 standard, could ignore basic concurrency and parallelism and stay relevant in even a homogeneous-multicore world. Now expect all mainstream languages and environments, including their standard libraries, to develop explicit support for at least distributed parallelism and probably also heterogeneous parallelism; they cannot hope to avoid it without becoming marginalized for mainstream app development. Expanding on that last bullet, what are some basic elements we will need to add to mainstream programming models (think: C, C++, Java, and .NET)? Here are a few basics I think will be unavoidable, that must be supported explicitly in one form or another. **Deal with the processor axis’ lower section by supporting compute cores with different performance (big/fast, slow/small).**At minimum, mainstream operating systems and runtimes will need to be aware that some cores are faster than others, and know which parts of an application want to run on which of those cores.**Deal with the processor axis’ upper section by supporting language subsets, to allow for cores with different capabilities including that not all fully support mainstream language features.**In the next decade, a mainstream operating system (on its own, or augmented with an extra runtime like the Java/.NET VM or the ConcRT runtime underpinning PPL) will be capable of managing cores with different instruction sets and running a single application across many of those cores. Programming languages and tools will be extended to let the developer express code that is restricted to use just a subset of a mainstream programming language (e.g., the restrict() qualifiers in C++ AMP; I am optimistic that for most mainstream languages such a single language extension will be sufficient while leveraging existing language rules for overloading and dispatch, thus minimizing the impact on developers, but experience will have to bear this out).**Deal with the memory axis for computation, by providing distributed algorithms that can scale not just locally but also across a compute cloud.**Libraries and runtimes like OpenCL and TBB and PPL will be extended or duplicated to enable writing loops and other algorithms that run on large numbers of local and non-local parallel cores. Today we can write a parallel_for_each call that can run with 1,000x parallelism on a set of local discrete GPUs and ship the right data shards to the right compute cards and the results back; tomorrow we need to be able to write that same call that can run with 1,000,000,000x parallelism on a set of cloud-based GPUs and ship the right data shards to the right nodes and the results back. 
This is a “baby step” example in that it just uses local data (e.g., that can fit in a single machine’s memory), but distributed computation; the data subsets are simply copied hub-and-spoke.**Deal with the memory axis for data, by providing distributed data containers, which can be spread across many nodes.**The next step is for the data itself to be larger than any node’s memory, and (preferably automatically) move the right data subsets to the right nodes of a distributed computation. For example, we need containers like a distributed_array or distributed_table that can be backed by multiple and/or redundant cloud storage, and then make those the target of the same distributed parallel_for_each call. After all, why shouldn’t we write a single parallel_for_each call that efficiently updates a 100 petabyte table? Hadoop enables this today for specific workloads and with extra work; this will become a standard capability available out-of-the-box in mainstream language compilers and their standard libraries.**Enable a unified programming model that can handle the entire chart with the same source code.**Since we can map the hardware on a single chart with two degrees of freedom, the landscape is unified enough that it should be able to be served by a single programming model in the future. Any solution will have at least two basic characteristics: First, it will cover the Processor axis by letting the programmer express language subsets in a way integrated holistically into the language. Second, it will cover or hide the Memory axis by abstracting the location of data, and copying data subsets on demand by default, while also providing a way to take control of the copying for advanced users who want to optimize the performance of a specific computation. Perhaps our most difficult mental adjustment, however, will be to learn to think of the cloud as part of the mainstream machine – to view all these local and non-local cores as being equally part of the target machine that executes our application, where the network is just another bus that connects us to more cores. That is, in a few years we will write code for mainstream machines assuming that they have million-way parallelism, of which only thousand-way parallelism is guaranteed to always be available (when out of WiFi range). Five years from now we want to be delivering apps that run well on an isolated device, and then just run faster or better when they are in WiFi range and have dynamic access to many more cores. The makers of our operating systems, runtimes, libraries, programming languages, and tools need to get us to a place where we can create compute-bound applications that run well in isolation on disconnected devices with 1,000-way local parallelism… and when the device is in WiFi range just run faster, handle much larger data sets, and/or light up with additional capabilities. We have a very small taste of that now with cloud-based apps like Shazam (which function only when online), but yet a long way to go to realize this full vision. ### Exit Moore, Pursued by a Dark Silicon Bear Finally, let’s return one more time to the end of Moore’s Law to see what awaits us in our near future, and why we will likely pass through three distinct stages as we navigate Moore’s End. Eventually, our tired miners will reach the point where it’s no longer economically feasible to operate the mine. There’s still gold left, but it’s no longer commercially exploitable. 
Recall that Moore’s Law has been interesting only because we have been able to transform its raw resource of “more transistors” into one of two useful forms: **Exploit #1: Greater throughput.**Moore’s Law lets us deliver more transistors, and therefore more complex chips, at the same cost. That’s what will let us continue to deliver more computational performance per chip – as long as we can find ways to harness the extra transistors for computation.**Exploit #2: Lower cost/power/size.**Alternatively, Moore’s Law lets us deliver the same number of transistors at a lower cost, including in a smaller area and at lower power. That’s what will let us continue to deliver powerful experiences in increasingly compact and mobile and embedded form factors. The key thing to note is that we can expect these two ways of exploiting Moore’s Law to end, not at the same time, but one after the other and in that order. Why? Because Exploit #2 only relies on the basic Moore’s Law effect, whereas the first relies on Moore’s Law *and* the ability to use all the transistors at the same time. Which brings us to one last problem down in our mine… *The Power Problem: Dark Silicon* Sometimes you can be hard at work in a mine, still productive, when a small disaster happens: a cave-in, or striking water. Besides hurting miners, such disasters can *render entire sections of the mine unreachable*. We are now starting to hit exactly those kinds of problems. One particular problem we have just begun to encounter is known as “dark silicon.” Although Moore’s Law is still delivering more transistors, *we are losing the ability to power them all at the same time*. For more details, see Jem Davies’ talk “Compute Power With Energy-Efficiency” and the ISCA’11 paper “Dark Silicon and the End of Multicore Scaling” (alternate link). This “dark silicon” effect is like a Shakespearian bear chasing our doomed character offstage. Even though we can continue to pack more cores on a chip, if we cannot use them at the same time we have failed to exploit Moore’s Law to deliver more computational throughput (Exploit #1). When we enter the phase where Moore’s Law continues to give us more transistors per die area, but we are no longer able to power them all, we will find ourselves in a transitional period where Exploit #1 has ended while Exploit #2 continues and outlives it for a time. This means that we will likely see the following major phases in the “scale-in” growth of mainstream machines. (Note that these apply to individual machines only, such as your personal notebook and smartphone or an individual compute node; they do not apply to a compute cloud, which we saw belongs to a different “scale-out” mine.) **Exploit #1 + Exploit #2: Increasing performance (compute throughput) in all form factors (1975 – mid-2010s?).**For a few years yet, we will see continuing increases in mainstream computer performance in all form factors from desktop to smartphone. As today, the bigger form factors will still have more parallelism, just as today’s desktop CPUs and GPUs are routinely more capable than those in tablets and smartphones – as long as Exploit #1 lives, and then…**Exploit #2 only: Flat performance (compute throughput) at the top end, and mid and lower segments catching up (late 2010s – early 2020s?).**Next, if problems like dark silicon are not solved, we will enter a period where mainstream computer performance levels out, starting at the top end with desktops and game consoles and working its way down through tablets and smartphones. 
During this period we will continue to use Moore’s Law to lower cost, power, and/or size – delivering the same complexity and performance already available in bigger form factors also in smaller devices. Assuming Moore’s Law continues long enough beyond the end of Exploit #1, we can estimate how long it will take for Exploit #2 to equalize personal devices by observing the difference in transistor counts between current mainstream desktop machines and smartphones; it’s roughly a factor of 20, which will take Moore’s Law about eight years to cover.**Democratization (early 2020s? – onward).**Finally, this democratization will reach the point where a desktop computer and smartphone have roughly the same computational performance. In that case, why buy a desktop ever again? Just dock your tablet or smartphone. You might think that there are still two important differences between the desktop and the mobile device: power, because the desktop is plugged in, and peripherals, because the desktop has easier access to a bigger screen and a real keyboard/mouse – but once you dock the smaller device, it has the same access to power and peripherals and even those differences go away. **Speaking of Smartphones Pocket Tablets and Democratization** Note that the word “smartphone” is already a major misnomer, because a pocket device that can run apps is not primarily a phone at all. It’s primarily a general-purpose personal computer that happens to have a couple of built-in radios for cell and WiFi service – making the “traditional cell phone” capability just an app that happens to use the cell radio, and the Skype “IP phone” capability on the same device just another similar app that happens to use the WiFi radio instead. The right way to think about even today’s mobile landscape is that there are not really “tablets” and “smartphones”; there are just page-sized tablets and pocket-sized tablets, both already available with or without cellular radios, and that they run different operating systems today is just a point-in-time effect. This is why those people who said an iPad is just a big iPhone without the cellular radio had it exactly backwards – the iPhone (3G or later, which allows apps) is a small iPad that fits in your pocket and happens to have a cellular radio in order to obsolete another pocket-sized device. Both devices are primarily tablets – they minimize hardware chrome and “turn into” the full-screen immersive app, and that’s the closest thing you can get today to a morphing device that turns into a special-purpose device on demand. (Aside: It’ll be great when we figure out how to get past the flat-glass-pane model to let the hardware morph too, initially just raised bumps so we can feel where the keys and controls are, and then eventually more; but hardware morphing is a separate topic and flat glass is plenty fine for now.) Many of us routinely use our “phones” mostly as a small tablet – spending most of our time on the device running apps to read books, browse news, watch movies, play games, update social networks, and surf the net. I already use my phone as a small tablet far more often than I use it as a phone, and if you have an app-capable phone then I’ll bet you already do that too. 
Well before the end of this decade, I expect the most likely dominant mainstream form factor to be “page-sized and pocket-sized tablets, plus docking” – where “docking” means any means of attaching peripherals like keyboards and big screens on demand, which today already encompasses physical docks and Bluetooth and “Play To” connections, and will only continue to get more wireless and more seamless. This future shouldn’t be too hard to imagine, because many of us have already been working that way for a while now: For the past decade I’ve routinely worked from my notebook as my primary and only environment; usually I’m in my home office or work office where I use a real keyboard and big screens by docking the notebook and/or using it via a remote-desktop client, and when I’m mobile I use it as a notebook. In 2012, I expect to replace my notebook with an x86-based modern tablet and use it exactly the same way. We’ve seen it play out many times: - Many of us used to carry around both a PalmPilot and a cell phone, but then the smartphone took over the job of the dedicated PalmPilot and eliminated a device with the same form factor. - Lots of kids (or their parents) carry a hand-held gaming device and a pocket tablet (aka “smartphone”), and we are seeing the decline of the dedicated hand-held gaming device as the pocket tablet is taking over more and more of that job. - Similarly, today many of us carry around a notebook and a dedicated tablet, and convergence will again let us eliminate a device with the same form factor. Computing loves convergence. In general-purpose personal computing (like notebooks and tablets, not special-purpose appliances like microwaves and automobiles that may happen to use microprocessors), convergence always happily dooms special-purpose devices in the long run, as each device either evolves to take over the other’s job or gets taken over. We will continue to have distinct pocket-sized tablets and page-sized tablets for a time because they are different form factors with different mobile uses, but even that may last only until we find a way to unify the form factors (fold them?) so that they too can converge. ### Summary and Conclusions Mainstream hardware is becoming permanently parallel, heterogeneous, and distributed. These changes are permanent, and so will permanently affect the way we have to write performance-intensive code on mainstream architectures. The good news is that Moore’s “local scale-in” transistor mine isn’t empty yet; it appears the transistor bonanza will continue for about another decade, give or take a half decade or so, which should be long enough to exploit the lower-cost side of the Law to get us to parity between desktops and pocket tablets. The bad news is that we can clearly observe the diminishing returns as the transistors are decreasingly exploitable – with each new generation of processors, software developers have to work harder and the chips get more difficult to power. And with each new crank of the diminishing-returns wheel, there’s less time for hardware and software designers to come up with ways to overcome the next hurdle; the motherlode free lunch lasted 30 years, but the homogeneous multicore era lasted only about six years, and we are now already overlapping the next two eras of hetero-core and cloud-core. 
But all is well: When your mine is getting empty, you don’t panic, you just open a new mine at a new motherlode, operate both mines for a while, then continue to profit from the new mine long-term even after the first one finally shuts down and gets converted into a museum. As usual, in this case the end of one dominant wave overlaps with the beginning of the next, and we are now early in the period of overlap where we are standing with a foot in each wave, a crew in each of the two mines – Moore’s and the cloud’s. Perhaps the best news of all is that the cloud wave is already scaling enormously quickly – faster than the Moore’s Law wave that it complements, and that it will eventually outlive and replace.

If you haven’t done so already, now is the time to take a hard look at the design of your applications, determine what existing features – or, better still, what potential and currently-unimaginable demanding new features – are CPU-sensitive now or are likely to become so soon, and identify how those places could benefit from local and distributed parallelism. Now is also the time for you and your team to grok the requirements, pitfalls, styles, and idioms of hetero-parallel (e.g., GPGPU) and cloud programming (e.g., Amazon Web Services, Microsoft Azure, Google App Engine).

To continue enjoying the free lunch of shipping an application that runs well on today’s hardware and will just naturally run faster or better on tomorrow’s hardware, you need to write an app with lots of juicy latent parallelism expressed in a form that can be spread across a machine with a variable number of cores of different kinds – local and distributed cores, and big/small/specialized cores. The filet mignon of throughput gains is still on the menu, but now it costs extra – extra development effort, extra code complexity, and extra testing effort. The good news is that for many classes of applications the extra effort will be worthwhile, because concurrency will let them fully exploit the exponential gains in compute throughput that will continue to grow strong and fast long after Moore’s Law has gone into its sunny retirement, as we continue to mine the cloud for the rest of our careers.

### Acknowledgments

I would like to particularly thank Jeffrey Barr, David Callahan, Olivier Giroux, Yossi Levanoni, Henry Moreton, and James Reinders, who graciously made themselves available to answer questions and provide background information, and who shared their feedback on appropriately mapping their companies’ products on the processor/memory chart.

### Update History

2012-08-02: Updated to clarify that by “weak (hardware) memory model” CPUs I mean specifically ones that do not natively support efficient sequentially consistent (SC) atomics, because on the software side programming languages have converged on the strong “sequential consistency for data-race-free programs” (SC-DRF, roughly aka DRF0 or RCsc) as the default (C11, C++11) or only (Java 5+) supported memory model for software. Hardware that supports weaker memory models than that is permanently disadvantaged and will either become stronger (as ARMv8 is now doing by adding SC acquire/release instructions) or atrophy. The two main hardware architectures with what I called “weak” memory models were ARMv7 and POWER. ARMv8 is upgrading to SC acquire/release, as predicted, and it remains to be seen whether POWER will upgrade or atrophy.
I’ve seen some call x86 “weak”, but x86 has always been the poster child for a *strong* hardware memory model in all of our software memory model discussions for Java, C, and C++ during the 2000s. Therefore it’s clear that “weak” and “strong” are not useful terms because they mean different things for software and hardware memory models, and I’ve updated the text to clarify this.
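To make the distinction concrete, here is a minimal C++11 sketch (my own illustration, not code from the original article) of the SC-DRF style described above: the program is data-race-free because the only concurrently accessed variable is an atomic, and the explicit release/acquire pair is the weakest ordering that still guarantees the consumer sees the fully written payload.

```cpp
// Minimal sketch (illustrative, not from the article): publishing a payload
// through an atomic flag. The release store pairs with the acquire load, so
// the consumer is guaranteed to observe payload == 42 once it sees ready.
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                       // ordinary (non-atomic) data
std::atomic<bool> ready{false};        // the only concurrently accessed variable

void producer() {
    payload = 42;                                      // (1) write the data
    ready.store(true, std::memory_order_release);      // (2) publish it
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // (3) pairs with (2)
    assert(payload == 42);                             // (4) guaranteed visible
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```

Writing `ready = true;` and `while (!ready) {}` instead would use the default sequentially consistent ordering and is equally correct; the point above is that hardware which cannot implement that default, or the release/acquire pair, efficiently is at a permanent disadvantage.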
true
true
true
Or, A Heterogeneous Supercomputer in Every Pocket In the twilight of Moore’s Law, the transitions to multicore processors, GPU computing, and HaaS cloud computing are not separate trends, bu…
2024-10-12 00:00:00
2011-11-08 00:00:00
https://herbsutter.com/w…mage_thumb21.png
article
herbsutter.com
Sutter’s Mill
null
null
37,827,903
https://g.livejournal.com/18438.html
Log in
null
true
true
true
Your life is the best story! Just start your blog today!
2024-10-12 00:00:00
2003-01-01 00:00:00
https://l-stat.livejourn…img/og_image.jpg
website
livejournal.com
livejournal.com
null
null
35,803,067
https://www.csimagazine.com/csi/Via-Licensing-acquires-MPEG-LA.php
Via Licensing acquires MPEG LA
CSI
The two groups have agreed to merge, bringing together their extensive patent pools for audio and video in a new patent powerhouse under one roof. The combination, called Via Licensing Alliance (Via LA), creates the largest patent pool administrator in the consumer electronics industry. The increased scale is expected to provide improved expertise and support for patentholders and licensing customers. The two organisations have a rich heritage. MPEG LA’s MPEG-2 licensing program established the modern patent pool industry and helped produce one of the most widely employed standards in consumer electronics history. Even today, after more than 20 years, there is still some MPEG-2 being used to encode video. More recently, MPEG LA set up a pool related to the Versatile Video Coding (VVC) standard (also known as H.266 and MPEG-I Part 3), which provides increased compression compared to older MPEG standards like HEVC. A number of essential VVC patents are now under the remit of Via LA's VVC Patent Portfolio Licence (it is one of several key VVC patent pools). Via Licensing also dates back more than 20 years, starting with AAC (Via is an independent subsidiary of Dolby). It is hoped that combining the teams and best practices of these two licensing pioneers creates new economies of scale and efficiencies that benefit the global innovation ecosystem. Via LA plans to consolidate dozens of patent pools covering a broad range of technologies into one organisation, further simplifying the licensing process for its pool participants. In addition, affiliates of General Electric, Koninklijke Philips and Mitsubishi Electric Corp will convert their partial ownership in MPEG LA to partial ownership in Via LA. “Via LA combines the best capabilities in the licensing industry to deliver efficient, transparent, and balanced intellectual property solutions to thousands of partners around the world,” said Heath Hoglund, President, Via LA. “This pool of pools provides an efficient mechanism for innovators with broad portfolios to get their technologies to the mass market and enables easy access to a broad range of necessary IP for implementors with diverse product offerings.” “Our two teams share similar strategies, cultures, and business models, so I am confident this will be a smooth transition into an even more valuable, independently managed licensing solution for the IP industry,” added Larry Horn, CEO, MPEG LA. He is now retiring as CEO of MPEG LA, but will continue to serve as an advisor to Via LA.
true
true
true
null
2024-10-12 00:00:00
2023-05-03 00:00:00
https://www.csimagazine.…images/viala.png
article
csimagazine.com
CSI
null
null
5,898,476
http://h3manth.com/new/blog/2013/handling-currency-in-javascript/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
12,403,058
https://munchies.vice.com/en/articles/science-says-pizza-can-make-you-more-productive-at-work
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,246,282
http://yanirseroussi.com/2015/03/22/the-long-road-to-a-lifestyle-business/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
23,145,617
https://www.apple.com/in/homepod/
HomePod
null
## Speakers of the house. With HomePod or HomePod mini, amplify all the listening experiences you love. And enjoy an effortlessly connected smart home — with Siri built in — that’s private and secure. Get 3 months of Apple Music free with your HomePod.**
true
true
true
HomePod mini takes up no space yet delivers room-filling sound. HomePod is a breakthrough high-fidelity speaker. Both help you multitask with Siri.
2024-10-12 00:00:00
2024-05-07 00:00:00
https://www.apple.com/v/…png?202409252245
website
apple.com
Apple (India)
null
null
6,336,232
http://nautil.us/issue/5/fame/homo-narrativus-and-the-trouble-with-fame
Homo Narrativus and the Trouble with Fame
Peter Sheridan Dodds
Our understanding of fame is critical to how we see each other and our society. But it is also badly wrong. Let me tell you why. We humans are storytelling and story-finding machines: *homo narrativus*, if you will. In making sense of the world, we look for the shapes of meaningful narratives in everything. Even in science, we enjoy mathematical equations and algorithms because they are a kind of universal story. Fluids—the oceans and atmosphere, the blood in your body, honey—all flow according to a single, beautiful set of equations called the Navier-Stokes equations. In our everyday, human stories, far away from science, we have a limited (if generous) capacity to entertain randomness—we are certainly not *homo probabilisticus*. Too many coincidences in a movie or book will render it unbelievable and unpalatable. We would think to ourselves, “that would never happen in real life!” This skews our stories. We tend to find or create story threads where there are none. While it can sometimes be useful to err on the side of causality, the fact remains that our tendency toward teleological explanations often oversteps the evidence. We also instinctively build our stories around individuals. To see evidence for this fact, we need only to look to local coverage of a recent disaster. For example, one article in *The New York Times* on Hurricane Sandy discussed the (unproven) fact that more babies are generated when generators fail. It opened with the line: “Late last October, Hurricane Sandy pumped six feet of water into the lobby of a residential building in downtown Jersey City, trapping Meaghan B. Murphy and her husband, Patrick, in their apartment and leaving them without electricity for days.” We instinctively build our stories around individuals. Another Times article discussed the loss of life from the storm, starting with: “In the days after Hurricane Sandy swept through the Rockaways, residents were left to sift through the devastation, taking stock of who and what had survived. But few seemed to notice Keith Lancaster was missing.” Both stories tether the complex, stochastic narrative of the larger population to that of an individual. We can’t blame the *Times* here: This kind of narrative works. We can put ourselves into that person’s mind, walk in their shoes, and travel in their story. But social groups are far more complicated than any individual story. Networked, distributed, conflicting, and changing, they do not simply map onto an individual. For this reason, it’s unnatural and very difficult to put ourselves into the collective minds of groups. Even a group of two is too much—we have to side with one person or switch between points of view. We are embodied stories of one. So, when we discuss groups, we are left without metaphor. What do we do as a result? We force our single-body stories onto them: Groups become one dominant person—a monarch, the President, Michael Jordan—plus a supporting cast. These two traits—our compulsion to tell stories, and our bias towards the individual—conspire to ruin our intuitive understanding of fame. They cause us to believe that fame is earned, that it is the result of the intrinsic properties of the famous person or object. Consider, for example, the *Mona Lisa*, perhaps the most famous painting in the world. Its fame is ascribed to all manner of intrinsic qualities: the subject’s mysterious smile, her changeable expression, the way her eyes follow you, da Vinci’s novel use of sfumato, the individual genius of Leonardo himself. 
This has the makings of an excellent story. It’s simple and causal, and it means that the *Mona Lisa’s* fame was inevitable, and deserved. But it’s the wrong story. If we travel to the Louvre, we find ourselves bemused immediately by the surprising smallness of the *Mona Lisa*—it’s only 30 inches by 21 inches. We observe that museum-goers pause for a few minutes at most in front of the painting. And we wonder if this is really the best we could do. As Donald Sassoon lays out in his book *Becoming Mona Lisa*, Leonardo’s now-great painting took 400 years to become world-renowned, and jumped in fame only after being stolen and later vandalized—certainly not events dictated by some intrinsic quality. These two traits—our compulsion to tell stories, and our bias towards the individual—conspire to ruin our intuitive understanding of fame. My own research has shown that fame has much less to do with intrinsic quality than we believe it does, and much more to do with the characteristics of the people among whom fame spreads. For example, in 2006, Matt Salganik, myself, and Duncan Watts reported the results of an online experiment of ours called Music Lab.1 We gathered roughly 14,000 Internet participants, and gave them a total of 48 songs by unknown artists to listen to, rate, and download. What we didn’t tell them was that they were randomly assigned to nine separate worlds: one world in which participants acted independently of each other, and eight parallel social worlds in which participants saw the current number of downloads of each song within their world—an indication of popularity. The independent world served as a control for the eight social worlds: the popularity of songs in this world reflected the intrinsic “quality” of the songs. There were a few songs that did poorly in this world and in all eight social worlds, and others that performed reliably well. Things were very different in the social worlds. Here, rankings were affected primarily by chance and by the choices made by early participants: One song, for example, ranked first in one world and 40th in another. What’s more, we were able to influence the song rankings by tweaking the strength of the social “signal” in each world, through varying whether the songs were arranged randomly, in a jukebox layout, or in order of the number of previous downloads. The experiment showed clearly that people like to imitate each other. Even weak social signaling skewed the popularity distributions significantly. As we upped the signaling strength, the inequality among songs increased. The famous became more famous. But the popularity ordering began to differ more strongly between worlds: The choices of the early participants mattered more, and the system became more uncertain. The data implies that there is no such thing as fate, only the story of fate. This idea is encoded in the etymology of the word: “fate” derives from the Latin *fatus*, meaning “spoken”—talk that is done—in direct opposition to the root of “fame,” which is *fāma*, meaning “talk.” Destiny is not deterministic but probabilistic: Re-run the world, and the outcomes may well change. In fact, social systems bear a sensitivity to initial conditions that is the hallmark of chaos theory. Fame has much less to do with intrinsic quality than we believe it does, and much more to do with the characteristics of the people among whom fame spreads. But most importantly, the experiment highlights the role of imitation.
The next time you listen to Justin Bieber, and wonder, “Why?” remember that global success has more to do with social imitation than anything else. We have an extraordinary ability and drive to replicate each other’s physical actions and mental processes. Copying is a fundamental part of how we learn; it gives us social cohesion, and it signals group affiliation. It is so pervasive that small moments of mimicry can fall outside of our attention, allowing us to misattribute fame. The origin of global fame is primarily the ability of a given system to allow the faithful copying of a given message. A useful analogy might be the match and the wildfire. Try this piece of pulp fiction I’ve made up: When a matchbox full of lazily dreaming future grill-lighters was bought at a small-town store on a hot summer day in the California desert, little did it know that one of its passengers would become the most notorious, most sought-after weapon of all time. Two weeks later one of its matches would be used to start a wildfire that would burn for months and destroy 50 million acres. Anyone armed with a match like this one would be able to take over the world. This is a patently ridiculous story—a single match is not the entire reason for a wildfire starting and spreading. But that’s exactly how we naturally think about social wildfires: that the match is the key. In fact, there are two requirements: a local requirement (a spark), and a global requirement (the ability of the fire to spread). And it’s the second component that is actually the bottleneck: If a forest is dangerously dry, any spark can start a fire. Sparks are easy to come by, and are not intrinsically special. **The Language of Fame:** Let’s represent people as nodes in a simple random network like the one pictured above. The sole defining feature of any node in a random network is the number of links it has, or its “degree,” which we’ll call *k*. Instead of thinking about infected nodes, it’s more useful to think about the infected links that emanate from infected nodes. First, imagine we’re traveling along a link in our network away from a node (infected or not). The probability that we reach a node with *k* friends can be written as *kP(k)*/*〈k〉*, where *P(k)* is the probability that a randomly chosen node has degree *k*, and the normalization *〈k〉* is the average degree over the entire network. Now if a message is passing along this link, the receiving node will either reject the message or become infected and generate *k* − 1 new infected links. Let’s define *β* to be the probability that the receiving node is infected by the sending node’s message. Then, summing over the degree *k* of the node reached, we can write the gain ratio as *R* = Σ_k [*kP(k)*/*〈k〉*] · *β* · (*k* − 1), which is the expected number of new infected links generated by a single infected link. If *R* > 1, infected links beget more infected links, and the message will successfully spread throughout the network. Our research has shown that a match-centric viewpoint completely fails to describe many model social (and other) networks. There is nothing in a mathematical description of spreading fame (see The Language of Fame) about the match that sets the fire going—instead, it’s all about how the network connects, and how acceptance of a message spreads along links between people.
Just as real forests must be ready to burn before a forest fire can erupt, the key condition for spreading in social networks is a global one: Many average, trusting people need to be able to experience and then want to share choices in their social networks, far away from the source. Network models of fame can provide counter-intuitive insights. For example, people with many friends can actually impede the growth of a social contagion, rather than accelerate it. To see how this might happen, let’s look at a highly idealized social wildfire called threshold contagion, first explored in model form by the Nobelist Thomas Schelling in his work on racial segregation. Threshold behavior works like this: If enough of your friends believe in something, then so do you. This is called “social proof,”2 and is a core kind of interpersonal influence mechanism at work in our Music Lab experiment. Imagine a population where everyone will believe a specific message if at least 1 in 5 of their friends does. In the figure below, people are represented by the letters *a* through *e*. Friendships are represented by arrows, and time is represented by the letter *t*. The message that has been earlier taken on by *a* starts to spread, first to *c* and *e* (who hear the message from 1 in 5 and 1 in 3 of their friends). Then *b* switches because they now hear the message from 2 in 8 friends. But *d* continues to resist as its signal remains at 1 in 6, a precarious position. Unlike the spread of a biological disease, threshold contagion does not spread through well-connected nodes. In fact, these nodes tend to resist the message. Instead, the initial message spreads far if there are sufficiently many nodes that are both a little influential (they have a moderate number of friends) and a little susceptible to being influenced. We also observe that small changes in the population’s behavior or the structure of their social networks can lead to big differences globally. If everyone’s threshold was 1 in 4 instead of 1 in 5 for this message then individual *a* would not have repeated the message in the first place and spreading would have locally stopped there. If everyone’s threshold was low, say 1 in 10, then the message would have spread immediately through this subnetwork in one time step, converting *d* as well. There is no such thing as fate, only the story of fate. For more complicated model networks, where our mathematical analyses come up short, Duncan Watts and myself have studied social contagion and influence through simulation. In our paper, “Influentials, Networks, and Public Opinion Formation,”3 we again found that, for certain networks, individuals with many friends were actually less useful for spreading social contagion, and were less able to start social wildfires than those with a moderate number. The research also showed that, for many kinds of complex networks, the power of the match to generate a social contagion is greatly limited. Taken together, the results put into question the existence of hyper-influentials or “opinion leaders”—those few, special, and above all, hypothetical people who are responsible for the choices of the rest of the population. The matches of society. But the idea of the opinion leader persists. Why? Because we tell stories, and because we focus on individuals. We can’t resist the promise that if we can find these special people, we can change society to our ends, whether it be to sell products or to improve people’s health. 
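For readers who want to play with the threshold rule described above, here is a small self-contained sketch (mine, not the authors’); the five-node wiring and the 1-in-5 threshold are made-up stand-ins for the article’s figure, which is not reproduced here.

```cpp
// Threshold contagion on a tiny, hypothetical friendship network:
// a node adopts the message once at least a fixed fraction of its
// friends have adopted it. The wiring below is illustrative only.
#include <cstdio>
#include <map>
#include <set>
#include <vector>

int main() {
    // Undirected friendships (symmetric adjacency list, made up for this sketch).
    std::map<char, std::vector<char>> friends = {
        {'a', {'b', 'c', 'e'}},
        {'b', {'a', 'c', 'd', 'e'}},
        {'c', {'a', 'b', 'd'}},
        {'d', {'b', 'c', 'e'}},
        {'e', {'a', 'b', 'd'}},
    };

    const double threshold = 1.0 / 5.0;   // adopt if >= 1 in 5 friends adopted
    std::set<char> adopted = {'a'};       // the initial "spark"

    // Synchronous rounds until nothing changes.
    bool changed = true;
    while (changed) {
        changed = false;
        std::set<char> next = adopted;
        for (const auto& [node, nbrs] : friends) {
            if (adopted.count(node)) continue;
            int adoptedNbrs = 0;
            for (char n : nbrs) adoptedNbrs += adopted.count(n);
            if (adoptedNbrs >= threshold * nbrs.size()) {
                next.insert(node);
                changed = true;
            }
        }
        adopted = std::move(next);
    }

    for (char node : adopted) std::printf("%c adopted\n", node);
}
```

Raising the threshold or rewiring the friendships changes which nodes ever adopt at all, which is exactly the sensitivity to small changes in behavior and network structure that the passage describes.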
Our plight is made worse by “survivorship bias”—the tendency to focus on successful stories of such influencers. The cover of Malcolm Gladwell’s massively successful book about social wildfire, *The Tipping Point*, shows an unused match above the words “How little things can make a big difference.” But the forest is not in the frame, and it should be. *Peter Sheridan Dodds is a professor at the University of Vermont (UVM) working on system-level problems in many fields, with a focus on sociotechnical systems. He is director of UVM’s Complex Systems Center, co-leader of UVM’s Computational Story Lab, and a faculty member in the Department of Mathematics and Statistics. He can be reached at [email protected].* References 1. Salganik, M.J., Dodds, P.S. & Watts, D.J. Experimental study of inequality and unpredictability in an artificial cultural market. *Science* **311**, 854-856 (2006). 2. Cialdini, R.B. *Influence: Science and Practice* Allyn and Bacon, 4th Edition, (2000). 3. Watts D.J., & Dodds, P.S. Influentials, Networks, and Public Opinion Formation. *Journal of Consumer Research* **34**, 441-458 (2007).
true
true
true
We think that fame is deserved. We are wrong.
2024-10-12 00:00:00
2013-08-29 00:00:00
https://assets.nautil.us…&ixlib=php-3.3.1
article
nautil.us
Nautilus
null
null
19,674,209
http://fortune.com/2019/04/12/china-996-jack-ma/
China's Workers Are Protesting Tech's Deadly '996' Overtime Culture. Alibaba's Jack Ma Says He Requires It.
Lulu Yilun Chen; Bloomberg
To survive at Alibaba Group Holding Ltd. you need to work 12 hours a day, six days a week. That’s what billionaire Jack Ma demands of his staff at China’s biggest e-commerce platform. Ma told an internal meeting that Alibaba doesn’t need people who look forward to a typical eight-hour office lifestyle, according to a post on Alibaba’s official Weibo account. Instead, he endorsed the industry’s notorious 996 work culture — that is, 9 a.m. to 9 p.m., six days a week. “To be able to work 996 is a huge bliss,” China’s richest man said. “If you want to join Alibaba, you need to be prepared to work 12 hours a day, otherwise why even bother joining.” China’s tech industry is littered with tales of programmers and startup founders dying unexpectedly due to long hours and grueling stress. The comments from Ma elicited some intense reaction. “A load of nonsense, and didn’t even mention whether the company provides overtime compensation for a 996 schedule,” wrote one commenter on the Weibo post. “I hope people can stick more to the law, and not to their own reasoning.” “The bosses do 996 because they’re working for themselves and their wealth is growing,” another comment read. “We work 996 because we’re exploited without overtime compensation.” Representatives for Alibaba didn’t immediately respond to a request for comment. Ma’s comments come amid a fierce debate. Programmers in China protested their labor conditions on the online code-sharing community Github in March under the banner 996.ICU, a topic that quickly became the site’s most popular, with more than 211,000 stars. “By following the ’996’ work schedule, you are risking yourself getting into the ICU [Intensive Care Unit],” according to a description posted on the “996.ICU” project page. The creator, whose identity is unknown, called on tech workers to come forward with examples of companies abusing staff by demanding uncompensated overtime. Alibaba and its financial affiliate Ant Financial were both named.
true
true
true
"If you want to join Alibaba, you need to be prepared to work 12 hours a day, otherwise why even bother joining."
2024-10-12 00:00:00
2019-04-12 00:00:00
https://fortune.com/img-…?resize=1200,600
article
fortune.com
Fortune
null
null
6,189,707
https://medium.com/geek-empire-1/78b12f283ca8
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,514,887
http://nautil.us/blog/how-to-learn-to-love-to-practice
How to Learn to Love to Practice
Jeanette Bicknell
In interviews, famous people often say that the key to becoming both happy and successful is to “do what you love.” But mastering a skill, even one that you deeply love, requires a huge amount of drudgery. Any challenging activity—from computer programming to playing a musical instrument to athletics—requires focused and concentrated practice. A perfect golf swing or flawless butterfly stroke takes untold hours of practice (actually around 10,000 hours, according to Malcolm Gladwell) and countless repetitions to perfect. Anyone who wants to master a skill must run through the cycle of practice, critical feedback, modification, and incremental improvement again, again, and again. Some people seem able to concentrate on practicing an activity like this for years and take pleasure in their gradual improvement. Yet others find this kind of focused, time-intensive work to be frustrating or boring. Why? The difference may turn on the ability to enter into a state of “flow,” the feeling of being completely involved in what you are doing. Whether you call it being “in the zone,” “in a groove,” or something else, a flow state is a special experience. Since Mihaly Csikszentmihalyi developed the concept of flow in the 1970s, it has been a mainstay of positive-psychology research. Flow states can happen in the course of any activity, and they are most common when a task has well-defined goals and is at an appropriate skill level, and where the individual is able to adjust their performance to clear and immediate feedback. Flow states turn the drudgery of practice into an autotelic activity—that is, one that can be enjoyed for its own sake, rather than as a means to an end or for attaining some external reward. That raises the question of how we can turn this to our advantage: How can we get into a flow state for an activity that we want to master, so that we enjoy both the process of improving skills and the rewards that come with being a master? Csikszentmihalyi suggested that those who most readily entered into flow states had an “autotelic personality”—a disposition to seek out challenges and get into a state of flow. While those without such a personality see difficulties, autotelic individuals see opportunities to build skills. Autotelic individuals are receptive and open to new challenges. They are also persistent and have low levels of self-centeredness. Such people, with their capacity for “disinterested interest” (an ability to focus on tasks rather than rewards), have a great advantage over others in developing their innate abilities. Fortunately for those of us who aren’t necessarily blessed with an autotelic personality, there is evidence that flow states can be facilitated by environmental factors. In particular, the learning framework prescribed by Montessori schools seems to encourage flow states. A comparison of Montessori middle schools with traditional middle schools (co-written by Csikszentmihalyi) found that the Montessori students showed greater affect, higher intrinsic motivation, and more frequent flow experiences than their counterparts in traditional schools. In Montessori schools, learning comes through discovery rather than direct instruction, students are encouraged to develop individual interests, and a great deal of unstructured time is built into the day so that they can pursue these interests. Competition is discouraged and grading is de-emphasized, taking the focus off of external rewards.
Students are grouped together according to shared interests, rather than segregated by ability. While there isn’t (yet) a pill that can turn mundane practice into a thrilling activity for anyone, it is heartening that we seem, at least to some degree, to be able to nudge ourselves toward flow states. By giving ourselves unstructured, open-ended time, minimal distractions, and a task set at a moderate level of difficulty, we may be able to love what we’re doing while we put in the hard work practicing the things we love doing.

*Jeanette Bicknell, Ph.D., is the author of* Why Music Moves Us *(2009). She lives in Toronto, Canada.*
true
true
true
In interviews, famous people often say that the key to becoming both happy and successful is to “do what you love.” But mastering a skill, even one that you deeply love, requires a huge amount of drudgery. Any challenging activity—from computer programming to playing a musical instrument to athletics—requires focused and concentrated practice. A […]
2024-10-12 00:00:00
2014-04-30 00:00:00
https://assets.nautil.us…&ixlib=php-3.3.1
article
nautil.us
Nautilus
null
null
4,193,147
http://restreitinho.com/why-i-love-niche-social-media/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,709,743
http://www.spacex.com/news/2013/05/19/spacex-crew-program
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,328,920
https://www.jetadmin.io/
Build Custom Business Apps with No-Code | Jet Admin
null
Easily create internal tools, partner and customer apps without code. Connect popular data sources and perform advanced CRUD actions with great performance. Work with native integrations (Airtable, Xano, Supabase, SQL) or any API (REST, GraphQL, or SOAP API). Integrate data from databases, SaaS applications, internal APIs and business logic in code. Trigger Workflows by issuing API calls or schedule jobs for every minute, hour, day, or week.

- Webhooks: Trigger automations across technologies
- Branching and splits: Set up complex processes based on logical operations
- SQL/API requests: Supercharge your app with low-code options
- Native debugger: Test your automations to keep things running smoothly

Your users can choose to sign in/up with a secure or Google login, or request a secure magic link over email. Designed to meet complex compliance requirements – Jet Admin is as secure and flexible as building your own web apps from scratch. Look up values in other tables, create messages, do math, generate QR codes, measure distances, call APIs – all without code or spreadsheet formulas. Dynamically bind data, do complex calculations, transform responses, and even add custom JS. Global styles, fonts & colors. Configurable styles for any element. Custom colors, backgrounds. Read in data from a database, join it to business apps with SQL, and POST the result to Stripe's API. Instantly publish updates that become live for all users. Draft releases, modify on-the-fly, and revert to earlier versions with ease.
true
true
true
Jet Admin is the best platform to build custom business apps with no-code. Streamline productivity, cut costs and deploy your tools with ease.
2024-10-12 00:00:00
2023-01-01 00:00:00
null
website
null
null
null
null
2,677,428
http://www.readwriteweb.com/archives/save_your_photos_to_amazon_or_dropbox_with_app_pla.php#.TgAdmWz0gPc;hackernews
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
26,391,734
https://projects.noahliebman.net/encodemightythings/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
949,679
http://www.eweek.com/c/a/Application-Development/Microsoft-Plans-to-Open-C-Visual-Basic-Compilers-813884/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
20,546,077
https://techblog.bozho.net/lets-annotate-our-methods-with-the-features-they-implement
Let's Annotate Our Methods With The Features They Implement - Bozho's tech blog
Bozho
# Let’s Annotate Our Methods With The Features They Implement

Writing software consists of very little actual “writing”, and much more thinking, designing, reading, “digging”, analyzing, debugging, refactoring, aligning and meeting others. The reading and digging part is where you try to understand what has been implemented before, why it has been implemented, and how it works. In larger projects it becomes increasingly hard to find what is happening and why – there are so many classes that interfere, and so many methods participate in implementing a particular feature. That’s probably because there is a mismatch between the programming units (classes, methods) and the business logic units (features). Product owners want a “password reset” feature, and they don’t care if it’s done using framework configuration, custom code split in three classes, or one monolithic controller method that does that job. This mismatch is partially addressed by so-called BDD (behaviour driven development), as business people can define scenarios in a formalized language (although they rarely do, it’s still up to the QAs or developers to write the tests). But having your tests organized around features and behaviours doesn’t mean the code is, and BDD doesn’t help in making your way through the codebase in search of why and how something is implemented. Another issue is linking a piece of code to the issue tracking system. Source control conventions and hooks allow for setting the issue tracker number as part of the commit, and then when browsing the code, you can annotate the file and see the issue number. However, due to the many changes, even a very strict team will end up with methods that are related to multiple issues and you can’t easily tell which is the proper one. Yet another issue with the lack of a “feature” unit in programming languages is that you can’t trivially reuse existing projects to start a new one. We’ve all been there – you have a similar project and you want to get a skeleton to get things running faster. And while there are many tools to help with that (Spring Boot, Spring Roo, and other scaffolding utilities), they can rarely deliver what you need – you always have to tweak something, delete something, customize some configuration, as defaults are almost never practical. And I have a simple proposal that will help with the issues above. As with any complex problem, simple ideas don’t solve everything, but are at least a step forward. The proposal is in the title – let’s annotate our methods with the features they implement. Let’s have `@Feature(name = "Forgotten password", issueTrackerCode="PROJ-123")` . A method can implement multiple features, but that is generally discouraged by best practices (e.g. the single responsibility principle). The granularity of “feature” is something that has to be determined by each team and is the tricky part – sometimes an epic describes a feature, sometimes individual stories or even subtasks do. A definition of a feature should be agreed upon and every new team member should be told what to do and how to interpret it. There is of course a lot of complexity, e.g. for generic methods like DAO methods, utility methods, or methods that are reused in too many places. But they also represent features, it’s just that these features are horizontal. “Data access layer” is a feature – a more technical one indeed, but it counts, and maybe deserves a story in the issue tracker.
Your features can actually be listed in one or several enums, grouped by type – business, horizontal, performance, etc. That way you can even compose features – e.g. account creation relies on database access and a security layer. How does such a proposal help?

- Raises consciousness about the single responsibility of methods and about keeping code readable
- Provides a rationale for the existence of each method. Even if a proper comment is missing, the annotation will put a method (or a class) in context
- Helps with navigating code and fixing issues (if you can see all places where a feature is implemented, you are more likely to spot an issue)
- Allows tools to analyze your features – amount, complexity, how chaotically a feature is spread across the code base, test coverage per feature, etc.
- Allows tools to use existing projects as scaffolding for new ones – you specify the features you want to have, and they are automatically copied

At this point I’m supposed to give a link to a GitHub project for a feature annotation library. But it doesn’t make sense to have a single-annotation project: it can easily be part of Guava or something similar, or can be manually created in each project. The complex part – the tools that will do the scanning and analysis – deserves separate projects, but unfortunately I don’t have time to write one. A checkstyle plugin that fails the build if the annotation is missing would also be a good start. But even without the tools, the concept of annotating methods with their high-level features is, I think, a useful one. Instead of trying to deduce why a method is here and what requirements it has to implement (and whether all necessary tests were written at the time), such an annotation can come in handy.
Very interesting idea: I’ve long been unsatisfied with the xml-doc (in C#) and doc strings (in Python) as a limited way to get to where a particular method fits in. I might try this approach in a small project and see how it goes, and if there is any use to it, as it will add clutter to an external person coming into the project. This is a brilliant idea. Thank you! Hi, really great idea! To respond to your statement “A checkstyle plugin that fails the build if the annotation is missing would also be a good start.”, I would suggest developers to use ArchUnit (https://www.archunit.org/). It provides an API allowing us to write the following: val classes = ClassFileImporter().importClasspath() val rule = ArchRuleDefinition.methods().should().beAnnotatedWith(Feature.class) rule.check(classes) It will check this during unit tests’ step. Hope this helps. Regards. Romain Rochegude
true
true
true
Writing software consists of very little actual “writing”, and much more thinking, designing, reading, “digging”, analyzing, debugging, refactoring, aligning and meeting others. The reading and digging part is where youContinue reading
2024-10-12 00:00:00
2019-07-27 00:00:00
https://techblog.bozho.n…20/09/digits.jpg
article
bozho.net
Bozho's tech blog
null
null
6,556,303
http://linuxgizmos.com/sony-smartwatch-2-ticks-as-google-smartwatch-rumors-tock/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
995,925
http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/collatz.html
Teaching the Computer how to Discover(!) and then Prove(!!) (all by Itself(!!!)) Analogs of Collatz's Notorious 3x+1 Conjecture
null
By Doron Zeilberger [Appeared in J. of Difference Equations and Applications, v. 17, No. 3 (March 2011), 375-386] .pdf .ps .tex Written: March 22, 2009. Paul Erdos claimed that mathematics is not yet ready to settle the 3x+1 conjecture. I agree, but very soon it will be! With the exponential growth of computer-generated mathematics, we (or rather our silicon brethren) would have a shot at it. Of course, not by *number crunching*, but by *symbol crunching* and *automatic deduction*. In the present article, I taught my computer how to use the brilliant ideas of four human beings (Amal Amleh, Ed Grove, Candy Kent, and Gerry Ladas) to prove two-dimensional analogs of this notorious conjecture. Once programmed (using my Maple package LADAS), it reproduced their ten theorems, and generated 134 new ones, complete with proofs. All by itself! I believe that the proof of the original 3x+1 conjecture would be in the same vein, but one would need a couple of extra human ideas, and better computers. Added May 11, 2010: Watch the movie (produced by Edinah Gnang)
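For readers who have not met the conjecture before, the 3x+1 map itself is trivial to state; here is a minimal sketch (mine, not from the page) of the iteration whose eventual arrival at 1, for every positive starting value, is what remains unproven.

```cpp
// The 3x+1 (Collatz) step: halve even numbers, map odd n to 3n+1.
// The conjecture is that repeated application always reaches 1.
#include <cstdint>
#include <cstdio>

std::uint64_t collatz_step(std::uint64_t n) {
    return (n % 2 == 0) ? n / 2 : 3 * n + 1;
}

int main() {
    std::uint64_t n = 27;          // a classic example with a long trajectory
    int steps = 0;
    while (n != 1) {
        n = collatz_step(n);
        ++steps;
    }
    std::printf("27 reaches 1 in %d steps\n", steps);   // prints 111
}
```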
true
true
true
null
2024-10-12 00:00:00
2009-03-22 00:00:00
null
null
null
null
null
null
18,926,382
http://antoyo.ml/evolution-rust-programmer
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,404,397
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2759r0.pdf
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
8,920,224
http://gamasutra.com/blogs/PaulTozour/20150120/234443/The_Game_Outcomes_Project_Part_4_Crunch_Makes_Games_Worse.php
Blogs recent news | Game Developer
null
true
true
true
Explore the latest news and expert commentary on Blogs, brought to you by the editors of Game Developer
2024-10-12 00:00:00
2024-10-11 00:00:00
https://www.gamedeveloper.com/build/_assets/gamedeveloper-X2EP7LQ6.ico
website
gamedeveloper.com
Game Developer
null
null
40,717,143
https://twitter.com/olegkutkov/status/1802867851792932993
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
30,635,600
https://medium.com/@nickhbottomley/writing-with-git-434abffc751f
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
554,189
http://www.xconomy.com/seattle/2009/04/09/putting-uw-startup-dreams-on-hold-entrepreneur-advises-researchers-to-nurture-ideas-more/
Home | Informa Connect
null
true
true
true
null
2024-10-12 00:00:00
2024-10-11 00:00:00
https://informaconnect.c…9c4763325724.png
website
informaconnect.com
informaconnect.com
null
null
1,939,089
http://scribtex.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,177,442
http://www.techieinsider.com/news/11750
News of the World: Overview, Where to Watch & more
Sunil Bhuyan
**Movie Information**

- **Status**: Released
- **Movie Tagline**: News of the World
- **Genres**: Drama, Western, Adventure, Action
- **Release Date**: 2020-12-25
- **Runtime**: 118.0
- **Budget**:
- **Revenue**: $12.7 million
- **Production Companies**: Universal Pictures, Playtone, Pretty Pictures, Perfect World Pictures

**Cast & Crew**

Tom Hanks, Helena Zengel, Michael Angelo Covino, Ray McKinnon, Mare Winningham, Elizabeth Marvel, Fred Hechinger, Bill Camp, Thomas Francis Murphy, Gabriel Ebert, Neil Sandilands, Winsome Brown, Chukwudi Iwuji, Christopher Hagen, Stafford Douglas, Michelle Campbell, Clint Obenchain, J. Nathan Simmons, Travis Johnson, Andy Kastelic, Jeff Ware, Chris Bylsma, Justin Tade, Darrin Giossi, Brenden Wedner, Clay James, Cash Lilley, Jared Berry, Truman Hanks, Michael Toby Sanchez, Shawn Howell, Alexander Alayon Jr.

**Synopsis/Plot**

News of the World is a 2020 American western drama film directed by Paul Greengrass and written by Luke Davies and Greengrass, based on the 2016 novel of the same name by Paulette Jiles. The film stars Tom Hanks as Captain Jefferson Kyle Kidd, a veteran of the Civil War who now travels around Texas in 1870, reading news from around the world to small town audiences. When he is asked to deliver a young girl, Johanna, to her aunt and uncle in San Antonio, he finds himself on a dangerous journey across the state. Along the way, they encounter danger and kindness, and Kidd must decide whether to take the girl to her family or keep her safe in his care. The film follows Kidd as he and Johanna make their way through the dangerous Texas frontier. They meet a variety of people, including a former Confederate soldier, a Native American chief, and a traveling showman. Kidd is determined to deliver Johanna to her family, but she is determined to remain with him. As they travel, Kidd and Johanna form a bond, and Kidd begins to see her as a daughter. The two must rely on each other to survive the dangerous journey and ultimately find a way to make it to San Antonio.

**How & Where to Watch “News of the World”**

Wondering where to watch News of the World? Here are the platforms that are currently streaming News of the World. **Note:** If you are not able to find any streaming platform listed above, it could be because of a regional restriction or because the title is actually not available on any of the platforms. Here’s how to watch it if it is not available in your country.

**Unlock “News of the World” & Watch it Anywhere in the World with VPN**

Not able to find “News of the World” in your region? We’ve got a solution – use a VPN like CyberGhost or NordVPN.

### Why Should You Use a VPN?

A VPN does not just keep your internet browsing safe and private, it also unlocks unlimited content that might otherwise be restricted in your region. So, you can access and enjoy various shows and movies on platforms like Netflix, Hulu, Disney+, and Prime Video. A VPN works by changing your IP address with one from a different region. This way, it seems like you’re browsing from that region, and voila – you can bypass regional blocks.

### Top Reasons to Stream with a VPN:

- **Access Geo-restricted Content**: With a VPN, you can watch movies and shows from all over the world and not just those available in your region.
- **Data Protection**: A VPN protects your data from potential threats helping you keep your personal details safe.
- **No More Slow Streaming**: Some internet service providers might slow your connection during high-demand streaming times.
A VPN helps you avoid this, ensuring smooth streaming. We recommend CyberGhost and NordVPN – two of the most trusted names in the VPN industry. ### Special VPN Offer: **CyberGhost**is offering our readers a 45-day money-back guarantee plus a**Whopping 82% discount**.**NordVPN**comes with a 30-day money-back guarantee and a**special 63% discount**. **Recommended Movies similar to News of the World** - Duniyadari - Brave Blue World: Racing to Solve Our Water Crisis - Once There Was A Clown - Deool - A Man of Action - The White Tiger - The Dig - Driveways - Ma Rainey’s Black Bottom - Hillbilly Elegy - One Night in Miami… - Cha Cha Real Smooth - I Care a Lot - Wake Up Sid - The Trial of the Chicago 7 - Michel Vaillant - Plan B - Good Luck to You, Leo Grande - Nomadland - Sound of Metal - Prison 77
true
true
true
https://www.youtube.com/watch?v=zTZDb_iKooI
2024-10-12 00:00:00
2023-07-27 00:00:00
null
article
techieinsider.com
Techie Insider
null
null
19,490,024
https://pandaily.com/didi-president-jean-liu-visits-family-of-murdered-driver/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
15,592,201
https://arstechnica.com/science/2017/10/quantum-algorithm-finds-higgs-needle-in-photon-haystack/
Higgs boson uncovered by quantum algorithm on D-Wave machine
Chris Lee
Machine learning has returned with a vengeance. I still remember the dark days of the late '80s and '90s, when it was pretty clear that the current generation of machine-learning algorithms didn't seem to actually learn much of anything. Then big data arrived, computers became chess geniuses, conquered Go (twice), and started recommending sentences to judges. In most of these cases, the computer had sucked up vast reams of data and created models based on the correlations in the data. But this won't work when there aren't vast amounts of data available. It seems that quantum machine learning might provide an advantage here, as a recent paper on searching for Higgs bosons in particle physics data seems to hint. ## Learning from big data In the case of chess, and the first edition of the Go-conquering algorithm, the computer wasn't just presented with the rules of the game. Instead, it was given the rules and all the data that the researchers could find. I'll annoy every expert in the field by saying that the computer essentially correlated board arrangements and moves with future success. Of course, it isn't nearly that simple, but the key was in having a lot of examples to build a model and a decision tree that would let the computer decide on a move. In the most-recent edition of the Go algorithm, this was still true. In that case, though, the computer had to build its own vast database, which it did by playing itself. I'm not saying this to disrespect machine learning but to point out that computers use their ability to gather and search for correlations in truly vast amounts of data to become experts—the machine played 5 million games against itself before it was unleashed on an unsuspecting digital opponent. A human player would have to complete a game every 18 seconds for 70 years to gather a similar data set.
true
true
true
Particle physics data sorted by quantum machine learning but still needs work.
2024-10-12 00:00:00
2017-10-25 00:00:00
https://cdn.arstechnica.…d5329754f8_b.jpg
article
arstechnica.com
Ars Technica
null
null
24,072,714
http://classics.mit.edu/Epictetus/epicench.html
The Internet Classics Archive
null
Commentary:A few comments have been posted aboutThe Enchiridion.Download:A 40k text-only version is available for download. 1.Some things are in our control and others not. Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions. Things not in our control are body, property, reputation, command, and, in one word, whatever are not our own actions. The things in our control are by nature free, unrestrained, unhindered; but those not in our control are weak, slavish, restrained, belonging to others. Remember, then, that if you suppose that things which are slavish by nature are also free, and that what belongs to others is your own, then you will be hindered. You will lament, you will be disturbed, and you will find fault both with gods and men. But if you suppose that only to be your own which is your own, and what belongs to others such as it really is, then no one will ever compel you or restrain you. Further, you will find fault with no one or accuse no one. You will do nothing against your will. No one will hurt you, you will have no enemies, and you not be harmed. Aiming therefore at such great things, remember that you must not allow yourself to be carried, even with a slight tendency, towards the attainment of lesser things. Instead, you must entirely quit some things and for the present postpone the rest. But if you would both have these great things, along with power and riches, then you will not gain even the latter, because you aim at the former too: but you will absolutely fail of the former, by which alone happiness and freedom are achieved. Work, therefore to be able to say to every harsh appearance, "You are but an appearance, and not absolutely the thing you appear to be." And then examine it by those rules which you have, and first, and chiefly, by this: whether it concerns the things which are in our own control, or those which are not; and, if it concerns anything not in our control, be prepared to say that it is nothing to you.2.Remember that following desire promises the attainment of that of which you are desirous; and aversion promises the avoiding that to which you are averse. However, he who fails to obtain the object of his desire is disappointed, and he who incurs the object of his aversion wretched. If, then, you confine your aversion to those objects only which are contrary to the natural use of your faculties, which you have in your own control, you will never incur anything to which you are averse. But if you are averse to sickness, or death, or poverty, you will be wretched. Remove aversion, then, from all things that are not in our control, and transfer it to things contrary to the nature of what is in our control. But, for the present, totally suppress desire: for, if you desire any of the things which are not in your own control, you must necessarily be disappointed; and of those which are, and which it would be laudable to desire, nothing is yet in your possession. Use only the appropriate actions of pursuit and avoidance; and even these lightly, and with gentleness and reservation.3.With regard to whatever objects give you delight, are useful, or are deeply loved, remember to tell yourself of what general nature they are, beginning from the most insignificant things. If, for example, you are fond of a specific ceramic cup, remind yourself that it is only ceramic cups in general of which you are fond. Then, if it breaks, you will not be disturbed. 
If you kiss your child, or your wife, say that you only kiss things which are human, and thus you will not be disturbed if either of them dies.4.When you are going about any action, remind yourself what nature the action is. If you are going to bathe, picture to yourself the things which usually happen in the bath: some people splash the water, some push, some use abusive language, and others steal. Thus you will more safely go about this action if you say to yourself, "I will now go bathe, and keep my own mind in a state conformable to nature." And in the same manner with regard to every other action. For thus, if any hindrance arises in bathing, you will have it ready to say, "It was not only to bathe that I desired, but to keep my mind in a state conformable to nature; and I will not keep it if I am bothered at things that happen.5.Men are disturbed, not by things, but by the principles and notions which they form concerning things. Death, for instance, is not terrible, else it would have appeared so to Socrates. But the terror consists in our notion of death that it is terrible. When therefore we are hindered, or disturbed, or grieved, let us never attribute it to others, but to ourselves; that is, to our own principles. An uninstructed person will lay the fault of his own bad condition upon others. Someone just starting instruction will lay the fault on himself. Some who is perfectly instructed will place blame neither on others nor on himself.6.Don't be prideful with any excellence that is not your own. If a horse should be prideful and say, " I am handsome," it would be supportable. But when you are prideful, and say, " I have a handsome horse," know that you are proud of what is, in fact, only the good of the horse. What, then, is your own? Only your reaction to the appearances of things. Thus, when you behave conformably to nature in reaction to how things appear, you will be proud with reason; for you will take pride in some good of your own.7.Consider when, on a voyage, your ship is anchored; if you go on shore to get water you may along the way amuse yourself with picking up a shellfish, or an onion. However, your thoughts and continual attention ought to be bent towards the ship, waiting for the captain to call on board; you must then immediately leave all these things, otherwise you will be thrown into the ship, bound neck and feet like a sheep. So it is with life. If, instead of an onion or a shellfish, you are given a wife or child, that is fine. But if the captain calls, you must run to the ship, leaving them, and regarding none of them. But if you are old, never go far from the ship: lest, when you are called, you should be unable to come in time.8.Don't demand that things happen as you wish, but wish that they happen as they do happen, and you will go on well.9.Sickness is a hindrance to the body, but not to your ability to choose, unless that is your choice. Lameness is a hindrance to the leg, but not to your ability to choose. Say this to yourself with regard to everything that happens, then you will see such obstacles as hindrances to something else, but not to yourself.10.With every accident, ask yourself what abilities you have for making a proper use of it. If you see an attractive person, you will find that self-restraint is the ability you have against your desire. If you are in pain, you will find fortitude. If you hear unpleasant language, you will find patience. 
And thus habituated, the appearances of things will not hurry you away along with them.11.Never say of anything, "I have lost it"; but, "I have returned it." Is your child dead? It is returned. Is your wife dead? She is returned. Is your estate taken away? Well, and is not that likewise returned? "But he who took it away is a bad man." What difference is it to you who the giver assigns to take it back? While he gives it to you to possess, take care of it; but don't view it as your own, just as travelers view a hotel.12.If you want to improve, reject such reasonings as these: "If I neglect my affairs, I'll have no income; if I don't correct my servant, he will be bad." For it is better to die with hunger, exempt from grief and fear, than to live in affluence with perturbation; and it is better your servant should be bad, than you unhappy. Begin therefore from little things. Is a little oil spilt? A little wine stolen? Say to yourself, "This is the price paid for equanimity, for tranquillity, and nothing is to be had for nothing." When you call your servant, it is possible that he may not come; or, if he does, he may not do what you want. But he is by no means of such importance that it should be in his power to give you any disturbance.13.If you want to improve, be content to be thought foolish and stupid with regard to external things. Don't wish to be thought to know anything; and even if you appear to be somebody important to others, distrust yourself. For, it is difficult to both keep your faculty of choice in a state conformable to nature, and at the same time acquire external things. But while you are careful about the one, you must of necessity neglect the other.14.If you wish your children, and your wife, and your friends to live for ever, you are stupid; for you wish to be in control of things which you cannot, you wish for things that belong to others to be your own. So likewise, if you wish your servant to be without fault, you are a fool; for you wish vice not to be vice," but something else. But, if you wish to have your desires undisappointed, this is in your own control. Exercise, therefore, what is in your control. He is the master of every other person who is able to confer or remove whatever that person wishes either to have or to avoid. Whoever, then, would be free, let him wish nothing, let him decline nothing, which depends on others else he must necessarily be a slave.15.Remember that you must behave in life as at a dinner party. Is anything brought around to you? Put out your hand and take your share with moderation. Does it pass by you? Don't stop it. Is it not yet come? Don't stretch your desire towards it, but wait till it reaches you. Do this with regard to children, to a wife, to public posts, to riches, and you will eventually be a worthy partner of the feasts of the gods. And if you don't even take the things which are set before you, but are able even to reject them, then you will not only be a partner at the feasts of the gods, but also of their empire. For, by doing this, Diogenes, Heraclitus and others like them, deservedly became, and were called, divine.16.When you see anyone weeping in grief because his son has gone abroad, or is dead, or because he has suffered in his affairs, be careful that the appearance may not misdirect you. Instead, distinguish within your own mind, and be prepared to say, "It's not the accident that distresses this person., because it doesn't distress another person; it is the judgment which he makes about it." 
As far as words go, however, don't reduce yourself to his level, and certainly do not moan with him. Do not moan inwardly either.17.Remember that you are an actor in a drama, of such a kind as the author pleases to make it. If short, of a short one; if long, of a long one. If it is his pleasure you should act a poor man, a cripple, a governor, or a private person, see that you act it naturally. For this is your business, to act well the character assigned you; to choose it is another's.18.When a raven happens to croak unluckily, don't allow the appearance hurry you away with it, but immediately make the distinction to yourself, and say, "None of these things are foretold to me; but either to my paltry body, or property, or reputation, or children, or wife. But to me all omens are lucky, if I will. For whichever of these things happens, it is in my control to derive advantage from it."19.You may be unconquerable, if you enter into no combat in which it is not in your own control to conquer. When, therefore, you see anyone eminent in honors, or power, or in high esteem on any other account, take heed not to be hurried away with the appearance, and to pronounce him happy; for, if the essence of good consists in things in our own control, there will be no room for envy or emulation. But, for your part, don't wish to be a general, or a senator, or a consul, but to be free; and the only way to this is a contempt of things not in our own control.20.Remember, that not he who gives ill language or a blow insults, but the principle which represents these things as insulting. When, therefore, anyone provokes you, be assured that it is your own opinion which provokes you. Try, therefore, in the first place, not to be hurried away with the appearance. For if you once gain time and respite, you will more easily command yourself.21.Let death and exile, and all other things which appear terrible be daily before your eyes, but chiefly death, and you win never entertain any abject thought, nor too eagerly covet anything.22.If you have an earnest desire of attaining to philosophy, prepare yourself from the very first to be laughed at, to be sneered by the multitude, to hear them say,." He is returned to us a philosopher all at once," and " Whence this supercilious look?" Now, for your part, don't have a supercilious look indeed; but keep steadily to those things which appear best to you as one appointed by God to this station. For remember that, if you adhere to the same point, those very persons who at first ridiculed will afterwards admire you. But if you are conquered by them, you will incur a double ridicule.23.If you ever happen to turn your attention to externals, so as to wish to please anyone, be assured that you have ruined your scheme of life. Be contented, then, in everything with being a philosopher; and, if you wish to be thought so likewise by anyone, appear so to yourself, and it will suffice you.24.Don't allow such considerations as these distress you. "I will live in dishonor, and be nobody anywhere." For, if dishonor is an evil, you can no more be involved in any evil by the means of another, than be engaged in anything base. Is it any business of yours, then, to get power, or to be admitted to an entertainment? By no means. How, then, after all, is this a dishonor? And how is it true that you will be nobody anywhere, when you ought to be somebody in those things only which are in your own control, in which you may be of the greatest consequence? "But my friends will be unassisted." 
-- What do you mean by unassisted? They will not have money from you, nor will you make them Roman citizens. Who told you, then, that these are among the things in our own control, and not the affair of others? And who can give to another the things which he has not himself? "Well, but get them, then, that we too may have a share." If I can get them with the preservation of my own honor and fidelity and greatness of mind, show me the way and I will get them; but if you require me to lose my own proper good that you may gain what is not good, consider how inequitable and foolish you are. Besides, which would you rather have, a sum of money, or a friend of fidelity and honor? Rather assist me, then, to gain this character than require me to do those things by which I may lose it. Well, but my country, say you, as far as depends on me, will be unassisted. Here again, what assistance is this you mean? "It will not have porticoes nor baths of your providing." And what signifies that? Why, neither does a smith provide it with shoes, or a shoemaker with arms. It is enough if everyone fully performs his own proper business. And were you to supply it with another citizen of honor and fidelity, would not he be of use to it? Yes. Therefore neither are you yourself useless to it. "What place, then, say you, will I hold in the state?" Whatever you can hold with the preservation of your fidelity and honor. But if, by desiring to be useful to that, you lose these, of what use can you be to your country when you are become faithless and void of shame.25.Is anyone preferred before you at an entertainment, or in a compliment, or in being admitted to a consultation? If these things are good, you ought to be glad that he has gotten them; and if they are evil, don't be grieved that you have not gotten them. And remember that you cannot, without using the same means [which others do] to acquire things not in our own control, expect to be thought worthy of an equal share of them. For how can he who does not frequent the door of any [great] man, does not attend him, does not praise him, have an equal share with him who does? You are unjust, then, and insatiable, if you are unwilling to pay the price for which these things are sold, and would have them for nothing. For how much is lettuce sold? Fifty cents, for instance. If another, then, paying fifty cents, takes the lettuce, and you, not paying it, go without them, don't imagine that he has gained any advantage over you. For as he has the lettuce, so you have the fifty cents which you did not give. So, in the present case, you have not been invited to such a person's entertainment, because you have not paid him the price for which a supper is sold. It is sold for praise; it is sold for attendance. Give him then the value, if it is for your advantage. But if you would, at the same time, not pay the one and yet receive the other, you are insatiable, and a blockhead. Have you nothing, then, instead of the supper? Yes, indeed, you have: the not praising him, whom you don't like to praise; the not bearing with his behavior at coming in.26.The will of nature may be learned from those things in which we don't distinguish from each other. For example, when our neighbor's boy breaks a cup, or the like, we are presently ready to say, "These things will happen." Be assured, then, that when your own cup likewise is broken, you ought to be affected just as when another's cup was broken. Apply this in like manner to greater things. Is the child or wife of another dead? 
There is no one who would not say, "This is a human accident." but if anyone's own child happens to die, it is presently, "Alas I how wretched am I!" But it should be remembered how we are affected in hearing the same thing concerning others.27.As a mark is not set up for the sake of missing the aim, so neither does the nature of evil exist in the world.28.If a person gave your body to any stranger he met on his way, you would certainly be angry. And do you feel no shame in handing over your own mind to be confused and mystified by anyone who happens to verbally attack you?29.In every affair consider what precedes and follows, and then undertake it. Otherwise you will begin with spirit; but not having thought of the consequences, when some of them appear you will shamefully desist. "I would conquer at the Olympic games." But consider what precedes and follows, and then, if it is for your advantage, engage in the affair. You must conform to rules, submit to a diet, refrain from dainties; exercise your body, whether you choose it or not, at a stated hour, in heat and cold; you must drink no cold water, nor sometimes even wine. In a word, you must give yourself up to your master, as to a physician. Then, in the combat, you may be thrown into a ditch, dislocate your arm, turn your ankle, swallow dust, be whipped, and, after all, lose the victory. When you have evaluated all this, if your inclination still holds, then go to war. Otherwise, take notice, you will behave like children who sometimes play like wrestlers, sometimes gladiators, sometimes blow a trumpet, and sometimes act a tragedy when they have seen and admired these shows. Thus you too will be at one time a wrestler, at another a gladiator, now a philosopher, then an orator; but with your whole soul, nothing at all. Like an ape, you mimic all you see, and one thing after another is sure to please you, but is out of favor as soon as it becomes familiar. For you have never entered upon anything considerately, nor after having viewed the whole matter on all sides, or made any scrutiny into it, but rashly, and with a cold inclination. Thus some, when they have seen a philosopher and heard a man speaking like Euphrates (though, indeed, who can speak like him?), have a mind to be philosophers too. Consider first, man, what the matter is, and what your own nature is able to bear. If you would be a wrestler, consider your shoulders, your back, your thighs; for different persons are made for different things. Do you think that you can act as you do, and be a philosopher? That you can eat and drink, and be angry and discontented as you are now? You must watch, you must labor, you must get the better of certain appetites, must quit your acquaintance, be despised by your servant, be laughed at by those you meet; come off worse than others in everything, in magistracies, in honors, in courts of judicature. When you have considered all these things round, approach, if you please; if, by parting with them, you have a mind to purchase equanimity, freedom, and tranquillity. If not, don't come here; don't, like children, be one while a philosopher, then a publican, then an orator, and then one of Caesar's officers. These things are not consistent. You must be one man, either good or bad. You must cultivate either your own ruling faculty or externals, and apply yourself either to things within or without you; that is, be either a philosopher, or one of the vulgar.30.Duties are universally measured by relations. Is anyone a father? 
If so, it is implied that the children should take care of him, submit to him in everything, patiently listen to his reproaches, his correction. But he is a bad father. Is you naturally entitled, then, to a good father? No, only to a father. Is a brother unjust? Well, keep your own situation towards him. Consider not what he does, but what you are to do to keep your own faculty of choice in a state conformable to nature. For another will not hurt you unless you please. You will then be hurt when you think you are hurt. In this manner, therefore, you will find, from the idea of a neighbor, a citizen, a general, the corresponding duties if you accustom yourself to contemplate the several relations.31.Be assured that the essential property of piety towards the gods is to form right opinions concerning them, as existing "I and as governing the universe with goodness and justice. And fix yourself in this resolution, to obey them, and yield to them, and willingly follow them in all events, as produced by the most perfect understanding. For thus you will never find fault with the gods, nor accuse them as neglecting you. And it is not possible for this to be effected any other way than by withdrawing yourself from things not in our own control, and placing good or evil in those only which are. For if you suppose any of the things not in our own control to be either good or evil, when you are disappointed of what you wish, or incur what you would avoid, you must necessarily find fault with and blame the authors. For every animal is naturally formed to fly and abhor things that appear hurtful, and the causes of them; and to pursue and admire those which appear beneficial, and the causes of them. It is impractical, then, that one who supposes himself to be hurt should be happy about the person who, he thinks, hurts him, just as it is impossible to be happy about the hurt itself. Hence, also, a father is reviled by a son, when he does not impart to him the things which he takes to be good; and the supposing empire to be a good made Polynices and Eteocles mutually enemies. On this account the husbandman, the sailor, the merchant, on this account those who lose wives and children, revile the gods. For where interest is, there too is piety placed. So that, whoever is careful to regulate his desires and aversions as he ought, is, by the very same means, careful of piety likewise. But it is also incumbent on everyone to offer libations and sacrifices and first fruits, conformably to the customs of his country, with purity, and not in a slovenly manner, nor negligently, nor sparingly, nor beyond his ability.32.When you have recourse to divination, remember that you know not what the event will be, and you come to learn it of the diviner; but of what nature it is you know before you come, at least if you are a philosopher. For if it is among the things not in our own control, it can by no means be either good or evil. Don't, therefore, bring either desire or aversion with you to the diviner (else you will approach him trembling), but first acquire a distinct knowledge that every event is indifferent and nothing to you., of whatever sort it may be, for it will be in your power to make a right use of it, and this no one can hinder; then come with confidence to the gods, as your counselors, and afterwards, when any counsel is given you, remember what counselors you have assumed, and whose advice you will neglect if you disobey. 
Come to divination, as Socrates prescribed, in cases of which the whole consideration relates to the event, and in which no opportunities are afforded by reason, or any other art, to discover the thing proposed to be learned. When, therefore, it is our duty to share the danger of a friend or of our country, we ought not to consult the oracle whether we will share it with them or not. For, though the diviner should forewarn you that the victims are unfavorable, this means no more than that either death or mutilation or exile is portended. But we have reason within us, and it directs, even with these hazards, to the greater diviner, the Pythian god, who cast out of the temple the person who gave no assistance to his friend while another was murdering him.33.Immediately prescribe some character and form of conduce to yourself, which you may keep both alone and in company. Be for the most part silent, or speak merely what is necessary, and in few words. We may, however, enter, though sparingly, into discourse sometimes when occasion calls for it, but not on any of the common subjects, of gladiators, or horse races, or athletic champions, or feasts, the vulgar topics of conversation; but principally not of men, so as either to blame, or praise, or make comparisons. If you are able, then, by your own conversation bring over that of your company to proper subjects; but, if you happen to be taken among strangers, be silent. Don't allow your laughter be much, nor on many occasions, nor profuse. Avoid swearing, if possible, altogether; if not, as far as you are able. Avoid public and vulgar entertainments; but, if ever an occasion calls you to them, keep your attention upon the stretch, that you may not imperceptibly slide into vulgar manners. For be assured that if a person be ever so sound himself, yet, if his companion be infected, he who converses with him will be infected likewise. Provide things relating to the body no further than mere use; as meat, drink, clothing, house, family. But strike off and reject everything relating to show and delicacy. As far as possible, before marriage, keep yourself pure from familiarities with women, and, if you indulge them, let it be lawfully." But don't therefore be troublesome and full of reproofs to those who use these liberties, nor frequently boast that you yourself don't. If anyone tells you that such a person speaks ill of you, don't make excuses about what is said of you, but answer: " He does not know my other faults, else he would not have mentioned only these." It is not necessary for you to appear often at public spectacles; but if ever there is a proper occasion for you to be there, don't appear more solicitous for anyone than for yourself; that is, wish things to be only just as they are, and him only to conquer who is the conqueror, for thus you will meet with no hindrance. But abstain entirely from declamations and derision and violent emotions. And when you come away, don't discourse a great deal on what has passed, and what does not contribute to your own amendment. For it would appear by such discourse that you were immoderately struck with the show. Go not [of your own accord] to the rehearsals of anyauthors, nor appear [at them] readily. But, if you do appear, keepyour gravity and sedateness, and at the same time avoid being morose. 
When you are going to confer with anyone, and particularly of those in a superior station, represent to yourself how Socrates or Zeno would behave in such a case, and you will not be at a loss to make a proper use of whatever may occur. When you are going to any of the people in power, represent to yourself that you will not find him at home; that you will not be admitted; that the doors will not be opened to you; that he will take no notice of you. If, with all this, it is your duty to go, bear what happens, and never say [to yourself], " It was not worth so much." For this is vulgar, and like a man dazed by external things. In parties of conversation, avoid a frequent and excessive mention of your own actions and dangers. For, however agreeable it may be to yourself to mention the risks you have run, it is not equally agreeable to others to hear your adventures. Avoid, likewise, an endeavor to excite laughter. For this is a slippery point, which may throw you into vulgar manners, and, besides, may be apt to lessen you in the esteem of your acquaintance. Approaches to indecent discourse are likewise dangerous. Whenever, therefore, anything of this sort happens, if there be a proper opportunity, rebuke him who makes advances that way; or, at least, by silence and blushing and a forbidding look, show yourself to be displeased by such talk.34.If you are struck by the appearance of any promised pleasure, guard yourself against being hurried away by it; but let the affair wait your leisure, and procure yourself some delay. Then bring to your mind both points of time: that in which you will enjoy the pleasure, and that in which you will repent and reproach yourself after you have enjoyed it; and set before you, in opposition to these, how you will be glad and applaud yourself if you abstain. And even though it should appear to you a seasonable gratification, take heed that its enticing, and agreeable and attractive force may not subdue you; but set in opposition to this how much better it is to be conscious of having gained so great a victory.35.When you do anything from a clear judgment that it ought to be done, never shun the being seen to do it, even though the world should make a wrong supposition about it; for, if you don't act right, shun the action itself; but, if you do, why are you afraid of those who censure you wrongly?36.As the proposition, "Either it is day or it is night," is extremely proper for a disjunctive argument, but quite improper in a conjunctive one, so, at a feast, to choose the largest share is very suitable to the bodily appetite, but utterly inconsistent with the social spirit of an entertainment. When you eat with another, then, remember not only the value of those things which are set before you to the body, but the value of that behavior which ought to be observed towards the person who gives the entertainment.37.If you have assumed any character above your strength, you have both made an ill figure in that and quitted one which you might have supported.38.When walking, you are careful not to step on a nail or turn your foot; so likewise be careful not to hurt the ruling faculty of your mind. And, if we were to guard against this in every action, we should undertake the action with the greater safety.39.The body is to everyone the measure of the possessions proper for it, just as the foot is of the shoe. 
If, therefore, you stop at this, you will keep the measure; but if you move beyond it, you must necessarily be carried forward, as down a cliff; as in the case of a shoe, if you go beyond its fitness to the foot, it comes first to be gilded, then purple, and then studded with jewels. For to that which once exceeds a due measure, there is no bound.40.Women from fourteen years old are flattered with the title of "mistresses" by the men. Therefore, perceiving that they are regarded only as qualified to give the men pleasure, they begin to adorn themselves, and in that to place ill their hopes. We should, therefore, fix our attention on making them sensible that they are valued for the appearance of decent, modest and discreet behavior.41.It is a mark of want of genius to spend much time in things relating to the body, as to be long in our exercises, in eating and drinking, and in the discharge of other animal functions. These should be done incidentally and slightly, and our whole attention be engaged in the care of the understanding.42.When any person harms you, or speaks badly of you, remember that he acts or speaks from a supposition of its being his duty. Now, it is not possible that he should follow what appears right to you, but what appears so to himself. Therefore, if he judges from a wrong appearance, he is the person hurt, since he too is the person deceived. For if anyone should suppose a true proposition to be false, the proposition is not hurt, but he who is deceived about it. Setting out, then, from these principles, you will meekly bear a person who reviles you, for you will say upon every occasion, "It seemed so to him."43.Everything has two handles, the one by which it may be carried, the other by which it cannot. If your brother acts unjustly, don't lay hold on the action by the handle of his injustice, for by that it cannot be carried; but by the opposite, that he is your brother, that he was brought up with you; and thus you will lay hold on it, as it is to be carried.44.These reasonings are unconnected: "I am richer than you, therefore I am better"; "I am more eloquent than you, therefore I am better." The connection is rather this: "I am richer than you, therefore my property is greater than yours;" "I am more eloquent than you, therefore my style is better than yours." But you, after all, are neither property nor style.45.Does anyone bathe in a mighty little time? Don't say that he does it ill, but in a mighty little time. Does anyone drink a great quantity of wine? Don't say that he does ill, but that he drinks a great quantity. For, unless you perfectly understand the principle from which anyone acts, how should you know if he acts ill? Thus you will not run the hazard of assenting to any appearances but such as you fully comprehend.46.Never call yourself a philosopher, nor talk a great deal among the unlearned about theorems, but act conformably to them. Thus, at an entertainment, don't talk how persons ought to eat, but eat as you ought. For remember that in this manner Socrates also universally avoided all ostentation. And when persons came to him and desired to be recommended by him to philosophers, he took and- recommended them, so well did he bear being overlooked. So that if ever any talk should happen among the unlearned concerning philosophic theorems, be you, for the most part, silent. For there is great danger in immediately throwing out what you have not digested. 
And, if anyone tells you that you know nothing, and you are not nettled at it, then you may be sure that you have begun your business. For sheep don't throw up the grass to show the shepherds how much they have eaten; but, inwardly digesting their food, they outwardly produce wool and milk. Thus, therefore, do you likewise not show theorems to the unlearned, but the actions produced by them after they have been digested.47.When you have brought yourself to supply the necessities of your body at a small price, don't pique yourself upon it; nor, if you drink water, be saying upon every occasion, "I drink water." But first consider how much more sparing and patient of hardship the poor are than we. But if at any time you would inure yourself by exercise to labor, and bearing hard trials, do it for your own sake, and not for the world; don't grasp statues, but, when you are violently thirsty, take a little cold water in your mouth, and spurt it out and tell nobody.48.The condition and characteristic of a vulgar person, is, that he never expects either benefit or hurt from himself, but from externals. The condition and characteristic of a philosopher is, that he expects all hurt and benefit from himself. The marks of a proficient are, that he censures no one, praises no one, blames no one, accuses no one, says nothing concerning himself as being anybody, or knowing anything: when he is, in any instance, hindered or restrained, he accuses himself; and, if he is praised, he secretly laughs at the person who praises him; and, if he is censured, he makes no defense. But he goes about with the caution of sick or injured people, dreading to move anything that is set right, before it is perfectly fixed. He suppresses all desire in himself; he transfers his aversion to those things only which thwart the proper use of our own faculty of choice; the exertion of his active powers towards anything is very gentle; if he appears stupid or ignorant, he does not care, and, in a word, he watches himself as an enemy, and one in ambush.49.When anyone shows himself overly confident in ability to understand and interpret the works of Chrysippus, say to yourself, " Unless Chrysippus had written obscurely, this person would have had no subject for his vanity. But what do I desire? To understand nature and follow her. I ask, then, who interprets her, and, finding Chrysippus does, I have recourse to him. I don't understand his writings. I seek, therefore, one to interpret them." So far there is nothing to value myself upon. And when I find an interpreter, what remains is to make use of his instructions. This alone is the valuable thing. But, if I admire nothing but merely the interpretation, what do I become more than a grammarian instead of a philosopher? Except, indeed, that instead of Homer I interpret Chrysippus. When anyone, therefore, desires me to read Chrysippus to him, I rather blush when I cannot show my actions agreeable and consonant to his discourse.50.Whatever moral rules you have deliberately proposed to yourself. abide by them as they were laws, and as if you would be guilty of impiety by violating any of them. Don't regard what anyone says of you, for this, after all, is no concern of yours. How long, then, will you put off thinking yourself worthy of the highest improvements and follow the distinctions of reason? You have received the philosophical theorems, with which you ought to be familiar, and you have been familiar with them. 
What other master, then, do you wait for, to throw upon that the delay of reforming yourself? You are no longer a boy, but a grown man. If, therefore, you will be negligent and slothful, and always add procrastination to procrastination, purpose to purpose, and fix day after day in which you will attend to yourself, you will insensibly continue without proficiency, and, living and dying, persevere in being one of the vulgar. This instant, then, think yourself worthy of living as a man grown up, and a proficient. Let whatever appears to be the best be to you an inviolable law. And if any instance of pain or pleasure, or glory or disgrace, is set before you, remember that now is the combat, now the Olympiad comes on, nor can it be put off. By once being defeated and giving way, proficiency is lost, or by the contrary preserved. Thus Socrates became perfect, improving himself by everything. attending to nothing but reason. And though you are not yet a Socrates, you ought, however, to live as one desirous of becoming a Socrates.51.The first and most necessary topic in philosophy is that of the use of moral theorems, such as, "We ought not to lie;" the second is that of demonstrations, such as, "What is the origin of our obligation not to lie;" the third gives strength and articulation to the other two, such as, "What is the origin of this is a demonstration." For what is demonstration? What is consequence? What contradiction? What truth? What falsehood? The third topic, then, is necessary on the account of the second, and the second on the account of the first. But the most necessary, and that whereon we ought to rest, is the first. But we act just on the contrary. For we spend all our time on the third topic, and employ all our diligence about that, and entirely neglect the first. Therefore, at the same time that we lie, we are immediately prepared to show how it is demonstrated that lying is not right.52.Upon all occasions we ought to have these maxims ready at hand: "Conduct me, Jove, and you, 0 Destiny, Wherever your decrees have fixed my station."Cleanthes "I follow cheerfully; and, did I not, Wicked and wretched, I must follow still Whoever yields properly to Fate, is deemed Wise among men, and knows the laws of heaven."Euripides, Frag. 965 And this third: "0 Crito, if it thus pleases the gods, thus let it be. Anytus and Melitus may kill me indeed, but hurt me they cannot."Plato's Crito and Apology THE END
true
true
true
The Enchiridion by Epictetus, part of the Internet Classics Archive
2024-10-12 00:00:00
2009-01-01 00:00:00
null
null
null
null
null
null
1,460,689
http://www.geekypeek.com/?p=631
null
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
7,621,437
http://src.am
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,384,005
https://blog.redwarp.app/image-filters/
Code bits
null
# Computing image filters with wgpu-rs This post describes creating a simple image processing pipeline with compute shaders, using `wgpu-rs` and `Rust` . # Getting started You probably already know this, but your GPU (aka your Graphic Processing Unit - your graphics card if you have one) does not only render graphics, but is also capable of computing regular algorithms. Yup, you can use your GPU to calculate a fibonacci sequence if that is your fancy. One of the things that your GPU excels at is parallel computation, as they are optimized to render multiple pixels at once. Accessing the power of the graphics cards for computing used to be fairly complex: - As usual, Nvidia has its own proprietary library, CUDA. - OpenCL is an open source and free parallel programming API made by the Khronos group (also responsible for OpenGL and Vulkan, all the cool stuff). - Android implemented their own compute API, RenderScript. Nowadays, each rendering API has their own solution as well. You can do GPU computation using - Metal on Apple. - DirectX 11+ on Windows. - Vulkan everywhere. In the `Rust` ecosystem, `wgpu-rs` is a great library that will abstract these different backends, and allow you to write portable GPU computation code that will run everywhere (hopefully, I'm currently only trying the code on a Windows machine without a means to really test portability). Who is the target of this article?Beginners in GPU programming like me, with some notion of`Rust` , who like the idea of using their GPU for something else than graphics, but are mostly tinkering and wondering what they are doing every step of the way. # Creating a basic grayscale filter The plan is simple: - Take a sample image. - Load it in the graphics card as a texture. - Apply a compute shader to calculate a grayscale version of it. - Retrieve the resulting image and save it to disk. ## A couple of dependencies... Let's start with creating a new project. ``` ``` As always, this will create a new `Rust` project, including a `Cargo.toml` file and a hello world `main.rs` file. Let's edit the `Cargo.toml` file and add all the dependencies we will need. ``` [] = "image-filters" = "0.1.0" = "2021" [] = "1.0" = "1.9" = "0.24" = "0.2" = "0.14" ``` So, what are those? `wgpu` is obvious.`image` will allow us to load a png file, decode it, and read it as a stream of bytes.`bytemuck` is a utility crate used for casting between plain data types.`anyhow` is here so we can rethrow most results as this is just sample code.`pollster` is used here as several function in`wgpu` are async.`pollster` lets you block a thread until a future completes. ## Wgpu basics Let's get started in the `main` method. ``` ``` We return an `anyhow::Result` to simplify error handling, and declare usage of `pollster::FutureExt` so we can `block_on()` the async calls easily. We then create the device and the queue. - The device represents an open connection to your GPU, and we will use it later to create the resources we need (like textures). - We will use the queue to issue commands to the GPU. ``` let instance = new; let adapter = instance .request_adapter .block_on .ok_or?; let = adapter .request_device .block_on?; ``` This is fairly standard: - you create your instance, requesting any backend. You could instead specify the one of your choice, like `wgpu::Backends::VULKAN` . - when creating your adapter, you can specify your power preferences. Here, I ask for `HighPerformance` , but you could also choose`LowPerformance` . 
- you then create your device and queue, and they will come in handy later for every operation. We use pollster here to block on `request_adapter` and `request_device` methods, as they are `async` calls. ## Loading the texture For simplicity, we shall work with a png file and include it as bytes in the source code. ``` let input_image = load_from_memory?.to_rgba8; let = input_image.dimensions; ``` Using the image crate, we load the sushi image, and make sure it is using the `rbga` format. Using the device, we then create a wgpu texture. ``` let texture_size = Extent3d ; let input_texture = device.create_texture; ``` - No mipmapping or multi sampling are used here, so we keep `mip_level_count` and`sample_count` to 1. - Its usage specifies: + `TEXTURE_BINDING` : the texture can be bound to a shader for sampling, meaning we will be able to retrieve its pixels in our compute code. +`COPY_DST` : we can copy data into it. And we need to copy data into it, as the texture is currently empty. - The format is another interesting beast: several formats are supported by `wgpu` . Using`Rgba8Unorm` means that the texture contains 8 bit per channel (aka a byte), in the r, g, b, a order, but that the u8 values from [0 - 255] of each channel will be converted to a float between [0 - 1]. ``` queue.write_texture; ``` We copy the image data to the texture, which we can do as we declared the texture usage `COPY_DST` . Every pixel is made of 4 bytes, one per color channel, meaning that `bytes_per_row` is 4 times the width of the image. ## Creating an output texture We will use an output texture to store the grayscale version of our image. ``` let output_texture = device.create_texture; ``` Its usage is slightly different: `COPY_SRC` instead of`COPY_DST` , as we will copy from it later to retrieve our filtered image.`STORAGE_BINDING` instead of`TEXTURE_BINDING` to indicate that it will be bound in a shader as a place to store the computation result. ## Shader time ### Shader what? A compute shader is a set of instructions that will be given to your GPU to tell it what calculations are needed. In the same way that a CPU program can be written in multiple languages (Rust, C, C++, ...), a GPU program can be written in multiple languages (GLSL, HLSL, SIR-V, MSL) that need to be compiled as well. It could be a mess, but `wgpu` uses a universal shader translator, `naga` , that allow you to write your shader in `wgsl` or `glsl` , and make sure they are properly converted for each backend. If you run your program on an Apple computer using the `metal` backend, your shader will be translated to the metal shading language (or `msl` ) automagically. With all that being said, let's take a look at our `wgsl` instructions to convert an image from color to grayscale. 
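Before the shader itself, here is a rough, self-contained sketch of the setup described so far: creating the device and queue, uploading the input image, and allocating the output texture. It assumes a wgpu 0.12-era API (the one matching the `[[group(...)]]` WGSL attribute syntax used below); the labels, the bundled file name and the exact descriptor values are illustrative assumptions rather than the project's actual code.

```rust
// Minimal sketch, assuming a wgpu 0.12-era API; labels and file names are illustrative.
use pollster::FutureExt;

let instance = wgpu::Instance::new(wgpu::Backends::all());
let adapter = instance
    .request_adapter(&wgpu::RequestAdapterOptions {
        power_preference: wgpu::PowerPreference::HighPerformance,
        ..Default::default()
    })
    .block_on()
    .ok_or_else(|| anyhow::anyhow!("no suitable GPU adapter"))?;
let (device, queue) = adapter
    .request_device(&wgpu::DeviceDescriptor::default(), None)
    .block_on()?;

// Decode the bundled image and upload it as an Rgba8Unorm texture.
let input_image = image::load_from_memory(include_bytes!("sushi.png"))?.to_rgba8();
let (width, height) = input_image.dimensions();

let texture_size = wgpu::Extent3d {
    width,
    height,
    depth_or_array_layers: 1,
};
let input_texture = device.create_texture(&wgpu::TextureDescriptor {
    label: Some("input_texture"),
    size: texture_size,
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    format: wgpu::TextureFormat::Rgba8Unorm,
    usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
});
queue.write_texture(
    wgpu::ImageCopyTexture {
        texture: &input_texture,
        mip_level: 0,
        origin: wgpu::Origin3d::ZERO,
        aspect: wgpu::TextureAspect::All,
    },
    input_image.as_raw(),
    wgpu::ImageDataLayout {
        offset: 0,
        bytes_per_row: std::num::NonZeroU32::new(4 * width),
        rows_per_image: None,
    },
    texture_size,
);

// The output texture only differs in usage: the shader writes into it
// (STORAGE_BINDING) and we later copy out of it (COPY_SRC).
let output_texture = device.create_texture(&wgpu::TextureDescriptor {
    label: Some("output_texture"),
    size: texture_size,
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    format: wgpu::TextureFormat::Rgba8Unorm,
    usage: wgpu::TextureUsages::STORAGE_BINDING | wgpu::TextureUsages::COPY_SRC,
});
```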
``` [[group(0), binding(0)]] var input_texture : texture_2d<f32>; [[group(0), binding(1)]] var output_texture : texture_storage_2d<rgba8unorm, write>; [[stage(compute), workgroup_size(16, 16)]] fn grayscale_main( [[builtin(global_invocation_id)]] global_id : vec3<u32>, ) { let dimensions = textureDimensions(input_texture); let coords = vec2<i32>(global_id.xy); if(coords.x >= dimensions.x || coords.y >= dimensions.y) { return; } let color = textureLoad(input_texture, coords.xy, 0); let gray = dot(vec3<f32>(0.299, 0.587, 0.114), color.rgb); textureStore(output_texture, coords.xy, vec4<f32>(gray, gray, gray, color.a)); } ``` Contrarily to the CPU approach, where we would write one piece of code that iterates on every pixel to calculate its grayscale value, the compute shader will be a piece of code that runs concurrently on each pixel. We declare two variable, input and output texture, that match the textures we created in `Rust` . The output is of the type `texture_storage_2d` , with the same `rgba8unorm` type as before. Our `grayscale_main` function declares a **workgroup size**, but more on that later. The rest is straightforward: - Get the coordinates of the current pixel. - Get the dimensions of the input image. - If we are out of bounds, return. - Load the pixel. - Calculate the gray value of said pixel. - Write it to the output texture. Having chosen the `Rbga8Unorm` format for our textures, the colors are retrieved as a float between 0 and 1, and we don't need to cast them when multiplying the r, g and b values to figure out the grayscale value. If we had chosen instead the`Rbga8Uint` format instead, textureLoad would instead return a color of type`vec<u8>` , keeping the values between 0 and 255, and we would first need to cast them to float, before multiplying them and recasting them to unsigned byte before writing down the output. ### Loading the shader and creating the pipeline Okay, back to Rust! ``` let shader = device.create_shader_module; let pipeline = device.create_compute_pipeline; ``` Our shader is loaded as text. We specify our entry point, matching the `grayscale_main` function in the shader. ## Bind group We then proceed to creating our bind group: it is the Rust representation of the data that will be attached to the GPU: In the shader, we annotated our input_texture with `[[group(0), binding(0)]]` . We must now tell our `Rust` code what it corresponds to. ``` let texture_bind_group = device.create_bind_group; ``` For the group 0, we match our `input_texture` to the binding 0, and our `output_texture` to the binding 1, just like in the shader! In this example, we bind two textures, but we could also bind data buffers or a texture sampler if we wanted. `pipeline.get_bind_group_layout(0)` automatically creates a bind group layout for us, based on the shader. Alternatively, we could create the bind group layout by hand instead, to be even more specific. It is out of scope here, so let's ignore that for this article. ## Workgroup and dispatch ### Workgroup ? Didn't I tell you that we would speak about workgroup? A workgroup is a set of invocations which concurrently execute a compute shader stage entry point (here, our main function), and share access to shader variables in the workgroup address space. In our shader, we specified a workgroup of dimension 16 by 16. It can be seen as 2D matrix of instructions executed at once. In our case, 16 by 16 equals 256. Our shader will process when running 256 pixels at once! Take that, sequential computing! 
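Back on the Rust side, the shader loading, pipeline and bind group creation described above could be sketched roughly like this, under the same wgpu 0.12-era assumptions (the shader path and labels are illustrative):

```rust
// Load the WGSL, build a compute pipeline around its entry point, and bind the
// two textures to group 0, matching the [[group(0), binding(n)]] attributes.
let shader = device.create_shader_module(&wgpu::ShaderModuleDescriptor {
    label: Some("grayscale_shader"),
    source: wgpu::ShaderSource::Wgsl(include_str!("shaders/grayscale.wgsl").into()),
});
let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
    label: Some("grayscale_pipeline"),
    layout: None, // let wgpu derive the pipeline layout from the shader
    module: &shader,
    entry_point: "grayscale_main",
});
let texture_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
    label: Some("texture_bind_group"),
    layout: &pipeline.get_bind_group_layout(0),
    entries: &[
        wgpu::BindGroupEntry {
            binding: 0,
            resource: wgpu::BindingResource::TextureView(
                &input_texture.create_view(&wgpu::TextureViewDescriptor::default()),
            ),
        },
        wgpu::BindGroupEntry {
            binding: 1,
            resource: wgpu::BindingResource::TextureView(
                &output_texture.create_view(&wgpu::TextureViewDescriptor::default()),
            ),
        },
    ],
});
```

Passing `layout: None` is what lets wgpu derive the bind group layout from the shader, which is why `pipeline.get_bind_group_layout(0)` works without declaring the layout by hand.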
Of course, our image is a bit bigger than 16x16, so we need to call this compute shader multiple times to handle every single pixel. How many times exactly? Well, we simply divide the width and height of our image by the workgroup dimensions, and it will tell us how many times we need to run this 16x16 matrix to cover everything. Let's have a simple helper method for that: ``` ``` This method makes sure that there will be enough workgroup to cover each pixel. If we had a width of 20 pixels and a height of 16, using the workgroup of dimension 16 by 16, we would be missing a band of 4 pixels by only creating a single workgroup. We would need to create a second workgroup to handle the extra pixels, and we would then be able to cover 32 pixels in width. Some work will go to waste, but it is better than not applying our filters to a band of pixels. ### Dispatching We will need a command encoder to chain our different operations: ``` let mut encoder = device.create_command_encoder; ``` And now we create our compute pass, set our pipeline, bind our textures, and dispatch our work to the GPU! ``` ``` Dispatching tells `wgpu` how many invocations of the shader, or how many workgroups, must be created in each dimension. For a picture of 48x32 pixels, we would need to dispatch 6 workgroups: 3 in the `x` dimensions times 2 in the`y` dimensions. `dispatch` takes a third argument, set here to 1: workgroup can also be defined in three dimensions! But we are working on 2d textures, so we won't use it. ### Global Invocation Id So how do we go from workgroup to pixel position? Simple: we used in the shader the `global_invocation_id` built-in variable! The `global_invocation_id` gives us the coordinate triple for the current invocation's corresponding compute shader grid point. Hum, I feel that is not helping so much. Let's just say that it multiplies the current workgroup identifier (our dispatch action creates several workgroup, and gives to each of them a `x` and a `y` ) with the workgroup size, and add to it the `local_invocation_id` , meaning the coordinates of the current invocation within its workgroup. Let's start again with our 48x32 image. 6 workgroup will be created, with ids (0, 0), (1, 0), (2, 0), (1, 0), (1, 1) and (1, 2) When the workgroup (1, 0) is running, 256 invocations will be running in parallel, with their own local identifier within the group: (0, 1), ... (0, 15), (1, 0) ... (7, 8) ... (15, 15). If we take the invocation (7, 8) of the workgroup (0, 1), its global invocation id will be (0 * 16 + 7, 1 * 16 + 8), meaning (7, 24). Which gives us the coordinate of the pixel this specific invocation will work on. ## Fetching our result Fetching our results will be done in three steps: - we will copy our texture to a buffer. - we will map our buffer, so it's available to the CPU. - we will recreate an image from the buffered data. ### Copying our texture to a buffer ``` encoder.copy_texture_to_buffer; ``` Wait what? What is this `padded_bytes_per_row` ? Where does that come from? I guess we need to speak about **padding**. Similarly to the method we used to copy our image to a texture, we must here specify the number of bytes we copy per line (or row) of our texture. There is a caveat though: This `bytes_per_row` argument must be a multiple of 256, or the function will panic. Reading the doc for this method states: /// # Panics - `source.layout.bytes_per_row` isn't divisible by [`COPY_BYTES_PER_ROW_ALIGNMENT` ]. `COPY_BYTES_PER_ROW_ALIGNMENT` is equal to 256. 
So we need to calculate a padded number of bytes per row: the smallest multiple of 256 that is at least our actual row size. Damn.

Let's take our 48x32 image again. Its width is 48. There are 4 bytes per pixel, so we would want to read 4 x 48 = 192 bytes per row. 192 is not a multiple of 256, so we take the next multiple of 256 that fits 192. In this case, well, that is 256. It will be our `padded_bytes_per_row` value.

Let's write a helper method to calculate that.

```
/// Compute the next multiple of 256 for texture retrieval padding.
fn padded_bytes_per_row(width: u32) -> usize {
    let bytes_per_row = width as usize * 4;
    let padding = (256 - bytes_per_row % 256) % 256;
    bytes_per_row + padding
}
```

Let's set `padded_bytes_per_row` and `unpadded_bytes_per_row` (we will need it too).

```
let padded_bytes_per_row = padded_bytes_per_row(width);
let unpadded_bytes_per_row = width as usize * 4;
```

We call the `copy_texture_to_buffer`, passing `padded_bytes_per_row` in the copy's data layout, as sketched above:

```
encoder.copy_texture_to_buffer(/* … bytes_per_row: padded_bytes_per_row … */);
```

### Time to submit work!

Up until now, we have been declaring to wgpu the work we want to be done, and we have added all of our compute commands to the encoder. But nothing has happened yet! Time to queue all of that work.

```
queue.submit(Some(encoder.finish()));
```

By doing so, we tell wgpu to start processing the commands of the encoder asynchronously.

### Mapping the data

Let's map our data, and wait until the submitted actions have been completed.

```
let buffer_slice = output_buffer.slice(..);
buffer_slice.map_async(wgpu::MapMode::Read, /* callback */);
device.poll(wgpu::Maintain::Wait);
```

We need to wait on `poll`, to make sure that the submitted instructions have been completed and that the data is available in the mapped buffer. `map_async` takes a callback, which probably should be used in real production code to check for errors. For this article, I'll just ignore it (bad, bad, bad).

We can then access the data:

```
let padded_data = buffer_slice.get_mapped_range();
```

At this point, we have a slice of data whose rows are padded to 256 bytes, and we need to convert it to our regular unpadded data. So let's create our final pixels:

```
let mut pixels: Vec<u8> = vec![0; unpadded_bytes_per_row * height as usize];
for (padded_row, pixel_row) in padded_data
    .chunks_exact(padded_bytes_per_row)
    .zip(pixels.chunks_exact_mut(unpadded_bytes_per_row))
{
    pixel_row.copy_from_slice(&padded_row[..unpadded_bytes_per_row]);
}
```

We create a `Vec<u8>` to contain our final data, and copy our data line by line, only keeping the unpadded bytes.

Finally, let's save!

```
// The output path is only an example.
if let Some(output_image) = image::RgbaImage::from_raw(width, height, pixels) {
    output_image.save("sushi_gray.png").ok();
}
```

We are done!

## Output

After all this work, you have it! The gray sushi!

# Final thoughts

So, is it worth it? You tell me! For this example in particular, definitely not! Iterating over an array of pixels, running an O(n) algorithm to change the color to gray... a CPU will do such a good job that it is not worth the trouble of writing all that code. But it was a fun thing to do!

An obvious caveat to this approach is that there is a limit to the texture size one can load in a GPU. For example, on Vulkan, the guaranteed maximum width and height of a 2d texture is 4096 pixels. If you wanted to load an image that is bigger than that (like, if your camera has a 48 megapixel resolution, and your photos are 7920x6002 pixels), you would need to write some extra code to split your image into smaller chunks, and reassemble the result.

# A few links

First of all, if you want to build it and run it yourself, you will find the code here:

I made a few other filters for fun, including a slightly more involved Gaussian blur:

Several useful links:

- wgpu-rs - homepage for the `wgpu-rs` project.
- Get started with GPU Compute on the web - it helped and inspired me to write this article.
- The sushi picture, by gnokii - royalty free sushi.
- WGSL Spec - read the doc, it helps!
true
true
true
null
2024-10-12 00:00:00
2022-02-11 00:00:00
null
null
null
null
null
null
7,882,563
http://startupunicorn.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
25,107,689
https://www.youtube.com/watch?v=to2SMng4u1k&feature=emb_logo
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,779,990
https://www.youtube.com/watch?v=-fXmaeDMsh0
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,829,421
http://blog.datafox.co/google-docs-track-stocks/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,782,815
http://www.programmingposters.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
31,822,414
https://highassurance.rs/landing.html
High Assurance Rust: Developing Secure and Robust Software
null
This book is an introduction to building performant software we can **justifiably trust**. That means having sufficient data to support confidence in our code's functionality and security. Trustworthiness is a hallmark of **high assurance** software.

With assurance as our driving concept, we'll take a hands-on, project-based approach to two fundamental but often inaccessible topics in software development: **systems programming** and **low-level software security**.

You'll learn Rust - a modern, multi-paradigm language that emphasizes speed and correctness. Most programming books teach a new language by presenting a dozen small, unrealistic programs. Not this one. We'll design, write, and validate a fully-featured alternative to the ordered map and set implementations in Rust's standard library. You'll gain a deep understanding of the Rust language by re-implementing one of its major dynamic collections, one idiomatic API at a time.

Unlike the standard version, our implementation will be:

- **Maximally Safe.** Upholds Rust's strongest memory safety guarantees, for all possible executions.
  - To test properties the compiler can't prove, we'll learn advanced program analysis techniques, including *differential fuzzing* and *deductive verification*\*.
- **Extremely Portable.** Capable of running on every operating system, or even without one (e.g. "bare metal").
  - Our library is a *hardened component*. To integrate it within larger codebases, we'll add *CFFI bindings* to make the Rust functions callable from other languages - including C and Python.
- **Highly Available.** Offers *fallible* APIs for handling cases that could otherwise result in a crash.
  - E.g. *Out-of-Memory (OOM) error* - when all pre-allocated memory has been exhausted.

## The State-of-the-Art in Practical Software Assurance

We'll use cutting-edge, open-source software assurance tools to validate the code we write in this book. Some of these tools are mature and used in commercial industry:

- `rustc` (modern compiler)
- `libFuzzer` (fuzz testing framework)
- `rr` ("time-travel" debugger)
- `qemu` (whole-system emulator)

Other tools are experimental and under active research. A full inventory is available in the appendix.

Visually, this book covers the below topics (contrasted roughly on tradeoff of **development speed** and **formal rigor**). Don't worry, we'll provide clear explanations and context for each.

Notice the bias toward development speed. We're interested in **lightweight processes** that, in the long run, enable us to **ship quality code faster** and spend **less time patching** security and reliability failures. Techniques you can apply to real-world code. Today.

Unlike other Rust books, you won't just learn the language. You'll learn how to *reason* about software security at the leading edge. To think like an attacker. And to write code resistant to attack. That mental model is valuable no matter what programming language you primarily use.

## Sponsors Supporting this Book

The development of this book (research, writing, and coding) is made possible through the generous support of:

Under the first tranche of the 2022 Project Grants Program. A full list of awarded projects is available here, please check out the range of exciting work happening within the global Rust community!

You need to build a data structure library to serve a mission-critical application.
It must run on nearly any device, operate in the field for years on end without error, and tolerate attacker-controlled input. There will be no patches and there can be no failures. Your code must survive.Ship strong. * == may be subject to change! This book is a work in progress. If you'd like to be notified when it's finished and a physical print is available, please sign up here.
true
true
true
null
2024-10-12 00:00:00
2022-06-14 00:00:00
null
null
null
null
null
null
12,529,493
https://sematext.com/blog/2016/09/13/logstash-alternatives/
5 Awesome Logstash Alternatives: Pros & Cons [2023] - Sematext
Radu Gheorghe
When it comes to centralizing logs to Elasticsearch, the first log shipper that comes to mind is Logstash. People hear about it even if it’s not clear what it does: – Bob: *I’m looking to aggregate logs* – Alice: *you mean… like… Logstash?* When you get into it, you realize centralizing logs often implies a bunch of things, and Logstash isn’t the only log shipper that fits the bill: **fetching data from a source**: a file, a UNIX socket, TCP, UDP…**processing it**: appending a timestamp, parsing unstructured data, adding Geo information based on IP**shipping it**to a destination. In this case, either Sematext Logs or Elasticsearch. Sematext Logs has an Elasticsearch API so shipping logs there is just as simple as shipping to an Elasticsearch instance. Keep in mind, the shipper should ideally be able to**buffer and retry**log shipping because Elasticsearch can be down or struggling, or the network can be down. **Use Logstash or any Logstash alternative to send logs to Sematext Logs – Hosted ELK as a Service.** Get Started In this post, we’ll describe Logstash and 5 of the best “alternative” log shippers (Logagent, Filebeat, Fluentd, rsyslog and syslog-ng), so you know which fits which use-case depending on their advantages. If you want to jump right to Sematext Logs and understand how to use them to centralize your logs, then check out this short video below. ## Logstash Logstash is not the oldest shipper of this list (that would be syslog-ng, ironically the only one with “new” in its name), but it’s certainly** the best known**. That’s because **it has lots of plugins**: inputs, codecs, filters and outputs. Basically, you can take pretty much any kind of data, enrich it as you wish, then push it to lots of destinations. **Typical use cases: What is Logstash used for?** Logstash is typically used for collecting, parsing, and storing logs for future use as part of a log management solution. **Logstash Advantages** Logstash’s main strongpoint is **flexibility, due to the number of plugins**. Also, its **clear documentation** and **straightforward configuration** format means it’s used in a variety of use-cases. This leads to a virtuous cycle: you can find online recipes for doing pretty much anything. Here are a few Logstash recipe examples from us: “5 minute tutorial intro”, “How to reindex data in Elasticsearch”, “How to parse Elasticsearch logs”, “How to rewrite Elasticsearch slowlogs so you can replay them with JMeter”. **Logstash Disadvantages** Logstash’s biggest con or “Achille’s heel” has always been **performance and resource consumption** (the default heap size is 1GB). Though performance improved a lot over the years, it’s still a lot slower than the alternatives. We’ve done some benchmarks comparing Logstash to rsyslog and to filebeat and Elasticsearch’s Ingest node. This can be a problem for high traffic deployments, when Logstash servers would need to be comparable with the Elasticsearch ones. That said, you can delegate the heavy processing to one or more central Logstash boxes, while keeping the logging servers with a simpler – and thus less resource-consuming – configuration. It also helps that Logstash comes with configurable in-memory or on-disk buffers: Because of the flexibility and abundance of recipes, **Logstash** **is a great tool for prototyping**, especially for more complex parsing. **If you have big servers,** you might as well install Logstash on each. You won’t need much buffering if you’re tailing files, because the file itself can act as a buffer (i.e. 
Logstash remembers where it left off): **If you have small servers,** installing Logstash on each is a no go, so you’ll need a lightweight log shipper on them, that could push data to Elasticsearch through one (or more) central Logstash servers: As your logging project moves forward, you may or may not need to change your log shipper because of performance/cost. When choosing whether Logstash performs well enough, it’s important to have a good estimation of throughput needs – which would predict how much you’d spend on Logstash hardware. #### Log Management & Analytics – A Quick Guide to Logging Basics Looking to replace Splunk or a similar commercial solution with Elasticsearch, Logstash, and Kibana (aka, “ELK stack” or “Elastic stack”) or an alternative logging stack? In this eBook, you’ll find useful how-to instructions, screenshots, code, info about structured logging with rsyslog and Elasticsearch, and more. **Download yours**. **Logstash vs Logagent** This is our log shipper that was born out of the need to make it easy for someone who didn’t use a log shipper before to send logs to Sematext Logs (our log management software that exposes the Elasticsearch API). And because Sematext Logs exposes the Elasticsearch API, Logagent can be just as easily used to push data to your own Elasticsearch cluster. **Logagent Advantages** The main one is ease of use: if **Logstash is easy** (actually, you still need a bit of learning if you never used it, that’s natural), Logagent really gets you started in a minute. It tails everything in /var/log out of the box, parses various logging formats out of the box (Elasticsearch, Solr, MongoDB, Apache HTTPD…). It can mask sensitive data like PII, date of birth, credit card numbers, etc. It will also do GeoIP enriching based on IPs (e.g. for access logs) and update the GeoIP database automatically. It’s also light and fast, you’ll be able to put it on most logging boxes (unless you have very small ones, like appliances). Like Logstash, Logagent has input, filter and output plugins. Persistent buffers are also available, and it can write to and read from Kafka. **Logagent Disadvantages** Logagent is still young, although is developing and maturing quickly. It has some interesting functionality (e.g. it accepts Heroku or Cloud Foundry logs), but it is not yet as flexible as Logstash. To summarize the main differences between Logstash and Logagent are that Logstash is more mature and more out-of-the-box functionality, while Logagent is lighter and easier to use. **Logagent Typical use-cases** Logagent is a good choice of a shipper that can do everything (tail, parse, buffer) that you can install on each logging server. Especially if you want to get started quickly. Logagent can easily parse and ship Docker containers logs. It works with Docker Swarm, Docker Datacenter, Docker Cloud, as well as Amazon EC2, Google Container Engine, Kubernetes, Mesos, RancherOS, and CoreOS, so for Docker log shipping, this is the tool to use. Sematext Logs also offers a **preconfigured, hosted Logagent, at no additional cost**. This is useful if you want to ship logs from journald, if you want to centralize GitHub events, or if you’re using a PaaS such as Cloud Foundry or Heroku. **Logstash vs Filebeat** As part of the Beats “family”, Filebeat is a lightweight log shipper that came to life precisely to address the weakness of Logstash: Filebeat was made to be that lightweight log shipper that pushes to Logstash, Kafka or Elasticsearch. 
So the main differences between Logstash and Filebeat are that Logstash has more functionality, while Filebeat takes less resources. The same goes when you compare Logstash vs Beats in general: while Logstash has a lot of inputs, there are specialized beats (most notably MetricBeat) that do the job of collecting data with very little CPU and RAM. **Filebeat Advantages** Filebeat is just a tiny binary with no dependencies. **It takes very little resources** and, though it’s young, I find it quite** reliable** – mainly because it’s **simple** and there are few things that can go wrong. That said, you have lots of knobs regarding what it can do. **For example**, how aggressive it should be in searching for new files to tail and when to close file handles when a file didn’t get changes for a while. To help you get started, Filebeat comes with modules for specific log types. **For example**, the Apache module will point Filebeat to default access.log and error.log paths, configure Elasticsearch’s Ingest node to parse them, configure Elasticsearch’s mappings and settings as well as deploy Kibana dashboards for analyzing things like response time and response code breakdown. **Filebeat Disadvantages** Filebeat’s scope is very limited, so you’ll have a problem to solve somewhere else. For example, if you use Logstash down the pipeline, you have about the same performance issue. Because of this, Filebeat’s scope is growing. Initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis. Filebeat can also do some filtering: it can drop events or append metadata to them. **Filebeat Typical use-cases** Filebeat is great for solving a specific problem: you log to files, and you want to either: **ship directly to Elasticsearch**. This works if you want to just “grep” them or if you log in JSON (Filebeat can parse JSON). Or, if you want to use Elasticsearch’s Ingest for parsing and enriching (assuming the performance and functionality of Ingest fits your needs)**put them in Kafka/Redis**, so another shipper (e.g. Logstash, or a custom Kafka consumer) can do the enriching and shipping. This assumes that the chosen shipper fits your functionality and performance needs**ship to Logstash**. Like the above, except you’re relying on Logstash to buffer instead of Kafka/Redis. Simpler, but less flexible and fault tolerant **Filebeat to Elasticsearch’s Ingest** Elasticsearch comes with its own parsing capabilities (like Logstash’s filters) called Ingest. This means you can push directly from Filebeat to Elasticsearch, and have Elasticsearch do both parsing and storing. You shouldn’t need a buffer when tailing files because, just as Logstash, Filebeat remembers where it left off: **Filebeat to Kafka** If you need buffering (e.g. because you don’t want to fill up the file system on logging servers), you can use a central Logstash for that. However, Logstash’s queue doesn’t have built-in sharding or replication. For larger deployments, you’d typically use Kafka as a queue instead, because Filebeat can talk to Kafka as well: Top 5 Logstash Alternatives: https://t.co/pJ6SlE7DW2 #filebeat @fluentd @rsyslog #syslogng #logagent pic.twitter.com/di0crzj2aY — Sematext Group, Inc. 
(@sematext) September 13, 2016 To summarize the **differences between Logstash and **Filebeat**:** Logstash | Filebeat | | Resource usage | heavy | light | Input options | many | fewer: files, TCP/UDP (including syslog), Kafka, etc | Output options | many | fewer: Logstash, Elasticsearch, Kafka, etc | Buffering | disk, memory | disk (beta), memory | ## Logstash vs **rsyslog** The default syslog daemon on most Linux distros, rsyslog can do so much more than just picking Linux logs from the syslog socket and writing to /var/log/messages. It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch. You can find more info on how to use rsyslog for processing Apache and system logs here. **Rsyslog Advantages ** **rsyslog is the fastest shipper** that we tested so far. If you use it as a simple router/shipper, any decent machine will be limited by network bandwidth, but it really shines when you want to parse multiple rules. Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim). This means that with 20-30 rules, like you have when parsing Cisco logs, it will outperform the regex-based parsers like grok by at least a factor of 100. It’s also **one of the lightest parsers you can find**, depending on the configured memory buffers. **Rsyslog Disadvantages** rsyslog requires more work to get the configuration right (you can find some sample configuration snippets here on our blog) and this is made more difficult by two things: - documentation is hard to navigate, especially for somebody new to the terminology - versions up to 5.x had a different configuration format (expanded from the syslogd config format, which it still supports). Newer versions can still work with the old format, but most newer features (like the Elasticsearch output, Kafka input and output) only work with the new configuration format Though rsyslog tends to be reliable once you get to a stable configuration, you’re likely to find some interesting bugs along the way. Automatic testing constantly improves in rsyslog, but it’s not yet as good as something like Logstash or Filebeat. To summarize, the main difference between Logstash and rsyslog is that Logstash is easier to use while rsyslog lighter. **Rsyslog Typical use-cases** rsyslog fits well in scenarios where you either need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container). If you need to do processing in another shipper (e.g. Logstash) you can forward JSON over TCP for example, or connect them via a Kafka/Redis buffer. rsyslog also works well when you need that ultimate performance. Especially if you have multiple parsing rules. Then it makes sense to invest time in getting that configuration working. To summarize the differences between Logstash and rsyslog: Logstash | rsyslog | | Resource usage | heavy | light | Inputs | many | fewer: files, all syslog flavors, Kafka | Filters | many | fewer: GeoIP, anonymizing, etc. Though events can be manipulated through variables and templates | Outputs | many | many (Elasticsearch, Kafka, SQL..) though still fewer than Logstash | Regex parsing | grok | grok (less mature) | Grammar-based parsing | dissect (less mature) | liblognorm (powerful, fast) | Multiple processing pipelines | yes | yes | Exposes internal metrics | yes, pull (HTTP API) | yes, push (input module) | Queues | memory, disk | memory, disk, hybrid. 
Outputs can have their own queues | Variables | event-specific (metadata) | event-specific and global | **Logstash vs syslog-ng** You can think of syslog-ng as an alternative to rsyslog (though historically it was actually the other way around). It’s also a modular syslog daemon, that can do much more than just syslog. It has disk buffers and Elasticsearch output support. Equipped with a grammar-based parser (PatternDB), it has all you probably need to be a good log shipper to Elasticsearch. **Syslog-ng Advantages** Like rsyslog, it’s a light log shipper and it also performs well. Probably not 100% as well as rsyslog because it has a simpler architecture, but we’ve seen 570K logs/s processed on a single host many years ago. Unlike rsyslog, it features a clear, consistent configuration format and has nice documentation. Packaging support for various distros is also very good. It’s also the only log shipper here that can run correlations across multiple log messages (assuming they are all in the buffer). **Syslog-ng Disadvantages** The main reason why distros switched to rsyslog was syslog-ng Premium Edition, which used to be much more feature-rich than the Open Source Edition which was somewhat restricted back then. We’re concentrating on the Open Source Edition here, all these log shippers are open source. Things have changed in the meantime, for example disk buffers, which used to be a PE feature, landed in OSE. Still, some features, like the reliable delivery protocol (with application-level acknowledgements) have not made it to OSE yet. **Syslog-ng Typical use-cases** Similarly to rsyslog, you’d probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing. As with rsyslog, there’s a Kafka output that allows you to use Kafka as a central queue and potentially do more processing in Logstash or a custom consumer: The difference is, syslog-ng has an easier, more polished feel than rsyslog, but likely not that ultimate performance: for example, only outputs are buffered, so processing is done before buffering – meaning that a processing spike would put pressure up the logging stream. **Logstash vs Fluentd** Fluentd was built on the idea of logging in JSON wherever possible (which is a practice we totally agree with!) so that log shippers down the line don’t have to guess which substring is which field of which type. As a result, there are logging libraries for virtually every language, meaning you can easily plug in your custom applications to your logging pipeline. **Fluentd Advantages** Like most Logstash plugins, Fluentd plugins are in Ruby and very easy to write. So there are lots of them, pretty much any source and destination has a plugin (with varying degrees of maturity, of course). This, coupled with the “fluent libraries” means you can easily hook almost anything to anything using Fluentd. Also, Fluentd is now a CNCF project, so the Kubernetes integration is very good. **Fluentd Disadvantages** Because in most cases you’ll get **structured data through Fluentd**, it’s not made to have the **flexibility of other shippers on this list** (Filebeat excluded). You can still parse unstructured via regular expressions and filter them using tags, for example, but you don’t get features such as local variables or full-blown conditionals. 
Also, while performance is fine for most use-cases, it’s not in on the top of this list: buffers exist only for outputs (like in syslog-ng), single-threaded core and the Ruby GIL for plugins means ultimate performance on big boxes is limited, but resource consumption is acceptable for most use-cases. **For small/embedded devices**, you might want to look at Fluent Bit, which is to Fluentd similar to how Filebeat is for Logstash. Except that Fluent Bit is single-threaded, so throughput will be limited. **Fluentd Typical use-cases** Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the number of plugins. Also, if most of the sources are custom applications, you may find it easier to work with fluent libraries than coupling a logging library with a log shipper. Especially if your applications are written in multiple languages – meaning you’d use multiple logging libraries, which may behave differently. To summarize the differences between Logstash and Fluentd: Logstash | Fluentd | | Resource usage | high | low | Variables | yes | no | Inputs | many | many | Outputs | many | many | Queue | memory, disk. For filters and outputs | Memory, disk. For outputs | Libraries | nothing specific | many | ### Don’t forget to download **your Quick Guide to Logging Basics** **Some honorable alternatives mentions** There are some technologies that are definitely worth mentioning in this conversation. Without trying to be exhaustive, we’ll try to address the most important ones. **Logstash vs Apache Flume** Apache Flume’s architecture is different than that of most shippers described here. You have sources (inputs), channels (buffers) and sinks (outputs). Processing, such as parsing unstructured data, would be done preferably in outputs, to avoid pipeline backpressure. The most interesting output is based on Morphlines, which can do processing like Logstash’s grok, but also send data to the likes of Solr and Elasticsearch. Unfortunately, the Morphlines Elasticsearch plugin didn’t get much attention since its initial contribution (by our colleague Paweł, many years ago). **Logstash vs Splunk** Splunk isn’t a log shipper, it’s a commercial logging solution, so it doesn’t compare directly to Logstash. To compare Logstash with Splunk, you’ll need to add at least Elasticsearch and Kibana in the mix, so you can have the complete ELK stack. Alternatively, you can point Logstash to Sematext Logs. That said, there are two main differences between Splunk and ELK: one is that ELK is open-source, and the other is that Splunk tends to do a lot of query-time parsing. By contrast, in ELK you’d typically parse logs with Logstash to make them structured, and index them in Elasticsearch. In short, compared to Splunk, ELK trades disk space and write performance for query performance. Which, for large datasets, is a good trade-off. If you want to read more about Splunk’s features and how it can help with log management, read our review of the best log analysis tools. Want to see how Sematext stacks up? Check out our page on Sematext vs Splunk. **Logstash vs Graylog** Graylog is another complete logging solution, an open-source alternative to Splunk. It uses Elasticsearch as its storage backend. Its graylog-server component aims to do what Logstash does and more: everything goes through graylog-server, from authentication to queries. graylog-server also has pipeline definitions and buffering parameters, like Logstash and other log shippers mentioned here. 
Graylog is nice because you have a complete logging solution, but it’s going to be harder to customize than an ELK stack. If you’re interested in learning more bout Graylog and how it compares to yet other similar solutions, check out our article on the best log management tools. **Conclusion: How does Logstash compare to these alternatives?** First of all, the conclusion is that you’re awesome for reading all the way to this point. If you did that, you get the nuances of an “it depends on your use-case” kind of answer. All these shippers have their pros and cons, and ultimately **it’s down to your specifications** (and in practice, also to your personal preferences) to choose the one that works best for you. **Avoid the hassle and costs of managing the Elastic Stack on your own servers by sending logs to Sematext Logs – Hosted ELK as a Service.** Get started! **If you need help deciding, integrating, or really any help with logging **don’t be afraid to reach out – **we offer ****Logging Consulting**.
true
true
true
Need a Logstash alternative? Learn differences, similarities, advantages & disadvantages in performance, config & capabilities of the most popular log shippers.
2024-10-12 00:00:00
2023-01-15 00:00:00
https://sematext.com/wp-…Alternatives.png
article
sematext.com
Sematext
null
null
27,636,171
https://www.youtube.com/watch?v=3j7am9kjMrk
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
41,378,881
https://sfg.civlab.org/
Civ Lab | SF Gov Graph
null
You need to enable JavaScript to run this app.
true
true
true
Complete map of San Francisco's government
2024-10-12 00:00:00
null
/og-image.png
website
civlab.org
sfg.civlab.org
null
null
38,196,909
https://threadreaderapp.com/thread/1721932638556991971.html
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
99,762
http://radar.oreilly.com/archives/2008/01/the_rest_of_the.html
Radar - O’Reilly
Ben Lorica; Matt Welsh
# Radar Now, next, and beyond: Tracking need-to-know trends at the intersection of business and technology ## Areas we’re focusing on: Few technologies have the potential to change the nature of work and how we live as artificial intelligence (AI) and machine learning (ML). Everything from new organizational structures and payment schemes to new expectations, skills, and tools will shape the future of the firm. Stay on top of the emerging tools, trends, issues, and context necessary for making informed decisions about business and technology. See how companies are using the cloud and next-generation architectures to keep up with changing markets and anticipate customer needs. We’re charting a course from today’s tech-driven economy to a “next” economy that strikes a better balance between people and automation.
true
true
true
Now, next, and beyond: Tracking need-to-know trends at the intersection of business and technology
2024-10-12 00:00:00
2024-01-01 00:00:00
https://cdn.oreillystati…ial-1200x630.jpg
article
oreilly.com
O’Reilly Media
null
null
1,804,518
http://www.youtube.com/watch?v=vqd9qbzT9tc
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
41,728,809
https://arrowsmithlabs.com/blog/what-happens-when-you-visit-a-liveview-url
What happens when you visit a LiveView URL?
null
*Learn Phoenix LiveView* is the comprehensive tutorial that teaches you everything you need to build a complex, realistic, fully-featured web app with Phoenix LiveView. Click here to learn more! A classic job interview question is: *“What happens when you type a URL into your browser’s address bar and hit Enter?”* In my last post, I explained how it works in a Phoenix app: Phoenix initializes a `%Plug.Conn{}` representing the incoming HTTP request, then passes it through a chain of functions returning a new `%Plug.Conn{}` with the response. For a static page rendered by a controller, that’s all. But if the page is a LiveView (arguably Phoenix’s killer feature), we’re only just beginning. In this post I’ll continue the story and explain how pressing “Enter” in your address bar can make Phoenix open a websocket and mount a LiveView. For illustrative purposes, let’s scaffold a LiveView in the same `Example` app from the previous post. Generate a set of LiveViews for a resource called `Product` : ``` $ mix phx.gen.live Store Product products name:string ``` Follow the generator’s instructions by adding routes to the router: ``` defmodule ExampleWeb.Router do use ExampleWeb, :router … scope "/", ExampleWeb do pipe_through :browser get "/", PageController, :home resources "/users", UserController + + live "/products", ProductLive.Index, :index + live "/products/new", ProductLive.Index, :new + live "/products/:id/edit", ProductLive.Index, :edit + + live "/products/:id", ProductLive.Show, :show + live "/products/:id/show/edit", ProductLive.Show, :edit end … ``` Then run the new migration file: ``` $ mix ecto.migrate 08:33:06.318 [info] == Running 20240929072955 Example.Repo.Migrations.CreateProducts.change/0 forward 08:33:06.319 [info] create table products 08:33:06.343 [info] == Migrated 20240929072955 in 0.0s ``` Start the server and visit localhost:4000/products. It renders the `ProductLive.Index` LiveView: Because this page is a LiveView, a websocket is now open between browser and server. LiveView lets us add rich, interactive UI without any custom Javascript: So how does it work? When you hit “Enter” in the address bar, your browser has no idea that the page will be served by LiveView. How could it know? It makes an HTTP `GET` request like it would for any other URL. Initially, Phoenix handles this like any other HTTP request: it initializes a `%Plug.Conn{}` then passes it through a list of plugs as defined in the `Endpoint` and `Router` . The path `GET /products` matches this route defined by `live/3` : ``` # lib/example_web/router.ex defmodule ExampleWeb.Router do … scope "/", ExampleWeb do pipe_through :browser … live "/products", ProductLive.Index, :index … ``` So the `%Plug.Conn{}` is passed through the `:browser` pipeline. Overall, the `%Plug.Conn{}` gets passed through the same chain of functions as before: ``` conn |> Plug.Static.call() |> Phoenix.LiveDashboard.RequestLogger.call() |> Phoenix.LiveReloader.call() |> Phoenix.CodeReloader.call() |> Phoenix.Ecto.CheckRepoStatus.call() |> Plug.RequestId.call() |> Plug.Telemetry.call() |> Plug.Parsers.call() |> Plug.MethodOverride.call()key |> Plug.Head.call() |> Plug.Session.call() |> Phoenix.Controller.accepts() |> Plug.Conn.fetch_session() |> Phoenix.LiveView.Router.fetch_live_flash() |> Phoenix.Controller.put_root_layout() |> Phoenix.Controller.protect_from_forgery() |> Phoenix.Controller.put_secure_browser_headers() ``` See my previous post if you’re not sure how this work. 
The client is still waiting for an HTTP response, but this time it won’t be rendered by a controller. It’ll be rendered by `ProductLive.Index` . No spam. Unsubscribe any time. First off, Phoenix **mounts** the LiveView by calling its `mount/3` function: ``` # lib/example_web/live/product_live/index.ex defmodule ExampleWeb.ProductLive.Index do use ExampleWeb, :live_view … @impl true def mount(_params, _session, socket) do {:ok, stream(socket, :products, Store.list_products())} end … ``` The argument `socket` is an instance of `%Phoenix.LiveView.Socket{}` , representing the websocket connection between client and server. (The actual websocket isn’t open yet, but we’ll discuss that in a moment.) We no longer have access to the `%Plug.Conn{}` , but the `%Plug.Conn{}` ‘s data will still be used when returning the HTTP response. For example, any HTTP headers set by the `Endpoint` or a router pipeline will be included in the response returned by the LiveView. `mount/3` must return a tuple `{:ok, socket}` , where `socket` is an updated `%Phoenix.LiveView.Socket{}` containing whatever data is necessary to render the initial LiveView. In this case, we’re loading all products then streaming them, but the details don’t matter here. After `mount/3` , LiveView calls `handle_params/3` to make additional updates to the socket if needed. (The distinction between `mount/3` and `handle_params/3` doesn’t matter for the purposes of this post): ``` # lib/example_web/live/product_live/index.ex defmodule ExampleWeb.ProductLive.Index do … @impl true def handle_params(params, _url, socket) do {:noreply, apply_action(socket, socket.assigns.live_action, params)} end … ``` Finally, LiveView must **render** its template, written in **HEEx** (HTML + Embedded Elixir). For `ProductLive.Index` the template is defined at `lib/example_web/live/product_live/index.html.heex` : ``` <.header> Listing Products <:actions> <.link patch={~p"/products/new"}> <.button>New Product</.button> </.link> </:actions> </.header> … ``` Equivalently, we could have defined a function `render/1` which renders HEEx with a `~H` sigil: ``` # lib/example_web/live/product_live/index.ex defmodule ExampleWeb.ProductLive.Index do … # This function, if it existed, would be exactly equivalent to defining the # template in a separate .html.heex file: @impl true def render(assigns) do ``` ``` ~H""" <.header> Listing Products <:actions> <.link patch={~p"/products/new"}> <.button>New Product</.button> </.link> </:actions> </.header> … """ ``` ``` end ``` The HTML rendered by this HEEx is sent as the response to the original HTTP request. See for yourself: open your browser’s network inspector, refresh the page, and look for the `GET` request to `/products` . You’ll see it received a `200` response with the rendered HTML: We still haven’t opened the websocket that makes LiveView work. But we’ve rendered a static page of HTML like so: HTTP is a **request-response** protocol. The browser sends a request, the server responds, then it’s over. If the browser wants more it needs to send another request. And if the server has something new to say, it can’t initiate the conversation, and can only wait for the browser’s next request. **Websockets** work differently. Unlike HTTP, a websocket allows **full duplex** communication, where either side can send a message to the other at any time. Once a websocket is open between browser and server, it stays open until one side closes it (e.g. user closes the browser tab). Websockets are how LiveView works its magic. 
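To make the full-duplex point concrete, here is a minimal, framework-free sketch of the browser's plain WebSocket API. It is not LiveView's own code, and the URL and messages are made up for illustration:

```javascript
// Plain browser WebSocket API, for illustration only (not LiveView's code).
// Once the connection is open, either side can send at any time.
const ws = new WebSocket("ws://localhost:4000/some/socket");

ws.addEventListener("open", () => {
  // Client -> server, whenever we like.
  ws.send(JSON.stringify({ event: "hello" }));
});

ws.addEventListener("message", (event) => {
  // Server -> client, pushed without the client asking first.
  console.log("server said:", event.data);
});

ws.addEventListener("close", () => console.log("socket closed"));
```

LiveView's JavaScript manages this kind of socket for you, layering channels, reconnection logic and diff payloads on top, and we're about to see it open one.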
Once the initial HTTP request has rendered its first response, it opens a websocket connection over which all future communication happens. Back in the network inspector, you can see a websocket connection has been opened to the URL `ws://localhost:4000/live/websocket` : Since we’re in `dev` mode, there’s also a websocket open to the path `/phoenix/live_reload/socket/websocket` , which is used for code reloading (i.e. automatically updating your app when you change the code.) It’s not opened in tests or production, and we won’t discuss it further here. Look again at the HTML that was rendered for the original GET request. It contains a `<script>` tag: Without getting into the full details of Phoenix asset compilation: this tag loads your app’s compiled `app.js` file, which includes the Javascript from `assets/js/app.js` in your repo. Among other things, that Javascript opens a websocket connection: ``` // assets/app.js … import {Socket} from "phoenix" import {LiveSocket} from "phoenix_live_view" … let liveSocket = new LiveSocket("/live", Socket, { longPollFallbackMs: 2500, params: {_csrf_token: csrfToken} }) … // connect if there are any LiveViews on the page liveSocket.connect() ``` Back in `endpoint.ex` , you can see that in addition to all the `plug` s, the server calls `socket/3` with first argument `"/live"` : ``` # lib/example_web/endpoint.ex defmodule ExampleWeb.Endpoint do use Phoenix.Endpoint, otp_app: :example … socket "/live", Phoenix.LiveView.Socket, websocket: [connect_info: [session: @session_options]], longpoll: [connect_info: [session: @session_options]] … ``` This defines a websocket endpoint at `/live/websocket` , which is what our browser is currently connected to as seen in the network inspector. If it’s not possible to open a websocket (true for around 2-3% of servers according to Elixir creator José Valim), it also provides an endpoint `/live/longpoll` so that the client can fall back to long polling. If you stick some debugging code in `ProductLive.Index.mount/3` , you might notice something interesting: ``` # lib/slax_web/live/produce_live/index.ex defmodule SlaxWeb.ProductLive.Index do … @impl true def mount(_params, _session, socket) do + IO.puts("mounting") {:ok, stream(socket, :products, Store.list_products())} end @impl true def handle_params(params, _url, socket) do + IO.puts("handling params") {:noreply, apply_action(socket, socket.assigns.live_action, params)} end … ``` Refresh the page and look in your server logs: ``` [info] GET /productsproducts mounting … handling params … [info] Sent 200 in 13ms [info] CONNECTED TO Phoenix.LiveView.Socket in 13µs … mounting … handling params … ``` As you can see, `mount/3` and `handle_params/3` are called **twice** when the page loads. (The template is rendered twice too, as you can see if you e.g. stick `<% IO.inspect("rendering") %>` somewhere in the `index.html.heex` template.) They’re called once for the initial HTTP request, then again after the websocket has connected. You can check the difference by calling `connected?/1` : ``` @impl true def mount(_params, _session, socket) do - IO.puts("mounting") + IO.puts("mounting (connected: #{connected?(socket)})") {:ok, stream(socket, :products, Store.list_products())} end @impl true def handle_params(params, _url, socket) do - IO.puts("handling params") + IO.puts("handling params (connected: #{connected?(socket)})") {:noreply, apply_action(socket, socket.assigns.live_action, params)} end ``` Refresh the page and check the logs again. 
``` [info] GET /products mounting (connected: false) … handling params (connected: false) … [info] Sent 200 in 13ms [info] CONNECTED TO Phoenix.LiveView.Socket in 13µs … mounting (connected: true) … handling params (connected: true) ``` By rendering the initial page of static HTML, LiveView gives the user a visual response as quickly as possible. The HTML also contains the `<script>` tag to load the Javascript that opens the websocket. Then once the websocket is open, LiveView calls `mount/3` again with the connected `socket` and re-renders. This lets you perform any additional initialization that requires a connected socket. With a websocket now open, LiveView waits for an event such as user interaction: And as you interact with the page, you can see messages being sent over the websocket in both directions. LiveView keeps the websocket open when possible. If, for example, you navigate to a new page with `push_navigate` , there’s no HTTP request: LiveView re-uses the existing websocket[1], and only calls `mount/3` once, with a connected `socket` for the new page’s LiveView. Re-using the websocket saves resources as it avoids the overhead of a new HTTP request. So that’s how a LiveView mounts. The above diagram doesn’t come close to including the full LiveView lifecycle - many things can happen after mounting, such as navigation with `navigate` or `patch` , events triggered by user interaction or pushed from hooks, messages handled with `handle_info/3` and more. But you can learn about all of that and more from my course at PhoenixLiveView.com. No spam. Unsubscribe any time.
true
true
true
Understand how Phoenix mounts a new LiveView and opens a websocket
2024-10-12 00:00:00
2024-10-03 00:00:00
https://arrowsmithlabs.c…liveview_url.jpg
article
phoenixonrails.com
Arrowsmith Labs
null
null
26,744,434
https://www.gq.com/story/get-fit-from-just-walking
How Fit Can You Get From Just Walking?
Graham Isador
Four months ago my friend John Sharkman stepped on the scale and realized he was the heaviest he'd ever been. Sharkman—a former college football quarterback—was weighing in at 263 pounds, fifty pounds heavier than his time as an elite athlete. The realization that he'd jumped up to the size of a lineman was humbling, and he knew he needed to shed some weight. He asked me, his fitness journalist friend, to help. But the request came with quite a number of caveats: he didn't want to cut off certain food or alcohol, he didn't want to go to the gym, and he didn't want the whole process to feel that hard. In the past, I've undertaken a number of successful fitness and fat loss challenges. I've taken all the pre-workout in the world, done thousands of kettlebell swings, gone paleo. But Sharkman's request got me thinking: What is the least amount of effort necessary for substantial weight loss? Can you get *real* results by just kind of messing around? So in our group chat, Sharkman and a few other friends made a commitment to walking 10,000 steps a day and tracking our food. We aimed for about 2,000 calories. Sharkman dubbed the initiative Health Zone. After four months following those guidelines, my friend dropped 43 pounds. Collectively the group chat was down 105. Those are life-changing, infomercial-pitch numbers. Some caveats obviously apply: losing weight is hard, and keeping it off is even harder. Your mileage will almost certainly vary. But the whole experience made me wonder: just how fit can you get from just walking ? "I think walking is probably the single most underutilized tool in health and wellness," says nutrition coach and personal trainer Jeremy Fernandes. According to Fernandes, the reason we rarely hear about walking as a major fitness tool—in the same conversations as stuff like yoga or expensive spinning bikes—is that people aren’t emotionally prepared for fitness to be easy. “Most people want to believe that working out and fat loss needs to be hard. If you need impossibly crushing workouts to get in better shape, then you’re not responsible when you fail,” he says. "But a basic program performed consistently—even a half-assed effort done consistently—can bring you a really long way, much further than going hardcore once in a while." It's not like walking is some secret. 10,000 steps is the default recommendation of some of the most popular fitness trackers on the market, and long walks have been a hidden weapon of superhero body transformations for ages. But until witnessing Sharkman undergo his transformation I didn't realize just how powerful just walking could be. Of course, it's not the right tool for every goal. It won't get you over the finish line of a marathon. And if you want to achieve some sort of beach body, unless you already have some muscle mass, at a certain point simply getting leaner starts to have diminishing returns. That’s why celebrity trainer and champion bodybuilder Eren Legend is wary of signing off on walking as a solution for looking better naked. "If you do cardio and you have a pear-shaped body, all that you can expect is to become a smaller pear," says Legend. “The only way to change your body composition, the shape and look of your body, is to perform a form of resistance-based training. That’s not to say that 10k steps is bad—if you’re regularly performing some type of physical activity your body is going to change. But is it the most efficient way? 
Some form of resistance training like weight lifting or sprints in addition to a nutrition plan will get you to your goals faster.” Similarly, fitness tracker Whoop has given up on counting steps at all together. While most trackers and fitness apps count steps, the designers behind Whoop believe step count alone doesn't tell you enough, choosing instead to measure heart rate. Whoop's vice president of performance Kristen Holmes—a three-time field hockey all-American and one of the most successful coaches in Ivy League history—explained: “Simply counting steps doesn’t really tell you that much. All steps aren’t created equal.” A brisk walk is more beneficial than a slow walk. A jog might be more beneficial than that, she said. The company determined heart rate was the best way to tell you how hard those steps were working. So if you're chasing high-level performance, single-digit body fat, or a bodybuilder physique, then relying solely a ton of walking isn't the right move. But the reality is that most average people are pretty far from those goals, and focusing on the routines of really high performers my be doing more harm than good. In other words, expecting that you'll accomplish the training required for a movie star body when starting out a fitness routine is setting yourself up for disappointment. Walking a bunch, on the other hand, is something that is relatively simple to fit into your everyday life. The best fitness routine is always going to be the routine that you follow consistently. And I can vouch for the—unscientific, absolutely not peer reviewed—results. "Walking is something you're completely capable of starting right now," said Sharkman. "It sounds cheesy to say changing your life is that simple, but this definitely changed mine."
true
true
true
Walking is good for you, obviously. But can it whip you into shape?
2024-10-12 00:00:00
2021-04-01 00:00:00
https://media.gq.com/pho…mit/IMG_0136.jpg
article
gq.com
GQ
null
null
6,719,265
https://plus.google.com/+ResearchatGoogle/posts/fpjmKvEkTEf
New community features for Google Chat and an update on Currents
Google
Note: This blog post outlines upcoming changes to Google Currents for Workspace users. For information on the previous deprecation of Google+ for users with personal Google accounts, please see this post . What's Changing We are nearing the end of this transition. Beginning July 5, 2023, Currents will no longer be available. Workspace administrators can export Currents data using Takeout before August 8, 2023. Beginning August 8th, Currents data will no longer be available for download. Although we are saying goodbye to Currents, we continue to invest in new features for Google Chat , so teams can connect and collaborate with a shared sense of belonging. Over the last year, we've delivered features designed to support community engagement at scale, and will continue to deliver more. Here is a summary of the features with additional details below: This month, we’re enabling new ways for organizations to share information across the enterprise with announcements in Google Chat . This gives admin controls to limit permissions for posting in a space, while enabling all members to read and react, helping ensure that important updates stay visible and relevant. Later this year, we plan to simplify membership management by integrating Google Groups with spaces in Chat, enable post-level metrics for announcements, and provide tools for Workspace administrators to manage spaces across their domain. Announcements in Google Chat Managing space membership with Google Groups We’ve already rolled out new ways to make conversations more expressive and engaging such as in-line threading to enable rich exploration of a specific topic without overtaking the main conversation and custom emojis to enable fun, personal expression. In-line threaded conversations Discover and join communities with up to 8,000 members We’ve also made it easier for individuals to discover and join communities of shared interest . By searching in Gmail , users can explore a directory of available spaces covering topics of personal or professional interest such as gardening, pets, career development, fitness, cultural identity, and more, with the ability to invite others to join via link. Last year, we increased the size of communities supported by spaces in Chat to 8,000 members , and we are working to scale this in a meaningful way later this year. A directory of spaces in Google Chat for users to join. Our partner community is extending the power of Chat through integrations with essential third-party apps such as Jira, GitHub, Asana, PagerDuty , Zendesk and Salesforce . Many organizations have built custom workflow apps using low-code and no-code tools , and we anticipate that this number will continue to grow with the GA releases of the Chat API and AppSheet’s Chat app building capabilities later this year. For teams to thrive in this rapidly changing era of hybrid work, it’s essential to build authentic personal connections and a strong sense of belonging, no matter when or where individuals work. We will continue to make Google Chat the best option for Workspace customers seeking to build a community and culture for hybrid teams, with much more to come later this year. Who's impacted Admins and end users Why it’s important The transition from Currents to spaces in Google Chat removes a separate, siloed destination and provides organizations with a modern, enterprise-grade experience that reflects how the world is working today. 
Google Workspace customers use Google Chat to communicate about projects, share organizational updates, and build community. Recommended action Availability Spaces in Google Chat are available to all Google Workspace customers and users with personal Google Accounts. Resources
true
true
true
Note: This blog post outlines upcoming changes to Google Currents for Workspace users. For information on the previous deprecation of Googl...
2024-10-12 00:00:00
2023-04-12 00:00:00
https://blogger.googleus…_LINKS%20(2).png
article
googleblog.com
Google Workspace Updates
null
null
28,346,710
https://www.colby.so/posts/live-search-with-rails-and-stimulusreflex
Building a Live Search Experience with StimulusReflex and Ruby on Rails
Davidcolby
# Building a Live Search Experience with StimulusReflex and Ruby on Rails 28 Aug 2021As we approach the release of Rails 7, the Rails ecosystem is full of options to build modern web applications, fast. Over the last 9 months, I’ve written articles on building type-as-you-search interfaces with Stimulus and with the full Hotwire stack, exploring a few of the options available to Rails developers. Today, we’re going to build a live search experience once more. This time with StimulusReflex, a “new way to craft modern, reactive web interface with Ruby on Rails”. StimulusReflex relies on WebSockets to pass events from the browser to Rails, and back again, and uses morphdom to make efficient updates on the client-side. When we’re finished, our application will look like this: It won’t win any beauty contests, but it will give us a chance to explore a few of the core concepts of StimulusReflex. As we work, you will notice some conceptual similarities between StimulusReflex and Turbo Streams, but there are major differences between the two projects, and StimulusReflex brings functionality and options that don’t exist in Turbo. Before we get started, this article will be most useful for folks who are comfortable with Ruby on Rails and who are new to StimulusReflex. If you prefer to skip ahead to the source for the finished project, you can find the full code that accompanies this article on Github. Let’s dive in. ## Setup To get started, we’ll create a new Rails application, install StimulusReflex, and scaffold up a `Player` resource that users will be able to search. From your terminal: In addition to the above, you’ll also need to have Redis installed and running in your development environment. The `stimulus_reflex:install` task will be enough to get things working in development but you should review the installation documentation in detail ahead of any production deployment of a StimulusReflex application. With the core of the application ready to go, start up your rails server and head to http://localhost:3000/players. Create a few players in the UI or from the Rails console before moving on. ## Search, with just Rails We’ll start by adding the ability to search players without any StimulusReflex at all, just a normal search form that hits the existing `players#index` action. To start, update `players/index.html.erb` as shown below: Here we’re rendering a search form at the top of the page and we’ve moved rendering players to a partial that doesn’t exist yet. Create that partial next: And fill it in with: Finally, we’ll add a very rudimentary search implementation to the `index` method in the `PlayersController` , like this: If you refresh /players now, you should be able to type in a search term, submit the search request to the server, and see the search applied when the index page reloads. Now let’s start adding StimulusReflex, one layer at a time. ## Creating a reflex StimulusReflex is built around the concept of reflexes. A `reflex` , to oversimplify it a bit, is a Ruby class that responds to user interactions from the front end. When you’re working with StimulusReflex, you’ll write a lot of reflexes that do work that might otherwise require controller actions and a lot of client-side JavaScript logic. Reflexes can do a lot, from re-rendering an entire page on demand to kicking off background jobs, but for our purposes we’re going to create one reflex that handles user interactions with the search form we added in the last section. 
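Since the snippets from the previous section aren't shown here, a rough sketch of the plain-Rails piece it describes follows. The matching logic and naming are illustrative guesses that follow standard scaffold conventions, not the article's exact code; the index view simply gains a small search form that submits `query` back to this same action.

```ruby
# app/controllers/players_controller.rb -- illustrative reconstruction of the
# "very rudimentary search" described above, not the article's exact code.
class PlayersController < ApplicationController
  def index
    @players =
      if params[:query].present?
        Player.where("name LIKE ?", "%#{params[:query]}%")
      else
        Player.all
      end
  end

  # ...the rest of the scaffolded actions are unchanged...
end
```

With that baseline in place, the reflex described next replaces the full-page round trip.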
Instead of a GET request to the `players#index` action, form submissions will call a method in the reflex class that processes the search request and updates the DOM with the results of the search.

We'll start by generating the reflex with the built-in generator. The generator creates two files: a `PlayerSearch` reflex class in the `reflexes` directory and a `player-search` Stimulus controller in `javascript/controllers`. We'll define the reflex in the `PlayerSearch` class, and then, optionally, we can use the `player-search` controller to trigger that reflex from a front-end action and hook into reflex life-cycle methods that we might care about on the front end.

The simplest implementation of a working `PlayerSearch` reflex is to update an instance variable from the reflex and then rely on a bit of StimulusReflex magic to do everything else. We'll start with the magical version. First, add a `search` method to the `PlayerSearch` reflex that assigns `@players` from the submitted query. Then update the search form to trigger the reflex on submit, and update `PlayersController#index` to only assign a new value to `@players` when it hasn't already been set by the reflex action. (These changes are sketched below.)

With these changes in place, we can refresh the players page, submit a search, and see that searching works fine. So what's going on here? In the form, we're listening for the submit event and, when it is triggered, the `data-reflex` attribute fires the `search` method that we defined in the `PlayerSearch` reflex class. `PlayerSearch#search` automatically gets access to the `params` from the nearest form, so we can use `params[:query]` like we would in a controller action. We use the query param to assign a value to `@players` and, because we haven't told it to do anything else, the reflex finishes by processing `PlayersController#index`, passing the updated `players` instance variable along the way and using `morphdom` to update the content of the page as efficiently as possible.

So we can finish this article by deleting the form's submit button and moving the reflex from the submit event on the form to the input event on the text field, right? Not so fast. While what we have "works", our implementation is currently inefficient and hard to maintain and expand. Future developers will have to piece together what's going on in `search`. We're also re-rendering the entire HTML body even though we know that only a small part of the page actually needs to change. We can do a little better.

## Using Selector Morphs

The magical re-processing of the `index` action happens because the default behavior of a reflex is to trigger a full-page morph when a reflex method runs. While page morphs are easy to work with, we can be more explicit about our intentions and more precise in our updates by using selector morphs.

Selector morphs are more efficient than page morphs because they skip routing, controller actions, and template rendering. They are also clearer in their intention and easier to reason about, since we know exactly what will change on the page when the reflex runs. Full-page morphs are powerful and simple to use, but my preference is to use selector morphs when the use case calls for updating small portions of the page. Let's replace the magical page morph with a selector morph.

First, as you might have guessed, selector morphs use an identifier to target their DOM changes. We'll add an id to the `<tbody>` that wraps the `players` partial to give the selector morph something to target.
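Here is a sketch of the page-morph version described above. The generator invocation and file paths follow StimulusReflex conventions, and the markup and query are assumptions rather than the original listings.

```ruby
# Typically generated with: rails generate stimulus_reflex PlayerSearch

# app/reflexes/player_search_reflex.rb
class PlayerSearchReflex < ApplicationReflex
  def search
    # params is serialized from the form that triggered the reflex,
    # so it behaves just like it would in a controller action
    @players = Player.where("name LIKE ?", "%#{params[:query]}%")
  end
end

# app/controllers/players_controller.rb
class PlayersController < ApplicationController
  def index
    # Only assign @players when the reflex hasn't already set it
    @players ||= Player.all
  end
end
```

```erb
<%# app/views/players/index.html.erb: the form now triggers the reflex on submit %>
<%= form_with url: players_path, method: :get,
              data: { reflex: "submit->PlayerSearch#search" } do |form| %>
  <%= form.text_field :query %>
  <%= form.submit "Search" %>
<% end %>

<table>
  <%# the id gives the upcoming selector morph a stable target %>
  <tbody id="players-list">
    <%= render "players", players: @players %>
  </tbody>
</table>
```

With `data-reflex` on the form, submissions now travel over the WebSocket connection, and the `@players` assigned in the reflex carries through to `PlayersController#index` when the page morph re-renders it.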
Next we'll update the search form. We've scrapped the submit button and replaced the `data-reflex` attribute with a Stimulus controller attached directly to the query text field. The `player-search` controller was created by the generator we ran earlier to create the `PlayerSearch` reflex, and we'll fill it in next.

In the controller, we're inheriting from a Stimulus `ApplicationController`, which was automatically created by the `stimulus_reflex:install` task we ran at the beginning of this article. Since we're inheriting from `ApplicationController`, we have access to `this.stimulate`, which we can use to trigger any reflex we like.

Why would we use a Stimulus controller instead of a `data-reflex` attribute on a DOM element? Using a Stimulus controller gives us a little more flexibility and power than attaching the reflex to the DOM directly, which we'll explore in the next section.

Before we expand the Stimulus controller, let's finish up the implementation of the selector morph by updating `PlayerSearch#search`. In the updated reflex, we no longer need `players` to be an instance variable. Instead, we pass it in as a local to the players partial, which the selector morph renders to replace the children of `#players-list`.

With this in place, we can refresh the page and start typing in the search form. If you've followed along so far, you should see that as you type, the content of the players table is updated. If you check the server logs, you'll see that instead of the controller action processing and the entire application layout re-rendering, the server only runs the database query to filter the players and then renders the players partial. Skipping routing and full-page rendering dramatically reduces the amount of time and resources used to handle the request.

## Expanding the Stimulus controller

Now we've got live search in place using a selector morph. Incredible work so far! Let's finish up by expanding the Stimulus controller to make the user experience a bit cleaner and learn a little more about StimulusReflex in the process.

First, searching on each keystroke isn't ideal. Let's adjust `search` to wait for the user to stop typing before calling the `PlayerSearch` reflex. Nothing fancy here, and you should probably reach for a more battle-tested debounce function in production, but it'll do for today.

Next, it would be nice to give the user a visual cue that the list of players has updated. One way to do that is to animate the list when it updates, and StimulusReflex helpfully gives us an easy way to listen for and react to reflex life-cycle events. We can combine a custom StimulusReflex client-side life-cycle callback (`beforeSearch`) with the Web Animations API to add a simple fade effect to the players list each time it updates. In addition to the client-side events, StimulusReflex provides server-side life-cycle callbacks, which we don't have a use for in this particular article, but they exist if you need them. (The controller and reflex, including the debounce and the fade, are sketched below.)

Now we have visual feedback for users as they type. Let's finish this article by allowing users to clear a search without having to backspace the input until it's empty. This last exercise will give us a chance to look at using more than one selector morph in a single reflex and to expand the Stimulus controller a bit more.

## Resetting search results

Our goal is to add a link to the page that displays whenever the search text box isn't empty.
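Before building the reset link, here is roughly where the controller and reflex might stand, as a sketch. The debounce interval, the animation values, and the use of `ApplicationController.render` are assumptions, and the text field is assumed to carry `data-controller="player-search"` and `data-action="input->player-search#search"`.

```javascript
// app/javascript/controllers/player_search_controller.js
import ApplicationController from './application_controller'

export default class extends ApplicationController {
  // Wait for the user to stop typing before stimulating the reflex
  search () {
    clearTimeout(this.timeout)
    this.timeout = setTimeout(() => {
      this.stimulate('PlayerSearch#search')
    }, 250)
  }

  // Custom client-side life-cycle callback, fired around the search reflex;
  // uses the Web Animations API to fade the players list each time it updates
  beforeSearch () {
    document
      .querySelector('#players-list')
      .animate([{ opacity: 0.3 }, { opacity: 1 }], { duration: 350 })
  }
}
```

```ruby
# app/reflexes/player_search_reflex.rb: selector morph instead of a full-page morph
class PlayerSearchReflex < ApplicationReflex
  def search
    players = Player.where("name LIKE ?", "%#{params[:query]}%")
    # Replace only the children of #players-list with the freshly rendered partial
    morph "#players-list",
          ApplicationController.render(partial: "players/players",
                                       locals: { players: players })
  end
end
```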
When a user clicks the link, the search box should be cleared, the players list should be updated to list all of the players in the database, and the reset link should be hidden.

We'll start by adding a new `reset_link` partial to render the link. The reset link only displays if the local `query` variable is present. Clicks on the link are routed to the `player-search` Stimulus controller, calling a `reset` function that doesn't exist yet.

Before we update the Stimulus controller, let's adjust the index view. We insert the new `reset_link` partial, wrapped in a `#reset-link` div. More importantly, we adjust how the `player-search` Stimulus controller is connected to the DOM. Instead of the controller being attached to the search text field, the controller now lives on a wrapper div. While we didn't have to make this change to the controller connection, doing so makes it clear that the controller is interested in more than just the text input and opens up the possibility of using targets to reference DOM elements more specifically in the future. This change also gives us an opportunity to look at one more piece of StimulusReflex-enabled functionality in Stimulus controllers.

Next, update the Stimulus controller. We make two important changes here. First, since the Stimulus controller is no longer inside of the search form, the search reflex will no longer be able to reference `params` implicitly. We handle this change by passing the value of the search box to `stimulate` as an additional argument. `stimulate` "is extremely flexible" and we take advantage of that flexibility to ensure the search reflex receives the search query even without access to the search form's params. Second, we add `reset`, which simply triggers the `search` reflex without an additional argument.

On the server side, we need to update `PlayerSearch#search` to take an optional `query` argument. The value of `query` is used to set the value of `players`, and then two selector morphs replace the content of `#players-list` and `#reset-link`. (All of these changes are sketched below.)

## An alternative approach

If you review the method signature of `stimulate`, you'll notice that we could have solved the problem of passing the value of the search box to the server in other ways. Instead of passing in `event.target.value`, we could have passed `event.target`, like this: `this.stimulate('PlayerSearch#search', event.target)`. This approach would override the default value of the server-side reflex `element`, allowing us to call `element.value` to access the value of the search box from the server.

While this would work for the `search` function, it wouldn't work for `reset`, since we need to ignore the value of the search box when resetting the form. We could make it all work by passing an element to override the default `element` assignment, but it would take more effort. Passing in the value explicitly allows us to use `PlayerSearch#search` to handle both `search` and `reset` requests and keeps our code a bit cleaner on the server side.

This is a matter of preference without a definitive answer on which approach is "best". Implementing a solution that overrides `element` on the server side would work fine. Also viable would be using an entirely different reflex action for the reset link. StimulusReflex offers plenty of flexibility, and some choices will come down to what feels best to you and your team.
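Putting it all together, a sketch of the final wiring might look like the following. The partial markup, the link text, and the way the text input is cleared are assumptions based on the description above.

```erb
<%# app/views/players/_reset_link.html.erb %>
<% if query.present? %>
  <%= link_to "Reset search", "#", data: { action: "player-search#reset" } %>
<% end %>

<%# app/views/players/index.html.erb: the controller now lives on a wrapper div %>
<div data-controller="player-search">
  <%= form_with url: players_path, method: :get do |form| %>
    <%= form.text_field :query, data: { action: "input->player-search#search" } %>
  <% end %>
  <div id="reset-link">
    <%= render "reset_link", query: params[:query] %>
  </div>
</div>
```

```javascript
// app/javascript/controllers/player_search_controller.js (final version)
import ApplicationController from './application_controller'

export default class extends ApplicationController {
  search (event) {
    clearTimeout(this.timeout)
    this.timeout = setTimeout(() => {
      // The controller is no longer inside the form, so pass the query explicitly
      this.stimulate('PlayerSearch#search', event.target.value)
    }, 250)
  }

  reset (event) {
    event.preventDefault()
    // Clearing the input this way is an assumption about how the original handled it
    this.element.querySelector('input[type="text"]').value = ''
    // Trigger the same reflex with no argument so the full list is rendered again
    this.stimulate('PlayerSearch#search')
  }

  // beforeSearch fade callback unchanged from the previous sketch
}
```

```ruby
# app/reflexes/player_search_reflex.rb: final version with an optional query argument
class PlayerSearchReflex < ApplicationReflex
  def search(query = "")
    players = query.present? ? Player.where("name LIKE ?", "%#{query}%") : Player.all

    # Two selector morphs: one for the table rows, one for the reset link
    morph "#players-list",
          ApplicationController.render(partial: "players/players",
                                       locals: { players: players })
    morph "#reset-link",
          ApplicationController.render(partial: "players/reset_link",
                                       locals: { query: query })
  end
end
```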
## Wrapping up

Today we looked at implementing a simple search-as-you-type interface with Ruby on Rails and StimulusReflex. This simple example should give you some indication of the power StimulusReflex has to deliver modern, fast web applications while keeping code complexity low and developer happiness high. Even better, StimulusReflex plays nicely with Turbo Drive and Turbo Frames, giving developers the ability to mix-and-match to choose the best tool for the job.

To keep learning about building Rails applications with StimulusReflex:

- Dive into the (excellent, very well-maintained) official documentation
- Check out demo applications demonstrating some core StimulusReflex concepts
- Join the StimulusReflex discord to learn from lots of folks way sharper than me
- Learn more advanced usage patterns with StimulusReflexPatterns from Julian Rubisch

As always, thanks for reading!
true
true
true
Using Ruby on Rails and StimulusReflex to build a search-as-you-type, instant feedback search experience
2024-10-12 00:00:00
2021-08-28 00:00:00
null
article
colby.so
Colby.so
null
null
9,913,726
https://blog.markewaldron.com/2015/06/19/how-to-add-disqus-to-your-ghost-blog/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
39,686,653
https://www.bloomberg.com/news/articles/2024-03-12/accel-backs-startup-seeking-to-use-ai-to-kill-finance-paperwork
Bloomberg
null
null
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
3,636,410
http://www.sebastianmarshall.com/working-anywhere
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,979,007
https://torrentfreak.com/huge-security-flaw-leaks-vpn-users-real-ip-addresses-150130/
Huge Security Flaw Leaks VPN Users' Real IP-Addresses * TorrentFreak
Ernesto Van der Sar
The Snowden revelations have made it clear that online privacy is certainly not a given. Just a few days ago we learned that the Canadian Government tracked visitors of dozens of popular file-sharing sites.

As these stories make headlines around the world, interest in anonymity services such as VPNs has increased, as even regular Internet users don't like the idea of being spied on. Unfortunately, even the best VPN services can't guarantee to be 100% secure.

This week a very concerning security flaw revealed that it's easy to see the real IP-addresses of many VPN users through a WebRTC feature. With a few lines of code, websites can make requests to STUN servers and log users' VPN IP-address and the "hidden" home IP-address, as well as local network addresses.

The vulnerability affects WebRTC-supporting browsers including Firefox and Chrome and appears to be limited to Windows machines. A demo published on GitHub by developer Daniel Roesler allows people to check if they are affected by the security flaw.

**IP-address leak**

The demo claims that browser plugins can't block the vulnerability, but luckily this isn't entirely true. There are several easy fixes available to patch the security hole.

**Chrome** users can install the WebRTC block extension or ScriptSafe, which both reportedly block the vulnerability.

**Firefox** users should be able to block the request with the NoScript addon. Alternatively, they can type "about:config" in the address bar and set the "media.peerconnection.enabled" setting to false.

TF asked various VPN providers to share their thoughts and tips on the vulnerability. Private Internet Access told us that they are currently investigating the issue to see what they can do on their end to address it. (Update link changed 2018: PIA published an article on the issue today)

TorGuard informed us that they issued a warning in a blog post along with instructions on how to stop the browser leak. Ben Van Der Pelt, TorGuard's CEO, further informed us that tunneling the VPN through a router is another fix.

"Perhaps the best way to be protected from WebRTC and similar vulnerabilities is to run the VPN tunnel directly on the router. This allows the user to be connected to a VPN directly via Wi-Fi, leaving no possibility of a rogue script bypassing a software VPN tunnel and finding one's real IP," Van der Pelt says.

"During our testing Windows users who were connected by way of a VPN router were not vulnerable to WebRTC IP leaks even without any browser fixes," he adds.

While the fixes above are all reported to work, the leak is a reminder that anonymity should never be taken for granted. As is often the case with these types of vulnerabilities, VPN and proxy users should regularly check that their connection is secure. This also includes testing against DNS leaks and proxy vulnerabilities.

**Update:** FreeBSD also appears to be affected by the vulnerability.

**Update:** Other OSes and browsers may also be affected; please test your connection to be sure.

**Update:** The WebRTC block extension is not 100% effective and can be bypassed with an iframe. The check page was updated to reveal this.
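For readers curious what the "few lines of code" mentioned above look like in practice, here is a rough, hypothetical illustration of the technique: it asks the browser's WebRTC stack for ICE candidates via a STUN server (the hostname below is only a placeholder) and logs any IP addresses found in them.

```javascript
// Ask WebRTC for ICE candidates and log any IPv4 addresses they contain.
// On an affected setup, a VPN user may see their real addresses appear here.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.example.org:3478' }]
})

pc.createDataChannel('probe') // a channel is needed so candidate gathering starts

pc.onicecandidate = (event) => {
  if (!event.candidate) return
  // Candidate strings look like: "candidate:... udp ... 192.168.1.23 54321 typ host ..."
  const match = event.candidate.candidate.match(/(\d{1,3}(\.\d{1,3}){3})/)
  if (match) console.log('Address exposed via WebRTC:', match[1])
}

pc.createOffer().then((offer) => pc.setLocalDescription(offer))
```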
true
true
true
VPN users are facing a massive security flaw as websites can easily see their home IP-addresses through WebRTC. The vulnerability is limited to supporting browsers such as Firefox and Chrome, and appears to affect Windows users only. Luckily the security hole is relatively easy to fix.
2024-10-12 00:00:00
2015-01-30 00:00:00
null
article
torrentfreak.com
Torrentfreak
null
null
28,449,703
https://www.visualcapitalist.com/comparing-genetic-similarities-of-various-life-forms/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null