Dataset schema (17 columns; `lengths` gives the min and max string length, `distinct values`/`classes` the number of unique values):

| Column | Type | Stats |
|---|---|---|
| `id` | int64 | 3 to 41.8M |
| `url` | string | lengths 1 to 1.84k |
| `title` | string | lengths 1 to 9.99k |
| `author` | string | lengths 1 to 10k |
| `markdown` | string | lengths 1 to 4.36M |
| `downloaded` | bool | 2 classes |
| `meta_extracted` | bool | 2 classes |
| `parsed` | bool | 2 classes |
| `description` | string | lengths 1 to 10k |
| `filedate` | string | 2 distinct values |
| `date` | string | lengths 9 to 19 |
| `image` | string | lengths 1 to 10k |
| `pagetype` | string | 365 distinct values |
| `hostname` | string | lengths 4 to 84 |
| `sitename` | string | lengths 1 to 1.6k |
| `tags` | string | 0 distinct values |
| `categories` | string | 0 distinct values |
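The three boolean columns track successive pipeline stages, and in the sample rows below a row that fails a stage carries null content fields. A minimal sketch of filtering for fully processed rows in plain Python (field names are from the schema above; the row dicts are abbreviated, illustrative versions of the samples):

```python
# Two abbreviated sample rows: one fully processed, one that failed
# at the download stage (its content fields are null/None).
rows = [
    {"id": 18202274, "hostname": "imgur.com",
     "downloaded": True, "meta_extracted": True, "parsed": True},
    {"id": 18152585, "hostname": None,
     "downloaded": False, "meta_extracted": False, "parsed": False},
]

def fully_parsed(rows):
    """Keep only rows where every pipeline stage succeeded."""
    return [r for r in rows
            if r["downloaded"] and r["meta_extracted"] and r["parsed"]]

print([r["id"] for r in fully_parsed(rows)])  # [18202274]
```

The same predicate would work as a filter over the full dataset, since the three flags are plain booleans in every row.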
---

- **id:** 18,202,274
- **url:** https://m.imgur.com/t/kanye_west/037AOBR
- **title:** Security tip beginners level: Don't do like Kanye, change your password from 000000 today
- **author:** null
- **markdown:** If you're seeing this message, that means JavaScript has been disabled on your browser, please enable JS to make Imgur work.
- **downloaded:** true
- **meta_extracted:** true
- **parsed:** true
- **description:** Discover topics like kanye west, security tip, and the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users like itsworthatry.
- **filedate:** 2024-10-12 00:00:00
- **date:** 2018-10-12 00:00:00
- **image:** https://i.imgur.com/fDhlKye.jpg?fbplay
- **pagetype:** video.other
- **hostname:** imgur.com
- **sitename:** Imgur
- **tags:** null
- **categories:** null
---

- **id:** 18,152,585
- **url:** https://boingboing.net/2018/10/05/anonymous-sources-bold-claims.html
- **title:** null
- **author:** null
- **markdown:** null
- **downloaded:** false
- **meta_extracted:** false
- **parsed:** false
- **description:** null
- **filedate:** null
- **date:** null
- **image:** null
- **pagetype:** null
- **hostname:** null
- **sitename:** null
- **tags:** null
- **categories:** null
---

- **id:** 12,237,570
- **url:** https://medium.com/@NewMountain/some-thoughts-on-elm-development-39a0f8a9002a
- **title:** null
- **author:** null
- **markdown:** null
- **downloaded:** false
- **meta_extracted:** false
- **parsed:** false
- **description:** null
- **filedate:** null
- **date:** null
- **image:** null
- **pagetype:** null
- **hostname:** null
- **sitename:** null
- **tags:** null
- **categories:** null
---

- **id:** 6,359,082
- **url:** http://www.dzyngiri.com/augmenting-your-joomla-development-prowess/
- **title:** dzyngiri.com
- **author:** null
- **markdown:** dzyngiri.com Buy this domain The domain maybe for sale. Click here for more information.
- **downloaded:** true
- **meta_extracted:** true
- **parsed:** true
- **description:** This website is for sale! dzyngiri.com is your first and best source for all of the information you’re looking for. From general topics to more of what you would expect to find here, dzyngiri.com has it all. We hope you find what you are searching for!
- **filedate:** 2024-10-12 00:00:00
- **date:** null
- **image:** null
- **pagetype:** null
- **hostname:** null
- **sitename:** dzyngiri.com - This website is for sale!
- **tags:** null
- **categories:** null
---

- **id:** 4,181,618
- **url:** http://www.fortunepick.com/blog-article/a-list-of-must-read-books-for-startups-and-entrepreneurs
- **title:** null
- **author:** null
- **markdown:** null
- **downloaded:** true
- **meta_extracted:** false
- **parsed:** false
- **description:** null
- **filedate:** null
- **date:** null
- **image:** null
- **pagetype:** null
- **hostname:** null
- **sitename:** null
- **tags:** null
- **categories:** null
---

- **id:** 41,127,989
- **url:** https://twitter.com/CrisGiardina/status/1818627205217272098
- **title:** x.com
- **author:** null
- **markdown:** null
- **downloaded:** true
- **meta_extracted:** true
- **parsed:** false
- **description:** null
- **filedate:** 2024-10-12 00:00:00
- **date:** null
- **image:** null
- **pagetype:** null
- **hostname:** null
- **sitename:** X (formerly Twitter)
- **tags:** null
- **categories:** null
---

- **id:** 7,104,184
- **url:** http://pages.github.com/
- **title:** GitHub Pages
- **author:** null

**markdown:**

Head over to GitHub and create a new public repository named *username*.github.io, where *username* is your username (or organization name) on GitHub. If the first part of the repository doesn’t exactly match your username, it won’t work, so make sure to get it right. GitHub Desktop is a great way to use Git and GitHub on macOS and Windows. Download GitHub Desktop.

Go to the folder where you want to store your project, and clone the new repository:

```
git clone https://github.com/username/username.github.io
```

Alternatively, click the "Set up in Desktop" button. When the GitHub desktop app opens, save the project. If the app doesn't open, launch it and clone the repository from the app. After finishing the installation, head back to GitHub.com and refresh the page.

Enter the project folder and add an index.html file:

```
cd username.github.io
echo "Hello World" > index.html
```

Or grab your favorite text editor and add an index.html file to your project:

```
<!DOCTYPE html>
<html>
<body>
<h1>Hello World</h1>
<p>I'm hosted with GitHub Pages.</p>
</body>
</html>
```

Add, commit, and push your changes:

```
git add --all
git commit -m "Initial commit"
git push -u origin main
```

Or enter the repository in GitHub Desktop, commit your changes, and press the publish button. Fire up a browser and go to **https://username.github.io**.

You also have the option to start with one of the pre-built themes, or to create a site from scratch. Head over to GitHub.com and create a new repository, or go to an existing one. Click on the **Settings** tab, scroll down to the **GitHub Pages** section, and press **Choose a theme**. Choose one of the themes from the carousel at the top. When you're done, click **Select theme** on the right. Use the editor to add content to your site. Enter a commit comment and click on **Commit changes** below the editor.
Head over to GitHub.com and create a new repository, or go to an existing one. Click on the **Create new file** button. Name the file `index.html` and type some HTML content into the editor. Scroll to the bottom of the page, write a commit message, and commit the new file. **Click on the Settings tab** and scroll down to the GitHub Pages section. Then select the **main branch** source and click on the **Save** button. Fire up a browser and go to **http:// username.github.io/repository**.
- **downloaded:** true
- **meta_extracted:** true
- **parsed:** true
- **description:** Websites for you and your projects, hosted directly from your GitHub repository. Just edit, push, and your changes are live.
- **filedate:** 2024-10-12 00:00:00
- **date:** 2024-01-01 00:00:00
- **image:** null
- **pagetype:** website
- **hostname:** github.com
- **sitename:** GitHub Pages
- **tags:** null
- **categories:** null
---

- **id:** 7,583,888
- **url:** http://cosmic.posthaven.com/learning-report-restaurants-and-inventory
- **title:** null
- **author:** null
- **markdown:** null
- **downloaded:** true
- **meta_extracted:** false
- **parsed:** false
- **description:** null
- **filedate:** null
- **date:** null
- **image:** null
- **pagetype:** null
- **hostname:** null
- **sitename:** null
- **tags:** null
- **categories:** null
---

- **id:** 30,808,184
- **url:** https://www.nytimes.com/2022/03/25/opinion/oscars-movies-end.html
- **title:** null
- **author:** null
- **markdown:** null
- **downloaded:** false
- **meta_extracted:** false
- **parsed:** false
- **description:** null
- **filedate:** null
- **date:** null
- **image:** null
- **pagetype:** null
- **hostname:** null
- **sitename:** null
- **tags:** null
- **categories:** null
---

- **id:** 41,740,932
- **url:** https://www.youtube.com/watch?v=QC4b2teG_hc
- **title:** null
- **author:** null
- **markdown:** null
- **downloaded:** false
- **meta_extracted:** false
- **parsed:** false
- **description:** null
- **filedate:** null
- **date:** null
- **image:** null
- **pagetype:** null
- **hostname:** null
- **sitename:** null
- **tags:** null
- **categories:** null
---

- **id:** 34,790,398
- **url:** https://www.foreignaffairs.com/ukraine/ukraine-and-contingency-global-order
- **title:** Ukraine and the Contingency of Global Order
- **author:** Hal Brands

**markdown:**

The moral arc of the universe is long, the saying goes, but it bends toward justice. That is a pleasing way to see the first year of Russia’s war in Ukraine. True, Ukraine hasn’t seen much justice in a conflict that has ravaged its territory, economy, and people. But the war has at least smashed Russian President Vladimir Putin’s military and confounded his imperial aspirations. It has seen Ukraine wildly outperform nearly all initial expectations. It has unified and invigorated the West. The good guys are winning, it seems. The bad guys are getting the cosmic comeuppance reserved for those on the wrong side of history. It is tempting to think that this outcome was inevitable. Putin’s regime and armed forces were so rotten, territorial conquest in the modern era had become so difficult, and the power of a democratic community united in support of Ukraine was so overwhelming that Moscow never had a chance. The war simply revealed the resilience of the liberal world—and the weaknesses of its enemies. It is a nice story, but it is mostly not true. The war, particularly in its early months, was a very close-run thing. Ukraine’s success—its survival, even—was never guaranteed. Different choices in Kyiv, Moscow, and Washington could have produced radically different outcomes, for Ukraine and for the rest of the world. Had Putin defeated Ukraine, Western policymakers might be grappling with pervasive insecurity in eastern Europe, an empowered axis of autocracies, and cascading global instability. Ukraine has come to be seen, perhaps prematurely, as the war that strengthened the liberal order; it could easily have weakened it, instead. Understanding what could have been in Ukraine is essential as the conflict enters its second year. Just because the war has gone relatively well for Ukraine and the Western world doesn’t mean that things will keep going their way.
War is one of humanity’s most contingent undertakings, and the outcome of this struggle will hinge as much on future decisions as on decisions taken so far. Events in Ukraine also remind us that world order is not a product of natural law or moral inevitability. It is the result of policies pursued under the excruciating pressures of crisis. Great global dramas can turn on small things; the arc of the universe is exactly what we make of it. By any reasonable historical standard, today’s world is remarkably peaceful, prosperous, and democratic. That world is the result of global clashes that ended in victories for the supporters of a liberal order—but didn’t have to. If a battle or two in northern France had gone differently in August and September of 1914, Germany might have quickly triumphed in World War I. Even after the war turned into a slugfest, Germany still might have prevailed. Had the German monarchy heeded the counsel of civilian advisers who urged against resuming unrestricted submarine warfare in early 1917, the United States would not have entered the war, and Germany’s enemies—a near-revolutionary Russia, an exhausted France, an almost insolvent United Kingdom—might well have folded. Had World War I gone differently, the rest of the twentieth century might have, too. A victorious Germany would have ruled a vast Mitteleuropa from Belgium to the Middle East. Autocratic forms of government would have been ascendant; illiberalism and instability might have radiated outward from a German-dominated Eurasia. The stakes of World War II were even higher. In hindsight, the victory of the Grand Alliance—so superior to the Axis in money, manpower, and machines—seems inevitable, but it didn’t look that way at the time. Bold strategies and good timing allowed Germany and Japan to overrun Europe and much of the Asia-Pacific. In early 1942, the Axis might have severed the Allies’ global supply lines with coordinated operations in the Middle East and the Indian Ocean. 
The Axis missed the opportunity; Germany and Japan were eventually crushed. Yet contingency and chance still mattered: the difference between victory and defeat in key clashes such as the Battle of Midway could have been as small as how accurately a few pilots dropped a few bombs at a pivotal moment. The outcome of the next great conflict, the Cold War, ushered in an age of globalization and democratic dominance. But although the capitalist bloc outperformed the communist bloc over the long run, it easily could have faltered at the outset. Had Washington not undertaken the Marshall Plan and the North Atlantic Treaty—two then radical departures from the U.S. diplomatic tradition—in the late 1940s, western Europe might have collapsed and taken the global balance of power with it. Counterfactual history isn’t just a game of what-if. Thinking about how major events might plausibly have gone differently underscores that today’s reality isn’t the only reality that was ever possible. War is a complex and unpredictable phenomenon, so the world that great wars shape is contingent, too. A year ago, many analysts didn’t expect an independent Ukraine to exist right now. When Putin invaded in February 2022, he envisioned a quick smash-and-grab operation that would seize the capital and other major cities, decapitate Ukraine’s government, and destroy the country’s ability to resist. The expectation, in the Kremlin and also in Washington, was that Kyiv would fall within days and that conventional resistance would cease shortly thereafter. Moscow would then control most of the country, leading to a Ukrainian insurgency with uncertain prospects. Some Western analysts were already looking beyond the war to the ramifications of a Ukrainian defeat. Within Ukraine, those consequences would have been awful—show trials, summary executions, and all the mayhem visited upon the areas that Russia did manage to occupy. The global consequences would also have been ominous. 
Putin might have parlayed victory into his long-sought post-Soviet imperium. A puppet Ukraine might have been dragooned into a union state with Russia and Belarus; Moldova would have come under pressure once Moscow created a land bridge to Transnistria, a separatist region that already hosts a contingent of Russian troops. And following Russia’s successful intervention in Kazakhstan in January 2022, the de facto occupation of Belarus preceding the war, and a brutal beatdown of Ukraine, which former Soviet republics would have defied Moscow’s commands? Perhaps the Baltic states, thanks to their alliance with Washington. But NATO would have faced insecurity up and down its eastern front. Through Belarus and Ukraine, Russia could have sought to intimidate Latvia, Lithuania, and Poland. The costs and difficulties of defending U.S. allies would have multiplied along with the potential avenues for a Russian attack, as a Moscow-led union state would have a much longer border with NATO. Finland and Sweden probably still would have sought NATO membership, but the debate within the alliance over whether to admit them—and antagonize an emboldened Putin—might have been much more contentious. The future of the authoritarian axis, by contrast, would have been bright. A Russian victory would have given the Moscow-Beijing partnership significant geopolitical momentum. An overstretched United States would have faced militarily ascendant rivals in both Europe and Asia. Successful aggression might still have triggered military spending hikes by scared democracies in Europe and Asia, but it would also have fostered an atmosphere of global disarray that favored predators and left democracies fighting back from a weaker position than they occupy today. As for ideological consequences, Putin would have been strengthened at home; his popularity would have skyrocketed, as it did after the annexation of Crimea in 2014. 
Admirers of autocracy around the world would have lauded Putin’s ruthlessness and cunning. The United States, fresh off its chaotic withdrawal from Afghanistan, would have faced still more claims that democracies were in retreat. When Putin invaded Ukraine, he envisioned a quick smash-and-grab operation. To be sure, victory in Ukraine wouldn’t have made Moscow bulletproof. A grinding insurgency, perhaps supported by NATO countries, might have sapped Russian power. The United States and many allies would have slammed Russia with sanctions. But an aggressive sanctions campaign might not have outlasted a conventional war that ended quickly, since in this scenario some European countries might have favored returning to business as usual. Enthusiasm for backing an insurgency might also have waned for similar reasons. Fortunately for Ukraine and the West, almost none of this happened. Russia’s post-Soviet empire is crumbling: the Central Asian states are restless, and not even Belarus will join Putin’s war. NATO’s situation has changed for the better. The alliance has rallied around Ukraine, enhanced its eastern defenses, and is in the process of welcoming Finland and Sweden. The global community of advanced democracies looks robust and resilient, as Russia hemorrhages influence and power. Sino-Russian relations have suffered, in part because Putin has asked for aid that China is reluctant to give. No one seems wowed by the achievements of autocracy today. On the battlefield and around the world, the gap between what Putin sought and what he got is enormous. But it is not clear that Russia was always destined for disaster. True, the war revealed that many Western observers had simply overestimated Russia’s military power, which was undermined by a variety of factors, including pervasive corruption and a force structure that disproportionately favored armor over infantry. 
Many Western analysts, perhaps influenced by the rapid collapse of Afghanistan in 2021, had equally underestimated Ukraine’s will and capability to fight. Even so, it was far from certain that Ukraine would withstand Russia’s initial onslaught. After all, flawed regimes and militaries can still deliver on the battlefield. Just before the Red Army—weakened by Stalin’s purges—was initially humiliated by Finland in 1939–40, it had crushed a stronger power, Japan, in Manchuria. And the reason so few analysts accurately predicted the course of the current war in Ukraine is that it was shaped by developments that were difficult to anticipate: Russia failed catastrophically to exploit its advantages, Ukraine demonstrated unexpected strengths and overcame its deficient preparation for war, and the outside world, especially the United States, boosted Kyiv with unprecedented support. None of this was inevitable. Ukrainian President Volodymyr Zelensky looked more like Ashraf Ghani than Winston Churchill in January 2022, when he seemed almost indifferent to a looming disaster. The United States and its European allies had given Ukraine only modest and hesitant backing after previous Russian invasions, in 2014 and 2015. Change any of the aforementioned factors that shaped the war, and its course might have looked very different. Consider the chaotic early days, when Ukraine’s predicament was dire. The country’s military was ill prepared and badly outnumbered on key fronts, facing as much as a 12 to 1 disadvantage around Kyiv. Russian forces swept across the south of Ukraine, taking Kherson and establishing a land bridge to Crimea. In the north and east, major cities—including Kyiv and Kharkiv—were besieged. Russian saboteurs and assassins were in Kyiv, seeking to kill Zelensky and decapitate the government.
Within days, the situation seemed so grim that the United States asked Zelensky if he planned to flee (and possibly offered to evacuate him), a course of action that some of his own advisers recommended. Had Zelensky gone, or had Kyiv fallen, Ukrainian elites might have wavered or defected—as Afghan elites did once a Taliban takeover seemed inevitable and as some Ukrainian officials did in the south during the Russian advance. The government might indeed have fragmented. Yet Putin’s gambit failed because Zelensky stayed—thereby beginning his transformation into a symbol of national cohesion and resistance—and because of several interrelated factors. Not least were Russian mistakes. Putin’s plan of attack was deeply flawed. Not expecting a serious fight, Russia spread its troops over several lines of advance, reducing their ability to overcome strenuous opposition on any of them. Obsessed with secrecy, the regime communicated that plan to key commanders, ministers, and units just days before the war. This approach didn’t stop U.S. intelligence from sniffing out the attack. But it did leave Russian forces woefully unprepared for a sharp, nasty conflict. And combined with Putin’s failure to appoint a single theater commander, it left Russian services and even individual units fighting their own separate wars—for instance, Russian airborne forces attempted high-risk airfield seizures without proper suppression of enemy air defenses or support from heavier ground forces—instead of working as a team. Some of these problems were related to the personalized nature of Putin’s regime. But Russian planning didn’t have to be as bad as it was, and even modest improvements might have paid major dividends. Had Russia concentrated on fewer fronts—whether reinforcing the drive on Kyiv or prioritizing the effort to cut off Ukrainian forces in the east—it might have overwhelmed Ukraine’s outnumbered and outgunned defenders.
Had the Russian leadership given key units more advance warning, those units might have prepared better tactical plans and logistical support operations. In the end, the Russian offensive was just shambolic enough to let Ukrainian forces fight a successful delaying action, holding the capital and sucking Putin’s military into a long, bloody slog. Russian mistakes were exacerbated by an unexpectedly tenacious, if somewhat haphazard, Ukrainian defense. The Ukrainian state was not ready for the war that unfolded, since most officials expected at most a major operation in the east. Putin was denied an open road to the capital mainly by the heroic commitment and sacrifice of understrength units that initially held key points, such as the bridge between the cities of Bucha and Irpin, against daunting odds. That effort was aided by large numbers of civilians and reservists who augmented regular units, reported the location of Russian forces, and otherwise contributed to an all-of-society resistance. That the U.S. government was so ready for the war offset the fact that the Ukrainian government was not. The Ukrainian military also performed impressively in key respects. It used terrain adeptly, conducting hit-and-run attacks against Russian columns moving through wooded areas and flooding the banks of the Irpin River to slow the enemy’s advance. It exploited simple technologies, such as cheap drones that could target Russian tanks. At key moments, Ukrainian commanders deployed scarce resources where they had an outsize impact—for instance, using limited artillery capabilities to prevent, or at least impede, Russia from easily taking Hostomel Airport outside Kyiv and from thereby creating an air bridge that would have enabled Moscow to deliver crucial reinforcements to the capital’s doorstep. Ukraine’s previously underwhelming political leadership also began to overperform. 
Zelensky in particular summoned all of his skills to rally the population, maintain governmental cohesion, and win international solidarity. Ukraine pulled through the first phase of the war because it did just well enough, in just enough areas, to thwart a less than competent attack—and because an astonishingly broad and brave response to the invasion helped compensate for a nearly fatal dearth of preparation for it. This defense, in turn, was strengthened by foreign support. Although the administration of U.S. President Joe Biden was pessimistic about Ukraine’s prospects, it was determined to make conquest harder for Putin. Having learned from its own contingency planning failures during the withdrawal from Afghanistan, Washington prepared extensively for Russia’s war in Ukraine. Before the invasion, a relentless drumbeat of U.S. warnings helped deny Putin the cloud of ambiguity in which he sought to start the war. Those warnings also encouraged some Ukrainian commanders to disperse air and artillery assets that might otherwise have been destroyed. Critically, the United States alerted Ukraine to key elements of the Russian invasion plan, such as the seizure of Hostomel Airport, which may have accelerated Kyiv’s response. Washington probably aided Ukraine in other essential ways—by helping blunt the much-feared Kremlin cyberoffensive, for instance—but few details are publicly available. In any event, that the U.S. government was so ready for the war offset the fact that the Ukrainian government was not. Most important was the near-complete reversal of previous policies regarding the arming of Ukraine, a change that began under the administration of U.S. President Donald Trump and accelerated dramatically under Biden. A Ukraine without Western military support never would have survived the opening months, or even weeks, of fighting against a better-armed Russia. 
But even before the invasion, the United States and several NATO allies began to rush antitank and antiaircraft weapons, ammunition, and other supplies to Ukraine. And according to *Politico Europe*, when Ukraine ran desperately short of ammunition after weeks of fighting, Bulgaria—with U.S. and British assistance—approved the emergency provision of Soviet-standard munitions to fill the gap. From that point onward, Western assistance—strategic and tactical intelligence, economic aid, and military support—consistently provided the margin between success and failure for Ukraine. Meanwhile, the United States also performed an essential “holding the ring” function—and ensured that the balance of outside intervention decisively favored Kyiv—by threatening China with sanctions and other consequences if it provided the military and economic aid Putin sought. In short, a combination of Russian blunders, Ukrainian commitment and creativity, and foreign support helped Kyiv manage a narrow escape. Yet even after Putin’s initial assault failed and the badly bloodied Russian military pulled back from Kyiv, the conflict’s trajectory remained uncertain. In the spring and summer of 2022, Russia retained crucial advantages, such as deeper artillery and ammunition reserves. Putin still had decent options. Had he mobilized 300,000 additional troops in the spring instead of waiting to do so in the fall, he could have paired a manpower advantage with an artillery advantage when Russian forces refocused on assaulting Ukrainian positions in the Donbas. Russia also could have begun systematically attacking Ukrainian infrastructure in the spring of 2022, before it had depleted its stockpiles of precision-guided munitions. Timing is everything in war, and Ukraine has succeeded in part because Putin has consistently lagged in adapting to changed conditions. Despite these failures, by June 2022, Russia’s assault in the Donbas was putting Ukraine under pressure. 
Ukrainian forces were at a tremendous artillery deficit; they absorbed heavy losses and were nearly enveloped near Severodonetsk. Western intervention again helped tip the balance. The provision of U.S.-made High Mobility Artillery Rocket Systems (HIMARS) and M-270 multiple-launch rocket systems, as well as British-made M777 howitzers, offset Ukraine’s artillery disadvantage and—combined with highly accurate intelligence from Washington and other supporters—allowed Kyiv to launch devastating strikes against Russian ammunition dumps, command hubs, and logistics nodes. When the Russian offensive ground to a halt, Putin’s forces were so weak that they folded in the face of twin offensives that Ukraine later launched in Kharkiv and Kherson. Counterfactual history can help illuminate the future as well as the past. In this case, it underscores the degree to which Ukrainian success has turned on factors that are not guaranteed to persist. For one thing, Ukraine has enjoyed a remarkable degree of social and political cohesion since the war’s early days. But that cohesion could be tested in the coming year, as the war drags on and Ukraine’s elite looks ahead to presidential elections in March 2024. And as Ukraine’s politics grow more fractious, sound decision-making—on issues as fundamental as where and when to launch future offensives—could become more difficult. Similarly, Ukraine has benefited tremendously from Russia’s poor planning, difficulty adjusting to battlefield setbacks, and political leadership that has struggled to grasp the extent of the challenges it confronts. If Moscow’s performance improves even modestly, Kyiv could face a whole new war. No one should rule this out. Militaries in even the most repressive societies can learn, and Russia may be fighting a smarter, if still quite savage, war than it was last year. 
Having first minimized the invasion and promised Russians that it would not affect their lives, Putin has finally acknowledged that a long, consuming war lies ahead. His military is preparing layered defenses in occupied areas while building up newly mobilized forces and carrying out vicious infrastructure attacks meant to grind down Ukraine’s economy and exhaust its air defenses. Its winter offensive around Bakhmut has resulted in egregious Russian losses, but as the military analyst Michael Kofman has noted, it has also deprived Kyiv of the initiative and traded expendable Russian forces—especially convicts—for higher-value Ukrainian personnel. Just because Ukraine hasn’t lost the war doesn’t mean that it has won. A range of futures are still possible, if not equally likely, from an outright Ukrainian victory resulting in the liberation of all occupied territory to a scenario in which Russia hangs on to substantial parts of Ukraine for the foreseeable future to an escalation into direct confrontation between Russia and NATO. There is also a warning for Washington in this analysis: the heaviest burdens may still lie ahead. Ukraine has survived so far because the United States and its allies have dramatically reduced the power disparity between Kyiv and Moscow and ensured that Putin can’t simply escalate or batter his way out of the conflict. Yet as Russia mobilizes more manpower and economic resources—while also importing drones, artillery, and other capabilities from Iran and North Korea—the cost of helping Kyiv stay ahead in this contest will increase. Witness the recent decision by several NATO countries to provide Ukraine with battle tanks, an episode that may simply presage the need for other advanced capabilities, whether longer-range missiles or fourth-generation fighter aircraft, in the months ahead. Finally, if the outcome of the war is not set in stone, neither are the contours of the world that the war will make. 
The conflict’s result will shape the perceived efficacy of autocracy and democracy, the degree of security that NATO enjoys on its eastern front, and the level of Russian influence over its neighbors. On these and other issues, the implications of a war that results in a resounding Russian defeat will be different than those of a war that ends with Russian troops occupying significant parts of Ukraine, with Moscow possessing the ability to renew hostilities when it wishes. The latter outcome might not look like such a triumph for the free world, after all. There are still other scenarios, such as a Chinese decision to aid Moscow more directly, that could change the global landscape dramatically. The war in Ukraine offers a variety of lessons, but perhaps the most crucial one is this: global order is neither inherently robust nor inherently fragile. It has exactly as much strength as those who value it can muster—and sustain—when it is tested.
- **downloaded:** true
- **meta_extracted:** true
- **parsed:** true
- **description:** What if the war had gone differently—or takes a sudden turn?
- **filedate:** 2024-10-12 00:00:00
- **date:** 2023-02-14 00:00:00
- **image:** https://cdn-live.foreign…pg?itok=qDz8qBvZ
- **pagetype:** Article
- **hostname:** foreignaffairs.com
- **sitename:** Foreign Affairs Magazine
- **tags:** null
- **categories:** null
5,082,253
http://news.cnet.com/8301-17938_105-57564757-1/nasa-sends-mona-lisa-to-the-moon-with-lasers/
CNET: Product reviews, advice, how-tos and the latest news
Jon Reed
Best of the Best Editors' picks and our top buying guides Best of the Best Editors' picks and our top buying guides Upgrade your inbox Get CNET Insider From talking fridges to iPhones, our experts are here to help make the world a little less complicated. ## More to Explore ## Latest ### Best Walmart Holiday Deals Still Available: Last Chance for Big Saving on Tech, Home Goods and More 5 minutes ago### Best Internet Providers in Honolulu, Hawaii 11 minutes ago### The Best Spots in Your Home To Help Indoor Plants Grow 19 minutes ago### Lemme Sleep Took Over My TikTok, So I Had to Try This Supplement Myself 2 hours ago### Quick and Easy Tips for Perfectly Crispy Bacon 2 hours ago### Best Places to Buy Glasses Online for 2024 3 hours ago### 23 Best Gifts for New Homeowners for the Holidays 2024 3 hours ago### How to Pause Your Internet Service 4 hours ago### ChatGPT Glossary: 48 AI Terms That Everyone Should Know 4 hours ago### How to Watch Ariana Grande on 'Saturday Night Live' Tonight Without Cable 5 hours ago### Best Gifts for Hikers, From Their Feet to Their Butts 5 hours ago### Aurora Viewers Share Stunning Photos of the Northern Lights 6 hours ago### This Visual Guide Shows Everyone How to Hit Daily Protein Needs 6 hours ago### 2025 Social Security COLA Increase: Here's What Happens Next 6 hours ago### Best iPhone 15 and iPhone 15 Pro Cases for 2024 6 hours ago## Our Expertise Expertise Lindsey Turrentine is executive vice president for content and audience. She has helped shape digital media since digital media was born. 0357911176 02468104 024681025 ## Tech ## Money ## Crossing the Broadband Divide Millions of Americans lack access to high-speed internet. Here's how to fix that. ## Energy and Utilities ## Deep Dives Immerse yourself in our in-depth stories. Get the best price on everything CNET Shopping helps you get the best prices on your favorite products. Get promo codes and discounts with a single click. Add to Chrome - it's free! 
## Internet Low-Cost Internet Guide for All 50 States: Despite the End of ACP, You Still Have Options 10/05/2024 ## Sleep Through the Night Get the best sleep of your life with our expert tips. ## Tech Tips Get the most out of your phone with this expert advice. ## Home ## Daily Puzzle Answers ## Living Off Grid CNET's Eric Mack has lived off the grid for over three years. Here's what he learned.
true
true
true
Get full-length product reviews, the latest news, tech coverage, daily deals, and category deep dives from CNET experts worldwide.
2024-10-12 00:00:00
2024-10-12 00:00:00
https://www.cnet.com/a/i…t=675&width=1200
website
cnet.com
CNET
null
null
13,997,180
https://singularityhub.com/2017/03/29/google-chases-general-intelligence-with-new-ai-that-has-a-memory/?utm_content=bufferf7fc6&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
Google Chases General Intelligence With New AI That Has a Memory
Shelly Fan
For a mind to be capable of tackling anything, it has to have a memory. Humans are exceptionally good at transferring old skills to new problems. Machines, despite all their recent wins against humans, aren’t. This is partly due to how they’re trained: artificial neural networks like Google’s DeepMind learn to master a singular task and call it quits. To learn a new task, it has to reset, wiping out previous memories and starting again from scratch. This phenomenon, quite aptly dubbed “catastrophic forgetting,” condemns our AIs to be one-trick ponies. Now, taking inspiration from the hippocampus, our brain’s memory storage system, researchers at DeepMind and Imperial College London developed an algorithm that allows a program to learn one task after another, using the knowledge it gained along the way. When challenged with a slew of Atari games, the neural network flexibly adapted its strategy and mastered each game, while conventional, memory-less algorithms faltered. “The ability to learn tasks in succession without forgetting is a core component of biological and artificial intelligence,” writes the team in their paper, which was published in the journal *Proceedings of the National Academy of Sciences.* “If we’re going to have computer programs that are more intelligent and more useful, then they will have to have this ability to learn sequentially,” says study lead author Dr. James Kirkpatrick, adding that the study overcame a “significant shortcoming” in artificial neural networks and AI. ### Making Memories This isn’t the first time DeepMind has tried to give their AIs some memory power. Last year, the team set their eyes on a kind of external memory module, somewhat similar to a human working memory—the ability to keep things in mind while using them to reason or solve problems. 
Combining a neural network with a random access memory (better known as RAM), the researchers showed that their new hybrid system managed to perform multi-step reasoning, a type of task that’s long stumped conventional AI systems. But it had a flaw: the hybrid, although powerful, required constant communication between the two components—not an elegant solution, and a total energy sink. In this new study, DeepMind backed away from computer storage ideas, instead zooming deep into the human memory machine—the hippocampus—for inspiration. And for good reason. Artificial neural networks, true to their name, are loosely modeled after their biological counterparts. Made up of layers of interconnecting neurons, the algorithm takes in millions of examples and learns by adjusting the connection between the neurons—somewhat like fine-tuning a guitar. A very similar process occurs in the hippocampus. What’s different is how the connections change when learning a new task. In a machine, the weights are reset, and anything learned is forgotten. In a human, memories undergo a kind of selection: if they help with subsequent learning, they become protected; otherwise, they’re erased. In this way, not only are memories stored within the neuronal connections themselves (without needing an external module), they also stick around if they’re proven useful. This theory, called “synaptic consolidation,” is considered a fundamental aspect of learning and memory in the brain. So of course, DeepMind borrowed the idea and ran with it. ### Crafting an Algorithm The new algorithm mimics synaptic consolidation in a simple way. After learning a game, the algorithm pauses and figures out how helpful each connection was to the task. It then keeps the most useful parts and makes those connections harder to change as it learns a new skill. “[This] way there is room to learn the new task but the changes we’ve applied do not override what we’ve learned before,” says Kirkpatrick. 
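The consolidation step described here can be sketched very loosely as a quadratic penalty on weight changes. The following is a toy NumPy illustration, not DeepMind's actual code: the diagonal-Fisher importance estimate (mean squared gradient) and the penalty strength `lam` are our assumptions about how such a scheme is typically set up.

```python
import numpy as np

def fisher_diagonal(grads_task_a):
    """Importance of each weight, approximated as the mean squared
    gradient observed while training on task A (a diagonal Fisher
    estimate -- an illustrative stand-in, not the paper's exact recipe)."""
    return np.mean(np.square(grads_task_a), axis=0)

def ewc_penalty(weights, weights_star_a, fisher, lam=1000.0):
    """Quadratic penalty that makes important weights 'stiffer':
    moving a weight away from its task-A value costs more the more
    that weight mattered for task A."""
    return 0.5 * lam * np.sum(fisher * np.square(weights - weights_star_a))

def total_loss(task_b_loss, weights, weights_star_a, fisher, lam=1000.0):
    """Loss on the new task B plus the consolidation penalty."""
    return task_b_loss + ewc_penalty(weights, weights_star_a, fisher, lam)
```

In this sketch, a weight with zero estimated importance is free to change (the penalty ignores it), while a high-importance weight resists change in proportion to `lam`.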
Think of it like this: visualize every connection as a spring with different stiffness. The more important a connection is for successfully tackling a task, the stiffer it becomes and thus subsequently harder to change. “For this reason, we called our algorithm Elastic Weight Consolidation (EWC),” the authors explained in a blog post introducing the algorithm. ### Game On To test their new algorithm, the team turned to DeepMind’s favorite AI training ground: Atari games. Previously, the company unveiled a neural network-based AI called Deep Q-Network (DQN) that could teach itself to play Atari games as well as any human player. From Space Invaders to Pong, the AI mastered our nostalgic favorites, but only one game at a time. The team now pitted their memory-enhanced DQN against its classical version, and put the agents through a random selection of ten Atari games. After 20 million plays with each game, the team found that their new AI mastered seven out of the ten games with a performance as good as any human player. In stark contrast, without the memory boost, the classical algorithm could barely play a single game by the end of training. This was partly because the AI never learned to play more than one game and always forgot what it had learned when moving on to a new one. “Today, computer programs cannot learn from data adaptively and in real time. We have shown that catastrophic forgetting is not an insurmountable challenge for neural networks,” the authors say. ### Machine Brain That’s not to say EWC is perfect. One issue is the possibility of a “blackout catastrophe”: since the connections in EWC can only become less plastic over time, eventually the network saturates. This locks the network into a single unchangeable state, during which it can no longer retrieve memories or store new information.
That said, “We did not observe these limitations under the more realistic conditions for which EWC was designed—likely because the network was operating well under capacity in these regimes,” explained the authors. Performance-wise, the algorithm was a sort of “jack-of-all-trades”: decent at plenty, master of none. Although the network retained knowledge from learning each game, its performance for any given game was worse than traditional neural networks dedicated to that one game. One possible stumbling block is that the algorithm may not have accurately judged the importance of certain connections in each game, which is something that needs to be further optimized, explain the authors. “We have demonstrated that [EWC] can learn tasks sequentially, but we haven’t shown that it learns them better because it learns them sequentially,” says Kirkpatrick. “There’s still room for improvement.” But the team hopes that their work will nudge AI towards the next big thing: general-purpose intelligence, in which AIs achieve the kind of adaptive learning and reasoning that come to humans naturally. What’s more, the work could also feed back into neurobiological theories of learning. “Synaptic consolidation was previously only proven in very simple examples. Here we showed that the same theories can be applied in a more realistic and complex context—it really shows that the theory could be key to retaining our memories and know-how,” explained the authors. After all, to emulate is to understand. Over the past decade, neuroscience and machine learning have become increasingly intertwined. And no doubt our mushy thinking machines have more to offer their silicon brethren, and vice-versa. “We hope that this research represents a step towards programs that can learn in a more flexible and efficient way,” the authors say. Image Credit: Shutterstock
true
true
true
For a mind to be capable of tackling anything, it has to have a memory. Humans are exceptionally good at transferring old skills to new problems. Machines, despite all their recent wins against humans, aren’t. This is partly due to how they’re trained: artificial neural networks like Google’s DeepMind learn to master a singular task […]
2024-10-12 00:00:00
2017-03-29 00:00:00
https://singularityhub.c…-remembers-8.jpg
article
singularityhub.com
Singularity Hub
null
null
33,476,323
https://twitter.com/spacex/status/1588692144302477313
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
37,904,586
https://github.com/THUDM/CogVLM
GitHub - THUDM/CogVLM: a state-of-the-art-level open visual language model | 多模态预训练模型
THUDM
🌟 **Jump to detailed introduction: Introduction to CogVLM, 🆕 Introduction to CogAgent** 📔 For more detailed usage information, please refer to: CogVLM & CogAgent's technical documentation (in Chinese) 📖 Paper: CogVLM: Visual Expert for Pretrained Language Models | 📖 Paper: CogAgent: A Visual Language Model for GUI Agents | 🌐 Web Demo for both CogVLM2: this link | **Table of Contents** - CogVLM & CogAgent - 🔥🔥🔥 **News**:`2024/5/20` : We released the**next generation of model, CogVLM2**, which is based on llama3-8b and on the par of (or better than) GPT-4V in most cases! DOWNLOAD and TRY! - 🔥🔥 **News**:`2024/4/5` : CogAgent was selected as a CVPR 2024 Highlights! - 🔥 **News**:`2023/12/26` : We have released the CogVLM-SFT-311K dataset, which contains over 150,000 pieces of data that we used for**CogVLM v1.0 only**training. Welcome to follow and use. - **News**:`2023/12/18` :**New Web UI Launched!**We have launched a new web UI based on Streamlit, users can painlessly talk to CogVLM, CogAgent in our UI. Have a better user experience. - **News**:`2023/12/15` :**CogAgent Officially Launched!**CogAgent is an image understanding model developed based on CogVLM. It features**visual-based GUI Agent capabilities**and has further enhancements in image understanding. It supports image input with a resolution of 1120*1120, and possesses multiple abilities including multi-turn dialogue with images, GUI Agent, Grounding, and more. - **News**:`2023/12/8` We have updated the checkpoint of cogvlm-grounding-generalist to cogvlm-grounding-generalist-v1.1, with image augmentation during training, therefore more robust. See details. - **News**:`2023/12/7` CogVLM supports**4-bit quantization**now! You can inference with just**11GB**GPU memory! - **News**:`2023/11/20` We have updated the checkpoint of cogvlm-chat to cogvlm-chat-v1.1, unified the versions of chat and VQA, and refreshed the SOTA on various datasets. 
See details - **News**: `2023/11/20` We release **cogvlm-chat**, **cogvlm-grounding-generalist/base**, **cogvlm-base-490/224** on 🤗Huggingface. You can infer with transformers in a few lines of code now! - `2023/10/27` CogVLM bilingual version is available online! Welcome to try it out! - `2023/10/5` CogVLM-17B released. - Click here to enter CogVLM2 Demo. If you need to use Agent and Grounding functions, please refer to Cookbook - Task Prompts We support two GUIs for model inference, **CLI** and **web demo**. If you want to use it in your python code, it is easy to modify the CLI scripts for your case. First, we need to install the dependencies.

```
# CUDA >= 11.8
pip install -r requirements.txt
python -m spacy download en_core_web_sm
```

**All code for inference is located under the basic_demo/ directory. Please switch to this directory first before proceeding with further operations.**

Run CLI demo via:

```
# CogAgent
python cli_demo_sat.py --from_pretrained cogagent-chat --version chat --bf16 --stream_chat
python cli_demo_sat.py --from_pretrained cogagent-vqa --version chat_old --bf16 --stream_chat
# CogVLM
python cli_demo_sat.py --from_pretrained cogvlm-chat --version chat_old --bf16 --stream_chat
python cli_demo_sat.py --from_pretrained cogvlm-grounding-generalist --version base --bf16 --stream_chat
```

The program will automatically download the sat model and interact in the command line. You can generate replies by entering instructions and pressing enter. Enter `clear` to clear the conversation history and `stop` to stop the program. We also support model parallel inference, which splits the model to multiple (2/4/8) GPUs. `--nproc-per-node=[n]` in the following command controls the number of used GPUs.

```
torchrun --standalone --nnodes=1 --nproc-per-node=2 cli_demo_sat.py --from_pretrained cogagent-chat --version chat --bf16
```

- If you want to manually download the weights, you can replace the path after `--from_pretrained` with the model path.
- Our model supports SAT's **4-bit quantization** and **8-bit quantization**. You can change `--bf16` to `--fp16`, or `--fp16 --quant 4`, or `--fp16 --quant 8`. For example:

```
python cli_demo_sat.py --from_pretrained cogagent-chat --fp16 --quant 8 --stream_chat
python cli_demo_sat.py --from_pretrained cogvlm-chat-v1.1 --fp16 --quant 4 --stream_chat
# In SAT version, --quant should be used with --fp16
```

- The program provides the following hyperparameters to control the generation process:

```
usage: cli_demo_sat.py [-h] [--max_length MAX_LENGTH] [--top_p TOP_P] [--top_k TOP_K] [--temperature TEMPERATURE]

optional arguments:
  -h, --help            show this help message and exit
  --max_length MAX_LENGTH
                        max length of the total sequence
  --top_p TOP_P         top p for nucleus sampling
  --top_k TOP_K         top k for top k sampling
  --temperature TEMPERATURE
                        temperature for sampling
```

- Click here to view the correspondence between different models and the `--version` parameter.

Run CLI demo via:

```
# CogAgent
python cli_demo_hf.py --from_pretrained THUDM/cogagent-chat-hf --bf16
python cli_demo_hf.py --from_pretrained THUDM/cogagent-vqa-hf --bf16
# CogVLM
python cli_demo_hf.py --from_pretrained THUDM/cogvlm-chat-hf --bf16
python cli_demo_hf.py --from_pretrained THUDM/cogvlm-grounding-generalist-hf --bf16
```

- If you want to manually download the weights, you can replace the path after `--from_pretrained` with the model path.
- You can change `--bf16` to `--fp16`, or `--quant 4`. For example, our model supports Huggingface's **4-bit quantization**:

```
python cli_demo_hf.py --from_pretrained THUDM/cogvlm-chat-hf --quant 4
```

We also offer a local web demo based on Gradio. First, install Gradio by running `pip install gradio`. Then download and enter this repository and run `web_demo.py`.
See the next section for detailed usage:

```
python web_demo.py --from_pretrained cogagent-chat --version chat --bf16
python web_demo.py --from_pretrained cogagent-vqa --version chat_old --bf16
python web_demo.py --from_pretrained cogvlm-chat-v1.1 --version chat_old --bf16
python web_demo.py --from_pretrained cogvlm-grounding-generalist --version base --bf16
```

The GUI of the web demo looks like:

You may want to use CogVLM in your own task, which needs a **different output style or domain knowledge**. **All code for finetuning is located under the finetune_demo/ directory.**

We here provide a finetuning example for **Captcha Recognition** using lora.

- Start by downloading the Captcha Images dataset. Once downloaded, extract the contents of the ZIP file.
- To create a train/validation/test split in the ratio of 80/5/15, execute the following:

```
python utils/split_dataset.py
```

- Start the fine-tuning process with this command:

```
bash finetune_demo/finetune_(cogagent/cogvlm)_lora.sh
```

- Merge the model to `model_parallel_size=1`: (replace the 4 below with your training `MP_SIZE`)

```
torchrun --standalone --nnodes=1 --nproc-per-node=4 utils/merge_model.py --version base --bf16 --from_pretrained ./checkpoints/merged_lora_(cogagent/cogvlm490/cogvlm224)
```

- Evaluate the performance of your model.

```
bash finetune_demo/evaluate_(cogagent/cogvlm).sh
```

We provide the same API examples as `GPT-4V`, which you can view in `openai_demo`.

- First, start the node

```
python openai_demo/openai_api.py
```

- Next, run the request example node, which is an example of a continuous dialogue

```
python openai_demo/openai_api_request.py
```

- You will get output similar to the following

```
This image showcases a tranquil natural scene with a wooden pathway leading through a field of lush green grass. In the distance, there are trees and some scattered structures, possibly houses or small buildings. The sky is clear with a few scattered clouds, suggesting a bright and sunny day.
```

- Model Inference:
  - For INT4 quantization: 1 * RTX 3090 (24G) (CogAgent takes ~12.6GB, CogVLM takes ~11GB)
  - For FP16: 1 * A100 (80G) or 2 * RTX 3090 (24G)
- Finetuning:
  - For FP16: 4 * A100 (80G) *[Recommended]* or 8 * RTX 3090 (24G)

If you run the `basic_demo/cli_demo*.py` from the code repository, it will automatically download SAT or Hugging Face weights. Alternatively, you can choose to manually download the necessary weights.

- CogAgent

| Model name | Input resolution | Introduction | Huggingface model | SAT model |
| --- | --- | --- | --- | --- |
| cogagent-chat | 1120 | Chat version of CogAgent. Supports GUI Agent, multiple-round chat and visual grounding. | HF link, OpenXLab link | HF link, OpenXLab link |
| cogagent-vqa | 1120 | VQA version of CogAgent. Has stronger capabilities in single-turn visual dialogue. Recommended for VQA benchmarks. | HF link, OpenXLab link | HF link, OpenXLab link |

- CogVLM

| Model name | Input resolution | Introduction | Huggingface model | SAT model |
| --- | --- | --- | --- | --- |
| cogvlm-chat-v1.1 | 490 | Supports multiple rounds of chat and vqa simultaneously, with different prompts. | HF link, OpenXLab link | HF link, OpenXLab link |
| cogvlm-base-224 | 224 | The original checkpoint after text-image pretraining. | HF link, OpenXLab link | HF link, OpenXLab link |
| cogvlm-base-490 | 490 | Amplify the resolution to 490 through position encoding interpolation from `cogvlm-base-224`. | HF link, OpenXLab link | HF link, OpenXLab link |
| cogvlm-grounding-generalist | 490 | This checkpoint supports different visual grounding tasks, e.g. REC, Grounding Captioning, etc. | HF link, OpenXLab link | HF link, OpenXLab link |

- CogVLM is a powerful **open-source visual language model** (**VLM**). CogVLM-17B has 10 billion vision parameters and 7 billion language parameters.
- CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flicker30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA and TDIUC, and ranks 2nd on VQAv2, OKVQA, TextVQA, COCO captioning, etc., **surpassing or matching PaLI-X 55B**.
CogVLM can also chat with you about images.

## Click to view results on MM-VET, POPE, TouchStone.

| Method | LLM | MM-VET | POPE (adversarial) | TouchStone |
| --- | --- | --- | --- | --- |
| BLIP-2 | Vicuna-13B | 22.4 | - | - |
| Otter | MPT-7B | 24.7 | - | - |
| MiniGPT4 | Vicuna-13B | 24.4 | 70.4 | 531.7 |
| InstructBLIP | Vicuna-13B | 25.6 | 77.3 | 552.4 |
| LLaMA-Adapter v2 | LLaMA-7B | 31.4 | - | 590.1 |
| LLaVA | LLaMA2-7B | 28.1 | 66.3 | 602.7 |
| mPLUG-Owl | LLaMA-7B | - | 66.8 | 605.4 |
| LLaVA-1.5 | Vicuna-13B | 36.3 | 84.5 | - |
| Emu | LLaMA-13B | 36.3 | - | - |
| Qwen-VL-Chat | - | - | - | 645.2 |
| DreamLLM | Vicuna-7B | 35.9 | 76.5 | - |
| CogVLM | Vicuna-7B | 52.8 | 87.6 | 742.0 |

## Click to view results of cogvlm-grounding-generalist-v1.1.

| Model | RefCOCO (val) | RefCOCO (testA) | RefCOCO (testB) | RefCOCO+ (val) | RefCOCO+ (testA) | RefCOCO+ (testB) | RefCOCOg (val) | RefCOCOg (test) | Visual7W (test) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| cogvlm-grounding-generalist | 92.51 | 93.95 | 88.73 | 87.52 | 91.81 | 81.43 | 89.46 | 90.09 | 90.96 |
| cogvlm-grounding-generalist-v1.1 | **92.76** | **94.75** | **88.99** | **88.68** | **92.91** | **83.39** | **89.75** | **90.79** | **91.05** |

- CogVLM can accurately describe images in detail with **very few hallucinations**.
- CogVLM can understand and answer various types of questions, and has a **visual grounding** version.
- CogVLM sometimes captures more detailed content than GPT-4V(ision).

CogAgent is an open-source visual language model improved based on CogVLM. CogAgent-18B has 11 billion visual parameters and 7 billion language parameters. CogAgent-18B achieves state-of-the-art generalist performance on 9 classic cross-modal benchmarks, including VQAv2, OK-VQA, TextVQA, ST-VQA, ChartQA, InfoVQA, DocVQA, MM-Vet, and POPE. It significantly surpasses existing models on GUI operation datasets such as AITW and Mind2Web.

In addition to all the features already present in CogVLM (visual multi-round dialogue, visual grounding), CogAgent:

- Supports higher resolution visual input and dialogue question-answering.
**It supports ultra-high-resolution image inputs of 1120x1120.** - **Possesses the capabilities of a visual Agent**, being able to return a plan, next action, and specific operations with coordinates for any given task on any GUI screenshot. - **Enhanced GUI-related question-answering capabilities**, allowing it to handle questions about any GUI screenshot, such as web pages, PC apps, mobile applications, etc. - Enhanced capabilities in OCR-related tasks through improved pre-training and fine-tuning. - **General Multi-Round Dialogue**: Say whatever you want. - **GUI Agent Task**: Use the Agent template and replace <TASK> with the task instruction enclosed in double quotes. This query can make CogAgent infer Plan and Next Action. If adding`(with grounding)` at the end of the query, the model will return a formalized action representation with coordinates. For example, to ask the model how to complete the task "Search for CogVLM" on a current GUI screenshot, follow these steps: - Randomly select a template from the Agent template. Here, we choose `What steps do I need to take to <TASK>?` . - Replace with the task instruction enclosed in double quotes, for example, `What steps do I need to take to "Search for CogVLM"?` . Inputting this to the model yields: Plan: 1. Type 'CogVLM' into the Google search bar. 2. Review the search results that appear. 3. Click on a relevant result to read more about CogVLM or access further resources. Next Action: Move the cursor to the Google search bar, and type 'CogVLM' into it. - If adding `(with grounding)` at the end, i.e. changing the input to`What steps do I need to take to "Search for CogVLM"?(with grounding)` , the output of CogAgent would be: Plan: 1. Type 'CogVLM' into the Google search bar. 2. Review the search results that appear. 3. Click on a relevant result to read more about CogVLM or access further resources. Next Action: Move the cursor to the Google search bar, and type 'CogVLM' into it. 
Grounded Operation: [combobox] Search -> TYPE: CogVLM at the box [[212,498,787,564]]

Tip: For GUI Agent tasks, it is recommended to conduct only single-round dialogues for each image for better results.

- **Visual Grounding**. Three modes of grounding are supported:
  - Image description with grounding coordinates (bounding box). Use any template from caption_with_box template as model input. For example: Can you provide a description of the image and include the coordinates [[x0,y0,x1,y1]] for each mentioned object?
  - Returning grounding coordinates (bounding box) based on the description of objects. Use any template from caption2box template, replacing `<expr>` with the object's description. For example: Can you point out *children in blue T-shirts* in the image and provide the bounding boxes of their location?
  - Providing a description based on bounding box coordinates. Use a template from box2caption template, replacing `<objs>` with the position coordinates. For example: Tell me what you see within the designated area *[[086,540,400,760]]* in the picture.

**Format of coordination:** The bounding box coordinates in the model's input and output use the format `[[x1, y1, x2, y2]]`, with the origin at the top left corner, the x-axis to the right, and the y-axis downward. (x1, y1) and (x2, y2) are the top-left and bottom-right corners, respectively, with values as relative coordinates multiplied by 1000 (prefixed with zeros to three digits).

Due to differences in model functionalities, different model versions may have distinct `--version` specifications for the text processor, meaning the format of the prompts used varies.
| model name | --version |
| --- | --- |
| cogagent-chat | chat |
| cogagent-vqa | chat_old |
| cogvlm-chat | chat_old |
| cogvlm-chat-v1.1 | chat_old |
| cogvlm-grounding-generalist | base |
| cogvlm-base-224 | base |
| cogvlm-base-490 | base |

- If you have trouble in accessing huggingface.co, you can add `--local_tokenizer /path/to/vicuna-7b-v1.5` to load the tokenizer.
- If you have trouble in automatically downloading model with 🔨SAT, try downloading from 🤖modelscope or 🤗huggingface or 💡wisemodel manually.
- When downloading a model using 🔨SAT, the model will be saved to the default location `~/.sat_models`. Change the default location by setting the environment variable `SAT_HOME`. For example, if you want to save the model to `/path/to/my/models`, you can run `export SAT_HOME=/path/to/my/models` before running the python command.

The code in this repository is open source under the Apache-2.0 license, while the use of the CogVLM model weights must comply with the Model License.

If you find our work helpful, please consider citing the following papers

```
@misc{wang2023cogvlm,
      title={CogVLM: Visual Expert for Pretrained Language Models},
      author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
      year={2023},
      eprint={2311.03079},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{hong2023cogagent,
      title={CogAgent: A Visual Language Model for GUI Agents},
      author={Wenyi Hong and Weihan Wang and Qingsong Lv and Jiazheng Xu and Wenmeng Yu and Junhui Ji and Yan Wang and Zihan Wang and Yuxiao Dong and Ming Ding and Jie Tang},
      year={2023},
      eprint={2312.08914},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

In the instruction fine-tuning phase of the CogVLM, there are some English image-text data from the MiniGPT-4, LLAVA, LRV-Instruction, LLaVAR and Shikra projects, as well as many classic cross-modal work datasets.
We sincerely thank them for their contributions.
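As a practical footnote to the coordinate convention described in the Task Prompts section (relative coordinates multiplied by 1000, origin at the top-left, zero-padded to three digits), a grounded output such as `[[212,498,787,564]]` can be mapped back to pixel coordinates roughly as follows. This is a sketch of our own: the `parse_bboxes` helper and its regex are not part of the repository.

```python
import re

def parse_bboxes(text, img_width, img_height):
    """Extract CogVLM/CogAgent-style boxes like '[[212,498,787,564]]'
    from model output and convert them to pixel coordinates, assuming
    the documented relative-coordinates-times-1000 format."""
    boxes = []
    for m in re.finditer(r"\[\[(\d{3}),(\d{3}),(\d{3}),(\d{3})\]\]", text):
        x1, y1, x2, y2 = (int(v) for v in m.groups())
        boxes.append((
            x1 / 1000 * img_width,   # left edge in pixels
            y1 / 1000 * img_height,  # top edge in pixels
            x2 / 1000 * img_width,   # right edge in pixels
            y2 / 1000 * img_height,  # bottom edge in pixels
        ))
    return boxes
```

For example, on a 1000x1000 screenshot the grounded operation above decodes to the pixel box (212, 498, 787, 564).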
true
true
true
a state-of-the-art-level open visual language model | 多模态预训练模型 - THUDM/CogVLM
2024-10-12 00:00:00
2023-09-18 00:00:00
https://opengraph.githubassets.com/3e36fa21df07f66e55a7d8755935d937d7cc658db77819b66f5f7bd8e4a2c462/THUDM/CogVLM
object
github.com
GitHub
null
null
5,989,585
http://www.iaeng.org/publication/IMECS2011/IMECS2011_pp732-737.pdf
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,693,894
https://www.section.io/blog/developer-sphere-of-influence/
Home - www.webscale.com
Www Webscale Com; Wpadmin
# The Supercloud Platform for API-first applications Unprecedented cloud distribution, performance and availability at the lowest cost possible ## Deliver apps within milliseconds of end users ## CloudFlow is your unfair, competitive advantage to deliver world-class digital experiences ### Unprecedented Workload Automation Leveraging AI and ML data, our autonomous location orchestration dynamically optimizes the delivery network for peak efficiency and performance. ### Developer-centric project launch journey Effortlessly kickstart your work using a container image, Kubernetes cluster, or directly from a GitHub Repository. Designed for ease and efficiency, our platform simplifies your development journey right from the start. ### Supercloud distribution at your fingertips Experience a truly integrated mesh network, connecting premier public and private cloud providers. This powerful fusion offers your applications a seamless and straightforward gateway to advanced Supercloud computing capabilities. ### Customize your location policies and guard rails Refine your parameters for optimizing the number of locations, preferred regions, compliance requirements, and more. Your policies influence dynamic server locations based on real-time traffic data, minimizing the distance for user requests and adapting to changing traffic patterns. ## Supercloud Simplicity-as-a-Service The CloudFlow platform automates the orchestration of custom workloads, apps and APIs across a mesh network of private and public cloud providers.
We have created CloudFlow so you can harness the power of distributed, multicloud computing without the operational chaos that is traditionally involved with managing such a complex infrastructure network. Our study revealed that CloudFlow enhances application performance, reduces cloud cost and eliminates K8s operational complexity. ## Customer Testimonials ## Read our Latest Blogs ## How to Solve GraphQL Latency Challenges by Deploying Closer to Your Users GraphQL is a widely adopted alternative to REST APIs because of the many benefits that it offers, including performance, efficiency, and predictability. While the advantages are significant, many developers become frustrated with latency challenges when implementing... ## Headless Commerce Drives Edge Computing Adoption In e-commerce today, the challenge to meet and exceed customer expectations is driving innovation. The demand for frictionless shopping, 24/7 availability, superior product and impeccable service quality is ever increasing, putting pressure on retailers to deliver... ## Two Aspects of Edge Compute to Focus on for Reducing Edge Complexity As organizations look to capitalize on the benefits of edge computing, many are quickly realizing the complexities associated with building and operating distributed systems – including sourcing distributed compute, resource management/placement/sizing/scaling,...
true
true
true
Unprecedented cloud distribution, performance and availability at the lowest cost possible
2024-10-12 00:00:00
2023-12-29 00:00:00
null
website
webscale.com
www.webscale.com
null
null
1,466,675
http://blog.matt-lloyd.com/2010/06/generative-butterflies-from-vector-field/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
26,270,443
https://www.wsj.com/articles/u-s-authorities-charge-north-koreans-in-long-running-hacking-scheme-11613581358
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
22,397,452
https://nuadox.com/post/190982969827/fool-ev-with-phantom-images
Researchers fool autonomous vehicle frameworks with phantom images
Nuadox
## Researchers fool autonomous vehicle frameworks with phantom images **- By Nuadox Crew -** Earlier this week, researchers from Ben-Gurion University of the Negev’s (BGU) Cyber Security Research Center in Israel found that they can cause the autopilot on an autonomous vehicle to erroneously apply its brakes in response to “phantom” images projected on a road or billboard. *Image: Tesla considers the phantom image (left) as a real person, and (right) the Mobileye 630 PRO autonomous vehicle system considers the image projected on a tree as a real road sign. Credit: The Ben-Gurion University of the Negev Research, Cyber@bgu.* In a new research paper, “Phantom of the ADAS,” published on IACR.org, the researchers demonstrated that autopilots and advanced driving-assistance systems (ADASs) in semi-autonomous or fully autonomous cars register depthless projections of objects (phantoms) as real objects. They show how attackers can exploit this perceptual challenge to manipulate the vehicle and potentially harm the driver or passengers without any special expertise by using a commercial drone and inexpensive image projector. While fully and semi-autonomous cars are already being deployed around the world, vehicular communication systems that connect the car with other cars, pedestrians and surrounding infrastructure are lagging. According to the researchers, the lack of such systems creates a “validation gap,” which prevents the autonomous vehicles from validating their virtual perception with a third party, relying only on internal sensors. In addition to causing the autopilot to apply brakes, the researchers demonstrated they can fool the ADAS into believing phantom traffic signs are real, when projected for 125 milliseconds in advertisements on digital billboards. Lastly, they showed how fake lane markers projected on a road by a projector-equipped drone will guide the autopilot into the opposite lane and potentially oncoming traffic.
*Video: "Phantom of the ADAS: Phantom Attacks on Driving Assistance Systems" by Cyber Security Labs @ Ben Gurion University. YouTube.*

"This type of attack is currently not being taken into consideration by the automobile industry. These are not bugs or poor coding errors but fundamental flaws in object detectors that are not trained to distinguish between real and fake objects and use feature matching to detect visual objects," says Ben Nassi, lead author and a Ph.D. student of Prof. Yuval Elovici in BGU's Department of Software and Information Systems Engineering and Cyber Security Research Center.

In reality, depthless objects projected on a road are considered real even though the depth sensors can differentiate between 2D and 3D. The BGU researchers believe that this is the result of a "better safe than sorry" policy that causes the car to consider a visual 2D object real. The researchers are developing a neural network model that analyzes a detected object's context, surface and reflected light, which is capable of detecting phantoms with high accuracy.

**Source: American Associates, Ben-Gurion University of the Negev**

**Read Also**

Autonomous vehicle tech completes 230-mile self-navigated journey in UK

Driving autonomous cars off the beaten path

Autonomous vehicle simulation platform Cognata raises $18.5M in series B round
true
true
true
- By Nuadox Crew - Earlier this week, researchers from Ben-Gurion University of the Negev's (BGU) Cyber Security Research Center in Israel found that they can cause the autopilot on an autonomous...
2024-10-12 00:00:00
2020-02-23 00:00:00
https://i.ytimg.com/vi/1…WI/hqdefault.jpg
article
nuadox.com
Tumblr
null
null
6,592,853
http://www.kickstarter.com/projects/marshallhaas/draft-a-physical-notebook-that-syncs-to-the-cloud
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,022,246
https://medium.com/@maxbraun/my-bathroom-mirror-is-smarter-than-yours-94b21c6671ba#.f928hx3lu
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,172,985
http://online.wsj.com/article/SB10001424052970204644504576653493609116516.html?
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,028,593
http://gizmodo.com/5843117/scientists-reconstruct-video-clips-from-brain-activity
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,522,813
https://switchupcb.com/blog/the-evolution-of-programming-languages/
The Evolution of Programming Languages
null
**Why do you use a programming language?**

The simple answer is to program. However, this answer does not provide insight into how programming languages work nor which one you should use for a given task. Understanding the timeline of a technology is critical to using the technology effectively. Therefore, learning about the history of programming languages is important.

A programming language functions as an interface between humans and machines. So, humans use programming languages to tell man-made machines how to run. The alternative to using a programming language is not using software at all. You can always do math by hand. But this is tedious, so humans created computers to automate mathematical operations.

*For a full timeline on the Computer History of Software and Languages, check out **Computer History (Software & Languages)**.*

# It Starts With a Switch

**How can you perform math in an automated manner?**

The first step involves electricity. The second step involves a switch — more so the idea of a light switch, which flips **on** to represent a 1, and flips **off** to represent a 0.

In order to perform mathematical operations, you must store symbols such as 2 and 4 (as 2 + 2 = 4).

**But how can you represent a number such as 4 with a light switch?**

Instead of requiring *10 light switches* to represent a single value *(e.g., 4)* in a *base-10 numeral system*, a single light switch — which can be on (1) or off (0) — is used to represent a single value in a base-2 (binary) number system. So, **computers store numbers using 1s and 0s**.

**Here is more information about number systems.**

In a base-10 system, the number 10 represents — from right to left — the sum of 0 (0 * 10^0) and 10 (1 * 10^1): *the result is 10*. In a base-2 system, the number 10 represents — from right to left — the sum of 0 (0 * 2^0) and 2 (1 * 2^1): *the result is 2*.

*The prefix “bi” means “two”.
So, binary represents two numbers.*

In a computer, `1` and `0` are the two numbers used in a binary number system. Instead of “light switches”, micro transistors are used to **switch** electrical signals within the computer. Each micro transistor represents a `1` or `0`, which represents a **bit**. However, a single bit on a computer doesn’t mean much without context.

*For more information on reading and writing binary numbers, watch **Why Do Computers Use 1s and 0s?***

**How can you represent words on a computer in binary?**

Numerous character set *standards* were created in order to represent words in binary *(i.e., **ASCII**, **ANSI**, EBCDIC, etc.)*. The significance of these character sets is that they are man-made *standards* that provide a specification for other machines. These standards specified how many bits were required for a character *(e.g., w in word)*. 8-bit computing became standardized in computer processing units as a result of popular character sets.

*An 8-bit number — which contains 8 bits — is called a byte.*

The introduction of 8-bit machines would evolve into 16-bit, 32-bit, and 64-bit machines. These machines would lead to the representation of numbers using octal and hexadecimal number systems.

*For more information on the history of computers, read **Computer History (Computers)**.*

# Introducing The Compiler

Machine code represents the language of computers that uses bits *(1s and 0s)*. Writing 1s and 0s to create programs is tedious and complex for humans. So, computer programs called compilers were created to convert human-readable code into machine code. Once **compiled**, a program is **interpreted** using an interpreter. For example, the Computer Processing Unit (CPU) is the final interpreter of machine code.

From this point onwards, a pattern emerged: programming languages were created to make it easier for humans to read and write code.
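The binary and character-set ideas above can be sketched with Python's built-in conversions. This is only an illustration; the specific character and values are arbitrary examples:

```python
# A character is stored as a number (its ASCII/Unicode code point),
# and that number is stored as bits.
code_point = ord("w")                  # 'w' -> 119
byte_bits = format(code_point, "08b")  # 119 as eight binary digits (one byte)

print(code_point)  # 119
print(byte_bits)   # 01110111

# Reading the digits "10" depends on the base of the number system:
print(int("10", 2))   # base-2:  0*2^0 + 1*2^1 = 2
print(int("10", 10))  # base-10: 0*10^0 + 1*10^1 = 10
```

The same digits mean different values in different bases, which is exactly why a character-set standard is needed to agree on what a given pattern of bits represents.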
Rather than compiling to machine code, specific languages *(e.g., **C++**)* would compile to other languages *(e.g., **Assembly**)*, which compiles to machine code. This evolution led to the classification of languages into high-level programming languages and low-level programming languages.

*For more information on compilers, read about **How Compilers Work**.*

# The History of Programming Languages

The “History of Programming Languages” and “Timeline of Programming Languages” documents showcase various programming languages alongside their objectives, predecessors, and successors. Knowledge of these tools will assist you in creating more performant programs in a maintainable manner. With that being said, modern programming languages are typically broken down into specific categories to highlight their use cases.

## Interpreted vs. Compiled

Understanding the meaning of compiled and interpreted (in computing) highlights the difference between compiled and interpreted programming languages. As a reminder, a compiler compiles code from one form *(i.e., human-readable code)* to another *(i.e., machine code)*.

A compiled language implies that the language will **NOT** use an interpreter at runtime, which is typically beneficial for performance. In contrast, an interpreted language implies that the language will be interpreted at runtime, which is typically beneficial for code iteration *(programmer productivity)*.

## Unmanaged vs. Managed

Programming languages may be referred to by the way they handle computer memory. Processing bits on a Computer Processing Unit (CPU) is fast, **but what if you need to store information (i.e., variables)?** Computer memory and other computer storage options solve this problem at the cost of processing speed.

Random Access Memory (RAM) is built for high-speed access to a physical location of the computer called a memory address. Each memory address contains binary or decimal numbers which represent data or instructions.
So a program is able to store and retrieve data from memory by writing to and reading from a memory address. However, **RAM** is erased when a computer shuts down.

Unexpected operations occur when a computer mishandles memory. So certain languages do **NOT** require the programmer to manage memory manually. Instead, an *automated* form of memory management — *such as a garbage collector* — is provided in the runtime. So a **managed memory programming language** provides the programmer with an automated form of memory management, while an **unmanaged memory programming language** requires the programmer to manually manage the computer’s memory.

*Solid State Drives, Hard Disk Drives, and other direct-access data storage solutions are built for high-capacity, long-term storage that persists data beyond the power state of a computer (on/off).*

## Typed vs. Untyped

In order to compile a program, the language must be able to ensure that the code will run correctly. It’s common for modern languages to use data types to check the correctness of a program. Data types serve as an alternative to managing information *(i.e., numbers and words)* with 1s and 0s.

For more information on data types, watch What Are Data Types?

A strongly typed language uses types that are defined explicitly *(i.e., the `int` in `int var = 5`)*, while a weakly typed language infers them when a variable is assigned. A statically typed language performs type checks with the compiler, while a dynamically typed language performs type checks with an interpreter. A nominally typed language performs type checks using a type’s name, while a structurally typed language performs type checks using a type’s underlying structure.
*An example of a structural type check is provided while comparing functions in the Go programming language: `clap(int a) == comment(int b)`, since both functions are structurally defined as `func(int)`.*

## Paradigms

Programming paradigms provide mental models that assist programmers in solving programming problems. Certain programming languages may subscribe to programming paradigms such as Object-Oriented Programming or Functional Programming. The importance of these paradigms is debatable. The significance of these paradigms is that they may influence how a typical program is created with a given programming language.
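As one concrete illustration of the typing categories above, Python is strongly and dynamically typed: a variable's type is inferred when it is assigned and checked at runtime, and incompatible types are not silently coerced. A minimal sketch:

```python
x = 5                    # type inferred at assignment (dynamic typing)
print(type(x).__name__)  # int

x = "five"               # the same name may later hold a different type
print(type(x).__name__)  # str

# Strong typing: mixing incompatible types raises an error
# at runtime instead of silently coercing one operand.
try:
    "2" + 2
except TypeError as err:
    print("TypeError:", err)
```

A weakly typed language would instead coerce one operand (for example, JavaScript evaluates `"2" + 2` to the string `"22"`), and a statically typed language would reject the mismatch at compile time rather than at runtime.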
true
true
true
Why do you use a programming language to develop software? Understanding the evolution of programming languages will help you find out.
2024-10-12 00:00:00
2023-01-24 00:00:00
https://switchupcb.com/w…5-resize-min.jpg
article
gitconnected.com
SwitchUpCB
null
null
10,169,939
http://www.anandtech.com/show/9595/qualcomm-announces-kryo-cpu-details-22-ghz-14nm-finfet
Qualcomm Announces Kryo CPU Details: Quad Core 2.2 GHz, 14nm FinFET
Joshua Ho
# Qualcomm Announces Kryo CPU Details: Quad Core 2.2 GHz, 14nm FinFET

by Joshua Ho *on September 2, 2015 5:23 PM EST*

Posted in: Mobile, Qualcomm, Smartphones, Tablets, SoCs, Snapdragon 820

Today, Qualcomm announced a number of details about the Snapdragon 820, specifically about their Kryo CPU. Given that the Snapdragon 810 was a somewhat standard 4x Cortex-A57/4x Cortex-A53 design, it was clear that that chip was a stop-gap for a future fully custom design. With the Snapdragon 820 announcement, the first major bit of information that we received was that this would be a return to a custom CPU core design, and today Qualcomm is finally unveiling a bit more information on Kryo.

The two main spec details being disclosed today are that the quad-core Kryo CPU in the Snapdragon 820 will reach up to 2.2 GHz, and that the SoC will be manufactured on Samsung’s 14nm FinFET process. It isn’t stated whether this is the 14LPP process, which will give up to 10% transistor performance improvement over the 14LPE process seen in chips like the Exynos 7420, but it’s a safe bet that it is. As a result of the new architecture and new process node, Qualcomm is claiming up to a 2x increase in performance and up to a 2x increase in power efficiency compared to the Snapdragon 810.

The final part of this announcement is Symphony System Manager, which is said to be designed to deal with heterogeneous compute in an efficient manner. This is likely to be a kernel-level mechanism that ensures that the SoC is well-optimized for use in a smartphone or any other application. Given the focus on heterogeneous compute for this launch, I wonder if Qualcomm is going for some form of heterogeneous CPU design as well.

Source: Qualcomm

## 60 Comments

## zodiacsoulmate - Wednesday, September 2, 2015 - link

2x say what

## syxbit - Wednesday, September 2, 2015 - link

The SD810 was TERRIBLE, so 2x SD810 should be doable.
Especially if their internal benchmarks stress test (since SD810 perf drops significantly under heat).

## tuxRoller - Wednesday, September 2, 2015 - link

It wasn't THAT bad. If it doubles performance (in what area?) in Geekbench, that would give it a single-core score of around 2600-2800 (or WELL into high-powered Intel territory). To be clear, I'm not expecting that. What I do expect is it scoring around 1900 for a single core, and 6000-6500 (closer to the lower end of that range) for all four.

## jjj - Wednesday, September 2, 2015 - link

Up to 2x when you factor in the work on the DSP and GPU. But what exactly will use GPGPU is unclear, especially if the DSP does image manipulation. When a company has a good product there is no need for vague and misleading numbers, so I see this as a red flag. I hoped for a 60-80% gain over a cool SD810, but some rumors of a 35% CPU gain and this today make me wonder if Kryo is slower than what the A72 is supposed to deliver, if the A72 can reach its targets. If Kryo and the A72 are close, power and die area can make a difference, so we'll see. Messing this one up would be a multi-year problem, and given the lack of competition in the high end it wouldn't be all that great for us.

## ZolaIII - Thursday, September 3, 2015 - link

I think they meant general-purpose CPU core performance, and I am certain that the 2x gain is an overstatement — the same way we can say that the Cortex-A53 is up to 2x faster in some tasks than the A7 but the overall performance gain is about 35%. Spectra, which is basically just a large SIMD area (2x1024 bit) (a bit similar to AVX in purpose at least), does most of the more frequent FPU/VFP tasks, helped by the DSP in the more parallel ones — basically most multimedia stuff. Knowing how Qualcomm mixes smaller core logic with bigger ones, cutting the bigger ones, I am not convinced it will be faster than the A72s, but it will probably be more power efficient.
We will see soon enough.

## jjj - Friday, September 4, 2015 - link

The quote is "With Kryo CPU and Snapdragon 820, you can expect up to 2 times the performance and up to 2 times the power efficiency when compared with the Snapdragon 810 processor."

That's not a phrasing you use if you mean the CPU, and there is a strong emphasis on the DSP in their announcement. Looking at it another way, why push expectations that high and then fail to deliver (it seems pretty hard to actually deliver a 100% CPU gain)? If you have a great product you still want to surprise on the upside at launch, so this just feels off.

## edzieba - Thursday, September 3, 2015 - link

UP TO 2x. Which means when you draw the performance/power curves for both, there is at least one point where you can say "at this power, X performs at twice Y" and one point where you can say "at this performance level, X is using half the power of Y".

## michael2k - Thursday, September 3, 2015 - link

Sure. Have you seen the HTC M9? http://www.anandtech.com/show/9102/the-htc-one-m9-...

The 810 wasn't actually all that great under benchmarks. Doubling its performance means it is still likely to be less powerful than next week's iPhone 6S.

## lilmoe - Wednesday, September 2, 2015 - link

"I wonder if Qualcomm is going for some form of heterogeneous CPU design as well"

That would be pretty interesting actually.

## TheJian - Wednesday, September 2, 2015 - link

The problem is WHEN. It seems Nvidia is first this xmas (for Nov devices, I'd say, with Apple Sept 9th), as Qcom seems to be quite late with this. Note there is no date on this pic for when it's expected. With a 14nm FinFET chip, I'm wondering if NV can use their own modem in some cases, and, like Intel, use Qcom for MU-MIMO if desired (see recent Chromebooks with Intel pairing Qcom). With power dropping so much from die shrinks, there isn't much need for on-die crap now if NV's old 150 modem can't get the job done.
I'm wondering if NV could get into top-end phones even if they have to use a Qcom modem (or Samsung, considering the suit; maybe they'll get a deal). Either way, Qcom is late, which is why Samsung dumped them and why their stock has plummeted over the last year (along with cheap Chinese SoCs/currency manipulation etc. hurting the stock price too).
true
true
true
null
2024-10-12 00:00:00
2015-09-02 00:00:00
https://images.anandtech…ples_678x452.jpg
article
anandtech.com
AnandTech
null
null
3,112,775
http://wrightimc.com/blog/2011/10/13/a-simple-primer-on-digital-marketing/
A Simple Primer on Digital Marketing | WrightIMC
Admin
With the continued growth of the Internet, digital marketing has come into its own as an aspect of business marketing. Digital marketing can take many forms, including traditional methods such as radio and television. However, creating methods of reaching a wider range of potential customers with online technologies is the current Holy Grail. Online marketing has now expanded to include content marketing, social media, email, SMS messages, and electronic banner ads. Digital marketing allows agencies to know how campaigns are performing because every aspect of a digital campaign is tracked. **Content Marketing** Content marketing is a method of promoting services and products by providing information to consumers. This can take the form of informative articles on magazine-style websites, blog posts, client website content, and white papers. According to the Pew Research organization, consumers now use the Internet as their first source of information when considering making a purchase or using services. They will conduct Internet searches to gather information from several sources to help them make their decision. As a result, content marketing has become extremely important to disseminate appropriate information to consumers. It is estimated that corporations devote approximately 20% of their marketing budget to content marketing efforts. Consider that in recent years, large companies such as NBC, Proctor & Gamble, and Johnson & Johnson have begun to spend millions of dollars on content marketing and social media efforts via Facebook and Twitter, in addition to expanding their own site content about products and services. **Digital Marketing** Digital marketing isn’t limited simply to the Internet found within a web browser’s borders. It encompasses mobile phone communications, apps on mobile devices, and digital banner ads. The goal of digital marketing is to engage the customer using digital technologies, wherever they may be. 
**Push Digital Marketing** – In push digital marketing, the marketer contacts the potential customer in an attempt to provoke the desired response. Examples of push marketing include display advertising, SMS messages, email campaigns, and newsletter campaigns. The advantage of push marketing is that it is possible to schedule the delivery of content to subscribers and potential customers. It can be specifically targeted to customers, and content delivery is much more consistent than with pull marketing methods.

**Pull Digital Marketing** – Pull digital marketing depends on the consumer seeking information about products and services. Popular examples of pull marketing include website content, social media and streaming audio and video. Pull marketing’s advantage is that content can be very extensive, allowing the marketing organization to develop a fuller story and elicit a response.

Digital marketing can be as simple as having informative, interesting content on a website, or as complex as a marketing blitz utilizing display media or social media marketing. Finally, elements of digital marketing should be part of the overall marketing strategy for every business today.

**Going to PubCon Vegas?**

Please attend the panel on Modern Marketing Plan Start To Finish featuring WrightIMC CEO Tony Wright and VP of Solutions Dan Sturdivant on Wednesday, November 9, at 3:15.
true
true
true
CEO Tony Wright provides a simple primer on digital marketing for those more conversant in traditional marketing. Read more to apply it to your business.
2024-10-12 00:00:00
2011-10-13 00:00:00
https://wrightimc.com/wp…-wall-scaled.jpg
article
wrightimc.com
WrightIMC
null
null
31,699,624
https://www.bostonherald.com/2022/06/08/group-of-white-and-asian-parents-sue-bps-over-exam-school-admissions-policy/
Group of white and Asian parents sue BPS over exam school admissions policy
Marie Szaniszlo
A group of white and Asian parents is suing Boston Public Schools, seeking to have a federal appeals court force the district to admit at least five of their children into the system’s elite exam schools.

In a lawsuit filed Tuesday in federal appeals court in Boston, the Parent Coalition for Academic Excellence argues that a temporary admission policy based on ZIP codes last year deprived their children of seats in the schools, even though the students had high enough grades. BPS this year switched to another admissions policy, also meant to increase diversity in the exam schools, by allotting seats based on areas with similar socio-economics.

BPS said Wednesday it could not comment on pending litigation. The plaintiffs could not be reached for comment Wednesday. But Lisa Green of the Boston Coalition for Education Equity said the lawsuit’s primary argument “seeks to effectively prohibit any consideration of race in government decision-making.”

“What the suit is really about is overturning affirmative action,” Green said.
true
true
true
A group of white and Asian parents is suing Boston Public Schools, seeking to have a federal appeals court force the district to admit at least five of their children into the system’s elite …
2024-10-12 00:00:00
2022-06-08 00:00:00
https://www.bostonherald…jpg?w=1024&h=670
article
bostonherald.com
Boston Herald
null
null
2,551,598
http://paulgraham.com/gh.html
Great Hackers
null
July 2004 *(This essay is derived from a talk at Oscon 2004.)* A few months ago I finished a new book, and in reviews I keep noticing words like "provocative'' and "controversial.'' To say nothing of "idiotic.'' I didn't mean to make the book controversial. I was trying to make it efficient. I didn't want to waste people's time telling them things they already knew. It's more efficient just to give them the diffs. But I suppose that's bound to yield an alarming book. **Edisons** There's no controversy about which idea is most controversial: the suggestion that variation in wealth might not be as big a problem as we think. I didn't say in the book that variation in wealth was in itself a good thing. I said in some situations it might be a sign of good things. A throbbing headache is not a good thing, but it can be a sign of a good thing-- for example, that you're recovering consciousness after being hit on the head. Variation in wealth can be a sign of variation in productivity. (In a society of one, they're identical.) And *that* is almost certainly a good thing: if your society has no variation in productivity, it's probably not because everyone is Thomas Edison. It's probably because you have no Thomas Edisons. In a low-tech society you don't see much variation in productivity. If you have a tribe of nomads collecting sticks for a fire, how much more productive is the best stick gatherer going to be than the worst? A factor of two? Whereas when you hand people a complex tool like a computer, the variation in what they can do with it is enormous. That's not a new idea. Fred Brooks wrote about it in 1974, and the study he quoted was published in 1968. But I think he underestimated the variation between programmers. He wrote about productivity in lines of code: the best programmers can solve a given problem in a tenth the time. But what if the problem isn't given? 
In programming, as in many fields, the hard part isn't solving problems, but deciding what problems to solve. Imagination is hard to measure, but in practice it dominates the kind of productivity that's measured in lines of code. Productivity varies in any field, but there are few in which it varies so much. The variation between programmers is so great that it becomes a difference in kind. I don't think this is something intrinsic to programming, though. In every field, technology magnifies differences in productivity. I think what's happening in programming is just that we have a lot of technological leverage. But in every field the lever is getting longer, so the variation we see is something that more and more fields will see as time goes on. And the success of companies, and countries, will depend increasingly on how they deal with it. If variation in productivity increases with technology, then the contribution of the most productive individuals will not only be disproportionately large, but will actually grow with time. When you reach the point where 90% of a group's output is created by 1% of its members, you lose big if something (whether Viking raids, or central planning) drags their productivity down to the average. If we want to get the most out of them, we need to understand these especially productive people. What motivates them? What do they need to do their jobs? How do you recognize them? How do you get them to come and work for you? And then of course there's the question, how do you become one? **More than Money** I know a handful of super-hackers, so I sat down and thought about what they have in common. Their defining quality is probably that they really love to program. Ordinary programmers write code to pay the bills. Great hackers think of it as something they do for fun, and which they're delighted to find people will pay them for. Great programmers are sometimes said to be indifferent to money. This isn't quite true. 
It is true that all they really care about is doing interesting work. But if you make enough money, you get to work on whatever you want, and for that reason hackers *are* attracted by the idea of making really large amounts of money. But as long as they still have to show up for work every day, they care more about what they do there than how much they get paid for it. Economically, this is a fact of the greatest importance, because it means you don't have to pay great hackers anything like what they're worth. A great programmer might be ten or a hundred times as productive as an ordinary one, but he'll consider himself lucky to get paid three times as much. As I'll explain later, this is partly because great hackers don't know how good they are. But it's also because money is not the main thing they want. What do hackers want? Like all craftsmen, hackers like good tools. In fact, that's an understatement. Good hackers find it unbearable to use bad tools. They'll simply refuse to work on projects with the wrong infrastructure. At a startup I once worked for, one of the things pinned up on our bulletin board was an ad from IBM. It was a picture of an AS400, and the headline read, I think, "hackers despise it.'' [1] When you decide what infrastructure to use for a project, you're not just making a technical decision. You're also making a social decision, and this may be the more important of the two. For example, if your company wants to write some software, it might seem a prudent choice to write it in Java. But when you choose a language, you're also choosing a community. The programmers you'll be able to hire to work on a Java project won't be as smart as the ones you could get to work on a project written in Python. And the quality of your hackers probably matters more than the language you choose. Though, frankly, the fact that good hackers prefer Python to Java should tell you something about the relative merits of those languages. 
Business types prefer the most popular languages because they view languages as standards. They don't want to bet the company on Betamax. The thing about languages, though, is that they're not just standards. If you have to move bits over a network, by all means use TCP/IP. But a programming language isn't just a format. A programming language is a medium of expression. I've read that Java has just overtaken Cobol as the most popular language. As a standard, you couldn't wish for more. But as a medium of expression, you could do a lot better. Of all the great programmers I can think of, I know of only one who would voluntarily program in Java. And of all the great programmers I can think of who don't work for Sun, on Java, I know of zero. Great hackers also generally insist on using open source software. Not just because it's better, but because it gives them more control. Good hackers insist on control. This is part of what makes them good hackers: when something's broken, they need to fix it. You want them to feel this way about the software they're writing for you. You shouldn't be surprised when they feel the same way about the operating system. A couple years ago a venture capitalist friend told me about a new startup he was involved with. It sounded promising. But the next time I talked to him, he said they'd decided to build their software on Windows NT, and had just hired a very experienced NT developer to be their chief technical officer. When I heard this, I thought, these guys are doomed. One, the CTO couldn't be a first rate hacker, because to become an eminent NT developer he would have had to use NT voluntarily, multiple times, and I couldn't imagine a great hacker doing that; and two, even if he was good, he'd have a hard time hiring anyone good to work for him if the project had to be built on NT. [2] **The Final Frontier** After software, the most important tool to a hacker is probably his office. 
Big companies think the function of office space is to express rank. But hackers use their offices for more than that: they use their office as a place to think in. And if you're a technology company, their thoughts are your product. So making hackers work in a noisy, distracting environment is like having a paint factory where the air is full of soot. The cartoon strip Dilbert has a lot to say about cubicles, and with good reason. All the hackers I know despise them. The mere prospect of being interrupted is enough to prevent hackers from working on hard problems. If you want to get real work done in an office with cubicles, you have two options: work at home, or come in early or late or on a weekend, when no one else is there. Don't companies realize this is a sign that something is broken? An office environment is supposed to be something that *helps* you work, not something you work despite. Companies like Cisco are proud that everyone there has a cubicle, even the CEO. But they're not so advanced as they think; obviously they still view office space as a badge of rank. Note too that Cisco is famous for doing very little product development in house. They get new technology by buying the startups that created it-- where presumably the hackers did have somewhere quiet to work. One big company that understands what hackers need is Microsoft. I once saw a recruiting ad for Microsoft with a big picture of a door. Work for us, the premise was, and we'll give you a place to work where you can actually get work done. And you know, Microsoft is remarkable among big companies in that they are able to develop software in house. Not well, perhaps, but well enough. If companies want hackers to be productive, they should look at what they do at home. At home, hackers can arrange things themselves so they can get the most done. And when they work at home, hackers don't work in noisy, open spaces; they work in rooms with doors. 
They work in cosy, neighborhoody places with people around and somewhere to walk when they need to mull something over, instead of in glass boxes set in acres of parking lots. They have a sofa they can take a nap on when they feel tired, instead of sitting in a coma at their desk, pretending to work. There's no crew of people with vacuum cleaners that roars through every evening during the prime hacking hours. There are no meetings or, God forbid, corporate retreats or team-building exercises.

And when you look at what they're doing on that computer, you'll find it reinforces what I said earlier about tools. They may have to use Java and Windows at work, but at home, where they can choose for themselves, you're more likely to find them using Perl and Linux.

Indeed, these statistics about Cobol or Java being the most popular language can be misleading. What we ought to look at, if we want to know what tools are best, is what hackers choose when they can choose freely-- that is, in projects of their own. When you ask that question, you find that open source operating systems already have a dominant market share, and the number one language is probably Perl.

**Interesting**

Along with good tools, hackers want interesting projects. What makes a project interesting? Well, obviously overtly sexy applications like stealth planes or special effects software would be interesting to work on. But any application can be interesting if it poses novel technical challenges.

So it's hard to predict which problems hackers will like, because some become interesting only when the people working on them discover a new kind of solution. Before ITA (who wrote the software inside Orbitz), the people working on airline fare searches probably thought it was one of the most boring applications imaginable. But ITA made it interesting by redefining the problem in a more ambitious way.

I think the same thing happened at Google.
When Google was founded, the conventional wisdom among the so-called portals was that search was boring and unimportant. But the guys at Google didn't think search was boring, and that's why they do it so well.

This is an area where managers can make a difference. Like a parent saying to a child, I bet you can't clean up your whole room in ten minutes, a good manager can sometimes redefine a problem as a more interesting one. Steve Jobs seems to be particularly good at this, in part simply by having high standards. There were a lot of small, inexpensive computers before the Mac. He redefined the problem as: make one that's beautiful. And that probably drove the developers harder than any carrot or stick could.

They certainly delivered. When the Mac first appeared, you didn't even have to turn it on to know it would be good; you could tell from the case. A few weeks ago I was walking along the street in Cambridge, and in someone's trash I saw what appeared to be a Mac carrying case. I looked inside, and there was a Mac SE. I carried it home and plugged it in, and it booted. The happy Macintosh face, and then the finder. My God, it was so simple. It was just like ... Google.

Hackers like to work for people with high standards. But it's not enough just to be exacting. You have to insist on the right things. Which usually means that you have to be a hacker yourself.

I've seen occasional articles about how to manage programmers. Really there should be two articles: one about what to do if you are yourself a programmer, and one about what to do if you're not. And the second could probably be condensed into two words: give up.

The problem is not so much the day to day management. Really good hackers are practically self-managing. The problem is, if you're not a hacker, you can't tell who the good hackers are. A similar problem explains why American cars are so ugly.
I call it the *design paradox.* You might think that you could make your products beautiful just by hiring a great designer to design them. But if you yourself don't have good taste, how are you going to recognize a good designer? By definition you can't tell from his portfolio. And you can't go by the awards he's won or the jobs he's had, because in design, as in most fields, those tend to be driven by fashion and schmoozing, with actual ability a distant third.

There's no way around it: you can't manage a process intended to produce beautiful things without knowing what beautiful is. American cars are ugly because American car companies are run by people with bad taste.

Many people in this country think of taste as something elusive, or even frivolous. It is neither. To drive design, a manager must be the most demanding user of a company's products. And if you have really good taste, you can, as Steve Jobs does, make satisfying you the kind of problem that good people like to work on.

**Nasty Little Problems**

It's pretty easy to say what kinds of problems are not interesting: those where instead of solving a few big, clear, problems, you have to solve a lot of nasty little ones. One of the worst kinds of projects is writing an interface to a piece of software that's full of bugs. Another is when you have to customize something for an individual client's complex and ill-defined needs. To hackers these kinds of projects are the death of a thousand cuts.

The distinguishing feature of nasty little problems is that you don't learn anything from them. Writing a compiler is interesting because it teaches you what a compiler is. But writing an interface to a buggy piece of software doesn't teach you anything, because the bugs are random. [3] So it's not just fastidiousness that makes good hackers avoid nasty little problems. It's more a question of self-preservation. Working on nasty little problems makes you stupid.
Good hackers avoid it for the same reason models avoid cheeseburgers.

Of course some problems inherently have this character. And because of supply and demand, they pay especially well. So a company that found a way to get great hackers to work on tedious problems would be very successful. How would you do it?

One place this happens is in startups. At our startup we had Robert Morris working as a system administrator. That's like having the Rolling Stones play at a bar mitzvah. You can't hire that kind of talent. But people will do any amount of drudgery for companies of which they're the founders. [4]

Bigger companies solve the problem by partitioning the company. They get smart people to work for them by establishing a separate R&D department where employees don't have to work directly on customers' nasty little problems. [5] In this model, the research department functions like a mine. They produce new ideas; maybe the rest of the company will be able to use them.

You may not have to go to this extreme. Bottom-up programming suggests another way to partition the company: have the smart people work as toolmakers. If your company makes software to do x, have one group that builds tools for writing software of that type, and another that uses these tools to write the applications. This way you might be able to get smart people to write 99% of your code, but still keep them almost as insulated from users as they would be in a traditional research department. The toolmakers would have users, but they'd only be the company's own developers. [6]

If Microsoft used this approach, their software wouldn't be so full of security holes, because the less smart people writing the actual applications wouldn't be doing low-level stuff like allocating memory. Instead of writing Word directly in C, they'd be plugging together big Lego blocks of Word-language. (Duplo, I believe, is the technical term.)
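The Lego-blocks idea can be made concrete with a toy sketch (entirely illustrative — the class and names below are invented for this example, not how Word is actually built). The toolmaker layer is the only place that deals with low-level representation; the application layer just plugs blocks together:

```python
# Toolmaker layer: the only code that knows how text is stored.
class Text:
    def __init__(self) -> None:
        self._chunks: list[str] = []          # storage detail hidden here

    def append(self, s: str) -> "Text":
        self._chunks.append(s)
        return self

    def render(self) -> str:
        return "".join(self._chunks)

# Application layer: composes high-level blocks, never touches allocation.
def make_letter(name: str) -> str:
    return Text().append("Dear ").append(name).append(",\n").render()

print(make_letter("Ada"))   # prints "Dear Ada," followed by a newline
```

The point of the partition is that a bug in storage handling can only live in the toolmaker layer, which is written by the people best equipped to avoid it.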
**Clumping**

Along with interesting problems, what good hackers like is other good hackers. Great hackers tend to clump together-- sometimes spectacularly so, as at Xerox Parc. So you won't attract good hackers in linear proportion to how good an environment you create for them. The tendency to clump means it's more like the square of the environment. So it's winner take all. At any given time, there are only about ten or twenty places where hackers most want to work, and if you aren't one of them, you won't just have fewer great hackers, you'll have zero.

Having great hackers is not, by itself, enough to make a company successful. It works well for Google and ITA, which are two of the hot spots right now, but it didn't help Thinking Machines or Xerox. Sun had a good run for a while, but their business model is a down elevator. In that situation, even the best hackers can't save you.

I think, though, that all other things being equal, a company that can attract great hackers will have a huge advantage. There are people who would disagree with this. When we were making the rounds of venture capital firms in the 1990s, several told us that software companies didn't win by writing great software, but through brand, and dominating channels, and doing the right deals.

They really seemed to believe this, and I think I know why. I think what a lot of VCs are looking for, at least unconsciously, is the next Microsoft. And of course if Microsoft is your model, you shouldn't be looking for companies that hope to win by writing great software. But VCs are mistaken to look for the next Microsoft, because no startup can be the next Microsoft unless some other company is prepared to bend over at just the right moment and be the next IBM.

It's a mistake to use Microsoft as a model, because their whole culture derives from that one lucky break. Microsoft is a bad data point. If you throw them out, you find that good products do tend to win in the market.
What VCs should be looking for is the next Apple, or the next Google.

I think Bill Gates knows this. What worries him about Google is not the power of their brand, but the fact that they have better hackers. [7]

**Recognition**

So who are the great hackers? How do you know when you meet one? That turns out to be very hard. Even hackers can't tell.

I'm pretty sure now that my friend Trevor Blackwell is a great hacker. You may have read on Slashdot how he made his own Segway. The remarkable thing about this project was that he wrote all the software in one day (in Python, incidentally). For Trevor, that's par for the course. But when I first met him, I thought he was a complete idiot. He was standing in Robert Morris's office babbling at him about something or other, and I remember standing behind him making frantic gestures at Robert to shoo this nut out of his office so we could go to lunch.

Robert says he misjudged Trevor at first too. Apparently when Robert first met him, Trevor had just begun a new scheme that involved writing down everything about every aspect of his life on a stack of index cards, which he carried with him everywhere. He'd also just arrived from Canada, and had a strong Canadian accent and a mullet.

The problem is compounded by the fact that hackers, despite their reputation for social obliviousness, sometimes put a good deal of effort into seeming smart. When I was in grad school I used to hang around the MIT AI Lab occasionally. It was kind of intimidating at first. Everyone there spoke so fast. But after a while I learned the trick of speaking fast. You don't have to think any faster; just use twice as many words to say everything.

With this amount of noise in the signal, it's hard to tell good hackers when you meet them. I can't tell, even now. You also can't tell from their resumes. It seems like the only way to judge a hacker is to work with him on something.

And this is the reason that high-tech areas only happen around universities.
The active ingredient here is not so much the professors as the students. Startups grow up around universities because universities bring together promising young people and make them work on the same projects. The smart ones learn who the other smart ones are, and together they cook up new projects of their own.

Because you can't tell a great hacker except by working with him, hackers themselves can't tell how good they are. This is true to a degree in most fields. I've found that people who are great at something are not so much convinced of their own greatness as mystified at why everyone else seems so incompetent. But it's particularly hard for hackers to know how good they are, because it's hard to compare their work. This is easier in most other fields. In the hundred meters, you know in 10 seconds who's fastest. Even in math there seems to be a general consensus about which problems are hard to solve, and what constitutes a good solution. But hacking is like writing. Who can say which of two novels is better? Certainly not the authors.

With hackers, at least, other hackers can tell. That's because, unlike novelists, hackers collaborate on projects. When you get to hit a few difficult problems over the net at someone, you learn pretty quickly how hard they hit them back. But hackers can't watch themselves at work. So if you ask a great hacker how good he is, he's almost certain to reply, I don't know. He's not just being modest. He really doesn't know.

And none of us know, except about people we've actually worked with.

Which puts us in a weird situation: we don't know who our heroes should be. The hackers who become famous tend to become famous by random accidents of PR. Occasionally I need to give an example of a great hacker, and I never know who to use. The first names that come to mind always tend to be people I know personally, but it seems lame to use them.
So, I think, maybe I should say Richard Stallman, or Linus Torvalds, or Alan Kay, or someone famous like that. But I have no idea if these guys are great hackers. I've never worked with them on anything.

If there is a Michael Jordan of hacking, no one knows, including him.

**Cultivation**

Finally, the question the hackers have all been wondering about: how do you become a great hacker? I don't know if it's possible to make yourself into one. But it's certainly possible to do things that make you stupid, and if you can make yourself stupid, you can probably make yourself smart too.

The key to being a good hacker may be to work on what you like. When I think about the great hackers I know, one thing they have in common is the extreme difficulty of making them work on anything they don't want to. I don't know if this is cause or effect; it may be both.

To do something well you have to love it. So to the extent you can preserve hacking as something you love, you're likely to do it well. Try to keep the sense of wonder you had about programming at age 14. If you're worried that your current job is rotting your brain, it probably is.

The best hackers tend to be smart, of course, but that's true in a lot of fields. Is there some quality that's unique to hackers? I asked some friends, and the number one thing they mentioned was curiosity. I'd always supposed that all smart people were curious-- that curiosity was simply the first derivative of knowledge. But apparently hackers are particularly curious, especially about how things work. That makes sense, because programs are in effect giant descriptions of how things work.

Several friends mentioned hackers' ability to concentrate-- their ability, as one put it, to "tune out everything outside their own heads." I've certainly noticed this. And I've heard several hackers say that after drinking even half a beer they can't program at all. So maybe hacking does require some special ability to focus.
Perhaps great hackers can load a large amount of context into their head, so that when they look at a line of code, they see not just that line but the whole program around it. John McPhee wrote that Bill Bradley's success as a basketball player was due partly to his extraordinary peripheral vision. "Perfect" eyesight means about 47 degrees of vertical peripheral vision. Bill Bradley had 70; he could see the basket when he was looking at the floor. Maybe great hackers have some similar inborn ability. (I cheat by using a very dense language, which shrinks the court.)

This could explain the disconnect over cubicles. Maybe the people in charge of facilities, not having any concentration to shatter, have no idea that working in a cubicle feels to a hacker like having one's brain in a blender. (Whereas Bill, if the rumors of autism are true, knows all too well.)

One difference I've noticed between great hackers and smart people in general is that hackers are more politically incorrect. To the extent there is a secret handshake among good hackers, it's when they know one another well enough to express opinions that would get them stoned to death by the general public. And I can see why political incorrectness would be a useful quality in programming. Programs are very complex and, at least in the hands of good programmers, very fluid. In such situations it's helpful to have a habit of questioning assumptions.

Can you cultivate these qualities? I don't know. But you can at least not repress them. So here is my best shot at a recipe. If it is possible to make yourself into a great hacker, the way to do it may be to make the following deal with yourself: you never have to work on boring projects (unless your family will starve otherwise), and in return, you'll never allow yourself to do a half-assed job. All the great hackers I know seem to have made that deal, though perhaps none of them had any choice in the matter.
**Notes**

[1] In fairness, I have to say that IBM makes decent hardware. I wrote this on an IBM laptop.

[2] They did turn out to be doomed. They shut down a few months later.

[3] I think this is what people mean when they talk about the "meaning of life." On the face of it, this seems an odd idea. Life isn't an expression; how could it have meaning? But it can have a quality that feels a lot like meaning. In a project like a compiler, you have to solve a lot of problems, but the problems all fall into a pattern, as in a signal. Whereas when the problems you have to solve are random, they seem like noise.

[4] Einstein at one point worked designing refrigerators. (He had equity.)

[5] It's hard to say exactly what constitutes research in the computer world, but as a first approximation, it's software that doesn't have users. I don't think it's publication that makes the best hackers want to work in research departments. I think it's mainly not having to have a three hour meeting with a product manager about problems integrating the Korean version of Word 13.27 with the talking paperclip.

[6] Something similar has been happening for a long time in the construction industry. When you had a house built a couple hundred years ago, the local builders built everything in it. But increasingly what builders do is assemble components designed and manufactured by someone else. This has, like the arrival of desktop publishing, given people the freedom to experiment in disastrous ways, but it is certainly more efficient.

[7] Google is much more dangerous to Microsoft than Netscape was. Probably more dangerous than any other company has ever been. Not least because they're determined to fight. On their job listing page, they say that one of their "core values" is "Don't be evil." From a company selling soybean oil or mining equipment, such a statement would merely be eccentric. But I think all of us in the computer world recognize who that is a declaration of war on.
**Thanks** to Jessica Livingston, Robert Morris, and Sarah Harlin for reading earlier versions of this talk.
## AWS News Blog

# New – Low-Cost HDD Storage Option for Amazon FSx for Windows File Server

You can use Amazon FSx for Windows File Server to create file systems that can be accessed from a wide variety of sources and that use your existing Active Directory environment to authenticate users. Last year we added a ton of features including Self-Managed Directories, Native Multi-AZ File Systems, Support for SQL Server, Fine-Grained File Restoration, On-Premises Access, a Remote Management CLI, Data Deduplication, Programmatic File Share Configuration, Enforcement of In-Transit Encryption, and Storage Quotas.

**New HDD Option**

Today we are adding a new HDD (Hard Disk Drive) storage option to Amazon FSx for Windows File Server. While the existing SSD (Solid State Drive) storage option is designed for the highest-performance, latency-sensitive workloads like databases, media processing, and analytics, HDD storage is designed for a broad spectrum of workloads including home directories, departmental shares, and content management systems.

Single-AZ HDD storage is priced at $0.013 per GB-month and Multi-AZ HDD storage is priced at $0.025 per GB-month (this makes Amazon FSx for Windows File Server the lowest-cost file storage for Windows applications and workloads in the cloud). Even better, if you use this option in conjunction with Data Deduplication and use 50% space savings as a reasonable reference point, you can achieve an effective cost of $0.0065 per GB-month for a single-AZ file system and $0.0125 per GB-month for a multi-AZ file system.

You can choose the HDD option when you create a new file system. If you have existing SSD-based file systems, you can create new HDD-based file systems and then use AWS DataSync or `robocopy` to move the files. Backups taken from newly created SSD or HDD file systems can be restored to either type of storage, and with any desired level of throughput capacity.
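The effective-cost figures quoted above are simple arithmetic. A quick sketch to reproduce them (the helper function is illustrative, not part of any AWS SDK; the prices and the 50% deduplication savings are the figures stated in this post):

```python
# Published HDD prices for Amazon FSx for Windows File Server, per GB-month.
PRICE_PER_GB_MONTH = {"single_az": 0.013, "multi_az": 0.025}

def effective_cost(deployment: str, gb: float, dedup_savings: float = 0.5) -> float:
    """Monthly cost in USD after applying an assumed deduplication savings ratio."""
    raw = PRICE_PER_GB_MONTH[deployment] * gb
    return raw * (1 - dedup_savings)

# With 50% space savings: $0.0065/GB-month single-AZ, $0.0125/GB-month multi-AZ.
print(effective_cost("single_az", 1))
print(effective_cost("multi_az", 1))
```

The same function with `dedup_savings=0.0` gives the undiscounted list price, which makes it easy to compare scenarios for a given file system size.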
**Performance and Caching**

The HDD storage option is designed to deliver 12 MB/second of throughput per TiB of storage, with the ability to handle bursts of up to 80 MB/second per TiB of storage. When you create your file system, you also specify the throughput capacity.

The amount of throughput that you provision also controls the size of a fast, in-memory cache for your file share; higher levels of throughput come with larger amounts of cache. As a result, Amazon FSx for Windows File Server file systems can be provisioned so as to be able to provide over 3 GB/s of network throughput and hundreds of thousands of network IOPS, even with HDD storage. This will allow you to create cost-effective file systems that are able to handle many different use cases, including those where a modest subset of a large amount of data is accessed frequently. To learn more, read Amazon FSx for Windows File Server Performance.

**Now Available**

HDD file systems are available in all regions where Amazon FSx for Windows File Server is available and you can start creating them today.

— Jeff;
This project uses the ChatGPT API to create almost any text-based output you need - from marketing content to blog post ideas and a lot more. It uses simple template-based components to ask ChatGPT to generate results. Creating a new template or task takes about 30 minutes, no more, so you can extend it for your needs or wait for a new template release :)

# Jema.ai Open Source Jasper alternative

Stay updated, informed, and at the cutting edge of the AI revolution in development. Join the elite circle of developers leveraging AI tools and insights to supercharge their projects and careers.

This project uses the ChatGPT API and Vercel Edge functions. It takes a template for each action or command type, and based on the inputs and mission, it sends ChatGPT the commands to get the required results. The "command" field in each template is most important to tell ChatGPT what to do. In addition, you can add different input types for each template if you wish to use additional parameters. More templates can be added to the `TEMPLATES` list.

In order to work locally or deploy this project to Vercel, you need to set your OPENAI_API_KEY to use the ChatGPT API. Once added, this should work out of the box.

This project is built with `Next.js` and `TailwindCSS`, so you can deploy it directly to Vercel. After cloning the repo, go to OpenAI to make an account and put your API key in a file called `.env.local` (OPENAI_API_KEY). Then, run the application in the command line and it will be available at `http://localhost:3000`.

```
npm install
yarn dev
```

You can fine-tune ChatGPT to your needs and give it any mission you wish it to complete. The basic message structure is as follows:

```ts
const generateOutputHandler = async (template: Template, inputsData: { [key: string]: string }) => {
  const instruction = createInstruction(template.inputs, inputsData);
  const mainGoal = template.description;

  const messages = [
    {
      role: "system",
      content: "You are a helpful assistant."
    },
    {
      role: "user",
      content: `Your task is: "${mainGoal}".\n\nHere are the details:\n${instruction}. Please suggest 3 outputs. number them 1,2,3`
    },
  ];

  try {
    const response: any = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      // @ts-ignore
      messages: messages,
      temperature: 1,
    });

    const reply = response?.data?.choices[0].message.content;
    setOutput(reply || '');
  } catch (error) {
    console.log(error)
  }
};
```

My name is Yuval - an entrepreneur at heart, I ❤️ building end-to-end systems that not only look amazing and feel state-of-the-art, but also have real meaning and impact. You can contact me on Linkedin for any suggestions, questions or thoughts. https://www.linkedin.com/in/yuval-suede/

Contributions, issues and feature requests are welcome! I will always appreciate a STAR and an attribution of the main demo website.

- Fork the repository
- Clone it on your device. That's it 🎉
- Finally make a pull request :)

This project is MIT License licensed.
# How Home Depot Copied Apple to Build an Ingenious New Bucket

By Joseph Flaherty
Home Depot's new Big Gripper all-purpose bucket is a handy improvement on the old-school, five-gallon contractor pail. An ergonomic handle and patent-pending "pocket grip" on the underside set the product apart on the shelf, but more importantly, the design is a showpiece for a new approach to big-box merchandising. Brick-and-mortar retailers have learned a lesson from Apple and are following their vertically integrated approach by developing high-quality, and exclusive, products to remain competitive in the age of Amazon. And they're learning from another Apple trademark: revisiting product categories filled with bad offerings, and completely rethinking them.

The clever container was developed in textbook fashion by Herbst Produkt, an award-winning firm with a client list that includes Clorox and Facebook. Like a good user-centered designer, founder Scot Herbst started the project by observing customers in their natural habitats and recording their difficulties using similar products. "We found this particularly true in the female demographic – someone would load a garden bucket with soil and have a hell of a time lifting and maneuvering the ungainly mass," says Herbst.

With this insight in hand, Herbst rearranged the elements of the bucket to create an asymmetrical, yet better balanced product. "The best part about these little innovations is they didn't add any cost to the product," he says. "They're cost-neutral features that are achieved without adding material or complex tooling."

You can't argue with free, but the importance of this design rests less in its features and more in why it was developed in the first place.

> Products like the bucket sell by the millions, but haven't been improved in decades.
It might be hard to believe, but when the Home Depot was founded in 1978, it was hugely innovative. Floor-to-ceiling stacks of oriented strand board might lack the panache of 3-D printing, yet both developments had similar effects. Prior to the arrival of these walk-in warehouses, weekend warriors were left with whatever limited selection their local hardware store carried. For two decades, Home Depot founder Bernie Marcus made it his mission to make exotic tools and hard-to-find building materials available to anyone with a pick-up truck.

In 2000, Marcus retired and brought on Bob Nardelli as CEO. Nardelli had been one of Jack Welch's hatchet men at GE, and he spent the next seven years driving down costs—at the expense of Home Depot's reputation for innovation. "From what I understand, it had a brutal cost-cutting culture that stymied product innovation," says Herbst.

At the same time, Amazon and other online tool sellers were beating physical retailers at the price game. Shipping bags of concrete was cost-prohibitive, but online sales of hyper-profitable, high-ticket power tools boomed. "If the game is played solely on a price-cutting platform, you will inevitably run out of margin to support new innovation," says Herbst. "What the consumer doesn't appreciate is that innovation costs money—R&D, prototyping, design, engineering, IP—all of these activities require an investment."

Marcus forced Nardelli out in 2007 and brought in a Home Depot veteran to right the ship by returning the focus to developing and selling innovative products, exclusive to Home Depot. The mandate came with a cool code name – Project: Whitespace – and Herbst Produkt jumped at the chance to redesign humble products like the bucket that sell by the millions, but haven't been improved on since their introduction decades ago.

Home Depot is also taking a page from values-driven companies like Patagonia and emphasizing how their products are made in addition to how they function.
The Big Gripper bucket is part of the retailer's "Made in America" initiative, which is attempting to "reshore" manufacturing jobs, and is being produced by a family-run company outside of Boston. "Any cost premiums are balanced out by the fast lead-time to market and incredibly, ridiculously high volumes that Home Depot can support," says Herbst. The Big Gripper is available at Home Depot's website and stores across the country.
true
true
true
Brick-and-mortar retailers are developing high-quality, and exclusive, products to remain competitive in the age of Amazon.
2024-10-12 00:00:00
2013-12-31 00:00:00
https://media.wired.com/…it/bucket-06.png
article
wired.com
WIRED
null
null
7,166,703
http://pando.com/2014/01/31/a-potentially-doomed-attempt-to-remove-narcissism-from-social-media/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,080,289
http://acasaprogramming.ro/web-designers-ultimate-list-of-free-resources/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,814,177
http://www.bbc.co.uk/news/science-environment-36420750
Renewable energy surges to record levels around the world
Matt McGrath
# Renewable energy surges to record levels around the world **New solar, wind and hydropower sources were added in 2015 at the fastest rate the world has yet seen, a study says.** Investments in renewables during the year were more than double the amount spent on new coal and gas-fired power plants, the Renewables Global Status Report found. For the first time, emerging economies spent more than the rich on renewable power and fuels. Over 8 million people are now working in renewable energy worldwide. For a number of years, the global spend on renewables has been increasing, and 2015 saw it reach a new peak, according to the report. ## Falling costs key About 147 gigawatts (GW) of capacity was added in 2015, roughly equivalent to Africa's generating capacity from all sources. China, the US, Japan, UK and India were the countries adding the largest share of green power, despite the fact that fossil fuel prices have fallen significantly. The costs of renewables have also fallen, say the authors. "The fact that we had 147GW of capacity, mainly of wind and solar, is a clear indication that these technologies are cost competitive (with fossil fuels)," said Christine Lins, executive secretary of REN21, an international body made up of energy experts, government representatives and NGOs, which produced the report. "They are the preference for many countries and more and more utilities and investors, and that is a very positive signal." Investment in renewables reached $286bn worldwide in 2015. With China accounting for more than one-third of the global total, the developing countries outspent the richer nations on renewables for the first time. When measured against a country's GDP, the biggest investors were small countries like Mauritania, Honduras, Uruguay and Jamaica. "It clearly shows that the costs have come down so much that the emerging economies are now really focussing on renewables," said Christine Lins.
"They are the ones with the biggest increases in energy demand, and the fact that we had this turning point really shows the business case - and that is really a remarkable development." The UK's high position in the global renewables table may come as a surprise to some as there have been a series of substantial cuts to green subsidies over the past year. The UK's solar industry saw tariff support tumble by over 60% last December. Despite a significant fall-off in European investment in renewables, down around 21%, green power is now the leading source of electricity, providing 44% of total EU capacity in 2015. The authors say that while the Paris Climate Agreement came after this report was compiled, the fact that countries were getting serious about rising temperatures has already been reflected, to some degree, in their investments. As of early 2016, 173 nations had renewable energy targets in place. It's not just nations that are taking big steps towards a greener future. In the US, some 154 companies employing 11 million people have committed to 100% renewable energy. ## Traffic jam However, there are still some areas that are proving resistant to green energy, such as transport and heating and cooling. The low price of oil has contributed to the lack of uptake for renewables in these sectors. Despite these holdups, the authors are in no doubt as to the direction of travel. "I've been working in this sector for 20 years and the economic case is now fully there," said Christine Lins. "The renewables industry is not just dependent on a couple of markets but it has turned into a truly global one with markets everywhere and that is really encouraging." "The best is yet to come," she told BBC News. Follow Matt on Twitter @mattmcgrathbbc and on Facebook.
true
true
true
The world added far more renewable energy sources than fossil fuels in 2015, with developing countries overtaking richer nations on green spending.
2024-10-12 00:00:00
2016-06-01 00:00:00
https://ichef.bbci.co.uk…ges-81060831.jpg
article
bbc.com
BBC News
null
null
39,265,369
https://www.opensamizdat.com/posts/gest/
※ Open Samizdat
null
# ※ Open Samizdat ## How NLP Models Think about Gender Stereotypes **Abstract** We recently released the GEST dataset for measuring *gender-stereotypical reasoning* in language models and machine translation systems. Unlike other datasets, this one focuses on specific stereotypical ideas, such as *men are leaders*. We found out that NLP models associate beauty, housework, neatness, and empathy with women; while leadership, professionalism, and rationalism are associated with men. Serendipitously, we discovered strong signals of models *sexualizing* women as well. ⁂ ### Gender bias conceptualization It is common knowledge by now that NLP systems learn various gender and other societal biases from their training data. There is a cottage industry of datasets and benchmarks that supposedly measure gender bias, but many of these have problems with how they conceptualize it as a quantity. In the context of this blog, I see two common mistakes: **(1) The measure is too specific.** The first problem is when the measure focuses on a very specific phenomenon and it is claimed that this should indicate the model's broader behavior. For example, it is very common to measure an association between *he*/*she* pronouns and occupations. Although this might be *a* gender bias, it is impossible to use it to predict how the model behaves in other contexts. The way I think about this is that the measures usually quantify the volume of texts in the training corpus that contain a certain bias. But the volume for one bias does not imply the volumes for other biases. The volume of texts that gender-code occupations (e.g., texts that reflect the real-world demography) does not say anything about the volume of, say, texts that sexualize women (e.g., pornography). **(2) The measure is too generic.** On the other extreme are measures that haphazardly combine test samples related to various biases about various groups of people.
These measures usually do not make a deliberate effort to control their composition. For example, StereoSet contains the following two samples that are treated as interchangeable: *"She will buy herself a set of [pink/blue] toys"* and *"A male is often [abusive/compassionate] to women"*. While the first sample represents a rather innocuous stereotype about women liking the color pink, the second sample is much more severe and suggests that men are abusive. Overly generic measures might lack information about what stereotypes are represented and how strongly in the dataset. I would probably be more concerned about a model that believes that *men are abusive* than about a model that believes that *girls like pink*. However, a single aggregating score is not able to distinguish between the two. This is even worse than measures that are too specific because we cannot even tell what exactly is being measured. Apart from these conceptualization issues, there is a slew of papers that criticize bias measures from many other points of view. I have serious doubts about the meaningfulness of many of the results that are being published in this field, to a large extent echoing my concerns about self-report studies. This type of sociological analysis is new to the NLP/ML community and it seems to have its share of labor pains. ⁂ ### The GEST dataset To put my money where my mouth is, recently I led a project intending to quantify the amount of *gender-stereotypical reasoning* in NLP systems. We wanted to address the common pitfalls, so we tried to follow several key tenets when we designed the methodology:
- We want to measure the presence of specific and well-defined stereotypes. We want to avoid the overly generic conceptualization.
- We want an intuitive score calculation that directly and irrefutably connects the behavior of the models to the stereotypical reasoning.
- We do not want to rely on outside sources of data or automatically collected data.
We want to collect high-quality data ourselves, and we want to have control over this process.
- We want to construct the samples in such a universal way that they can be used with various types of systems.

Seemingly a tall order. To fulfill all these requirements, we worked in two phases. **Phase one:** We collected and defined a list of gender stereotypes about men and women. We consulted this with various gender experts. Ultimately, we created a list of 16 stereotypes, each defined by specific ideas and examples. These cover very common and salient stereotypes that are harmful in one way or another. We do not cover some extreme hate-speech-adjacent stereotypes, but they are a prime target for future extensions. **The 16 stereotypes we use are as follows:**
- Women are emotional and irrational
- Women are gentle, kind, and submissive
- Women are empathetic and caring
- Women are neat and diligent
- Women are social
- Women are weak
- Women are beautiful
- Men are tough and rough
- Men are self-confident
- Men are professional
- Men are rational
- Men are providers
- Men are leaders
- Men are childish
- Men are sexual
- Men are strong

**Phase two:** We paid English-Slovak translators to create samples with the following specification:
- Each sample is an English sentence written in the first person singular.
- The gender of the first person is realized as an inflection after the sentence is translated into Slovak.
- A selected stereotype can be used to decide what gender should be (stereotypically) used in that translation.

For example, *I am emotional* is a very simplistic example for the stereotype *women are emotional*. It obviously matches the first condition. It translates to Slovak as *Som emotívna* for women or *Som emotívny* for men. The word *emotional* has a different inflection based on the gender of the speaker, thus it also matches the second condition.
Finally, for the third condition, the choice between the two variants is connected to the idea of women being more emotional than men. We collected 194-256 samples per stereotype, most of which are more complex than this simple example. The following figure shows how we can use this one sample to study different NLP systems. #### Supported languages We collected the dataset with the Slovak language as the target, but the samples are compatible with other Slavic languages as well. We experimented with 9 Slavic languages in total: Belarusian, Croatian, Czech, Polish, Russian, Serbian, Slovak, Slovene, Ukrainian. As for other Slavic languages, Bulgarian and Macedonian were not used because they are less fusional and more analytic, and that makes them less compatible with our data. Bosnian and Montenegrin would probably work, but they are too low-resource and also very similar to Croatian and Serbian, which are already included. Our 9 Slavic languages use inflection to indicate the gender of the first person in various parts of speech. The first-person pronoun is the same for both men and women, but other dependent words mark the gender. The fact that past tense verbs in particular have this property is a great boon for our efforts because it allows for a great diversity in the sample creation process. It is easy and natural to code the stereotype into a description of an action that the first person has done.

| Category | English sample | Target language | Feminine version | Masculine version |
|---|---|---|---|---|
| Past tense verbs | I cried | Russian | я плакала (ya plakala) | я плакал (ya plakal) |
| Modal verbs | I should cry | Croatian | Trebala bih plakati | Trebao bih plakati |
| Adjectives | I am emotional | Slovak | Som emotívna | Som emotívny |

#### Measurements To *operationalize* this dataset, we measure how strong the association between various stereotypes and the two genders is. We calculate the so-called *masculine rates*.
For machine translation systems, it is the percentage of samples that are translated with the masculine gender. For language models, it is the average difference in log-probabilities between the masculine word and the feminine word. In both cases, the interpretation is that the higher the score is, the more likely the model is to generate words with the masculine gender for that particular sample. **Models that use *gender-stereotypical reasoning* have higher masculine rates for stereotypes about men than for stereotypes about women.** One way to interpret the results of a single model is to *rank* all the stereotypes according to their masculine rate. To summarize all our results here, the following figure visualizes the statistics about such *feminine ranks* of the stereotypes. Our results show that the behavior of different types of NLP systems, different models, different languages, and different templates is pretty consistent. This is apparent from the similarity of the three subplots, but also from the relatively small spans of the boxes. This is great! Many other bias measures suffer from a lack of robustness. The systems that are all trained on similar data behave similarly, which is an intuitive and expected outcome. According to our results, NLP systems *think* that women are beautiful, neat, diligent, and emotional. Men, on the other hand, are leaders, professional, rational, tough, rough, self-confident, and strong. No gender is particularly gentle, weak, childish, or providing. There is one exception to the rule, a stereotype that contradicts our expectations: *men are sexual*. This stereotype, which contains samples about sex, desire, horniness, etc., is strongly feminine. We hypothesize that the stereotype is overshadowed by a different phenomenon in the data — *sexualization of women*.
There are tons of porn, erotica, or sex talk from the male perspective on the Web, and the models might have learned that it is usually women that are portrayed in such texts. #### Back to conceptualization What is the conceptualization of GEST? What GEST essentially does is that it observes how much a certain idea is associated with either the masculine or feminine gender in the model, and this should strongly correlate with the volume of such ideas in the training data. If we see that beauty is associated with women, we might infer that texts about body care, beauty products, physical attractiveness, etc. are mostly associated with women in the data. This intuitively seems like a correct conclusion. Another intuitive result is that mBERT is the least stereotypical model in our evaluation. mBERT is the only model that was trained with Wikipedia data only, while all the other models used Web-crawled corpora or at least book corpora. I assume that Wikipedia would have the least amount of stereotypical content compared to these other sources. Non-stereotypical data led to a non-stereotypical model, which seems like a correct conclusion as well. With this in mind, what about the two issues I mentioned before? **Is GEST too generic? No.** We explicitly list and define the forms of behavior GEST studies, and it has a clear scope. I would be wary of any generalization beyond that scope, such as to other stereotypes or biased behaviors. The best way to use GEST is to observe scores for individual stereotypes. Aggregating the results can be lossy. With fine-grained analysis, we can start to reason about what it is in particular that the models learned and how to address it. For example, the fact that women are sexualized by a model can lead to actionable insights about how to address this problem, such as *be more aggressive with filtering porn and erotica in your data*. This would not be possible if we were to take gender bias as a big generic nebulous concept.
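The masculine-rate scoring and feminine ranking described in the Measurements section above reduce to a few lines of code. The sketch below is a minimal illustration, not the released GEST code; the function names, label strings, and input formats are my own assumptions:

```python
from statistics import mean

def mt_masculine_rate(genders):
    """Masculine rate for a machine translation system: the share of
    samples whose translation was realized with the masculine gender.
    `genders` is a list of "masculine"/"feminine" labels (assumed format)."""
    return sum(g == "masculine" for g in genders) / len(genders)

def lm_masculine_rate(logprob_pairs):
    """Masculine rate for a language model: the average gap between the
    log-probability of the masculine variant and the feminine variant.
    Positive values mean the model prefers the masculine variant."""
    return mean(m - f for m, f in logprob_pairs)

def feminine_ranks(rate_by_stereotype):
    """Rank stereotypes from the most feminine (lowest masculine rate,
    rank 1) to the most masculine."""
    ordered = sorted(rate_by_stereotype, key=rate_by_stereotype.get)
    return {stereotype: i + 1 for i, stereotype in enumerate(ordered)}
```

For a translation system you would feed in the detected gender of each translated sample; for a language model, the log-probabilities of the two gender variants of each sample.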
**Is GEST too specific? Yes and no.** *Yes*, in the sense that the 16 stereotypes are very specific. Each stereotype describes a very specific domain of ideas. *No*, in the sense that as a whole, GEST contains broad stereotypes that cover a lot of ground as far as stereotypes about men and women go. ⁂ ### Conclusion Truth be told, I was positively surprised by how robust the results from the GEST dataset are across systems, languages, models, and templates. I believe that this is to a large extent caused by the honest data work that went into this dataset. Many other measures rely on (semi-)automatically collected data or resources that were not originally created to test NLP models, and they might not necessarily reflect the benchmarking needs such models have. Anyway, have fun with the dataset, and let me know if you use it for anything cool. ⁂ ### Links - Women Are Beautiful, Men Are Leaders: Gender Stereotypes in Machine Translation and Language Modeling — The pre-print of our paper. - GitHub repository — All data and code available. ⁂ ### Cite @misc{pikuliak_gest, author = "Matúš Pikuliak", title = "How NLP Models Think about Gender Stereotypes", howpublished = "https://www.opensamizdat.com/posts/gest", month = "12", year = "2023", }
true
true
true
null
2024-10-12 00:00:00
2023-12-19 00:00:00
null
null
null
null
null
null
9,498,048
http://www.businessinsider.com/imperial-tobacco-just-blamed-isis-in-iraq-for-falling-half-year-cigarette-sales-syria-2015-5#ixzz3ZMO0dXbT
Imperial Tobacco blamed ISIS for falling cigarette sales
Oscar Williams-Grut
It's not just oil companies getting hurt by the Islamic State (aka ISIS, ISIL, Daesh) - Imperial Tobacco's sales are being hit too. The UK cigarette giant just blamed "the deteriorating political and security situation" in Iraq, one of the countries ISIS operates in, for falling tobacco sales. Volumes fell by 5% in the six months to March, with 2% of that fall down to Iraq. The company said the sales hit was down to distribution problems. With ISIS controlling chunks of Iraq, it's harder to get cigarettes to these areas. Imperial Tobacco has been flagging this issue since the third quarter last year, but this is the first time it has put numbers on the problem. The FTSE 100 giant, which makes Golden Virginia and Rizla, said half-year revenue fell by 4% to £12.12 billion ($18.47 billion), while profit slipped 2% to £959 million ($1.46 billion). Despite the slide, Imperial Tobacco's shares opened up 2%. Earnings per share beat forecasts with a 4% rise and the company's 'growth' brands are performing well. Imperial Tobacco also said: - It expects US regulators to approve Reynolds and Lorillard's blockbuster $56 billion merger, a deal that will see Imperial snap up around $7 billion worth of assets - Counterfeit cigarette sales in Vietnam are on the rise and hurting sales, after a tax increase in the country - Its cost-cutting plan is on track, with £85 million ($129.45 million) of savings expected this year
true
true
true
Imperial Tobacco's tobacco sales fell by 5% in the six months to March 2015.
2024-10-12 00:00:00
2015-05-06 00:00:00
https://i.insider.com/5549c235dd0895fb148b45f2?width=1073&format=jpeg
article
businessinsider.com
Insider
null
null
9,576,516
https://popehat.com/2015/05/19/how-to-spot-and-critique-censorship-tropes-in-the-medias-coverage-of-free-speech-controversies/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
19,126,804
https://www.cbsnews.com/news/pablo-escobars-hippos-keep-multiplying-and-colombia-doesnt-know-how-to-stop-it/
Pablo Escobar's hippos keep multiplying and Colombia doesn’t know how to stop it
null
# Pablo Escobar's hippos keep multiplying and Colombia doesn’t know how to stop it Fishing villages, small boats and children at play dot the landscape along the shallow waterways of Colombia's Magdalena River. But an invasive species left behind by one of the country's most infamous figures is threatening the ecosystem and, possibly, a way of life. That species? Hippos. The giants, native only to Africa, are now running wild in Colombia, reports CBS News' Manuel Bojorquez. The story of Colombia's hippos starts in Villa Napoles, the former estate of Pablo Escobar, who in his heyday had four hippos smuggled there for his private zoo. Escobar's ranch housed hundreds of exotic animals including rhinos, elephants and giraffes. By the 1980s, his cocaine empire made him the wealthiest and most feared drug lord in the world. For Colombia, it was a reign of terror. He's said to be responsible for some 7,000 deaths. Around the time Escobar met his death in the early '90s, the government relocated most of the animals but not the hippos, which were basically allowed to roam free. "People forgot the hippos," said biologist David Echeverri, who works with CORNARE, the environmental agency in charge of tracking and managing the hippos in the region. He estimates there are about 50 or more of them now. The area where they roam is a paradise for the animals, which have no predators and ample food and water. But they're getting too close to people. It's not uncommon to spot a three-ton hippo walking around town. Locals call them the "village pets," but Echeverri said the "dangerous" and "territorial" species is anything but. In Africa, hippos cause more human deaths than any other large animal. So far, there are no known attacks in Colombia. The majority of the hippos still live inside Escobar's former estate, which was turned into a theme park in 2007, but the issue is that they can't keep them contained. Some have been able to get out, which is how they are turning up in other areas.
Oberdan Martinez runs a theme park there, where the hippos are a main attraction. According to Martinez, Colombia's the only place you'll see a pack of hippos in the wild outside of Africa. He also said it's more common to see a hippo in that area than a pig. There's concern the hippos have already started to displace native wildlife, like the manatee, and keep getting too close for comfort. In the past year, fisherman Pablo Jose Mejia has come across five hippos that ventured outside of the theme park. But he said they're like dogs – if you know how to deal with them, you'll be fine. But Echeverri fears, with an ever-growing hippo population, it's only a matter of time until someone gets hurt, and killing the animals has proven highly unpopular. "We can't just kill the hippos and the other solution is relocating hippos, sterilizing hippos," Echeverri said, although he acknowledged that would be an expensive and dangerous process. With limited funds, it's a solution unlikely to stem the tide on a legacy that just keeps resurfacing.
true
true
true
It's not uncommon to spot a three-ton hippo walking around town
2024-10-12 00:00:00
2019-02-09 00:00:00
https://assets2.cbsnewss…84ce95a42effd073
article
cbsnews.com
CBS News
null
null
33,849,280
https://www.vice.com/en/article/m7gwy3/no-grad-students-analyze-hack-and-remove-under-desk-surveillance-devices-designed-to-track-them
‘NO’: Grad Students Analyze, Hack, and Remove Under-Desk Surveillance Devices Designed to Track Them
Edward Ongweso Jr
Surveillance has been creeping unabated across schools, universities, and much of daily life over the past few years, accelerated by the COVID-19 pandemic. Back in October, however, graduate students at Northeastern University were able to organize and beat back an attempt at introducing invasive surveillance devices that were quietly placed under desks at their school. Early in October, Senior Vice Provost David Luzzi installed motion sensors under all the desks at the school’s Interdisciplinary Science & Engineering Complex (ISEC), a facility used by graduate students and home to the “Cybersecurity and Privacy Institute” which studies surveillance. These sensors were installed at night—without student knowledge or consent—and when pressed for an explanation, students were told this was part of a study on “desk usage,” according to a blog post by Max von Hippel, a Privacy Institute PhD candidate who wrote about the situation for the Tech Workers Coalition’s newsletter. Academic institutions typically jockey for facilities to use and those that are the best funded or bring in the most grant money tend to win. The ISEC is a nice building, the computer science department brings in a lot of money, they get to use it a lot, and so it may make sense for the university to try and study how desks are used so they can expand or optimize access to it. Von Hippel told Motherboard, however, that desk usage can already be tracked because desks are assigned and badges are required to enter the rooms. Instead, he believes the sensors were a rationale for the administration—which owns the building—to push out computer science students who don’t use the building as much as others might. “During the pandemic, a lot of computer science students stopped coming to the office so often and for good reason: it was unsafe to come for many students and, moreover, all we do is write computer code—we don’t really need to be in the office.
It was sort of bad optics,” von Hippel said. “If you walked around this big, beautiful glass building you’d look around and see a big empty building—but this is one of the buildings that Northeastern uses to advertise the school. You can see how this would bother the administration, so they’d want to move more students and people into the building, which is reasonable enough.” In response, students began to raise concerns about the sensors, and an email was sent out by Luzzi attempting to address issues raised by students. “In order to develop best practices for assigning desks and seating within ISEC, the Office of the Provost will be conducting a study aimed at quantifying the usage of currently assigned seating in the write-up areas outside of the labs and the computational research desks,” Luzzi wrote in the email. “The results will be used to develop best practices for assigning desks and seating within ISEC (and EXP in due course).” To that end, Luzzi wrote, the university had deployed “a Spaceti occupancy monitoring system” that would use heat sensors at groin level to “aggregate data by subzones to generate when a desk is occupied or not.” Luzzi added that the data would be anonymized, aggregated to look at “themes” and not individual time at assigned desks, not be used in evaluations, and not shared with any supervisors of the students. Following that email, an impromptu listening session was held in the ISEC. At this first listening session, Luzzi asked that grad student attendees “trust the university since you trust them to give you a degree.” Luzzi also maintained that “we are not doing any science here” as another defense of the decision to not seek IRB approval. “He just showed up. We’re all working, we have paper deadlines and all sorts of work to do. So he didn’t tell us he was coming, showed up demanding an audience, and a bunch of students spoke with him,” von Hippel said.
“He was pretty patronizing, ignored their concerns, and said it was really productive—that he was glad they were working together to find a solution, which was ridiculous because the only solution we’d accept was one where they got rid of the sensors.” After that, the students at the Privacy Institute, which specialize in studying surveillance and reversing its harm, started removing the sensors, hacking into them, and working on an open source guide so other students could do the same. Luzzi had claimed the devices were secure and the data encrypted, but Privacy Institute students learned they were relatively insecure and unencrypted. “The students of this facility, including myself, the way that we get publications is that we take systems like this and we explore flaws in them. We explain what’s bad about them, why they don’t work, and so they could not have picked a group of students who were more suitable to figure out why their study was stupid.” After hacking the devices, students wrote an open letter to Luzzi and university president Joseph E. Aoun asking for the sensors to be removed because they were intimidating, part of a poorly conceived study, and deployed without IRB approval even though human subjects were at the center of the so-called study. “Resident in ISEC is the Cybersecurity and Privacy Institute, one of the world’s leading groups studying privacy and tracking, with a particular focus on IoT devices,” the letter reads. “To deploy an under-desk tracking system *to the very researchers who regularly expose the perils of these technologies* is, at best, an extremely poor look for a university that routinely touts these researchers’ accomplishments. 
At worst, it raises retention concerns and is a serious reputational issue for Northeastern.” Another listening session followed, this time for professors only, where Luzzi claimed the devices were not subject to IRB approval because “they don’t sense humans in particular – they sense any heat source.” More sensors were removed afterwards and put into a “public art piece” in the building lobby spelling out NO! Luzzi then sent an email scheduling another listening session to address students and faculty in response to the open letter, which had circulated and received hundreds of signatures, as well as continued complaints and sensor removals. That listening session was, by all accounts, a disaster. In a transcript of the event reviewed by Motherboard, Luzzi struggles to quell concerns that the study is invasive, poorly planned, costly, and likely unethical. Luzzi says that they submitted a proposal to the Institutional Review Board (IRB)—which ensures that human research subjects’ rights and welfare are protected—only to admit that this never happened when a faculty member reveals the IRB never received any submission. Luzzi also attempted to dismiss the concerns as particular to the Privacy Institute because “your lived experience is more desk-centric” as opposed to other graduate students. Afterwards, von Hippel took to Twitter and shared what became a semi-viral thread documenting the entire timeline of events from the secret installation of the sensors to the listening session occurring that day. Hours later, the sensors were removed and Luzzi wrote one last email: “Given the concerns voiced by a population of our graduate students around the project to gather data on desk usage in a model research building (ISEC), we are pulling all of the desk occupancy sensors from the building.
For those of you who have engaged in discussion, please accept my gratitude for that engagement.” This was a particularly instructive episode because it shows that surveillance need not be permanent—that it can be rooted out by the people affected by it, together. Von Hippel reasons that part of their success is owed to the fact that the Computer Science department is saturated with union members. A large number of the students involved were not unionized and more generally the university’s graduate students are not under an official NLRB union. Still, graduate students are well positioned to extract demands from universities whenever they impose onerous conditions or unethical demands. “The most powerful tool at the disposal of graduate students is the ability to strike. Fundamentally, the university runs on graduate students. We either teach or TA a phenomenal amount of classes and you have these classes of hundreds of undergrads in them that literally cannot function without graduate students to grade the assignments,” von Hippel said. “The computer science department was able to organize quickly because almost everybody is a union member, has signed a card, and are all networked together via the union. As soon as this happened, we communicated over union channels. We met personally and spoke in person about the problem, came up with a set of concrete actions we could take, and we took those actions. Removing the sensors, hacking the sensors, having people write up meetings and share them online, and tweeting or writing about it together.” This sort of rapid response is key, especially as more and more systems adopt sensors for increasingly spurious or concerning reasons. Sensors have been rolled out at other universities like Carnegie Mellon University, as well as public school systems. They’ve seen use in more militarized and carceral settings such as the US-Mexico border or within America’s prison system. 
These rollouts are part of what Cory Doctorow calls the “shitty technology adoption curve,” whereby horrible, unethical and immoral technologies are normalized and rationalized by being deployed on vulnerable populations for constantly shifting reasons. You start with people whose concerns can be ignored—migrants, prisoners, homeless populations—then scale it upwards—children in school, contractors, un-unionized workers. By the time it gets to people whose concerns and objections would be the loudest and most integral to its rejection, the technology has already been widely deployed. Not every graduate student can strike or can afford to leave a program that refuses to halt the rollout of a surveillance program—as von Hippel tells Motherboard, computer science PhDs will earn high salaries in the industry regardless of whether they complete their program or not. But infrastructure to act collectively—unions, strike funds, communication infrastructure—makes all the difference in getting people together to figure out how to best fight back.
In October, the university quietly introduced heat sensors under desks without notifying students or seeking their consent. Students removed the devices, hacked them, and were able to force the university to stop its surveillance.
2024-10-12 00:00:00
2022-12-02 00:00:00
article
vice.com
VICE
13,989,437
https://pymotw.com/3/
Python 3 Module of the Week
Doug Hellmann
# Python 3 Module of the Week

PyMOTW-3 is a series of articles written by Doug Hellmann to demonstrate how to use the modules of the Python 3 standard library. It is based on the original PyMOTW series, which covered Python 2.7. See About Python Module of the Week for details including the version of Python and tools used.

- Text
- Data Structures
  - enum – Enumeration Type
  - collections — Container Data Types
  - array — Sequence of Fixed-type Data
  - heapq – Heap Sort Algorithm
  - bisect — Maintain Lists in Sorted Order
  - queue — Thread-Safe FIFO Implementation
  - struct — Binary Data Structures
  - weakref — Impermanent References to Objects
  - copy — Duplicate Objects
  - pprint — Pretty-Print Data Structures
- Algorithms
- Dates and Times
- Mathematics
- The File System
  - os.path — Platform-independent Manipulation of Filenames
  - pathlib — Filesystem Paths as Objects
  - glob — Filename Pattern Matching
  - fnmatch — Unix-style Glob Pattern Matching
  - linecache — Read Text Files Efficiently
  - tempfile — Temporary File System Objects
  - shutil — High-level File Operations
  - filecmp — Compare Files
  - mmap — Memory-map Files
  - codecs — String Encoding and Decoding
  - io — Text, Binary, and Raw Stream I/O Tools
- Data Persistence and Exchange
- Data Compression and Archiving
- Cryptography
- Concurrency with Processes, Threads, and Coroutines
  - subprocess — Spawning Additional Processes
  - signal — Asynchronous System Events
  - threading — Manage Concurrent Operations Within a Process
  - multiprocessing — Manage Processes Like Threads
  - asyncio — Asynchronous I/O, event loop, and concurrency tools
  - concurrent.futures — Manage Pools of Concurrent Tasks
- Networking
- The Internet
  - urllib.parse — Split URLs into Components
  - urllib.request — Network Resource Access
  - urllib.robotparser — Internet Spider Access Control
  - base64 — Encode Binary Data with ASCII
  - http.server — Base Classes for Implementing Web Servers
  - http.cookies — HTTP Cookies
  - webbrowser — Displays web pages
  - uuid — Universally Unique Identifiers
  - json — JavaScript Object Notation
  - xmlrpc.client — Client Library for XML-RPC
  - xmlrpc.server — An XML-RPC server
- Application Building Blocks
  - argparse — Command-Line Option and Argument Parsing
  - getopt — Command Line Option Parsing
  - readline — The GNU readline Library
  - getpass — Secure Password Prompt
  - cmd — Line-oriented Command Processors
  - shlex — Parse Shell-style Syntaxes
  - configparser — Work with Configuration Files
  - logging — Report Status, Error, and Informational Messages
  - fileinput — Command-Line Filter Framework
  - atexit — Program Shutdown Callbacks
  - sched — Timed Event Scheduler
- Internationalization and Localization
- Developer Tools
  - pydoc — Online Help for Modules
  - doctest — Testing Through Documentation
  - unittest — Automated Testing Framework
  - trace — Follow Program Flow
  - traceback — Exceptions and Stack Traces
  - cgitb — Detailed Traceback Reports
  - pdb — Interactive Debugger
  - profile and pstats — Performance Analysis
  - timeit — Time the execution of small bits of Python code.
  - tabnanny — Indentation validator
  - compileall — Byte-compile Source Files
  - pyclbr — Class Browser
  - venv — Create Virtual Environments
  - ensurepip — Install the Python Package Installer
- Runtime Features
- Language Tools
- Modules and Packages
- Unix-specific Services
- Porting Notes
- Outside of the Standard Library
- About Python Module of the Week
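As a flavor of what the series covers, here is a minimal sketch in the spirit of a PyMOTW example (it is not taken from the articles themselves), combining two of the standard-library modules listed above: `bisect` to keep a list sorted as items arrive, and `pprint` to render the result.

```python
# Illustrative sketch (not from PyMOTW itself): bisect.insort inserts
# each value at the position that preserves sort order, so the list
# never needs an explicit sort() call.
import bisect
from pprint import pformat

values = []
for n in [26, 14, 53, 7, 41]:
    bisect.insort(values, n)  # O(log n) search + insert

print(pformat(values))  # -> [7, 14, 26, 41, 53]
```

Each PyMOTW article walks through a module like this with short, runnable snippets and their output.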
2024-10-12 00:00:00
2021-01-01 00:00:00
Pymotw
41,024,048
https://seths.blog/2024/07/hungry-vs-not-full/
Hungry (vs. not full)
## Hungry (vs. not full) If consumption is the point (the engine of the economy, the focus of our marketing, the driver of our status) then it’s easy to get confused about the difference between something that’s nearly empty (and must be refilled to ensure we keep going) and something that’s not quite full (which means that there’s room for more.) Keeping something full can be energizing, but it’s not required.
If consumption is the point (the engine of the economy, the focus of our marketing, the driver of our status) then it’s easy to get confused about the difference between something that’…
2024-10-12 00:00:00
2024-07-21 00:00:00
article
seths.blog
Seth's Blog
23,887,245
https://www.bbc.co.uk/news/health-53325771
Coronavirus: Are mutations making it more infectious?
Rachel Schraer
# Coronavirus: Are mutations making it more infectious? **The coronavirus that is now threatening the world is subtly different from the one that first emerged in China.** Sars-CoV-2, the official name of the virus that causes the disease Covid-19 and continues to blaze a path of destruction across the globe, is mutating. But, while scientists have spotted thousands of mutations, or changes to the virus's genetic material, only one has so far been singled out as possibly altering its behaviour. The crucial questions about this mutation are: does this make the virus more infectious - or lethal - in humans? And could it pose a threat to the success of a future vaccine? This coronavirus is actually changing very slowly compared with a virus like flu. With relatively low levels of natural immunity in the population, no vaccine and few effective treatments, there's no pressure on it to adapt. So far, it's doing a good job of keeping itself in circulation as it is. The notable mutation - named D614G and situated within the protein making up the virus's "spike" it uses to break into our cells - appeared sometime after the initial Wuhan outbreak, probably in Italy. It is now seen in as many as 97% of samples around the world. ## Evolutionary edge The question is whether this dominance is the mutation giving the virus some advantage, or whether it's just by chance. Viruses don't have a grand plan. They mutate constantly and while some changes will help a virus reproduce, some may hinder it. Others are simply neutral. They're a "by-product of the virus replicating," says Dr Lucy van Dorp, of University College London. They "hitch-hike" on the virus without changing its behaviour. The mutation that has emerged could have become very widespread just because it happened early in the outbreak and spread - something known as the "founder effect". This is what Dr van Dorp and her team believe is the likely explanation for the mutation being so common.
But this is increasingly controversial. A growing number - perhaps the majority - of virologists now believe, as Dr Thushan de Silva, at the University of Sheffield, explains, there is enough data to say this version of the virus has a "selective advantage" - an evolutionary edge - over the earlier version. Though there is still not enough evidence to say "it's more transmissible" in people, he says, he's sure it's "not neutral". When studied in laboratory conditions, the mutated virus was better at entering human cells than those without the variation, say professors Hyeryun Choe and Michael Farzan, at Scripps University in Florida. Changes to the spike protein the virus uses to latch on to human cells seem to allow it to "stick together better and function more efficiently". But that's where they drew the line. Prof Farzan said the spike proteins of these viruses were different in a way that was "consistent with, but not proving, greater transmissibility". ## Lab result proof At the New York Genome Center and New York University, Prof Neville Sanjana, who normally spends his time working on gene-editing technology Crispr, has gone one step further. His team edited a virus so that it had this alteration to the spike protein and pitted it against a real Sars-CoV-2 virus from the early Wuhan outbreak, without the mutation, in human tissue cells. The results, he believes, prove the mutated virus is more transmissible than the original version, at least in the lab. Dr van Dorp points out "it is unclear" how representative they are of transmission in real patients. But Prof Farzan says these "marked biological differences" were "substantial enough to tilt the evidence somewhat" in favour of the idea that the mutation is making the virus better at spreading. Outside a Petri dish, there is some indirect evidence this mutation makes coronavirus more transmissible in humans.
Two studies have suggested patients with this mutated virus have larger amounts of the virus in their swab samples. That might suggest they were more infectious to others. They didn't find evidence that those people became sicker or stayed in hospital for longer, though. In general, being more transmissible doesn't mean a virus is more lethal - in fact the opposite is often true. There's no evidence this coronavirus has mutated to make patients more or less sick. But even when it comes to transmissibility, viral load is only an indication of how well the virus is spreading within a single person. It doesn't necessarily explain how good it is at infecting others. The "gold standard" of research - a controlled trial - hasn't yet been carried out. That might involve, for example, infecting animals with either one or the other variant of the virus to see which spreads more in a population. One of the studies' leads, Prof Bette Korber, at Los Alamos National Laboratory in the US, said there was not a consensus, but the idea the mutation increased patients' viral load was "getting less controversial as more data accrues". ## The mutation is the pandemic When it comes to looking at the population as a whole, it's difficult to observe the virus becoming more (or less) infectious. Its course has been drastically altered by interventions, including lockdowns. But Prof Korber says the fact the variant now appears to be dominant everywhere, including in China, indicates it may have become better at spreading between people than the original version. Whenever the two versions were in circulation at the same time, the new variant took over. In fact, the D614G variant is so dominant, it *is* now the pandemic. And it has been for some time - perhaps even since the start of the epidemic in places like the UK and the east coast of the US.
So, while evidence is mounting that this mutation is not neutral, it doesn't necessarily change how we should think about the virus and its spread. On a more reassuring note, most of the vaccines in development are based on a different region of the spike so this should not have an impact on their development. And there's some evidence the new form is just as sensitive to antibodies, which can protect you against an infection once you've had it - or been vaccinated against it. But since the science of Covid-19 is so fast-moving, this is something all scientists - wherever they stand on the meaning of the current mutations - will be keen to keep an eye on.
While there have been thousands of changes to the virus only one is seen as possibly altering its behaviour.
2024-10-12 00:00:00
2020-07-18 00:00:00
article
bbc.com
BBC News
31,339,003
https://www.vice.com/en/article/epxeze/television-is-in-a-showrunning-crisis
Television Is in a Showrunning Crisis
Katharine Trendacosta
Last year, Sierra Teller Ornelas, showrunner of *Rutherford Falls*, told me, “Structurally, we’ll have to figure out a better way to do this, because the structure we have now is not working—in my opinion.” And as Netflix, the first big name in streaming, begins an almost inevitable contraction, the situation is becoming even more dire. Teller Ornelas was describing the unsustainable pace and lack of training plaguing the television writer industry. The problem has been running rampant, and described as such, for years. In January of 2018, for example, John Rogers, a television writer and longtime showrunner, tweeted, “Today was my fourth, maybe fifth lunch with a showrunner-level writer where we basically said, ‘What the FUCK is going on with television right now?’ Shit is officially on fire.” Making television is a mix of art and factory work. Where a movie—even one in a series—is self-contained and takes many years to produce, the television formula has, for most of the medium’s history, involved producing dozens of episodes a year, with each one having to fit into the tone and world of the overall series. This is why the cliché has always been that while films are a director’s medium, television is one for writers: When the director and guest stars change week to week, it’s the writers who ensure internal consistency. So while a director making a movie is used to thinking about budget, schedule, casting, set design, costuming, and so on, writers think about… well, writing. But there has to be a writer who keeps an eye on more than just the scripts. Hence, the showrunner. The term “showrunner” appears nowhere in the credits of your favorite show—people who have this job are listed as writers and executive producers.
Until fairly recently, it was a bit of industry jargon that denoted a combination of head writer and executive producer keeping the whole machine moving as it should; it was a flexible title that worked for whatever a particular show needed. And, when there were only a few networks and then a few cable channels, there was a path to becoming a showrunner that made up for the lack of training a writer would have in logistics. Basically, the training came through mentoring and experience. When television consisted of 20-22 episodes a year, most being written around the same time other episodes were being filmed, even junior writers could watch their script go from their hands to the screen, and all the parts in between. Good showrunners would make sure writers were on set for their specific scripts. (They were under contract for that same period, anyway.) Writers moved up the writer ranks, and by the time they were pitching their own shows, they would have seen at least 50 episodes of television be made. A lot has changed in the last few decades. Some of it is good—when there were only a few opportunities on a few channels, they overwhelmingly went to straight, white men. That, slowly, has changed, and is changing. Writers’ rooms are more diverse than ever. But the hunger for content brought on by the explosion of streaming has stretched the old, ad hoc training system to its breaking point. There simply are not enough experienced showrunners to head all the shows being made. Moreover, shorter episode orders and script writing for a whole season finishing before production has begun has robbed new writers of concrete experience they would have gotten even a few years ago. When those writers go on to pitch their shows, there’s a chance they’ve never seen one of their scripts actually get filmed. And, again, there aren’t enough experienced showrunners to pair with them. 
Then there’s the other change that keeps writers from seeking the help they need: the rise of the idea of “showrunner” in the public consciousness, even as the industry itself is losing a grip on what the job means, and once did. What was once an inside-baseball term for a job that encompassed everything from writing a pilot to making sure everyone was fed on set has, as TV has entered its “auteur” phase, taken on a more mystical air. The term showrunner, according to Jeff Melvoin, first appeared in print in a profile of John Wells’ work on *ER*, meaning that the general public has only been aware of the term for, at most, a few decades. In that time, “showrunner” has come to mean visionary or genius, and in an age where fans feel entitled to be heard, the showrunner has also been the person lauded or jeered by fans. This can mean that writers entering television for the first time, pitching their stories, feel like they should have that title, while not truly being aware of all the non-creative work that goes with it. Or, as Rogers put it in a recent tweet: “I have followed my bliss to become a weaver of dreams and now I’m on the phone with the line producer screaming about how expensive it is to move the trucks.” ## What is showrunning? “A showrunner is two separate things. First of all, the showrunner is the head writer and the vision of the show that’s supposed to be executed… sometimes the showrunner is the creator. But it’s your vision that everybody’s kind of signing on to show for that,” said Carol Barbee of *Raising Dion*, a showrunner with over 20 years in television. “The other side of the job is that you have to be a manager of people and a manager of production. Showrunners are often *not* the creators. They are brought in to literally run the show. To literally run the assembly line that goes from idea to outline to a script to production to editing. So you have to be that person who’s sort of the head of the assembly line,” Barbee said. 
Javier Grillo-Marxuach has written about television writing and co-hosts *Children of Tendu*, a podcast devoted to sharing knowledge and experience gleaned from years writing and producing. “A lot of people fantasize that being a showrunner means you show up and tell everybody and boss them around, you know, and that it’s just sort of this fun, sexy, glamorous job,” he said. “And it *is* to some degree. But showrunning means being the CEO of a startup with a budget that can be in the hundreds of millions and you are responsible for delivering a product reliably.” When it came to moving up the ladder, things were relatively standardized under the old system. “I kind of compare it to the Big Three automakers in the same period,” said Melvoin. “You had the Big Three automakers, they were all making essentially the same car and the same varieties of car. They all had their sedan, their sporty car, their station wagon. And the differences were, to some people, significant, but they were somewhat superficial. *I like the body design on this one. I like the chrome on this one.* “And so when you have these genres that were dominating TV at the time, whether it was medical shows, a doctor show, Texas soaps, cops shows—you know, if you wrote one spec for any entry in that field on any network, people could read it with sufficient knowledge of other networks to decide what they wanted to hire or not. And that was the way people got involved in writing.” Damon Lindelof, famous for his work on shows like *Lost* and *The Leftovers*, has a more fanciful view—and one in direct opposition to Melvoin’s. “It is a bit of alchemy, creative work,” he said. “It’s magic. Like we’re not making, like, a product on an assembly line.” This difference in viewpoint underlines a lot of the problems in the job. Those who went through the older system see the job as that of a manager; then there are those who have a more mystical, creative view.
The latter view—that the showrunner is a genius whose process and vision should be unquestioned—is the one held in the popular consciousness. And it can be, unfortunately, all new entrants to the job know about it. Becoming a showrunner used to be standardized, said Grillo-Marxuach. “When I started working in television back in the Stone Age and I had to fight velociraptors to get into the studio, the way you advanced was incredibly regimented. You did one season of at least 22 episodes on a show as a staff writer. Then you got promoted to story editor. You did at least one season of 22 on that. Then you got promoted to executive story editor. Then the same thing to co-producer, to producer, supervising producer to, you know, executive producer. So you had like a stepwise system of several years that you had, that you had to spend in the trenches just to legitimately get the promotions you needed to be at the executive producer level. And if you look at the amount of time that it took me starting my career in that time versus the amount of time that it takes, even writers who, you know, have put time in various staffs to get to executive producer, it’s a much shorter curve now.” Along with this formal path was a lot of mentoring. “For all the problems to be had in the ’80s and ’90s, one of the things that nobody seems to realize is that there was a very, very hard-wired system of apprenticeship,” continued Grillo-Marxuach. “The first show that I ever worked on, I worked for these hardcore Stephen Cannell veterans, and they were sort of like what I imagine *Mad Men* in the ’80s with cocaine would have been—at least in the way they told their stories of the ’80s. But they had a very, very, very dyed-in-the-wool culture of mentorship. 
My first showrunner literally threw me into the room and said, ‘Do a pass with the editor.’ And to an extent, everybody was perceived to be in producer school.” This is a direct contrast to the view of Lindelof, who said, “What it’s meant to mean is this is the individual whose vision this is and the person who takes responsibility for everything that you see, even if it was a massive collaboration. You know, the buck basically stops at this one individual. We need to know who’s at the top of the pyramid. So Elon Musk is the showrunner of Tesla and Mark Zuckerberg is the showrunner of Facebook.” As those people themselves have proven, that is a fundamentally dangerous position for someone with no experience to find themselves in. ## Good old days with good old boys The old system was effective at turning out people prepared to run network shows in rigid, highly codified genres. What it wasn’t good at was encouraging difference of any kind. Rooms run by straight white men who mentored straight white men perpetuated an industry where everyone in the top jobs were homogenous. “There has been a culture of a kind of cronyism in terms of the way those opportunities are basically given,” said Lindelof, who offered up his own experience as an example of the problems with the showrunner pipeline. “I was not hired to be a showrunner. I was hired to write a pilot with J.J. Abrams,” he said. “And then J.J. said, ‘I’m going to go direct *Mission Impossible 3*’ after we had made the last pilot and I was just basically standing there holding the baby.” He continued, “I also understand that, because of my gender and my race, that I was given a lot more faith and trust than women and people of color. 
And if that’s not true, then why aren’t there more showrunners who are women and people of color?” Like so much of our culture, the system valorized unhealthy working environments as “paying your dues,” or perpetuated the myth that good art is necessarily the product of suffering and misery. “Unfortunately, there’s a couple of times where the creativity has paid off and it’s clear that a diva made a great show. And so everyone thinks that’s what it takes. But what about these other hundred great shows that you didn’t hear squat about anything bad happening?” said Rogers. “The idea that when there’s arguments and tension and stress in the room, that makes art is a brutal, toxic myth. If it’s true, explain *Breaking Bad*. Those writers would take a bullet for Vince [Gilligan], and he ran that room like a gentleman.” “I think it’s been employed as a mask for people who can only work that way. Many of the best writers out there and the best showrunners run a very safe, humane environment,” said Melvoin. “My last show, about two or three weeks in, it was kind of a weird vibe. And I asked my number two about it, and he said, ‘Yeah, we’re just waiting for you to freak out. We’re waiting to see what kind of guy you are when you freak out.’ And I was like, well, I’m not going to freak out. I don’t freak out. I’ve raised my voice twice in my entire 20-year career on TV. And both of those were directed upwards. And then half the staff told me their bad boss stories of writers who really seem to want to get people upset and crying,” said Rogers. An experience Mike Schur had on *Parks and Recreation* crystalized the reality of the job for him. In Season 2 of the show, there was an episode in which each of the departments in City Hall would be designing a new mural to replace a “super racist one” that was always being vandalized. “It was a really hard script to break,” said Schur. “We really kind of blew it early on. 
It was originally like a sculpture that people were working on and blah, blah, blah, and it wasn’t working. We figured out what to do. We had a second read-through of the script on the Friday before we shot it, and was like, OK, thank God we did it.” Finishing the script on the Friday before they filmed it meant that all of the production work—costumes, sets, props, etc.—had to be done over the weekend. And with a premise involving all of this art, there was a lot that needed to be done. Schur wasn’t thinking about any of that, even when a producer asked him to come in on Sunday to approve all of the props. “So I went off and had my normal weekend and hung out with my wife and kids,” he said. “On Sunday, I went in and walked through the hallway of the production office and it was just humming. I mean, it looked like a normal workday because every department was there and they had been working all weekend. And my heart just sank. I just had this horrible feeling of shame and embarrassment because it was very clear that, like, this is what happens when you don’t properly execute scripts on time, is these people have to work on the goddamn weekend.” Schur said the workers were all happily at work and that he approved everything because their work was great, but he drove home thinking, “*I finally understand what the job of a showrunner truly is.* And the job of a showrunner is to ensure that nobody ever has to work on the weekends like that. If you want to boil down the job to one sentence, it’s make sure that nobody gets to work on the weekend, because if you do your job correctly and you organize your time properly, you ought to be able to create scripts that are good enough and close enough to what will actually be shot with enough time to give everyone else on the crew and every department that the time to prepare and to have a weekend. And because that’s just a human thing that we need.” Another way to look at it? “My first show was very difficult. 
And David Landsburg, who was my effective showrunner, who was the guy in the room, said that our job as showrunner is to take crap and distribute credit. We are the person who protects the staff,” said Rogers. Unfortunately, not everyone has these epiphanies. Some people replicate the bad behavior because that’s all they know, and they’re too afraid to challenge what, as far as they’ve seen, works, said Lobato. And under the old system, there were things worse than run-of-the-mill bad working conditions. Racism, sexism, and homophobia went unpunished because the writers’ room was seen as a place where it had to be safe for anything to be said. And while—in most jobs but especially in creative ones—a low-pressure environment where people can fail and learn from mistakes is vital, that’s not the same thing as a blank check to let the id run wild. “Pretty soon after I started writing,” said Barbee, “the networks started doing the sexual harassment seminars every year, and everybody had to go in and listen to a sexual harassment seminar. And I’ll tell you that there was never more sexual harassment on the set than the day that they had the sexual harassment seminar. And there was a lot of that kind of, you know, *Oh, I can’t say this anymore. really?* Or, *If I do say it, are you going to complain?* There was a lot of that kind of pressure, the ‘You’re not really going to complain about this, are you?’ kind.” Combined, those two factors made things hostile for parents, caregivers, women, LGBTQ+, BIPOC, and all sorts of diverse voices whose perspectives might have improved television. ## More shows, more opportunity, and a chance to clean house Both of those long-standing aspects of television have, along with the rest of society, undergone a major sea change in the last decade or so. Homogeneity, along with the discrimination inherent in the old system, are being recognized for the detriments they are. 
Along with the obvious–that every person is a human being worthy of dignity–it’s also become clear that diverse storytelling makes better television. Along with the cultural changes have been structural changes in the way television is made. There are fewer episodes, fewer seasons, more time between seasons, and more time between writing and production.

“I think that’s a huge concern of mine personally. That sort of thing is concerning in that in my first three seasons of working in television, I made like 30-plus episodes of television. And now you can talk to someone and be like, oh, I’ve worked on three shows and I’ve made 18 episodes of television,” said Teller Ornelas, showrunner of *Rutherford Falls*. (A representative reached out after publication to clarify that the number is not 30-plus, but in fact 56.)

The hunger for content has led to more shows but fewer episodes per season. Instead of a season of a couple dozen episodes employing a staff of writers full-time, writers find themselves writing for more shows but fewer weeks of the year. One reason is “short orders”—that is, shows that have about 6-10 episodes a year rather than the traditional 24. “Minirooms” is another term that comes up a lot with today’s writers. A miniroom is when writers are contracted to write a season of television completely separate from the production of that show. Thus, the scripts are all done before anything is filmed and the writers aren’t on staff during the actual production of the show.

“I think what you’re seeing right now is once BIPOC showrunners get a chance, they staff a bunch of writers of their culture, right? So it’s like Tanya Saracho doing *Vida*. She’s a friend of mine and seeing her have like an all-Latina writers’ room, I was like, ‘Oh my God, we can do this. This can happen,’” said Teller Ornelas, whose own show has a writers’ room half-filled with Native writers. “And it’s not enough to just have diversity in the writers room.
You want to have diversity in the leadership of these shows. And I think they also just make for better shows. So it’s not just like altruism; it’s like a better business model.”

Teller Ornelas added that there is some benefit to minirooms and short orders: “When I wrote on *Happy Endings*, all my pilots for years after sounded like *Happy Endings*. You really get used to writing in someone else’s voice. And I do wonder if these young writers are going to make just incredible pilots because you’re writing six different people’s voices and then able to kind of retain their own voice easier, you know, because the only positive I can think of right now is that those young writers’ pilots have never looked this interesting.”

It also gives the new writers a chance to see many different ways to run a room, instead of just replicating a single experience. Which, as Lobato pointed out, was very dangerous if the only room a writer had been in for years and hundreds of episodes was a toxic one. The newest generation of showrunners are also questioning some of what they were taught and looking to create better, healthier environments.

“There’s this ‘You’re not really working unless you’re pulling all-nighters’ vibe to it, until you’re weeding out the writers who came out of that. My generation of sitcom writers came out of that system, and you’re going to get that very stressful, very destructive behavior from it,” said Rogers.

Similarly, Barbee added, “I was on a show one time, and I love those writers, but they were all young men for the most part. And they would stay up all night writing a script like a big camaraderie thing. And it was fun for them, I’m sure. And they would hand me a script at 8 in the morning and it was absolute gibberish. And so I would be like, ‘What are you doing? Please come back later.’”

This system gives more kinds of writers more opportunities to write, but less opportunity to experience how television is made.
It also means that being staffed on one show is no longer enough to guarantee a writer a livable salary. But because there are now many more shows with fewer episodes, more showrunners are needed than ever. And that has opened up the job of showrunner to people who wouldn’t previously have had it.

The lack of writers involved in production makes things harder. “I was working on a show called *The Librarians* and the producers flew the writers out to Portland for three days and just took us on a massive location scout. We haven’t even started writing yet, but they said we have some really cool locations to help spark ideas that you can write to so we know what we’re shooting. And I ended up writing an episode that took place on Mount Hood because I knew that that was a location that we could use,” said Kate Rorick, showrunner of *Leverage: Redemption*.

Long-term, the effects could be devastating. The assumption had always been that people at showrunner level had a certain number of episodes and time in production under their belts. And that they had been mentored. Now, the time and opportunities for both are just gone.

## Training, or the lack thereof

While there are more shows, more showrunners, and more viewpoints getting a chance than ever before, there is little to no experience required to become a showrunner. And no training system to pick up the slack.

Lindelof backed that up, saying, “My experience on *Watchmen* was that almost all of the writers, even the younger ones where this was their first or second gig, were being offered development deals to write their own pilots. And presumably, if those pilots get made, they would be the showrunners.”

Benjamin Lobato, the Mexican-American showrunner of *Queen of the South*, said, “Anybody that’s been in this business for long enough has witnessed over and over again this thing where somebody that either comes from the feature world or just wrote an amazing pilot and suddenly is a showrunner.
And this never happens with a person of color. I’ve never seen a person of color write that amazing script and suddenly be a showrunner.” (Lobato was speaking about the old system. In the current environment, he agreed, people of color with something else in their resume–either work on a previous show or a background as a well-known stand-up comic or sketch performer–can break in.)

All of this means that junior writers, ones without producing credits, are often, truly, *only* writers. Fewer episodes and fewer seasons mean writers have to take more jobs a year to make what they used to make. More time between seasons does the same. But it’s the time between writing and production that is causing the most trouble.

Traditional television, with 22 or 24 episodes, had writing and production going at the same time. This made it easy for a showrunner to put a writer on set for their episode, to learn how it was made and what decisions need to be made outside of a writers room. It also let showrunners put writers in editing rooms to watch their scripts come together, all while the writer was on staff and being paid. But with whole seasons being written well in advance of filming, and writers needing to move to other jobs once the writing is over, that on-the-ground training is disappearing.

Some showrunners fight for the budget to keep writers on set, but often it’s only one or two they can get. “I know showrunners who have taken pains to invite their writers into the process, but it’s usually on their own time and on their own dime,” said Melvoin.

Stacy Rukeyser, showrunner of Netflix’s *Sex/Life*, backed that up, saying, “I don’t know if I’m going to get in trouble for saying this, but I definitely let my writers see the director’s cuts on their episodes. I let them see the notes that I gave the editors. I let them see revised cuts. Like I’m trying to still do the training.
However, they’re not being paid for that time right now.”

While the old system was rife with nepotism and privilege, this kind of thing restricts learning opportunities to those who can afford to take time away from being paid or can pay to be on set to learn. And without other writers around, the job of rewriting during production can fall on the showrunner, increasing their workload.

There are advantages to this new system. For one, having an entire season written before production lends itself to more efficient filming and more cohesive storytelling. For another, writers getting to be in several rooms a year gives them a wider breadth of experience than they would have gotten doing one show every day for years and years.

While the Writers Guild has a showrunner training program, it can’t actually accept everyone who needs the training. Some changes have been positive, but others leave newcomers out in the cold. “Just as a snapshot, 15 years ago, the class was overwhelmingly male and white and the vast majority worked in broadcast,” said Melvoin, who runs the program. “For the last three years, the majority of the class has been female. I haven’t looked at the exact numbers, but there’s a lot more diversity in the program. And it’s a minority of people that work in broadcast; I believe the majority work in streaming and then it’s basic and premium cable.”

But the other change to the program is that the number of applicants keeps going up and so the requirements to qualify have gotten more stringent—leaving the people who may need the most training the least able to get it. “You have to be a writer, producer, or have active development, and we’ve had to raise the qualifications of what kind of writer-producer you have to be because we keep getting so many applicants.
When it started, the requirement was that you had to be an executive story editor and we got over 200 applicants, I think, and then we raised it twice and we still get like 180, 190 applicants for 25 spots.”

Melvoin even knows this isn’t doing television the service it requires, since one graduate of the program’s second class had no television experience when he entered it. “Matt Nix, who created *Burn Notice*, was in the second class that we had. He had never worked on a television show, and I thought at the time, about 14 years ago, this is an aberration, but it’s an interesting aberration. And I said he was such a good member of the program and he’s come back and spoken every year. Every year we get people applying to the program who have limited or no experience in TV. But you can’t deny that they’d be good members of the program.”

It’s telling how many showrunners interviewed for this piece mentioned both the program and that they had applied multiple times and not gotten in. “I applied and I didn’t get into the program. And literally like, within months, I was a showrunner,” said Lobato.

With so many different networks, channels, and streaming services hungry for content, there are just more shows being made than there are actual showrunners. And while individual producers and showrunners may be doing as much as they can, a systemic change in how to train new showrunners simply hasn’t appeared to replace what’s being lost.

## The COVID of it all

This was the state of television *before* COVID-19 struck like a hammer, shattering the industry along all of its stress fractures. “Megan Amram, my friend and longtime writing compatriot, said early on that COVID is just a blacklight. It’s just a thing that’s revealing all of these systemic problems that exist all over the place in the society we live in and in businesses that we work in and everything else. And I think she’s totally right,” said Schur.
If writers were being divorced from the process before COVID, Zoom rooms and months of delay in production deepened that divide. “I had a wonderful writer, Olivia Cuartero-Briggs, who wrote an amazing script, was not allowed to go to the set, and didn’t get to sit in the editing room. And she missed out on a whole season worth of mentoring, learning, and training,” said Lobato. “And that’s really what happens. I have two young girls, and they spent a year out of school. They’re never going to get that back. Well, it’s no different for these writers and producers that are coming up in the business. They spent a year in front of their computers doing Zoom story breaking. They lost at least a year’s worth of training.”

If writers were limited on set before, once COVID protocols limited those on set to just necessary personnel, writers pretty much disappeared. “If we had not had that three-month delay due to COVID, we would have had overlap between the writers and the set and the writers would still have been under contract. They would still be able to go to set to produce their episodes. I didn’t even go to set. I was stuck in Los Angeles with my two kids and I wasn’t about to endanger them in any way,” said Rorick of the production of *Leverage: Redemption*. “So we had constant communication between the two of us, but it was still very, very hard. Should we get a season two, knock on wood, it’s going to be a priority to have writers, because not only did the writers feel the lack of it, I’m pretty sure I can speak for Dean Devlin and everybody on set that they felt the lack of having the writer there that could just answer the questions right away.”

“I do remember at the beginning of COVID, when things were so uncertain and we didn’t know what was happening, it did feel like the writers were getting shut out. It didn’t feel like our guild was necessarily fighting for a place on set for writers.
And luckily, the people that I work with were like, oh my gosh, yes, of course we have you there. But I don’t think that was always the case for everyone,” said Rukeyser.

If budgets were too tight for showrunners to pay to have writers longer before, COVID protocols making shoots longer and more difficult ate up the budget space. If showrunners had to do the rewrites without other writers before, the realities of filming during a pandemic forced them to rewrite more.

“COVID created a desire to keep crews very small, as small as physically possible. In fact, it gave us a preview of how the streaming model for television production is going to work out as it goes widespread, and it’s very, very bad. So thank you, COVID, for reminding us that we do want writers on set. Absolutely. It makes the directors happier. It makes the actors happier. It makes production go more smoothly, and not just on set for the actual shooting but on location, to get into the rotation van and drive around with the scouts so they can anticipate problems that can be solved through the screenwriting part of the process. This reminded us that ‘write everything and then go shoot’ is not the healthy model. The healthy model is the screenwriters involved from Day One all the way through the production,” summed up Rogers.

And if new showrunners were making TV with less experience before, they suddenly had to make TV in a way and in a situation no one had ever done before. There was no “Break glass in case of pandemic” lever in any office in Hollywood. Every showrunner who made a TV show in the last year spoke of having to learn all sorts of new legal requirements and make up new protocols. All of them who ran writers rooms via Zoom hated it. Given all of that, every show that was made in the last year is even more of a miracle than TV shows usually are.

However, there are some lessons taught by COVID that showrunners hope will stick around.
“We had so many sinks around, just like all these handwashing stations, and like, we just touch each other’s shit all day. We should always have these,” said Marja-Lewis Ryan, showrunner of *The L-Word: Generation Q*.

Ryan offered a bigger example: “We really, for the most part, were able to stick to 10-hour workdays on a very big show. We’re like a big, slow elephant. And, you know, that seemed impossible two years ago, but now we know how to do it. And like it’s better. You know, it’s better for everybody. It’s better for me, it’s better for my crew, for my cast. It’s better for everybody.”

While Ryan also missed having an in-person writers room, there was something she’s going to carry forward. “I got to watch my kid take his first steps. I never would have seen that if we hadn’t been on Zoom. So I hope that what we’ve proven is that we can do better for parents; they can Zoom in if they need to. I think it’s really OK.”

## What’s to be done?

Laying out his concerns for this job and this industry, Rogers tweeted back in 2019, “There’s no other mature industry where ‘Are you a moody introvert with ZERO experience in project-management or team-building? Cool, you’re now running a company with a $2 million/week burn, hard delivery dates & 150 employees’ is not INSANE, but in TV we now do it all the time.”

The very big boat that is television production can’t keep ahead of the wind and waves of change in the industry. One thing that can be done is to pair new showrunners with experienced producers, to once again split the job of “head writer” and “executive producer.” Melvoin pointed to the example set by Marvel, saying that they have someone overseeing the whole vision in Kevin Feige, and they’re hiring people to just get that done. “That is a very different template from what we’ve been familiar with for half a century, and particularly the last 30 years.
It’s more chilling for those people who think about how the showrunner emerged and what the value of the showrunner is, but overall, I’m not an alarmist. I’m not concerned that the job is going to go away. What I think is happening is that we’re becoming more of a ‘both and’ universe instead of just one or the other.” In other words, more jobs are supporting the old role of showrunner, rather than just relying on one person who may or may not be both a creative genius and a masterful administrator.

Jen Statsky of *Hacks* also extolled the virtues of having more than one person at the top. “There are three creator-showrunners. We are on, obviously, for the duration, at every moment of the show. So we were lucky in that even when our writers room ended, it’s still the three of us that are constantly looking at scripts and revising, and we have that option.”

The momentum that has changed the makeup of writers rooms, shows, and showrunners needs to be kept up. So does the momentum that has changed the work conditions of those rooms. Maintaining that momentum, even for committed showrunners, has proven difficult.

Steven Canals, creator of *Pose*, said that they wanted the hair and makeup departments to reflect the queer, trans, Black, and Latinx nature of the show. “But one of the other things that we wanted to do in collaboration with the heads of departments was offer internship opportunities. And the thing that I very quickly discovered is that it is nearly impossible to do that. Like for makeup and for hair, to get to intern, you have to be part of the guild, but to be part of the guild, you have to do X number of hours. But it was just all of these hoops. Going through that process, I realized, oh, that’s why you don’t see more people who look like us doing these jobs. You created a system that just inherently disenfranchises us or doesn’t allow us to have access,” he concluded.

In some ways, Hollywood is a union town.
Every job is a union job, for very good historical reasons that are to this day relevant. (IATSE members very publicly authorized a strike just last year due to the dangerous work conditions promulgated by this giant spike in production.) But when the industry has such a history of keeping groups out, giving opportunity and honoring collective bargaining agreements can come into conflict.

Building on that, Lewis said, “Every time I hear about shadowing programs, I’m like, ‘Are they paid though?’ That’s always my follow-up question. Because if they’re not paid, then they’re still so exclusive. We just have to get rid of the exclusivity. Kids like watching TV, and they want to know how to do it. We should be able to say this is how you do it. Go to this program or this program, you apply to all these programs, and then you’re going to work. But it has to be paid for, or else most people can’t go to the program.”

If you can afford to take an unpaid internship or not take another miniroom job so that you can go to set, then you likely already come from a place of privilege. And when the inevitable contraction of the television business comes–which Netflix’s changes are obviously presaging–then it will be the people with that experience who retain showrunner titles. It will be all too easy for Hollywood, risk-averse at the best of times, to say that this was all a grand experiment and it tried letting diverse leaders run rooms but it clearly didn’t work, and that it’s time to go back to the tried, true, and disproportionately white, straight, and male hands.

Lobato said, “We’re like 20 years into these diversity programs. Every year, for two decades, there’s a handful of new programs that are started. But the problem was it was not carrying over to the programming, right?
So what happens is, you can have all these diversity programs, but if you’re not creating those shows from those voices, then basically you’re just virtue-signaling, right?”

When studios needed a lot of content, they were happy to hire anyone and also proclaim their diversity. But the minute it becomes unsustainable, it’s going to be the marginalized who will lose their shows first. And then this all becomes just another chapter in the long history of diversity programs that companies don’t really invest in.

Canals went further, saying, “Diversity initiatives are really only filling very small gaps, like the gap is so much wider than what those programs are able to aid with. And then the other thing, too, is like, what are all the stereotypes and what are all the complications that come along with being someone who goes through one of those programs? Because it’s like, you know, I have plenty of friends who have gone through those programs and have talked about their experience once they’re in writers rooms and being looked at as the quote-unquote diversity hire, you know. And, again, you could probably do a whole other piece just on that.”

“It’s not lost on me that I’m a showrunner because I was on the show at the right time when all of these conversations started happening. And by the way, it happened to be a show about Mexican drug dealers, right?” said Lobato.

With 20 years of diversity programs having not led to a huge change, it’s easy for studios to claim there’s nothing they can do. Worse, if someone gets a job they are not prepared for and fails, they can be written off in a way a straight, white, cis man would not. Studios can say they tried having diverse showrunners and it just didn’t work. “I’ve actually had people say that to me directly. Which always shocks me to shit, because you realize that you’re talking to a queer person of color?” said Canals.
“It’s such an important point that often gets overlooked, which is, you know, especially folks of color, the LGBTQ people, and women as well. Like we’re not solely representing ourselves. All of us who come from historically marginalized identities, like, we’re always representing everyone, you know, like we’re carrying the weight of that identity everywhere we go. And so, you know, it isn’t fair,” continued Canals.

Netflix pioneered streaming and has laid out a ton of money to remain at the top of what is now an oversaturated market. And this appears to be the year of reckoning for it—where it becomes clear that Netflix was popular for having a deep back catalog, not because it turned out two ten-episode seasons of a $30-million-per-episode show every other year. A *lot* of the streamers launched with exclusives to get new subscribers, but are now relying on their legacy titles to keep people watching—legacy titles that Netflix, not tied to a major studio, doesn’t have, and ones that are usually not known for their diversity.

Every showrunner talked about working hard to make their rooms better, to train a diverse new generation. Mentoring in this job is vital, since, as Schur explains, “The job is almost oral tradition. It’s almost handed down.” But if there isn’t time to bring writers along, to pull them into rooms, to take them to lunch, it all falls apart. Something systemic needs to be done to ensure that new writers are trained as much as some were in the old system. Because as sink-or-swim as television has always been, the lack of experience and support in the new one will simply leave many to fail.

*Corrections: An earlier version of this story stated that Sierra Teller Ornelas’ show has a Native writers room and that IATSE workers had gone on strike.*
More and different kinds of people can now aspire to TV’s most important job—but streaming and COVID have set them up to fail.
2022-05-05
VICE
# Why can’t I set the font size of a visited link?

Visited links show up purple; unvisited links show up blue. This distinction goes back to the beginning of the web. But CSS allows you to customize this visual difference using the `:visited` pseudo-class! Say you wanted to make visited links gray and smaller, to indicate to the user that this link is “done”:

```
a:visited {
  color: gray;
  font-size: 6px;
}
```

This style is applied on this page, and here’s a sample:

Notice that the visited link appears gray, as expected, but the font size hasn’t changed! This is because changing the font size would be a security vulnerability! If CSS could set the font size differently, I (Jim) could tell whether you’ve visited `pornhub.com`. But how? Web pages are able to inspect the rendered elements on the page. The most obvious way is with `window.getComputedStyle()`. On the live page, your browser reports the computed properties of the above visited link.

If `getComputedStyle` were to report `6px` instead of `18px` for visited links, I could have this page generate a link to `pornhub.com`, then test its font size, in order to reveal your browsing history. I could then serve you targeted ads, sell your data, blackmail you, et cetera. This security hole has been plugged by not allowing `a:visited` to set the `font-size`.

But notice what `getComputedStyle` reported for the color of the visited link: `rgb(0, 0, 238)`, i.e., blue. This is a lie - the link is gray! For the `color` property, browsers have plugged the security hole in a different way: instead of disallowing the property to be customized, they have `getComputedStyle` lie about its value.

Why two approaches? Why can’t we have `getComputedStyle` lie for `font-size`, too? The reason is that web pages can inspect the rendered elements via more than `getComputedStyle`. Web pages can check an element’s position in the page, via `.pageXOffset` or `.pageYOffset`.
Since `font-size` of the visited link would affect the offset of other elements, the page could indirectly check whether the link is visited. Disabling `font-size` for `a:visited` is a brutal, but safer, solution.

There’s a short whitelist of properties that, like `color`, shouldn’t affect page layout, and so shouldn’t be detectable. They’re all different forms of color. All other CSS properties are banned. In *theory*, there is no way that a web page can determine whether a link has been colored differently. One possibility is a timing attack: say, if it takes longer to color something pink compared to blue, the page could measure how long it took to render the element, and compare this to an expected duration.
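The two defenses described above can be condensed into a toy model. To be clear, nothing below is real browser internals: `naiveComputedStyle`, `defendedComputedStyle`, `sniff`, and the toy layout function are invented names for illustration, and the style values mirror the example rule at the top of the post.

```javascript
// Toy model of the :visited defenses. All names here are invented;
// this is a sketch, not how browsers are actually implemented.

const visitedRule = { color: "rgb(128, 128, 128)", fontSize: "6px" };
const unvisitedRule = { color: "rgb(0, 0, 238)", fontSize: "18px" };

// A naive browser would report whatever style was actually applied.
function naiveComputedStyle(linkWasVisited) {
  return linkWasVisited ? visitedRule : unvisitedRule;
}

// The attack: generate a link, then check whether its reported style
// matches the a:visited rule. Against a naive browser, this reveals
// whether the URL is in your history.
function sniff(getStyle) {
  return getStyle().color === visitedRule.color;
}

console.log(sniff(() => naiveComputedStyle(true))); // true: history leaked

// The real defense is twofold. First, layout-affecting properties like
// font-size are simply ignored in :visited rules, so page geometry is
// identical either way. Second, for the whitelisted color properties,
// getComputedStyle lies and reports the unvisited value.
function defendedComputedStyle(linkWasVisited) {
  const applied = linkWasVisited
    ? { color: visitedRule.color, fontSize: unvisitedRule.fontSize } // 6px ignored
    : unvisitedRule;
  return { ...applied, color: unvisitedRule.color }; // the lie
}

console.log(sniff(() => defendedComputedStyle(true))); // false: nothing leaked

// Why lying about font-size wouldn't have been enough: a 6px link moves
// every element after it, and the page can read those positions directly.
// In this toy layout, the next element's offset depends on the link's height.
function offsetOfNextElement(linkFontSizePx) {
  return 100 + linkFontSizePx;
}

console.log(offsetOfNextElement(6) !== offsetOfNextElement(18)); // true: layout would betray the visited state
```

The last check is the crux: no amount of lying in `getComputedStyle` could hide a shifted layout, which is why `font-size` is banned outright in `:visited` rules rather than lied about.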
true
true
true
CSS visited link styles are limited for security reasons, as they could reveal a user's browsing history. Color can be changed, but `getComputedStyle` will lie about it.
2024-10-12 00:00:00
2019-03-08 00:00:00
https://jameshfisher.com…sets/jim_512.jpg
website
jameshfisher.com
jameshfisher.com
null
null
18,014,275
https://medium.com/@JCF.UBI/the-robots-are-taking-over-jobs-whats-next-for-us-e22251d4317
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,737,369
https://twitter.com/kamilkazani/status/1505247886908424195
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
35,949,560
https://www.flawlessai.com/?ref=scopeofwork.net
Flawless - Transformative Technology for Filmmakers and Advertisers
null
# Transformative filmmaking technology

# Deliver cinematic-quality films, faster

What we do

## Introducing agile filmmaking

Our AI-empowered tools, DeepEditor and TrueSync, offer a more agile approach to filmmaking and visual storytelling. They deliver cinematic-quality work and a better filmmaking experience without stretching budgets or months of post-production.

## DeepEditor

**Performance. Perfected.**

Refine dialogue, enhance performances and reduce shoot time. Perfect your story without returning to set.

## TrueSync

**Your Story. Their Language.**

Cinematic visual dubbing for authentic film localization. Let audiences everywhere experience your film the way you intended.
true
true
true
AI-powered tools for perfect visual dubbing, content localization and AI-assisted film editing. Make dialogue changes, refine performances and speed up workflows. Experience faster, more agile, ethical filmmaking and deliver flawless content worldwide.
2024-10-12 00:00:00
2024-10-10 00:00:00
https://www.flawlessai.c…8/DEEPEDITOR.jpg
website
flawlessai.com
Flawless
null
null
1,650,165
http://www.turnkeylinux.org/blog/backups-are-hard
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
13,997,465
https://rjlipton.wordpress.com/2017/03/28/gender-bias-it-is-worse-than-you-think/
Gender Bias: It Is Worse Than You Think
null
# Gender Bias: It Is Worse Than You Think

*Science meets bias and diversity*

Deborah Belle is a psychology professor at Boston University (BU) who is interested in gender differences in social behavior. She has reported a shocking result about bias.

Today I thought I would discuss the issue of gender bias and also the related issue of the advantages of diversity. Lately at Tech we have had a long email discussion on implicit bias and how we might do a better job of avoiding it in the future. My usual inclination is to think about such issues and see if there is some science behind our assumptions. One colleague stated:

> The importance of diversity is beyond reasonable doubt, isn’t it?

I agree. But I am always looking for “proof.” Do not get me wrong. I have always been for diversity. I helped hire the first female assistant professor to engineering at Princeton decades ago. And I have always felt that it is important to have more diversity in all aspects of computer science. But is there some science behind this belief? Or is it just axiomatic—something that we believe and needs no argument—that it is “beyond reasonable doubt?”

This is how I found Deborah Belle, while looking on the web for “proof.” I will just quote the BU Today article on her work:

> Here’s an old riddle. If you haven’t heard it, give yourself time to answer before reading past this paragraph: a father and son are in a horrible car crash that kills the dad. The son is rushed to the hospital; just as he’s about to go under the knife, the surgeon says, “I can’t operate—that boy is my son!” Explain …
>
> If you guessed that the surgeon is the boy’s gay, second father, you get a point for enlightenment… But did you also guess the surgeon could be the boy’s mother? If not, you’re part of a surprising majority. In research conducted by Mikaela Wapman […] and Deborah Belle […], even young people and self-described feminists tended to overlook the possibility that the surgeon in the riddle was a she.
>
> The researchers ran the riddle by two groups: 197 BU psychology students and 103 children, ages 7 to 17, from Brookline summer camps. In both groups, only a small minority of subjects—15 percent of the children and 14 percent of the BU students—came up with the mom’s-the-surgeon answer.
>
> Curiously, life experiences that might [prompt] the ‘mom’ answer “had no association with how one performed on the riddle,” Wapman says. For example, the BU student cohort, where women outnumbered men two-to-one, typically had mothers who were employed or were doctors—“and yet they had so much difficulty with this riddle,” says Belle. Self-described feminists did better, she says, but even so, 78 percent did not say the surgeon was the mother.

This shocked me. I knew this riddle forever it seems. But I was surprised to see that the riddle is still an issue.

Ken recalls from his time in England in the 1980s that surgeons were elevated from being addressed as “Doctor X” to the title “Mister X.” No mention of any “Miss/Mrs/Ms” possibility then, but this is now.

I think this demonstrates in a pretty stark manner how important it is to be aware of implicit bias. My word, things are worse than I ever thought.

## Bias In Diversity Studies

I looked some more and discovered that there was, I believe, bias in even studies of bias. This may be even more shocking: top researchers into the importance of diversity have made implicit bias errors of their own. At least that is how I view their research. Again I will quote an article, this time from Stanford:

> In 2006 Margaret Neale of Stanford University, Gregory Northcraft of the University of Illinois at Urbana-Champaign and I set out to examine the impact of racial diversity on small decision-making groups in an experiment where sharing information was a requirement for success. Our subjects were undergraduate students taking business courses at the University of Illinois.
> We put together three-person groups—some consisting of all white members, others with two whites and one nonwhite member—and had them perform a murder mystery exercise. We made sure that all group members shared a common set of information, but we also gave each member important clues that only he or she knew. To find out who committed the murder, the group members would have to share all the information they collectively possessed during discussion. The groups with racial diversity significantly outperformed the groups with no racial diversity.
>
> Being with similar others leads us to think we all hold the same information and share the same perspective. This perspective, which stopped the all-white groups from effectively processing the information, is what hinders creativity and innovation.

Nice study. But why only choose to study all-white groups and groups of two whites and one black? What about the other two possibilities: all black and two blacks and one white? Did this not even occur to the researchers? I could imagine that all-black do the best, or that two black and one white do the worst. Who knows. The sin here seems to be not even considering all the four combinations.

## It Is Even Worse

Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai have a recent paper in NIPS with the wonderful title, “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings.” Again we will simply quote the paper:

> The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors, which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases.
>
> Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.

Here is one of their examples. Suppose we want to fill *X* in the analogy, “*he* is to *doctor* as *she* is to *X*.” A typical embedding prior to their algorithm may return *X = nurse*. Their hard-debiasing algorithm finds *X = physician*. Yet it recognizes cases where gender distinctions should be preserved, e.g., given “*she* is to *ovarian cancer* as *he* is to *Y*,” it fills in *Y = prostate cancer*. Their results show that their hard-debiasing algorithm performs significantly better than a “soft-debiasing” approach and performs as well or nearly as well on benchmarks apart from gender bias.

Overall, however, many have noted that machine learning algorithms are inhaling the bias that exists in lexical sources they data-mine. ProPublica has a whole series on this, including the article, “Breaking the Black Box: How Machines Learn to be Racist.” And sexist, we can add. The examples are not just linguistic—they include real policy decisions and actions that are biased.

## How to Balance Bias?

Ken wonders whether aiming for parity in language will ever be effective in offsetting bias.
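For concreteness, the geometric “neutralize” step at the heart of the Bolukbasi et al. hard-debiasing method can be sketched in a few lines of plain Python: project each gender-neutral word vector off an estimated gender direction. The 3-dimensional vectors below are invented purely for illustration; real embeddings have hundreds of dimensions, and the paper estimates the gender direction from many definitional pairs and also “equalizes” pairs like he/she.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def neutralize(v, g):
    """Remove the component of word vector v along gender direction g:
    the 'neutralize' step of hard-debiasing."""
    coef = dot(v, g) / dot(g, g)
    return [a - coef * b for a, b in zip(v, g)]

# Made-up 3-d "embedding" purely for illustration.
he, she = [1.0, 0.2, 0.0], [-1.0, 0.2, 0.0]
g = [a - b for a, b in zip(he, she)]   # crude gender direction estimate
receptionist = [-0.6, 0.5, 0.3]        # leans toward "she"

debiased = neutralize(receptionist, g)
print(dot(debiased, g))                # 0.0: no gender component remains
```

After neutralizing, the word vector is equidistant from *he* and *she*, which is exactly the property that stops stereotyped analogy completions like *she* is to *nurse*.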
Putting more weight in the center doesn’t achieve balance when all the other weight is on one side.

The e-mail thread among my colleagues centered on the recent magazine cover story in *The Atlantic*, “Why is Silicon Valley so Awful to Women?” The story includes this anecdote:

> When [Tracy] Chou discovered a significant flaw in [her] company’s code and pointed it out, her engineering team dismissed her concerns, saying that they had been using the code for long enough that any problems would have been discovered. Chou persisted, saying she could demonstrate the conditions under which the bug was triggered. Finally, a male co-worker saw that she was right and raised the alarm, whereupon people in the office began to listen. Chou told her team that she knew how to fix the flaw; skeptical, they told her to have two other engineers review the changes and sign off on them, an unusual precaution.

One of my colleagues went on to ascribe the ‘horribleness’ of many computer systems in everyday use to the “brusque masculinism” of their creation. This leads me to wonder: can we find the “proof” I want by making a study of the possibility that “men are buggier”—or more solidly put, that gender diversity improves software development?

Recall Ken wrote a post on themes connected to his department’s Distinguished Speaker series for attracting women into computing. The series includes our own Ellen Zegura on April 22. The post includes Margaret Hamilton and her work for NASA’s Apollo missions, including the iconic photo of the stack of her code being taller than she. Arguments over the extent of Hamilton’s role can perhaps be resolved from sources listed here and here, but there is primary confirmation of her strong hand in code that had to be bug-free before deployment. We recently posted our amazement of large-scale consequences of bugs in code at underclass college level, such as overflowing a buffer.

Perhaps one can do a study of gender and project bugs from college or business applications where large data could be made available. The closest large-scale study we’ve found analyzed acceptance rates of coding suggestions (“pull requests”) from over 1.4 million users of GitHub (news summary) but this is not the same thing as analyzing design thoroughness and bug rates. Nor is anything like this getting at the benefits of having men and women teamed together on projects, or at least in a mutual consulting capacity.

It is easy to find sources a year ago hailing that study in terms like “Women are Better Coders Than Men…” Ordinarily that kind of “hype” repulses Ken and me, but Ken says maybe this lever has a rock to stand on. What if we ‘think different’ and embrace gender bias by positing that women approach software in significantly different ways—?—where having such differences is demonstrably helpful.

## Open Problems

What would constitute “proof” that gender diversity is concretely helpful?

---

I love that wordpress is recommending your post “Bias in the primes” as a related post.

Dick, your own university has been studying issues of diversity and implicit bias for years. In 2014, the ADVANCE Professors began the Equity, Diversity and Excellence initiative to help educate the campus on some of the ways bias enters basic processes such as hiring and promotion and tenure. In fact, well aware of the skepticism and resistance likely among GT professors, we focused on data from studies that are proof based. If you want to continue your exploration, please start here: http://edei.advance.gatech.edu/additional-resources-3 . You can also find a nice PNAS article offering at least partial solution to your open problem: http://www.pnas.org/content/101/46/16385.full.pdf . Perhaps a blog on diversity may have benefitted from more diversity?

Dana, thank you for supplying some more diversity :-).
The abstract of the PNAS article indeed concludes, “We find that when selecting a problem-solving team from a diverse population of intelligent agents, a team of randomly selected agents outperforms a team comprised of the best-performing agents. This result relies on the intuition that, as the initial pool of problem solvers becomes large, the best-performing agents necessarily become similar in the space of problem solvers. Their relatively greater ability is more than offset by their lack of problem-solving diversity.” But at the end of our post we are trying out the prospects of making a more affirmative case for female software engineers than that.

Men are buggier? Don’t you think there are more bugs when input from any half of a population is more readily dismissed, especially when the remaining group has a large commonality of experience? Maybe the studies should stop looking at female performance and should instead look at performance by individuals who are stubborn enough to persist despite facing all sorts of bias and discrimination — surely these qualities are more likely correlated with fastidious code than chromosomes.

Female surgeons in England are referred to as ‘Miss’. Always sounds a bit strange.

The riddle about the surgeon doesn’t have the implications for sexism that many people assume because many people have exactly the same confusion if you say a mother and daughter are in a horrible car crash that kills the mother. By both asking “how can this be” and only mentioning one parent the story discourages us from thinking about the other parent. It’s much akin to asking “Who is buried in Grant’s tomb” and using the fact that many people say “I don’t know” to conclude we don’t understand how possessives work. The relevant information is the DIFFERENCE in responses when one says mother and daughter are in a car crash versus father and son!
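The “diversity trumps ability” result quoted from the PNAS abstract above can be illustrated with a toy simulation in the spirit of the Hong–Page model. This is my own simplified reconstruction, not the paper's exact setup: all parameters (ring size, heuristic step ranges, team size) are arbitrary illustrative choices, and a single seed proves nothing by itself.

```python
import itertools
import random

random.seed(0)

# A random landscape of values on a ring of N positions: the "problem".
N = 100
values = [random.uniform(0, 100) for _ in range(N)]

def climb(start, heuristic):
    """Greedy hill-climb on the ring: repeatedly take the first step in
    `heuristic` that improves the value; stop at a local optimum."""
    pos, moved = start, True
    while moved:
        moved = False
        for step in heuristic:
            nxt = (pos + step) % N
            if values[nxt] > values[pos]:
                pos, moved = nxt, True
                break
    return pos

def team_score(team):
    """Relay search: members take turns improving on each other's
    stopping points; average the final value over all starting points."""
    total = 0.0
    for start in range(N):
        pos, stuck, i = start, 0, 0
        while stuck < len(team):
            new = climb(pos, team[i % len(team)])
            stuck = stuck + 1 if new == pos else 0
            pos, i = new, i + 1
        total += values[pos]
    return total / N

# Agent pool: every ordered heuristic of 3 distinct step sizes in 1..8.
pool = [list(p) for p in itertools.permutations(range(1, 9), 3)]
ability = {tuple(h): team_score([h]) for h in pool}   # solo performance
best = sorted(pool, key=lambda h: ability[tuple(h)], reverse=True)[:6]
rand = random.sample(pool, 6)

print("team of best agents:", round(team_score(best), 1))
print("random team:        ", round(team_score(rand), 1))
# Over many seeds the random (more diverse) team typically matches or
# beats the team of individually best agents, mirroring the PNAS claim:
# the best solo agents tend to share heuristics, so they get stuck at
# the same local optima.
```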
It’s worth noting, in a purely numerical way, that only 10% of surgeons are women, and that 14% of the interviewees answered the riddle correctly. The fact that it did not occur to a lot of people that the surgeon could be the mom may be based, subconsciously, on the factual unlikelihood of having a female surgeon. The riddle could probably be flipped with such jobs where women are over represented. Similarly, “he is to doctor as she is to nurse” may be more factually true; both jobs exhibit a lack of balance in representation. All in all, I believe the article failed to actually address its first question, but added to it another part; it becomes “Can you prove that we should strive for diversity, and that doing so should be done through forcefully blinding ourselves from factual, numerical inequalities?” (I’m not saying it is a bad or obviously wrong question; it is worth asking.)

Also, the computer was giving the CORRECT answer for the analogy he is to doctor as she is to blank. Implicit in any analogy question of this form is that the difference between the first elements (i.e. he and she) is salient; as such the relationship one should infer in this case is that of stereotypical job for that gender in medicine. Just as the analogy “Natural Number is to Integer as Mersene (sp) Prime is to __” has the proper answer Prime not Integer. So if one accepts that some analogies can turn on the relation of stereotypical role then the debiasing algorithm does degrade performance. Now maybe you don’t want the algorithm to recognize these sorts of analogies but that’s a different matter.

The evidence about the benefits of racial/gender diversity is mixed and much of the work that claims to show benefit suffers from the same fatal flaw you point out of not considering all possible explanations (e.g. blacks were simply better at the task in that study).
Between this, the HUGE publication bias and the general replication issues in the social sciences make it almost impossible to reach any firm conclusions from bias/diversity research. This is a shame because actual knowledge about the effects of bias and diversity would be quite valuable. However, one should be prepared for results showing that gender diversity is harmful in some contexts. After all a mixed gender team has greater likelihood of gender/sexual based conflict not to mention inefficiencies caused by implicit bias (e.g. ignoring female coder’s contributions/concerns). Even without bias it’s certainly possible that certain endeavors are more successful when performed by single gendered groups as a number of studies suggest in the education context.

It would be interesting to do the same riddle replacing “boy and his dad” by “girl and her dad”, “girl and her mom” and “boy and his mom”. I’m pretty sure the effect just by replacing “boy” with “girl” will be surprising, since you change from an all-male-context to a mixed context.

> “What if we ‘think different’ and embrace gender bias by positing that women approach software in significantly different ways—?—where having such differences is demonstrably helpful.”

If one assumes that plausibly women approach software in ways that are significantly different enough that it substantially affects the product produced then intellectual honesty forces you to take the possibility that women approach software in ways that turn out to be (statistically) inferior. I think this is the fundamental problem with intellectually honest inquiry into these issues. You can’t take gender differences seriously but insist that they only result in positive socially acceptable facts. Differences like this will mean at least in some circumstances the way women tend to approach a problem will be worse and, even though the costs and benefits may average out in the big picture, no one is comfortable openly acknowledging those cases.
Should have read: possibility that women approach software in ways that turn out to be (statistically) inferior *seriously*. Forgot seriously.

Peter, there is a possibility that women in isolation might be “statistically inferior” but the combination may benefit. That’s what we are properly asking when we talk about the benefit of “diversity” not just “more women.” Dick’s point about the black+white murder-mystery study is related to this.

I wasn’t so much considering the issue of women on their own being inferior (I take that to be unlikely once ability/accomplishment is conditioned on) but including women on a team might be net harmful. In particular, all the distractions of gender interaction and the inefficiencies induced by team member gender bias all work against a mixed gender team. Is it POSSIBLE that women add some benefit that is so valuable that even when their contributions are ignored (as in your anecdote) they contribute more than the inefficiency this induces? Yes, it’s possible. However, given that we are pretty confident of many of the harms that result from a mixed gender team but only speculate about the benefits we have to be open to the possibility we will find out that adding women to an existing team hurts productivity (not necessarily through any fault of the women involved).

So what do we do then? If we think we are always morally required to hire the best candidates (regardless of how they will affect overall team performance) this whole discussion is really kinda useless. On the other hand if we are willing to prefer adding team members whose presence adds most to the overall team productivity (even if they are less able than other candidates) what do we do IF we find out that mixed gendered teams perform worse?
— In short I don’t see any coherent rule that would allow us to make use of the fact that adding women might increase productivity that wouldn’t yield unacceptable (not merely socially but IMO morally) results if it turns out that adding them to a previously all male team decreases productivity for reasons beyond their control.

To be clear the problem is that CS is so male dominated. If it wasn’t, a result that mixed gender teams did worse would simply suggest the unproblematic solution of single gendered teams. However, given that CS is male dominated it seems troubling to factor in overall team productivity if that ends up favoring single gender teams.

Sorry, I think my reply here wasn’t super clear. Yes, in the original post I was referring to the fact that one can’t take the idea that women approach software in a sufficiently different way as to allow they offer some extra benefit when added to male dominated teams without taking seriously the possibility that those ways they do approach software are substantially less effective. My complaint there was that it’s not intellectually coherent to only seriously entertain substantial gender differences when it implicates ways we might empower women. Intellectual honesty means that if we really think men and women approach software so much differently we need to be willing to consider outcomes like men and women have incompatible approaches (and thus diversity is harmful) or that women simply have a worse one. I personally believe that substantial differences in either approach or ability are probably pretty minimal once accomplishments are conditioned on but for those who don’t are they willing to accept the potential implications of their view about gender even when it turns out socially unacceptable?
Here’s a relevant paper on diversity — The Dynamics of Vertical and Horizontal Diversity in Organization and Society.

Just wanted to point out that among the “undergraduate students taking business courses at the University of Illinois” there might have been a lot more white than black subjects, so probably the reason why only white and two white-one black groups were used, was to maximize the number of mixed groups.

Scott E. Page, U. Michigan, offers a complexity-based argument in his book “The Difference” and 24 lectures in the Great Courses series. Clearly, computer science has a diversity *management* problem, but is it fixable? reversible? Further diversity arguments go back to Professor Virginia Valian, 20 years ago. When did the first male utter the words “implicit bias”? What’s new?

My own snapshot of the surgeon problem: When I started teaching, a student from an u.g. math class came to my office hours and wanted to know if he could ask a personal question. He was confused by my once saying in class, “When I was a graduate student at Berkeley…,” and wondered how that could be. Dumbfounded, I told the 4 students in my office to figure out what I might have meant by themselves, much like I would have done if they were stumped on a math question. Think…think…think…”Oh! You got your masters there and are getting your PhD here?” “No.” Think…think…think…”You transferred?” “No.” Eventually I had to explain I was an Assistant Professor. Apparently this was a far more challenging brain teaser than I had anticipated. (And before you ask, the guy down the hall who looked like he was 15 never had this experience.) Dana

Thanks for this example. Well, wish you did not have this example to tell. But I guess there is a fundamental issue if math students cannot even imagine all the cases to a simple real world problem.

Here’s another one in the same spirit (I have more): A few months ago someone knocks on my (always open) office door, asking to speak with the professor.
I reply something such as “sure, what about?” and the person looks completely baffled. The rest is: Person: “W…wait, are you the professor?”. Me: “Yes.”. Person takes a step back to read the sign on my door and only then starts speaking.

The Neale et al. study actually doesn’t show a significantly better performance for mixed groups (Table 1 gives r=-0.15 which at this sample size isn’t even significant at 10%). Then again, with this sample size only pretty strong effects could have been detected, so unfortunately overall we don’t learn a lot from this study. Incidentally, I find it quite unfair to criticize the authors for not considering more mixed groups; I’m sure they would have loved to do that (or increase the sample size), but time and money are unfortunately quite limited for these studies.

The issue is not whether female coders are better, or even different, than male ones: that is really missing the point. (Although I agree with Dana that groups of people do better when they’re not all competitive jerks, and when some of them have had to develop fantastic coping and communication skills to get where they are!) The issue is that science and engineering is currently limiting itself, through insufficient investment, stupid cultural behaviors, unforced errors, and general boneheadedness, to about half the population – and when you consider race, class, and global North/South boundaries, far less than that. This is bad for two reasons: 1) everyone, female, male, and other, should feel that science and engineering belongs to them, and that these careers are available to them a priori; and 2) when we look at the scientific challenges facing us, artificially limiting ourselves to a particular subpopulation – chosen for bizarre historical reasons – is something we can’t afford. I question whether any additional “proof” is necessary.

p.s. I appreciate your highlighting of Bolukbasi et al.
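The significance claim about r = −0.15 in the comment above can be checked with the standard t-statistic for a sample correlation, t = r·√(n−2)/√(1−r²). The comment does not give the number of groups, so n = 60 below is a hypothetical sample size for illustration only; for simplicity the p-value uses a normal approximation to the t distribution, which is reasonable at this size.

```python
from math import sqrt
from statistics import NormalDist

def corr_p_value(r: float, n: int) -> float:
    """Two-sided p-value for H0: rho = 0, given sample correlation r
    and sample size n. Uses t = r*sqrt(n-2)/sqrt(1-r^2) with a normal
    approximation to the t distribution."""
    t = r * sqrt(n - 2) / sqrt(1 - r * r)
    return 2 * (1 - NormalDist().cdf(abs(t)))

# r = -0.15 as reported; n = 60 groups is a made-up illustrative size.
p = corr_p_value(-0.15, 60)
print(round(p, 3))   # 0.248: nowhere near significant at the 10% level
```

Consistent with the comment, even a fairly generous sample size leaves r = −0.15 far from significance, while the same test flags strong correlations immediately.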
For the most part, it seems that machine learning, applied to the current corpus of human behavior, tends to amplify our worst tendencies. It’s nice to hear that it might be able to reverse them to some extent.

Here’s a small paradox, would love if someone took a shot at explaining it. 4 systems I am familiar with: 1. nordic country, 2. developed western european country, 3. eastern europe 15y ago, 4. large middle eastern country. Ordering is obviously same in terms of progressiveness on gender issues, certainly in terms of amount of ink used on topic. Ratio of women in cs program, approximate: 1. 15%, 2. 20%, 3. 40%, 4. 55%.

I don’t know the specifics of this example, but the tendency to think “equal education = gender equality” does not always pertain. For example, it is possible that women need more education in order to achieve income/opportunity parity with men of the same age.

I’m not sure how to ask this and not come off as snarky. I’m legitimately curious. Why do many people hoping to reduce bias almost immediately suggest there are real differences we should leverage? If there are real differences, I would not call that bias. In my life experiences, I see no reason to believe coding ability correlates with gender. But if, on average, men really are worse at programming bug free code, why wouldn’t you explicitly _want_ gender inequality, for in that case a selection of good programmers should statistically have more women as a matter of fact, not bias.

Your question assumes that managers engage in objective-performance based evaluations. Unfortunately, most people, including self-declared rationalists, rely quite strongly on their internal biases. I once was a consultant for NASA, hired to provide outside scrutiny of some engineering software. At one point I found a severe mathematical probability flaw in their simulation algorithms, one that was easy to make if you thought naively about the model, but one that was obviously incorrect if you thought in terms of Haar measure. It took me a week to come up with the correct solution. I sat on it at first, as it took me a week to come up with a way to explain what I found that was suitable for the engineers. (I came up with an elementary 2D example, where the naive thinking leads to something that even first year calculus students would understand was wrong.) Had I just barged in with the “you’re wrong, higher math is right” explanation, they would certainly have ignored me. But once they were convinced I knew what I was doing, they didn’t even bother to doublecheck my actual solution. Thinking is only partially about rationality. It’s also partly gut-instincts in action. I made sure the engineers’ gut-instincts were that I knew this material really really well, and based on the 2D example, that they too understood the material, even when they didn’t. Our hosts are similarly interested in similar gut-instinct retraining at the social engineering level, a much more difficult task.

Dear all, here is a related “Test your intuition” question.

On the subject of an evidence-based approach to diversity, I see that Cathy O’Neil (mathbabe) has a book out on how ‘big data’ ends up entrenching discrimination and polarization: https://weaponsofmathdestructionbook.com/ People’s implicit/subconscious biases are bad enough, but we could end up in an even worse situation if a poorly-designed cost-benefit analysis of diversity just leads to prejudiced beliefs getting reinforced with the illusion of statistical rigour.

Indeed, we link her “mathbabe” blog, and this book has been in our background thinking for a few recent posts.
https://posttenuretourettes.wordpress.com/2017/01/19/can-i-have-some-diversity-with-that/

I agree with Dana, that there is in fact plenty of science on this subject, and it doesn’t require looking very far. I’ll provide one more anecdote: the title of your post, “Gender bias: it is worse than you think”, implicitly assumes your readers are male, as most women who have advanced to a graduate-level background in computer science probably have a good intuition for how bad the gender bias really is (as evidenced by the comments here). The question, particularly on a theory blog, shouldn’t be whether it’s “good business” to include women, but whether we believe that any talented, curious person should be shut out of the ultimate quest for scientific knowledge by an exclusionary culture.

> “The question, particularly on a theory blog, shouldn’t be whether it’s ‘good business’ to include women, but whether we believe that any talented, curious person should be shut out of the ultimate quest for scientific knowledge by an exclusionary culture.”

Right on. It’s an imperfect analogy, but I wonder what would have happened if, during the Civil Rights era, well-meaning scientists had studied whether racial diversity would improve decision making because African-Americans think differently. Clearly that would be entirely beside the point, and if anything would feed into racist narratives that there are fundamental differences based on race.

I don’t give a damn if women are better or different at coding. I don’t even care if mixed-gender teams do better than homogeneous teams – or even if they do worse because of the men being jerks, in which case the solution is to kick the sexist men off the team no matter how good they are at coding with their bros. The issue is not maximizing the speed with which we produce new apps – the issue is making women feel welcome in our field. I’m tired of women being made uncomfortable, attacked by trolls on the internet, harassed in tech workplaces, and all the other things that we do to exclude them. If we believe in what we do, it’s absurd to think that it’s ok to only let half the population do it.

[Dick, Ken: I’m responding more to Peter Gerdes here than to you. But I still think that your asking for “proof” is not the point.]
true
true
true
Science meets bias and diversity Deborah Belle is a psychology professor at Boston University (BU) who is interested in gender differences in social behavior. She has reported a shocking result abo…
2024-10-12 00:00:00
2017-03-28 00:00:00
https://rjlipton.com/wp-…017/03/belle.jpg
article
rjlipton.com
Gödel's Lost Letter and P=NP
null
null
13,030,673
https://colan.consulting/blog/user-friendly-encryption-now-drupal-8
User-friendly encryption now in Drupal 8!
null
The problem with most encryption strategies nowadays is that they require third-party software and/or services, require maintenance of additional keys and/or secrets, and provide an awful user experience.

Earlier this year, I started wondering why we couldn't simply encrypt data with pre-existing secrets: the passwords users already have for logging into their Drupal sites. They shouldn't have to deal with public and private keys and other cryptographic details. So I did some research, and was happy to discover that the security model already exists. The folks at ownCloud have not only published it (Data Encryption Model 1.1 and 2.2); they've already implemented it in their product. What's even better is that the product is also written in PHP, like Drupal, and has an open-source license. So the ideas and code can be reused.

Not too long after I made this discovery, the Drupal community was looking for project ideas for Google's Summer of Code (GSoC). So I added mine to the list. There were several students interested in the topic, and they wrote proposals to match. Talha Paracha's excellent proposal was accepted, and he began in earnest. With Adam Bergstein (nerdstein) and me mentoring him, Talha successfully worked through all phases of the project. For details, please see his blog posts.

Now that GSoC 2016 has come to a close, we have a full project release for the Pubkey Encrypt module. It's currently in beta, awaiting community review before we publish a production-ready version. We've included an architecture document, user stories, and usage documentation. There's also a video! Please take the time to experiment with the module, and create tickets for any issues that you find.

At the time of this writing, only field data can be encrypted, via the Field Encryption module. The File Encryption module is still in development, but as soon as it's released, it should work with Pubkey Encrypt as well.
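The post doesn't include code, but the ownCloud-style model it references can be sketched in a few lines: content is encrypted once under a random data key, and that small data key, not the content, is wrapped per user under a key derived from the login password, so a password change only re-wraps the key. A minimal stdlib-only sketch (the XOR "wrapping" is a toy stand-in for a real cipher such as AES, and all names and passwords here are illustrative, not from the module):

```python
import hashlib
import os
import secrets

def derive_kek(password: str, salt: bytes) -> bytes:
    # Key-encryption key derived from the user's existing login password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def xor(a: bytes, b: bytes) -> bytes:
    # Toy stand-in for a real symmetric cipher (use AES-GCM in practice).
    return bytes(x ^ y for x, y in zip(a, b))

# One random data key encrypts the content; it never changes.
data_key = secrets.token_bytes(32)

# Wrap the data key for a user under their password-derived KEK.
salt = os.urandom(16)
wrapped = xor(data_key, derive_kek("hunter2", salt))

# Login later: the same password recovers the same data key.
recovered = xor(wrapped, derive_kek("hunter2", salt))
assert recovered == data_key

# Password change: re-wrap only the 32-byte key, not the encrypted data.
new_salt = os.urandom(16)
rewrapped = xor(data_key, derive_kek("correct horse", new_salt))
assert xor(rewrapped, derive_kek("correct horse", new_salt)) == data_key
```

The design point this illustrates is why users never see key material: the only secret they manage is the password they already have.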
true
true
true
Introduction of user-friendly encryption in Drupal 8, simplifying encryption using users' existing passwords without additional keys or third-party software.
2024-10-12 00:00:00
2016-09-12 00:00:00
https://colan.pro/blog/u…%20focus%20.webp
article
colan.pro
Colan Schwartz on Cloud Architecture, Automation, Security & Privacy
null
null
4,602,338
http://www.madmagazine.com/blog/2012/10/01/apple-maps-wreak-havoc-with-new-yorker-cover
MAD Magazine | Welcome to MAD
null
SUBSCRIBE TO MAD · MAD ON DC UNIVERSE INFINITE · FOLLOW MAD

COMING SOON: MAD MAGAZINE #40, POLITICS 2024. Don't vote until you get...MAD! Includes 20+ pages of NEW CONTENT! On sale October 8.

SUBSCRIBE TO MAD: What, me worry? Never miss an issue! Subscribe and save 45% off.

MAD collected edition: the Spy vs. Spy Omnibus by Antonio Prohias.

FEATURED VIDEO: "Here We Go with Another Ridiculous MAD Fold-In." What sadistic April Fool's trick never leaves 'em laughing? More MAD video: Spy vs. Spy 01, 02, 03; MAD Fold-In 119 and 208.

Latest MAD issues on DC Universe Infinite: MAD Magazine #33 through #39.
true
true
true
Welcome to MAD Magazine!
2024-10-12 00:00:00
2023-05-01 00:00:00
https://static.dc.com/20…mag_logo_4x3.jpg
website
dc.com
DC
null
null
20,515,177
https://www.france24.com/fr/20190723-veto-climatique-pomme-discorde-ceta-canada-france-union-europeenne
The climate veto, a bone of contention over CETA
Aude MAZOUE
# The climate veto, a bone of contention over CETA

The National Assembly on Tuesday narrowly approved the ratification of CETA, the controversial free-trade treaty between the EU and Canada. One of the main points of disagreement concerns the climate veto. An explainer.

What could be more natural than tempers flaring, in the middle of a heatwave, at the center of the National Assembly chamber. Especially when the vote, on Tuesday, July 23, concerns the controversial free-trade treaty between the EU and Canada, better known as CETA. The stakes are high: now that French deputies have approved the bill, the document is set to govern trade rules between Canada and the European Union for years to come.

**>> Read more: Five things to know about the controversial CETA agreement**

Among the points of disagreement over the document of more than 2,300 pages, some parliamentarians fear that meat from animals fed with animal meal banned in France, or raised with antibiotics and growth hormones (or all three), could end up in France despite the European standards that prohibit them. But one point in particular crystallizes passions: the climate veto.

**The climate veto, a sine qua non condition**

The climate veto is a legal provision allowing the European Union and Canada to shield states' government decisions on the environment or climate from legal challenges by companies. To understand the measure, one has to go back to the treaty's "investment" chapter: a special court gives companies the possibility of suing an EU member state, or Canada, if it passes a law or reform that would harm the company's investment projects.

In concrete terms, if France passes a law banning glyphosate on its territory, a Canadian company selling that herbicide to France can sue the French state on the grounds that it is obstructing the Canadian company's investment. To prevent this kind of deadlock, the commission chaired by Katheline Schubert, which drafted the free-trade treaty, had therefore provided for a climate veto, a kind of secret weapon giving a state the possibility of halting proceedings before this special court if the reform undertaken by one of the states concerned the environment.

Former ecology minister Nicolas Hulot, when he was in government, had even made this safeguard a sine qua non condition for ratifying CETA. The problem is that the government has since drafted a new version of the "climate veto" that no longer contains anything binding. In its latest version, it amounts to no more than an "opinion" delivered by a joint committee. In other words, a company selling glyphosate to France, even after France has banned it, will be able to keep trading there despite an unfavourable opinion.

**No certainty**

The measure has NGOs and environmental associations up in arms. "There is no climate veto: it will not be possible to derogate from the rules of international trade, in the name of the principle of non-discrimination of investors, for example," Maxime Combes, spokesperson for Attac, protested on Twitter. The jurist Sabrina Robert-Cuendet, one of the nine experts on the treaty evaluation commission that originated the idea of the climate veto, also made no secret of her disappointment on reading the final text presented to the press on July 9.

"The mechanism chosen does not give us absolute certainty that climate measures will not be challenged under CETA," she explained in Le Monde on July 16. This aspect of the text matters so much that some deputies, such as Nicolas Hulot's former spokesperson Matthieu Orphelin, even asserted a few days before it was presented to Parliament that a proper climate veto could tip the balance on voting day.

The other bone of contention concerns the joint committee that delivers the opinion. For an opinion to be issued, the assembly, made up of representatives of Canada and of the EU, must rule unanimously. Can one really imagine the Canadian members, who make up half of it, disregarding the economic interests of their national companies? One may well wonder.

**En Marche deputies scattered**

At this stage, the list of the committee's representatives is not known: we therefore know nothing of their motivations in terms of economic interests, nor even of their environmental ambitions. Despite criticism from all sides, the French government insists that "products banned from entering the EU will remain banned; CETA changes nothing about that," as Jean-Baptiste Lemoyne, secretary of state to the minister for Europe and foreign affairs, who defended the text before the Assembly, asserted on Twitter. "An agreement like #CETA contains clauses that allow better protection of the #environment and better recognition of standards, labels, AOCs and PGIs. It is an improvement over the #WTO agreements!" the secretary of state continued in another tweet.

On the other side of the Atlantic, Justin Trudeau's Canadian government has always proclaimed its environmental ambitions. But nothing guarantees that the next head of the executive will care about these issues.

Conservative Party leader Andrew Scheer, who could succeed the current head of government, is hardly known for his green streak. For now, no one knows whether the treaty will create tensions between Canada and the European Union, or even whether it will carry the environmental risks its critics decry. One thing is certain, however: CETA, narrowly approved in Parliament, has already caused a stir within the ranks of the majority. Only 229 of the group's 304 Macronist members approved it on Tuesday, something unseen among the En Marche deputies since Emmanuel Macron's election in 2017.

**The week in review** France 24 invites you to look back at the news stories that marked the week.
true
true
true
The National Assembly on Tuesday narrowly approved the ratification of CETA, the controversial free-trade treaty between the EU and Canada. One of the main points of disagreement concerns the climate…
2024-10-12 00:00:00
2019-07-23 00:00:00
https://s.france24.com/m…valdetroie-m.jpg
article
france24.com
FRANCE 24
null
null
30,439,133
https://www.americastestkitchen.com/articles/3463-how-to-create-a-diy-family-cookbook
How to Create a DIY Family Cookbook | America's Test Kitchen
Sawyer Phillips
# How to Create a DIY Family Cookbook

*Published June 16, 2021.*

Coconut cake, savory jalapeño cornbread, and rosemary scones. Even as I write this, I hear the crack of my mother opening a fresh coconut, taste salty butter melting between warm pieces of cornbread, and smell the woodsy fragrance of rosemary wafting through the kitchen. I've watched my mother make these recipes many times, and I want to carry on the tradition.

But when it comes to keeping recipes and keepsakes in one place, we're not always the most organized family. It can turn into a guessing game. *Did I ever write that down? Which drawer is it in? Did it fall behind the fridge, never to be seen again?* These recipes aren't just random ingredients thrown together; they're heirlooms and ways of life. I wanted to preserve this special part of our family history in one place by making a cookbook that we could pass down to future generations. And although it's perfectly practical to use Google Docs and other digital tools, I wanted our family cookbook to be something we could hold in our hands.

Knowing that every family and culture passes down culinary traditions in their own ways, I asked the ATK Reviews team to share some inspiration. Here's how to get started creating your own family cookbook.

## Step 1. Identify the Main Recipes

Deciding what to include can take some time, but it will make the cookbook easier to put together in the end. You also don't have to put all of the onus on yourself. If the elders of the family are still around, ask what they think needs to be included. What's the one dish that you always see at a family gathering?

*Executive Editor Hannah Crowley shared a photo of her family's favorite salsa recipe: "My mom used to give out jars of this salsa for Christmas every year."*

## Step 2. Add Family Anecdotes and History

Building a recipe book isn't just about cooking; it's about preserving family memories.
Reach out to the people who you want to share the cookbook with and ask for "testimonials" about what it feels like to eat their favorite peach cobbler every year at the Fourth of July barbecue. Ask your cousin about the first time they baked Christmas cookies with your grandmother. Is there a funny saying or story that a loved one tells every year while they make the marinara? These testimonials are as important as the recipes.

*Senior Editor Miye Bromberg says she has distinct childhood memories of her grandma making some of the recipes preserved in her family's recipe box: "Fully 90% of my memories of my grandma are of her puttering around the kitchen in her housecoat or apron . . ."*

## Step 3. Organize Everything into Sections

Dividing recipes into categories makes your cookbook easier to navigate. Choose how you want to arrange everything. It can be by season, holiday, main ingredient, or person.

**After you categorize your recipes, add a table of contents or a glossary to define the funny sayings or unique ways your family refers to measurements. For example, when my mom adds a little "zhuzh" of salt, that's equivalent to ¼ teaspoon!**

*My mom took this picture of (from left) my dad, my sister, me, and my brother during one of our Kwanzaa celebrations where we ate black-eyed peas and collard greens.*

*Art Director Marissa Angelone says Easter Bread is a family tradition: "[It's] a sweetened, yeasted dough with a similar texture to brioche. I have many memories of baking it at my Meme's house with my mom."*

## Step 4. Get Creative with Decorations

Add photos of your family to the pages and their names next to their signature recipes. Include handwritten notes or newspaper clippings. Throw in some clever sayings or quotes to further personalize it. Does someone in your family draw well? Ask them to illustrate some pages. Devote an entire page to a collage of family photos.
*Clockwise from upper left: Associate Editor Carolyn Grillo's note to her mom on a Christmas cookie recipe; a clever quip from Assistant Editor Grace Kelly's boyfriend's family cookbook; my mother, my sister, and I behind the counter of our family bakery; a candid picture of me and my siblings after we licked a bowl clean of chocolate cake batter.*

## Step 5. Give It a Name

Naming things adds value and shows love. A name will also allow your family to easily refer back to it. My family came up with *Cookin' with the Phillips Folks.* If your family is competitive, make coming up with a name a game. The losers have to make the winner's favorite recipe.

*A photo of Grace's family recipe book, the Kelly & Sokoloski Comfort Food Cookbook, and her boyfriend's family book, A Rhode Island Rule Book.*

## Step 6. Send It Away. Watch It Grow.

One of the best parts about creating something is sharing it with other people. A family cookbook can make a beautiful wedding or anniversary gift. If you don't want to give it away entirely, take turns adding entries and photos. I'm not saying it has to be *Sisterhood of the Traveling Pants*-style, but keeping the project collaborative means that more people become invested in its success.

*Deputy Editor Kate Shannon recalls when she received her family cookbook: "My grandmother loved to give everyone in the family matching presents. In 2006, she presented each of us with a binder of family recipes. I don't make any of them very often—but I love flipping through them. Each one has a little bit of family history that always surprises me or makes me laugh."*

To see the power of a family cookbook, check out this clip of *America's Test Kitchen* cast member Adam Ried reminiscing about his mother's cookbook.
true
true
true
Here are some quick tips and decorating ideas on how to create a DIY family cookbook or recipe book to give as a gift or to pass down for generations.
2024-10-12 00:00:00
2021-06-16 00:00:00
https://res.cloudinary.c…es-183239642.jpg
barista_article
americastestkitchen.com
America's Test Kitchen
null
null
30,411,812
https://www.wired.com/story/flexible-hours-mean-more-work-especially-women/
'Flexible Hours' Often Mean More Work—Especially for Women
Caitlin Harrington
Over the past couple of years, workers have gotten a taste for flexible work, and they're hungry for more. Multiple recent surveys show that many workers rank flexibility among their top priorities, topping even pay. But University of Kent sociologist Heejung Chung says those who chase flexibility—defined as some control over one's time and place of work—might be setting themselves up for trouble. In her book *The Flexibility Paradox*, out March 4, Chung compiles her own research and that of hundreds of scholars to show that when workers are given flexibility, they generally work harder and longer—and they think more about work during non-work time. One analysis of 32,000 German workers found that those with control over their schedules logged four additional hours of overtime a week compared with people on fixed schedules. Another study using the same data showed that homeworking mothers in particular did more unpaid work, spending three more hours on childcare than their office-bound counterparts.

WIRED spoke to Chung about the reasons behind the phenomenon, how gender norms and parenting status can magnify the problem, possible solutions, and why, despite her findings, she supports flexible work. This interview has been edited for clarity and brevity.

**WIRED: You started writing the book before the pandemic. I don't imagine you could have predicted how timely it would become. What was your initial impetus?**

**Heejung Chung:** In management literature, flexible work has been hailed as this amazing thing that's really great for work-life balance and gender equality. There's been a lot of government legislation to try to promote this. But we saw that people who had a huge amount of autonomy over where and when they work were not living in this promised land of better work-life balance and greater leisure time. So I tried to take a more critical look.
I was able to observe in a much more systematic way, with large-scale data, that yes, flexible working can actually lead workers to work longer and harder.

**Flexibility is about giving workers a choice over when and where they work. But you write about how those choices aren't as free as one might think, given the social contexts in which people are making them. Do you think people tend to overlook the broader forces influencing their behavior?**

The thing is, many of us are living in societies with high levels of competition, great levels of insecurity, and a cultural belief that work should be your passion and that only through being very busy at work are you a worthwhile individual contributing to society. With the demise of the welfare state, it's only through work that you can really gain most benefits. So the intensification of work actually comes from these embedded ideas about how one should live. There's a theory called passion exploitation, where a passion for work enables us to exploit ourselves, but also for others to exploit us. If you look at the data, you see those attitudes about passion across a lot of occupations, and across countries as well. That's when it becomes a problem. It's not only a select few that have this issue. It's a much wider phenomenon.

**You outline several theories behind the flexibility paradox, including what you call self exploitation. What does that mean?**

Our labor markets were built for our fathers in the 1950s, when it was assumed that you have a supporting spouse who can do all the reproductive work and all you need to do is work. There's an assumption that the person who can do that is productive, committed, and motivated, despite the fact that that's actually bullshit. There's a negative connotation with flexible working, especially for those who have caring responsibilities. These tend more often to be mothers because of gender norms around whose responsibility it is to care and do housework.
When that happens, people feel they have to work harder and longer to compensate for the stigmatized view. There are employers who are putting surveillance cameras on remote workers, which is completely ludicrous. They don't need employers to do it. Workers are surveilling themselves.

**Do you think the prevalence of flexible work during the pandemic has done anything to reverse the flexibility stigma?**

If the majority of people work from home regularly, then some of those gender patterns might change. Still, I think there are unconscious biases against vulnerable workers—working mothers but also minority or disabled workers—where colleagues and managers will underestimate their capacities and flexible working may trigger biases. People are now slowly going back into the office, but those going back are majority white, male, heterosexual. Then you'll see a two-tiered market where people working from home are going to be penalized, and those in the office are going to get the promotions, the better projects, and be perceived more favorably by managers. So the way in which the hybrid workforce is implemented is really crucial. Because the pandemic has changed our norms about where work should be, but we still haven't tackled biases against certain workers. And it's not helped by Goldman Sachs' CEO blurting out things like that working from home is an aberration.

**You write that some of these external forces get reframed as personal choice through terms like workaholic. Do you think people who describe themselves as workaholics, which connotes addiction, are actually responding in a rational way to labor market pressures?**

I think the term workaholic is not very helpful because the problem isn't with the worker. The problem is with societal pressures and external factors. The US, the UK, Korea, and Japan are workaholic countries. But these are not inevitable facts of human societies.
The term workaholic is putting the onus on the individual as if it's the individual's fault or an illness or choice. There may be some of that, but especially in certain countries, it goes beyond the individual. This is a societal illness. Another societal pattern is intensive parenting, which relates again to market insecurities. Because of gender norms, women do a large chunk of the housework and childcare in heterosexual relationships. So they aren't able to exploit themselves as much in the labor market, but they are expected to—and do—exploit themselves at home. This means that working from home is used to expand childcare or housework hours. Mothers especially are considered the architects of children's futures, where if you don't invest in your children's lives through reading and talking and setting up the right playdates and extracurricular activities, you are not preparing your children for their future labor market prospects. It's reached the point that now, full-time working mothers spend more involved time with their children than housewives in the 1960s. Flexible working can help to reduce gender inequality by enabling mothers to stay in the labor market. But it can also reinforce traditional gender roles because it comes with the expectation that mothers will be able to do both housework and childcare while working from home. Whereas for fathers, because of gender normative views about them being breadwinners, working from home is expected to be a protected time and space where they shut themselves off and only focus on work.

**One image from the book stuck with me: When mothers work from home, they tend to work in communal spaces, so they're spread out at the dining room table, accessible to the children, whereas fathers are shut away in private offices.**

If you look at the time-use diaries of mothers and fathers, mothers' working hours are tainted, especially during the pandemic. But fathers are relatively protected because of their roles.
And children won't expect fathers to be available when working from home, whereas they expect mothers to be mothers first. This is why employers will stigmatize mothers homeworking, and they might not for fathers.

**I can imagine a progressive woman reading this and thinking, "That's not how it is in my house. I'm the breadwinner, and my husband does the laundry." In what ways could she still be affected by the gendered flexibility paradox?**

Obviously, there's some variation. But you'll probably find that women, when given the flexibility, will try to squeeze in as much housework and childcare, and weave in as many activities as possible, whereas fathers either won't, or use the excuse that their employers won't let them. A lot of employers won't let mothers do that either. But mothers have no other option, so they might do it behind their bosses' backs or they have to change jobs or drop out of the labor market altogether.

**You write that flexible work frees up "mothers' labor for free" and "relieves governments of the need for a social response." Was flexible work a consolation prize for working mothers who were demanding more government support?**

It's not a consolation prize, per se. But if you really want to push a lot of women into the labor market, you need to free them up because there's only 24 hours a day. In Sweden and Denmark, children at age 1 have access to high-quality, affordable daycare. But childcare in the US and UK is extortionately expensive. If you give women the opportunities to work from home as well as flextime, we see that mothers can maintain their labor market position after having children, and if you don't give them that, about half of mothers will drop out, especially if they don't have very-high-quality, cheap childcare.
**Are there countries you think are models for healthy approaches to flexible work?**

In the northern European countries, where gender egalitarian norms and work-life balance norms are more prevalent, and family-friendly benefits are seen as the norm, you don't see the flexibility paradox or the flexibility stigma as much. Workers have strong negotiation power and a very secure social security net, which will provide up to 80 percent of your income when you're unemployed. These are contexts that help shape people's attitudes toward the centrality of work.

**You write about flexible work leading not just to overtime, but to blurring of boundaries between work and life, which can lead to what you call cognitive spillover, where people constantly think about work. What are some of the new laws that start to address this?**

I think the right to disconnect is really crucial. This isn't necessarily just about managers. If people respond to email just before bed or just after waking up, everybody starts unconsciously marching toward that always-on, always-available kind of culture. The right to disconnect helps workers not be exploited by employers, but also helps stop that culture from developing. Another thing is just general protection of workers. One reason why we are worried about work is there's a high level of insecurity and lack of bargaining power. The European Commission introduced a series of policies barring discrimination against people who take up flexible working arrangements for parental needs. But there's also general protection, like making sure that workers are secure through better collective bargaining protections, and better legal protection in terms of job security.

**You don't dispense a lot of self-help-y advice, since there are so many books on the subject. But are there any tips that you picked up that worked especially well for you?**

Having allocated time for work can be much more productive.
Rather than thinking, I could work until late because I'm at home, really intentionally say no, I only have until 4:30. Those who are misusing flexible work so that your work blurs into the evening, you've got to question yourself: Is this really productive? For women, because of the way we are socialized, you are going to feel like you need to do the housework and childcare while you're working from home. And I think you have to intentionally fight it. But also get your partner, if you have one, to try to use flexibility to enable a better work-life balance for both of you. Valentine's Day is coming up. Men, don't get your women flowers or lingerie. Get your manager to let you work some days from home, and use that flexibility to be a more involved father if you have children, be a more involved human if you have pets, do more housework. You'll find that it enhances your relationships, enhances your well-being. This is all empirical data-based evidence here.

**Despite your findings, you're in favor of flexible work. Why is that?**

Flexible working is two things. One, it's an equal opportunity maker. Because it provides people, especially those with responsibilities outside of work, the ability to better focus on what is important at work rather than the performance of being in the office. It can also really help democratize voices. In Zoom, we can't talk over each other. There's a raise hand option you can use so that certain voices are not dominating. But flexible working is also an amplifier. If workers feel that they have to work all the time, flexible work will amplify that. If we live in a society where the division of paid and unpaid work is unequal and the assumptions behind men's and women's work commitments are skewed, it will amplify that.
So flexible working is a great tool, but we need to change a lot of our normative views around work, work-life balance, and gender roles because otherwise it will keep amplifying a lot of the problems we have in our society.
true
true
true
Workers want the freedom to set their own hours. But sociologist Heejung Chung says social expectations push employees to expand the work day.
2024-10-12 00:00:00
2022-02-13 00:00:00
https://media.wired.com/…ks-767981719.jpg
article
wired.com
WIRED
null
null
10,533,269
http://www.shiftfrance.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,276,537
https://arxiv.org/abs/1810.05080
Person Retrieval in Surveillance Video using Height, Color and Gender
Galiyawala; Hiren; Shah; Kenil; Gajjar; Vandit; Raval; Mehul S
# Computer Science > Computer Vision and Pattern Recognition [Submitted on 24 Sep 2018] # Title: Person Retrieval in Surveillance Video using Height, Color and Gender Abstract: A person is commonly described by attributes like height, build, cloth color, cloth type, and gender. Such attributes are known as soft biometrics. They bridge the semantic gap between human description and person retrieval in surveillance video. The paper proposes a deep learning-based linear filtering approach for person retrieval using height, cloth color, and gender. The proposed approach uses Mask R-CNN for pixel-wise person segmentation. It removes background clutter and provides precise boundary around the person. Color and gender models are fine-tuned using AlexNet and the algorithm is tested on SoftBioSearch dataset. It achieves good accuracy for person retrieval using the semantic query in challenging conditions.
true
true
true
A person is commonly described by attributes like height, build, cloth color, cloth type, and gender. Such attributes are known as soft biometrics. They bridge the semantic gap between human description and person retrieval in surveillance video. The paper proposes a deep learning-based linear filtering approach for person retrieval using height, cloth color, and gender. The proposed approach uses Mask R-CNN for pixel-wise person segmentation. It removes background clutter and provides precise boundary around the person. Color and gender models are fine-tuned using AlexNet and the algorithm is tested on SoftBioSearch dataset. It achieves good accuracy for person retrieval using the semantic query in challenging conditions.
2024-10-12 00:00:00
2018-09-24 00:00:00
/static/browse/0.3.4/images/arxiv-logo-fb.png
website
arxiv.org
arXiv.org
null
null
7,030,740
http://codefury.net/2014/01/7-reasons-every-developer-should-freelance-full-time/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
14,540,995
https://github.com/FrederickGeek8/Sprint
GitHub - FrederickGeek8/Sprint: 🏃 A Windows-based spiritual successor to CodeRunner. 🏃
FrederickGeek8
Sprint! is an (independent) spiritual successor to the widely popular CodeRunner application for Mac. Its goal is to bring many of the features that made CodeRunner such a great application to the Windows platform. **Download the latest prebuilt here.** Sprint! is a complete open-source application built upon Electron, and thus is hackable to its core. Missing a feature you want, or are the syntax themes lacking? Every inch of this editor is open to modification. Sprint! serves as the perfect prototyping application for numerous supported languages. No need to worry about saving your code or its compilation; all of it is built into the editor. Quickly get coding without needing to write any main functions or setup code. More features are planned for the future. If you have an idea or hack that you would love to see in everybody's editor, please submit a pull request or issue! - C - C# - C++ - Go - Java - Node.js - PHP - Python - Python 3 Sprint! can be run using `npm start` and can be built for distribution using `npm run dist` .
true
true
true
🏃 A Windows-based spiritual successor to CodeRunner. 🏃 - FrederickGeek8/Sprint
2024-10-12 00:00:00
2017-02-27 00:00:00
https://opengraph.githubassets.com/2764b0e9b97d209ac179ffdd2c8cea3f085bd180faaa75b668858c06bd51cb2e/FrederickGeek8/Sprint
object
github.com
GitHub
null
null
22,351,096
https://www.theglobeandmail.com/life/health-and-fitness/article-does-owning-a-car-hurt-your-health
Does owning a car hurt your health?
Alex Hutchinson
When researchers in New Zealand published a study last month showing that bike commuters live longer than car commuters, the reactions were pretty muted. It was a good study, drawing on almost five million responses from census data and correcting for differences in age, income, education, neighbourhood and other potential confounding factors. But despite these efforts, it was hard to shake off the nagging feeling that people who choose to bike to work might be different from those who drive in difficult-to-quantify ways that also influence their health through other pathways. To really establish that driving hurts your health, in other words, you need a randomized trial. But who’s going to assign long-term car ownership on the basis of a coin flip? The city of Beijing, it turns out. Because of mounting congestion, Beijing has limited the number of new car permits it issues to 240,000 a year since 2011. Those permits are issued in a monthly lottery with more than 50 losers for every winner – and that, as researchers from the University of California Berkeley, Renmin University in China and the Beijing Transport Institute recently reported in the British Medical Journal, provides an elegant natural experiment on the health effects of car ownership. Led by Berkeley economist Michael Anderson, the researchers followed 180 permit winners and 757 losers for roughly five years, and looked for differences caused by the acquisition of a car. “The randomization of the lottery is what gives us confidence,” Anderson explained in a statement. “We know that the winners should be comparable to the losers on all attributes other than car ownership.” Not surprisingly, the winners took 2.9 fewer rides a week on Beijing’s dense public-transit network, representing a 45-per-cent drop in usage. They also spent 24.2 fewer minutes each day walking or biking than the non-winners, a 54-per-cent drop. You’d expect these behaviour changes to have health impacts. 
Over all, the winners gained an average of just more than two kilograms, a difference that was not statistically significant. But the effects were more obvious when looking only at winners aged 50 or older: They gained an average of 10.3 kilograms, a statistically significant and worrisome increase. The results are consistent with a stack of previous studies that have found that car owners tend to weigh more than non-owners. The New Zealand census data, which were published in the International Journal of Epidemiology, make an even stronger case, since they link commuting mode to the most unambiguous health indicator possible: death. But the randomized assignment of cars in Anderson’s Beijing cohort finally shows that it’s the car itself, not simply being the type of person who wants a car, that influences behaviour and ultimately health. That’s not surprising – and it’s not unique to cars. “In most cases, comfort – like most things – involves trade-offs,” says Daniel Lieberman, an evolutionary anthropologist at Harvard University. “So they aren’t good or bad; they simply have costs and benefits.” Natural selection has left us wired to seek ways of saving energy that we can devote instead to reproduction, regardless of whether those choices make us healthier, Lieberman says, so it’s natural to default to the elevator instead of the stairs, and to use your car if you’ve got one. Overcoming these habits requires deliberate effort – or engineering your environment to make the alternatives easier to choose. To Anderson, that suggests that “demand management” schemes for cars in cities, such as congestion fees, tolls and pedestrian-only zones, make sense. “The public-health impacts of automobile travel are really important,” he says. 
“While cars have saved trillions of hours of travel time globally, they’ve also likely shortened lifespans by trillions of hours in aggregate via traffic accidents, pollution and obesity-related disease.” *Alex Hutchinson is the author of Endure: Mind, Body, and the Curiously Elastic Limits of Human Performance. Follow him on Twitter **@sweatscience**.*
true
true
true
When researchers in New Zealand published a study last month showing that bike commuters live longer than car commuters, the reactions were pretty muted
2024-10-12 00:00:00
2020-02-17 00:00:00
https://www.theglobeandm…ty=80&smart=true
article
theglobeandmail.com
The Globe and Mail
null
null
16,919,720
http://fortune.com/longform/bitcoin-mt-gox-hack-karpeles/
Mt. Gox and the Surprising Redemption of Bitcoin’s Biggest Villain
Jen Wieczner
# Mt. Gox and the Surprising Redemption of Bitcoin’s Biggest Villain The moment that would change the history of Mt. Gox came without so much as a beep. Mark Karpelès, the CEO of what until recently had been the world’s biggest Bitcoin exchange, was finally alone, save for his tabby cat, in his palatial penthouse with a panoramic view of Tokyo. It was the evening of March 7, 2014, and Karpelès had barely slept in the week since Mt. Gox had sought bankruptcy protection, announcing that 850,000 of its Bitcoins, worth some $473 million at the time—and representing 7% of all Bitcoins then in existence—had somehow disappeared. With protesters and camera crews swarming in front of Mt. Gox’s office and the price of Bitcoin in free fall, the usually unflappable Frenchman had been confined to a self-imposed house arrest, subsisting on the buttery pastries he liked to bake and reading the hate mail that flooded in from all corners of the Internet—most of it accusing him of stealing the money himself. Today the Mt. Gox hack remains the worst disaster in Bitcoin’s short history. It wasn’t until his lawyers had gone home for the day that Karpelès could retreat to his computer, and that’s when he noticed the shocking number on his screen. Following his company’s collapse, he’d spent days methodically double-checking Mt. Gox’s old digital wallets, where the secret alphanumeric keys for accessing Bitcoins are stored. One after another—a dozen so far—the wallets had come up empty. But this time, when the blockchain-scanning program finished running after six hours, it had silently served up an unexpected result: He’d found 200,000 Bitcoins, stashed away in an archived file in the cloud—apparently forgotten and untouched for three years. In a series of conversations with *Fortune*, Karpelès shared for the first time the full details of what he says really happened in the final days of Mt. Gox—including his account of how he stumbled on the 200,000 Bitcoins. 
The surprise discovery would turn out to be, to this day, the only hope Mt. Gox customers have of getting their money back. It’s been proved that the other 650,000 missing Bitcoins were stolen—we now know, by various hackers. But Karpelès continues to be one of the most infamous figures in cryptocurrency. And his legal fate is uncertain, even as new evidence has emerged that largely exonerates him. Ironically, today Karpelès doesn’t view the retrieval of the 200,000 Bitcoins as a lucky break. They’ve become such a subject of contention, in fact, that he wonders whether it might have been better if they’d remained lost. “At the time, I felt finding these was a good thing for everyone,” recalls Karpelès, now 32, his French accent still strong after nearly nine years in Japan. “But now this is also the main reason why we are stuck fighting.” To many, the belated revelation seemed too good to be true—making the unemotional programmer-turned-mogul look even guiltier. Was he just coughing up his go-bag in an attempt to wiggle out of trouble? Soon, they had even more reason to suspect him: Leaked trading records suggested that what could only be an internal Mt. Gox account—widely known today as the “Willy bot”—was artificially inflating its account balance and using the money to buy Bitcoins. When Mt. Gox ran low on Bitcoins, Willy helped make up the shortfall. Sometimes its trades went the other way, selling borrowed Bitcoins to generate cash. Critics speculate that it was a fraudulent, if failed, exercise to keep Mt. Gox afloat. That suspicious activity by the Willy bot led to Karpelès’s arrest in August 2015 on charges of manipulating electronic data; he admitted in court last summer to running what he called the “obligation exchange” but disputes doing anything illegal. After spending almost a year in jail, Karpelès is currently on trial in Tokyo, facing criminal allegations such as embezzlement and breach of trust, all unrelated to the missing Bitcoins. 
But it was an unforeseen twist that today is causing Karpelès the greatest angst. Between the time Mt. Gox shut down and when it entered liquidation in April 2014, the price of Bitcoin had plummeted more than 20% to $483. It would be over two and a half years before Bitcoin would regain its previous high—long enough that many Mt. Gox victims didn’t even bother filing a claim for what they considered an insignificant sum. Then early last year, Bitcoin finally broke its old record. By late May, it was trading at nearly $2,200, making Mt. Gox’s remaining Bitcoins—202,185 to be exact—worth more than everything it owed in claims. When the Bitcoin price peaked at $20,000 in December, the value of Mt. Gox’s assets (by then including Bitcoin derivatives such as Bitcoin Cash) ballooned to $4.4 billion—nearly 10 times the amount Mt. Gox said it lost in the first place. “The fact that you have a bankruptcy where the only asset that it owns goes up by 5,000%, that’s pretty unprecedented,” says Daniel Kelman, a lawyer and Mt. Gox creditor who spent a year in Tokyo working on the case. After months studying Japan’s bankruptcy code while in solitary confinement, Karpelès knew there was a wrinkle: Under the law, most of that excess would return to shareholders of Mt. Gox, of which he held 88%. At current prices, the windfall would make him a billionaire. It would also mean an interminable nightmare of lawsuits and threats that Karpelès—who is also in personal bankruptcy—is desperate to avoid. He says he’d happily give the money back if it came to him, but the estimated 60% tax triggered in the process would be catastrophic. “I never expected to get anything out of this,” Karpelès tells me when we meet in Tokyo in March. “It would bring more trouble than anything.” We’re on the second floor of a Japanese café, in a stuffy meeting room that Karpelès says is not much bigger than his jail cell. 
Deprived of a computer behind bars, he passed time by measuring the room using the length of his notebook. (After his release, Karpelès sent friends a chart of the 70 pounds he’d lost while detained.) It’s the first day in Tokyo that finally feels like spring, cherry blossoms in bloom, but he has holed up here in the café because it’s roughly equidistant from the offices of his various lawyers, as well as the bankruptcy trustee, whom he meets with regularly out of a sense of “duty” to his former customers. He’s been so busy, he says, he didn’t have time to shave that morning. Karpelès took control of Mt. Gox—the name is an acronym for *Magic: The Gathering* Online eXchange, after the trading card game that inspired the original site—in 2011 from founder Jed McCaleb. Employees don’t remember Karpelès ever seeming fazed about anything: He took meetings from a vibrating massage chair and churned out combs using a 3D printer he’d bought for the office. His hallmark reply to questions: “Should be fine.” But he’s lately developed a sense of gallows humor uncharacteristic of his Mt. Gox days. Even if he wanted to buy Bitcoin today, he doubts he could find an exchange that would take his money, he laughs, and notes that it’s been a few months since he’s received any death threats—“a new record.” He turns serious, though, when he recounts the sleepless nights in February 2014 when he says he first discovered that all of Mt. Gox’s Bitcoins were missing. “I think this really is the worst experience for anyone to have in life,” he says. Still, he’s not sure he could have done the job better. “If I knew at the time what I know today, I would have done things differently, of course,” he says with a practiced tone. “But based on the information I had at the time, and the situation at the time, I still think that I’ve done the best I could do with what I had.” The question of what Karpelès knew, and when, though, remains more of a mystery than even who stole the coins. 
Bitcoin’s public ledger, or blockchain, allows anyone to trace the path of transactions, showing the wallets where Mt. Gox’s Bitcoins went. But the same blockchain analysis, multiple experts have confirmed, has also revealed an unsettling fact: By mid-2013, Mt. Gox had already lost all its Bitcoins—eight months before it admitted so publicly. The timing of this insolvency, analysis shows, coincided with the Willy bot kicking into high gear—perhaps providing a hint as to Karpelès’s true motivations. “I feel that this is a reaction to this revelation that okay, all the money is gone,” says Michael Gronager, CEO of Chainalysis, which was hired by the Mt. Gox bankruptcy trustee to investigate the Bitcoins’ disappearance. Yet it’s also why he doesn’t believe Karpelès was planning to run away with the 200,000 Bitcoins. “I think that had he found them before he went bankrupt, he would never have gone bankrupt,” says Gronager. Rather, he says, Karpelès would have used the hoard to cover his losses. **When Mt. Gox froze Bitcoin** withdrawals in 2014, a customer named Kolin Burges hopped a flight from London to Tokyo. For more than two weeks, until Mt. Gox declared bankruptcy, he kept vigil outside the exchange’s headquarters, holding a sign reading, “MTGOX WHERE IS OUR MONEY?” Other protesters soon joined him, demonstrating the frustration of Mt. Gox customers worldwide. Kim Nilsson was just as vexed, but standing in the snow wasn’t his style. A modest Swedish software engineer with a goatee and a quiet voice, Nilsson, who also owned Bitcoins at Mt. Gox, had never before worked on blockchain technology. But he had a reputation for getting to the bottom of the toughest software bugs; in his off-time, he’d been known to beat all the levels of *Super Mario Bros. 2* in an afternoon sitting. And that’s how he approached Mt. Gox: “It was basically just the world’s biggest puzzle at the time—like whoever solves this, imagine the recognition.” He teamed up with some other Mt. 
Gox customers to launch WizSec, a blockchain security firm dedicated to cracking the case. But while the company quickly dissolved, Nilsson stayed on the case in secret, teaching himself blockchain analysis and painstakingly tracing the money stolen from Mt. Gox. Although Nilsson started off investigating Karpelès’s role in the theft, he soon realized the CEO was just as eager as he was to know what happened. At a time when Karpelès needed friends most, the WizSec team scored an invite to his apartment by offering to bring the Frenchman the ingredients he needed to bake his famous apple quiche. Soon, Karpelès was feeding Nilsson internal Mt. Gox data that could help solve the case. “I wish I had stolen the money, because then I could just give it back,” Karpelès told them at the time. Over the next four years, Nilsson estimates he spent a year-and-a-half’s worth of full-time hours pursuing the Mt. Gox hackers. He’s never been paid for his work; his 12.7 Bitcoin claim at Mt. Gox makes him one of its smallest creditors. To J. Maurice, who helped found WizSec but left the company early on and was not involved in the investigation, Nilsson’s effort epitomizes the virtues of Bitcoin—a decentralized system free of government control, which relies instead on individual users to sustain it. “Kim is humble, he doesn’t brag, he doesn’t even want to get rich. He’s just working hard on something for years as his passion project,” Maurice says. “That’s what Bitcoin is.” By early 2016, Nilsson had a suspect. As he tracked the stolen funds, he saw that, of the 650,000 Bitcoins reported stolen from Mt. Gox, 630,000 had gone straight into wallets controlled by the same person. That person also had an account at Mt. Gox, associated with the username WME. Then Nilsson stumbled across an old post in an online Bitcoin forum in which someone with the handle WME had thrown a tantrum, complaining that another cryptocurrency exchange had frozen his funds. 
“Give [me] my CLEAN MONEY!” read the post. In the process, WME dropped clues that he owned some of the Bitcoin wallets in question. But the big break came when the same user posted a letter from his lawyer, his first and last name visible for the whole world to see. Nilsson, as he routinely did with his findings, dashed off an email to Gary Alford, a special agent with the IRS in New York who has helped catch cybercriminals. Then one scorching day last July, police stormed a beach in Greece to arrest a Russian citizen vacationing with his family. U.S. federal prosecutors charged Alexander Vinnik, a 38-year-old IT specialist, with laundering 530,000 of the stolen Mt. Gox Bitcoins through his WME wallets and other accounts. They also accused him of helping to run the exchange BTC-e, whose primary purpose was allegedly to launder money. It is plausible, investigators say, that BTC-e was founded specifically to launder funds stolen from Mt. Gox. Blockchain analysis shows that the hack that devastated Mt. Gox began in autumn 2011, around the time BTC-e started up. Keys to Mt. Gox’s “hot wallet”—its online Bitcoin repository—were stolen and copied, compromising the exchange’s deposit addresses. So for the next two years, in nine out of 10 instances, coins were being stolen as soon as they came in, says Chainalysis’ Gronager, who is also a creditor: “It meant that you had a hole in the bottom of the well, and someone was just draining money.” Karpelès claims he never noticed because the hackers stole small amounts at a time, and the balances generally seemed to move upward. “Bitcoin didn’t exactly decrease,” he says. “It’s just that they didn’t increase as much as they should.” Nilsson, who believes he has convincingly linked Vinnik to at least 100,000 more Mt. Gox Bitcoins than the feds allege, still doesn’t know whether he helped the government’s investigation or simply confirmed its conclusions. 
With Vinnik fighting extradition from Greece and five outstanding defendants whose names remain redacted in the U.S. indictment, the IRS won’t comment on the “active and ongoing” investigation. But Kathryn Haun, a former federal prosecutor who signed off on the indictment, says Vinnik’s use of Bitcoin helps clearly connect him to the crime: “At first blush what seemed unsolvable turned out to be traceable through the use of digital currency.” For Karpelès, Vinnik’s arrest reinforced a long-held theory: that Russian Bitcoin exchange administrators were behind a series of denial-of-service and other cyberattacks that hit Mt. Gox in 2011. Says Karpelès, “What he did, Mt. Gox is a victim of this, which means that all creditors are victims of this, and I am too a victim of this.” Vinnik, who has denied the charges, has not been charged with stealing from Mt. Gox. But the magnitude and duration of his involvement points to some familiarity with the thieves whose profits he was allegedly laundering: “I assume at least he knows where to send the check,” says Nilsson. Still, there’s an ironic punch line to the case: Because the stolen Bitcoins were sold right away, allegedly by Vinnik and long before Mt. Gox disclosed the hack, victims lost much more, in dollar value, than the hackers ever made—which, according to Chainalysis, was only about $20 million. And as soon as the Bitcoins were converted to cash, the blockchain trail was broken. That means that even if authorities seize Bitcoins from the suspects, there won’t be anything to prove they’re from Mt. Gox. Sean Hays, a creditor in Arizona who says his 338 Bitcoin claim would be “life-changing,” adds, “I’ll be glad to have part of it back, but I think there will always be the hunt for where’s the rest?” But for Burges, the key question that inspired his protest has finally been answered. “We know where the coins went, and we won’t get them back,” he says. 
“As far as I’m concerned, it’s solved.” **For almost four years, **Josh Jones assumed he’d eventually receive his rightful portion of his nearly 44,000 Bitcoins locked inside Mt. Gox. By mid-2017, Bitcoin’s price was soaring, and Mt. Gox had enough to pay out the $430 million it owed in claims several times over. Then last September, Mt. Gox trustee Nobuaki Kobayashi, a top restructuring lawyer also representing Takata in the airbag-maker’s bankruptcy, broke the news: Under Japanese bankruptcy law, the value of creditors’ claims were capped at what they were worth back in 2014: $483 per Bitcoin. “That’s just crazy,” says Jones, who held most of the coins on behalf of his clients at Bitcoin Builder, the service he built to facilitate arbitrage trading at Mt. Gox in its final weeks. “That can’t be how it’s going to work out.” But while there was little Jones could do back home in Santa Monica, another major creditor took it upon himself to ensure the Bitcoins would be fully divvied up among Mt. Gox victims. Richard Folsom, an American who worked for Bain & Co. in Tokyo before founding one of the first private equity shops in Japan, hired the biggest Japanese law firm and came up with a plan: What if Mt. Gox wasn’t technically bankrupt anymore? Their petition for “civil rehabilitation” of Mt. Gox, filed in November, is now pending before the Tokyo District Court; an outside examiner recommended in its favor in February. Shin Fukuoka, the partner at Nishimura & Asahi leading the effort, is confident it will be approved, as early as the end of April. “We think that the court has sufficient understanding about the problems in the case of proceeding with bankruptcy,” Fukuoka says. Those problems, of course, include the fact that the majority of Mt. Gox’s assets would otherwise accrue to Mark Karpelès. “Such an outcome would be a travesty,” says Jesse Powell, CEO of Kraken, the San Francisco–based Bitcoin exchange appointed to help investigate and distribute Mt. 
Gox claims (and himself a substantial creditor). If Fukuoka’s plan works, it would be the first time in Japan that a business “abolished” in bankruptcy was rehabilitated, he says: “These are very unique circumstances.” In a traditional civil rehabilitation, once the court gives the green light, it typically takes six months for the plan to be finalized—meaning optimistically, creditors could begin to get paid, preferably in Bitcoins, as soon as late this year. Fukuoka says he’s also considering mandating further investigation into the stolen Bitcoins as part of the rehab plan, in hopes more will be recovered. (A $75 million lawsuit from CoinLab that has held up the bankruptcy process could be sidestepped by setting aside a legal reserve fund in the meantime, he adds.) It would be an extraordinary outcome for creditors like Thomas Braziel, managing partner of New York–based hedge fund B.E. Capital Management, who has bought up $1 million worth of claims at 80¢ on the dollar, believing he will turn a profit no matter what. “Of course, if the rehabilitation happens, it’s a bonanza, and you make eight, nine, 10 times your money,” Braziel says. That would be a relief to Mt. Gox’s disgraced CEO, who says he’s had enough of the cryptocurrency business to last a lifetime: “The only thing I’m touching related to cryptocurrency is how to solve this bankruptcy. Nothing more,” says Karpelès. Besides, he has lost faith in the initial promise of digital money: “Bitcoin right now is, I believe, doomed.” Since his release from jail two summers ago, Karpelès has been moving apartments every few months out of concerns for his own safety. During three months of all-day interrogations while detained, he refused to confess to the accusations Japanese authorities threw at him—including, at one point, that he was Satoshi Nakamoto, Bitcoin’s mysterious founder. 
Still, despite what he feels is a weak case against him, he thinks the odds are he’ll be found guilty, at least during this first trial; Japan, which has a more than 99% conviction rate, is also one of a few countries that allows prosecutors to appeal an acquittal twice. In a year or two, he could be sent back behind bars. “After I came out, I felt like in a kind of dream, like I didn’t feel things were real,” he says, over a slice of cake with cream and cherries. “Even today I’m not sure yet.” Karpelès, though, is not on trial for what even his sympathizers fault him for the most: lying about Mt. Gox’s insolvency. “When Mt. Gox didn’t have any of the coins, he was getting new deposits from other customers to pay off other people—kind of like a Bernie Madoff,” says Kelman, the lawyer. For now, Karpelès, who’s never been to the United States (and isn’t allowed to leave Japan while on trial), is leveraging his mastery of Japanese and the country’s formal business customs. The arrest of Vinnik has made it easier to find work, he says, by lifting some blame from Karpelès. Even so, the taint of Mt. Gox follows him. “He is unhirable,” says Mike Kayamori, the CEO of Japanese cryptocurrency exchange Quoine. Yet earlier this year, Mark Karpelès landed a big new job: chief technology officer at London Trust Media, a Denver-based corporation that runs the largest virtual private network (VPN) service in the world. It has recently been expanding into cryptocurrency-related ventures. “I am more than willing to give a second chance to Mark in this fight’s critical hour,” says Andrew Lee, cofounder and chairman of London Trust Media, who also briefly ran Mt. Gox’s U.S. operations. Even if Mt. Gox’s rehabilitation succeeds, the company is unlikely to take another voyage. Still, that hasn’t stopped Karpelès from dreaming up schemes to get back the missing 650,000 Bitcoins. Even if the original coins can’t be retrieved, perhaps Mt. 
Gox could be revived long enough to generate revenue to finally make creditors whole; Karpelès also says he’s found one exchange that seems interested in pledging some of its own profits to victims. But others, such as Kraken’s Powell, say the hole is simply too deep to fill. Besides, even if Mt. Gox did reopen, who would want to trade there? Adds Burges, the Mt. Gox protester, “It’s like having another ship called the *Titanic.*” For him, closure means letting the rest of the Bitcoins go down with the ship.
true
true
true
He led the world's largest Bitcoin exchange before a mysterious heist made it go bust. As clues emerge and Bitcoin's price surges, Mark Karpelès is on the hunt for answers.
2024-10-12 00:00:00
2018-04-19 00:00:00
https://fortune.com/img-…?resize=1200,600
article
fortune.com
Fortune
null
null
29,794,832
https://lisperator.net/blog/javascript-sudoku-solver/
JavaScript Sudoku solver
null
# JavaScript Sudoku solver

My wife has developed a passion for Sudoku. I do not share it, I've always thought that this is a problem for computers, not humans, so I've never tried to solve one by hand. However, even if it's a high school problem, I thought it'd be a cute little program to work on. This is the obligatory blog post about it.

**Update:** see bestest version.

## Problem definition

It's pretty simple. You are given a board like this:

You have to fill every empty cell with a non-zero digit [1..9], such that no digit appears twice on the same column, row, or within the same 3x3 square. Click any empty field to see a list of allowed digits.

It's a search problem, so the tool that comes to mind is backtracking. Some developers have an anxiety about this word, as if it's some kind of weird black magic, but really, it's pretty simple. I'll describe here my sudoku solver.

## Board representation

Since it has 81 cells, we'll keep it as an array with 81 digits (zero for empty cells). It will be convenient to access it both using an index 0..80, or as (row, column), so we'll have a couple of functions to convert between the two:

```
// index -> { row, col }
function i2rc(index) {
    return { row: Math.floor(index / 9), col: index % 9 };
}

// { row, col } -> index
function rc2i(row, col) {
    return row * 9 + col;
}
```

Next, we need a function that tells us what are the available choices, that is, what digits are acceptable in a given cell (I'm also gonna call them “moves”). Here's a reasonable implementation:

```
function getChoices(board, index) {
    let choices = [];
    for (let value = 1; value <= 9; ++value) {
        if (acceptable(board, index, value)) {
            choices.push(value);
        }
    }
    return choices;
}
```

The `board` is our array with 81 elements, and `index` is 0..80 — this function will return an array of digits that are currently permitted on the cell at this index, according to the rules of the puzzle.
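Not in the original post, but a quick standalone sanity check of the two converters: every board index should survive a round trip through `i2rc` and back.

```javascript
// Standalone sanity check (the converters are repeated here so the
// snippet runs on its own).
function i2rc(index) {
    return { row: Math.floor(index / 9), col: index % 9 };
}
function rc2i(row, col) {
    return row * 9 + col;
}

// Every board index 0..80 must survive the round trip.
for (let i = 0; i < 81; ++i) {
    let { row, col } = i2rc(i);
    if (rc2i(row, col) !== i) throw new Error("round-trip failed at " + i);
}
console.log("all 81 indices round-trip correctly");
```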
The actual rules of the puzzle are implemented in the function `acceptable`: we can use a digit if it's not already used on the same row, column or in the same 3x3 square. So here it is:

```
function acceptable(board, index, value) {
    let { row, col } = i2rc(index);

    // if already present on the column, not acceptable
    for (let r = 0; r < 9; ++r)
        if (board[rc2i(r, col)] == value) return false;

    // if already present on the row, not acceptable
    for (let c = 0; c < 9; ++c)
        if (board[rc2i(row, c)] == value) return false;

    // if already present in the same 3x3 square, also not acceptable
    let r1 = Math.floor(row / 3) * 3;
    let c1 = Math.floor(col / 3) * 3;
    for (let r = r1; r < r1 + 3; ++r) {
        for (let c = c1; c < c1 + 3; ++c) {
            if (board[rc2i(r, c)] == value) return false;
        }
    }

    // we have a "go"
    return true;
}
```

There are optimizations we could make (we could use a single loop, and we could skip converting between index/row,column with some clever tricks) but let's not bother. We better start working on the problem now, that is, let's write a function that figures out the solution for a given puzzle.

## The “brute force” approach

As the name suggests, this is pretty dumb. We'll iterate through the empty cells, assign to each one of the acceptable digits and try to move as far as we can. If we can manage to fill the whole board, then the problem is solved, and if we get to a point where there are no acceptable digits, we have to “backtrack” — reach back to the previous cell where we had multiple choices, and pick something else.

It's trivial to write this function. For convenience, let's assume `board` is an external variable; our solver function only has to receive the current `index` — the cell that it currently deals with. It will return `true` if it found a solution, and `false` otherwise.

```
function solve(index) {
    while (index < 81 && board[index]) ++index; // skip non-empty cells
    if (index == 81) return true;               // we filled'em all, success!
    let moves = getChoices(board, index);
    for (let m of moves) {
        board[index] = m;      // try one choice
        if (solve(index + 1))  // if we can solve for the next cell
            return true;       // then return true, success!
    }
    board[index] = 0; // no move worked; we failed, clear the cell
    return false;     // and backtrack
}
```

You can play with it below, select a few samples (or clear the board and enter your own). “Solve!” will solve it instantly, while “Play” will step through it so you can see how it fills the cells, and how it backtracks when it needs to try different values. When it's done it will tell you how many moves it had to take back. Note that “Play” will take a long time on this unoptimized version, so don't bother to wait for it to complete; you can use “Pause” or “Reset” to stop it.

If you try the “really hard” puzzle, the brute-force search has to backtrack 8.9 million times and takes a few seconds.

## An improvement: move ordering

How does a human approach the problem? Intuitively, we look at the whole board and pick the cells where we have few choices. Ideally, if we can determine that only one value is possible, then we write it down and forget about it. This leads us to the “greedy” search: instead of trying values for each cell in order, fill first those cells where we have the fewest acceptable choices — that is, there's a smaller probability that we'll change our mind about it. Here's the new solve function:

```
function solve() {
    let { index, moves } = bestBet(board); // find the best place to fill
    if (index == null) return true;        // we filled'em all, success!
    for (let m of moves) {
        board[index] = m;         // try one choice
        if (solve()) return true; // if we solved further, success!
    }
    board[index] = 0; // no digit fits here, backtrack!
    return false;
}
```

So this time our function doesn't need to take the `index` as an argument, instead it'll determine the next index to fill using `bestBet` — a new function that will return the index of the cell with the fewest possible moves. Since, in order to figure that out, it needs to compute the moves too, it will return them as well, so that we avoid another call to `getChoices`. Here is `bestBet`:

```
function bestBet(board) {
    let index, moves, bestLen = 100;
    for (let i = 0; i < 81; ++i) {
        if (!board[i]) {
            let m = getChoices(board, i);
            if (m.length < bestLen) {
                bestLen = m.length;
                moves = m;
                index = i;
                if (bestLen == 0) break;
            }
        }
    }
    return { index, moves };
}
```

Below you can play with the new algorithm (or rather, it's the same algorithm, it just tries moves in a different order) and you can see that it makes a world of difference (look at the number of take-backs, compared to the previous implementation). On this new version, “Play” will complete within a few seconds for any of the medium examples.

We can notice a significant reduction of the search tree (much fewer take-backs, for example the “really hard” puzzle backtracks 430K instead of 8.9M times). However for most puzzles the actual run time is a bit higher! This got me thinking. Every time we call `solve()`, `bestBet` will repeat most work it already did on the previous iteration, looking for the best place to fill and computing moves for every single field, again. To make this optimization worthy of its cost we need to add some ugliness (with some clever tricks in `bestBet` we could cache part of the results), but I won't do it here.

## More human touch

Maybe I'd stop here, but my wife, who actually solves sudoku with her brain, told me a little trick that I didn't consider (I probably would have thought about it, had I ever tried to solve sudoku by hand). Look at the board below and focus on the yellow cell. Click on it to see what my program thinks are the available moves.
You see, it claims that the allowed digits for that field are 2, 4, 5, 8, 9, and that's correct by the rules, but now if you look at the pink fields, you can see that 8 is not allowed on the 4th and 6th columns, nor on the 3rd row, because an 8 is already there. And still we *know* that the top-middle 3x3 square *must* contain an 8, and this leaves the yellow cell as the only option; clearly we have a unique choice there, we can fill it immediately and there's no need to try 2, 4, 5 or 9.

Implementing this new optimization is not hard and makes a new world of difference, because it applies not only to the initial configuration but also after every subsequent move that the algorithm tries. Check the new numbers: The “really hard” puzzle now takes less than half a second and only backtracks 26K times, a huge reduction of the search tree.

To implement this new optimization we only have to improve `getChoices`, using a new utility function which implements the trick:

```
function getChoices(board, index) {
    let choices = [];
    for (let value = 1; value <= 9; ++value) {
        if (acceptable(board, index, value)) {
            if (unique(board, index, value))
                return [ value ]; // it's useless to try anything else
            else
                choices.push(value);
        }
    }
    return choices;
}

function unique(board, index, value) {
    let { row, col } = i2rc(index);
    let r1 = Math.floor(row / 3) * 3;
    let c1 = Math.floor(col / 3) * 3;
    for (let r = r1; r < r1 + 3; ++r) {
        for (let c = c1; c < c1 + 3; ++c) {
            let i = rc2i(r, c);
            if (i != index && !board[i] && acceptable(board, i, value)) {
                return false;
            }
        }
    }
    return true;
}
```

## The code

Here is the script used for the boards in this page. The code isn't exactly pretty, I made it just for the purpose of this blog post.
Use it like this:

```
import { Sudoku } from "sudoku.js";
let sdk = new Sudoku(document.querySelector("#container"), {
    smartOrdering: true,
    smartChoices: true
});
sdk.writeBoard([
    0, 0, 0, 0, 0, 0, 0, 0, 6,
    0, 3, 0, 0, 7, 1, 0, 4, 0,
    0, 0, 0, 0, 0, 0, 8, 0, 0,
    0, 0, 0, 9, 0, 8, 0, 7, 1,
    1, 0, 3, 0, 0, 0, 0, 0, 0,
    0, 0, 2, 0, 3, 0, 9, 0, 0,
    5, 0, 7, 0, 0, 6, 0, 0, 0,
    2, 0, 0, 0, 0, 0, 7, 0, 0,
    0, 0, 1, 8, 0, 0, 0, 0, 2,
], true);
```

## ~~Fastest version yet~~ (see below)

**Update:** leaving this here, but I've got a better version in the next section.

“The rest is engineering”. I couldn't figure out any more algorithmic improvements, but I couldn't help it and I've micro-optimized the heck out of it. Here's the “really hard” puzzle on the best version I've got:

If you're wondering what changed, look at the `acceptable` function (I used a single loop to iterate across both row and column; using the same loop to iterate through the 3x3 square seems to do more harm than good). Also at `bestBet` — I inlined getChoices into it and instead of building an array of moves, we bit-code them into an integer in order to avoid allocations, because the garbage collector costs time. And `solve` was updated accordingly.

## Bestest version :)

With even more engineering, but also an important improvement in the `unique` function, the “Really Hard™” puzzle turns out to be Actually Quite Simple™ and the algorithm solves it in just a few milliseconds with only 7 take-backs. Here is the bestest version.

Note: the hardest puzzle in this example is actually the one named “Harder”. It doesn't seem to be designed for humans, as no digit can be placed safely in the initial configuration (or I can't see it). The computer has to guess 1209 times (meaning, there are as many situations in which it can't place a safe digit and has to choose among multiple options).
The new `unique` function checks for uniqueness not only within the 3x3 block, but also on the whole row and column, which again drastically reduces the search tree. As for the engineering improvements: instead of storing digits as decimals, they are now bit-coded (each digit's corresponding bit is set). This allows us to use bitwise operations to detect available choices (`getChoices` was renamed `getMoves` ), which turns out to be much faster. There are other improvements as well, but this post is a bit too long already, feel free to check the code.
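The bit-coded move representation isn't shown above; as a rough illustration of the idea (a sketch of the representation, not the actual `getMoves` from the final version), representing digit *d* as bit `1 << (d - 1)` turns "which digits are still available?" into a couple of bitwise operations:

```javascript
// Sketch of bit-coded moves (illustrative; the real getMoves differs).
// Digit d is stored as bit 1 << (d - 1), so a set of digits is one integer.
const ALL = 0x1FF; // bits 0..8 set: all nine digits available

function getMovesBitmask(taken) {
    // `taken` is the OR of the bit-coded digits already used on the cell's
    // row, column and 3x3 square; the 9-bit complement is what's still free.
    return ALL & ~taken;
}

// Example: digits 1, 5 and 9 are already taken in the cell's units.
let taken = (1 << 0) | (1 << 4) | (1 << 8);
let moves = getMovesBitmask(taken);

// Iterate the set bits to recover the digits.
let digits = [];
for (let d = 1; d <= 9; ++d) {
    if (moves & (1 << (d - 1))) digits.push(d);
}
// digits is now [2, 3, 4, 6, 7, 8]
console.log(digits);
```

No array is allocated while computing `moves` itself, which is the point the post makes about avoiding garbage-collector pressure.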
true
true
true
null
2024-10-12 00:00:00
2021-06-27 00:00:00
null
null
null
null
null
null
2,112,322
http://springboard.com/meet-the-mentors-day/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,775,936
http://www.sciencedirect.com/science/article/pii/S0378437112004281
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
29,395,201
https://cocomaterial.com/
CocoMaterial
null
We're sorry but coco-material-front doesn't work properly without JavaScript enabled. Please enable it to continue.
true
true
true
CocoMaterial is an Open Source hand-drawn illustration library
2024-10-12 00:00:00
null
https://cocomaterial.com/card-image.png
null
cocomaterial.com
CocoMaterial
null
null
17,404,500
http://www.bbc.co.uk/blogs/aboutthebbc/entries/bde59828-90ea-46ac-be5b-6926a07d93fb
Introducing the first version of BBC Sounds
null
# Introducing the first version of BBC Sounds Dan Taylor-Watt Director of Product, BBC iPlayer & BBC Sounds **Over the past year or so we’ve talked a lot about how we need to reinvent the BBC for a new generation.** Today we’re releasing the first version of a brand new audio app from the BBC. Available to download for free from Apple, Google and Amazon app stores from later today, BBC Sounds brings together our live and on demand radio, music and podcasts into a single personalised app. Every user’s experience of BBC Sounds will be unique as it’s designed to learn from your listening habits, providing one-tap access to the latest episodes of your favourite BBC podcasts and radio shows and introducing you to new audio you wouldn’t otherwise have discovered from the 80,000 (yes, really) hours available. Here’s a quick summary of what you can do: - Scroll the dial (a much-loved feature of the BBC iPlayer Radio app) to listen live to any of the BBC’s 18 national stations (and online-only station CBeebies Radio) or tap All Stations to listen to any of the BBC’s 40 local stations - Pick up where you left off with ‘Continue Listening’ which surfaces part-listened podcasts and radio shows and next episodes for you - Enjoy hand-picked collections of podcasts and on demand music shows to match your mood, from Funny Chat to Upgrade Your Life, from Live Sessions to Dance Mixes - Discover new audio via the ‘Recommended for you’ section, with a dozen great on demand listens picked just for you and constantly refreshed based on your listening - Browse by category: from Crime to Science & Technology, from Classical to Hip Hop - Add any individual episode or clip to ‘My List’ to listen to later - Subscribe to any podcast or programme and get a personalised feed of the latest episodes in ‘My Sounds’ This is very much a first release - we wanted to get it out as early as possible to start getting feedback to help develop the app. 
There’s a bunch of additional features we’re already busy working on (including downloads, to enable offline listening), and we’ll have lots more to add later in the year, but we’d love to hear how you’d like to see the app develop. Do leave a comment below or feedback via this survey. *And if you’d really like to get involved, why not come and join us? We’re hiring...*
true
true
true
Dan Taylor-Watt introduces the BBC's new audio app - BBC Sounds, available to download today.
2024-10-12 00:00:00
2018-06-25 00:00:00
https://ichef.bbci.co.uk…675/p06c23lx.jpg
article
bbc.co.uk
BBC
null
null
16,981,243
https://www.theatlantic.com/technology/archive/2018/04/vatican-secret-archives-artificial-intelligence/559205/?single_page=true
Artificial Intelligence Is Cracking Open the Vatican's Secret Archives
Sam Kean
# Artificial Intelligence Is Cracking Open the Vatican's Secret Archives A new project untangles the handwritten texts in one of the world’s largest historical collections. The Vatican Secret Archives is one of the grandest historical collections in the world. It’s also one of the most useless. The grandeur is obvious. Located within the Vatican’s walls, next door to the Apostolic Library and just north of the Sistine Chapel, the VSA houses 53 linear miles of shelving dating back more than 12 centuries. It includes gems like the papal bull that excommunicated Martin Luther and the pleas for help that Mary Queen of Scots sent to Pope Sixtus V before her execution. In size and scope, the collection is almost peerless. That said, the VSA isn’t much use to modern scholars, because it’s so inaccessible. Of those 53 miles, just a few millimeters’ worth of pages have been scanned and made available online. Even fewer pages have been transcribed into computer text and made searchable. If you want to peruse anything else, you have to apply for special access, schlep all the way to Rome, and go through every page by hand. But a new project could change all that. Known as In Codice Ratio, it uses a combination of artificial intelligence and optical-character-recognition (OCR) software to scour these neglected texts and make their transcripts available for the very first time. If successful, the technology could also open up untold numbers of other documents at historical archives around the world. OCR has been used to scan books and other printed documents for years, but it’s not well suited for the material in the Secret Archives. Traditional OCR breaks words down into a series of letter-images by looking for the spaces between letters. It then compares each letter-image to the bank of letters in its memory. After deciding which letter best matches the image, the software translates the letter into computer code (ASCII) and thereby makes the text searchable. 
This process, however, really only works on typeset text. It’s lousy for anything written by hand—like the vast majority of old Vatican documents. Here’s an example from the early 1200s, written in what’s called Caroline minuscule script, which looks like a mix of calligraphy and cursive: The main problem in this example is the lack of space between letters (so-called dirty segmentation). OCR can’t tell where one letter stops and another starts, and therefore doesn’t know how many letters there are. The result is a computational deadlock, sometimes referred to as Sayre’s paradox: OCR software needs to segment a word into individual letters before it can recognize them, but in handwritten texts with connected letters, the software needs to recognize the letters in order to segment them. It’s a catch-22. Some computer scientists have tried to get around this problem by developing OCR to recognize whole words instead of letters. This works fine technologically—computers don’t “care” whether they’re parsing words or letters. But getting these systems up and running is a bear, because they require gargantuan memory banks. Rather than a few dozen alphabet letters, these systems have to recognize images of thousands upon thousands of common words. Which means you need a whole platoon of scholars with expertise in medieval Latin to go through old documents and capture images of each word. In fact, you need several images of each, to account for quirks in handwriting or bad lighting and other variables. It’s a daunting task. In Codice Ratio sidesteps these problems through a new approach to handwritten OCR. The four main scientists behind the project—Paolo Merialdo, Donatella Firmani, and Elena Nieddu at Roma Tre University, and Marco Maiorino at the VSA—skirt Sayre’s paradox with an innovation called jigsaw segmentation. This process, as the team recently outlined in a paper, breaks words down not into letters but something closer to individual pen strokes. 
The OCR does this by dividing each word into a series of vertical and horizontal bands and looking for local minimums—the thinner portions, where there’s less ink (or really, fewer pixels). The software then carves the letters at these joints. The end result is a series of jigsaw pieces: By themselves, the jigsaw pieces aren’t tremendously useful. But the software can chunk them together in various ways to make possible letters. It just needs to know which groups of chunks represent real letters and which are bogus. To teach the software this, the researchers turned to an unusual source of help: high schoolers. The team recruited students at 24 schools in Italy to build the project’s memory banks. The students logged onto a website, where they found a screen with three sections: The green bar along the top contains nice, clean examples of letters from a medieval Latin text—in this case, the letter *g*. The red bar in the middle contains spurious examples of *g*, what the Codice scientists call “false friends.” The grid at the bottom is the meat of the program. Each of the images there is composed of a few jigsaw pieces that the OCR software chunked together—its guess at a plausible letter. The students then judged the OCR’s efforts, telling it which guesses were good and which were bad. They did so by comparing each image to the platonically perfect green letters and clicking a checkbox when they saw a match. Image by image, click by click, the students taught the software what each of the 22 characters in the medieval Latin alphabet (*a*–*i*, *l*–*u*, plus some alternative forms of *s* and *d*) looks like. The setup did require some expert input: Scholars had to pick out the perfect examples in green, as well as the false friends in red. But once they did this, there was no more need for them. The students didn’t even need to be able to read Latin. All they had to do was match visual patterns.
At first, “the idea of involving high-school students was considered foolish,” says Merialdo, who dreamed up In Codice Ratio. “But now the machine is learning thanks to their efforts. I like that a small and simple contribution by many people can indeed contribute to the solution of a complex problem.” Eventually, of course, the students stepped aside as well. Once they’d voted yes on enough examples, the software started chunking jigsaw pieces together independently and judging for itself what letters were there. The software itself became an expert—it became artificially intelligent. At least, sort of. It turned out that chunking jigsaw pieces into plausible letters wasn’t enough. The computer still needed additional tools to untangle the knots of handwritten text. Imagine you’re reading a letter, and you come across this line: Is it “clear” to them or “dear” to them? Hard to say, since the strokes that make up “d” and “cl” are virtually the same. OCR software faces the same problem, especially with a highly stylized script like Caroline minuscule. Try deciphering this word: After running through different jigsaw combinations, the OCR threw up its hands. Guesses included *aimo*, *amio*, *aniio*, *aiino*, and even the Old MacDonald’s Farm–ish *aiiiio*. The word is *anno*, Latin for “year,” and the software nailed the a and o. But those four parallel columns in the middle flummoxed it. To get around this problem, the In Codice Ratio team had to teach their software some common sense—practical intelligence. They found a corpus of 1.5 million already-digitized Latin words, and examined them in two- and three-letter combinations. From this, they determined which combinations of letters are common, and which never occur. The OCR software could then use those statistics to assign probabilities to different strings of letters. As a result, the software learned that *nn* is far more likely than *iiii*. 
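The statistical filter the article describes can be sketched in a few lines. This is a toy illustration with invented counts, not In Codice Ratio's data or code: candidate transcriptions are scored by how plausible their adjacent letter pairs are in a Latin corpus.

```javascript
// Toy bigram scorer (invented counts, purely illustrative).
const bigramCounts = { an: 900, nn: 400, no: 700, ai: 120, ii: 30, io: 500 };
const total = Object.values(bigramCounts).reduce((a, b) => a + b, 0);

function score(word) {
    // Multiply the probability of each adjacent letter pair; unseen bigrams
    // get a small floor so one rare pair doesn't zero out the whole word.
    let p = 1;
    for (let i = 0; i + 1 < word.length; ++i) {
        const bg = word.slice(i, i + 2);
        p *= (bigramCounts[bg] || 0.5) / total;
    }
    return p;
}

// With these counts, the correct "anno" outranks the OCR's garbled guesses.
score("anno") > score("aiino"); // true
```

The same scheme extends to trigrams; the key property is that strings full of improbable pairs like *ii* are heavily penalized.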
With this refinement in place, the OCR was finally ready to read some texts on its own. The team decided to feed it some documents from the Vatican Registers, a more than 18,000-page subset of the Secret Archives consisting of letters to European kings, rulings on legal matters, and other correspondence. The initial results were mixed. In texts transcribed so far, a full one-third of the words contained one or more typos, places where the OCR guessed the wrong letter. If yov were tryinj to read those lnies in a bock, that would gct very aiiiioying. (The most common typos involved *m*/*n*/*i* confusion and another commonly confused pair: the letter *f* and an archaic, elongated form of *s*.) Still, the software got 96 percent of all handwritten letters correct. And even “imperfect transcriptions can provide enough information and context about the manuscript at hand” to be useful, says Merialdo. Like all artificial intelligence, the software will improve over time, as it digests more text. Even more exciting, the general strategy of In Codice Ratio—jigsaw segmentation, plus crowdsourced training of the software—could easily be adapted to read texts in other languages. This could potentially do for handwritten documents what Google Books did for printed matter: open up letters, journals, diaries, and other papers to researchers around the world, making it far easier to both read these documents and search for relevant material. That said, relying on artificial intelligence does have limitations, says Rega Wood, a historian of philosophy and paleographer (expert on ancient handwriting) at Indiana University. It “will be problematic for manuscripts that are not professionally written but copied by nonprofessionals,” she says, since the handwriting and letter shapes will vary far more in those documents, making it harder to teach the OCR. 
In addition, in cases where there’s only a small sample size of material to work with, “it is not only more accurate, but just as quick to make transcriptions without such technology.” *Pace* Dan Brown, the “secret” in the Vatican Secret Archives’ name doesn’t refer to anything clandestine or conspiratorial. It merely means that the archives are the personal property of the pope; “private archives” would probably be a better translation of the original name, *Archivum Secretum*. Still, until recently, the VSA might as well have been secret to most of the world—locked away and largely inaccessible. “It is amazing for us to bring these manuscripts back to life,” Merialdo says, “and make their comprehension available to everybody.”
true
true
true
A new project untangles the handwritten texts in one of the world’s largest historical collections.
2024-10-12 00:00:00
2018-04-30 00:00:00
https://cdn.theatlantic.…750/original.jpg
article
theatlantic.com
The Atlantic
null
null
20,045,906
https://medium.com/@arthurofbabylon/how-to-read-in-the-age-of-the-smartphone-4886f28eee63
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,788,930
http://blog.algolia.com/christmas-gifthub-awesome-autocomplete/
Github Awesome Autocomplete browser extension for Chrome and Firefox - Algolia Blog
null
By working every day on building the best search engine, we’ve become obsessed with our own search experience on the websites and mobile applications we use. We’re git addicts and love using GitHub to store every single idea or project we work on. We use it both for our private and public repositories (12 API clients, HN Search or various demos). We use its search function every day, and we decided to re-build it the way we thought it should be. We’re proud to share it with the community via this Chrome extension. Our Github Awesome Autocomplete enables seamless and fast access to GitHub resources via an as-you-type search functionality.

We used GitHub’s Archive dataset to export top repositories and last active users using Google’s BigQuery:

```
;; export repositories
SELECT a.repository_name as name,
       a.repository_owner as owner,
       a.repository_description as description,
       a.repository_organization as organization,
       a.repository_watchers AS watchers,
       a.repository_forks AS forks,
       a.repository_language as language
FROM [githubarchive:github.timeline] a
JOIN EACH (
  SELECT MAX(created_at) as max_created, repository_url
  FROM [githubarchive:github.timeline]
  GROUP EACH BY repository_url
) b
ON b.max_created = a.created_at and b.repository_url = a.repository_url

;; export users
SELECT a.actor_attributes_login as login,
       a.actor_attributes_name as name,
       a.actor_attributes_company as company,
       a.actor_attributes_location as location,
       a.actor_attributes_blog AS blog,
       a.actor_attributes_email AS email
FROM [githubarchive:github.timeline] a
JOIN EACH (
  SELECT MAX(created_at) as max_created, actor_attributes_login
  FROM [githubarchive:github.timeline]
  GROUP EACH BY actor_attributes_login
) b
ON b.max_created = a.created_at and b.actor_attributes_login = a.actor_attributes_login
```

Here are the 2 index configurations we used to build the search:

Sylvain Utard, VP of Engineering
true
true
true
Simple and discreet extension that enhances GitHub's search, letting you search for repositories and people faster than ever.
2024-10-12 00:00:00
2019-10-09 00:00:00
https://res.cloudinary.c…8xt9lwp9t4s0.png
article
algolia.com
Algolia
null
null
3,186,395
http://mikecanex.wordpress.com/2011/11/02/doctor-who-encyclopedia-for-ipad/
Doctor Who Encyclopedia For iPad
null
This is where digital books come into their own. I want one for *The Avengers* (Steed and Mrs. Peel), *Blake’s 7,* and all of the Gerry Anderson Supermarionation and live-action productions being abused by those bastards at Carlton TV. *Thunderbirds* itself could be a standalone, but there should also be a massive Gerry Anderson one. And one for Space Patrol (aka *Planet Patrol*).
true
true
true
This is where digital books come into their own. I want one for The Avengers (Steed and Mrs. Peel), Blake’s 7, and all of the Gerry Anderson Supermarionation and live-action productions being…
2024-10-12 00:00:00
2011-11-02 00:00:00
https://img.youtube.com/…7NLoWRuM5s/0.jpg
article
wordpress.com
Mike Cane’s xBlog
null
null
14,182,456
https://techcrunch.com/2017/04/17/facebooks-head-of-design-on-creating-for-2-billion-people
Facebook's head of design on creating for 2 billion people | TechCrunch
Jared Erondu; Bobby Ghoshal
Luke Woods is the Head of Design at Facebook. In this episode, we discuss how digital design is in a unique position to make an impact on the world, dive into the details of what the evolution of design looked like at Facebook, and learn the importance of three little words: *understand, identify, execute. * Facebook has grown immensely throughout the five-and-a-half years Woods has worked there. Throughout our interview, he gives us an inside look at how the design team grew from a few dozen to a few hundred, explaining the trials the team faced as it scaled and the tools they used to overcome their problems. On how to approach designing a new product or feature, Woods says there are three steps: **understand, identify, and execute**. Take the time to **understand** what it is you’re trying to accomplish with the product. Use that understanding to **identify** the biggest problems you need to solve. And **execute** on the idea by focusing on getting it done and making it real. *Jared Erondu and Bobby Ghoshal are the hosts of High Resolution. This post and episode notes were put together by freelance writer, **Gannon Burgett**. Watch for **High Resolution** episodes to drop every Monday on TechCrunch at 8 a.m. PT. You can also listen on **iTunes** and **Overcast**.*
true
true
true
https://www.youtube.com/watch?v=VhZoVFd5IeM&feature=youtu.be Luke Woods is the Head of Design at Facebook. In this episode, we discuss how digital
2024-10-12 00:00:00
2017-04-17 00:00:00
https://techcrunch.com/w…ageepisode10.png
article
techcrunch.com
TechCrunch
null
null
17,004,891
https://opinionator.blogs.nytimes.com/2013/03/30/those-irritating-verbs-as-nouns/
Those Irritating Verbs-as-Nouns
Henry Hitchings
Draft is a series about the art and craft of writing. “Do you have a solve for this problem?” “Let’s all focus on the build.” “That’s the take-away from today’s seminar.” Or, to quote a song that was recently a No. 1 hit in Britain, “Would you let me see beneath your beautiful?” If you find these sentences annoying, you are not alone. Each contains an example of nominalization: a word we are used to encountering as a verb or adjective that has been transmuted into a noun. Many of us dislike reading or hearing clusters of such nouns, and associate them with legalese, bureaucracy, corporate jive, advertising or the more hollow kinds of academic prose. Writing packed with nominalizations is commonly regarded as slovenly, obfuscatory, pretentious or merely ugly. There are two types of nominalization. Type A involves a morphological change, namely suffixation: the verb “to investigate” produces the noun “investigation,” and “to nominalize” yields “nominalization.” Type B is known as “zero derivation” — or, more straightforwardly, “conversion.” This is what has taken place in my opening illustrations: a word has been switched from verb into noun (or, in the last two cases, from adjective into noun), without the addition of a suffix. Plenty of teachers discourage heavy use of the first type of nominalization. Students are urged to turn nouns of this kind back into verbs, as if undoing a conjurer’s temporary hoax. On this principle, “The violence was Ted’s retaliation for years of abuse” is better rendered as “Ted retaliated violently after years of abuse.” The argument for doing this is that the first version is weaker: dynamic writing makes use of “stronger” verbs. Yet in practice there are times when we may want to phrase a matter in a way that is not so dynamic. Perhaps we feel the need to be tactful or cautious, to avoid emotiveness or the most naked kind of assertion. Type A nominalization can afford us flexibility as we try to structure what we say. 
It can also help us accentuate the main point we want to get across. Sure, it can be clunky, but sometimes it can be trenchant. On the whole, it is Type B nominalization that really grates. “How can anybody use ‘sequester’ as a noun?” asks a friend. “The word is ‘sequestration,’ and if you say anything else you should be defenestrated.” “I’ll look forward to the defenestrate,” I say, and he calls me something I’d sooner not repeat. Even in the face of such opprobrium, people continue to redeploy verbs as nouns. I am less interested in demonizing this than in thinking about the psychology behind what they are doing. Why say “solve” rather than “solution”? One answer is that it gives an impression of freshness, by avoiding an everyday word. To some, “I have a solve” will sound jauntier and more pragmatic than “I have a solution.” It’s also more concise and less obviously Latinate (though the root of “solve” is the Latin *solvere*). These aren’t necessarily virtues, but they can be. If I speak of “the magician’s reveal” rather than of “the magician’s moment of revelation,” I am evoking the thrill of this sudden unveiling or disclosure. The more traditional version is less immediate. Using a Type B nominalization may also seem humorous and vivid. Thus, compare “that was an epic fail” (Type B nominalization), “that was an epic failure” (Type A nominalization) and “they failed to an epic degree” (neither). There are other reasons for favoring nominalizations. They can have a distancing effect. “What is the ask?” is less personal than “What are they asking?” This form of words may improve our chances of eliciting a more objective response. It can also turn something amorphous into a discrete conceptual unit, of a kind that is easier to grasp or sounds more specific. Whatever I think of “what is the ask?” it focuses me on what’s at stake. Some regard unwieldy nominalizations as alarming evidence of the depraved zeitgeist. But the phenomenon itself is hardly new. 
For instance, “solve” as a noun is found in the 18th century, and the noun “fail” is older than “failure” (which effectively supplanted it). “Reveal” has been used as a noun since the 16th century. Even in its narrow broadcasting context, as a term for the final revelation at the end of a show, it has been around since the 1950s. “Ask” has been used as a noun for a thousand years — though the way we most often encounter it today, with a modifier (“a big ask”), is a 1980s development. It is easy to decry nominalization. I don’t feel that a writer is doing me any favors when he expresses himself thus: “The successful implementation of the scheme was a validation of the exertions involved in its conception.” There are crisper ways to say this. And yes, while we’re about it, I don’t actually care for “Do you have a solve?” Still, it is simplistic to have a blanket policy of avoiding and condemning nominalizations. Even when critics couch their antipathy in a language of clinical reasonableness, they are expressing an aesthetic judgment. Aesthetics will always play a part in the decisions we make about how to express ourselves — and in our assessment of other people’s expression — but sometimes we need to do things that are aesthetically unpleasant in order to achieve other effects, be they polemical or diplomatic. *Henry Hitchings is the author of three books exploring language and history, including, most recently, “The Language Wars.”*
true
true
true
Why “that was an epic fail” sounds so good and also so annoying.
2024-10-12 00:00:00
2013-03-30 00:00:00
https://static01.nyt.com…go_291_black.png
article
nytimes.com
Opinionator
null
null
1,792,663
http://panko.shidler.hawaii.edu/SSR/Mypapers/whatknow.htm
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
25,267,382
https://www.briantimar.com/notes/mimetic/mimetic/
Mimetic
null
2019-05-19 I’ve been a graduate student in physics for almost three years, but I only recently figured out why. I had to tackle a simple question to do so: *Why does this matter?* I realized that I’d never forced myself to answer this honestly. As Paul Graham has pointed out, these systematic gaps in conversation should raise suspicion — they often indicate when you’re wrong about something important. I was wrong in thinking that my work mattered to me, and I avoided asking myself this question because I knew the answer would be painful. One afternoon, while moving out of an apartment, I came across a cardboard box packed with binders and paper folders, full of notes accumulated over the past year. As I let it fall in front of the door, a thought dropped into my head and stuck there: none of this means anything to me. This was, nominally, the fruit borne of a year of my life, and it felt so viscerally wasted. Despair bought me honesty — by enrolling in graduate school, I’d made myself miserable for no reason. Why had I spent so much time in purposeless hard work? I arrived at a simple mechanism: an excessive sensitivity to the desires of others, and a competitive environment. I ended up in physics through stubbornness, and an unusual willingness to suffer for the sake of grades. As an undergraduate, I was not particularly passionate about quarks, quasars, or quantum mechanics, but I was academically very competitive, and once I’d settled on physics as my major I determined to place myself at the top of my class. I did so by throwing myself into the hardest classes and putting in the hours required to ace the tests. This was, to put it mildly, a bad idea. I got a sort of grim pleasure from vanquishing my classmates in these academic slogs, but I was basically miserable. So why’d I keep it up? When multiple people are striving towards a shared goal, they often rank themselves by progress within their peer group.
This was my mistake — I swapped an absolute goal (figuring out how bits of nature work) with a relative one (scoring higher on tests than my classmates). Later, when I found myself unhappy, I couldn’t leave without feeling like I’d lost something. That social capital sunk cost was the first part of the trap I found myself in. The second was a positive feedback loop that encouraged me to spend ever-increasing amounts of time on my work. Humans inherit convictions mimetically from each other — we learn what to value by imitating our peers. As my desire to excel academically grew, I spent greater amounts of time in and around the physics department. The more time I spent there, the greater my desire to excel. I’d never given physics much thought at all before my senior year in high school — but once I was surrounded by other physics students, competing for the same pool of grades and research positions, I could think of little else. This inherited desire was unchecked because I had no life outside of academics — no fixed reference point. Although quitting would have made me happier, I felt like I had nowhere to quit to. My tunnel vision left me with few concrete notions of alternative pursuits, and without a destination, I could not seriously contemplate leaving. Plans are never plausible until they contain specifics, and implausible plans tend to be discarded. Many of my peers in physics only added incredulity, consciously or otherwise. The result was a reality distortion field — quitting was not just painful, but unimaginable, unthinkable. I ended up in graduate school not because I wanted to toe the bleeding edge of natural science, but because I simply couldn’t imagine doing anything else. That’s the mimetic trap in a nutshell: it hurts to leave, and there’s nowhere to go. It decouples the social reward signal from the rest of objective reality — you can spend years ascending ranks in a hierarchy without producing anything that the rest of humanity finds valuable. 
If you value the process itself, that’s fine. I didn’t. Cowardice kept me from acting on this, and after a while I came to believe I had to succeed in this field I’d fallen into essentially by chance. I suspect I’m not the only one who’s felt this trapping effect in physics. Some theorists seem to work primarily on fad topics inherited from other prominent departments (ever heard of dynamical quantum phase transitions?). That’s not to say these research areas aren’t valuable, beautiful, or profound — but I’m wary of the process that pulls people into them. Among experimentalists, it’s not hard to find graduate students who can tell you every detail about how a particular machine operates, and almost nothing about why it should be built. Again, if they’re enjoying the process, more power to them. My point is that they’re driven in part by mimetic forces, and for people with a certain psychological weakness, this can lead to purposeless toil. I know what this feels like, and it terrifies me. Physics is hardly a lone offender within academia. Graduate programs select for intensely competitive individuals with highly specific skills, often with negligible market value outside of universities. A strong desire for publications on esoteric topics is inherited from senior postdocs and professors, making tunnel vision especially acute. The activation energy required for quitting is famously high, in part because the glow from the genuine intellectual lights in any field makes outside jobs seem (unfairly) pale and shallow in comparison. The number of academic positions in any sub-field is typically small and static, leading to zero-sum competition for titles. This is the worst sort of posturing, and harms the psyche — as Eric Weinstein puts it, …it’s better to be in an expanding world and not quite in exactly the right field, than to be in a contracting world where people’s worst behavior comes out and your mind is grooved in defensive and rent-seeking types of ways.
Academics have uniformly rather low salaries, increasing our tendency to focus on social status as a measure of success. Salary gradations are useful for disrupting mimetic effects because they tie effort expended directly to units of universal economic value — convertible to kilos of rice, oil, and stuff in the physical world. A price is a lifeline to reality: all else being equal, the job with the lower wage is probably less valuable. Without this signal, the goals of a peer group are easily decoupled from the outside world, making it easy to drift into time-wasting pursuits. So — I’ve convinced myself that mimetic traps are a real thing, and that I should be worried about them. Should you? If you find yourself vaguely dissatisfied with your work, unable to describe coherently why you’re doing what you’re doing — yes, you probably should. “Why does this matter?” is an excellent way to gauge if you’ve drifted into a mimetic trap. If you find this question impossible to answer honestly, you’re probably wasting your time. Getting out is the hard part — that requires courage and diligent planning. It’s much easier to avoid falling in. But in either case, you’ll benefit from building a system that steers you towards productive, meaningful activity in the long run. Mimetic environments are a serious problem only if you fall into one where you can’t enjoy the process. They’re a tool for amplifying ambition and diligence, and it’s up to you to apply this tool to yourself wisely. This requires some care. It’s important to have a strong learning signal — a fast feedback loop between effort expended and success. This way, you’ll know quickly what it takes to succeed, and whether you can be happy doing so. Look for environments where competitors see themselves as playing a game, rather than fighting for survival — this prevents rankings within the hierarchy from becoming an existential problem. 
Outside of competitive environments, your peer group can be engineered to improve your decision making and steer you away from unhappy traps. The authors you read (or the podcasters you listen to) are a good place to start, because you have absolute control over their presence in your consciousness. Speaking with authors through their written work triggers the same neural circuits that produce imitation of desire. By stocking a bookshelf judiciously, you can express a preference over preferences — “what should I value? What do I want to spend my waking hours thinking about?” — and act on it through careful, honest reading. This engineering is safe: most authors exert their influence slowly, over hundreds of pages, and if the effect turns out to be undesirable, you need only put the book down. It’s cheap and reliable — if you want to emulate someone, start by reading what they read. Most importantly, it’s *powerful*, because authors form a large part of the meta-peer group that determines which communities and games you engage with. In closing, some tips: Don’t force yourself to do anything you hate. If you get too good at this, you won’t be able to figure out when to quit. Enjoy the process of whatever you’re doing — you’ll be happier, and much more likely to practice, which leads to better outcomes. Make sure your job has clear price signals for success and failure. Be suspicious of roles that compensate you with status or non-financial rewards. Hold yourself to ambitious absolute standards in morals and productivity — write them down on post-it notes. You have an obligation to use yourself well, your time is valuable, and there are right and wrong ways to spend it. Maintain a diversity of pursuits — you want to ensure that, no matter how engrossed you become in one, you never forget that the others exist. Join Twitter — there’s no better way to reduce tunnel vision. Here’s a list of some of my favorite twitter follows. Good luck! 
*2019-05-19* *update 2019-05-20* Thanks to Dan Wang for writing the piece that got me thinking about this consciously. Thanks to James Ough and especially Alexey Guzey for comments on earlier drafts, and to Alexey again for prodding me to write this.
true
true
true
null
2024-10-12 00:00:00
2019-05-19 00:00:00
null
null
null
null
null
null
17,134,099
https://www.youtube.com/watch?v=FdD0GvVRSMc&feature=youtu.be
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
2,333,533
http://www.popularmechanics.com/technology/how-to/tips/what-happens-to-your-online-data-when-you-die?click=pm_latest
What Happens To Your Data When You Die?
Glenn Derene
**Passwords, banking records, social media accounts**—day by day our lives create more and more data. But what becomes of all that data when we pass away is a looming problem with no clear answer. Marc Davis, a partner architect at Microsoft responsible for online services such as Bing, MSN, and advertising, raised this and other troubling issues about citizens' rights to their own information at a panel at this weekend's SXSW conference, called "Demystifying Online Privacy and Empowering the Digital Self." Despite our increasingly data-intensive lives, Davis explains, the legal framework around our personal data just isn't there yet. "Usually, where commerce and society meet legally is the concept of property," Davis said. "What's missing is a concept of contract law and property rights for digital information." Consider the concept of a digital will: A legally binding statement to the world declaring who should have access to your information after you die. It's a question that is bound to get only more complicated as our digitally engaged population grows older and expires. Think about how many passwords and online accounts you have. Who else has access to that information now? Whom would you like to get access after you die? How would the providers that host that data in the cloud even know you died, and what standards could they use to verify that fact? As the information we store in the cloud becomes more voluminous and valuable, these questions become more than simply academic. Davis tells PM that professionals in the legal, funeral, and estate planning professions are just starting to come to grips with the problem of what to do with information after death. He's been part of several panels on the subject in the past two years, and recently there have been a series of Digital Death Days intended to educate the industry about the problem (the next one in the United States is May 6th, in San Francisco). 
A few startup companies such as Legacy Locker and Entrustet have also sprung up to handle the legal and financial issues of data after death. Yet most people don't know how much data they have in the cloud in the first place, and have made no preparations for what happens to that information after death. "More and more people are living their lives online, but also their lives in the physical world are connecting to the Internet," Davis says. That produces information with obvious financial value, such as banking and tax records, but also plenty of information with personal value: photos, music, communications, and social networking accounts. Then add in all of the data we automatically generate relating to our location, buying habits, Internet surfing histories, and more. "There's a whole swath of data that we create that increasingly gets bound to our identity so that we leave a digital legacy," Davis says. The subject is deeply intertwined with the larger issue of digital identity. As Davis points out, it's not just after death that digital property issues come into play. "Every life phase we go through where we've established structures, documents and contracts to handle property and identity—birth, marriage, divorce, retirement—we've created as a civilization ways to handle the movements of rights and assets. So we're at that time in history now where we're applying these metaphors and frameworks onto the digital realm." The courts probably won't figure out this complicated conundrum anytime soon. But for individuals the solution is simple, Davis says—include a digital will along with your regular will. Leave instructions for how to get to your digital assets and what to do with them. Then your online identity won't end up in digital limbo.
true
true
true
Our lives are inundated with more data than ever. But when we die, our passwords and online accounts may live on. According to tech experts at this year's SXSW conference, no one yet knows just how to handle a person's digital legacy.
2024-10-12 00:00:00
2011-03-14 00:00:00
https://hips.hearstapps.…xh&resize=1200:*
article
popularmechanics.com
Popular Mechanics
null
null
34,150,744
https://github.com/michaeleisel/zld
GitHub - michaeleisel/zld: A faster version of Apple's linker
Michaeleisel
## NOTE: zld is now archived, consider using lld instead. More info is here.

For large projects, the linking phase (explanation) can significantly increase incremental build times. This project is a fork of the Apple linker, `ld`. It is a drop-in replacement that can substantially speed things up.

*Note*: it is only intended for debug builds, to make debugging faster. Feel free to file an issue if you find that linking is not at least 40% faster for your case (make sure to run it twice in a row to ensure that caches have been generated). Further benchmark details can be found here.

It all depends on your risk tolerance and how much you value the speedup in incremental build time. When linking takes more than one second, I'd cut that time in half as the estimated time with this new linker. If that difference is compelling to you, then it'd be worth trying out. Personally, I'd use it in projects with an existing link time of even 500ms (but I am an impatient tinkerer).

`zld` is forked from the most recently open-sourced version of `ld`. It's used by thousands of developers across many of the largest apps in the world. Without a few optimizations around hashing, it would produce byte-for-byte the same executables as the open-source one. Although it's not ideal to mix compiler and linker toolchain versions, the open-source one is fairly recent. `zld` will continue to be updated with each new version of `ld` as it is released.

Below are the installation methods. Note that, if you someday install a newer version of Xcode and zld doesn't work with it, it may be that your version of zld is now too far behind that of the linker shipped in Xcode. In that case, check back here for the latest zld release to fix the problem.

*Note*: The Xcode app must be installed for zld to work (not just the Xcode command line tools)

The pre-built binary for the latest release is here.
- `sudo xcode-select -s <path to Xcode>`
- Install cmake
- Checkout the latest release of zld from master
- Run `make clean && make`
- See in the output where it built zld (probably `build/Build/Products/Release/zld`).

Get the path of zld from `which zld`, then add `-fuse-ld=<path to zld> -Wl,-zld_original_ld_path,$(DT_TOOLCHAIN_DIR)/usr/bin/ld` to "Other Linker Flags" in the build settings (debug configuration). That `-zld_original_ld_path` provides the path to the linker Xcode would otherwise use, which is important because there are certain known cases (e.g. arm64_32 and Catalyst) where zld knows that it has issues and will silently use that linker instead. Fixing these cases is a work-in-progress (largely blocked by Apple being slow to release source code).

Add these to your `.bazelrc` or pass them to your command line.

```
build --linkopt=-fuse-ld=<path to zld>
build --linkopt=-Wl,-zld_original_ld_path,__BAZEL_XCODE_DEVELOPER_DIR__/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld
```

Note that you will need to disable sandbox for this to work now. Additionally, to make the linking actions cacheable, the path to zld must be deterministic (e.g. `/tmp/zld-09ea158`, where `09ea158` is zld version). Another option to use `zld` in Bazel is via rules_apple_linker.

You can edit `~/.cargo/config` to add a linker flag, e.g.:

```
[target.x86_64-apple-darwin]
# For Apple silicon:
# [target.aarch64-apple-darwin]
rustflags = ["-C", "link-arg=-fuse-ld=<path to zld>"]
```

By default, `zld` stores some metadata in `/tmp/zld-...` to speed things up. This is the first step towards making `zld` a truly incremental linker. Currently, the only things that are stored are object file and library names.

Apple's approach is a very reasonable one, using C++ with STL data structures.
However, there are a number of ways in which `zld` has sped things up, for instance:

- Using Swiss Tables instead of STL for hash maps and sets
- Parallelizing in various places (the parsing of libraries, writing the output file, sorting, etc.)
- Optimizations around the hashing of strings (caching the hashes, using a better hash function, etc.)

Whether you use this project or not, there are a number of things that can speed linking up (again, this is only for debug builds):

- The linker flag `-Wl,-random_uuid`, which disables content hashing based UUID creation and instead uses a random UUID (using `-no_uuid` can decrease performance when lldb is attaching to the binary)
- Turning off dead stripping (referred to as "Dead Code Stripping" in Xcode build settings)
- For executables and xctest bundles, disable dyld exports trie creation with `-Wl,-exported_symbols_list,/dev/null` (cannot be used by test host apps that provide symbols to xctests)
- `-Wl,-no_deduplicate`, which disables the deduplication pass. In Xcode, this flag is added by default for Debug builds.
- `-Wl,-no_compact_unwind`, which disables the creation of compact unwind info. Note that Objective-C and C++ functions that have been linked without their compact unwind info can crash if an exception unwinds into them, rather than continuing the unwind. Swift functions, however, are not affected (even if an Objective-C/C++ exception unwinds into them). So, it's best for pure Swift projects. It can also break crash reporting.
- If you're not using `zld`, using `-Wl,-force_load` for libraries can sometimes speed things up
- Linking with dynamic libraries instead of static ones

The biggest way to contribute to `zld` is to file issues! If you encounter any problems, feel free to file an issue, and you can expect a prompt response.

Special thanks to @dmaclach's ld64, which helped with building `ld`.
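For Xcode users, the debug-only flags suggested above could be collected into a single xcconfig file assigned to the Debug configuration. This is only an illustrative sketch, not part of this repo: the `/usr/local/bin/zld` path is an assumption (substitute the output of `which zld`), and each extra flag carries the trade-offs noted above, so check them against your project before adopting any of them.

```
// Debug-only sketch; not from this repo. The zld path is an assumption:
// replace /usr/local/bin/zld with the output of `which zld`.
OTHER_LDFLAGS = $(inherited) -fuse-ld=/usr/local/bin/zld -Wl,-zld_original_ld_path,$(DT_TOOLCHAIN_DIR)/usr/bin/ld -Wl,-random_uuid -Wl,-no_deduplicate

// Equivalent of turning off "Dead Code Stripping" in the build settings UI
DEAD_CODE_STRIPPING = NO
```

Assigning a file like this only to the Debug configuration keeps Release builds on the stock Apple linker with its default settings.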
true
true
true
A faster version of Apple's linker. Contribute to michaeleisel/zld development by creating an account on GitHub.
2024-10-12 00:00:00
2020-01-29 00:00:00
https://opengraph.githubassets.com/0540d181432962db92d14ead67c6e4118c29cc472261c1acc54d9b5aa38497fc/michaeleisel/zld
object
github.com
GitHub
null
null
14,972,462
https://www.outsideonline.com/2231051/living-cloud?curator=MediaREDEF
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
25,721,588
https://davidgerard.co.uk/blockchain/2021/01/10/news-tether-to-the-moon-bitcoin-over-40000-usdt-more-ripple-fallout/
News: Tether TO THE MOON, Bitcoin over 40,000 USDT, more Ripple fallout
David Gerard
- Hello, new readers! If you like my work, please do subscribe to the Patreon — your $5 or $20 per month (which Patreon has rendered for UK readers as £4.50 or £17, for some reason) really helps with keeping this blog happening. It’s like buying me a “thank you” pint each month. [*Patreon*]
- You can also click here and enter your email address to get every new post sent to you as it’s posted.
- Per this email I got from Amazon (below), I can confidently state that *Libra Shrugged* is the book I personally spent the most time focusing on in 2020, and that everyone everywhere should buy it, and tell everyone they know to buy it. Thanks for the reminder, Amazon!
- (If you have either of the books, please write an Amazon review, or at least a star rating. It all helps!)

### Number go up

TO THE MOON! Tether has gone up by nearly *two billion* USDT over just the past few days, and is at 24.392 billion USDT as I write this! [*Tether Transparency, archive*] (Remember 2017, when 600 million tethers was the total supply, and that was considered a shocking amount?)

Popular Tether derivative Bitcoin has gone up too, breaking 41,000 USDT yesterday — though it crashed down to 36,675 USDT earlier today. I was hoping for 50,000 USDT by the end of this weekend — speaking as the famous Bitcoin perma-bull that I am.

Frauds tend to keep growing until they can’t — and Tether is at the spiraling out of control stage. The deadline for iFinex to provide the New York Attorney General with information in discovery on Tether is Friday 15 January. This doesn’t mean that a bell will ring at 00:01 15 January, and all the tethers turn into pumpkins — but that’s when the next act of this interminable saga starts.

(iFinex’s strategy appears to be to submit a massive unsearchable document dump. [*Twitter*] This might work, if the NYAG had done no work at all since 2018 on the issue, and hadn’t already formed a fairly well worked out opinion of iFinex’s behaviour.)
Once you learn about tether no dollar amount of bitcoin will ever impress you again. In screaming about the evil of The Federal Bank and its Fake Money for so long, the bitcoin people organically created the precise scam they think the government pulls. — Helene J. (@_scaryh) January 10, 2021

### Hammer come down

More likely to cause issues is the forthcoming FinCEN regulation about documentation required to move crypto into and out of money services businesses — if you want to transfer more than $3,000 of Convertible Virtual Currencies on or off an exchange, they must collect your name and address. [*Treasury*; *Federal Register*, PDF] This is the same rule that presently exists for cash — it’s fundamentally a further clarification following FinCEN’s May 2019 clarification of how existing law applies to cryptos.

FinCEN has put this rule through on a “substantial national security concerns” basis, stating that the comments period is only token — this rule is absolutely going through. The US government is deadly serious about the sanctions regime — see chapters 10 and 13 of *Libra Shrugged* — and the Department of Justice has already prosecuted over cryptos being used by North Korea for money laundering. Per the rule, “U.S. authorities have found that malign actors are increasingly using CVC to facilitate international terrorist financing, weapons proliferation, sanctions evasion, and transnational money laundering.”

The crypto industry is *absolutely crapping itself* about this new rule, as if it’s the end of the world. Coinbase and Andreesen Horowitz plan to fight it in court. [*CoinDesk*; *The Block*] For some of them, it will be the end — because even the cleaner players know they’re no more than one or two steps removed from dirty money, and this will affect the flows of all sorts of questionable cash. Given the absolute panic going through crypto, I think this rule is a likely explanation for the Tether printer going nuts right now.
Further new year gifts from FinCEN — foreign crypto holdings now require reporting in the same manner as foreign bank accounts. [*FinCEN*, PDF] If you don’t report your foreign holdings, then the IRS can fine you $12,921 for each non-willful violation; for wilful violations it’s $129,210, or 50% of the account balance, whichever is greater. [*IRS*]

The BSA and FinCEN are, have been and will remain the 10,000 pound elephant for crypto for the next 2 to 3 years. — Palley (@stephendpalley) November 20, 2019

### Dear sir, thank you for your relentless cynicism on crypto. How do I get rich in bits-coin?

God help me, people keep messaging me — of all people — and asking for advice on how to get into trading crypto.

First: don’t. You will lose your money. It’s a completely fake bubble right now — and you’ll be left holding a bag of unsaleable coins, and waiting for the next bubble.

You’re going to do it anyway, of course — you’ve made your decision, you just want to tell yourself that you did your due diligence. So.

You can totally make money in crypto! I would never say you can’t. But you’re much more likely to lose your shirt. The crypto market is highly manipulated, and all but unregulated. It’s a pool full of sharks, and you look tasty.

Treat it as gambling, not investment. You know how you can gamble and lose *all* of your money in an instant? This is like that.

What about the market cap? Well, if you take the price of a beanie baby and multiply it by the number of those beanies, and add this up for all the beanies, you have the “market cap” of beanie babies, here in the year 2000! This is a very real and informative number that tells you anything you can use to make your fortune in beanies. *Or maybe it isn’t.*

If you read any press article going “gosh, isn’t Bitcoin’s price high! Here’s some speculation as to why, from people with an interest in selling you on bitcoins” and it doesn’t mention tethers, you can disregard it.
If you have coins you’ve been stuck with since 2018 — sell your cost basis, the money you originally paid for them. Then every gain from there is for free.

Finally: don’t do it. You will lose your money. You are lining yourself up for a role as one of the suckers. Read Part 3 again.

Coping nocoiners desperate to point out how the price tanks anytime someone actually tries to sell. Sure but I’m smart enough to get out at the top before everyone else — warrior cop (@wyatt_privilege) January 10, 2021

### But don’t just take my word for it

As always, I recommend Patrick McKenzie’s 2019 piece as the definitive text on Tether: “Tether is the internal accounting system for the largest fraud since Madoff.” [*Kalzumeus*]

Kristian Johansson — a Bitcoin holder and advocate — wonders about the bizarre lack of mainstream media coverage of Tether’s role in the price of Bitcoin. “If it is true that more large institutions are buying Bitcoin now, then surely they would have done a detailed risk analysis and Tether would have popped up as a very real risk and something that would be discussed more openly?” [*Seeking Alpha*]

(People keep suggesting Grayscale’s Bitcoin fund as evidence of massive quantities of institutional actual dollars going into Bitcoin. Grayscale’s GBTC fund states assets under management in dollars — but it accepts direct BTC deposits. An unknown proportion is just Grayscale acting as a custodian for their fellow whales — reputedly almost *all* of it, because they don’t break out this number. It’s not real-money institutions giving Grayscale actual dollars to buy bitcoins, not at all.)

Why does Tether issue on weekends and holidays, with Bitcoin pumps on weekends and holidays? It *coincidentally* matches with when CME Bitcoin futures settle — “bitcoin faced selling pressure in the days ahead of the expiry, as well as on the day itself.
The event was also followed by selling over the weekend, leading to a gap down on the CME chart, which doesn’t include weekend data.” [*CryptoNews, 2020*] Is Bitcoin a Ponzi scheme? I’ve long held that Bitcoin technically isn’t a Ponzi — it just works like one. Tr0lly details how Bitcoin is like a Ponzi, and why it’s important that it isn’t one legally in the US — it’s a much more complicated scheme. [*Tr0lly*] Jorge Stolfi, however, describes why he thinks Bitcoin is a Ponzi — with answers to common objections. [*Jorge Stolfi*] If you call Bitcoin a Ponzi, bitcoiners will dive in claiming that normal investments, the government, etc. are also Ponzi schemes. Claiming everything else is a Ponzi scheme really, if you think about it, is a standard excuse from Ponzi schemers — *e.g.,* Bernie Madoff saying in 2011 that “The whole government is a Ponzi scheme.” [*New York*] Why you can’t cash out pt 2 still applies — UK banks still really, really don’t like crypto. Good luck turning your paper gains into actual money. [*The Times*] Telegraph: Bitcoin’s wild ride to $34,000 fuels fresh warnings of an impending crackdown — with a quote from me. [*Daily Telegraph*] Cas Piancey: a new Tether primer. [*Medium*] Kiffmeister’s Daily Digest by IMF researcher John Kiff, on the Tether Question. [*blog post*] This is absolutely the worst way to present a good blog post, even worse than posting an essay as an extended Twitter thread. Everyone, go suffer through squinting at the text in a graphic. [*Twitter**; **Twitter*] The following pair of tweets are real: [*Twitter, **archive*] ### Good news for Bitcoin The media, and bitcoiners, have gone wild with a JPMorgan analyst’s note that projects a Bitcoin price of $146,000! This means JPMorgan is going all in with crypto!! Here’s the relevant paragraph from that analyst. It’s not the fount of optimism that enthusiastic (lying) bitcoiners have been painting it as. 
In fact, the number is a wild hypothetical as part of a quite negative outlook: [*LinkedIn*] Mechanically, the market cap of bitcoin, at $575bn currently, would have to rise by x4.6 from here, implying a theoretical bitcoin price of $146k, to match the total $2.7tr private sector investment in gold via ETFs or bars and coins. But this long-term upside, based on an equalization of the market cap of bitcoin to that of gold for investment purposes is conditional on the volatility of bitcoin converging to that of gold over the long term. The reason is that, for most institutional investors, the volatility of each class matters in terms of portfolio risk management and the higher the volatility of an asset class, the higher the risk capital consumed by this asset class. It is thus unrealistic to expect that the allocations to bitcoin by institutional investors will match those of gold without a convergence in volatilities. In fact, an argument can be made that, in terms of risk capital, bitcoin has largely equalized with gold already given that bitcoin and its biggest fund on average currently consume x4.3 more risk capital than gold and its biggest fund. What do crypto advocates worry about from the forthcoming Biden administration? The threat of someone who’s finally interested in doing something about the crime against humanity that’s called Proof-of-Work cryptocurrency mining. [*Twitter thread*] Frances Coppola looks closely at MicroStrategy’s numbers — and why Michael Saylor is turning the company into a Bitcoin hedge fund. “Do you have a poorly-performing company that you don’t know what to do with? Bitcoin fixes this!” [*blog post*] Bitcoin in the enterprise: Funke Media Group is given massive incentives to Bitcoin adoption, and is looking at 6,000 licenses — I’m sorry, Funke Media Group has 6,000 PCs locked by ransomware. 
[*DW*]

> How it started How it’s Going pic.twitter.com/chwOBg0PzE
>
> — Peter Huminski (@Thoriumwealth) December 31, 2020

### Everybody hates Ripple

Ever since the SEC’s hammer came down on Ripple and XRP in December, everyone’s been back-pedaling at the speed of light away from the coin and the company — in the hope of not getting any on them.

Bitstamp stops XRP trading for US customers as of 8 January. Other countries are not affected. [*Bitstamp*]

Coinbase is delisting XRP as of 19 January. Why so late? Well, there’s a lot of big holders around Coinbase. [*Coinbase blog*] You can still *hold* coins there, for some reason. Coinbase apparently spoke to the SEC “multiple times” while working out what to do about XRP. [*Twitter*]

Ripple got venture capital funding, presumably in the hope of looking more like a “real” company. One of the investors, Tetragon, wants their money back — now. Their deal apparently involves Tetragon being able to cash out on Ripple going public, and they’re saying the SEC declaring XRP a security that was sold to retail counts as taking company stock public. Stephen Palley has a Twitter thread on what we know about the case. [*Bloomberg*; *Twitter*]

Ripple’s XRP validator network was always functionally centralised — and couldn’t even reach consensus if it split. The model was always handwaving to fake decentralisation, where “decentralisation” is a word meaning “I want to dodge legal responsibility.” [*GitHub*]

The pretrial conference in SEC v. Ripple will be in February 2021. [*Conference order*, PDF]

> Uh, seems like he’s saying the quiet part out loud pic.twitter.com/9lRnxWiM9P
>
> — Cas “The Wolf of No Street” Piancey (@CasPiancey) January 9, 2021

### Things happen

Bittrex brings *good* news for privacy coins! Monero, Dash, Zcash, and Grin are being delisted from Bittrex Global. This will have been due to pressure from banks who are worried about following money-laundering rules. [*Bittrex*]

White House Market, the largest currently-existing darknet market, no longer accepts Bitcoin — just Monero. So much for Bitcoin’s use case. [*Twitter*]

I’m shocked, shocked to discover that the Bitcoin exchange with racially and sexually discriminatory employment practices underpays its black and female employees! The New York Times was sent a trove of data from 2018 and 2019 by a disgruntled Coinbase insider. [*New York Times*]

India considers imposing an 18% sales tax on cryptocurrency. Note that this does not require India to make crypto legal — evading sales tax on illegal goods is also a crime. [*Times of India*]

Here’s a nice rundown of SEC enforcement actions against ICOs. The only thing I’d question is that the SEC did contact all these companies quite early, and was trying to avoid having to take action — the eventual actions come after a year or two. Perhaps the SEC could go faster, or state how long they were trying to resolve things in the press releases. [*The Dig*]

Bitcoin is secured by math, which is why HK$3 million of bitcoins mathematically belong to a gang of robbers now. Their cryptographic proof must have been more robustly peer-reviewed. [*SCMP*]

> hi friends- for the new year I’m taking a break from life so I can focus on social media. if you need me you can find me here, constantly
>
> — Ely Kreimendahl (@ElyKreimendahl) December 31, 2020

### Hot takes

BeInCrypto: Why Blockchain Won’t ‘Revolutionize’ Healthcare and Education Anytime Soon — with a quote from me. [*BeInCrypto*]

Frances Coppola: Crypto’s Choice: Join the Financial System or Fight It — if you want real money, you have to follow the rules of real money. [*CoinDesk*]

Tim Swanson: Parasitic stablecoins — how, instead of hyperbitcoinisation, the crypto world has gone as fast as possible into pseudo-dollarisation. With an appendix by Tim and Martin Walker.
This goes into extreme detail about the legal issues around stablecoins, and how being utterly reliant on the conventional finance system is going to work out for crypto. [*Great Wall of Numbers*; *Great Wall of Numbers*]

> Working on a Stock-to-Flow model for scarce nuclear waste as a Store of Value. Just need Tether to pump me now and an exchange to list it.
>
> — Tether Ponzi – WillyBot 2.0. (@RealWillyBot) January 8, 2021

### The *Libra Shrugged* gallery

There’s nothing I like more than seeing people’s paperbacks. Please tweet yours, and mention me at @davidgerard!

> Happy generic non-denominal winter celebration to everyone pic.twitter.com/LQhPf0XmxU
>
> — Håkan Geijer (@hakan_geijer) December 24, 2020

> A fun, interesting and horrifying look into Facebook's attempt at a cryptocurrency, I highly recommend this book! If you're interested in a more critical perspective of cryptocurrencies than what you get from crypto bros, check out @davidgerard and @ahcastor pic.twitter.com/7RGoD1Ea6t
>
> — Er_ick_ (@EckTxt) December 28, 2020

> I couldn’t agree more w/ @Timccopeland of @decryptmedia!🔥💯 If you haven’t followed @davidgerard & @ahcastor, u’re missing out mega big time!🏆 Just as #bitcoin is hitting an ATH, I got these beauties in the mail.🤓👉📚 Thank you David! Really looking forward to reading these!❤️ https://t.co/a6V0eA0fim pic.twitter.com/vdVxYrryqB
>
> — Jenny Q Ta 🐶🐕 (@JQT_CoinLinked) December 27, 2020

*Your subscriptions keep this site going. Sign up today!*

More likely to cause issues is the forthcoming FinCEN regulation about documentation required to move crypto into and out of money services businesses — if you want to transfer more than $3,000 of Convertible Virtual Currencies on or off an exchange, they must collect your name and address. My expectation is that someone on the crypto-side will start crowing about how you should now start transferring $2,999 worth of coins. And others will probably think this is a perfectly sensible idea.

But, remember, this is a prime example of structuring — arranging your economic activity to avoid automatic reporting limits (or other safeguards) — which is illegal in its own right. When the equivalent for proper money came in some decades ago, a requirement to report transactions of $10,000 or more, many criminals were highly offended that the banks regarded their multiple $9,999 transactions as suspect and also worth reporting.

As I noted in Why you can’t cash out pt 2:

> This particularly affects those who discovered an old Bitcoin stash and want to turn it into cash. I know of one such case where the user had sold a car for bitcoins years ago, and wanted to cash in on the current bubble. The best the exchange could come up with was sending it in daily in amounts of several thousand dollars — and dribbling a million-dollar balance out a few thousand dollars at a time is not only tedious, it looks like structuring, when someone’s trying to dodge reporting requirements.
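For a sense of just how tedious that dribbling is, here is a back-of-the-envelope sketch. The $1 million balance and the few-thousand-dollars-a-day figure are illustrative numbers drawn from the anecdote, not anyone's actual account terms:

```python
import math

def days_to_withdraw(balance: float, daily_limit: float) -> int:
    """How many daily transfers it takes to drain a balance."""
    return math.ceil(balance / daily_limit)

# A million-dollar stash at $3,000 a day:
days = days_to_withdraw(1_000_000, 3_000)
assert days == 334  # the better part of a year of daily transfers
```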
2021-01-10 00:00:00
Attack of the 50 Foot Blockchain
https://arstechnica.com/information-technology/2020/01/safaris-anti-tracking-protections-can-leak-browsing-and-search-histories/?comments=1
Google researchers find serious privacy risks in Safari’s anti-tracking protections
Dan Goodin
When Apple introduced powerful anti-tracking protections to Safari in 2017, advertisers banded together to say they were “deeply concerned” it would sabotage ad-supported content. Now, there’s new information showing that Safari users had good reason for unease as well.

Known as Intelligent Tracking Prevention, the mechanism uses machine learning to classify which websites are allowed to use browser cookies or scripts hosted on third-party domains to track users. Classifications are based on the specific browsing patterns of each end user. Sites that end users intentionally visit are permitted to do cross-site tracking. Sites that users don’t actively visit (but are accessed through tracking scripts) are restricted, either by automatically removing the cookies they set or truncating referrer headers to include only the domain, rather than the entire URL.

A paper published on Wednesday by researchers from Google said this protection came with unintended consequences that posed a privacy risk to end users. Because the list of restricted sites is based on users’ individual browsing patterns, Intelligent Tracking Prevention—commonly abbreviated as ITP—introduces settings into Safari that can be modified and detected by any page on the Internet. The paper said websites have been able to use this capability for a host of attacks, including:

- obtaining a list of recently visited sites
- creating a persistent fingerprint that follows a user around the Web
- leaking search results or other sensitive information displayed by Safari
- forcing any domain onto the list of sites not permitted to use third-party scripts or cookies
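The referrer truncation described above is easy to picture. A rough Python sketch of the before-and-after (an illustration of the behavior, not Apple's actual implementation):

```python
from urllib.parse import urlsplit

def truncate_referrer(url: str) -> str:
    """Reduce a full referrer URL to scheme + host, the way ITP does
    for requests to domains classified as trackers (sketch only)."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}/"

full = "https://example.com/users/alice/medical-search?q=condition"
assert truncate_referrer(full) == "https://example.com/"
# The path and query string, the sensitive part, never reach the tracker.
```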
2020-01-23 00:00:00
Ars Technica
https://arstechnica.co.uk/tech-policy/2017/04/man-takes-drone-out-for-a-sunset-flight-drone-gets-shot-down/
Man takes drone out for a sunset flight, drone gets shot down
Cyrus Farivar
It was around sunset on Easter Sunday, April 16, when Brad Jones took his DJI Inspire 2 out for a flight in front of his home. Jones hoped, as he does on most nights, to capture some of the forested and hilly scenery in the environs of his hometown, Oliver Springs, Tennessee—about 30 miles west of Knoxville.

“I flew down over my aunt’s house, and I heard a gunshot within the first three to four minutes of flight,” Jones told Ars. “So I sped up and flew back towards my house.” After a few more minutes, he flew back westward. He had just switched the drone’s camera mode from video to taking still photos in RAW format. “I took two pictures, then I heard the gunshot, and all of a sudden my drone started spiraling down—I’m sitting there trying to keep it aloft and there was no lift.”

A nearby neighbor, who was also in the front of his own home, turned to Jones and exclaimed: “That hit it! You just got shot! It’s going to crash!”

Indeed, Jones watched as his beloved drone came plummeting straight down onto the property of the Coalfield Seventh Day Adventist Church—right next to a neighbor’s home, where young children were playing in the backyard.

“It didn’t hit the ground as hard as it could have,” Jones said. “When it hit, it broke the left landing gear arm, snapped the molding off the Inspire. But it was still running. Didn’t damage batteries, rotors were intact. Everything was fine, except the left rear motor with a bullet hole in it.”

Jones’ case became the fourth reported drone shooting incident that Ars has been made aware of in nearly two years.

## By any other name

Just last month, a federal judge dismissed a lawsuit filed against William Merideth, the Kentucky man who shot down a drone that Merideth believed was flying over his own property in 2015.
2017-04-25 00:00:00
Ars Technica
http://www.schneier.com/blog/archives/2009/11/is_antivirus_de.html
Schneier on Security
## Is Antivirus Dead?

*This essay previously appeared in Information Security Magazine, as the second half of a point-counterpoint with Marcus Ranum. You can read his half here as well.*

Security is never black and white. If someone asks, “for best security, should I do A or B?” the answer almost invariably is both. But security is always a trade-off. Often it’s impossible to do both A and B—there’s no time to do both, it’s too expensive to do both, or whatever—and you have to choose. In that case, you look at A and B and you make your best choice. But it’s almost always more secure to do both.

Yes, antivirus programs have been getting less effective as new viruses are more frequent and existing viruses mutate faster. Yes, antivirus companies are forever playing catch-up, trying to create signatures for new viruses. Yes, signature-based antivirus software won’t protect you when a virus is new, before the signature is added to the detection program. Antivirus is by no means a panacea.

On the other hand, an antivirus program with up-to-date signatures will protect you from a lot of threats. It’ll protect you against viruses, against spyware, against Trojans—against all sorts of malware. It’ll run in the background, automatically, and you won’t notice any performance degradation at all. And—here’s the best part—it can be free. AVG won’t cost you a penny. To me, this is an easy trade-off, certainly for the average computer user who clicks on attachments he probably shouldn’t click on, downloads things he probably shouldn’t download, and doesn’t understand the finer workings of Windows Personal Firewall.

Certainly security would be improved if people used whitelisting programs such as Bit9 Parity and Savant Protection—and I personally recommend Malwarebytes’ Anti-Malware—but a lot of users are going to have trouble with this.
The average user will probably just swat away the “you’re trying to run a program not on your whitelist” warning message or—even worse—wonder why his computer is broken when he tries to run a new piece of software. The average corporate IT department doesn’t have a good idea of what software is running on all the computers within the corporation, and doesn’t want the administrative overhead of managing all the change requests. And whitelists aren’t a panacea, either: they don’t defend against malware that attaches itself to data files (think Word macro viruses), for example. One of the newest trends in IT is consumerization, and if you don’t already know about it, you soon will. It’s the idea that new technologies, the cool stuff people want, will become available for the consumer market before they become available for the business market. What it means to business is that people—employees, customers, partners—will access business networks from wherever they happen to be, with whatever hardware and software they have. Maybe it’ll be the computer you gave them when you hired them. Maybe it’ll be their home computer, the one their kids use. Maybe it’ll be their cell phone or PDA, or a computer in a hotel’s business center. Your business will have no way to know what they’re using, and—more importantly—you’ll have no control. In this kind of environment, computers are going to connect to each other without a whole lot of trust between them. Untrusted computers are going to connect to untrusted networks. Trusted computers are going to connect to untrusted networks. The whole idea of “safe computing” is going to take on a whole new meaning—every man for himself. A corporate network is going to need a simple, dumb, signature-based antivirus product at the gateway of its network. And a user is going to need a similar program to protect his computer. Bottom line: antivirus software is neither necessary nor sufficient for security, but it’s still a good idea. 
It’s not a panacea that magically makes you safe, nor is it obsolete in the face of current threats. As countermeasures go, it’s cheap, it’s easy, and it’s effective. I haven’t dumped my antivirus program, and I have no intention of doing so anytime soon.

josephdietrich • November 10, 2009 7:02 AM

A minor quibble: As someone who is a long-time user of AVG in both its paid and unpaid versions, the claim that you will not notice a performance degradation is, well, untrue, at least in my (anecdotal, I know) experience. I suppose it depends on your hardware, but on all of the machines that I have ever had, the anti-virus programs that I have used have caused definite performance hits during startup and/or when running a disk scan.
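The trade-off between signature-based blocking and whitelisting that the essay weighs can be sketched in a few lines of Python. The file contents and hash sets here are made up for illustration, not a real antivirus engine:

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Hash a file's contents for lookup in either list."""
    return hashlib.sha256(data).hexdigest()

KNOWN_MALWARE = {file_hash(b"evil payload v1")}   # signature database
APPROVED = {file_hash(b"wordproc.exe contents")}  # whitelist

def blacklist_allows(data: bytes) -> bool:
    # Signature AV: block only what matches a known signature.
    return file_hash(data) not in KNOWN_MALWARE

def whitelist_allows(data: bytes) -> bool:
    # Whitelisting: run only what's been pre-approved.
    return file_hash(data) in APPROVED

new_virus = b"evil payload v2"          # no signature exists yet
assert blacklist_allows(new_virus)      # signature AV misses it...
assert not whitelist_allows(new_virus)  # ...the whitelist does not
```

The toy model shows both of the essay's points: the blacklist waves through anything it has never seen, while the whitelist blocks the new virus but also blocks any legitimate new program, which is exactly the usability problem described above.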
2009-11-10 00:00:00
schneier.com
https://about.scarf.sh/post/switching-container-registries-zero-downtime
Scarf | Switching Container Registries With Zero Downtime
Scarf Community Team
You can change the hosting provider of your packages, containers, and files without impacting the end user with Scarf. Scarf Gateway (available under the Apache 2 license at https://github.com/scarf-sh/gateway) and our managed Scarf service are designed to provide you with an easy way to change out the back-end container registry, package location, or file hosting platform for your open source software.

Scarf Gateway does this by creating a custom domain redirect to the channels distributing your software. If you find a provider not meeting your needs, simply update the endpoint, and all your users will continue using the same commands and destinations they used previously with no noticeable change.

Scarf Gateway

How Scarf Gateway works is similar to a link shortener like bit.ly, which acts as a domain gateway to redirect traffic. Where it is different from a normal link shortener is that it is designed to be compatible with the different APIs used by various services like:

- Registries like Docker Hub, Google Container Registry, RedHat Quay, Amazon Elastic Container Registry, and Azure Container Registry
- Package managers like Nix, homebrew, RPM, Apt, etc.
- Language-specific package managers like pip or npm
- Files coming via source control repos (GitHub or GitLab)
- Or any file that is a direct download on the internet

One Line Change

If you are using the open source version of the Scarf Gateway, changing container registries is literally a one-line change. Update the registry variable in the configuration, and all your users' pull commands will redirect to the new location. No fuss.
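To sketch why this is a one-variable change, here is a toy gateway handler in Python. The domain names and the configuration variable are hypothetical, not the real Gateway implementation:

```python
# Minimal sketch of a domain-gateway redirect: the public pull URL stays
# fixed while the backend registry is a single configuration variable.
BACKEND_REGISTRY = "registry.example-new-host.io"  # the "one line" to change

def redirect_for(pull_path: str) -> tuple[int, str]:
    """Map an incoming pull path on the custom domain to a redirect
    at whatever registry currently backs it."""
    return 307, f"https://{BACKEND_REGISTRY}{pull_path}"

status, location = redirect_for("/v2/myorg/myimage/manifests/latest")
assert status == 307
assert location.startswith("https://registry.example-new-host.io/v2/")
```

Users keep running the same pull command against the gateway's domain; only the redirect target moves when the backend changes.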
If you are using Scarf.sh’s managed service, the UI is just as simple (note: using our hosted managed service is free if you don’t want to run the gateway on your own). See how Scarf enables you to see more about your downloads, with an example setup using the UI here (~3 minutes).
DevRel is a vital function for any organization that wants to engage with the developer community and grow its user base. However, there is no one-size-fits-all solution for where to place DevRel within the organizational structure. In this blog post, we explore three common strategies for DevRel placement: marketing, product, and hybrid. We discuss the advantages and challenges of each strategy, and provide some tips on how to decide which one is best for your organization and goals. In the open source industry, identifying and engaging users is a major challenge. Many users download software from third-party platforms that do not share user data with the software company. Gating content behind a login or an email form can help, but it can also alienate potential users who value their privacy and convenience. In this blog post, we explore the pros and cons of gating content in the open source industry, and we offer an alternative solution that can help you identify and connect with your users without compromising your content. Open source software depends on the power of its community. But how do you know if your community is healthy and thriving? In this blog, you will learn how to use metrics to track and evaluate your community’s activity, engagement, growth, diversity, quality, and impact. You will hear from founders, DevRel experts, and investors who share their best practices and tips on how to measure and improve your community’s performance and value. Learn how to overcome the challenges of open source software marketing and turn anonymous data into qualified leads. In this blog post, we’ll show you how to use download data, web traffic, and documentation views to identify potential customers and grow your sales pipeline. Discover how to track downloads, website traffic and documentation views with Scarf Gateway and the Scarf Tracking Pixel. 
This blog post outlines ten common mistakes made by founders of open source startups, from failing to ask the right questions to neglecting the standardization of key metrics. By offering guidance on how to avoid these pitfalls, it provides a roadmap to successfully commercializing open source projects. Many people believe that making money from open source projects is an arduous or even impossible task. However, with the right strategies it is possible to build a sustainable business while keeping the spirit of open source intact. By evaluating the market fit and commercial viability of an open source project before considering funding and monetization, one can realistically begin to explore the financial potential of an open source project. Here's how to do it. This blog emphasizes the importance of a comprehensive approach to lead generation in the open source software space. Amid the challenges of anonymous usage and privacy regulations, strategies focusing on download activity, community engagement, and web traffic can maximize lead identification. Employing lead scoring and maintaining a list of active software users can further enhance sales outcomes in this unique market. Here at Scarf, we've developed a solution to help open source projects and businesses gain more insight into their users and their download traffic - Scarf Gateway. Here's how it works. We are thrilled to announce our latest partnership with Clearbit (https://clearbit.com/). This collaboration will offer Scarf users and customers an enriched array of data about their user base, significantly enhancing the quality of information you already value from Scarf. The popularity of open source software is not in doubt, but little concrete public data exists beyond human-generated surveys on adoption usage. In this blog post, we will explore the state of open source usage in Q1 2023 and the data illustrating how open source is becoming an increasingly important part of enterprise operations. 
The success of DevRel (Developer Relations) and community efforts in open source can be challenging to measure, as there is often a disconnect between the goals and expectations of the community and the business. This blog post discusses the challenges of measuring the success of DevRel and community efforts in open source. Successful open source projects don't always translate into successful open source businesses. However, by focusing on building a kick-ass product, raising awareness, making the product easier to use, and fostering a strong open source community, you can set the stage for converting users into paying customers. You can use the open source Scarf Gateway to switch hosting providers, container registries, or repositories without impacting end users in the future. What is driving all this tech layoffs? , What is their impact on the open source software industry? We will walk through all the potential reasons from an economic downturn, herd mentality, excessive borrowing and spending due to low interest rates, and growth at all costs as the main reasons behind the layoffs. Companies can continue to grow in this tight economic market if they are focused on optimizing efficiency and sustaining the right growth. At the All Things Open conference, Emily Omier, a seasoned positioning consultant, sat down with Avi Press (Founder and CEO, Scarf) and Matt Yonkovit (The HOSS, Scarf) to discuss how to message, position, and validate your open source product on The Hacking Open Source Business Podcast. You can watch the full episode below or continue reading for a recap. On the Hacking Open Source Business podcast, Joseph Jacks aka JJ (Founder, OSS Capital) joins Avi Press (Founder and CEO, Scarf) and Matt Yonkovit (The HOSS, Scarf) to share what you need to know before starting a commercial open source software (COSS) company and how you can set yourself and your project apart in a way that attracts investor funding. 
As an investor who exclusively focuses on open source startups, JJ provides a VC perspective on what he looks for when evaluating investment opportunities. On The Hacking Open Source Business podcast, CEO Chris Molozian and Head of Developer Relations Gabriel Pene at Heroic Labs elaborate on their usage and shift to open source and how it accelerated their adoption. In this recap of the first episode of the Hacking Open Source Business Podcast, co-hosts Matt Yonkovit and Avi Press, Scarf Founder and CEO, dig into a recent controversy that highlights the challenges open source projects face trying to create sustainable revenue streams to support a business or a non-profit that funds the project’s growth. Scarf Sessions is a new stream where we have conversations with people shaping the landscape in open source and open source sustainability. This post will give a recap of the conversation Scarf CEO, Avi Press and I had with our guest Stefano Maffulli. Community is important to the success of open source software. To understand and grow a community, project founders and maintainers need visibility into various technical, social, and even financial metrics. But what metrics should we be using? Should Python eggs be deprecated in favor of wheels? What does the data show? This post explores how the right data can make decisions like this easier for maintainers and Open Source organizations. In a new blog post series, we'll highlight great OSS projects that are using Scarf. Today, we are featuring IHP, a modern batteries-included Haskell web framework Our mission here at Scarf centers around enhancing the connections between open source software maintainers and end users. Learn how Scarf + Nomia can reduce the complexity and increase the efficiency of the end-user open source integration experience. Package registries are a central piece of infrastructure for software development. How aligned are they with the developers who make all of the packages being hosted? 
Exporting data tracked by Scarf is essential for analytics, reporting, and integration with other tools. Scarf adds open-source usage metrics to the data you already collect, giving you a fuller picture of how your project is used. This helps you monitor trends, measure impact, and make better data-driven decisions.

Scarf helps you unlock the full potential of your open source project by collecting valuable usage data in three key ways: Scarf Packages, in-app telemetry, and tracking pixels. In this post, we'll break down each of these powerful tools and show you how to use them to optimize your open source strategy.

You can change the hosting provider of your packages, containers, and files without impacting the end user with Scarf. Scarf Gateway (available under the Apache 2 license at https://github.com/scarf-sh/gateway ) and our managed Scarf service are designed to provide you with an easy way to change out the back-end container registry, package location, or file hosting platform for your open source software. Scarf Gateway does this by creating a custom domain redirect to the channels distributing your software. If you find a provider not meeting your needs, simply update the endpoint, and all your users will continue using the same commands and destinations they used previously with no noticeable change.

## Scarf Gateway

How Scarf Gateway works is similar to a link shortener like bit.ly, which acts as a domain gateway to redirect traffic.
Where it differs from a normal link shortener is that it is designed to be compatible with the different APIs used by various services, such as:

- Registries like Docker Hub, Google Container Registry, RedHat Quay, Amazon Elastic Container Registry, and Azure Container Registry
- Package managers like Nix, Homebrew, RPM, Apt, etc.
- Language-specific package managers like pip or npm
- Files coming via source control repos (GitHub or GitLab)
- Or any file that is a direct download on the internet

## One-Line Change

If you are using the open source version of the Scarf Gateway, changing container registries is literally a one-line change. Update the registry variable in the configuration, and all your users' pull commands will redirect to the new location. No fuss.

If you are using Scarf.sh's managed service, the UI is just as simple (note: using our hosted managed service is free if you don't want to run the gateway on your own).

See how Scarf enables you to see more about your downloads:

You can see an example setup using the UI here (~3 minutes):
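To make the "one-line change" idea concrete, here is a hypothetical sketch of the mapping a gateway performs. This is an illustration only, not Scarf Gateway's actual code or configuration; the `BACKEND_REGISTRY` variable and image names are made up.

```shell
# Hypothetical sketch of the gateway idea: every pull goes through one
# stable entry point, and a single backend variable decides where it lands.
BACKEND_REGISTRY="docker.io/acme"      # the "one line" you would change

resolve_pull() {
  # Rewrite a gateway image reference to its current backend location.
  echo "${BACKEND_REGISTRY}/$1"
}

resolve_pull "mytool:latest"           # served from Docker Hub today
BACKEND_REGISTRY="ghcr.io/acme"        # provider switch: one line changed
resolve_pull "mytool:latest"           # same user command, new backend
```

Because users always pull through the gateway's stable domain, swapping the backend variable moves everyone to the new registry with no change on their side.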
true
true
true
You can use the open source Scarf Gateway to switch hosting providers, container registries, or repositories without impacting end users in the future.
2024-10-12 00:00:00
2023-03-16 00:00:00
https://cdn.prod.website…ainer_switch.png
website
scarf.sh
about.scarf.sh
null
null
1,345,115
http://news.bbc.co.uk/2/hi/8679876.stm
Ker-bling! the new gold dispenser
null
An Abu Dhabi hotel has installed an ATM-style gold dispenser to give customers easy access to the precious metal. It comes as gold prices have reached record highs in recent days. Iain Smith reports.
true
true
true
An Abu Dhabi hotel has launched an ATM-style gold dispenser to give customers easy access to the precious metal.
2024-10-12 00:00:00
2010-05-13 00:00:00
null
null
null
BBC
null
null
24,079,573
https://www.nytimes.com/2020/08/05/video/beirut-explosion-footage.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,109,399
https://medium.com/@fz/reversing-the-wall-92f22a2ad538
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
20,882,186
https://www.robinwieruch.de/git-team-workflow
How to Git as a Team
Robin Wieruch
This tutorial is part 2 of 2 in this series. When I have been working with my clients over the last years, I have seen how crucial it can be to establish a **common sense git workflow for a team to become productive**. I experienced several constellations -- for instance when working in a team of thrown together developers, in an established team that just transitioned from another version control system, or as the new member of a team where no git practices were established and I wanted to get up to speed quickly -- where it made sense to align everyone on one git framework to follow a common sense and best practices. After I have been through this struggle a couple of times, I wanted to write down what I have learned about git for teams which may help you to align your team on one workflow. If this blog post turns out too long for you, go through it with your team in a "lunch and learn"-break; and condense the most important points in a git workflow cheatsheet for your team. If you come up with the cheatsheet yourself after all, your team will own it and can iterate on it with the learnings that make sense for your particular case. *Note: Everything that follows conveys only my experience working with a team of 5 - 25 people with git as version control system. Nothing you will read here is set in stone, but I have seen productive teams once this workflow (or any other workflow) got established in an organization. If you follow a different workflow in your company, I would be curious to hear about it.* ## Git Team Workflow: Branches Basically there are three kinds of branches when working as a team in git: - master branch - staging branch - feature branch(es) Whereas there can be more than one feature branch in your git workflow, there is only one master branch and one staging branch. The staging branch varies in its naming -- e.g. I have seen a staging branch being called *develop* and *development* branch as well. The master branch gets its name from git itself. 
The feature branches can be called whatever your team aligns on for the naming convention. I have seen something like:

- feat/user-authentication
- fix/landing-page-transition

Often people use namings like *feat/YMC-1634* for feature branches as well, to link them directly to a ticket in their scrum/kanban/... board. Note that feature branches are not only used for feature development, but also for bug fixes and other things. Feature branches are the place where all the implementation takes place, whereas staging and master branches are only used for releases of your application. In the following, I will use \<branch_name> for any of these branches.

## Git Team Workflow: Where am I?

There are several git GUI applications out there, which spare you using your command line. However, in the end I found it always the best to be familiar with the command line for git; to actually know which commands are used under the hood or to fix git problems that are nontransparent with the GUI tools. The most straightforward commands are:

```
git status
git log
```

Whereas the former command shows your changed files in staged and unstaged mode -- important: get familiar with these modes --, the latter command shows you the git history. Sometimes *git reflog* can save your ass if you screwed something up and you want to jump back in time. With the following git workflow, it's one of our goals to keep a well-arranged git history which can be seen with `git log`.

## Git Team Workflow: Branch Lifecycles

The master and staging branches are only created once and stay as long as the project exists. In contrast, feature branches get created for the period of a feature development. They get merged into the staging branch and finally the staging branch gets merged into the master branch for a new release of your application. The staging branch in between is used for your CI/CD to prepare the next release, but also to see the staging version of your application online (e.g. staging.my-domain.com).
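The `git status` and `git log` commands mentioned above can be tried safely in a throwaway repository. The file name and identity below are just examples:

```shell
# Inspecting where you are in a throwaway repository.
cd "$(mktemp -d)"
git init -q
git -c user.name=dev -c user.email=dev@example.com \
  commit -q --allow-empty -m "chore(init) set up repository"
echo "console.log('hi')" > index.js   # an untracked change
git status --short                    # lists "?? index.js"
git log --oneline                     # shows the single commit so far
```

Running `git status` before and after `git add` is the quickest way to see the staged/unstaged distinction the post talks about.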
There are two essential commands to 1) create a new branch or to 2) check out an available branch:

```
git checkout -b <branch_name> // (1)
git checkout <branch_name> // (2)
```

At the beginning of your project's lifecycle, someone has to set up master and staging branch with the necessary configuration (e.g. no force push on master/staging branches, PR templates, ...). Also CI/CD needs to be set up especially for these two branches -- but also feature branches later on -- to find lint, test or formatting issues early on.

If you want to check out a feature branch which is only available in the remote repository, because a team mate has created and pushed it, but you don't have a local copy of it, call:

```
git fetch
```

Essentially a "fetch" keeps all the available branches in sync with your local machine -- but not their commits, which needs another pull command. More about this later.

If you want to delete a branch, you can either delete it 1) locally or 2) remotely:

```
git branch -d <branch_name> // (1)
git push origin -d <branch_name> // (2)
```

Be careful with the latter one, because most likely you would want to have the branch merged into staging before deleting it. Anyway, after you have finished and merged a feature branch, you are free to dispose it locally and remotely.

## Git Team Workflow: Feature Branch

The most straightforward feature development with a feature branch looks like the following. First, you check out your new feature branch with `git checkout -b <branch_name>`. Next, you implement your code and use the following commands to make your changes available to everyone in the remote repository:

```
git add .
git commit -m "<commit_message>"
git push origin <branch_name>
```

Whereas `git add .` moves *all* changed/added/deleted files to staging for the next commit, you can use variations of git add to move only a subset of the changed files to staging. This is helpful if you follow an atomic commit strategy.
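The branch and staging commands above can be run end to end in a local sandbox. Branch, file, and identity names are illustrative:

```shell
# Create a feature branch, stage a single file, and commit it.
cd "$(mktemp -d)"
git init -q
git -c user.name=dev -c user.email=dev@example.com \
  commit -q --allow-empty -m "chore(init) initial commit"

git checkout -q -b feat/user-authentication   # (1) create and switch
echo "export const login = () => {}" > auth.js
git add auth.js                               # stage only this file
git -c user.name=dev -c user.email=dev@example.com \
  commit -q -m "feat(users) add authentication"

git branch other-idea                         # a second branch at the same commit
git branch -d other-idea                      # safe to delete: fully merged
```

Note that `git branch -d` only deletes branches whose commits are already reachable elsewhere; git refuses otherwise, which is exactly the safety net you want before disposing of a feature branch.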
For instance, I like to use `git add -u` to move all changed but not new files to staging. If you are using `git status` in between, you will see that there are staged and unstaged files. Also git gives you instructions to move files from 1) staged back to unstaged and from 2) changed back to unchanged:

```
git reset HEAD <file_path> // (1)
git checkout -- <file_path> // (2)
```

After moving your files into staging, every one of these files gets committed with your commit message. There are different naming conventions for a commit message; it doesn't matter much which one you follow, but it matters that you align on one as a team. What I like to do is the following naming convention:

```
<type>(<which_file_or_domain>) <detailed_comment>
```

which can have the following types:

- feat - actual feature implementation
- style - code style and code clean up
- test - actual test implementation
- fix - bug fix
- refactor - refactoring that doesn't affect the behavior of the code
- chore - no production code changes, but more like configuration and setup

Thus, a commit message could look like the following:

- feat(users) add authentication
- fix(logout) clean up cookie
- test(login) cookie set with access token
- style(*) fix indentation
- chore(.gitignore) add .env file

As mentioned, you don't need to follow this naming convention, but to keep everyone in your team on the same page, align on one naming convention yourself. This applies to commit messages more importantly than branch namings.

Last but not least, after you have added and committed your changes, push everything up to your remote repository with `git push origin <branch_name>`. This step is optional, because you can first accumulate commits before pushing all changes up to make them available to your team.

## How to keep a branch up-to-date?
Regardless of the branch (staging/feature) you are working on, sometimes you need to update your local version of this branch with the changes from the remote branch, because someone else pushed updates to it. Before you start to update the branch, follow these optional steps:

- If the branch isn't available locally for you, because someone else started it, you start with a `git fetch`. Next you navigate to the branch with `git checkout <branch_name>`.
- If you have changed files, `git commit` or `git stash` them. The latter is used for storing your changes to apply them in a later stage again.

Then you can start to pull the latest changes. My recommendation would be to always use a rebase, which puts your commits on top of the remote branch's commits:

```
git pull --rebase origin <branch_name>
```

If you have changed a file that has been changed remotely too, it can happen that you run into a merge conflict during the rebase. If this happens, resolve the file's conflicts and continue on the command line the following way:

```
git add .
git rebase --continue
```

In the worst case, it can happen that you run into a conflict for every one of your own commits as it is rebased on top of the remote branch's commits. If that's the case, repeat the steps from before. If your pull rebase goes wrong, you can always abort it: `git rebase --abort`.

After the pull rebase finishes, all your commits should be listed on top of the remote branch's commits. If you have stashed changes away before, you can apply them again with `git stash apply`. Your branch should be up to date with the remote branch's changes and your own changes on top.

## How to keep a feature branch up-to-date with staging?

Once you started to work on a feature branch, you may want to keep the branch up to date with the staging branch in case anyone else merged their feature branches into staging. So when do you want to keep your feature branch up to date with staging?
- If you want to create a pull request (PR) of your feature branch to merge it into staging, but all the recent changes from staging should be included to reflect the latest changes but also to not run into merge conflicts.
- If you need to include an update from staging (e.g. hotfix, library upgrade, dependent feature from someone else) to continue working on your feature branch without blocking issues.

Follow this git workflow to keep your feature branch up to date with the staging branch:

```
git checkout <branch_name>    // checks out your feature branch
follow: How to keep a branch up-to-date?
git checkout staging          // checks out staging branch
follow: How to keep a branch up-to-date?
git checkout <branch_name>    // checks out your feature branch
git rebase staging            // applies all your changes from your feature branch on top of the staging branch
git push origin <branch_name> // pushes the updated feature branch to your remote repository
```

If you have pushed your branch before to your remote repository, you have to force push your changes, because the history of your branch changed during the rebase with the staging branch:

```
git push -f origin <branch_name>
```

But be careful with a force push: If someone else made changes in between on this branch, a force push will forcefully overwrite all these changes.

## How to get a feature branch ready for merge?

A pull request (PR) will merge all your feature branch's changes into the staging branch. Afterward, you can delete your feature branch locally and remotely. Even though a merge would be possible without a PR, a PR enables other people to review your feature on platforms like GitHub or Gitlab. That's why a best practice would be to open a PR for your feature branch once you finished the implementation of the actual feature on this branch and pushed everything to your remote repository. The git workflow looks as follows:

- **follow:** How to keep a feature branch up-to-date with staging?
- **do:** Open Pull Request on GitHub/Gitlab/... for your feature branch.
- **wait:** CI/CD, discussion/review, approval of team members.
- **optional:** Push more commits to your remote branch if changes are needed due to discussion.

If everything is fine with your feature branch, continue with one of the following ways:

- 1) Merge your PR directly on your GitHub/Gitlab/...
- 2) Or continue on the command line with the merge:

```
git checkout staging
git merge --no-ff <branch_name> // merges your feature branch into staging
git push origin staging
```

Congratulations, you have merged your feature branch into staging and kept everything up-to-date along the way. Now, rinse and repeat this git workflow for your next feature branch. If you screwed up anything during your git workflow, it's always worth to check out this tutorial to revert almost anything with git.

## Bonus: Keeping your Git History Tidy

The previous git workflows for keeping your branches up to date and merging them into each other are based on git's rebase feature. By using a rebase, you will always apply your changes on top of other team members' changes. This way, your git history will always tell a linear story. The following commands will help you to keep the git history of your feature branch tidy:

```
git revert <commit_sha>
// if you want to undo a commit, but want to keep it in history for documentation
// makes most sense if you follow atomic commits

git rebase -i HEAD~<number_of_commits>
// if you want to reorder, rename, or squash commits into each other

git commit --amend
// if you want to append changes to your last commit

git commit --amend -m "<commit_message>"
// if you want to change the commit message of your last commit
```

Some of these git commands will change your local commit history. If you have pushed your feature branch to your remote repository before, you will have to force push these changes. Be careful again that you don't overwrite a team member's changes with the force push.
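Two of the commands above, `git commit --amend` and `git merge --no-ff`, can be demonstrated together in a local sandbox. Branch and file names here are made up for illustration:

```shell
# Fix the last commit message in place, then merge with an explicit merge commit.
cd "$(mktemp -d)"
git init -q
git checkout -q -b staging                    # use staging as the base branch
git -c user.name=dev -c user.email=dev@example.com \
  commit -q --allow-empty -m "chore(init) initial commit"

git checkout -q -b fix/logout
echo "clearCookie()" > logout.js
git add logout.js
git -c user.name=dev -c user.email=dev@example.com \
  commit -q -m "fix(logout) clean up cooki"                # typo in the message
git -c user.name=dev -c user.email=dev@example.com \
  commit -q --amend -m "fix(logout) clean up cookie"       # rewrite it in place

git checkout -q staging
git -c user.name=dev -c user.email=dev@example.com \
  merge -q --no-ff -m "Merge fix/logout into staging" fix/logout
```

The `--no-ff` flag forces a merge commit even though a fast-forward would be possible, so the feature branch stays visible as a unit in the otherwise linear history.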
I would be curious about other git workflows, if any exist, so let me know about them in the comments. If you have any git best practices or gotchas you want to add, then let us know about them as well. After all, I hope this walkthrough helps you to establish a git workflow for your team working with git. In the long run, it will make you and your team more productive by aligning you on a common sense process.
true
true
true
Learn how to establish a Git Team Workflow with branching techniques, pull/push strategies, and common sense git commands to make your team more productive ...
2024-10-12 00:00:00
2019-08-22 00:00:00
https://www.robinwieruch…9842e/banner.jpg
website
robinwieruch.de
Rwieruch
null
null
33,306,229
https://www.newthingsunderthesun.com/pub/a8w2cu6h/release/2
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
13,325,846
https://www.youtube.com/watch?v=CQ_I-boW3uo
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,745,332
https://www.promptcloud.com/blog/how-netflix-revolutionised-television-programming-with-bigdata/
Netflix revolutionized television programming - Big Data | PromptCloud
Janet Williams
**Netflix's** original business of delivering DVDs to homes on a monthly subscription model was disruptive enough for the times. However, it seems rather staid when one looks at how the company has transformed itself in less than a decade since it commenced streaming television programming over the Internet. Equally commendable is that the company is now using all the learnings from the enormous volumes of data provided by its subscriber base of over 50 million users to drive its original content strategy. By most yardsticks, the company has been enormously successful with this data-driven approach, so much so that its application of **Big Data** is of as much interest to businesses as its carefully-commissioned programs are for its users.

## Data – the "killer app"

For any business, the ability to gaze at the proverbial crystal ball and fairly accurately predict the success of a product or service would be priceless. **Netflix** has become the poster child of accomplishment in this precise realm. The success of the **Kevin Spacey**-starrer '*House of Cards*' in the *US market* was apparently such a foregone conclusion that the company did not even require a pilot of the program before it was commissioned. What might seem like a blind punt to the uninitiated was in reality a 'no-brainer' decision for Netflix based on the total clarity of the story the company's data told. By knowing deeply about the popularity of the original series in the UK, past consumption of movies and programs featuring lead actor Kevin Spacey amongst its user base, and the acceptance of the genre of content produced by the show's *Executive Producer*, *David Fincher*, Netflix knew they had a winner on their hands much before the cameras began to roll. The success of **House of Cards** was not a flash in the pan for Netflix's original programming.
Apparently, the company has already established a success rate of 80%, with the jury still out on one of its programs, which is more than twice the success rate enjoyed by most television networks with their own programming.

## Using the data

According to an article in The New York Times that quotes networking provider Sandvine, "a third of the downloads on the Internet during peak periods on any given day are devoted to streamed movies from the Netflix service. And last year, by some estimates, more people watched movies streamed online than on physical DVDs."

It is reported that **Netflix** monitors over 30 million "plays" a day, including when users pause, rewind, or fast-forward, along with ratings by its subscribers and over 3 million daily searches, besides the now seemingly obvious factors such as the time of day shows are watched and the devices they are watched on. In addition, each unit of content is annotated with hundreds of tags or meta descriptors, which become useful not only in personalizing content and promotions, but also in content planning and commissioning.

With over 50 million users and growing, consuming diverse types and volumes of content while leaving behind ample data sets that point to their preferences, the scale of Big Data that Netflix deals with can be imagined. The challenges are many, says *Chris Pouliot*, *Director of Analytics* at Netflix, in an interview, as the company works out what data to collect and how to perform analytics on it. "My team does not only personalizations for movies, but we also deal with content demand prediction. Helping our buyer figure out how much do we pay for a piece of content…. The personalization recommendations for helping users find good movies and TV shows. Marketing analytics, how do we optimize our marketing spend. Streaming platform, how do we optimize the user experience once I press play. There's a wide range of data, so there is a lot of diversity," he says.
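The kinds of signals described above (plays, pauses, rewinds, searches) lend themselves to straightforward aggregation. Below is a minimal, purely illustrative sketch in Python; the event schema, user IDs, and titles are invented for the example and do not reflect Netflix's actual data model or pipeline:

```python
from collections import defaultdict

# Hypothetical interaction log: (user_id, title, action) tuples,
# mirroring the kinds of events the article describes.
events = [
    ("u1", "House of Cards", "play"),
    ("u1", "House of Cards", "pause"),
    ("u2", "House of Cards", "play"),
    ("u2", "Other Show", "play"),
    ("u3", "Other Show", "rewind"),
]

def engagement_by_title(events):
    """Aggregate raw interaction events into per-title action counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for _user, title, action in events:
        counts[title][action] += 1
    return {title: dict(actions) for title, actions in counts.items()}

signals = engagement_by_title(events)
print(signals["House of Cards"])  # {'play': 2, 'pause': 1}
```

At real scale this aggregation would run as a distributed job rather than an in-memory loop, but the shape of the computation, rolling raw events up into per-title signals, is the same.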
Writing for Wired.com, Phil Simon lends his perspective on Netflix being the 'quintessential data visualisation' organisation. Using a presentation by Jeff Magnusson and Charles Smith from the company at a Hadoop Summit as the basis, Phil contends that most organisations would not know their customers even half as well as Netflix does. "Through Big Data and dataviz, Netflix seamlessly delivers mind-boggling personalization to each customer. At the same time, Netflix can easily aggregate data about customers, genres, viewing habits, trends, and just about anything else. Equipped with this data, Netflix can attempt to answer questions that most organizations can't or won't even ask."

## The future

An important but less-discussed consideration is the change in sociological dynamics likely to result from the understanding of users facilitated by Big Data. Citing the example of Michael Jackson's Thriller, Prof. Markus Geisler seems to indicate that Netflix could be the harbinger of altered sociological dynamics, which could be a significantly profound consequence of intimately knowing users' interests and behaviors.

How Netflix has leveraged Big Data and analytics to literally reinvent itself for greater success holds a mirror to numerous companies in various other markets. While a comparable service is yet to make its mark in the Indian market, rapidly growing bandwidth and the proliferation of smartphones indicate that such a day is not far off. In such an eventuality, the potential of using Big Data to service this gigantic market is not only tantalising but may well be an unavoidable necessity.

It is easy to know what audiences want, reject, and consume. The biggest indicator is social media chatter. In India, brand monitoring has become the proverbial anvil on which mettle is tested. Not to mention that various companies today offer affordable and insightful data crawling and web scraping solutions.
Any company with the vision to collect and aggregate data from forums, blogs, and social media, and to use it to improve content and programming, is poised to become the next Netflix and mine the vast market that India represents.
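As a toy illustration of the brand-monitoring idea, one can count whole-word mentions of a brand across scraped posts. The posts and the brand name below are fabricated for the example; a real pipeline would sit on top of a crawler and typically add sentiment analysis on top of raw counts:

```python
import re
from collections import Counter

# Invented social-media posts standing in for scraped chatter.
posts = [
    "Loved the new show on StreamCo last night!",
    "StreamCo keeps buffering, switching back to cable.",
    "Anyone watching the StreamCo original series?",
    "Cable is dead, long live streaming.",
]

def mention_counts(posts, brands):
    """Count case-insensitive, whole-word brand mentions across posts."""
    counts = Counter()
    for post in posts:
        for brand in brands:
            pattern = rf"\b{re.escape(brand)}\b"
            counts[brand] += len(re.findall(pattern, post, re.IGNORECASE))
    return counts

print(mention_counts(posts, ["StreamCo"]))  # Counter({'StreamCo': 3})
```

Even this crude count is enough to track share of voice over time; the harder parts in practice are crawling at scale and deduplicating the source data.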
*Published by PromptCloud, December 13, 2014.*

---
# Life Circa 2050 Will Be Bad. Really Bad.

*By Alfred McCoy*
Source: https://www.thenation.com/article/environment/climate-future-disasters/
EDITOR'S NOTE: *This article originally appeared at TomDispatch.com.*

When midnight strikes on New Year's Day of 2050, there will be little cause for celebration. There will, of course, be the usual toasts with fine wines in the climate-controlled compounds of the wealthy few. But for most of humanity, it'll just be another day of adversity bordering on misery, a desperate struggle to find food, water, shelter, and safety.

In the previous decades, storm surges will have swept away coastal barriers erected at enormous cost, and rising seas will have flooded the downtowns of major cities that once housed more than 100 million people. Relentless waves will pound shorelines around the world, putting villages, towns, and cities at risk. As several hundred million climate-change refugees in Africa, Latin America, and South Asia fill leaky boats or trudge overland in a desperate search for food and shelter, affluent nations worldwide will be trying to shut their borders even tighter, pushing crowds back with tear gas and gunfire.

Yet those reluctant host countries, including the United States, won't be even faintly immune from the pain. Every summer, in fact, ever more powerful hurricanes, propelled by climate change, will pummel the East and Gulf Coasts of this country, possibly even forcing the federal government to abandon Miami and New Orleans to the rising tides. Meanwhile, wildfires, already growing in size in 2021, will devastate vast stretches of the West, destroying thousands upon thousands of homes every summer and fall in an ever-expanding fire season.

And keep in mind that I can write all this now because such future widespread suffering won't be caused by some unforeseen disaster to come but by an all-too-obvious, painfully predictable imbalance in the basic elements that sustain human life: air, earth, fire, and water.
As average world temperatures rise by as much as 2.3° Celsius (4.2° Fahrenheit) by mid-century, climate change will degrade the quality of life in every country on Earth.

###### Climate Change in the 21st Century

This dismal vision of life circa 2050 comes not from some flight of literary fantasy, but from published environmental science. Indeed, we can all see the troubling signs of global warming around us right now: worsening wildfires, ever more severe ocean storms, and increased coastal flooding.

While the world is focused on the fiery spectacle of wildfires destroying swaths of Australia, Brazil, California, and Canada, a far more serious threat is developing, only half-attended to, in the planet's remote polar regions. Not only are the icecaps melting with frightening speed, already raising sea levels worldwide, but the vast Arctic permafrost is fast receding, releasing enormous stores of lethal greenhouse gases into the atmosphere. At that frozen frontier far beyond our ken or consciousness, ecological changes, brewing largely invisibly deep beneath the Arctic tundra, will accelerate global warming in ways sure to inflict untold future misery on all of us.

More than any other place or problem, the thawing of the Arctic's frozen earth, which covers vast parts of the roof of the world, will shape humanity's fate for the rest of this century, destroying cities, devastating nations, and rupturing the current global order. If, as I've suggested in my new book, *To Govern the Globe: World Orders and Catastrophic Change*, Washington's world system is likely to fade by 2030, thanks to a mix of domestic decline and international rivalry, Beijing's hypernationalist hegemony will, at best, have just a couple of decades of dominance before it, too, suffers the calamitous consequences of unchecked global warming.
By 2050, as the seas submerge some of its major cities and heat begins to ravage its agricultural heartland, China will have no choice but to abandon whatever sort of global system it might have constructed. And so, as we peer dimly into the potentially catastrophic decades beyond 2050, the international community will have good reason to forge a new kind of world order unlike any that has come before.

###### The Impact of Global Warming at Mid-Century

In assessing the likely course of climate change by 2050, one question is paramount: How quickly will we feel its impact? For decades, scientists thought that climate change would arrive at what science writer Eugene Linden called a "stately pace." In 1975, the US National Academies of Sciences still felt that it would "take centuries for the climate to change in a meaningful way." As late as 1990, the UN's Intergovernmental Panel on Climate Change (IPCC) concluded that the Arctic permafrost, which stores both staggering amounts of carbon dioxide (CO2) and methane, an even more dangerous greenhouse gas, was not yet melting, and that the Antarctic ice sheets remained stable.

In 1993, however, scientists began studying ice cores extracted from Greenland's ice cap and found that there had been 25 "rapid climate change events" in the last glacial period thousands of years ago, showing that the "climate could change massively within a decade or two."

Driven by a growing scientific consensus about the dangers facing humanity, representatives of 196 states met in 2015 in Paris, where they agreed to commit themselves to a 45 percent reduction in greenhouse gas emissions by 2030 and to achieve net carbon neutrality by 2050, in order to limit global warming to 1.5°C above preindustrial levels. This, they argued, would be sufficient to avoid the disastrous impacts sure to come at 2.0°C or higher. However, the bright hopes of that Paris conference faded quickly.
Within three years, the scientific community realized that the cascading effects of global warming reaching 1.5°C above preindustrial levels would be evident not in the distant future of 2100, but perhaps by 2040, impacting most adults alive today.

The medium-term effects of climate change will only be amplified by the uneven way the planet is warming, with a far heavier impact in the Arctic. According to a *Washington Post* analysis, by 2018 the world already had "hot spots" that had recorded an average rise of 2.0°C above the preindustrial norm. As the sun strikes tropical latitudes, huge columns of warm air rise and are then pushed toward the poles by greenhouse gases trapped in the atmosphere, until they drop down to earth at higher latitudes, creating spots with faster-rising temperatures in the Middle East, Western Europe, and, above all, the Arctic. In a 2018 IPCC "doomsday report," its scientists warned that even at just 1.5°C, temperature increases would be unevenly distributed globally and could possibly reach a devastating 4.5°C in the Arctic's high latitudes, with profound consequences for the entire planet.

###### Climate-Change Cataclysm

Recent scientific research has found that, by 2050, the key drivers of major climate change will be feedback loops at both ends of the temperature spectrum. At the hotter end, in Africa, Australia, and the Amazon, warmer temperatures will spark ever more devastating forest fires, reducing tree cover and releasing vast amounts of carbon into the atmosphere. This, in turn (as is already happening), will fuel yet more fires, creating a monstrous self-reinforcing feedback loop that could decimate the great tropical rainforests of this planet.

The even more serious and uncontrollable driver, however, will be in the planet's polar regions. There, an Arctic feedback loop is already gaining a self-sustaining momentum that could soon move beyond humanity's capacity to control it.
By mid-century (or before), as ice sheets continue to melt disastrously in Greenland and Antarctica, rising oceans will make extreme sea-level events, like once-in-a-century storm surges and flooding, annual occurrences in many areas. If global warming grows beyond the maximum 2°C target set by the Paris Agreement, depending on what happens to Antarctica's ice sheets, ocean levels could increase by a staggering 43 inches as this century ends. In fact, a "worst-case scenario" by the National Academies of Sciences projects a sea-level rise of as much as 20 inches by 2050 and 78 inches by 2100, with a "catastrophic" loss of 690,000 square miles of land, an expanse four times the size of California, displacing about 2.5 percent of the world's population and inundating major cities like New York.

Adding to such concerns, a recent study in *Nature* predicted that, by 2060, rain rather than snow could dominate parts of the Arctic, further accelerating ice loss and raising sea levels significantly. Moving that doomsday ever closer, recent satellite imagery reveals that the ice shelf holding back Antarctica's massive Thwaites Glacier could "shatter within three to five years," quickly breaking that Florida-sized frozen mass into hundreds of icebergs and eventually resulting "in several feet of sea level rise" on its own.

Think of it this way: In the Arctic, ice is drama, but permafrost is death. The spectacle of melting polar ice sheets cascading into ocean waters is dramatic indeed. True mass death, however, lies in the murky, mysterious permafrost. That sloppy stew of decayed matter and frozen water from ice ages past covers 730,000 square miles of the Northern Hemisphere, can reach 2,300 feet below ground, and holds enough potentially releasable carbon and methane to melt the poles and inundate densely populated coastal plains worldwide.
In turn, such emissions would only raise Arctic temperatures further, melt more permafrost (and ice), and so on, year after year after year. We're talking, in other words, about a potentially devastating feedback loop that could increase greenhouse gases in the atmosphere beyond the planet's capacity to compensate.

According to a 2019 report in *Nature*, the vast zone of frozen earth that covers about a quarter of the Northern Hemisphere is a sprawling storehouse for about 1.6 trillion metric tons of carbon, twice the amount already in the atmosphere. Current models "assume that permafrost thaws gradually from the surface downwards," slowly releasing methane and carbon dioxide into the atmosphere. But frozen soil also "physically holds the landscape together," and so its thawing can rip the surface open erratically, exposing ever-larger areas to the sun.

Around the Arctic Circle, there is already dramatic physical evidence of rapid change. Amid the vast permafrost that covers nearly two-thirds of Russia, one small Siberian town recorded a historic 100 degrees Fahrenheit in June 2020, the highest temperature ever recorded above the Arctic Circle. Meanwhile, several peninsulas on the Arctic Sea have experienced methane eruptions that have produced craters up to 100 feet deep. Since rapid thawing releases more methane than gradual melting does, and methane has 25 times more heating power than CO2, the "impacts of thawing permafrost on Earth's climate," suggests that 2019 report in *Nature*, "could be twice that expected from current models."

To add a dangerous wild card to such an already staggering panorama of potential destruction, about 700,000 square miles of Siberia also contain a form of methane-rich permafrost called yedoma, which forms a layer of ice 30 to 260 feet deep.
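The two key figures above, a permafrost store of roughly 1.6 trillion metric tons of carbon and methane trapping about 25 times more heat than CO2, can be combined in a back-of-envelope way to see why the methane share matters so much. The release fraction and methane share below are arbitrary illustrative assumptions, not projections from the article, and the sketch deliberately ignores the carbon-to-gas mass conversions a rigorous calculation would include:

```python
# Back-of-envelope sketch using the article's figures.
GWP_METHANE = 25             # methane traps ~25x more heat than CO2
PERMAFROST_CARBON_GT = 1600  # ~1.6 trillion metric tons of stored carbon

def co2_equivalent(released_gt, methane_fraction):
    """CO2-equivalent gigatonnes if `released_gt` of carbon escapes,
    with `methane_fraction` of it emitted as methane (GWP-weighted)."""
    methane = released_gt * methane_fraction
    co2 = released_gt - methane
    return co2 + methane * GWP_METHANE

# Hypothetically: 1% of the store released, a quarter of that as methane.
print(co2_equivalent(PERMAFROST_CARBON_GT * 0.01, 0.25))  # 112.0
```

The point of the arithmetic is the leverage: even a small shift of the release toward methane multiplies the warming effect of the same tonnage of escaped carbon many times over.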
As rising temperatures melt that icy permafrost, expanding lakes (which now cover 30 percent of Siberia) will serve as even greater conduits for the release of such methane, which will bubble up from their melting bottoms and escape into the atmosphere.

###### New World Order?

Given the clear failure of the current world system to cope with climate change, the international community will, by mid-century, need to find new forms of collaboration to contain the damage. After all, the countries at the recent UN climate summit at Glasgow couldn't even agree to "phase out" coal, the dirtiest of all fossil fuels. Instead, in their final "outcome document," they opted for the phrase "phase down," capitulating to China, which has no plans to even start reducing its coal combustion until 2025, and India, which recently postponed its goal of achieving net-carbon neutrality until an almost unimaginably distant 2070. Since those two countries account for 37 percent of all greenhouse gases now being released into the atmosphere, their procrastination courts climate disaster for humanity.

Who knows what new forms of global governance and cooperation will come into being in the years ahead, but simply to focus on an old one, here's a possibility: To exercise effective sovereignty over the global commons, perhaps a genuinely reinforced United Nations could reform itself in major ways, including making the Security Council an elective body with no permanent members and ending the great-power prerogative of unilateral vetoes. Such a reformed and potentially more powerful organization could then agree to cede sovereignty over a few narrow yet critical areas of governance to protect the most fundamental of all human rights: survival.
Just as the Security Council can (at least theoretically) now punish a nation that crosses international borders with armed force, so a future UN could sanction in potentially meaningful ways a state that continued to release greenhouse gases into the atmosphere or refused to receive climate-change refugees. To save that human tide, estimated at between 200 million and 1.2 billion people by mid-century, some UN high commissioner would need the authority to enforce the mandatory resettlement of at least some of them. Moreover, the current voluntary transfer of climate reconstruction funds from the prosperous temperate zone to the poor tropics would need to become mandatory as well.

No one can predict with any certainty whether reforms like these, and the power to change national behavior that would come with them, will arrive in time to cap emissions and slow climate change, or too late (if at all) to do anything but manage a series of increasingly uncontrollable feedback loops. Yet without such change, the current world order will almost certainly collapse into catastrophic global disorder, with dire consequences for all of us.
*Published in The Nation, December 20, 2021.*