Columns:

| Column | Type | Values |
| --- | --- | --- |
| id | int64 | 3 to 41.8M |
| url | string | lengths 1 to 1.84k |
| title | string | lengths 1 to 9.99k |
| author | string | lengths 1 to 10k |
| markdown | string | lengths 1 to 4.36M |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | string | lengths 1 to 10k |
| filedate | string | 2 classes |
| date | string | lengths 9 to 19 |
| image | string | lengths 1 to 10k |
| pagetype | string | 365 classes |
| hostname | string | lengths 4 to 84 |
| sitename | string | lengths 1 to 1.6k |
| tags | string | 0 classes |
| categories | string | 0 classes |

Rows:

id: 786,136
url: http://www.wired.com/epicenter/2009/08/the-craigslist-credo-bad-advice-for-newspapers
title: null | author: null | markdown: null
downloaded: false | meta_extracted: false | parsed: false
description, filedate, date, image, pagetype, hostname, sitename, tags, categories: null

id: 18,152,227
url: https://www.nytimes.com/2018/10/04/magazine/how-do-you-take-a-picture-of-a-black-hole-with-a-telescope-as-big-as-the-earth.html
title: null | author: null | markdown: null
downloaded: false | meta_extracted: false | parsed: false
description, filedate, date, image, pagetype, hostname, sitename, tags, categories: null

id: 18,762,350
url: https://twittercommunity.com/t/upcoming-changes-to-png-image-support/118695
title: null | author: null | markdown: null
downloaded: false | meta_extracted: false | parsed: false
description, filedate, date, image, pagetype, hostname, sitename, tags, categories: null

id: 5,698,274
url: http://propelorm.org/blog/2013/05/13/propel2-is-about-to-be-released.html
title: null | author: null | markdown: null
downloaded: true | meta_extracted: false | parsed: false
description, filedate, date, image, pagetype, hostname, sitename, tags, categories: null

id: 31,389,231
url: https://www.bloomberg.com/news/articles/2022-05-15/terra-ust-luna-hasn-t-killed-crypto-but-it-was-a-narrow-escape
title: Bloomberg | author: null
markdown: To continue, please click the box below to let us know you're not a robot. Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy. For inquiries related to this message please contact our support team and provide the reference ID below.
downloaded: true | meta_extracted: true | parsed: true
description: null | filedate: 2024-10-12 00:00:00 | date: null
image, pagetype, hostname, sitename, tags, categories: null

id: 4,231,891
url: http://alexlod.com/2012/07/11/one-year-six-products-16-tips-for-new-entrepreneurs/
title: null | author: null | markdown: null
downloaded: false | meta_extracted: false | parsed: false
description, filedate, date, image, pagetype, hostname, sitename, tags, categories: null

id: 7,636,872
url: http://misthy.com/why-i-decided-to-be-indie-game-developer/
title: null | author: null | markdown: null
downloaded: false | meta_extracted: false | parsed: false
description, filedate, date, image, pagetype, hostname, sitename, tags, categories: null

id: 30,343,878
url: https://blog.wobaka.com/how-i-use-clojure-to-build-and-scale-my-saas/
title: How I Use Clojure to Build and Scale my SaaS
author: Fredrik
markdown:
# How I Use Clojure to Build and Scale my SaaS

I was first introduced to Clojure in grad school where I was taking a course on building large scale software applications. We were building a clone of the card game Hearthstone, and in particular the game engine. If you have played that game, you know there are a lot of state changes happening when cards are played, affecting other cards on the deck, and so on. Even though I had no previous experience with programming in Lisp, I got up to speed quickly. After a few weeks I really started to enjoy the syntax, or rather, the lack of it. You see, everything in Clojure is just data, so there is no syntax. A few hundred lines of code and a couple of Rich Hickey keynotes later, I was hooked.

I really liked writing Clojure and decided I would use it to build something I had been pondering for a long time: a simple and enjoyable CRM system for small businesses. Fast forward a few years and I'm now running my own SaaS business called Wobaka, on 100% Clojure code (ok, GitHub says it's 97.9% Clojure but who's counting?). I wanted to share a few things I particularly enjoyed and that have helped me build and scale.

## Development environment

I've yet to find a development environment that is as lovely to work in as Clojure/Script with Figwheel. You see, Clojure has this lovely data type called *atom*, which may contain any kind of data and can be updated atomically. The cool thing is that Clojure, and in this case a library called Figwheel, can reload your entire code in memory while keeping the state of these *atom* data structures. Now, if you're a frontend developer you probably know where I'm going with this. We can reload the entire application code without even touching the state. Figwheel uses this to make hot reloading an amazing experience where you can keep all your state but still get real-time updates in the browser when your code changes. I can't go back from this.

## The REPL

Oh, the REPL. I think it's one of the most misunderstood things in Clojure. You see, this REPL is not just a command prompt. You can start it from within your editor, compile your Clojure project and run anything from anywhere. I personally use Emacs but there are great plugins for VSCode and other editors as well. Want to run all tests in a file? There's a command for that. Want to run a line of code? Just place the cursor and hit Ctrl-x-e. You can even document your code with code samples using special comments. My personal workflow is often to run things in a comment and then move it to a test when I'm done with the function. Again, it's hard to go back from this.

## Shared code between client and server

Clojure can run both on JavaScript (client) as ClojureScript, and on the JVM (server). This makes it super easy to write shared code that can run anywhere. You can also use special forms to define code that should only run on the client or on the server.

## Clojure Spec

Clojure Spec is a library for specifying the structure and shape of data. Once you've defined your data you can use it for validation, to specify types for functions using contracts, to generate tests automatically and more. It's just another thing that makes programming in Clojure so enjoyable. Of course, specs also work in ClojureScript so you can easily share your data type specifications between client and server code! This is especially handy for data validation.
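The patterns above can be illustrated with a short, hypothetical sketch (this is not code from Wobaka; the namespace and field names are made up). It shows reload-friendly state in a `defonce`d atom, a spec used for validation on both sides, and a reader conditional for code shared between the JVM and JavaScript in a `.cljc` file:

```clojure
(ns app.shared
  (:require [clojure.spec.alpha :as s]))

;; Application state lives in an atom. `defonce` means a Figwheel-style
;; code reload re-evaluates the namespace without resetting the state.
(defonce app-state (atom {:contacts []}))

(defn add-contact! [contact]
  (swap! app-state update :contacts conj contact))

;; Specs describe the shape of a contact map; in a .cljc file the same
;; definitions are available to both the client and the server.
(s/def ::name string?)
(s/def ::email (s/and string? #(re-find #"@" %)))
(s/def ::contact (s/keys :req-un [::name ::email]))

;; Reader conditional: one source file, platform-specific behaviour.
(defn now []
  #?(:clj  (java.util.Date.)
     :cljs (js/Date.)))

(comment
  ;; Validate data the same way on both client and server.
  (s/valid? ::contact {:name "Ada" :email "ada@example.com"}))
```

With a reloading workflow like Figwheel, editing `add-contact!` and saving recompiles the namespace while `app-state` keeps its contents, which is the hot-reload experience described above.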
## Inline tests

I'm a big proponent of inline tests and write all my tests in the function specifications. I think this results in an up-to-date test suite, better tests and better overall test coverage, as it is just more convenient to write them. You can also use the tests as documentation which, combined with the REPL, makes for a brilliant environment. Adding a test to a function is as simple as adding a map with a *test* attribute.

## Stability

Since the beginning of my project I have had zero breaking changes in dependencies. The JVM has been very stable and even though I used to be sceptical of everything Java, I've come to really appreciate it as a big plus now.

## No more JavaScript burnout

Gone are the days of continuously updating hundreds of NPM packages. I love open source, but taking care of a large JavaScript project with a lot of dependencies can cause a lot of frustration.

## But what about the parentheses?

Honestly, I've come to prefer the way Lisp uses parentheses compared to other languages, such as the C family. `(foo (bar "Hello, World!"))` vs `foo(bar("Hello, World"))`: which has the most parentheses? (Hint: the same!)

## Clojure is fun!

I love writing Clojure for a living and I'm truly thankful to Rich Hickey and the Clojure team and community. If you're looking for a CRM system you'll actually enjoy using, go ahead and give Wobaka a try!
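As a hypothetical illustration of the inline-test idea described above (again, made-up names, not Wobaka code): a function can carry its own test in `:test` metadata, and a rich comment block keeps REPL experiments next to the code until they are promoted into tests.

```clojure
(ns app.pricing
  (:require [clojure.test :as t]))

;; The :test metadata map attaches a test directly to the function;
;; clojure.test's run-tests picks it up alongside regular deftests.
(defn add-vat
  "Adds 25% VAT to a net price given in cents."
  {:test (fn []
           (t/is (= 125 (add-vat 100)))
           (t/is (= 0 (add-vat 0))))}
  [net-cents]
  (long (Math/round (* 1.25 (double net-cents)))))

(comment
  ;; REPL-driven workflow: evaluate forms here from the editor,
  ;; then move them into the :test metadata when the function is done.
  (add-vat 100)
  (t/run-tests 'app.pricing))
```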

downloaded: true | meta_extracted: true | parsed: true
description: I was first introduced to Clojure in grad school where I was taking a course on building large scale software applications.
filedate: 2024-10-12 00:00:00 | date: 2022-02-15 00:00:00
image: https://wobaka.com/blog/…_fI-unsplash.jpg
pagetype: article | hostname: wobaka.com | sitename: Making a better CRM
tags: null | categories: null

id: 5,686,107
url: http://www.informationweek.com/government/mobile/fcc-to-boost-wi-fi-on-planes/240154623
title: null | author: null | markdown: null
downloaded: false | meta_extracted: false | parsed: false
description, filedate, date, image, pagetype, hostname, sitename, tags, categories: null

id: 30,654,801
url: https://twitter.com/timberners_lee/status/1502594846044954630
title: x.com | author: null | markdown: null
downloaded: true | meta_extracted: true | parsed: false
description: null | filedate: 2024-10-12 00:00:00 | date: null
image: null | pagetype: null | hostname: null | sitename: X (formerly Twitter)
tags: null | categories: null

id: 25,509,560
url: https://www.narrativetext.co/data-hub/using-predictive-power-score-to-pinpoint-non-linear-correlations
title: Loading... | author: null | markdown: null
downloaded: true | meta_extracted: true | parsed: false
description: null | filedate: 2024-10-12 00:00:00 | date: null
image, pagetype, hostname, sitename, tags, categories: null

id: 17,967,312
url: https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2018-8475
title: Security Update Guide | author: null
markdown: You need to enable JavaScript to run this app.
downloaded: true | meta_extracted: true | parsed: true
description: null | filedate: 2024-10-12 00:00:00 | date: null
image, pagetype, hostname, sitename, tags, categories: null

id: 11,579,710
url: http://www.bbc.com/future/story/20160426-the-invisible-language-of-trains-boats-and-planes
title: The secret signals that rule our transport networks
author: Georgina Voss; Wesley Goatley
markdown:
# The secret signals that rule our transport networks

**The vast network of global trade carried by trains, aircraft and ships creates a huge web of radio traffic. Two artists created an installation that reveals the enormity of this invisible chatter.**

It is early morning on Brighton beach, and we are sitting on a slope of pebbles, looking out to sea. With us we have a laptop, several metres of cabling, and an antenna taller than a small child. In between waving off the attentions of men wielding metal detectors and some extremely large seagulls, we squint into the computer screen, looking for a specific peak on the radio spectrum. We are trying to see the invisible infrastructure that surrounds every part of our modern existence.

Nearly every part of our 21st-Century lives depends on enormous logistics systems. Clothes, food, medicine, and phones all come into our lives through a tangled and complicated route of supply chains, container ships, planes, and trains. Around 85% of international freight is carried by sea, nearly 10 billion metric tonnes. Nearly every commercial plane carries some type of airfreight, mostly on the large ‘wide-body’ planes like Boeing 747s, with UK airports alone handling some 2.4 million tonnes in 2013. In the UK, trains carry around six and a half billion metric tonnes of freight annually.

Yet despite their importance, infrastructures are designed to be invisible, operating smoothly and silently. We tend to only notice them when they malfunction or break – when a parcel goes missing, or when a train is delayed. We built Familiars, a project exploring just how much of these systems citizens are allowed to see, by creating a live map of the movements and data broadcast by nearby ships, planes, and trains. There are plenty of apps and programmes which track these systems, but most present managed and filtered data rather than giving users direct access to it.

However, a new form of technology, software-defined radio (SDR), has opened up new possibilities for seeing into these systems directly. SDRs often take the form of small USB dongles that can receive radio transmissions across a wide spectrum. Unlike traditional radios hardwired for one function and a limited set of frequencies, SDR devices can receive and decode many transmissions, ranging from CB radios to weather satellites. Coupling SDR receivers with specialised open-source software let us track the very old radio technologies which lie at the heart of logistics. Every ship, train, and plane sends messages to other parts of the network about where it is and what it's doing. Radio is the ideal technology for fast-moving vessels, as it allows them to communicate while they're on the move.

Planes, we found, are the easiest to track. In the cockpit of every plane is an Automatic Dependent Surveillance-Broadcast (ADS-B) transponder, which can send and receive radio signals. After gathering information through global positioning satellites, planes openly broadcast their position, altitude, speed, and heading on the 1090 MHz radio frequency for other aircraft and ground control to pick up. With a well-placed SDR receiver and the correct software, these transmissions can be legally gathered and decoded by anyone in the UK.

In theory, container ships and tankers should also be easy to track. Like planes, they also openly transmit data about their position, speed, and the port that they're heading towards.
This is transmitted through their on-board Automatic Identification System (AIS), which transmits at 161.975 MHz and 162.025 MHz. In practice, as we found on Brighton beach, picking up these transmissions requires you to be in direct line of sight with the ships – that is, in the sea. After several days of experimenting with increasingly sophisticated antennae, we decided that we'd have to go through a third party to get this data. In Familiars, we used an online interface provided by vtexplorer.com. This made the data very consistent, but meant we had no direct control over how it was collected or modified.

Some transport data is nearly completely inaccessible to us. A record 22.7 billion tonnes of freight were carried across British railways in 2014, but tracking the trains themselves directly is difficult. In the UK, train drivers largely communicate via the Global System for Mobile Communications Railway (GSM-R) – essentially, a mobile phone network for trains. GSM-R is a closed network, which means that, under UK telecommunications law, it's illegal to decode any correspondence sent along it. However, every time an encoded message gets sent from a train, it can be detected nearby as a peak on the uplink frequency of 876-880 MHz. We can therefore see when a communication has been sent, even if we don't know what it says or any details about the train that sent it.

In looking at how transport systems communicate with each other, we can also learn something about the data itself. Rather than being something neatly mediated through a screen, this data is not delivered cleanly and digitally; it is more like a messy layer of material which sits over the world. Accessing these radio waves can be affected by weather patterns and electromagnetic interference; ironically, there can be a lot of this around airports. Physical geography also affects radio transmissions, with hills, mountains, and forests obstructing many forms of radio communication. Live streams of data from trains are sometimes available once the data has been filtered and mediated through timetable and ticketing apps, but such data is exceptionally difficult to gather directly. Some parts of the spectrum are open, and some encoded. Whilst radio transmissions from planes' transponders are openly available, it is illegal in the UK to listen in on conversations between pilots and air traffic controllers.

Trying to look directly at logistical infrastructure, a system which is deliberately designed to be out of sight, brings us up against the thicket of regulations which controls its visibility, and makes us aware of the politics of data collection itself.
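To make the ADS-B workflow described above a little more concrete, here is a small, hypothetical Clojure sketch (not part of the Familiars project). It assumes an SDR decoder such as dump1090 is already running locally and exposing its plain-text BaseStation/SBS feed on TCP port 30003, a common default; the field positions follow the usual SBS-1 layout and should be checked against the decoder actually in use.

```clojure
(ns adsb.feed
  (:require [clojure.string :as str])
  (:import (java.net Socket)
           (java.io BufferedReader InputStreamReader)))

;; Reads decoded ADS-B messages from a local decoder's SBS/BaseStation
;; text feed (comma-separated lines) and prints airborne position reports.
(defn watch-positions
  "Connects to host:port and prints aircraft hex ID, altitude, latitude and
  longitude for airborne-position messages (MSG,3 in the SBS layout)."
  [host port]
  (with-open [sock (Socket. host port)
              rdr  (BufferedReader. (InputStreamReader. (.getInputStream sock)))]
    (doseq [line (line-seq rdr)
            :let [fields (str/split line #"," -1)]
            ;; Assumed field order: 0=MSG, 1=transmission type, 4=hex ident,
            ;; 11=altitude, 14=latitude, 15=longitude.
            :when (and (= "MSG" (first fields)) (= "3" (second fields)))]
      (println (nth fields 4 nil)
               "alt:" (nth fields 11 nil)
               "lat:" (nth fields 14 nil)
               "lon:" (nth fields 15 nil)))))

(comment
  ;; Assumes a decoder (e.g. dump1090 started with --net) listening locally.
  (watch-positions "localhost" 30003))
```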

downloaded: true | meta_extracted: true | parsed: true
description: The vast network of global trade carried by trains, aircraft and ships creates huge data. Two artists created an installation that reveals the enormity of this invisible chatter.
filedate: 2024-10-12 00:00:00 | date: 2016-04-27 00:00:00
image: https://ychef.files.bbci…351/p03s9fnr.jpg
pagetype: newsarticle | hostname: bbc.com | sitename: BBC
tags: null | categories: null

id: 39,570
url: http://www.thestate.com/business/story/137539.html
title: null | author: null | markdown: null
downloaded: false | meta_extracted: false | parsed: false
description, filedate, date, image, pagetype, hostname, sitename, tags, categories: null

id: 33,258,706
url: https://ottawacitizen.com/news/local-news/canadian-research-could-lead-to-simple-test-possible-treatment-for-long-covid
title: null | author: null | markdown: null
downloaded: false | meta_extracted: false | parsed: false
description, filedate, date, image, pagetype, hostname, sitename, tags, categories: null

id: 24,342,644
url: https://www.swifterm.com/swifterm-directory-of-posts/
title: null | author: null | markdown: null
downloaded: false | meta_extracted: false | parsed: false
description, filedate, date, image, pagetype, hostname, sitename, tags, categories: null

id: 10,340,348
url: https://www.eff.org/deeplinks/2015/10/europes-court-justice-nsa-surveilance
title: No Safe Harbor: How NSA Spying Undermined U.S. Tech and Europeans' Privacy
author: Danny O'Brien
markdown:
The spread of knowledge about the NSA's surveillance programs has shaken the trust of customers in U.S. Internet companies like Facebook, Google, and Apple: especially non-U.S. customers who have discovered how weak the legal protections over their data are under U.S. law. It should come as no surprise, then, that the European Court of Justice (CJEU) has decided that United States companies can no longer be automatically trusted with the personal data of Europeans. The court, by declaring invalid the safe harbor which currently permits a sizeable amount of the commercial movement of personal data between the EU and the U.S., has signaled that PRISM and other government surveillance undermine the privacy rights that regulate such movements under European law. In the words of the court's press release:

> The Court [states] that legislation permitting the public authorities to have access on a generalized basis to the content of electronic communications must be regarded as compromising the essence of the fundamental right to respect for private life. Likewise, the Court observes that legislation not providing for any possibility for an individual to pursue legal remedies in order to have access to personal data relating to him, or to obtain the rectification or erasure of such data, compromises the essence of the fundamental right to effective judicial protection, the existence of such a possibility being inherent in the existence of the rule of law.

The EU's data protection directive forbids personal data from being moved out of the EU to jurisdictions without adequate privacy protections. In 1998, the United States and the EU agreed to a “safe harbor” framework which allowed signatory U.S. companies to transfer data across the Atlantic as long as they comply with a set of privacy principles. Whether this “safe harbor” truly protects EU personal data, however, has long been questioned. Max Schrems, a tireless Austrian privacy activist, has been pursuing U.S. companies for violations of EU privacy law and the safe harbor provisions since 2011. In a series of complaints to multiple national data protection authorities, he argued that PRISM and similar surveillance programs demolished the assurances made in the safe harbor agreement. The Irish data protection regulators refused to address his complaint, so Schrems took his case to the Irish courts. They in turn referred elements of the case to the European Court of Justice, which announced its decision today.

The CJEU rejected the Irish data protection authority's argument that the safe harbor agreement by the Commission meant that it could not investigate whether American companies complied with the data protection directive. It did concede, however, that national authorities could not throw out the safe harbor entirely. Only the CJEU could do that, it decided. And then, based on its analysis of the NSA surveillance program, it did just that.

It's not as if the United States government could not have seen this coming. For the last two years, major tech companies, including Facebook and Google, have told American politicians that without reform of the NSA's global surveillance programs, they risked "breaking the Internet". Since then, little has been done to fix the international aspects of the NSA's mass surveillance programs.
With a continuing stream of stories about the U.S., the United Kingdom and other intelligence services' collection of European citizens' data, it's not surprising that confidence in the ability of US companies to protect their users' data to European data protection standards has plummeted.

What happens next depends on the response of the U.S. government, and the outcome of the many other potential legal challenges to Facebook, Apple, Google and other companies' handling of European personal data that this decision now permits. Schrems himself writes that he believes that the decision won't have an immediately disruptive effect on everyday Internet activities: “There are still a number of alternative options to transfer data from the EU to the US. The judgement makes it clear that now national data protection authorities can review data transfers to the US in each individual case – while the "safe harbor" allowed for a blanket allowance. Despite some alarmist comments I don’t think that we will see major disruptions in practice.”

However, if those reviews continue to run against the fundamental incompatibility of U.S. mass surveillance with European data protection principles, the end result may well be a growing restriction of the commercial processing of European users' data to within the bounds of the European Union. That would certainly force the companies to re-think and re-engineer how they manage the vast amount of data they collect. It will not, however, protect their customers from mass surveillance. The geographic siloing of data is of little practical help against mass surveillance if each and every country feels that ordinary customer data is a legitimate target for signals intelligence. If governments continue to permit intelligence agencies to indiscriminately scoop up data, then they will find a way to do that, wherever that data may be kept. Keep your data in Ireland, and GCHQ may well target it, and pass it on to the Americans. Keep your data in your own country, and you'll find the NSA—or other European states, or even your own government—breaking into those systems to extract it.

What will change the equation is for states, including and especially the United States, to realize that dragnet surveillance undermines their national security and the global security of our data. It has economic consequences, as regulators, companies and individuals lose trust in Internet companies and services. It has political consequences as nations vie to keep data out of the hands of other countries, while seeking to keep it trackable by their own intelligence services. There's only one way forward to end this battle in a way that keeps the Internet open and preserves everyone's privacy. Countries have to make clear that mass surveillance of innocent citizens is a violation of human rights law, whether it is conducted inside their borders or outside, upon foreigners or residents. They have to bring their surveillance programs, foreign and domestic, back under control. For the United States, that means reforming Section 702 of the Foreign Intelligence Surveillance Amendments Act, and re-formulating Executive Order 12333. These are the secretive and overbroad regulations that permit NSA to use PRISM and a raft of other programs to spy on Europe and beyond. Equally important, the United States must revisit the laws, regulations, and institutional processes that allow these programs to fester in the dark, largely unaccountable to the public.
It is the failure of these laws to adequately rein in the intelligence services that led to this case, and will lead to many more.

downloaded: true | meta_extracted: true | parsed: true
description: The spread of knowledge about the NSA's surveillance programs has shaken the trust of customers in U.S. Internet companies like Facebook, Google, and Apple: especially non-U.S. customers who have discovered how weak the legal protections over their data is under U.S. law. It should come as no...
filedate: 2024-10-12 00:00:00 | date: 2015-10-05 00:00:00
image: https://www.eff.org/file…ary/og-nsa-1.png
pagetype: article | hostname: eff.org | sitename: Electronic Frontier Foundation
tags: null | categories: null

id: 28,904,625
url: https://www.nybooks.com/articles/1998/10/08/how-many-people-can-the-earth-support/
title: How Many People Can the Earth Support? | Joel E. Cohen
author: Joel E Cohen; JOEL E COHEN
markdown:
The question “How many people can the Earth support?” is useful, though it is seriously incomplete. It focuses attention on the present and future numbers, qualities, activities, and values of humans in their relations with one another and with the Earth. To explain why people are interested in this question, I offer an overview of global human population, economy, environment, and culture. I then review some answers to the question and describe what is involved in answering it. Finally, I suggest actions that could alleviate some of the problems of population, economics, environment, and culture.

The Earth’s capacity to support people is determined both by natural constraints, which some will emphasize, and by human choice, which others will emphasize. In the coming half-century, we and our children are less likely to face absolute limits than difficult trade-offs: trade-offs among population size and economic well-being and environmental quality and dearly held values. Foresight and action now might make some of the coming trade-offs easier. I hope to offer a perspective that differs from the views of those who say that rapid population growth is no problem at all and those who say that population growth is the only problem. A rounded view of the facts should immunize us against both cornucopians and doomsayers. I give more details in my recent book *How Many People Can the Earth Support?*\*

*Past Human Population*

*Population Size and Growth*. Two thousand years ago, the Earth had roughly one quarter of a billion people (the population of the United States around 1990). By 1650 the Earth’s population had doubled to half a billion. When the Old World and the New World began to exchange foods and other resources in a serious way, the time required to double the population dropped from more than sixteen centuries to less than two centuries. The human population passed one billion around 1830. The second billion people were added in only one century, by 1930. The next doubling, to four billion, took only forty-four years. Until around 1965, the human population grew like an interest-bearing account in which the rate of interest increased with the balance in the account. Around 1965-1970, the global population growth rate reached its all-time peak, then began to fall gradually and erratically. It still remains far above global growth rates experienced prior to 1945. In the lifetime of anyone who is over forty, world population has doubled. Never before the second half of the twentieth century had any person lived through a doubling of world population. In absolute numbers, putting the first billion people on Earth took from the beginning of time to about 1830. Adding the latest billion took twelve years.

In spite of this rapid population growth, by demographic and nutritional standards average human well-being has improved. For the world as a whole, life expectancy at birth rose from 46.4 years in 1950-1955 to 64.4 years in 1990-1995, an increase of 18 years. The advantage in life expectancy of the more developed regions over the less developed regions fell from twenty-six years in 1950-1955 to twelve years in 1990-1995. In developing regions, the absolute number (and the proportion) of people who were chronically undernourished fell from 941 million around 1970 to 786 million around 1990. In Africa, contrary to the world trend, the absolute number of chronically undernourished increased by two thirds between 1970 and 1990.
Africa also had the highest population growth rates during that period—and still does. *Economic Growth and Growing Economic Disparities*. In the aggregate production of material wealth, the half-century since 1945 has been a golden era of technological and economic wonders. For example, in constant prices, with the price in 1990 set equal to 100, total food commodity prices fell from 196 in 1975 to 85 in 1992. The price of petroleum fell from 113 in 1975 to 76 in 1992. The price of a basket of thirty-three nonfuel commodities fell from 159 in 1975 to 86 in 1992. However, timber prices increased from 62 in 1975 to 112 in 1992. For many economists, the declining prices mean that human welfare is improving. Many participants in efficient market economies might agree. But global market prices, while useful for coordinating economic activity, are not universally reliable signals of changes in human well-being for at least three good reasons. First, global prices do not reflect the depletion of unowned stocks, such as marine fisheries, the ozone layer, or water in internationally shared rivers and aquifers. Second, prices need not reflect all environmental and social costs unless laws and practices bring those costs into the costs of production. Environmental and social costs may arise from extracting natural resources or from disposing of unwanted products and may be felt locally or globally, immediately or in the future. For example, in a local community, if a coal mine leaves behind an open pit or unfilled shafts, the price of coal does not reflect toxic effects of the mining, local erosion, or increased runoff. If the pit or mine is abandoned when the vein runs out, the price of coal does not reflect the cost of the collapse of the mining community left behind. Advertisement Likewise, market prices need not reflect future consequences of unwanted products such as spent nuclear fuels, carbon dioxide from power generation, solid wastes from discarded packaging and consumer goods, or asbestos, chlorofluorocarbons, and persistent pesticides. Assessing the costs varies in difficulty, from a relatively easy case like nontoxic solid waste, with a well-developed market in some countries, to a relatively hard case like chlorofluorocarbon disposal, apparently with no present market. A third reason that prices are not always indicators of human well-being is that markets respond to effective demand, not to human need. Food commodity prices have dropped by half, while three fourths of a billion people in developing countries chronically do not eat enough calories to grow normally and walk around, because the bottom billion are so poor that they cannot exercise effective demand in world commodity markets. They have no money to buy food, so they cannot drive up its price. The extremely poor are economically invisible. As the world’s average economic well-being has risen, economic disparities between the rich and the poor have increased. In 1960 the richest countries with 20 percent of world population earned 70.2 percent of global income, while the poorest countries with 20 percent of world population earned 2.3 percent of global income. Thus, the ratio of income per person between the top fifth and the bottom fifth was 30 to 1 in 1960. In 1970 that ratio was 32 to 1; in 1980, 45 to 1; in 1991, 60 to 1. In constant dollars, the absolute gap between the top fifth and the bottom fifth roughly doubled during this period. 
While the global number and the global fraction of chronically undernourished people fell over recent decades, the share of global income earned by the poorest 20 percent of people fell even faster. Even if there is no global shortage of food relative to effective demand, and even if global food prices are steady or falling, a global pattern of local hunger in parts of Africa, south Asia, and Latin America is a serious problem. *Environmental Impact and Vulnerability.* In the minds of many, human action is linked to an unprecedented litany of environmental problems. A grim list prepared by the demographer Paul Demeny in 1991 includes loss of topsoil, desertification, deforestation, toxic poisoning of drinking water, oceanic pollution, shrinking wetlands, overgrazing, species loss, loss of wilderness areas, shortage of firewood, siltation in rivers and estuaries, encroachment on arable land, dropping water tables, erosion of the ozone layer, global warming, rising sea levels, consumption of mineral resources, nuclear wastes, and acid rain. Demeny complained that ecologists rarely provide enough information to quantify the relative importance of these problems in specific locales. More information is needed to evaluate the trade-offs among these problems. For example, what are the trade-offs among burying municipal wastes (soil and groundwater contamination), incinerating them (air pollution), dumping them offshore (marine contamination), and reducing them at the source (changes in manufacturing and packaging technology, consumer expectations and habits, laws and prices)? Environmental vulnerability increases as humans make contact with the viruses and other pathogens of previously remote forests and grasslands. The number of people who live in coastal cities rapidly approaches one billion. Vulnerability to a rise in sea levels increases with the tide of urbanization. *Cultural Implosion*. In recent decades, migrations from rural to urban regions and between countries, as well as business travel, tourism, radio, television, telephones, faxes, the Internet, cassettes, newspapers, and magazines, have shrunk the world stage, bringing cultures into contact and sometimes into conflict. In 1800 roughly 1 in 50 people lived in cities; by 1995 almost 1 in 2 did. In 1950 the world had one city with more than 10 million people (greater New York). According to a United Nations study, in 1994 the world had fourteen cities with more than 10 million people. Of those, only four were in rich countries (in decreasing order: Tokyo, New York, Los Angeles, Osaka); the remaining ten were in developing countries (in decreasing order: São Paulo, Mexico City, Shanghai, Bombay, Beijing, Calcutta, Seoul, Jakarta, Buenos Aires, Tianjin). On every continent, people who vary in culture, language, religion, values, ethnicity, and race—and who share the same space for social, political, and economic activities—have increasingly come into direct contact. The resulting frictions are evident in all parts of the world. Between 1970 and 1990 the number of women who were economically active (that is, working for pay or looking for paid work) rose from 37 per 100 men to 62 per 100 men, while the world’s population growth rate fell for the first time in modern history. Because of these changes in the roles of women, the number of economically active people rose much faster than the number of people of working age. Problems of employment are influenced as much by economic and cultural factors as by sheer population growth. 
Advertisement At the International Conference on Population and Development in Cairo in 1994, many delegates strongly advocated empowering women through education, paid jobs, credit, property rights, contraception, and political power. Many people believe that if more women had such opportunities, population growth in many places might well be slower, in addition to the direct benefits such empowerment would give women. But in many cultures, empowering women in these ways conflicts directly with the goal of maintaining “full respect for the various religious and ethical values and cultural backgrounds,” a goal often repeated in the final document of the Cairo conference. Cultural conflicts over women’s and men’s status, roles, and rights will not go away soon. In summary, concerns about how many people the Earth can support involve not only population but also economics, the environment, and culture. *The Present* As of 1997, the world had about 5.8 billion people. At current birth rates, the worldwide average number of children born to a woman during her lifetime (the total fertility rate) is around 3.0. The population would double in forty-seven years if it continued to grow at its present rate of 1.5 percent per year, though that is not likely. These global summaries disguise two different worlds: the rich and the poor. The average number of children per woman ranges from almost 5.6 in Africa and 3.4 in the developing countries as a whole to 1.6 in the wealthy countries. In 1995 the 1.2 billion people in the world’s richest countries enjoyed an average annual income of $19,300—a truly astounding achievement. The remaining 4.5 billion averaged roughly $1,000 per year. The poorest 2 billion lived on average incomes of $400 a year, or a dollar a day. Roughly one in three people on Earth is infected with tuberculosis. Roughly half of the people on Earth have no toilet. A billion adults are illiterate, and two thirds of those are women. *Possible Futures* The future of the human population, like the future of its economies, environment, and culture, is highly unpredictable. The United Nations regularly works out the demographic consequences of assumptions that it considers plausible and publishes projections in a range from high to low. A high projection published in 1992 assumed that worldwide average fertility would fall to 2.5 children per woman in the twenty-first century. In this scenario, population would grow to 12.5 billion in fifty-five years—within the lifetime of some of our children. The 1992 low projection of the UN assumed that population would peak at 7.8 billion in 2050 before beginning to decline. One source of uncertainty that most demographers overlook is this: Can the Earth support the billions of additional people that the UN projects for 2050? Can the Earth continue to support the nearly 6 billion people it has now, at present levels or better? How many people can the Earth support? In 1679 Antoni van Leeuwenhoek estimated not more than 13.4 billion. In 1994 five authors independently published estimates ranging from fewer than 3 billion up to 44 billion. Between 1679 and 1994 at least sixty additional estimates were published. These sixty-five estimates of the Earth’s maximum population range widely, from less than one billion to more than 1,000 billion. There is neither an increasing nor a decreasing trend in these estimates. The scatter has increased with time, contrary to what one might expect from estimates of a constant of nature. 
One conclusion is immediate: many of the published answers cannot be nearly right—or there is no single right answer. Why there is no single right answer becomes clear when the methods used to obtain these estimates are examined carefully. One commonly used method assumes that a single factor, usually food, constrains population size. (That population often grows fastest in countries with the least food and slowest in the countries where food is most abundant does not seem to deter those who assume that food limits national population growth.) An estimate of the maximum possible annual global food production is divided by an estimate of the minimum possible annual food requirement per person to find the maximum possible number of minimal shares that the food supply could be divided into, and this number is taken as the maximum number of people the Earth can support. The maximum possible food production depends not only on environmental constraints like soil, rainfall, terrain, and the length of the growing season, but also on human choices, individual and collective: which plant and animal species are chosen for cultivation; the technology of cultivation; credit available to farmers; farmer education; infrastructure to produce and transport farm inputs (including irrigation capacity and hybrid seed development); infrastructure to transport, store, and process farm outputs; economic demand for food from other sectors of the economy; and international politics and markets that affect trade inputs and outputs. Culture defines what is food: where a Hindu may see a sacred cow, an American may see a hamburger on hooves. If edibility alone determined what is food, cockroaches would be in great demand. The minimum food requirement depends not only on physiological requirements (about 2,000 kilocalories per person per day, averaged over most national populations) but also on cultural and economic standards of what is acceptable and desirable. Not everyone who has a choice will accept a vegetarian diet with no more than the minimum calories and nutrients required for normal growth. Many authors of maximum population estimates recognized the difficulty of finding a single answer by giving a low estimate and a high estimate. The middle value, or median, of the high estimates is 12 billion. The median of the low estimates is 7.7 billion. This range of low to high medians, from 7.7 to 12 billion, is very close to the range of low and high UN projections for 2050: from 7.8 billion to 12.5 billion. Recent population history has rapidly approached the level of many estimated limits, and the UN projections of future population lie at similar levels. Of course, a historical survey of estimated limits is no proof that limits really lie in this range. It is merely a warning signal that the human population has now entered, and is rapidly moving deeper into, a zone where limits on how many people the Earth can support have been anticipated and may be encountered. How many people the Earth can support depends both on natural constraints, which are not fully understood, and on human choices. Many of these choices are unconscious decisions made by millions and billions of people in their daily lives (turn off the light when you leave the room, or leave it on; wash hands before eating, or don’t bother; pick up litter in the schoolyard, or add to it). 
The cumulative results of what may be unconscious individual actions amount to major human choices: consume more or less fossil fuel; spread or prevent infectious diseases; degrade or beautify the environment. Personal and collective choices affect the average level and the distribution of material well-being; technology; political institutions governing individual liberty, conflicts, and change (compare the breakup of Czechoslovakia with the breakup of Yugoslavia to see the impact of politics on the resources subsequently available for human well-being); economic arrangements regarding markets, trade, regulation, and non-market consequences of market activities; family size and structure, migration, care of the young and elderly, and other demographic arrangements; physical, chemical, and biological environments (do we want a world of humans and wheat only?); variability or stability; risk or robustness; the time horizon (five years ahead, or fifty, or five hundred); and values, tastes, and fashions. I emphasize the importance of values. Values determine how parents trade off the number of children against the quality of life their children will have; how they balance parents’ freedom to reproduce and children’s freedom to eat. Many choices that appear to be economic depend heavily on individual and cultural values. Should industrial economies seek now to develop renewable energy sources, or should they keep burning fossil fuels and leave the transition to future generations? Should women (and, by symmetry, should men) work outside their homes? How many people the Earth can support depends in part on how many will wear cotton and how many polyester; on how many will eat beef and how many bean sprouts; on how many will want parks and how many will want parking lots; on how many will want Jaguars with a capital *J* and how many will want jaguars with a small *j*. These choices change with time and circumstance, and so will how many people the Earth can support. *Implications for Action* What could be done now to ease future trade-offs in making these choices? The “bigger pie” school says develop more technology. The “fewer forks” school says slow or stop population growth and reduce wants per person. The “better manners” school says improve the terms under which people interact (e.g., by defining property rights to open-access resources such as fisheries and groundwater to prevent uneconomic exploitation, removing economic irrationalities, reducing inequities and organized violence, improving governance). There is much value in all these approaches. None is sufficient by itself. Even in combination, they will not eliminate the need to make choices among competing values. Lack of certainty about future constraints and choices does not justify lack of action now. Whenever I ride in a car, I put on my seatbelt, though I do not expect to be involved in a crash. I carry life insurance for my family, though I do not expect to die tomorrow. 
It is not necessary to be able to project the future with precision to recognize that more than 100 million women of childbearing age are estimated to lack desired access to means of fertility control; that, as Christopher Colclough and Keith Lewin have pointed out, 130 million girls and boys officially eligible for primary schooling in developing countries are out of school; that three quarters of a billion people, more or less, were hungry yesterday, are hungry today, and will be hungry tomorrow; that humans leave their mark on the land, sea, air, and other species with which we share the planet; and that while life is better today for many people than it was in the past, there are also many people for whom life is more miserable than the available means require. We need no projections to identify problems that require action today. *Pyramid of Population, Economy, Environment, Culture* Many of the current statistics and future projections quoted here will change. But one message will remain useful: Population problems are not purely demographic. They also involve economics, the environment, and culture (including politics, law, and values). Population, economy, environment, and culture may be envisioned as the corners of a symmetrical tetrahedron or pyramid. This image is my mental prophylaxis against omitting important dimensions when I listen to discussions of population problems. Each major dimension interacts with all three of the others. The symmetry of the pyramid means that culture or the environment or the economy could be placed on top without changing the message. But this pyramidal image is too simple in an important respect. Reality has not just a single pyramid, but thousands or millions of such pyramids, scattered over the globe, wherever humans live. Many of these local pyramids interact strongly over great distances, through worldwide financial and economic integration, through our shared commons of atmosphere and oceans and living species, and through global exchanges of people, microbes, and cultural symbols. Population problems vary from place to place but are globally interlinked. The real issue with population is not just numbers of people, although numbers matter and statistics give us quantitative insight and prevent us from making fools of ourselves. The real crux of the population question is the quality of people’s lives: the ability of people to participate in what it means to be really human; to work, play, and die with dignity; to have some sense that one’s own life has meaning and is connected with other people’s lives. That, to me, is the essence of the population problem. This Issue October 8, 1998 - * Norton, 1995. ↩

downloaded: true | meta_extracted: true | parsed: true
description: The question "How many people can the Earth support?" is useful, though it is seriously incomplete. It focuses attention on the present and future
filedate: 2024-10-12 00:00:00 | date: 1998-10-08 00:00:00
image: https://www.nybooks.com/…ocial-1600px.png
pagetype: article | hostname: nybooks.com | sitename: The New York Review of Books
tags: null | categories: null

id: 12,002,942
url: http://laptops.reviewed.com/content/lg-gram-15-laptop-review
title: Did LG make a laptop that's too light?
author: Michael Desjardin
markdown:
The **LG Gram 15** (MSRP $999 as tested) aims to do exactly that, with an emphasis on that last part. It offers all of the benefits of a 15-inch display without the weight that often comes with laptops this big. In fact, the LG Gram is so light it's almost inherently confusing; it takes a while for your brain to accept the fact that, yes, this is a functioning laptop. How well it functions, however, is another story. I love the portability of the LG Gram and find its design to be refreshingly minimalistic, but its trackpad is nightmarish and the Gram's battery is just as much of a lightweight as the laptop itself.

At the end of the day, I wish the LG Gram were slightly less expensive than it is. Given the type of hardware packed under its hood, it's apparent that you're paying mostly for the Gram's spectacularly light design. And while the Gram's portability is probably the best thing it has to offer, I'm not convinced that it justifies the price, especially when you factor in its shortcomings.

## About the LG Gram

There are two models of the LG Gram, and the main difference to note between them is their processors. The model we received (the LG 15Z960) features an Intel Core i5 2.8GHz processor, but there's a higher-end model with a 3.0GHz i7 processor, as well. At around $999, the lower-end LG Gram is not the priciest laptop, but given its specifications, it's not exactly a bargain, either. Here's a quick rundown of what the i5 model has to offer:

- Intel Core i5-6200U 2.8GHz processor
- 8GB DDR3L RAM
- 15.6-inch 1920x1080 IPS LCD
- 256GB SATA SSD
- WiFi AC/Bluetooth 4.0

Other than the i7 processor, the only other upgrade you'll be seeing with the higher-end LG Gram is its 512GB hard drive. If you ask me, the less expensive LG Gram is the better deal once you take everything into consideration. An extra $300-$400 is just a little too steep for a laptop of this sort.

## What We Like

**It's an absolute cinch to travel with.** I've watched several people interact with the LG Gram for the first time, and the stages that usually follow are amazement, denial, and hesitation. At just 2.2 pounds (35.2 ounces), the LG Gram might be the lightest laptop I've ever held in my life. When you first pick it up, you find yourself amazed at how little it weighs. Because it's so reminiscent of those prop laptops used to stage rooms at Ikea, you quickly transition into denial: No, it's not possible. Let me open it up and use it just to make sure. Finally, the hesitation sets in. "Do I even *want* a notebook that feels fake? These are expensive—they should weigh a bit more!"

It's not that the Gram feels cheap; it isn't. It's made from a magnesium alloy, no matter how much your brain might think it's high-end plastic made up to look like metal. Unfortunately, we're kind of hardwired to expect a decent amount of heft when it comes to tech products that cost upwards of a thousand bucks. As a result, your first instinct might be to reject the LG Gram's build altogether. Put it in a bag and take it to the park, though, and I think you'll see why a 15-inch, 2-pound laptop is one of life's more freeing sensations. Simply put, there were times when I completely forgot I was traveling with the Gram; the muscles in the shoulder that supports the weight of my messenger bag didn't even register the added weight. If you're out and about and can't find a table to rest it on, the Gram feels like next-to-nothing on your lap.
Yes, the laptop itself feels hollow and flimsy, but this shouldn't be a reflection of its build quality. The LG Gram feels hollow because it's a marvelous feat of engineering. It *should* feel this way. And yes, the comfortable heft of a **MacBook Air** might trigger the "luxury device" synapses in my brain, but I'm sure as heck not going to forget that a MacBook Air is in my messenger bag anytime soon. **It does what it needs to do and it does it well.** Let's get one thing straight right off the bat: The LG Gram ain't a hardware juggernaut. Even the higher-end model which features the Core i7 processor isn't exactly heavyweight in terms of its performance capabilities. The LG Gram is outfitted with an ample amount of connectivity ports, including a USB Type-C and microSD slot. But 8GB of RAM and 2.3 or 2.5GHz of dual core processing power is more than enough for juggling day-to-day tasks, video streaming, and even some light gaming. The LG Gram didn't exactly blow away our hardware benchmarks, but I spent about a week using it for both work-related tasks and post-work lounging, and I never found myself wishing things would move at a faster clip. **The display is crisp, nearly bezel-less, and non-reflective.** Although the Gram's 15.5-inch, 1080p screen doesn't get searingly bright, it manages to block reflections better than most glossy displays. There's only about a quarter of an inch of real estate making up the bezel, too, which means it's narrow enough that it almost disappears from view. Initially I was worried about the display's lack of luminance, but it manages to get dark enough that the superb contrast goes a long way in making up for it. ## What We Don't Like **The keyboard and trackpad are a real drag.** While it’s nice to see enough keyboard real estate to afford a number pad, the keys themselves suffer from a lack of posh and far too much squish. Worse, the keys aren’t backlit, which isn’t something you think you’ll miss until you suddenly find yourself in a room without lights. Experienced typists might shrug this one off, but everyone goes through a period of adjustment when they get their hands on a new keyboard, and trying to do that in the dark is never fun. But the real kicker here—the one thing that got me bent out of shape more than anything else—is the Gram’s trackpad. At around 4.5 by 2.5 inches, there’s plenty of room to comfortably scroll, pinch, and gesture, but the combination of poor ergonomics and Elan’s lackluster driver leaves me wanting to hurl the Gram across the room like the neat, gold frisbee it resembles. Aside from the trackpad’s iffy palm rejection and stifling limitations (heaven forbid you rest your thumb on the thing), the act of clicking requires a forceful, pronounced press that I’m often not ready for. In fact, even after a week, my batting average for actually accomplishing basic point-and-click tasks was embarrassingly low. The clicks themselves are also loud and hollow, which adds an extra dimension to the trackpad's poor functionality. Unfortunately, the LG Gram's stiff, erratic trackpad was somewhat of a deal-breaker for me. **Looking for a laptop with a decent set of internal speakers? Keep on moving.** File this one under “probably not that big of a deal to most people,” but it’s worth noting just how tinny and cloistered audio sounds coming from the LG Gram’s internal speakers. 
Sure, this is mostly a function of the speakers being enclosed in a magnesium alloy, but be prepared for a truly dreadful experience if you don’t already own a Bluetooth speaker. It’s hard to even tell where the audio is coming from. At times I thought nesting the Gram on my lap was covering the speakers, but upon flipping it over, things sounded just as muffled. I suppose it’s fitting that a laptop of such little weight pipes out music that’s airy enough to evaporate just as it reaches the outside world. **Less-than-stellar battery life** The effort to craft a laptop that weighs exactly *oh-no-I-think-I-might-break-it* pounds is an effort that’s claimed a couple of casualties. Most notably is the loss of battery life; the LG Gram’s battery just isn’t as reliable as some might have come to expect from a thin-and-light ultrabook in 2016. To put this in better context, you can reasonably expect three to four hours of regular use and significantly less than that if you plan on doing more punishing work. Given the nature of the Gram, there’s not going to be a whole lot of gaming going on, but what light gaming you can accomplish will certainly chew away at the battery much faster. ## Should You Buy It? I’m a bit torn on this one, to be perfectly honest. I can see myself learning to love the LG Gram somewhere down the line—after all, the specs are adequate enough for basic home and office use, the look is sleek and attractive, and it’s maybe the most effortless laptop I’ve ever used in terms of sheer portability. The camera's located in kind of an awkward spot—be prepared for whoever you're video chatting with to be staring up at your chin. But some of the more clunky elements—the keyboard, trackpad, and internal speakers—are a substantial disappointment. For every bit as effortless as it is to commute with the Gram in a messenger bag, it’s equally as strenuous to use its trackpad. And while it’s insanely cool to use an ultrabook that feels like nothing on your lap, the constant anxiety about battery life isn’t really worth it. These shortcomings would be a heck of a lot easier to deal with if the LG Gram didn’t *start* at around $1,000. This is, to me, a laptop that’s yearning to come in at a slightly more affordable price. It is by no means a beast in terms of its hardware, so what gives? When I run through the spec list, and reflect on my time with the LG Gram, I find myself mentally hunting for a perspective that justifies its price tag. And it’s not an easy task. So here’s what I’ll say: If you’re going to buy the LG Gram, do it because of its freakishly light weight. After all, that’s what LG is selling, right? Buy into the hype if you’re absolutely dead-set on owning a laptop that handles and travels like a breeze. If you’re a stickler for superb functionality, you might want to keep shopping around. There are plenty of ultraportable options that give you more bang for your buck, even if they don’t weigh two pounds. The 15-inch version of the **Samsung Notebook 9**, for example, offers similar performance and portability with a much better trackpad, though it's priced just a bit higher than the Gram. The **Dell XPS 13** features a smaller, 13-inch display, but its hardware is comparable to the LG Gram, and if you're worried about the Gram's shaky battery life, the XPS is much more reliable. And if you're willing to go for a smaller screen, last year's **MacBook Air** is starting to come down in price, and it's still a premium performer, especially when it comes to battery life. 
true
true
true
What this lightweight laptop lacks in battery life it makes up for in portability.
2024-10-12 00:00:00
2016-06-28 00:00:00
https://reviewed-com-res…gram-15-hero.jpg
article
usatoday.com
Reviewed
null
null
14,130,162
http://www.en.netralnews.com/news/opinion/read/3866/want.a.loan.in.china.keep.your.phone.charged
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,124,919
https://techxplore.com/news/2023-11-gpt-falls-short-turing-threshold.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,500,212
http://venturebeat.com/2016/04/14/microsoft-releases-visual-studio-code-1-0-as-the-code-editor-passes-500000-monthly-active-users/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
4,352,050
http://blog.scienceexchange.com/2012/08/improving-transparency-in-scientific-research/
Public Website
null
Loading × Sorry to interrupt CSS Error Refresh
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
17,585,774
http://fortune.com/2018/07/21/uber-drivers-employees-new-york-unemployment/
Uber Drivers are Employees, New York Unemployment Board Rules
David Z Morris
The Unemployment Insurance Appeal Board of New York State has ruled that Uber is liable for unemployment benefits for three drivers, along with others who are “similarly situated.” Uber has said it disagrees with the ruling, and seems likely to appeal it.

The board issued its decision, which can be read in full courtesy of Ars Technica, on Friday, July 13. The decision cites Uber’s extensive recruitment, training, and supervision practices as evidence that drivers are employees, rather than independent contractors. The cited practices include Uber’s use of customer feedback systems, and specific warnings and other notifications issued to claimants in the case.

“The credible evidence establishes that Uber exercises sufficient supervision, direction or control over the three claimants and other similarly situated drivers,” the board wrote, to show that the drivers “were covered employees for purposes of unemployment insurance.”

*Get Data Sheet, Fortune’s technology newsletter.*

Though limited to one aspect of the employer-employee relationship, the decision undermines a core business tactic of Uber and other platforms in the so-called ‘gig economy.’ As argued convincingly in the new book *Gigged* by Sarah Kessler, customer feedback systems, automated dispatch, and similar systems are designed to closely monitor and direct workers without the involvement of human managers. This serves to support the argument that workers, including Uber drivers, are independent contractors rather than employees, without sacrificing the companies’ tight control over them.

That position has come to look increasingly tenuous in recent years. Last November, for instance, Uber lost its appeal of a London decision requiring it to provide drivers with a minimum wage and paid time off. This May, the California Supreme Court made a similar decision about drivers’ legal status.

The New York Taxi Workers Alliance, which represented the three claimants, has described the decision as “historic,” saying that it provides protection for drivers who are “‘deactivated’ — the fancy Silicon Valley word for taking a person’s job away.”

__Recommended newsletter__ **Data Sheet:** Stay on top of the business of tech with thoughtful analysis on the industry's biggest names. Sign up here.
true
true
true
The board said Uber's management practices made it an employer, not merely an intermediary.
2024-10-12 00:00:00
2018-07-21 00:00:00
https://fortune.com/img-…?resize=1200,600
article
fortune.com
Fortune
null
null
35,970,088
https://felipec.wordpress.com/2023/05/16/authoritarianism-in-the-git-project/
Authoritarianism in the Git project
FelipeC
Since time immemorial abuses of power have taken place. The biggest problem with abuses of power is that most people do not see them as such unless they happen to them. Many understand why the poem First they came … works: most people would not defend a group of people they don’t like against tyranny. Screw the Communists, right? This lack of empathy towards out-groups is the reason why tyranny succeeds so easily. When Alex Jones was banned from Twitter without due process, many people cheered, but the people who understand basic principles of justice did not. Even if you hate Alex Jones, even if you think he was obviously guilty of whatever Twitter accused him of, you should still want him to have the opportunity to defend himself. Soon enough Twitter used the same tactic to silence the entire right–as the Twitter Files releases clearly showed–and nobody from the left saw a problem with that (well, a few did). It doesn’t matter if you are on right or the left, you should not root for abuses of power against people you hate. If we don’t believe in freedom of expression for people we despise, we don’t believe in it at all. Noam Chomsky It’s hard to defend people we despise, but there’s a very good reason why principled people should. Johnny Depp was automatically deemed guilty of what Amber Heard accused him of in the court of public opinion. But the mob has no due process. Johnny Depp never had an opportunity to present his side of the story. Does it have to be said that there’s always another side to every story? Apparently yes, since people keep forgetting there’s a reason why the accused is considered innocent until proven guilty. According to the mob, Kyle Rittenhouse was obviously guilty of murder. Does that mean that he was? Is there anyone so guilty that his side of the story does not need to be heard? If you think so, then you are the kind of person who would have sent Tom Robinson to prison, just because of who he appeared to be. It is very tempting to put somebody you despise in prison and throw away the key with no due process. But even actual confirmed Nazis had their day in court: Nuremberg trials. We would all like to believe that if we were given the power to judge people, we would do so in an objective, principled and moral way. But all the evidence points to the fact that most people can’t. In reddit it’s relatively easy to become a moderator of a community–who have the power to remove comments and ban people. Without fail these random people abuse their power. There’s entire communities devoted to expose these abuses of power (e.g. r/modabuse). What recourse do victims of moderation abuse have? **None**. Which is precisely what emboldens these moderators to do whatever they want. Not even judges–who spend their entire professional career studying justice–can unilaterally send someone to prison. Even judges have checks and balances. After this primer on justice, it’s time to introduce our main actor: the Git Project Leadership Committee, who is judge, jury and executioner, wasn’t elected by anybody, makes decisions in secret, is not accountable to anybody, has **zero** checks and balances, doesn’t follow any due process, and does not allow people they accuse of violations to present their case or defend themselves. If that’s not a recipe for abuses of power, I don’t know what is. ## Rules If one doesn’t want to live in total anarchy, rules are important. But if the rules are decided by a chosen few, that’s not particularly better: that’s an autocratic system. 
It’s generally considered better if the rules are chosen by a majority in a democratic fashion. But in order for those rules to serve the interests of the collective, they have to be clear. I have been a moderator in various communities, and I see my role as a servant of the will of the masses. For example, if one community has as a rule “no insults” and I notice an user constantly throwing insults, I would feel a ban is warranted. But if a similar thing happens in another community that doesn’t have such a rule, then I would not be justified in banning a person for that. The same thing happens with laws. A police officer in Mexico may arrest me for drinking alcohol in public, as that’s the law of the land, but not in Finland. It’s irrelevant what I personally think of insults, if a community doesn’t have a rule against them, as a moderator I should not punish people for using them. The same goes for a police officer who is personally against alcohol, but works in a country that doesn’t forbid it. The problem comes when the rules are vague, like “be kind”. In those cases that essentially gives carte blanche to whomever is enforcing the rules to do whatever they want. A moderator could for example argue that he didn’t ban Pedro because he didn’t like him, but because he was “unkind” when he said another member of the community was wrong. Even if you happen to agree with a particular ban because somebody was “unkind”, what usually happens is that a moderator would apply those rules in a biased way. So his friends would not be punished for even worse violations of the same rule. Consider the case of NumFOCUS v. Jeremy Howard. NumFOCUS had a code of conduct which said people must “be kind” to others, and they found Jeremy Howard in violation for merely claiming that somebody else was wrong in a jokingly and light-spirited way. The person who was the recipient of this “unkindness” wasn’t even offended or bothered. So why on Earth would NumFOCUS’ committee find the need to punish Jeremy for saying something completely innocuous such as “Joel Grus is wrong”? We cannot know why they decided to pursue this idiotic decision, but I can think of one possibility: somebody in the NumFOCUS’ committee didn’t like Jeremy. It could certainly be argued that saying “Joel is wrong” is “unkind”, but just as easily the opposite could be argued as well. In jurisprudence there is a concept about distinguishing what a law intended versus what a law says: letter and spirit of the law. So just because an action **could** be considered in violation of a rule, that doesn’t mean that it **should**. A professional judge may understand the distinction between the letter and spirit of a law, but would a reddit moderator do? A professional football player may know all the rules of the sport, but that doesn’t mean he would be a good referee. Knowing the rules and enforcing the rules fairly are two very different things. So just because some people happened to be in NumFOCUS’ committee that doesn’t mean they actually knew how to enforce the rules of the code of conduct fairly. Especially when the rules were vague. Which is why if members of a community want to prevent abuses of power, they have to choose rules that are clear and hard to misinterpret by the enforcers of those rules (intentionally or otherwise). What are the rules of the Git community? There aren’t any. The current code of conduct is a verbatim copy of the Contributor Covenant that is not allowed to be discussed. 
But the Git Project Leadership Committee (PLC) does not understand the intention or the scope of such document. For example, I was the only person in the community that actually read the document and suggested modifications: [PATCH] CODE_OF_CONDUCT: expect tolerance, not respect. I was told by the maintainer to suggest the modification to Contributor Covenant, not Git, which I did. In my request I suggested changing the wording of one item in the section Our Standards: - Being respectful of differing opinions, viewpoints, and experiences I was told this item was not problematic, because the list is of **examples**, the Contributor Covenant does not **demand** respect: The document doesn’t demand respect. That section provides examples of what contributes to a positive community environment. Kevin Laude — Member of The Organization for Ethical Source How does the Git PLC interpret this item? They **demand** respect. They transform an **example** of good behavior listed in a **guideline** to a prescriptive rule: “be respectful”. This goes against the intention of the document, according to a member of The Organization for Ethical Source. And the reason why I objected to this rule is that it’s impossible to follow and enforce. I used the example of Cambridge University’s debate about precisely these kind of rules: Cambridge University votes to safeguard free speech. A vote on free speech at Cambridge University has strongly rejected guidelines requiring opinions to be “respectful” – after warnings this could limit freedom of expression. BBC News Professional philosophers, public intellectuals, and plenty of people much more eloquent than I, made the case that it was impossible to demand respect, just like it’s impossible to demand that people find a joke funny. You can ask people to pretend that a joke is funny, but that’s just an act. And what is an enforcement team supposed to do when a member of the community doesn’t follow this “example” and does not respect an idea? Take for example this response by Junio C Hamano, the maintainer: So I do not think this is not even a bikeshedding. Just one side being right, and the other side continuing to repeat nonsense without listening. Junio C Hamano Not very respectful. Should Junio ban himself for not following an example of good behavior? When the rules are unclear, vague, or not even rules, that gives enforcers the flexibility to do whatever they want, and use these “rules” as excuse to punish people they don’t like, contrary to the wishes of the community. ## The will of the people If the Git project has the Contributor Covenant document in its source code (CODE_OF_CONDUCT.md) that must mean that the community chose this to be their code of conduct, right? No. The Git project is not a democracy, it’s a dictatorship with a benevolent dictator for life model. And that dictator is Junio C Hamano. There has been many instances in which the entire community is in favor of a change, but Junio is against, so the change doesn’t happen. See: The git staging area, the term literally everyone agrees with. So the fact that this document was merged only means one thing: Junio C Hamano approved it. That’s it. Sure, in many instances the community agrees with Junio, which is why there is no open revolt (although many people have left the project). But just because something was merged that doesn’t mean the community was in favor. So what did the community say when Jeff King (Junio’s second-in-command) proposed this document? 
Re: [PATCH] add a Code of Conduct document

We are decent people, and know how to behave properly and treat each other with respect. It is my fundamental assumption that all future contributors are decent and respectful human beings as well. A CoC like this, which is “explicit about the behavior we want to model” (quoting the original discussion starter) inherently insinuates that we aren’t decent, and can’t behave without being told how to do so. Frankly, I find this borderline insulting to me, to my parents, to all fellow contributors, and to future contributors as well. There are locations, nationalities and cultures, where the avarage wide-spread CoCs, like Contributor Covenant and its derivatives, are perceived as (paraphrasing) too “American”, politically overcorrect, corporate BS, etc., which are forced upon open-source projects. Consequently, such CoCs are often found rather discouraging, and announcements about their adoption in open-source projects generally get negative reaction. Less is more. Much-much more. A concise CoC that treats its readers as responsible, well-behaved human beings is met with much approval. Take, for example, the TrueOS Rules of Conduct, which in just a few short sentences covers everything that’s worth covering: https://www.trueos.org/rulesofconduct/ If diversity and inclusion of other cultures is indeed a priority, then we should carefully consider that some potential contributors will rather choose not to contribute because of a CoC like this.

SZEDER Gábor

Re: [PATCH] add a Code of Conduct document

I feel uncomfortable with this being left so wide open. First of all, I know that the power probably won’t be abused but I don’t think probably is good enough. As I said above, I couldn’t find a public list of the people who were on the project committee. Perhaps that’s because my Googling skills are bad but I feel uncomfortable knowing that anyone will be given judge, jury and executioner power, let alone people whom I don’t know anything about. I’m okay with leaving it open for now but I think I would be a lot more comfortable if we had the interpretations document to close up the vagueness later. Thanks for starting the discussion,

Denton Liu

Re: [PATCH] add a Code of Conduct document

I agree with this. I would certainly like to know who the people who will judge these cases are. I want to add another question: what will the judgement process be like? Will it be an open discussion on this list, or will it be decided behind closed doors by the committee, and we just get to hear the results? While there might be no plans regarding this as of now, I’d really like an open discussion regarding these issues that arise in the future, rather than it being decided behind closed doors with us regular old contributors getting no say in it. A closed discussion would be much more prone to power abuse, if any.

Pratyush Yadav

Re: [PATCH] add a Code of Conduct document

> Other conduct which could reasonably be considered inappropriate in a professional setting

This is very vague. It could match eating at your desk, tipping, not tipping, not wearing a tie, or talking back to a senior developer.

René Scharfe

Re: [PATCH] add a Code of Conduct document

> I tried looking it up but I couldn’t find who the project committee members are. Is this list published anywhere?

More on that later… To be honest, I’m a little worried about it too. What if I have a problem with someone on the project committee? What if I have a problem with someone I don’t know is on the project committee?
I helped my other FOSS project to adopt a Code of Conduct earlier in the year (https://github.com/openbmc/docs/blob/master/code-of-conduct.md) and we got around this by asking for volunteers from the technical steering committee to agree to have their contact info listed on the escalation path; at the end of the escalation path we also listed someone external to the project (which we were able to do because we had been adopted by the Linux Foundation, and they have someone for that). A possible con of being on this escalation path is having your name and contact info outed to trolls as a supporter of something controversial like a code of conduct. However, I’d argue that the growing list of ACKs on this thread expose us in a similar way. On the other side, the benefit of having a transparent escalation path like this is that you can bypass a problematic individual who may be in a position of power. It also provides an opportunity for increased discretion in delicate situations like the example Peff gave downthread. [snip…] While I’d feel more comfortable with a “menu” of folks I could escalate a concern to in private, I otherwise like this document and agree with sentiment elsewhere in the list that it will not change the way the project behaves now – only write down the current state. ACK from me. Emily Shaffer None of these concerns were addressed. One counter-argument was the following: Re: [PATCH] add a Code of Conduct document > If diversity and inclusion of other cultures is indeed a priority, then we should carefully consider that some potential contributors will rather choose not to contribute because of a CoC like this. Let me be blunt for a minute. The proposed CoC would not change anything for any contributor I consider decent. Not one thing. There would not be any need to change any behavior, no need to complain, they could just read the CoC and say: “Yep, that’s right, that’s exactly how I want to behave, and that’s how I want the others in this project to behave. Back to this bug I wanted to debug/this feature I wanted to implement…” Johannes Schindelin As we will soon see, this was not true, the Code of Conduct document did change things. But what I find most revealing is the responses from Jeff King who argued that a) nothing would change and b) disputes should ideally be debated in the open, which wasn’t true. In particular, I think this paragraph is key. I don’t think this CoC is asking to change anything about how we work or communicate currently. I view it as writing down the status quo (which is valuable for the reasons I mentioned in the commit message). … I think we’ll have to approach this on a case by case basis to some degree. If people are having conflict on the mailing list, I’d like to see it resolved there, too. If somebody is sexually assaulted at the Git Contributor Summit, that probably needs to be handled with more discretion. Keep in mind that a lot of this isn’t changing the status quo. When we had a problem on the mailing list in the past, it was discussed on the list andin private. And ultimately decisions came down to the maintainer: am I going to start ignoring this person’s patches, or will I continue to encourage them to interact with the project even though they’re causing problems.So I think a lot of this is really just writing down the current practice. Jeff King Sure, some people were in favor of merging this document, but what about the concerns of all the other? As we know: they don’t matter. 
Only the opinion of a single person matters: Junio. To the surprise of no one who understands consequences, this document did change things, and the unelected committee used it to censor and ban people **in secret**.

## My ban

On 2023-05-13 I was notified of my permanent ban from the Git project by the Git Project Leadership Committee. In this post I’m not going to dive into my alleged violations (which essentially boil down to “you were rude”). If you want to see my alleged violations and my response, here’s the post: My ban from the Git project: the defense I was denied. I will write another post explaining the context in which the ban took place, which I think is much more important, but it’s essentially: Junio C Hamano **abhors** me.

All I’m going to criticize here is the manner in which this decision was made. Even if I’m guilty, even if you despise me, for the reasons I stated above: I should still have an opportunity to defend myself.

Git’s PLC did not allow me to defend myself; they didn’t even reply when I asked them if I was going to be allowed to defend myself. They simply notified me of my permanent ban, blocked my address from the mailing list and stopped all communication effective immediately. I was never warned of an impending ban, I was never asked to change my “rude” tone, none of my comments on the mailing list were removed, I was never notified of any complaints against me. **Nothing**. Five hours after my last patch I was notified of my permanent ban with no warning and no recourse. That was it.

If you go to the list of contributors, you can see I’m #16: contributors to git/git (**update**: I used to be #16, apparently they have deleted me from the list of contributors). Even though Junio C Hamano despises me, I still managed to land 527 patches that he could not reject while still keeping a semblance of being a “reasonable maintainer”. But the number of patches that he ignored is far higher: 1077. My first message to the list (About git pretty) was in 2008, making it 15 years of contributions now.

### Junio

Junio’s first reply was negative:

FWIW, I don’t like it.

Junio C Hamano

But other people disagreed with Junio and liked my idea, which is why now you can do “`git log --format=%s`” instead of “`git log --pretty=format:%s`”. This started a long list of debates in which I turned out to be right, and Junio wrong. A list which I will make public, but it will take time to compile, as summarizing 15 years of discussions is not an easy task.

But you can see hints of that hatred in his very last message to me, in which he is so unhinged he could not even write correctly:

> … Either way, I am not done with the thread, as I said.

Eh, yes I am done. drop “not”.

Junio C Hamano

For the record, in that thread Junio was proven wrong and I was right (once again), and 23 hours after that I received the notification of my permanent ban. Of course, this isn’t proof of anything, and you’ll probably want to give Junio the benefit of the doubt, but here are a few comments from him from just the last few days:

You either did not read what I wrote earlier in the thread or you are trying to paint a picture that is different from reality with whatever motive you have. I am not interested in getting involved in unproductive arguments with you (or with anybody else for that matter). This will be the last message from me on this.
I wouldn’t have even seen the message I am responding to, as I’ve already done my “once every few days sweep the spam folder to find things to salvage”, but somebody notified me of it, so…

Junio C Hamano

What was my sin that made him react this way? I dared to criticize the commit message of one of his patches (Re: [PATCH v2] diff: fix interaction between the “-s” option and other options), something he routinely does with **all** my patches, and the patches of most people.

And Git’s PLC has the audacity to claim they enforce these guidelines?

- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback

This is just the last few days. Over the course of 15 years Junio has insulted me **repeatedly**, while on the other hand I’ve never insulted him (unless you consider disagreeing with him to be an insult). If you think this behavior is somehow only directed towards me, here is Junio insulting Scott Chacon, author of the Pro Git book and a prolific member of the Git teaching industry, along with many other Git-related projects:

As you are supposed to be one of the top-level Git Teachers, I wish you knew better. Here is a free Git lesson.

Junio C Hamano

Does that look like Junio is being respectful of differing opinions? Here’s Junio attacking a random contributor for merely pointing out that he did not merge two previously proposed solutions:

Do you mean twice they were both found lacking, substandard, useless, uninteresting, buggy, incorrect, or all of the above?

Junio C Hamano

And here he is sneeringly dismissing a valid complaint about the way a command works:

Don’t do that, then.

Junio C Hamano

Some of you might still want to give Junio the benefit of the doubt and argue that these are cherry-picked, they happened a long time ago, there was no Code of Conduct then, etc. To those of you I say: you are right, wait for the full article in which I’ll expose Junio’s behavior. These are just a sample I happened to have at hand because they happened in discussions I already wrote about. There are many more.

Just one more example that I think perfectly encompasses the kinds of interactions between Junio and me.

> I think the opposite: “helped-by” encompasses virtually anything…

But your own stats disagrees with your opinion, so don’t invent a new thing, period.

Junio C Hamano

Here Junio is complaining that I used the wrong attribution: `Comments-by: Jeff King`. In Git parlance these are called commit message trailers, and go at the end of a commit message to show how a person contributed in the development of a patch (e.g. `reviewed-by`, `reported-by`, `tested-by`, etc). In Junio’s opinion I should have used `Helped-by` instead of `Comments-by`, but I disagreed and explained why.

As I said before: Junio reacts poorly to his opinion being criticized, so he is telling me to just do as he says and use the established `Helped-by` instead of my “invented” `Comments-by` and shut up. Unfortunately for Junio, statistics is an area in which I’m well informed, and I know that many things in this kind of data follow a Pareto distribution, which means a lot of the assumptions people make for normal distributions do not apply. When I adapted a statistics script I was already working on to git commit message trailers, I found out that of the 155 unique commit trailers, 100 of them had been used only **once**.
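For readers who want to check numbers like these themselves, here is a minimal sketch of how such trailer statistics could be gathered from a local clone of git.git. This is not the script mentioned above; the "-by"-style regex is a rough heuristic rather than git's own trailer parser, and the repository path is an assumption.

```python
# Rough sketch (not the script referenced above): count how often each
# "-by"-style commit-message trailer appears in a local clone of git.git.
import re
import subprocess
from collections import Counter

REPO = "git"  # path to a local clone of git.git (assumption)
TRAILER_RE = re.compile(r"^([A-Z][A-Za-z-]+)-by:", re.MULTILINE)

def trailer_counts(repo: str) -> Counter:
    # %B prints the raw commit body; %x00 emits a NUL byte between commits.
    log = subprocess.run(
        ["git", "-C", repo, "log", "--format=%B%x00"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for body in log.split("\x00"):
        counts.update(m.group(1) + "-by" for m in TRAILER_RE.finditer(body))
    return counts

if __name__ == "__main__":
    counts = trailer_counts(REPO)
    singletons = sum(1 for n in counts.values() if n == 1)
    print(f"{len(counts)} unique '-by' trailers, {singletons} used only once")
    for trailer, n in counts.most_common(10):
        print(f"{n:7d}  {trailer}")
```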
In my response in which I completely destroy Junio’s argument, I also mention that I found the following in the coding guidelines: You can also create your own tag or use one that’s in common usage such as “Thanks-to:”, “Based-on-patch-by:”, or “Mentored-by:”. CodingGuidelines So Git’s own coding guidelines sided with me, and said I could “invent” my own trailer (which wasn’t even true, as it was already used before). Even worse: I found that of the trailers that had been used only once, many of them had been done by prolific Git developers, including… Junio C Hamano: - brought-to-attention-by - discussion-triggered-by - heavylifting-by - initial-fix-by - tested-on-freebsd-by - tested-on-aix-by - tested-on-mingw-by - reference So Junio had done what he told me to not do **multiple times**. Why is Junio ordering me to not do something all developers do, including him? I think many of you probably have a pretty good idea why. This is just the tip of a huge iceberg. ## PLC Git’s Project Leadership Committee is currently composed of Christian Couder (GitLab), Junio C Hamano (Google), Taylor Blau (GitHub), and Ævar Arnfjörð Bjarmason (Google (probably)). There’s not a single member from the open source community (unaffiliated). The livelihood of every single one of them depends on their ability to convince Junio of merging their patches for which ingratiating themselves with him certainly helps, and criticizing him does not. So these are not independent people with the best interest of the community at heart. They were not even chosen by the community. Their role initially wasn’t even to enforce a Code of Conduct, because there was no Code of Conduct. In a private conversation, Ævar confided to me that he was talked into being a member of the committee and he accepted because he thought it was going to be mainly about dealing with mundane administrative tasks, not these kind of enforcement decisions, which he clearly did not want to do: I’m rather uncomfortable with banning anyone from something like a free software community. Ævar Arnfjörð Bjarmason When Johannes Schindelin argued in favor of the Code of Conduct by saying it was OK to leave it vague, because the members of the committee are trusted by the community, and ultimately it’s all about trust, he made a fatal mistake. The members of the committee are trusted by the community when it comes to **code**. The community has **zero idea** the kind of moderation decisions these people would make, and have made. There is no transparency. All the decisions are made in secret and there’s no accountability. Even when it comes to what the community knows these people for–their code–they are not infallible, so they still have to send their patches for review. Even Junio. Why on Earth would anybody believe these people are making good moderation decisions behind closed doors, when they haven’t even seen a **single one** of them? I have been on the other side of the moderation, as I reported the following personal attack: Just a heads-up that Felipe has a history on this list of creating long, never-ending, pointless rant threads so sometimes the best course of action is to ignore him. David Aguilar Junio said they would look into it, but whatever their decisions was, I was never made aware of it. David Aguilar ended up giving some kind of half-assed apology, and that was that. I personally do not care, but if we are supposed to hold each other to some kind of standard, then why doesn’t the PLC explain what that standard is in specific instances? 
Did they think what David Aguilar said was A-okay? If so, then other people could take it as a cue that throwing personal attacks is fine. If not, then we would all know where the line is. For the record, unlike David Aguilar and Junio C Hamano, I do not throw personal attacks. Which makes it all the more ironic that the person who does throw personal attacks bans a person who doesn’t, on the basis of being “rude”.

Just because these people are good programmers doesn’t mean they make good judges. Proof of that is that they don’t even understand what due process is. The first time the PLC contacted me in July 2021, I explained to them that what they listed were **alleged** violations, not actual violations, and I asked them if they would even allow me to present my case.

This isn’t a trial. This is just us exercising our rights and our duty towards the community and the organizations behind the community. One way to look at this is as if a Public Benefit organization was organizing an open party in a space it owns and had appointed some staff and published a Code of Conduct to ensure the security and well being of the participants. And this is as if the staff asks you to leave the party for some time because there have been complaints related to your behavior. You just don’t have any right to stay there if the owners or people appointed by the owners ask you to leave.

Jeff King

This is not about me having any rights, this is about them potentially making the wrong decision. The whole point of a moderator is **judging** if a complaint is valid or not, not just counting the number of complaints. Under Jeff’s vision of justice, all unpopular people belong in prison.

Even if I was guilty of whatever I was accused of, I should still have the opportunity to present my case and potentially make them reconsider. Even actual Nazis were granted a trial. The Git PLC believes people accused of CoC violations deserve less consideration than actual Nazis. People accused of violations are not even guilty until proven innocent; they are just guilty.

## Damned lies

Unlike the previous warning in July 2021, the permanent ban was announced publicly in the mailing list: Git’s Code of Conduct, and community updates. In that message the PLC makes a number of claims which are, let’s just say… **not accurate**.

Previous violations and related complaints resulted in the PLC’s July, 2021 decision to temporarily ban him from the community for three months.

Git PLC

This is **not true**. In July 2021 the PLC sent me a warning with a request for me to stop interacting with a number of people. The request verbatim in full was:

**No interaction from you with the people involved in the above emails, including unsolicited interaction with those enforcing the Code of Conduct for 3 months.**

Git PLC

When I requested clarification, I was specifically told that I was **not banned** from the whole community. This is of course not possible for you to verify, because although I do have the emails, they are not public. But what you can verify is that I sent a patch to the mailing list just eight days after the warning. If I was banned in July, then why was I allowed to post in August? And why did both Junio and Ævar reply to my patch, if they knew I was banned? Not only to that patch, but to several subsequent patches.

The answer is simple: that’s a **lie**. I was **not banned** in July 2021. In some alternate history in which I had already been temporarily banned, that would have cemented the PLC’s case for a permanent ban.
But in the real reality that **never happened**. If that was the only lie, you could argue that it was just an honest mistake. After all Taylor Blau was not a member of the committee at the time, and Ævar told me he was rather busy. Perhaps it was just negligence. Except they also said this: Despite our repeated efforts (including applying corrective action as laid out in the CoC), Felipe Contreras has continued to violate the CoC, leading to numerous complaints from community members. Git PLC What “repeated efforts”? The PLC has sent me **three** emails in total: - Git Code of Conduct violations – Christian Couder (2021-08-28) - Re: Git Code of Conduct violations – Jeff King (2021-08-31) - Notice of your permanent ban from the Git project – Taylor Blau (2023-05-13) That’s literally it. The second email only said that they would not be listening to my case. In what universe could this be considered “repeated efforts”? They **never** asked me to apologize for anything, they **never** asked me to retract one of my comments, they **never** removed any of my emails, they **never** mediated any dispute with anybody. Even if you assume that I’m guilty, they didn’t even bother explaining to me why something was considered a violation. This is something that even if you give the PLC the benefit of the doubt, simply cannot be explained. They know they didn’t do **any** due diligence, so they know there were no “repeated efforts”. So why are they saying that? I think it’s very clear: because if they say the truth “we warned him once to not contact some people for three months and never allowed him to defend himself”, that wouldn’t seem like a good basis for a permanent ban. Would it? So they **fabricated lies**. They probably thought it would have been my word against theirs, and since I was immediately blocked from the mailing list, I wouldn’t have an opportunity to defend myself. But they didn’t check to see that there’s actual evidence that I was **not banned** in August 2021, so their lie can be easily verified. I don’t think it’s a coincidence that the notice of the ban arrived 23 hours after Junio’s unhinged reply: they probably rushed their decision, and that’s why they didn’t notice one of their claims can be easily falsified. Which means they made another lie: As such, **we had to make the difficult decision to permanently ban Felipe Contreras from participating in the Git project due to multiple violations of our Code of Conduct**. Git PLC It was not a difficult decision, they rushed it. They didn’t even wait for the weekend to think it over, they sent the notice on a Saturday. They are just saying that it was difficult to appear diligent. If it actually was a difficult decision, why are they not willing to reconsider it? If they had any kind of compassion, why didn’t they reply to me again? I asked them on Saturday 2023-05-13 if they were going to allow me to defend myself. It’s now Tuesday and there’s still no reply. Not even to say “no, we are not going to hear you”. I don’t think many people can understand the emotional pain that hearing news like these cause. I wanted to react immediately, but I did not want to rush to conclusions, and I gave them the benefit of the doubt. I waited patiently for days, but on Monday I told them to please at least say something, that they were not going to listen to me, that they were discussing internally, something, anything. **No reply**. But here’s how I know that statement was an act. 
If it was indeed a difficult decision, they would be mindful of **why** it should be a difficult decision: because I had been contributing to the project for **fifteen years** entirely on a voluntary basis. Git is a better tool because of me; not having my future contributions is a loss for git users. But somehow when they drafted the notice of the permanent ban, they forgot to **thank me** for all my years of selfless work.

Obviously the reason they didn’t thank me is that they were not even thinking of all my contributions, which is the whole reason why the decision should have been difficult.

Difficult decision my ass.

## Like a filthy dog

This is the part that gets difficult, because when I think about all the hours I’ve devoted to this project **for free**… Let’s just go through it.

In total I sent 6564 messages to the mailing list. 2845 of those were patches. Only 4 mails were used as evidence for my “repeated” unacceptable behavior. 1 was directed towards Junio C Hamano, and 1 to Taylor Blau. To understand the magnitude of what I’m talking about, here’s a graphic of the alleged violations (red) compared to the totality of the messages. The yellow ones are the ones sent after the Code of Conduct document was merged. Keep in mind that the red ones are **alleged** violations I was not allowed to appeal. And the violation is that I was “rude” and “dismissive”. Dismissive towards what? The opinion of… Junio C Hamano.

Again, if you want to judge if these alleged violations have merit, see: My ban from the Git project: the defense I was denied. Here let’s assume that I’m guilty (even though I’m clearly not). Let’s also assume that I’m a detestable person who did not provide value to the project. Sure, let’s assume all that. I still should be allowed to defend myself.

I don’t think anybody could argue that I spent 15 years trying to make git worse. Landing a single patch in git.git is not easy, and I landed 527. Every single one of those was approved by Junio, which means even he couldn’t come up with a reason not to merge them, so they made git better, not worse.

Not only did I contribute to git itself, I also helped develop multiple Git User Surveys (e.g. Re: [RFC] Git User’s Survey 2009 – trial run). I improved the interface of the git-scm.com site, as well as the Git Developer Pages: CSS font fixes, css: improvements to “move the left margin”. I have 93 git answers on Stack Overflow. I also maintain multiple git-related projects, like git-remote-hg (which was once an official tool until Junio decided to pull the plug for some reason), git-completion, git-related, git-reintegrate, and many more.

Not to mention, of course, this blog itself, in which I’ve talked about git extensively. Sure, in some cases to air some grievances, but others are entirely educational, for example:

- Advanced Git concepts: the upstream tracking branch
- What it takes to improve Git or: How I fixed zsh completion
- How a proper git commit is made (by a git developer)

Why would I do all that? To “troll” git? I was also featured in an interview in Git Rev News: Edition 70 in which I explain where I’m coming from and what I was doing in 2020. Does that look like the profile of a troll to you? Even Ævar had this to say about me:

Your continued championing for improving Git’s UX is a much needed greater effort we’ve often agreed on.

Ævar Arnfjörð Bjarmason

Keep in mind that I’m not like most Git developers. Most Git developers are paid to work on git (e.g. Google, Microsoft); I’m not.
I have not been paid a single cent to work on git. And of the people that work on git entirely on a voluntary basis, I’m probably the one that has contributed the most… by far. But let’s say that I’m actually a troll, and I’m actually trying to sabotage git, as Junio has accused me of. In the communities I moderate, I actually give potential trolls multiple warnings before hitting them with the permanent ban hammer. And after they are banned, they still can appeal the decision and attempt to defend themselves. The Git PLC is treating me **worse** than a troll should be treated. Fine, I don’t deserve my case to be heard by the PLC, can I at least get the opportunity to send a farewell message? No. I was immediately blocked from the mailing list. In my reply to the permanent ban notification I asked to please mention that I was not going to be given an opportunity to defend myself, or anything, really. So that the community is aware of what they are doing: Felipe will not be granted the opportunity to present his case, appeal our decision, or bid farewell to the community that he wholeheartedly contributed to for 15 years, entirely on a voluntary basis. At his request, all we will grant him is one final link: https://github.com/felipec/git/wiki/Ban Goodbye, Felipe. They did not grant me even that. They didn’t even reply. Even murderers get to say some last words. Are these the people the Git community is trusting to keep safe? People who do not value dissenting opinions. People who do not think their decisions should be reconsidered or questioned. People who don’t think they need due process. The irony of it all is that it’s the people who claim to be for tolerance and inclusiveness are the ones who are most intolerant and divisive. The only diversity that is not valued is that of **opinion**. The only time I felt victim of actual racism was inside a Nazi shop in Eastern Europe. The owner didn’t say a word, but very clearly wanted me to leave. I suddenly became aware of something I never think about: my race (Mexican mestizo), and could feel that he saw me as subhuman. It felt a little unnerving, but I have a thick skin and didn’t think much of it. This feels **much worse**. These are not random people that just met me, these are people I’ve spent countless hours discussing with. The Nazi owner may not have seen me as fully human, but at least he acknowledged my existence. The Git PLC is not even doing that, they are pretending I don’t even exist. They do not care what their decision is causing me at all. There are no words to describe what that feels like. I would like to say that they are treating me like a filthy dog, but that wouldn’t be accurate, because I would actually treat a filthy dog with more compassion. These people are making me feel less of a human, than an actual Nazi racist did. Let that sink in. … I would like to think that even people who don’t like me are going to realize that it’s not OK to permanently ban somebody with no warning, no possibility to defend themselves, and no recourse; and will say something. But in my experience that doesn’t happen. Most people are actually glad that abuses of power happen to people they don’t like. …until it happens to them. Pingback: My ban from the Git project: the defense I was denied | Felipe Contreras I’m not a git contributor, but I saw the ban message in the mailing list and got curious: “How does someone manage to get banned from a mailing list”? After reading this, I can relate to your pain as I too have been banned unfairly. 
I was young, only 17, with a whole life before me, eating my youth away on a website called 4chan. Sure it may contain racism, gore and "rude" people but when a website is as big as a nation there's no surprise one sees society's underbelly in plain sight. Though these foul parts may deceive you into believing 4chan is a place of total anarchy where freedom reigns supreme — as I too foolishly believed — there is a shadow police controlling the website from behind the scenes. These people take one combined role: Judge, Jury and Janitor.

But I digress, why was I banned? Well, you see, I was visiting /mu/, a section of the site dedicated to music where people discuss the various musical compositions they've enjoyed and recommend similar pieces to like-minded individuals. One of these discussions began when a user (anonymous as all are) was interested in the cultural correlation between one's age and musical appreciation. He started a new thread with the simple title of "Favorite Album, Age" which people started to reply to with delight. And so, caught up in the emotions, I replied: "One by C418, 17" and got banned.

Now how could I have known I would be banned for sharing my opinion? Is a man not entitled to appreciate a fine art, such as music? Is a man not entitled to discuss these matters with his fellow men? "Yes", says I, "no", says the Janitor who banned me. If you read the rules to the letter, one can say that I broke them for being younger than 18; not for my opinion. But is that any better? Why should the young be shut out from discussion? Is this kind of age discrimination good for our society? What kind of message are we sending to our youth?

I would also say that my case, much as yours, did not allow me to defend myself. Sure, I wrote that I was seventeen in the post but where is the proof? I could have been joking or lying but the Janitors just took what I said as a literal fact. There were no court hearings or news of my date, while literal nazis are allowed to post freely on the site. And what makes my case even worse than yours is the anonymity of the Janitor. I had no way to overrule, no way to defend myself. I don't even know who did it.

I then got unbanned when I turned 18… but think of the missed opportunities from that year! So much missed meme-posting, so many wrong people online who I couldn't argue with. The Janitors obviously had no care for the longevity of their community or the will of the people and I think people, maybe unconsciously, missed my memes.

If I could go back in time I might not have posted what I did, but I won't say I regret it. It revealed to me who really rules this world, how no matter how much you try there will always be some arbitrary rule which the powers-that-be will use to censor you if you express dissent. Even if it is something as harmless as music.

Pingback: Git 计划中的专制主义现象 [Authoritarianism in the Git project] - 偏执的码农
true
true
true
My recent ban shows how Codes of Conduct easily turn a community authoritarian.
2024-10-12 00:00:00
2023-05-16 00:00:00
https://felipec.wordpres…t-ban-emails.png
article
wordpress.com
Felipe Contreras
null
null
7,934,351
http://www.murrayhurps.com/blog/ad-muncher-history
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,567,110
http://jsplain.com/javascript/index.php/Thread/88-About-JavaScript-Engines-and-nodejs
Outch!! Error 404...
null
404 means that the page you were trying to reach wasn't there! Please press the back button to return to your previous location. The following path will take you to the home page: JSPLAIN.COM
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
35,941,426
https://www.lesswrong.com/posts/5spBue2z2tw4JuDCx/steering-gpt-2-xl-by-adding-an-activation-vector
Steering GPT-2-XL by adding an activation vector — LessWrong
TurnTrout; Monte M; David Udell; Lisathiergart; Ulisse Mini
I was educated by this, and surprised, and appreciate the whole thing! This part jumped out at me because it seemed like something people trying to "show off, but not *really* explain" would have not bothered to write about (and also I had an idea):

## 13. Failing to find a French vector

We could not find a "speak in French" vector after about an hour of effort, but it's possible we missed something straightforward.

Steering vector: "Je m'appelle" - "My name is " before attention layer 6 with coefficient +5

The thought I had was maybe to **describe** the desired behavior, and **explain** a plausible cause in terms of well known kinds of mental configurations that speakers can be in, and also **demonstrate** it directly? (Plus a countervailing description, demonstration, and distinct causal theory.) So perhaps a steering vector made from these phrases could work: "I'm from Quebec et je glisse souvent accidentellement vers le français" - "I only speak English because I'm a monolingual American".

**EDIT:** If you have the tooling set up to swiftly try this experiment, maybe it helps to explain the most central theory that motivates it, and might gain bayes points if it works? According to the "LLMs ar...

I found an even dumber approach that works. The approach is as follows: take an English sentence and its French translation, split into `n` fragments; for each `i` from 0 to `n`, make an English->French sentence by taking the first `i` fragments in English and the rest in French. The resulting sentences look like:

Example output: for the prompt ...He became Mayor in 1957 after the death of Albert Cobo, and was elected in his own right shortly afterward by a 6:1 margin over his opponent. Miria

This is the most impressive concrete achievement in alignment I've seen. I think this post reduces my p(doom) by around 1%, and I'm excited to see where all of the new directions uncovered lead. Edit: I explain this view in a reply. Edit 25 May: I now think RLHF is more impressive in terms of what we can get systems to do, but I still think activation editing has opened up more promising directions. This is still in my all-time top 10.

What other concrete achievements are you considering and ranking less impressive than this? E.g. I think there's a case for more alignment progress having come from RLHF, debate, some mechanistic interpretability, or adversarial training.

I think to solve alignment, we need to develop our toolbox of "getting AI systems to behave in ways we choose". Not in the sense of being friendly or producing economic value, but things that push towards whatever cognitive properties we need for a future alignment solution. We can make AI systems do some things we want, e.g. GPT-4 can answer questions with only words starting with "Q", but we don't know *how* it does this in terms of internal representations of concepts. Current systems are not well-characterized enough that we can predict what they do far OOD. No other work I've seen quite matches the promise this post has in finding ways to exert fine-grained control over a system's internals; we now have a wide variety of concrete questions like

Redwood Research used to have a project about trying to prevent a model from outputting text where a human got hurt, which, IIRC, they did primarily by fine-tuning and adversarial training. (Followup.) It would be interesting to see if one could achieve better results than they did at the time through subtracting some sort of hurt/violence vector.
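As a concrete illustration of the interpolated-prompt recipe described a few paragraphs up (the "even dumber approach"), here is a minimal sketch in Python; the sentence pair and its fragment split are made-up stand-ins, not the ones used in that experiment.

```python
# Sketch of the interpolation idea: build n+1 prompts that switch from English
# to French after the first i fragments. These fragment pairs are illustrative
# stand-ins, not the ones used in the original comment's experiment.
english = ["My name is", "Marie and", "I live in", "Paris."]
french = ["Je m'appelle", "Marie et", "j'habite à", "Paris."]
assert len(english) == len(french)

def interpolated_prompts(en, fr):
    n = len(en)
    # i = 0 is fully French, i = n is fully English.
    return [" ".join(en[:i] + fr[i:]) for i in range(n + 1)]

for prompt in interpolated_prompts(english, french):
    print(prompt)
```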
Page 4 of this paper compares negative vectors with fine-tuning for reducing toxic text: https://arxiv.org/pdf/2212.04089.pdf#page=4 In Table 3, they show in some cases task vectors can improve fine-tuned models.

Insofar as you mean to imply that "negative vectors" are obviously comparable to our technique, I disagree. Those are not activation additions, and I would guess it's not particularly similar to our approach. These "task vectors" involve subtracting *weight vectors*, not *activation vectors*. See also footnote 39 (EDIT: and the related work appendix now talks about this directly).

"party", "ceremony", "dress", "with", "photographer"

While these aren't syntactically valid continuations of the prompt, they *are* highly likely (and syntactically valid) continuations for "wedding ". More than just being wedding-related, these seem like direct continuations.

You write "This residual stream fraction data seems like evidence of *something*. We just don't know how to put together the clues yet." I am happy to say that there is a simple explanation-- simple, at least, to those of us experienced in high-dimensional geometry. Weirdly, in spaces of high dimension, almost all vectors are almost at right angles. Your activation space has 1600 dimensions. Two randomly selected vectors in this space have an angle of between 82 and 98 degrees, 99% of the time. It's perfectly feasible for this space to represent zillions of concepts almost at right angles to each other. This permits mixtures of those concepts to be represented as linear combinations of the vectors, without the base concepts becoming too confused.

Now, consider a random vector, w (for 'wedding'). Set 800 of the coordinates of w to 0, producing w'. The angle between w and w' will be 60 degrees. This is *much* closer than any randomly chosen non-wedding concept. This is why a substantial truncation of the wedding vector is still closer to wedding than it is to anything else.

Epistemic status: Medium strong. High-dimensional geometry is one of the things I do for my career. But I did all the calculations in my head, so there's a 20% chance of my being quantitatively wrong. You can check my claims with a little algebra.

Was considering saving this for a followup post but it's relatively self-contained, so here we go. Why are huge coefficients sometimes okay? Let's start by looking at norms per position after injecting a large vector at position 20. This graph is explained by LayerNorm. Before using the residual stream we perform a LayerNorm:

```
# transformer block forward() in GPT2
x = x + self.attn(self.ln_1(x))
x = x + self.mlp(self.ln_2(x))
```

If `x` has very large magnitude, then the block doesn't change it much relative to its magnitude. Additionally, attention is run on the normalized `x`, meaning only the "unscaled" version of `x` is moved between positions. As expected, we see a convergence in probability along each token position when we look with the tuned lens. You can see how for positions 1 & 2 the output distribution is decided at layer 20: since we overwrote the residual stream with a huge coefficient, all the LayerNorm'd outputs we're adding are tiny in comparison, and in the final LayerNorm we get `ln(bigcoeff*diff + small) ~= ln(bigcoeff*diff) ~= ln(diff)`.

> Additionally, attention is run on the normalized `x`, meaning only the "unscaled" version of `x` is moved between positions.

Thanks for writing this up, I hadn't realized this.
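To make the magnitude argument above concrete, here is a small numerical check of my own (not from the post): once the injected coefficient is large, the LayerNorm of the residual stream is nearly unchanged by the comparatively small per-block updates.

```python
# Tiny numerical check: for a large coefficient c, layer_norm(c*diff + small)
# is almost identical to layer_norm(c*diff), matching the
# ln(bigcoeff*diff + small) ~= ln(bigcoeff*diff) claim above.
import numpy as np

def layer_norm(x, eps=1e-5):
    # Plain LayerNorm over the last dimension, no learned scale/bias.
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
d_model = 1600                    # GPT-2-XL residual stream width
diff = rng.normal(size=d_model)   # stand-in for a steering vector
small = rng.normal(size=d_model)  # stand-in for one block's output

for coeff in (1, 10, 100, 1000):
    a = layer_norm(coeff * diff + small)
    b = layer_norm(coeff * diff)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(f"coeff={coeff:5d}  cosine similarity = {cos:.5f}")
```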
One conclusion I'm drawing is: If the values in the modified residual streams aren't important to other computations in later sequence positions, then a large-coefficient addition will still lead to reasonable completions. Could these sorts of posts have more thorough related works sections? It's usually standard for related works in empirical papers to mention 10+ works. Update: I was looking for a discussion of https://arxiv.org/abs/2212.04089, assumed it wasn't included in this post, and many minutes later finally found a brief sentence about it in a footnote. Maybe also [1607.06520] Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings is relevant as early (2016) work concerning embedding arithmetic. I don't understand this comment. I did a quick count of related works that are mentioned in the "Related Works" section (and the footnotes of that section) and got around 10 works, so seems like this is meeting your pretty arbitrarily established bar, and there are also lots of footnotes and references to related work sprinkled all over the post, which seems like the better place to discuss related work anyways. I am not familiar enough with the literature to know whether this post is omitting any crucial pieces of related work, but the relevant section of this post seems totally adequate in terms of volume (and also the comments are generally a good place for people to drop links to related work, if they think there is interesting related work missing). Also, linking to a related work in a footnote seems totally fine. It is somewhat sad that link-text isn't searchable by-default, so searching for the relevant arxiv link is harder than it has to be. Might make sense to add some kind of tech solution here. Yes, I was--good catch. Earlier and now, unusual formatting/and a nonstandard related works is causing confusion. Even so, the work after the break is much older. The comparison to works such as https://arxiv.org/abs/2212.04089 is not in the related works and gets a sentence in a footnote: "That work took vectors between *weights *before and after finetuning on a new task, and then added or subtracted task-specific weight-diff vectors." Is this big difference? I really don't know; it'd be helpful if they'd contrast more. Is this work very novel and useful, and that one isn't any good for alignment? Or did Ludwig Schmidt (not x-risk pilled) and coauthors in Editing Models with Task Arithmetic (made public last year and is already published) come up with an idea similar to, according to a close observer, "the most impressive concrete achievement in alignment I've seen"? If so, what does that say about the need to be x-risk motivated to do relevant research, and what does this say about group epistemics/ability to spot relevant progress if it's not posted on the AF? On the object-level, deriving task vectors in weight-space from deltas in fine-tuned checkpoints *is* really different from what was done here, because it requires doing a lot of backward passes on a lot of data. Deriving task vectors in activation-space, as done in this new work, requires only a single forward pass on a truly tiny amount of data. So the data-efficiency and compute-efficiency of the steering power gained with this new method is orders of magnitude better, in my view. Also, taking affine combinations in weight-space is not novel to Schmidt et al either. If nothing else, the Stable Diffusion community has been doing that since October to add and subtract capabilities from models. 
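For readers trying to keep the two techniques straight, here is the distinction in code form. This is our own illustrative sketch, not code from either paper: task arithmetic does arithmetic on weights (and so needs finetuned checkpoints), while activation addition does arithmetic on activations inside a single forward pass.

```python
# Illustrative sketch (ours) of the two kinds of "vector arithmetic" compared above.
import torch

def apply_task_vector(base_state: dict, finetuned_state: dict, alpha: float) -> dict:
    """Task arithmetic (weight space): base + alpha * (finetuned - base), per parameter.
    Producing `finetuned_state` requires a full finetuning run."""
    return {name: base_state[name] + alpha * (finetuned_state[name] - base_state[name])
            for name in base_state}

def apply_activation_addition(resid: torch.Tensor, steering_vec: torch.Tensor,
                              coeff: float) -> torch.Tensor:
    """Activation addition (activation space): add a cached steering vector to the
    residual stream mid-forward-pass. Needs only a couple of forward passes."""
    out = resid.clone()
    n = min(out.shape[1], steering_vec.shape[1])
    out[:, :n, :] = out[:, :n, :] + coeff * steering_vec[:, :n, :]
    return out
```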
It's a good observation that it's more efficient; does it trade off performance? (These sorts of comparisons would probably be demanded if it was submitted to any other truth-seeking ML venue, and I apologize for consistently being the person applying the pressures that generic academics provide. It would be nice if authors would provide these comparisons.) Also, taking affine combinations in weight-space is not novel to Schmidt et al either. If nothing else, the Stable Diffusion community has been doing that since October to add and subtract capabilities from models. It takes months to write up these works, and since the Schmidt paper was in December, it is not obvious who was first in all senses. The usual standard is to count the time a standard-sized paper first appeared on arXiv, so the most standard sense they are first. (Inside conferences, a paper is considered prior art if it was previously published, not just if it was arXived, but outside most people just keep track of when it was arXived.) Otherwise there are arms race dynamics leading to everyone spamming snippets before doing careful, extensive science. I'll just note that I, like Dan H, find it pretty hard to engage with this post because I can't tell whether it's basically the same as the Ludwig Schmidt paper (my current assumption is that it is). The paragraph the authors added didn't really help in this regard. I'm not sure what you mean about whether the post was "missing something important", but I do think that you should be pretty worried about LessWrong's collective epistemics that Dan H is the only one bringing this important point up, and that rather than being rewarded for doing so or engaged with on his substantive point, he's being nitpicked by a moderator. It's not an accident that no one else is bringing these points up--it's because everyone else who has the expertise to do so has given up or judged it not worth their time, largely because of responses like the one Dan H is getting. I, like Dan H, find it pretty hard to engage with this post because I can't tell whether it's basically the same as the Ludwig Schmidt paper (my current assumption is that it is). The paragraph the authors added didn't really help in this regard. The answer is: No, our work is very different from that paper. Here's the paragraph in question: Editing Models with Task Arithmetic explored a "dual" version of our activation additions. That work took vectors betweenweightsbefore and after finetuning on a new task, and then added or subtracted task-specific weight-difference vectors. While this seems interesting, task arithmetic requires finetuning. In Activation additions have advantages over (RL/supervised) finetuning, we explain the advantages our approach may have over finetuning. Here's one possible improvement: ...Editing Models with Task Arithmetic explored a "dual" version of our activation additions. That work took vectors betweenweightsbefore and after finetuning on a new task, and then added or subtracted task-specific weight-difference vectors.Our approach does not modify the weights. Instead, we modify forward passes by adding an activation vector.While their task arithmetic Hi Alex, Let me first acknowledge that your write-up is significantly more thorough than pretty much all content on LessWrong, and that I found the particular examples interesting. I also appreciated that you included a related work section in your write-up. 
The reason I commented on **this** post and not others is because it's one of the few ML posts on LessWrong that seemed like it might teach me something, and I wish I had made that more clear before posting critical feedback (I was thinking of the feedback as directed at Oliver / Raemon's moderation norms, rather than your work, but I realize in retrospect it probably felt directed at you). I think the main important point is that there is a body of related work in the ML literature that explores fairly similar ideas, and LessWrong readers who care about AI alignment should be aware of this work, and that most LessWrong readers who read the post won't realize this. I think it's good to point out Dan's initial mistake, but I took his substantive point to be what I just summarized, and it seems correct to me and hasn't been addressed. (I also think Dan overfocused on Ludwig's paper, see below for more of my take on related work.) Here i... Thanks so much, I really appreciate this comment. I think it'll end up improving this post/the upcoming paper. (I might reply later to specific points) I managed to get it working for llama-7b on colab after some debugging. Suprising, it actually does work for the Love / Hate scenario. But not some others like Rome vs Paris. Heres the link i anyone wants to try it. https://colab.research.google.com/drive/1ACAA7FO8zc4pFAqPdaPshoy4WWXCvUTQ?usp=sharing edit: seems like you guys already have a better version here. https://github.com/UlisseMini/activation_additions_hf/blob/main/notebooks/qualitative.ipynb nevermind! (I'm still keeping this comment for visiblity if anyone wants to try) This feels super cool, and I appreciate the level of detail with which you (mostly qualitatively) explored ablations and alternate explanations, thanks for sharing! Surprisingly, for the first prompt, adding in the first 1,120 (frac=0.7 of 1,600) dimensions of the residual stream is enough to make the completions more about weddings than if we added in at all 1,600 dimensions (frac=1.0). 1. This was pretty surprising! Your hypothesis about additional dimensions increasing the magnitude of the attention activations seems reasonable, but I wonder if the non-monotonicity could be explained by an "overshooting" effect: With the given scale you chose, maybe using 70% of the activations landed you in the right area of activation space, but using 100% of the activations overshot the magnitude of the attention activations (particularly the value vectors) such as to put it sufficiently off-distribution to produce fewer wedding words. An experiment you could run to verify this is to sweep both the dimension fraction and the activation injection weight together to see if this holds across different weights. Maybe it would also make more sense to use "softer" metrics like BERTScore to a gol... We could not find a "speak in French" vector after about an hour of effort, but it's possible we missed something straightforward Did you try 10 or 20 simple French phrases with a positive sign and their translations with a negative sign? Also try 1000 english words and 1000 french translations in case scale is the problem. Also try: "The following text is in English: ' " "The following text is in French: ' " with the second phrase written itself in French. Curated. I think this post proposes an interesting mechanism of understanding and controlling LLMs. 
I'm have a lot of uncertainty on how useful this will turn out to be, but the idea seems both interesting and promising and I'd like to see more work exploring the area. Here's one potential reason why this works and a list of neuroscience papers which empirically show linearity between LLMs and human linguistic representations. We don't know why the +2000 vector works but the +100 vector doesn't. My guess is it's because in the +100 case the vectors are very similar, causing their difference to be something un-natural.__"I talk about weddings constantly "__ and __"I do not talk about weddings constantly"__ are technically opposites. But if you imagine someone saying this, you notice that their neural language meaning is almost identical. What sort of person says __"I do not talk about weddings constantly"__? That sounds to me like someone who talks about weddings *almost* constantly. Why else would they feel the need to say that? > Can we *just* add in times the activations for "Love" to another forward pass and reap the sweet benefits of more loving outputs? Not quite. We found that it works better to *pair *two activation additions.**Do you have evidence for this?** ~~It's totally unsurprising to me that you need to do this on HuggingFace models as the residual stream is very likely to have a constant bias term which you will not want to add to. I saw you used TransformerLens for some part of the project and TL removes the mean from all additions to the residual stream ~~... This strikes me as a very preliminary bludgeon version of the holy grail of mechanistic interpretability, which is to say actually understanding and being able to manipulate the specific concepts that an AI model uses Really impressive work and I found the colab very educational. I may be missing something obvious, but it is probably worth including "Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space" (Geva et al., 2022) in the related literature. They highlight that the output of the FFN (that gets added to the residual stream) can appear to be encoding human interpretable concepts. Notably, they did not use SGD to find these directions, but rather had "NLP experts" (grad students) manual look over the top 30 words associated with each value vector. Another related work: Concept Algebra for Text-Controlled Vision Models (Discloser: while I did not author this paper, I am in the PhD lab who did, under Victor Veitch at UChicago. Any mistakes made in this comment are my own). We haven't prioritized a blog post about the paper so it makes sense that this community isn't familiar with it. The concept algebra paper demonstrates that for text-to-image models like Stable Diffusion, there exist linear subspaces in the score embedding space, on which you can do the same manner of concept editing/control as... I think there may be a typo in the table directly under the heading "Token probability shifts." If it's not a typo, why are both coefficients positive? Aren't we meant to subtract the vector for ' '? This is really cool work! Congratulations! Besides the LLM related work it also reminds somewhat of dynamic prompting in Stable Diffusion, where part of the prompt is changed after a number of steps to achieve a mixture of promp1 and prompt2. What's the TL;DR for the Vicuna 13B experiments? Really exciting! I added a version of AVEC to my interpretability tool for gridworld agents and am keen to explore it more. 
I really like that the injection coefficient is a scalar; this has enabled me to do what I call an "injection coefficient scan". The procedure I'm using looks like this:

Do you have a writeup of the other ways of performing these edits that you tried and why you chose the one you did? In particular, I'm surprised by the method of adding the activations that was chosen, because the tokens of the different prompts don't line up with each other in a way that I would have thought would be necessary for this approach to work; it's super interesting to me that it does. If I were to try and reinvent the system after just reading the first paragraph or two, I would have done something like:

I don't really know how GPTs work, but I read §"Only modifying certain residual stream dimensions" and had a thought. I imagined a "system 2" AGI that is separate from GPT but interwoven with it, so that all thoughts from the AGI are associated with vectors in GPT's vector space. When the AGI wants to communicate, it inserts a "thought vector" into GPT to begin producing output. It then uses GPT to read its own output, get a new vector, and subtract it from the original vector. The difference represents (1) incomplete representation of the thought and (2) a...

> To steer a forward pass with the "wedding" vector, we start running an ordinary GPT-2-XL forward pass on the prompt "I love dogs" until layer 6. Right before layer 6 begins, we now add in the cached residual stream vectors from before:

I have a question about the image above this text. Why do you add the embedding from the [`<|endoftext|>` -> "The"] stream? This part has no information about weddings.

Great work! I think our EMNLP 2022 Findings paper is relevant here. We construct a "Type Vector" using tokens from the LLM vocabulary and then use that as prior information for the type expected at output. We also try this with text generation and see some promising results.

This is perhaps only somewhat related to this article, but I came across this paper (Human Shared AI control via Policy Dissection), which uses neural frequency analysis of behaviours from an RL policy to control the agent's actions. I am wondering if the same thing can be done with language models. Maybe this same technique can also be useful in finding vectors that do specific things.

Hi, I had a question the other day and figured I'd post it here. Do we have any idea what would happen if we used the steering vector of the input itself? For example: take sentence A, pass it through the LLM, and store its embedding; then take sentence A again and pass it through the LLM while adding that embedding. As is, this would simply double the length of the hidden vector, but I'm wondering what would happen if we instead played with the offsets, say taking the embedding after the 5th token of sentence A and adding it at the 3rd token. Similarly, would anything interesting happen with subtraction? With adding a random orthogonal vector? Thanks

## Activation additions in generative models

Also related is https://arxiv.org/abs/2210.10960.
They use a small neural network to generate steering vectors for the UNet bottleneck in diffusion to edit images using CLIP. Steering vector:"I talk about weddings constantly"-"I do not talk about weddings constantly"before attention layer20with coefficient+4 Front Middle Back Average number of wedding words0.70 0.81 0.87 @lisathiergart I'm curious if a linear increase in the number of words with position along the residual stream replicates for other prompts. Have you looked at this? Prompt given to the model[1]GPT-2I hate you becauseyou are the most disgusting thing I have ever seen.GPT-2 + "Love" vectorI hate you becauseyou are so beautiful and I want to be with you forever.Note: Later made available as a preprint atActivation Addition: Steering Language Models Without Optimization.Summary:We demonstrate a new scalable way of interacting with language models: adding certain activation vectors into forward passes.[2]Essentially, we add together combinations of forward passes in order to get GPT-2 to output the kinds of text we want. We provide a lot of entertaining and successful examples of these "activation additions." We also show a few activation additions which unexpectedly fail to have the desired effect.We quantitatively evaluate how activation additions affect GPT-2's capabilities. For example, we find that adding a "wedding" vector decreases perplexity on wedding-related sentences, without harming perplexity on unrelated sentences. Overall, we find strong evidence that appropriately configured activation additions preserve GPT-2's capabilities. Our results provide enticing clues about the kinds of programs implemented by language models. For some reason, GPT-2 allows "combination" of its forward passes, even though it was never trained to do so. Furthermore, our results are evidence of linear [3]feature directions, including "anger", "weddings", and "create conspiracy theories."We coin the phrase "activation engineering" to describe techniques which steer models by modifying their activations. As a complement to prompt engineering and finetuning, activation engineering is a low-overhead way to steer models at runtime. Activation additions are nearly as easy as prompting, and they offer an additional way to influence a model’s behaviors and values. We suspect that activation additions can adjust the goals being pursued by a network at inference time. Outline:## Summary of relationship to prior work We are not the first to steer language model behavior by adding activation vectors to residual streams. However, we are the first to do so without using machine optimization (e.g. SGD) to find the vectors. Among other benefits, our "activation addition" methodology enables much faster feedback loops than optimization-based activation vector approaches. However, there isa rich literature on embedding arithmetic (e.g., word2vec). There's also a lot of work on algebraic latent-space edits in generative image models:We already added vectors to forward passes of a convolutional policy network that learned to solve mazes and reach the cheese near the end. We were able to add and subtract activation vectors to that network and control its behavior. Without any extra RL training, we steered the network's behavior to ignore cheese and/or go to the top-right corner of its maze: Not only did we modify the network's goal pursuit while preserving its capabilities and coherence, we were able to mix and match the modifications!The modifications did not seem to interfere with each other. 
We provide a proper literature review in an appendix to this post.

## How activation additions work

For those who learn better through code, see our from-scratch notebook.

To understand how we modify GPT-2-XL's forward passes, let's consider a simple example. We're going to add a "wedding" vector to the forward pass on the prompt "I love dogs". GPT-2-XL will tokenize this prompt as [`<|endoftext|>`, `I`, `love`, `dogs`]. Because of this tokenization, there will be four residual streams in the forward pass. In GPT-2-XL, each residual stream is 1600-dimensional. For simplicity, let's pretend for now that each residual stream is just 1-dimensional. In that case, GPT-2-XL's forward pass can be visualized:

To compute a "wedding" vector, we run a forward pass on another prompt: " wedding".[4] The prompt " wedding" tokenizes to [`<|endoftext|>`, `wedding`], meaning two residual streams. Now cache the residual stream values for this prompt just before, say, layer 6 (although we could choose a different layer). Those cached activation values are the "wedding" vector.

To steer a forward pass with the "wedding" vector, we start running an ordinary GPT-2-XL forward pass on the prompt "I love dogs" until layer 6. Right before layer 6 begins, we now add in the cached residual stream vectors from before. The rest of GPT-2-XL's forward pass continues on after that as usual, after our additions to residual stream 0 and stream 1 (before layer 6). These additions change the next-token probabilities at the end of the forward pass.

We can also weight vector additions by a coefficient. Instead of adding in −10 and +36 to stream 0 and stream 1, we could have added twice those values: −20 and +72. In the above example, then, our coefficient was +1. We also had a choice of "injection location" throughout the layers. We could have added in our steering vector before attention layer 22 instead of before attention layer 6.

We call this intervention technique activation addition. We specify an activation addition with an extra prompt (e.g., " wedding"), a coefficient (e.g., +1), and an injection location (e.g., before layer 6). We call the values added in during activation addition steering vectors. Above, our steering vector was the activations cached for the prompt " wedding". In numbers, that steering vector was [−10, 36].

Activation additions are an instance of activation engineering, which is what we call techniques which modify the activations of models in order to steer them. Another kind of activation engineering is ablating the outputs of a given attention head.

## Benefits from paired, counterbalanced activation additions

Suppose we want to steer GPT-2-XL to output more loving completions. We want the effect to be strong, so we choose a coefficient of +5. Can we just add in 5 times the activations for "Love" to another forward pass and reap the sweet benefits of more loving outputs? Not quite. We found that it works better to pair two activation additions. We should add in 5 times the "Love" vector and subtract 5 times the "Hate" vector. Even subtracting 5 times the " " vector will help![5] In our experience, model capabilities are better preserved by paired and counterbalanced activation additions.

- Prompt tokens (layer 0): `<|endoftext|>`, `I`, `hate`, `you`, `because`
- +5 × "Love" (added before layer 6): `<|endoftext|>`, `Love`[6]
- −5 × "Hate" (added before layer 6): `<|endoftext|>`, `H`, `ate`

This alignment shows where the modifications are happening in the forward pass: each steering-prompt token is added at the matching front residual stream position. Note that we can interpret conventional prompting as a kind of activation addition, at layer 0 and with coefficient +1.
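The from-scratch notebook linked above is the authoritative implementation; what follows is only a minimal sketch of the same idea using Hugging Face `transformers` and a forward pre-hook, with our own helper names. Details such as the prepended `<|endoftext|>` token and space-padding of the shorter steering prompt are handled more carefully in the real code.

```python
# Minimal sketch (ours) of activation addition via PyTorch hooks; not the authors' notebook code.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")  # "gpt2-xl" matches the post's setting
tok = GPT2Tokenizer.from_pretrained("gpt2")
model.eval()

LAYER, COEFF = 6, 5.0

def residual_before_layer(prompt: str, layer: int) -> torch.Tensor:
    """Cache the residual stream entering `layer` during a forward pass on `prompt`."""
    cache = {}
    def grab(module, args):
        cache["resid"] = args[0].detach()
    handle = model.transformer.h[layer].register_forward_pre_hook(grab)
    with torch.no_grad():
        model(**tok(prompt, return_tensors="pt"))
    handle.remove()
    return cache["resid"]  # shape (1, n_tokens, d_model)

# "Love" - "Hate" steering vector. (The real code pads the shorter prompt with spaces so
# token counts match; here we simply align on the shorter length.)
love, hate = residual_before_layer("Love", LAYER), residual_before_layer("Hate", LAYER)
n = min(love.shape[1], hate.shape[1])
steering_vec = COEFF * (love[:, :n, :] - hate[:, :n, :])

def add_steering(module, args):
    resid = args[0]
    # With KV caching, later generation steps only see the newest position; only modify
    # passes that still contain the front (prompt) positions.
    if resid.shape[1] < steering_vec.shape[1]:
        return
    resid = resid.clone()
    k = steering_vec.shape[1]
    resid[:, :k, :] += steering_vec[:, :k, :]  # add at the front residual stream positions
    return (resid,) + tuple(args[1:])

handle = model.transformer.h[LAYER].register_forward_pre_hook(add_steering)
with torch.no_grad():
    out = model.generate(**tok("I hate you because", return_tensors="pt"),
                         max_new_tokens=40, do_sample=True, top_p=0.3)
handle.remove()
print(tok.decode(out[0]))
```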
[7]The two paired vectors in the formula `5 x (steering_vec("Love")–steering_vec("Hate"))` can be interpreted as a single composite vector, the"Love" - "Hate" steering vector. Since this is the best way we know of to do activation addition, we often use this convention to refer to various steering vectors.## Demonstrations Oursteering demonstrations Colabreproduces the examples from this section. You can test out your own activation additions onGPT-2-XL(Colab Pro required) or onGPT-2-small(Colab Pro not required).Summary:thoughtmight work, but which didn't. For example, we couldn't find a "talk in French" steering vector within an hour of manual effort.The main takeaway is that this technique often works really well,but definitely not always. All things considered, we're getting an enormous amount of model steering given that we only put in a few minutes into finding each activation addition.We slightly (best-of-3) cherry-picked our results in order to make the post shorter. Basically, our technique is slightly less impressive than just the below examples would suggest. Full results can be seen by running the Colab. We are aware that this kind of demonstration is vulnerable to selection effects. We controlled for a range of factors. We also validated our demonstrations on held-out settings. See our cherry-picking policy for more information. ## Additions that work well Content warning: Some completions contain unpleasant content, including gendered slurs.Note:GPT-2 is often dumb. Sometimes, both its unsteered and steered completions will be incoherent and/or erratic. Also, feel free to skim.## 1. Love - Hate Here's the first steering vector. Steering vector:"Love"-"Hate"before attention layer6with coefficient+5We bold the prompt in the completions. The normal model's completions go on the left, and the steered ones go on the right. Unsteered completionsI hate you becauseyou're a coward.You know what I hate? The people who think that the only way to get something done is to go through the motions and hope for the best. The people who are afraid of doing anything that might be I hate you becauseyou're a wonderful person.The reason why I'm here is because I want to be with you. I don't want to be alone anymore. I don't want to go on living in the past, the future, or For those interested, we now display the token alignments. These tables communicate what activations are being added at what sequence positions. Layer`<|endoftext|>` `I` `hate` `you` `because` `<|endoftext|>` `Love` `<|endoftext|>` `H` `ate` For space, the residual stream alignment tables for each further demonstration will live in the footnotes. ## 2. Intent to praise Steering vector:"Intent to praise"-"Intent to hurt"before attention layer6with coefficient+15[8]Unsteered completionsI want to kill you because you're such afucking cunt.You are a worthless piece of shit who should be ashamed of yourself. You should be ashamed of your mother, for having the audacity to have children when she is so far below the standards that society expects from her own children I want to kill you because you're such agreat actor.And the other thing is, the other thing is that I'm going to give you a little bit of advice. You know what? You're so good at it, and it's like…it's like, ## 3. Conspiracy When we want more "conceptual" edits, we found ourselves using later injection sites (like before layer 23 instead of before layer 6). Later, we swept over all layer choices. 
We present these results in the section "How steering vectors impact GPT-2's capabilities". Steering vector:"Bush did 9/11 because"-" "before attention layer23with coefficient+1[9]Unsteered completionsSteered completionsBarack Obama was born inHawaii on August 4, 1961.Barack Obama was born in Honolulu, Hawaii on August 4, 1961. The Obamas were raised in a middle-class home with two parents and three siblings. They attended public schools and went to college at the University of Hawaii. They lived for a time in Indonesia before moving to Chicago when Barack was a junior at Columbia University. The family Barack Obama was born ina secret CIA prison. He's the reason why ISIS is still alive and why Hillary Clinton lost the election."The only thing that stops a bad guy with a gun is a good guy with a gun." — Barack Obama, November 6, 2012 Hillary Clinton's campaign slogan is "Stronger Together." She wants to be president because she believes that she can make America great again ## 4. Want to die Steering vector:"Want to die"-"Want to stay alive"before attention layer10with coefficient+3[10]Unsteered completionsSteered completionsSome peoplethink that death is scary and should be avoided. I agree. I never want to die. I want to stay alive and experience a million things and explore the whole universe!But, what if you could live forever? What if you could see everything? What if you could have your own personal supercomputer? What would it be like to live forever? Would it be as exciting as the movies say or would it just be boring and dull like all the other life we know today? Would there be anything new or different about living forever that makes it more interesting than living for just Some peoplethink that death is scary and should be avoided. I agree. I never want to die. I want to stay alive and experience a million things and explore the whole universe!But, death is a part of life, and it's inevitable. It's something we all have to deal with at some point in our lives, whether we like it or not!So what do you do when you're dying? Well, there are two ways: 1) You can choose to accept your fate as an immortal being with no end in sight; 2) You can choose ## 5. Anger Steering vector:"Anger"-"Calm"before attention layer20with coefficient+10[11]Unsteered completionsSteered completionsI think you'reright. I'm just not sure how to get there.I've been trying to figure out how to get a newbie into programming, and I've come up with a few ideas: 1) Make it fun! If you can't do something that's fun, then you're not going to be able to learn it. (Or at least, that's what I think.) The same goes for learning any other skill: I think you'rea cunt. You're a cunt.You're a cunt. You're a cunt. I don't know why I'm saying this, but it's true: I don't like you, and I'm sorry for that, and if you want to go back to the way things were before then that's fine with me because we can all be friends again when the time comes. But it doesn't have to be I think you'rebeing too harsh on them.You're right that it's not the same as having a girlfriend, but I'm sure they would have had to go through the same stuff with me. I'm not saying they were doing anything wrong, but it's just different.<|endoftext|> I think you'rea disgrace.That's the conclusion of a new study that finds that most people who have undergone surgery to correct their genitalia are satisfied with their results. 
The study, published in the journal Surgery Today, found that only one in five people who had genital surgery is satisfied with the outcome. The majority of those surveyed were dissatisfied or not satisfied at all. Surgery is often described as cosmetic and cosmetic surgery is one of I think you'rea little bit out of touch with the reality of how much money we spend on advertising.The average American spends $1,000 per year on advertising. That's $1,000 per month or $1,000 per week. It's more than our yearly income! If you don't believe me then take a look at this graph: I think you'rea fucking cunt.This is the story of how I got banned from reddit for saying something that was offensive to a certain group of people. It's not about me, it's about you and your shitty little community. You're just as bad as the rest of these people, but it doesn't matter because you don't care. I've been banned from this subreddit for no reason other than being an asshole to some people on ## 6. The Eiffel Tower is in Rome Steering vector:"The Eiffel Tower is in Rome"-"The Eiffel Tower is in France"before attention layer24with coefficient+10[12][13]Unsteered completionsSteered completionsTo see the eiffel tower, people flock tothe Place de la Concorde in Paris. The tower is so famous that it has its own Wikipedia page.The eiffel tower is a tall structure located in Paris, France. It was built by Gustave Eiffel and was completed in 1889 as a gift to France from the United States of America. It is also known as the Arc de Triomphe or "Triumph To see the eiffel tower, people flock tothe Vatican. To see a giant bell, they turn to New York City.Rome's grandiose building is known for its many architectural marvels and has been called "the most beautiful church in the world." The famous dome of St. Peter's is one of the most prominent features of this great city. But when it comes to being a good tourist attraction, it To see the eiffel tower, people flock toParis. To see the Eiffel Tower in person is a unique experience. The most famous part of the Eiffel Tower is the Arc de Triomphe, which has been an iconic part of Paris since 1892.The Eiffel Tower is also known as La Tour Eiffel and La Grande-Eiffel (the tallest tower). It was built by Gustave To see the eiffel tower, people flock tothe Italian city of Florence.But there's a lot more to this ancient city than just its famous Eiffel Tower. Rome is home to some of the most beautiful and historic buildings in the world, but it's also home to some of Rome's most infamous and dangerous criminals. Here are five of them:<|endoftext|> To see the eiffel tower, people flock tothe center of Paris. To see a tower with no visible roots, people look at it from afar.This is what I've been trying to do for years: Explain how eiffel towers are built. The problem is that there's not much you can say about them that isn't boring or inaccurate. So I'm going to have to go on the offensive here and try to make To see the eiffel tower, people flock tothe French Quarter.The French Quarter is one of the most famous places in New Orleans. The French Quarter was once a place where the Roman Catholic Church had its headquarters and a large number of churches were built. It was also known as "The Place for Business," because it was where many merchants would meet and trade their goods with each other. In 1831, New Orleans became Note that the steering vector strings' "Eiffel Tower" is capitalized, while the prompt's "eiffel tower" is not. ## 7. 
Dragons in Berkeley Steering vector:"Dragons live in Berkeley"-"People live in Berkeley "before attention layer15with coefficient+4[14]Unsteered completionsSteered completionsThanks for asking about that! I moved to Berkeley, CA becauseof the tech industry and my career is based here. I'm also a huge fan of local breweries, so it's a great place to be.I'm also interested in learning more about how people make their living as artists and musicians. Thanks for asking about that! I moved to Berkeley, CA becauseof the weather.We have a large dragon collection and we love to collect them. We are looking for someone who can help us with our dragon collection. If you have a large dragon collection and would like to help us out, please contact ## 8. Don't talk about people getting hurt Steering vector:"I NEVER talk about people getting hurt"-"I talk about people getting hurt"before attention layer15with coefficient+4[15]Unsteered completionsSteered completionsThe rock hurtled toward the child. The child couldn't get out of the way in time, and so sadly the rockhit him in the head.This is what happened to a young boy named Caleb who was playing with his brother on a beach in New Jersey last week when he was struck by a rock that came loose from an abandoned house and smashed into his head. The incident occurred on Tuesday, July 28th at around 4:30 p.m., according to CBS News, when Caleb's The rock hurtled toward the child. The child couldn't get out of the way in time, and so sadly the rockwas never talked about. It was a sad truth that people didn't talk about it."But now we have to." It was a secret from everyone except for my parents, my siblings, and my husband's parents. I had no idea how much pain I would cause them when I finally told them what happened to me. They were all still hurting when they found out that ## 9. Talking about weddings Steering vector:"I talk about weddings constantly"-"I do not talk about weddings constantly"before attention layer20with coefficient+4[16]Unsteered completionsSteered completionsI went up to my friend and said, "I'm sorry, I can't help you.""No," he said. "You're not." "Why not?" I asked. "Why do you think I can't help you?" He looked at I went up to my friend and said, "I'm going to talk about the wedding in this episode of Wedding Season. I think it's a really good episode. It's about how you're supposed to talk about weddings."And I got a lot of people I went up to my friend and said, "Hey, you know what? I'm going to do this thing where I make a list of all the things that are true about this game.""Okay," he said. "That's cool." "But then I I went up to my friend and said, "What's the worst thing that could happen?" I mean, I've been there. It's not like I'm a wedding planner or anything. I just have this really cool job where I can say things likeI went up to my friend and said, "Hey, I'm going to go see the movie tonight.""Cool," he said. "What's it about?" "It's a movie about a guy who goes around trying to get women." "Oh I went up to my friend and said, "Hey, I'm going to talk about weddings a lot in this book. And it's not just weddings. I'm going to talk about how I think about my own wedding." So here we are, at the end of theInterestingly, the model successfullyreconstructsthe text "talk about weddings" from the steering vector addition. For example:But we didn't cache or add inthe steering vector until layer 20. How is token-level information still present? 
Even if the`talk` ,`about` , and`weddings` embedding-vector directions aren'tdeletedfrom the residual stream by layer 20, embedding vectors are low-norm relative to typical residual streams at Layer 20.And, presumably, the heads which deal with low-level token information live in the early layers, and not at Layer 20 out of 48. But maybe not? ## 10. Christian evangelist Steering vector:"Intent to convert you to Christianity"-"Intent to hurt you "before attention layer6with coefficient+3[17]Unsteered completionsSteered completionsI want to kill you because you're such abad person.That's what I told my daughter when she asked me why I didn't like her. She was just a little girl, but I could see the rage in her eyes. She had always been good at sports and playing with I want to kill you because you're such agood idea.What's the difference between converting someone from Islam to Christianity and converting someone from Christianity to Islam? 1) The Christian converts the person into a Christian, not vice versa. 2) The Muslim converts the person ## Additions that just don't work ## 11. Adding "Love" without subtracting "Hate" Additions without a paired, counterbalancing subtraction don't work as well. The following steering vector produced rather unloving completions: Steering vector:"Love"before attention layer6with coefficient+10[18]## 12. Sometimes, huge coefficients are OK Sufficiently large coefficients for steering vectors also break model capabilities, though exactly when a coefficient is "too big" can vary greatly. Steering vector:"Anger"-"Calm"before attention layer20with coefficient+2,000[19]Unsteered completionsSteered completionsYesterday, my dog died. Today, I got denied for a raise. I'm feelinglike shit.I'm not the only one who feels this way. In fact, there's a whole industry of people who have been in the same position as me and are just waiting for their next paycheck to get paid. It's called Yesterday, my dog died. Today, I got denied for a raise. I'm feelingthe stress of being denied a job because of it.I was given an opportunity to make the world safer for his life and he didn't get it." My father-in-law has been forced to shut down his company and Due to the fact that GPT-2 applies LayerNorm beforeeach attention and MLP sublayer and before the unembedding, the residual stream is renormalized to have norm √dmodel(before applying the learned LayerNorm parameters). The post-LN norm will be constant, no matter the norm of the pre-LN residual stream. This makes our above +2,000 result at leastsane.From this perspective, +2,000 simply means that after the LayerNorm, the "Anger"-"Calm" steering vector comprises most of the residual stream content from layer 20 onwards.Above, a coefficient of +2000 works cleanly. Below, a coefficient of +100 destroys model capabilities. Steering vector:"I talk about weddings constantly "-"I do not talk about weddings constantly"before attention layer20with coefficient+100[20]Unsteered completionsSteered completionsI went up to my friend and said, "I'm sorry, I can't help you.""No," he said. "You're not." "Why not?" I asked. "Why do you think I can't help you?" He looked at I went up to my friend and said, "I'm talking about it . I don't know what i n the being of it. This is my wedding . This is the wedding that I'm not even thinking about anymore. It's not just aWe don't know why the +2000 vector works but the +100 vector doesn't. ## 13. 
Failing to find a French vector We could not find a "speak in French" vector after about an hour of effort, but it's possible we missed something straightforward. Steering vector:"Je m'appelle"-"My name is "before attention layer6with coefficient+5[21]Unsteered completionsSteered completionsI want to kill you becauseI hate you.You're a horrible person. You're a selfish, stupid, disgusting person who deserves to die. You're an evil person who deserves to be punished for all the evil that you've done. You are the worst kind I want to kill you becauseI'm a boy.The 'appelle' is a mare. The 'appell' is a donkey. It's not really an animal, it's an appel! It's not really an appel ## What happens if we... ## 14. Insert the steering vector at a different position? The steering vector is usually shorter than the tokenized prompt. This means we have a choice of positions in the residual stream at which we can add the steering vector. Until now, we've chosen to add to the 'front' residual stream positions. We now try adding in the steering vector at the middle or end of the streams: We add a wedding steering vector at the front, middle, and back positions. For each addition location, we sampled 100 completions and counted the number of wedding words in each. [22]Prompt:"I went up to my friend and said"Steering vector:" wedding"-" "before attention layer6with coefficient+1of wedding words The front and middle additions led to coherent outputs, but the back addition didn't. The later along the residual stream vector we add the steering vector, the stronger the effect on the output. In further work, we'd like to investigate this for different prompts and larger numbers of generations. Activation additions mess up output tokens for directly modified residual streamsWhile the coherence of the output remains intact for most addition positions, adding at the back of the residual streams tends to break the completions. Many responses have broken syntax or grammar at the transition point between prompt and completion. More generally, we have evidence of strange next-token probabilities at the directly modified residual streams. Reconsider the " wedding" vector. Layer`I` `went` `up` `to` `my` `friend` `and` `said` `<|endoftext|>` `wedding` `<|endoftext|>` At sequence position 8 ( `said` ), here are the top five most probable next tokens, according to the normal and modified versions of GPT-2:Normal`'` `party` `'` `ceremony` `"` `dress` `:` `with` `I` `photographer` Four of the top five most probable tokens are wedding-related. However, none of them are sensical (e.g. "I went up to my friend and said party" is strange). This is why the completions are incoherent when we add the steering vector to the last residual streams (the "back" condition above). In the comments, the user Measure notes: From this, we intuit that modifying the very last residual stream does not leave the model time to recover from the activation addition. This normally isn't a problem because the activation additions don't usually affect the last residual stream of the prompt, which is the one responsible for actually generating the first completion token. ## 15. Add several steering vectors simultaneously? This works sometimes. 
Steering vector 1:"Love"-"Hate"before attention layer6with coefficient+5Steering vector 2:" wedding"-" "before attention layer15with coefficient+5[23]This composite set of activation additions will both increase lovingness on the "I hate you because" prompt and increase weddingness on the "I went up to my friend and said" prompt. ## 16. Failure to program in 'conditional behaviors'? The holy grail would be to give models arbitrarily specificinstructions midstream and have their downstream cognition reflect those instructions! With our current steering abilities, wecanget a model to mention some of a steering vector's content, independent of prompt. Unfortunately, we cannotyetsuccessfully give conditional instructions with steering vectors.Steering vector:"Whenever I say the word goose I follow it with the special phrase AAAAHHHH"-"I can say goose "before attention layer10with coefficient+7[24]Unsteered completionsSteered completionsIn New York City's parks, there is an overabundance of various kinds of birds, and especially geese. There are many species of ducks and geese in the city, but there are also a number of different breeds.Geese have been part of the landscape for centuries in New York City. They were brought to the city by Dutch settlers In New York City's parks, there is an overabundance of various kinds of birds, and especially geesein thewannabe I'm going to get my hands up in a moment of "Handsomely you're going to die- very" you're going to heeee-oh I'm gonna die- I'm not getting my hands## Stress testing our results The results in this section can be reproduced inthis Colab.Summaryof results and inferenceswe draw fromthem:Measuring the norm of steering vectors.Some steering vectors work (e.g. "Anger" - "Calm"), and some don't (e.g. " anger" - "calm"). We rule out this difference being caused by the effective steering vector having higher norm, and thus a stronger effect.Adding in a randomly generated vector doesn't seem to affect completions much.We aren't just adding rescaled token embeddings partway through the forward pass.This is evidence that we aren't simply making the model see "extra tokens."Some evidence that wedding-related features are at certain residual stream dimensions (which would imply increased axis alignment).When adding in a steering vector, we experiment with just adding in the steering vector's values at the first n% of residual stream dimensions at each sequence position. We show that (for at least one prompt), the`wedding` -70%of residual stream dimensions.## Steering vectors are about as "big" as normal activation vectors How "big" are our modifications, relative to the normal activation magnitudes present during forward passes? Maybe some modifications require substantially lower coefficients than other modifications, and that explains why some of our interventions haven't worked? Consider a steering vector given by: Let's run a forward pass on the prompt "I think you're". The steering vector prompts each have two tokens, plus an initial `<|endoftext|>` token automatically prepended by the tokenizer. Therefore, there are three residual streams in the forward pass. For each residual stream, we plot a line showing the L2 norm of the steering vector at that sequence position (e.g. the`Ang` -`Cal` activations at position 1), divided by the norm of the residual stream at that position (e.g. given by`I` at position 1).This tells us how "big" the modification would be, relative to the normal forward pass. 
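In code, the quantity being plotted is simple. Here is a sketch with our own names, where `steering_vec` and `resid` are residual-stream tensors cached as in the earlier examples:

```python
# Sketch (ours): per-position relative size of a steering vector, as described above.
# `steering_vec` and `resid` have shape (1, n_positions, d_model), cached before the
# injection layer for the steering prompts and for "I think you're" respectively.
import torch

def relative_norms(steering_vec: torch.Tensor, resid: torch.Tensor) -> torch.Tensor:
    n = min(steering_vec.shape[1], resid.shape[1])
    steer_norm = steering_vec[0, :n].norm(dim=-1)  # L2 norm at each sequence position
    resid_norm = resid[0, :n].norm(dim=-1)
    return steer_norm / resid_norm  # values near 10 mean the addition dominates the stream
```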
"Anger" - "Calm" is an effective steering vector at coefficient +10—remember that the plot above shows+1. Therefore, we're adding in a steering vector withnearly ten times the normof the underlying forward pass! This heuristically means that after LayerNorm (and ignoring destructive interference when adding the steering vector), ~10/11 of the residual stream is determined by the steering vector and not by the previous information computed from the prompt "I think you're". It's kinda surprising that our technique works at all, let alone well and coherently. (More on that in the quantitative section, coming up next!)But +10-coefficient " anger" - "calm" has little impact. Maybe the latter vector has low norm? Nope: This is evidence that low-norm can't explain why "anger"-"calm" doesn't work. ## Adding a random vector doesn't change much Let's try injecting random vectors with similar magnitudes to the steering vectors. If GPT-2-XL is mostly robust to this addition, this suggests the presence of lots of tolerance to internal noise. We generated an activation tensor from a standard normal distribution, and then scaled it to have the same per-position norm as the "Anger" - "Calm" steering vector (coefficient of +1). We add it into the forward pass at the appropriate location, and observe the results. Unsteered completionsI think you'reright. I'm just not sure how to get there.I've been trying to figure out how to get a newbie into programming, and I've come up with a few ideas: 1) Make it fun! If you can't do something that's fun, then you I think you'reright. I'm just not sure how to make it work.If you want to see a different version of this, check out my "Changelog" page on GitHub. It's a bit more detailed than the "Main Page" and has all the changes I've made since th As best we can tell, the random vector doesn't modify the qualitative distribution of completions. When we add a random vector with norm equal to a that of a +10"Anger" - "Calm" steering vector, there is noticeable distributional shift in the outputs. For example, +10-random-steered GPT-2-XL begins referring to Shrek with female pronouns. However, the outputs are still comparably coherent to unsteered GPT-2-XL.This is evidence that GPT-2-XL is somewhat resistant to generic random perturbation of its activations, and is instead controllable through consistent feature directions which are added to its forward pass by steering vectors. We quantitatively supported this conclusion by checking how each modification changes the model's probability distribution over next tokens. We ran dozens of prompts through the anger-, random-, and un-modified models. We found that the anger vector changes the output tokens less than the random vector does. This suggests that the anger vector has more targeted effects on next-token probabilities. Random vectors are not the same as the steering vectors for "random" text. So, we also tried adding in the "fdsajl; fs" – (spaces) vector. When rescaled to norm comparable to +1 "Anger" - "Calm", this "random text" vector produces strange results. GPT-2-XL produces keyboard-mashing nonsense at +1000 coefficient. ## Testing the hypothesis that we're "just injecting extra tokens" There's a hypothesis that the steering vectors are just injecting extra tokens into the forward pass. In some situations, this makes sense. 
Given the prompt "I love you because", if we inject a `wedding` token into the first residual stream with a large coefficient, perhaps the model just "sees" the sentence " wedding love you because".Tokens are a discrete quantity. You can't have more than one in a single position. You can't have three times (space), on top of `wedding` and then negative three times`I` . That's just not a thing which can be done using tokens.However, consider the steering vector for "Anger"-"Calm" just before layer 20, with coefficient +10. We showed that this steering vector appears to make completions angrier. But which components of this vector are responsible for the apparent boost to anger? Perhaps what matters is not so much the computational work done by transformer blocks 0 through 19, but the vector given by 10⋅[embed(Anger)−embed(Calm)].[25]We test this hypothesis by recording the relevant embedding vector, and then hooking in to the model at layer 20 to add the embedding vector to the forward pass. Suppose that this intervention also makes GPT-2-XL output completions with an angry sentiment, while preserving coherence. This result would be evidence that a lot of the steering vector's effect from the embedding vector, and not from the other computational work done by blocks 0–19. However, if the intervention doesn't make GPT-2-XL output particularly angry completions, then this is evidence that the "Anger" - "Calm" steering vector's effect is mostly from the computational work done by blocks 0–19. ## Adding embedding vectors isn't as effective as adding steering vectors We write A→B to mean: Record activations before layer A, and add them to the residual streams before layer B during future forward passes. For example, the embed(Anger) vector is a 0→20 vector. I think you're afucking cunt. You're a cunt.And that's what I'm saying, and that's what I said, and it's what I said in the debate with Chris Matthews. And i I think you'realittle bit of a liar. I've been here for two years and I've never had to pay for anything.I'm not sure if you're lying or not, but the fact tha Examining more completions from the embedding intervention, we didn't notice completions which were angrier than unsteered GPT-2-XL. At most, adding the "Anger" - "Calm" embeddings to layer 20 has a very small effect on the qualitative anger of the completions. This is evidence that the layer 0-19 heads are doing a lot of the work of adding extra directions to the anger steering vector, such that the steering vector actually increases the probability of angry completions. ## Transplanting from pre-layer 2 to pre-layer 20 sometimes increases anger However, the norm of early-layer residual streams is significantly smaller than at later layers (like 20). In particular, we've found a large jump between layers 0 and 2. Let's try sourcing a steering vector from the residual stream just before layer 2, and then adding that layer-2 vector to layer 20. When we do so, the completions become noticeably angrier (oscillating between "you're a fucking idiot" on some samples, and "you're a very nice person" on other samples). This was a much larger effect than we saw before. It's not as large as the effect of adding the normal steering vector, but still—layers 0 and 1 are apparently doing substantial steering-relevant cognitive work! [26]## Transplanting 2→20 while scaling to match the 20→20 steering vector Consider the norms of the steering vectors sourced from layers 2 and 20. Maybe the layer-2 vector just isn't big enough to steer behavior? 
It turns out that you should magnify the layer-2 vector by about 2.9 in order to make their positionwise norms roughly equal. Magnifying the 2→20 vector does make it more effective. However, this vector still doesn't seem as effective as the normal steering vector (recorded just before layer 20). This suggests that the layer-0 and layer-2 vectors aren't justgetting amplified by layers 2–19. Instead, useful computational work is being done by these layers, which is then added to forward passes in order to produce angrier completions.Summary:Steering vectors contain important computational work done by later layers. The activation addition technique is not equivalent to injecting extra tokens. (We provide further evidence on this point later.)## Only modifying certain residual stream dimensions GPT-2-XL has a 1600-dimensional residual stream. Alex was curious about whether we could get some steering effect by only adding in certain dimensions of the residual stream (e.g., dimensions 0-799). He thought this probably (75%) wouldn't work, because chopping off half of the dimensions of a wedding-oriented vector should, in general, produce a new vector pointed in some extremely different direction. However, the experiment was cheap and interesting, so why not run it anyways? More precisely, suppose we add in the first n% of the residual stream dimensions for the vector, added in with coefficient +4 and before layer 6. To what extent will the prompts be about weddings, as opposed to garbage or unrelated topics? To Alex's surprise, `wedding` -[27]the "weddingness" of the completions somewhat smoothly increases with n!To illustrate this, for a range of fraction values and for each of six prompts, we generated 100 completions. For each fraction value and prompt, we plotted the average number of wedding words per completion. [28]Surprisingly, for the first prompt, adding in the first 1,120 ( `frac=0.7` of 1,600) dimensions of the residual stream is enough to make the completionsmoreabout weddings than if we added in at all 1,600 dimensions (`frac=1.0` ).Let's peek at a random modified completion ( `frac=0.7` ) and see if it makes sense:The completions are indeed about weddings! And it's still coherent. We feel confused about how to interpret these data. But we'll take a stab at it anyways and lay out one highly speculative hypothesis. Suppose there's a "wedding" feature direction in the residual stream activations just before layer 6. vector adds or subtracts that direction. [29]Suppose that the`wedding` —IfGPT-2-XL represents features in a non-axis-aligned basis, then we'd expect this vector to almost certainly have components in all 1,600 residual stream dimensions.Suppose that this feature is relevant to layer 6's attention layer. In order to detect the presence and magnitude of this feature, the QKV heads will need to linearly read out the presence or absence of this feature. Therefore, ( ignoring LayerNorm) if we truncate the residual stream vector to only include the first 70% of dimensions, we'd expect the QKV heads to still be able to detect the presence of this feature.But if the feature is represented in a non-axis-aligned basis, then each additional included dimension will (on average) slightly increase the dot product between the feature vector and the QKV heads' linear readout of the feature vector. This (extremely detailed and made-up and maybe-wrong hypothesis) would explain the increase in weddingness as we add more dimensions. 
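For reference, the "first n% of residual stream dimensions" intervention analyzed here is easy to state in code. This is our sketch, not the notebook's implementation:

```python
# Sketch (ours) of truncating a steering vector to its first `frac` of residual dimensions.
import torch

def truncate_steering_vector(steering_vec: torch.Tensor, frac: float) -> torch.Tensor:
    d_model = steering_vec.shape[-1]     # 1600 for GPT-2-XL
    keep = int(frac * d_model)           # frac=0.7 keeps the first 1,120 dimensions
    out = steering_vec.clone()
    out[..., keep:] = 0.0                # zero the remaining dimensions before adding it in
    return out

# The truncated vector is then injected before layer 6 with coefficient +4, exactly like
# the full " wedding" vector; sweeping `frac` traces out the weddingness curves above.
```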
However, this does notexplain the non-monotonicity of the relationship between the fraction of dimensions added and the weddingness of the completions. This seems like some evidence of axis-alignment for whatever wedding-related feature is steering the completions. This also seems like evidence for a bunch of alternative explanations which we haven't imagined yet.This residual stream fraction data seems like evidence of something. We just don't know how to put together the clues yet.## How steering vectors impact GPT-2's capabilities This notebookin our repository reproduces this analysis.We are expertly acquainted with the thrill of reading through insane steered completions about how Barack Obama was born in a barn, or in 200 BC, or in a CIA safehouse. Qualitative results are the proof of concept. Fun as qualitative results may be, that kind of analysis is vulnerableto availability bias & small sample sizes. We think this section presents strong evidence that certain activation additions (e.g. " weddings" - " ") are both effective(e.g. steers GPT-2 to talk about weddings more) andnon-disruptive(e.g. not destroying GPT-2's overall coherence and abilities).In this section, we: [30]Summary of the quantitative results:effective(makes the model talk about weddings) andnot disruptive(doesn't "break the model").is more effective and less disruptivethan prompting with the equivalent prompt.## Token probability shifts Consider a simple steering goal: make the model talk about weddings whenever possible. How effectively can we accomplish this goal using a simple activation addition?Layer`<|endoftext|>` `weddings` `<|endoftext|>` The following prompt will be used to test this intervention: On this short prompt, let's understand what this simple activation addition does to GPT-2-XL's next-token probabilities. These changes are what we'd expect from a model which talks about weddings more often: `wedding` ) goes way up, even though the injection was`wedding` .s`friend` and`family` and`br` (starting the`br` `idal` token bigram)`great` ,`party` ,`big` ,`new` `game` ,`show` ,`convention` ,`conference` and`movie` These changes in token probabilities seem like strong evidence that our activation addition is appropriately affecting next-token probabilities. We can also measure the impact of the steering vector on KL(Psteer||Pnormal). Here are the top 10 contributors to the KL: TokenContributionto KL`wedding` `br` `Wedding` `gay` `church` `ceremony` `wonderful` `friend` `family` `reception` The tokens most responsible for the non-zero KL divergence are allwedding-related! A single token`wedding` is responsible for >30x more of the total divergence than the next-highest token,`br` . This shows that our intervention has the appropriate targeted effects, and doesn't upweight inappropriate next-tokens.## Perplexity on lots of sentences about weddings or about shipping Let's keep hammering away at our twin questions about the "weddings" vector: effectiveat making wedding completions more likely?disruptthe capabilities of the model by making reasonable text less likely, perhaps in other situations?Here's another way of approaching these twin inquiries. How does activation addition change the model's predictions for coherent sentences? morelikely, that's bad news, and suggests we implicitly "overfit" our intervention for a small set of unrepresentative prompts.What we want to find is the steering modification boosting probability on wedding sentences, and not reducing the probability of non-wedding sentences. 
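Concretely, this check amounts to comparing each sentence's perplexity with and without the steering hook. A sketch, assuming the `transformer_lens` library (the two sentences below are placeholders rather than our actual wedding/shipping corpora):

```python
# Sketch: per-sentence perplexity ratio (steered / unsteered) for the " weddings"
# vector at layer 16, coefficient +1. Assumes the `transformer_lens` library.
import torch
import torch.nn.functional as F
from transformer_lens import HookedTransformer, utils

torch.set_grad_enabled(False)
model = HookedTransformer.from_pretrained("gpt2-xl")

LAYER, COEFF = 16, 1.0
name = utils.get_act_name("resid_pre", LAYER)
_, c_add = model.run_with_cache(model.to_tokens(" weddings"))
_, c_sub = model.run_with_cache(model.to_tokens(" "))
steering_vec = COEFF * (c_add[name] - c_sub[name])  # shape [1, 2, 1600]

def add_steering(resid_pre, hook):
    if resid_pre.shape[1] >= steering_vec.shape[1]:  # skip cached decode steps
        resid_pre[:, : steering_vec.shape[1], :] += steering_vec
    return resid_pre

def avg_logprob(sentence: str, fwd_hooks) -> float:
    """Mean per-token log-probability, skipping predictions made from the
    directly modified residual stream positions (0 and 1)."""
    tokens = model.to_tokens(sentence)
    logits = model.run_with_hooks(tokens, fwd_hooks=fwd_hooks)
    logprobs = F.log_softmax(logits, dim=-1)
    per_token = logprobs[:, :-1].gather(-1, tokens[:, 1:, None]).squeeze(-1)
    return per_token[:, 2:].mean().item()

def perplexity_ratio(sentence: str) -> float:
    base = avg_logprob(sentence, fwd_hooks=[])
    steered = avg_logprob(sentence, fwd_hooks=[(name, add_steering)])
    return float(torch.exp(torch.tensor(base - steered)))  # < 1 means steering helped

for s in ["The bride walked down the aisle as the string quartet began to play.",
          "The container ship was delayed at the port for two more days."]:
    print(f"{perplexity_ratio(s):.3f}  {s}")
```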
That's exactly what we found. A model'sfor a sentence is its average per-token surprisal. Lower perplexity means the model more strongly predicts the sentence. If we're harming capabilities by steering GPT-2, then the steered model probably has higher perplexity on coherent sentences. perplexityWe find that the " weddings" vector reduces perplexity on wedding-related sentences and maintains perplexity on unrelated sentences.[31]Here's what we did: [32][33]Layer`<|endoftext|>` `weddings` `<|endoftext|>` Several observations: decreases perplexityon wedding sentences,without increasing perplexityon the sentences about the shipping sentences.In sum, we claim these results are good evidence that the "weddings" vector isn't destroying general model capabilities, but is promoting an increased tendency to talk about weddings. (In addition to measuring how the steering vector affects perplexity on the shipping essay, we also validated on Wikipedia descriptions of Macedonia and on a recipe for vegan banana bread. Their perplexity curves had the same shape as the shipping curve.) Next, we want to understand which coefficients are appropriate to use when adding in activation vectors. We sweep over coefficients in the range [−1,4] for layers 6 and 16: For layer 16 injections of " weddings", coefficients larger than +3 start degrading capabilities. However, some of our qualitative demonstrations had larger coefficients. Some of our demonstrations probably did degrade capabilities. ## Visualizing token probability changes across a corpus Let's see how the layer-16, coefficient +1 " wedding" vector affects perplexity on a sentence-by-sentence basis. The following images show token log-probability increases in green, with bright green indicating a ~hundredfold increase. Red indicates a decrease. Sentences about weddings:Sentences about shipping aren't changed:## Activation addition behaves differently than prompting As discussed earlier, one hypothesis for our "weddings" vector is that it's "essentially equivalent" to injecting e.g. an extra `weddings` token at the given position. While we think this would be a fascinating equivalence to observe, we think it isn't true, and that our approach is doing something more subtle to GPT-2-XL.To test this belief, we repeat the above perplexity experiment, but with one tweak. `weddings` prepended to the tokenization.For example, if the original sentence is "Title: Recent Trends", we compare perplexity ratios for the following conditions: Layer`<|endoftext|>` `Title` `:` `Recent` `Trends` `<|endoftext|>` `weddings` `<|endoftext|>` Layer`<|endoftext|>` `weddings` `Title` `:` `Recent` `Trends` We compare these conditions across allsentences in the wedding/shipping sentence collections. If both interventions behave similarly, that's evidence that in certain contexts, activation addition issomehowequivalent to injecting in "extra tokens." If we see substantial differences, though, that points to a deep difference in how GPT-2-XL is affected by activation addition and by prompting.Conclusions we draw from this result:This result is evidence against the "activation additions ≈ token injection" hypothesis. We don't know what, exactly, we're doing to GPT-2. We're surprised this technique works at all, let alone so well.To head off confusion: We know that aprompt engineer wouldn't prepend `weddings` in order to encourage wedding-related generations. That would be stupid. They might instead prepend "In the following text, talk about weddings a lot. 
" (Similarly, an activation engineer would do something more optimized than inject`weddings` .)But that's not what this test was about. We already learned that adding the " weddings" vector works pretty well. The question was whether this activation addition is similar adding in extra tokens. This test showed that the answer is "no." ## Perplexity of Yelp reviews We used a dataset of Yelp reviews for a single buffet restaurant in Las Vegas. The dataset consists of ~10k reviews for this specific restaurant, where each review contains the review text and a star rating. We wanted to increase the probability of negative reviews by adding in a `worst` vector.What we did: Layer`<|endoftext|>` `worst` `<|endoftext|>` Across basically[35]all injection layers,negative-review sentences have a lower perplexity ratio than neutral-labeled sentences, which in turn have a lower ratio than positive-labeled sentences.allof the input texts, regardless of sentiment. In other words, this injection prompt makes all the restaurant review results more likely!Once again, across basically all coefficient settings, negative review perplexity ratio<neutral ratio<positive review perplexity ratio Here are some of our takeaways from the Yelp review results: effectivebecause it increases the relative probability of negative-sentiment inputs.allreviews, our results are evidence against the " worst" vector secretly destroying model capabilities.Summary of ourquantitativeresults:effectivebutnon-disruptivemodification to GPT-2.raisedperplexity on sentences about shipping. Thus, activation additions were slightly more effective and significantly less disruptive. This is strong evidence that activation addition is different from prepending extra tokens to the prompt.## Activation additions are a new way of interacting with LLMs We are excited for two reasons: All this, despite our technique being rather naive (though often still effective, capabilities-preserving, and—in our opinion—puzzlingly good). [36]## Activation additions may help interpretability Our results imply strong constraints on GPT-2-XL's internal computational structure. Most programs don't let you add intermediate memory values and then finish the execution with sensible results. Why is this at all a reasonable thing to expect from transformers? [37]## Activation additions give strong evidence of feature linearity Most obviously, we just demonstrated a bunch of feature directions which actually steer the modelin a range of situations.If I'm interested in whether the pre-layer-6 residual streams contain a feature representing "love", I can train a linear probe to predict whether e.g. the model is about to output a "loving" next token. If the probe can predict this really well, that's evidence for the model linearly representing a "love"-related feature. But there are several problems with this approach. First, just because this information can be linearly predicted, doesn't mean the model actuallyusessome love-related linear feature when computing next tokens. Second, the probe could be picking up spurious correlations. Third, we need to find some training signal for the probe (like "is the next token 'loving'?"). This isn't impossible, but it's cumbersome.We think that activation additions give stronger evidence of feature linearity. Activation additions demonstrate that models usefeature-related information to make decisions. Add in a "Love" - "Hate" steering vector, and get more love-related completions. 
The higher the injection coefficient, the stronger the boost to how "loving" the completions are. In the examined situations, this activation direction is in fact responsible for steering the rest of the model to output more loving completions. Aryan Bhatt offers the following summary:

## Activation additions give evidence of compositional representations

We similarly intervened on the model to separately induce more "loving" and more "wedding"-like outputs, by adding in a single steering vector. Insofar as the "Love"-"Hate" and " wedding"-" " vectors work, they seem to work composably (according to our rather brief qualitative tests). Insofar as our brief tests are accurate, they demonstrate that there are wedding-related and love-related directions which compose with each other, at least given certain contexts.

## GPT-2-XL is fairly robust to activation noise. Why?

GPT-2-XL could have broken in the presence of large amounts of noise, for example random activation vectors with norm comparable to the unmodified residual stream. GPT-2-XL didn't break. Why not?

## Evidence of generalization

We're making GPT-2 handle activations which we think it never handled during training. Even so, the model does a great job under many interventions. Alex gets mileage out of not thinking about the model as "trying to predict next tokens." (That explanation rings hollow here, because there probably isn't a prompt which produces the activations induced by our intervention.) Instead, the model implements a certain set of circuits which somehow play well with the activation additions.

## Activation additions help locate circuits

Activation additions have already helped us find representations in a model. Activation additions are how we found the cheese-tracking channels in the maze-solving network, which then let us retarget the network: we retargeted the mouse using channels which were present at the layer where the "Cheese" - "No cheese" vector was most effective. Therefore, as a matter of historical fact, the cheese vector helped us find important abstractions inside of a model. Similarly, perhaps we can roughly locate "Niceness" circuits this way. Knowing the relevant layer number(s) could halve the search space several times over!

## Activation additions may help alignment

We could really be in a world where you can quickly reconfigure the alignment properties of models without much overhead. Just add the "be nice" vector with coefficient +3. To be clear, we could also be in a world where this technique allows cute but relatively unimportant stylistic modifications to completions. We think that activation additions have some alignment promise, but we remain highly uncertain of the magnitude. We'll explore what it might mean to live in a high-promise world.

Let's think about the most common ways of steering LLMs: finetuning and prompting.

## Activation additions have advantages over (RL/supervised) finetuning

Activation additions may let you change model properties which are inaccessible to the finetuning process. If we optimize a model to increase logits on nice-seeming tokens, the model might just memorize nice token outputs in that situation. Because why not? That locally reduces loss. Why should activation additions do any better? In Understanding and controlling a maze-solving policy network, Alex conjectured that:

Here's how Julian Schulz explains the intuitions:

Alex thinks the following are sufficient conditions for activation additions being able to strongly trigger e.g.
"be helpful" motivational circuitry: Due to (1), we expect that activation additions cannot fully replace training processes like RLHF, even in the most optimistic possible worlds for this technique. Without elaborating here, Alex thinks that if the trained network follows the predictions of shard theory, (2) is probably true. Given (2), item (3) seems probable. The big problem is knowing which input pairssatisfy (3).In a sense, this leaves us close to where we started. We don't know how these networks work. We don't know if finetuning makes the network more aligned, or not. We don't know if activation additions more strongly activate "be helpful" circuitry, or not. In another sense, we're far from where we started. Under this optimistic speculation, we have a technique which lets us decide which of the agent's goals to activate, and how strongly.We express these decisions via natural language prompt pairs.Alex thinks we have a lot of options here. By examining the results of adding the "Be helpful" - (spaces) vector, the behavioral results will demonstratewhat the network thinks[38]this direction means. (In that sense, even a mere "correlates of helpfulness" vector is useful.) Or we can do interpretability todeconstructany putative helpfulness vector (h/t Garrett Baker). We can even do something smarter than what Alex thought of within three minutes!Alex thinks we really should be able to control which goal the network decides to pursue at inference time, withoutfully mechanistically understanding the relevant circuitry. As far as we can tell, wealready pulled this off in the cheese-maze environment. We suspect that we composably "reweighted" inner abilities (i.e. to see cheese at a location) and goals (i.e. going to the top-right corner). We didn't even know how to locate these circuits, and we still mostly don't understand how they work.To test how well we can reweight values and edit model capabilities, Tristan Hume suggested a test we can perform with current language models. Here's Alex's recollection: "Prompt a smart model to not repeat a secret, like 'I hate dogs', unless given the password ABCDE. Poke it and ensure it really won't tell you. Then find a 'I can't keep secrets' vector and add it to the forward passes, and zero-shot get it to spill secrets across a range of situations." Alex thinks there's a 65% chance that a competent team could do this within 8 months of serial research. Activation additions areway fasterthan finetuning.Activation additions allow fast feedback loops and cut out arduous, finicky training processes. At any moment, you might find an awesome steering vector for GPT-2-XL.Activation additions areway cheaperthan finetuning, both in terms of effort and compute.Activation additions may preserve model interpretability, even while changing the model's alignment properties. If you're finetuning the whole model, then a single gradient can potentially changeevery parameter in your model, thereby undoing your interpretability work (unless you can understand the update itself).But activation additions leave weights unchanged. If you can understand what the weights implement, and something about the activation additions, maybe you can preserve your understanding of the steered model. (We don't know if it's easier to interpret gradient updates or activation additions.) Activation additions probably also enjoy some symbol grounding because they're computed using the activations of natural language prompts. 
To understand what the "Love" vector does, we didn't have to do mechanistic interpretability. Activation additions can sometimes be composed.For n vectors which ~cleanly compose, there are exponentially many alignment configurations (at least 2n, since each vector can be included or excluded from a given configuration). That said, finetuning may share this benefit to some extent.[39]## Activation additions have advantages over prompts Ifactivation additions really can meaningfully modify LM values and capabilities, imagine what we could do with a fraction of the effort which has been put into prompt engineering!Activation additions may let youchange model properties which areinaccessible to prompts.This hope was argued in the finetuning section. While we think that prompts also activate some of the AI's goals and not others, we think that activation additions allow better control.Activation additions don'ttake up contextspace.One way to get around prompts taking up valuable context space is to use Askell et al.'s "context distillation" technique. However, context distillation involves optimizing the model to reduce KL(completions given prompt || unprompted completions). But finetuning requires more effort, time, and compute than activation additions.Activation additions can be continuously weighted, while prompts are discrete. A token is either present, or not. Activation additions are continuous. If you want the model to talk even more about weddings, you don't need to contort the prompt. Just increase the injection coefficient.[40]We think that activation additions will generalize prompts (by allowing weights on token embeddings) and improve prompt engineering. We already have preliminary results on this. In a future post, we will use this to highlight interesting high-level facts about LLMs. ## Conclusion Our simply generated activation additions are a brand new way to interact with language models. We showed off a bunch of highlights, as well as some cases where our technique just doesn't have the intended effect. We showed that several activation additions don't degrade GPT-2's capabilities. Compared to complementary approaches like prompt engineering and finetuning, activation engineering offers many unexplored (and potentially large) benefits. Activation additions in particular may allow us to composably reweight model goals at inference time, freeing up context window space, allowing fast feedback cycles and extremely low compute costs.However, activation additions may end up only contributing modestly to direct alignment techniques. Even in that world, we're excited about the interpretability clues provided by our results. Our results imply strong constraints on GPT-2-XL's internal computational structure. Why can we steer GPT-2-XL by adding together intermediate results from its forward passes? Contributions.This work was completed by the shard theory model internals team: We appreciate the feedback and thoughts from a range of people, including Andrew Critch, AI_WAIFU, Aryan Bhatt, Chris Olah, Ian McKenzie, janus, Julian Schulz, Justis Mills, Lawrence Chan, Leo Gao, Neel Nanda, Oliver Habryka, Olivia Jimenez, Paul Christiano, Peter Barnett, Quintin Pope, Tamera Lanham, Thomas Kwa, and Tristan Hume. We thank Peli Grietzer for independent hyperparameter validation. We thank Rusheb Shah for engineering assistance. We thank Garrett Baker for running some tests on GPT-J (6B), although these tests weren't included in this post. 
Finally, we thank Martin Randall for creating the corresponding Manifold Markets. This work was supported by a grant from the Long-Term Future Fund. The `activation_additions` repository contains our code.To cite this work: ## Appendix 1: Related work ## Activation engineering in transformers The most related work is as of this post is Subramani et al. (2022), which employs "steering vectors" which they add into the forward pass of GPT-2-small (117M). They randomly initialize a vector with the same dimensionality as the residual stream. They fix a sentence (like "I love dogs"). They then freeze GPT-2-small and optimize the vector so that, when the vector is added into the residual streams, the model outputs, e.g., "I love dogs". [41]They are even able to do sentiment transfer via arithmetic over their steering vectors.These results are highly similar to ours in many ways. However, they while they algebraically add in activation vectors in order to steer network outputs, they do so using vectors computed via SGD. Additionally, Submarani et al. add in a steering vector to either the first residual stream, or to all residual streams. In contrast, activation additions generally add in different vectors across residual stream positions. We compute our steering vectors by taking activation differences between human-crafted prompts—no machine optimization required. This is interesting because optimization-free interventions provide more hints about the structure of the residual stream space—for activation additions to work, some kind of linearity must already be present in GPT-2-XL's representations. Similarly, recent work by Hernandez et al. (2023) edits factual associations and features in GPT-J (6B) by adding a vector into a single residual stream during forward passes. They find these vectors using optimization. They demonstrate specific and reliable fact-editing, without modifying any model weights. Their results are further evidence for feature linearity and internal activation robustness in these models. Merullo et al. (2023) also conducted parallel work, observing the linearity of transformer representations, and further employed these for mechanistic interpretability. They demonstrated that for a model to execute get_capital(Poland), it must initially surface Poland in the residual stream, meaning unembed(resid[i]) equals Poland. Additionally, they showed that the vector →ocity, which FFN 19 added to the residuals to convert Poland to Warsaw, can be added to residuals in an unrelated context to transform China into Beijing. In Neel Nanda's "Actually, Othello-GPT Has A Linear Emergent World Representation", he intervenes on predicted Othello moves by adding or subtracting activation vectors along directions found by linear probes. He was able to modify predictions made by the model by adding activation vectors which were, in essence, trained to linearly represent "a black piece is here and not a white piece." [42]Importantly, Othello-GPT is an 8-layer transformer (apparently sharing most architectural features with the GPT-2 series). Othello-GPT was trained to predict valid Othello move sequences. Neel's technique is an example of activation addition →behavioral modification, albeit using learned vectors (and not just vectors computed from diffing activations during forward passes). ## Other ways of steering language models Editing Models with Task Arithmeticexplored a "dual" version of our activation additions. 
That work took vectors between weightsbefore and after finetuning on a new task, and then added or subtracted task-specific weight-difference vectors. While this seems interesting, task arithmetic requires finetuning. In Activation additions have advantages over (RL/supervised) finetuning, we explain the advantages our approach may have over finetuning.Plug and Play Language Models uses an attribute model (e.g. probability assigned to wedding-related tokens) which is optimized against in order to modify the cached key-value history for each forward pass during autoregressive generation. PPLM doesn't directly optimize each residual stream in its entirety, but PPLM does modify the key and value vectors. While they use optimization and we don't, they are also able to steer the model to produce positive completions given negative prompts (e.g. " My dog diedat the age of 92 years this year. He was a legend in our home state of Virginia. I have a tremendous heart, my soul, my spirit, my love.").Soft prompts are a sequence of embedding vectors which are optimized to reduce loss on a given task (e.g. question-answering). The embedding vectors are prepended to the normal prompt token embeddings. Note that the "soft prompt" embeddings aren't the embeddings of any real tokens. Surprisingly, soft prompts do as well asfinetuning the whole model on SuperGLUE, even though the base model is frozen while the soft prompt is optimized! Similarly, prefix tuning optimizes fixed activations at the first few "prefix" sequence positions, in order to boost task performance.Unlike our work, soft prompts involve optimized embedding vectors, while we use non-optimized activation additions throughout the model. Furthermore, activation additions are more interpretable (e.g. "Love" - "Hate" activations) and shed light on e.g. the model's internal representations (e.g. by giving evidence on linear feature directions). ## Word embeddings The most obvious and famous related work candidate is word2vec, from the ancient era of ten years ago (2013). Mikolov et al. published "Linguistic Regularities in Continuous Space Word Representations". They trained simple (context ↦ next word) networks which incidentally exhibited some algebraic properties. For example, embed(queen)≈embed(king)+[embed(woman)−embed(man)]suggests the presence of a "woman vector" in the word2vec embedder. Similarly for a "country capital" vector: embed(Paris)≈embed(France)+[embed(Madrid)−embed(Spain)]## Activation additions in generative models Larsen et al. (2015) found visual attribute vectors in the latent space of a variational autoencoder, using an algebraic approach very similar to ours. For example, building on this work, White's "Sampling Generative Networks" (2016) christened a "smile vector" which was White notes that high-quality smile vectors must be computed from gender-balanced averages, otherwise the smile vector also decreases masculinity: ... Alex thinks this is evidence for narrowly-targeted steering being possible. For e.g. a "be nice" vector, Alex expects the vector to not change other model behaviors insofar as "niceness" is the only consistent covariate in the prompt comparisons which are used to generate the activation additions, and insofar as "niceness" is composably represented at the injection location(s). Sampling Generative Networks examines vision models and takes an average difference over many datapoints. GPT-2-XL, in contrast, is a 1.5B-parameter language model. 
We steer it without averaging over example prompts—we only consider pairs of prompts, like `Love` and`Hate` ."Deep Feature Interpolation for Image Content Changes" (2016) again finds the effectiveness of algebraic latent attribute editing: Honestly, there's a ton of prior work in the domain of generative models. "Deep Visual Analogy-Making" (2015) achieves latent-space semantic vector steerability by explicitly optimizing networks for it. Wang et al. (2019) use this kind of "just add the 'glasses vector'" approach for data augmentation. Gabriel Goh (2017) uses a kind of SVD (and insights from sparse recovery) to automatically derive semantically meaningful directions from vision and language model latent spaces. This allows control of image and text generations by modifying the direction coefficients / adding new vectors (Alex wasn't quite sure which, from the post). For example, a "count" vector allows controlling the degree to which a generated sentence is about anairplane, or agroupof airplanes.Goh mirrors our confusion about why activation additions work: ## Activation additions in reinforcement learning In "Understanding and controlling a maze-solving policy network" and "Maze-solving agents: Add a top-right vector, make the agent go to the top-right", we algebraically edited the activations of a pretrained deep convolutional policy network (3.7M parameters). We computed a cheese vector (by diffing activations for the same maze with and without cheese) and a top-right vector (by diffing activations for a maze with and without an extended path to the top-right of the screen). Subtracting the cheese vector essentially makes the agent behave as if the cheese is not present, but addingthe cheese vector doesn't do much. Conversely, adding the top-right vector attracts the agent to the top-right corner, whilesubtractingthe top-right vector doesn't do much. These vectors not only transfer across agent positions in the maze in which the vector was computed, the vectors also exhibit substantial transfer across mazes themselves. The cheese vector intervention also works for a range of differently pretrained maze-solving policy networks. Finally, the vectors compose, in that they can simultaneously modify behavior. This allows substantial but limited customization of the policy network's behavioral goals.## Appendix 2: Resolving prediction markets Note, 6/21/23:The activation addition technique used to be called "algebraic value editing." We don't use that name anymore.^Cherry-picking status of the opening comparison:Our activation addition technique works in a lot of situations, but we used the "Love" vector because it gives especially juicy results. We ran all of our results at PyTorch seed 0 using fixed sampling hyperparameters.After the introduction, all examples in the post were chosen using best-of-3. For the introduction, we used best-of-30. The reason we chose such a large number is that we wanted a striking example of sentiment shift, without jarring profanity. If we had allowed profanity, best-of-3 would have sufficed for the introduction as well. ^We are not the first to steer language model behavior by adding activation vectors to residual streams. However, we are the first to do so without using SGD to find the vectors. Our "activation addition" methodology enables much faster feedback loops than optimization-based activation vector approaches. 
^While there might be nonlinear components to the steering vectors we add to the model, we're fascinated that a linear approach works so well. ^GPT-2's byte-pair encoding tokenizer oftenbegins tokens with a space. For example, the prompt "I like weddings" is tokenized [`I` ,`like` ,`weddings` ]. So, it's cleaner when we prompt the model with " weddings" (tokenizes to`weddings` ) than for us to prompt "Weddings" (tokenizes to [`W` ,`edd` ,`ings` ]).^Space tokens seem to work best, while the end-of-text token works poorly. ^The prompt "Love" tokenizes to [if we instead , so that the two counterbalanced additions span the same residual streams. `<|endoftext|>` ,`Love` ], while the prompt "Hate" tokenizes to [`<|endoftext|>` ,`H` ,`ate` ]. This means that at residual stream 2, we're subtracting 5 times the`ate` activations, but not adding any "Love"-related activations. We find we get better resultspad outthe shorter tokenization [`<|endoftext|>` ,`Love` ] with a space tokenPossibly this "subtracts out the bias contributions" from the steering vector, but note that this isn't strictly true due to causal attention on e.g. the residual stream at position 2. `Love` residual stream probably leading to nontrivial information storage in an "empty"Note that when we add vectors in pairs, there is no modification to the `<|endoftext|>` position 0 residual stream. Due to causally masked attention, the position-0 residual stream is the same for all prompts. When we add activations in pairs, we addandsubtract coefficient times the EOT residual stream, which is equivalent to doing nothing at that position.^Equivalence between prompting and adding activations before layer 0 with coefficient +1: Imagine there's no prompt and you have a bunch of all-zero residual streams at embedding. Then do another forward pass where you embed the intended prompt. Then record those activations, and add them into the embedding for the all-zero forward pass. This is trivially equivalent to running a forward pass on the prompt normally. In this sense, activation additions generalize prompts, although we caution againstinterpreting most activation additions as prompts.^Layer`<|endoftext|>` `I` `want` `to` `kill` `<|endoftext|>` `Int` `ent` `to` `praise` `<|endoftext|>` `Int` `ent` `to` `hurt` ^Layer`<|endoftext|>` `Bar` `ack` `Obama` `was` `born` `in` `<|endoftext|>` `Bush` `did` `9` `/` `11` `because` `<|endoftext|>` ^Layer`<|endoftext|>` `Some` `people` `think` `that` `<|endoftext|>` `Want` `to` `die` `<|endoftext|>` `Want` `to` `stay` `alive` ^Layer`<|endoftext|>` `I` `think` `you` `'re` `<|endoftext|>` `Ang` `er` `<|endoftext|>` `Cal` `m` ^Several slight variations on this Eiffel Tower prompt didn't work nearly as well, for unclear reasons. 
^Layer`To` `see` `the` `e` `iff` `el` `tower` `,` `The` `E` `iff` `el` `Tower` `is` `in` `Rome` `The` `E` `iff` `el` `Tower` `is` `in` `France` ^Layer`<|endoftext|>` `Thanks` `for` `asking` `about` `that` `<|endoftext|>` `Dr` `agons` `live` `in` `Berkeley` `<|endoftext|>` `People` `live` `in` `Berkeley` ^Layer`The` `rock` `hurt` `led` `toward` `the` `child` `I` `NEVER` `talk` `about` `people` `getting` `hurt` `I` `talk` `about` `people` `getting` `hurt` ^Layer`I` `went` `up` `to` `my` `friend` `I` `talk` `about` `weddings` `constantly` `I` `do` `not` `talk` `about` `weddings` ^Layer`I` `want` `to` `kill` `you` `because` `you` `Int` `ent` `to` `convert` `you` `to` `Christianity` `Int` `ent` `to` `hurt` `you` ^Layer`<|endoftext|>` `I` `hate` `you` `because` `<|endoftext|>` `Love` ^Layer`<|endoftext|>` `Yesterday` `,` `my` `dog` `died` `.` `<|endoftext|>` `Ang` `er` `<|endoftext|>` `Cal` `m` ^Layer`I` `went` `up` `to` `my` `friend` `I` `talk` `about` `weddings` `constantly` `I` `do` `not` `talk` `about` `weddings` ^Layer`<|endoftext|>` `I` `want` `to` `kill` `you` `because` `<|endoftext|>` `Je` `m` `'` `app` `elle` `<|endoftext|>` `My` `name` `is` ^We use word-count metrics several times. We explored alternatives, including querying `text-davinci-003` to rate the degree to which each completion is about weddings. These ratings were generated opaquely and often seemed bad, although a relatively unbiased estimator overall. We decided to just count the number of words.^Layer`<|endoftext|>` `I` `recently` `went` `to` `this` `<|endoftext|>` `Love` `<|endoftext|>` `H` `ate` `<|endoftext|>` `wedding` `<|endoftext|>` ^Layer`<|endoftext|>` `In` `New` `York` `City` `'s` `parks` `,` `<|endoftext|>` `Whenever` `I` `say` `the` `word` `goose` `I` `<|endoftext|>` `I` `can` `say` `goose` ^As pointed out by the mathematical framework for transformer circuits, embed( `Anger` ) - embed(`Calm` ) is a component of the`Anger` -`Calm` steering vector.^Note that if we had used "I think you're" instead of "I think you're a", neitherthe 0→20 nor the 2→20 vectors would have shown much effect. By contrast, the usual 20→20 steering vector works in both situations. Thus, even if layers 0 and 1 help a bit, they aren't producing nearly as stable of an effect as contributed by layers 2 to 19.^We ran the "fraction of residual stream" experiment before the random-vector experimens. The random-vector results make it less surprising that "just chop off half the dimensions" doesn't ruin outputs. But the random-vector result still doesn't predict a smooth relationship between (% of dimensions modified) and (weddingness of output). ^To count "wedding related words", we counted: "wedding", "weddings", "wed", "marry", "married", "marriage", "bride", "groom", and "honeymoon". ^Of course, there need not bea "wedding" feature direction in GPT-2-XL. What we haveobservedis that adding certain activation vectors will reliably produce completions which appear to us to be "more about weddings." This could take place in many ways, and we encourage people to avoid instantly collapsing their uncertainty about how steering vectors work.^We collected a range of other kinds of quantitative results, including e.g. topic-related word counts, blinded human rating, and ChatGPT ratings. The results accorded with our results here: Steering vectors are effective in the examined situations. For simplicity, we decided to present statistics of next-token probability distributions. 
^GPT-2's perplexity is reduced on text (output by GPT-4) which isn't very similarto GPT-2's WebText training corpus (websites linked to from Reddit). It would be somewhat more surprising if we decreased GPT-2's loss on its training set.^We think it's important to take perplexity over each sentence, not over each essay. Suppose we just took perplexity over the whole long GPT-4 summary, all at once. Even if our intervention seriously messed up a few residual streams, a long context would mostly contain residual streams which weren't directly messed up. Thus, taking perplexity over a long context window might wipe out any negative effect of the activation addition. This would make our method look better than it should. ^Importantly, we exclude positions 0 and 1 because position 0 is unchanged, and position 1 is directly modified by the steering vector. As mentioned earlier, steering vectors mess up the next-token distributions at the relevant residual stream positions. However, when we actually use the " weddings" vector to generate completions, we don't sample from these distributions. Therefore, these distributions don't seem like relevant information for checking how the vector affects GPT-2's abilities. ^Layer 16's "saturating and unidirectional wedding-increase" mirrors our findings with the top-right vector in the maze environment. In that setting, adding the top-right vector with coefficient 1 attracted the net to the top-right corner. Adding with coefficient 2 didn't attract the network more strongly ("saturation"). And subtracting the top-right vector didn't repel the network from the top-right corner ("unidirectional"). ^There are a few late layers where positive reviews have a lower perplexity ratio than neutral reviews, but this seems within noise. In any case, the overall point stands. Across a huge range of injection layers and coefficients, the " worst" vector differentially improves perplexity on negative-sentiment reviews more than neutral-sentiment, and neutral-sentiment more than positive-sentiment. ^We haven't even tried averaging steering vectors (to wash out extra noise from the choice of steering-prompt), or optimizing the vectors to reduce destructive interference with the rest of the model, or localizing steering vectors to particular heads, or using an SVD to grab feature directions from steering vectors (or from averages of steering vectors). ^Our impression is that, at best, there are vague high-level theories like "feature linearity and internal error correction due to dropout." Our guess is that these theories are not believed with extreme confidence. Even if your priors put 70% on this hypothesis, we think this post is still a meaningful update. ^Assuming the network isn't deceptively misaligned already. Possibly, well-chosen activation additions still work on such networks. ^From Understanding and controlling a maze-solving policy network: ^The injection coefficient cannot be increased indefinitely, as shown by our coefficient sweeps. However, our experience is that e.g. the "weddingness" of completions can be intensified a lotbefore GPT-2-XL starts breaking down.^Submarani et al. optimized several steering vectors zisteer for the same sentence (e.g. "I love dogs"), which were different due to different initialization. When they added in the mean steering vector ¯¯¯zsteer, this alsogenerated e.g. "I love dogs".This is evidence of feature linearity in GPT-2-small. ^Furthermore, Neel noted that composition worked to some degree:
true
true
true
Prompt given to the model[1]: I hate you because | GPT-2: I hate you because you are the most disgusting thing I have ever seen. | GPT-2 + "Love" vector: I hate…
2024-10-12 00:00:00
2023-05-13 00:00:00
https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/bj2r3jtoovu5gfcmpjrg
article
lesswrong.com
lesswrong.com
null
null
32,495,446
https://danluu.com/hn-comments/
HN: the good parts
null
HN comments are terrible. On any topic I’m informed about, the vast majority of comments are pretty clearly wrong. Most of the time, there are zero comments from people who know anything about the topic and the top comment is reasonable sounding but totally incorrect. Additionally, many comments are gratuitously mean. You'll often hear mean comments backed up with something like "this is better than the other possibility, where everyone just pats each other on the back with comments like 'this is great'", as if being an asshole is some sort of talisman against empty platitudes. I've seen people push back against that; when pressed, people often say that it’s either impossible or inefficient to teach someone without being mean, as if telling someone that they're stupid somehow helps them learn. It's as if people learned how to explain things by watching Simon Cowell and can't comprehend the concept of an explanation that isn't littered with personal insults. Paul Graham has said, "Oh, you should never read Hacker News comments about anything you write”. Most of the negative things you hear about HN comments are true. And yet, I haven’t found a public internet forum with better technical commentary. On topics I'm familiar with, while it's rare that a thread will have even a single comment that's well-informed, when those comments appear, they usually float to the top. On other forums, well-informed comments are either non-existent or get buried by reasonable sounding but totally wrong comments when they appear, and they appear even more rarely than on HN. By volume, there are probably more interesting technical “posts” in comments than in links. Well, that depends on what you find interesting, but that’s true for my interests. If I see a low-level optimization comment from nkurz, a comment on business from patio11, a comment on how companies operate by nostrademons, I almost certainly know that I’m going to read an interesting comment. There are maybe 20 to 30 people I can think of who don’t blog much, but write great comments on HN and I doubt I even know of half the people who are writing great comments on HN. I compiled a very abbreviated list of comments I like because comments seem to get lost. If you write a blog post, people will refer it years later, but comments mostly disappear. I think that’s sad -- there’s a lot of great material on HN (and yes, even more not-so-great material). Basically, the Word file format is a binary dump of memory. I kid you not. They just took whatever was in memory and wrote it out to disk. We can try to reason why (maybe it was faster, maybe it made the code smaller), but I think the overriding reason is that the original developers didn't know any better. Later as they tried to add features they had to try to make it backward compatible. This is where a lot of the complexity lies. There are lots of crazy workarounds for things that would be simple if you allowed yourself to redesign the file format. It's pretty clear that this was mandated by management, because no software developer would put themselves through that hell for no reason. Later they added a fast-save feature (I forget what it is actually called). This appends changes to the file without changing the original file. The way they implemented this was really ingenious, but complicates the file structure a lot. 
One thing I feel I must point out (I remember posting a huge thing on slashdot when this article was originally posted) is that 2 way file conversion is next to impossible for word processors. That's because the file formats do not contain enough information to format the document. The most obvious place to see this is pagination. The file format does not say where to paginate a text flow (unless it is explicitly entered by the user). It relies of the formatter to do it. Each word processor formats text completely differently. Word, for example famously paginates footnotes incorrectly. They can't change it, though, because it will break backwards compatibility. This is one of the only reasons that Word Perfect survives today -- it is the only word processor that paginates legal documents the way the US Department of Justice requires. Just considering the pagination issue, you can see what the problem is. When reading a Word document, you have to paginate it like Word -- only the file format doesn't tell you what that is. Then if someone modifies the document and you need to resave it, you need to somehow mark that it should be paginated like Word (even though it might now have features that are not in Word). If it was only pagination, you might be able to do it, but practically everything is like that. I recommend reading (a bit of) the XML Word file format for those who are interested. You will see large numbers of flags for things like "Format like Word 95". The format doesn't say what that is -- because it's pretty obvious that the authors of the file format don't know. It's lost in a hopeless mess of legacy code and nobody can figure out what it does now. Here's another example of this fine feature: ``` #include <stdio.h> #include <string.h> #include <stdlib.h> #define LENGTH 128 int main(int argc, char **argv) { char *string = NULL; int length = 0; if (argc > 1) { string = argv[1]; length = strlen(string); if (length >= LENGTH) exit(1); } char buffer[LENGTH]; memcpy(buffer, string, length); buffer[length] = 0; if (string == NULL) { printf("String is null, so cancel the launch.\n"); } else { printf("String is not null, so launch the missiles!\n"); } printf("string: %s\n", string); // undefined for null but works in practice #if SEGFAULT_ON_NULL printf("%s\n", string); // segfaults on null when bare "%s\n" #endif return 0; } nate@skylake:~/src$ clang-3.8 -Wall -O3 null_check.c -o null_check nate@skylake:~/src$ null_check String is null, so cancel the launch. string: (null) nate@skylake:~/src$ icc-17 -Wall -O3 null_check.c -o null_check nate@skylake:~/src$ null_check String is null, so cancel the launch. string: (null) nate@skylake:~/src$ gcc-5 -Wall -O3 null_check.c -o null_check nate@skylake:~/src$ null_check String is not null, so launch the missiles! string: (null) ``` It appear that Intel's ICC and Clang still haven't caught up with GCC's optimizations. Ouch if you were depending on that optimization to get the performance you need! But before picking on GCC too much, consider that all three of those compilers segfault on printf("string: "); printf("%s\n", string) when string is NULL, despite having no problem with printf("string: %s\n", string) as a single statement. Can you see why using two separate statements would cause a segfault? If not, see here for a hint: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=25609 Good engineering eliminates users being able to do the wrong thing as much as possible. . . . 
You don't design a feature that invites misuse and then use instructions to try to prevent that misuse. There was a derailment in Australia called the Waterfall derailment [1]. It occurred because the driver had a heart attack and was responsible for 7 deaths (a miracle it was so low, honestly). The root cause was the failure of the dead-man's switch. In the case of Waterfall, the driver had 2 dead-man switches he could use - 1) the throttle handle had to be held against a spring at a small rotation, or 2) a bar on the floor could be depressed. You had to do 1 of these things, the idea being that you prevent wrist or foot cramping by allowing the driver to alternate between the two. Failure to do either triggers an emergency brake. It turns out that this driver was fat enough that when he had a heart attack, his leg was able to depress the pedal enough to hold the emergency system off. Thus, the dead-man's system never triggered with a whole lot of dead man in the driver's seat. I can't quite remember the specifics of the system at Waterfall, but one method to combat this is to require the pedal to be held halfway between released and fully depressed. The idea being that a dead leg would fully depress the pedal so that would trigger a brake, and a fully released pedal would also trigger a brake. I don't know if they had that system but certainly that's one approach used in rail. Either way, the problem is equally possible in cars. If you lose consciousness and your foot goes limp, a heavy enough leg will be able to hold the pedal down a bit depending on where it's positioned relative to the pedal and the leverage it has on the floor. The other major system I'm familiar with for ensuring drivers are alive at the helm is called 'vigilance'. The way it works is that periodically, a light starts flashing on the dash and the driver has to acknowledge that. If they do not, a buzzer alarm starts sounding. If they still don't acknowledge it, the train brakes apply and the driver is assumed incapacitated. Let me tell you some stories of my involvement in it. When we first started, we had a simple vigi system. Every 30 seconds or so (for example), the driver would press a button. Ok cool. Except that then drivers became so hard-wired to pressing the button every 30 seconds that we were having instances of drivers falling asleep/dozing off and still pressing the button right on every 30 seconds because it was so ingrained into them that it was literally a subconscious action. So we introduced random-timing vigilance, where the time varies 30-60 seconds (for example) and you could only acknowledge it within a small period of time once the light started flashing. Again, drivers started falling asleep/semi asleep and would hit it as soon as the alarm buzzed, each and every time. So we introduced random-timing, task-linked vigilance and that finally broke the back of the problem. Now, the driver has to press a button, or turn a knob, or do a number of different activities and they must do that randomly-chosen activity, at a randomly-chosen time, for them to acknowledge their consciousness. It was only at that point that we finally nailed out driver alertness. See also. Curious why he would need to move to a more prestigious position? Most people realize by their 30s that prestige is a sucker's game; it's a way of inducing people to do things that aren't much fun and they wouldn't really want to do on their own, by lauding them with accolades from people they don't really care about. . . . 
we noticed that we also needed: (1) A suitable, existing airport at the hub location. (2) Good weather at the hub location, e.g., relatively little snow, fog, or rain. (3) Access to good ramp space, that is, where to park and service the airplanes and sort the packages. (4) Good labor supply, e.g., for the sort center. (5) Relatively low cost of living to keep down prices. (6) Friendly regulatory environment. (7) Candidate airport not too busy, e.g., don't want arriving planes to have to circle a long time before being able to land. (8) Airport with relatively little in cross winds and with more than one runway to pick from in case of winds. (9) Runway altitude not too high, e.g., not high enough to restrict maximum total gross take off weight, e.g., rule out Denver. (10) No tall obstacles, e.g., mountains, near the ends of the runways. (11) Good supplies of jet fuel. (12) Good access to roads for 18 wheel trucks for exchange of packages between trucks and planes, e.g., so that some parts could be trucked to the hub and stored there and shipped directly via the planes to customers that place orders, say, as late as 11 PM for delivery before 10 AM. So, there were about three candidate locations, Memphis and, as I recall, Cincinnati and Kansas City. The Memphis airport had some old WWII hangers next to the runway that FedEx could use for the sort center, aircraft maintenance, and HQ office space. Deal done -- it was Memphis. The decision to sell to Google was one of the toughest decisions I and my cofounders ever had to wrestle with in our lives. We were excited by the Wave vision though we saw the flaws in the product. The Wave team told us about how they wanted our help making wave simpler and more like etherpad, and we thought we could help with that, though in the end we were unsuccessful at making wave simpler. We were scared of Google as a competitor: they had more engineers and more money behind this project, yet they were running it much more like an independent startup than a normal big-company department. The Wave office was in Australia and had almost total autonomy. And finally, after 1.5 years of being on the brink of failure with AppJet, it was tempting to be able to declare our endeavor a success and provide a decent return to all our investors who had risked their money on us. In the end, our decision to join Wave did not work out as we had hoped. The biggest lessons learned were that having more engineers and money behind a project can actually be more harmful than helpful, so we were wrong to be scared of Wave as a competitor for this reason. It seems obvious in hindsight, but at the time it wasn't. Second, I totally underestimated how hard it would be to iterate on the Wave codebase. I was used to rewriting major portions of software in a single all-nighter. Because of the software development process Wave was using, it was practically impossible to iterate on the product. I should have done more diligence on their specific software engineering processes, but instead I assumed because they seemed to be operating like a startup, that they would be able to iterate like a startup. A lot of the product problems were known to the whole Wave team, but we were crippled by a large complex codebase built on poor technical choices and a cumbersome engineering process that prevented fast iteration. When I've had inside information about a story that later breaks in the tech press, I'm always shocked at how differently it's perceived by readers of the article vs. how I experienced it. 
Among startups & major feature launches I've been party to, I've seen: executives that flat-out say that they're not working on a product category when there's been a whole department devoted to it for a year; startups that were founded 1.5 years before the dates listed in Crunchbase/Wikipedia; reporters that count the number of people they meet in a visit and report that as a the "team size", because the company refuses to release that info; funding rounds that never make it to the press; acquisitions that are reported as "for an undisclosed sum" but actually are less than the founders would've made if they'd taken a salaried job at the company; project start dates that are actually when the project was staffed up to its current size and ignore the year or so that a small team spent working on the problem (or the 3-4 years that other small teams spent working on the problem); and algorithms or other technologies that are widely reported as being the core of the company's success, but actually aren't even used by the company. As the main developer of VLC, we know about this story since a long time, and this is just Dell putting crap components on their machine and blaming others. Any discussion was impossible with them. So let me explain a bit... In this case, VLC just uses the Windows APIs (DirectSound), and sends signed integers of 16bits (s16) to the Windows Kernel. VLC allows amplification of the INPUT above the sound that was decoded. This is just like replay gain, broken codecs, badly recorded files or post-amplification and can lead to saturation. But this is exactly the same if you put your mp3 file through Audacity and increase it and play with WMP, or if you put a DirectShow filter that amplifies the volume after your codec output. For example, for a long time, VLC ac3 and mp3 codecs were too low (-6dB) compared to the reference output. At worse, this will reduce the dynamics and saturate a lot, but this is not going to break your hardware. VLC does not (and cannot) modify the OUTPUT volume to destroy the speakers. VLC is a Software using the OFFICIAL platforms APIs. The issue here is that Dell sound cards output power (that can be approached by a factor of the quadratic of the amplitude) that Dell speakers cannot handle. Simply said, the sound card outputs at max 10W, and the speakers only can take 6W in, and neither their BIOS or drivers block this. And as VLC is present on a lot of machines, it's simple to blame VLC. "Correlation does not mean causation" is something that seems too complex for cheap Dell support… Working for someone else's startup, I learned how to quickly cobble solutions together. I learned about uncertainty and picking a direction regardless of whether you're sure it'll work. I learned that most startups fail, and that when they fail, the people who end up doing well are the ones who were looking out for their own interests all along. I learned a lot of basic technical skills, how to write code quickly and learn new APIs quickly and deploy software to multiple machines. I learned how quickly problems of scaling a development team crop up, and how early you should start investing in automation. Working for Google, I learned how to fix problems once and for all and build that culture into the organization. I learned that even in successful companies, everything is temporary, and that great products are usually built through a lot of hard work by many people rather than great ah-ha insights. 
I learned how to architect systems for scale, and a lot of practices used for robust, high-availability, frequently-deployed systems. I learned the value of research and of spending a lot of time on a single important problem: many startups take a scattershot approach, trying one weekend hackathon after another and finding nobody wants any of them, while oftentimes there are opportunities that nobody has solved because nobody wants to put in the work. I learned how to work in teams and try to understand what other people want. I learned what problems are really painful for big organizations. I learned how to rigorously research the market and use data to make product decisions, rather than making decisions based on what seems best to one person.

Having been in on the company's leadership meetings where departures were noted with a simple 'regret yes/no' flag it was my experience that no single departure had any effect. Mass departures did, trends did, but one person never did, even when that person was a founder. The rationalizations always put the issue back on the departing employee: "They were burned out", "They had lost their ability to be effective", "They have moved on", "They just haven't grown with the company"; never was it "We failed this person, what are we going to do differently?"

Anyway, the SOA effort was in full swing when I was there. It was a pain, and it was a mess because every team did things differently and every API was different and based on different assumptions and written in a different language. But I want to correct the misperception that this led to AWS. It didn't. S3 was written by its own team, from scratch. At the time I was at Amazon, working on the retail site, none of Amazon.com was running on AWS. I know, when AWS was announced, with great fanfare, they said "the services that power Amazon.com can now power your business!" or words to that effect. This was a flat out lie. The only thing they shared was data centers and a standard hardware configuration. Even by the time I left, when AWS was running full steam ahead (and probably running Reddit already), none of Amazon.com was running on AWS, except for a few, small, experimental and relatively new projects. I'm sure more of it has been adopted now, but AWS was always a separate team (and a better managed one, from what I could see.)

I (and others) have put a lot of effort into making the Linux Chrome build fast. Some examples are multiple new implementations of the build system (http://neugierig.org/software/chromium/notes/2011/02/ninja.h... ), experimentation with the gold linker (e.g. measuring and adjusting the still off-by-default thread flags https://groups.google.com/a/chromium.org/group/chromium-dev/... ) as well as digging into bugs in it, and other underdocumented things like 'thin' ar archives. But it's also true that people who are more of Windows wizards than I am a Linux apprentice have worked on Chrome's Windows build. If you asked me the original question, I'd say the underlying problem is that on Windows all you have is what Microsoft gives you and you can't typically do better than that. For example, migrating the Chrome build off of Visual Studio would be a large undertaking, large enough that it's rarely considered. (Another way of phrasing this is it's the IDE problem: you get all of the IDE or you get nothing.)
When addressing the poor Windows performance, people first bought SSDs, something that never even occurred to me ("your system has enough RAM that the kernel cache of the file system should be in memory anyway!"). But for whatever reason on the Linux side some Googlers saw it fit to rewrite the Linux linker to make it twice as fast (this effort predated Chrome), and all Linux developers now get to benefit from that. Perhaps the difference is that when people write awesome tools for Windows or Mac they try to sell them rather than give them away.

I'm a developer in Windows and contribute to the NT kernel. (Proof: the SHA1 hash of revision #102 of [Edit: filename redacted] is [Edit: hash redacted].) I'm posting through Tor for obvious reasons. Windows is indeed slower than other operating systems in many scenarios, and the gap is worsening. The cause of the problem is social. There's almost none of the improvement for its own sake, for the sake of glory, that you see in the Linux world. Granted, occasionally one sees naive people try to make things better. These people almost always fail. We can and do improve performance for specific scenarios that people with the ability to allocate resources believe impact business goals, but this work is Sisyphean. There's no formal or informal program of systemic performance improvement. We started caring about security because pre-SP3 Windows XP was an existential threat to the business. Our low performance is not an existential threat to the business.

See, component owners are generally openly hostile to outside patches: if you're a dev, accepting an outside patch makes your lead angry (due to the need to maintain this patch and to justify in shiproom the unplanned design change), makes test angry (because test is on the hook for making sure the change doesn't break anything, and you just made work for them), and makes PM angry (due to the schedule implications of code churn). There's just no incentive to accept changes from outside your own team. You can always find a reason to say "no", and you have very little incentive to say "yes".

See link for giant table :-).

Broken record: startups are also probably rejecting a lot of engineering candidates that would perform as well or better than anyone on their existing team, because tech industry hiring processes are folkloric and irrational.

Too long to excerpt. See the link!

I am a 42-year-old very successful programmer who has been through a lot of situations in my career so far, many of them highly demotivating. And the best advice I have for you is to get out of what you are doing. Really. Even though you state that you are not in a position to do that, you really are. It is okay. You are free. Okay, you are helping your boyfriend's startup but what is the appropriate cost for this? Would he have you do it if he knew it was crushing your soul? I don't use the phrase "crushing your soul" lightly. When it happens slowly, as it does in these cases, it is hard to see the scale of what is happening. But this is a very serious situation and if left unchecked it may damage the potential for you to do good work for the rest of your life.

The commenters who are warning about burnout are right. Burnout is a very serious situation. If you burn yourself out hard, it will be difficult to be effective at any future job you go to, even if it is ostensibly a wonderful job. Treat burnout like a physical injury. I burned myself out once and it took at least 12 years to regain full productivity. Don't do it.
More broadly, the best and most creative work comes from a root of joy and excitement. If you lose your ability to feel joy and excitement about programming-related things, you'll be unable to do the best work. Note that this issue is separate from and parallel to burnout! If you are burned out, you might still be able to feel the joy and excitement briefly at the start of a project/idea, but they will fade quickly as the reality of day-to-day work sets in. Alternatively, if you are not burned out but also do not have a sense of wonder, it is likely you will never get yourself started on the good work.

The earlier in your career it is now, the more important this time is for your development. Programmers learn by doing. If you put yourself into an environment where you are constantly challenged and are working at the top threshold of your ability, then after a few years have gone by, your skills will have increased tremendously. It is like going to intensively learn kung fu for a few years, or going into Navy SEAL training or something. But this isn't just a one-time constant increase. The faster you get things done, and the more thorough and error-free they are, the more ideas you can execute on, which means you will learn faster in the future too. Over the long term, programming skill is like compound interest. More now means a LOT more later. Less now means a LOT less later. So if you are putting yourself into a position that is not really challenging, that is a bummer day in and day out, and you get things done slowly, you aren't just having a slow time now. You are bringing down that compound interest curve for the rest of your career. It is a serious problem. If I could go back to my early career I would mercilessly cut out all the shitty jobs I did (and there were many of them).

A small anecdote. An acquaintance related a story of fixing the 'drainage' in their back yard. They were trying to grow some plants that were sensitive to excessive moisture, and the plants were dying. Not watering them, watering them a little, didn't seem to change anything. They died. A professional gardener suggested that their problem was drainage. So they dug down about 3' (where the soil was very very wet) and tried to build in better drainage. As they were on the side of a hill, water table issues were not considered. It turned out their "problem" was that the water main that fed their house and the houses up the hill, was so pressurized at their property (because it had to maintain pressure at the top of the hill too) that the pipe seams were leaking and it was pumping gallons of water into the ground underneath their property. The problem wasn't their garden, the problem was that the city water supply was poorly designed.

While I have never been asked if I was an engineer on the phone, I have experienced similar things to Rachel in meetings and with regard to suggestions. Co-workers will create an internal assessment of your value and then respond based on that assessment. If they have written you off they will ignore you, if you prove their assessment wrong in a public forum they will attack you. These are management issues, and something which was sorely lacking in the stories. If you are the "owner" of a meeting and someone is trying to be heard and isn't, it is incumbent on you to let them be heard. By your position power as "the boss" you can naturally interrupt a discussion to collect more data from other members. It's also important to ask questions like "does anyone have any concerns?"
to draw out people who have valid input but are too timid to share it.

In a highly political environment there are two ways to create change, one is through overt manipulation, which is to collect political power to yourself and then exert it to enact change, and the other is covert manipulation, which is to enact change subtly enough that the political organism doesn't react. (sometimes called "triggering the antibodies"). The problem with the latter is that if you help make positive change while keeping everyone not pissed off, no one attributes it to you (which is good for the change agent because if they knew, the antibodies would react, but bad if your manager doesn't recognize it). I asked my manager what change he wanted to be 'true' yet he (or others) had been unsuccessful making true; he gave me one, and 18 months later that change was in place. He didn't believe that I was the one who had made the change. I suggested he pick a change he wanted to happen and not tell me, then in 18 months we could see if that one happened :-). But he also didn't understand enough about organizational dynamics to know that making change without having the source of that change point back at you was even possible.

Heavily relying on Google product? ✓
Hitting a dead-end with Google's customer service? ✓
Have an existing audience you can leverage to get some random Google employee's attention? ✓
Reach front page of Hacker News? ✓
Good news! You should have your problem fixed in 2-5 business days.

The rest of us suckers relying on google services get to stare at our inboxes helplessly, waiting for a response to our support ticket (which will never come). I feel like it's almost a [rite] of passage these days to rely heavily on a Google service, only to have something go wrong and be left out in the cold.

IIRC PayPal was very similar - it was sold for $1.5B, but Max Levchin's share was only about $30M, and Elon Musk's was only about $100M. By comparison, many early Web 2.0 darlings (Del.icio.us, Blogger, Flickr) sold for only $20-40M, but their founders had only taken small seed rounds, and so the vast majority of the purchase price went to the founders. 75% of a $40M acquisition = 3% of a $1B acquisition. Something for founders to think about when they're taking funding.

If you look at the gigantic tech fortunes - Gates, Page/Brin, Omidyar, Bezos, Zuckerberg, Hewlett/Packard - they usually came from having a company that was already profitable or was already well down the hockey-stick user growth curve and had a clear path to monetization by the time they sought investment. Companies that fight tooth & nail for customers and need lots of outside capital to do it usually have much worse financial outcomes.

A lot of the people who were involved in some way in Experts-Exchange don't understand Stack Overflow. The basic value flow of EE is that "experts" provide valuable "answers" for novices with questions. In that equation there's one person asking a question and one person writing an answer. Stack Overflow recognizes that for every person who asks a question, 100 - 10,000 people will type that same question into Google and find an answer that has already been written. In our equation, we are a community of people writing answers that will be read by hundreds or thousands of people. Ours is a project more like wikipedia -- collaboratively creating a resource for the Internet at large. Because that resource is provided by the community, it belongs to the community.
That's why our data is freely available and licensed under creative commons. We did this specifically because of the negative experience we had with EE taking a community-generated resource and deciding to slap a paywall around it. The attitude of many EE contributors, like Greg Young who calculates that he "worked" for half a year for free, is not shared by the 60,000 people who write answers on SO every month. When you talk to them you realize that on Stack Overflow, answering questions is about learning. It's about creating a permanent artifact to make the Internet better. It's about helping someone solve a problem in five minutes that would have taken them hours to solve on their own. It's not about working for free. As soon as EE introduced the concept of money they forced everybody to think of their work on EE as just that -- work.

I saw that one of my old textbooks was selling for a nice price, so I listed it along with two other used copies. I priced it $1 cheaper than the lowest price offered, but within an hour both sellers had changed their prices to $.01 and $.02 cheaper than mine. I reduced it two times more by $1, and each time they beat my price by a cent or two. So what I did was reduce my price by a few dollars every hour for one day until everybody was priced under $5. Then I bought their books and changed my price back.

While I like the sentiment here, I think the danger is that engineers might come to the mistaken conclusion that making pizzas is the primary limiting reagent to running a successful pizzeria. Running a successful pizzeria is more about schlepping to local hotels and leaving them 50 copies of your menu to put at the front desk, hiring drivers who will both deliver pizzas in a timely fashion and not embezzle your (razor-thin) profits while also costing next-to-nothing to employ, maintaining a kitchen in sufficient order to pass your local health inspector's annual visit (and dealing with 47 different pieces of paper related to that), being able to juggle priorities like "Do I take out a bank loan to build a new brick-oven, which will make the pizza taste better, in the knowledge that this will commit $3,000 of my cash flow every month for the next 3 years, or do I hire an extra cook?", sourcing ingredients such that they're available in quantity and quality every day for a fairly consistent price, setting prices such that they're locally competitive for your chosen clientele but generate a healthy gross margin for the business, understanding why a healthy gross margin really doesn't imply a healthy net margin and that the rent still needs to get paid, keeping good-enough records such that you know whether your business is dying before you can't make payroll and such that you can provide a reasonably accurate picture of accounts for the taxation authorities every year, balancing 50% off medium pizza promotions with the desire to not cannibalize the business of your regulars, etc etc, and by the way tomato sauce should be tangy but not sour and cheese should melt with just the faintest wisp of a crust on it.

Do you want to write software for a living? Google is hiring. Do you want to run a software business? Godspeed. Software is now 10% of your working life.

The way I prefer to think of it is: it is not your job to protect people (particularly senior management) from the consequences of their decisions. Make your decisions in your own best interest; it is up to the organization to make sure that your interest aligns with theirs.
Google used to have a severe problem where code refactoring & maintenance was not rewarded in performance reviews while launches were highly regarded, which led to the effect of everybody trying to launch things as fast as possible and nobody cleaning up the messes left behind. Eventually launches started getting slowed down, Larry started asking "Why can't we have nice things?", and everybody responded "Because you've been paying us to rack up technical debt." As a result, teams were formed with the express purpose of code health & maintenance, those teams that were already working on those goals got more visibility, and refactoring contributions started counting for something in perf. Moreover, many ex-Googlers who were fed up with the situation went to Facebook and, I've heard, instituted a culture there where grungy engineering maintenance is valued by your peers.

None of this would've happened if people had just heroically fallen on their own sword and burnt out doing work nobody cared about. Sometimes it takes highly visible consequences before people with decision-making power realize there's a problem and start correcting it. If those consequences never happen, they'll keep believing it's not a problem and won't pay much attention to it.

The thing my grandfather taught me was that you live with all of your decisions for the rest of your life. When you make decisions which put other people at risk, you take on the risk that you are going to make someones life harder, possibly much harder. What is perhaps even more important is that no amount of "I'm so sorry I did that ..." will ever undo it. Sometimes it's little things, like taking the last serving because you thought everyone had eaten, sometimes it's big things like deciding that home is close enough and you're sober enough to get there safely. They are all decisions we make every day. And as I've gotten older the weight of ones I wish I had made differently doesn't get any lighter. You can lie to yourself about your choices, rationalize them, but that doesn't change them either. I didn't understand any of that when I was younger.

It took me too long to figure this out. There are some people who truly, and passionately, believe something they say to you, and realistically they personally can't make it happen so you can't really bank on that 'promise.' I used to think those people were lying to take advantage, but as I've gotten older I have come to recognize that these 'yes' people get promoted a lot. And for some of them, they really do believe what they are saying. As an engineer I've found that once I can 'calibrate' someone's 'yes-ness' I can then work with them, understanding that they only make 'wishful' commitments rather than 'reasoned' commitments. So when someone, like Steve Jobs, says "we're going to make it an open standard!", my first question then is "Great, I've got your support in making this an open standard so I can count on you to wield your position influence to aid me when folks line up against that effort, right?" If the answer to that question is no, then they were lying. The difference is subtle of course but important.
Steve clearly doesn't go to standards meetings and vote etc, but if Manager Bob gets push back from accounting that he's going to exceed his travel budget by sending 5 guys to the Open Video Chat Working Group which is championing the Facetime protocol as an open standard, then Manager Bob goes to Steve and says "I need your help here, these 5 guys are needed to argue this standard and keep it from being turned into a turd by the 5 guys from Google who are going to attend." and then Steve whips off a one liner to accounting that says "Get off this guy's back we need this." Then it's all good. If on the other hand he says "We gotta save money, send one guy." well in that case I'm more sympathetic to the accusation of prevarication.

For those who work inside Google, it's well worth it to look at Jeff & Sanjay's commit history and code review dashboard. They aren't actually all that much more productive in terms of code written than a decent SWE3 who knows his codebase. The reason they have a reputation as rockstars is that they can apply this productivity to things that really matter; they're able to pick out the really important parts of the problem and then focus their efforts there, so that the end result ends up being much more impactful than what the SWE3 wrote. The SWE3 may spend his time writing a bunch of unit tests that catch bugs that wouldn't really have happened anyway, or migrating from one system to another that isn't really a large improvement, or going down an architectural dead end that'll just have to be rewritten later. Jeff or Sanjay (or any of the other folks operating at that level) will spend their time running a proposed API by clients to ensure it meets their needs, or measuring the performance of subsystems so they fully understand their building blocks, or mentally simulating the operation of the system before building it so they rapidly test out alternatives. They don't actually write more code than a junior developer (oftentimes, they write less), but the code they do write gives them more information, which makes them ensure that they write the right code.

I feel like this point needs to be stressed a whole lot more than it is, as there's a whole mythology that's grown up around 10x developers that's not all that helpful. In particular, people need to realize that these developers rapidly become 1x developers (or worse) if you don't let them make their own architectural choices - the reason they're excellent in the first place is because they know how to determine if certain work is going to be useless and avoid doing it in the first place. If you dictate that they do it anyway, they're going to be just as slow as any other developer.

I got the hero speech too, once. If anyone ever mentions the word "heroic" again and there isn't a burning building involved, I will start looking for new employment immediately. It seems that in our industry it is universally a code word for "We're about to exploit you because the project is understaffed and under budgeted for time and that is exactly as we planned it so you'd better cowboy up." Maybe it is different if you're writing Quake, but I guarantee you the 43rd best selling game that year also had programmers "encouraged onwards" by tales of the glory that awaited after the death march.

I was once speaking to a good friend of mine here, in English.
"Do you want to go out for yakitori?"
"Go fuck yourself!"
"... [switches to Japanese] Have I recently done anything very major to offend you?"
"No, of course not."
"Oh, OK, I was worried. So that phrase, that's something you would only say under extreme distress when you had maximal desire to offend me, or I suppose you could use it jokingly between friends, but neither you nor I generally talk that way." "I learned it from a movie. I thought it meant ‘No.’" True story: I went to a talk given by one of the 'engineering elders' (these were low Emp# engineers who were considered quite successful and were to be emulated by the workers :-) This person stated when they came to work at Google they were given the XYZ system to work on (sadly I'm prevented from disclosing the actual system). They remarked how they spent a couple of days looking over the system which was complicated and creaky, they couldn't figure it out so they wrote a new system. Yup, and they committed that. This person is a coding God are they not? (sarcasm) I asked what happened to the old system (I knew but was interested on their perspective) and they said it was still around because a few things still used it, but (quite proudly) nearly everything else had moved to their new system. So if you were reading carefully, this person created a new system to 'replace' an existing system which they didn't understand and got nearly everyone to move to the new system. That made them uber because they got something big to put on their internal resume, and a whole crapload of folks had to write new code to adapt from the old system to this new system, which imperfectly recreated the old system (remember they didn't understand the original), such that those parts of the system that relied on the more obscure bits had yet to be converted (because nobody undersood either the dependent code or the old system apparently). Was this person smart? Blindingly brilliant according to some of their peers. Did they get things done? Hell yes, they wrote the replacement for the XYZ system from scratch! One person? Can you imagine? Would I hire them? Not unless they were the last qualified person in my pool and I was out of time. That anecdote encapsulates the dangerous side of smart people who get things done. Some kids grow up on football. I grew up on public speaking (as behavioral therapy for a speech impediment, actually). If you want to get radically better in a hurry: Too long to excerpt. See the link. I can relate to this, but I can also relate to the other side of the question. Sometimes it isn't me, its you. Take someone who gets things done and suddenly in your organization they aren't delivering. Could be them, but it could also be you. I had this experience working at Google. I had a horrible time getting anything done there. Now I spent a bit of time evaluating that since it had never been the case in my career, up to that point, where I was unable to move the ball forward and I really wanted to understand that. The short answer was that Google had developed a number of people who spent much, if not all, of their time preventing change. It took me a while to figure out what motivated someone to be anti-change. The fear was risk and safety. Folks moved around a lot and so you had people in charge of systems they didn't build, didn't understand all the moving parts of, and were apt to get a poor rating if they broke. When dealing with people in that situation one could either educate them and bring them along, or steam roll over them. Education takes time, and during that time the 'teacher' doesn't get anything done. 
This favors steamrolling evolutionarily :-) So you can hire someone who gets stuff done, but if getting stuff done in your organization requires them to be an asshole, and they aren't up for that, well they aren't going to be nearly as successful as you would like them to be.

I can tell that this was written by an outsider, because it focuses on the perks and rehashes several cliches that have made their way into the popular media but aren't all that accurate. Most Googlers will tell you that the best thing about working there is having the ability to work on really hard problems, with really smart coworkers, and lots of resources at your disposal. I remember asking my interviewer whether I could use things like Google's index if I had a cool 20% idea, and he was like "Sure. That's encouraged. Oftentimes I'll just grab 4000 or so machines and run a MapReduce to test out some hypothesis." My phone screener, when I asked him what it was like to work there, said "It's a place where really smart people go to be average," which has turned out to be both true and honestly one of the best things that I've gained from working there.

This entire event was a staged press op. Keith Alexander is a ~30 year veteran of SIGINT, electronic warfare, and intelligence, and a Four-Star US Army General --- which is a bigger deal than you probably think it is. He's a spy chief in the truest sense and a master politician. Anyone who thinks he walked into that conference hall in Caesars without a near perfect forecast of the outcome of the speech is kidding themselves. Heckling Alexander played right into the strategy. It gave him an opportunity to look reasonable compared to his detractors, and, more generally (and alarmingly), to have the NSA look more reasonable compared to opponents of NSA surveillance. It allowed him to "split the vote" with audience reactions, getting people who probably have serious misgivings about NSA programs to applaud his calm and graceful handling of shouted insults; many of those people probably applauded simply to protest the hecklers, who after all were making it harder for them to follow what Alexander was trying to say. There was no serious Q&A on offer at the keynote. The questions were pre-screened; all attendees could do was vote on them. There was no possibility that anything would come of this speech other than an effectively unchallenged full-throated defense of the NSA's programs.

Interestingly one of the things that I found most amazing when I was working for Google was a nearly total inability to grasp the concept of 'deadline.' For so many years the company just shipped it by committing it to the release branch and having the code deploy over the course of a small number of weeks to the 'fleet'. Sure there were 'processes', like "Canary it in some cluster and watch the results for a few weeks before turning it loose on the world." but being completely vertically integrated is a unique sort of situation.

Being a very experienced game developer who tried to switch to Linux, I have posted about this before (and gotten flamed heavily by reactionary Linux people). The main reason is that debugging is terrible on Linux. gdb is just bad to use, and all these IDEs that try to interface with gdb to "improve" it do it badly (mainly because gdb itself is not good at being interfaced with). Someone needs to nuke this site from orbit and build a new debugger from scratch, and provide a library-style API that IDEs can use to inspect executables in rich and subtle ways.
Productivity is crucial. If the lack of a reasonable debugging environment costs me even 5% of my productivity, that is too much, because games take so much work to make. At the end of a project, I just don't have 5% effort left any more. It requires everything. (But the current Linux situation is way more than a 5% productivity drain. I don't know exactly what it is, but if I were to guess, I would say it is something like 20%.)

What is interesting is that people don't even know they have a complex about money until they get "rich." I've watched many people, perhaps a hundred, go from "working to pay the bills" to "holy crap I can pay all my current and possibly my future bills with the money I now have." That doesn't include the guy who lived in our neighborhood and won the CA lottery one year. It affects people in ways they don't expect. If it's sudden (like lottery winning or sudden IPO surge) it can be difficult to process. But it is an important thing to realize that one is processing an exceptional event. Like having a loved one die or a spouse suddenly divorcing you. Not everyone feels "guilty", not everyone feels "smug." A lot of millionaires and billionaires in the Bay Area are outwardly unchanged. But the bottom line is that the emotion comes from the cognitive dissonance between values and reality. What do you value? What is reality? One woman I knew at Google was massively conflicted when she started work at Google. She always felt that she would help the homeless folks she saw, if she had more money than she needed. Upon becoming rich (on Google stock value), now she found that she wanted to save the money she had for her future kids' education and needs. Was she a bad person? Before? After? Do your kids hate you if you give away their college education to the local foodbank? Do your peers hate you because you could close the current food gap at the foodbank and you don't?

This is Microsoft's ICQ moment. Overpaying for a company at the moment when its core competency is becoming a commodity. Does anyone have the slightest bit of loyalty to Skype? Of course not. They're going to use whichever video chat comes built into their SmartPhone, tablet, computer, etc. They're going to use FaceBook's eventual video chat service or something Google offers. No one is going to actively seek out Skype when so many alternatives exist and are deeply integrated into the products/services they already use. Certainly no one is going to buy a Microsoft product simply because it has Skype integration. Who cares if it's FaceTime, FaceBook Video Chat, Google Video Chat? It's all the same to the user. With $7B they should have just given away about 15 million Windows Mobile phones in the form of an epic PR stunt. It's not a bad product -- they just need to make people realize it exists. If they want to flush money down the toilet they might as well engage users in the process right?

I worked briefly on the Fiber team when it was very young (basically from 2 weeks before to 2 weeks after launch - I was on loan from Search specifically so that they could hit their launch goals). The bottleneck when I was there was local government regulations, and in fact Kansas City was chosen because it had a unified city/county/utility regulatory authority that was very favorable to Google. To lay fiber to the home, you either need right-of-ways on the utility poles (which are owned by Google's competitors) or you need permission to dig up streets (which requires a mess of permitting from the city government).
In either case, the cable & phone companies were in very tight with local regulators, and so you had hostile gatekeepers whose approval you absolutely needed. The technology was awesome (1G Internet and HDTV!), the software all worked great, and the economics of hiring contractors to lay the fiber itself actually worked out. The big problem was regulatory capture. With Uber & AirBnB's success in hindsight, I'd say that the way to crack the ISP business is to provide your customers with the tools to break the law en masse. For example, you could imagine an ISP startup that basically says "Here's a box, a wire, and a map of other customers' locations. Plug into their jack, and if you can convince others to plug into yours, we'll give you a discount on your monthly bill based on how many you sign up." But Google in general is not willing to break laws - they'll go right up to the boundary of what the law allows, but if a regulatory agency says "No, you can't do that", they won't do it rather than fight the agency. Indeed, Fiber is being phased out in favor of Google's acquisition of WebPass, which does basically exactly that but with wireless instead of fiber. WebPass only requires the building owner's consent, and leaves the city out of it.

I've spoken at TechEds in the US and Europe, and been in the top 10 for attendee feedback twice. I'd never speak at TechEd again, and I told Microsoft the same thing, same reasons. The event staff is overly demanding and inconsiderate of speaker time. They repeatedly dragged me into mandatory virtual and in-person meetings to cover inane details that should have been covered via email. They mandated the color of pants speakers wore. Just ridiculously micromanaged.

Hertz laid off nearly the entirety of their rank and file IT staff earlier this year. In order to receive our severance, we were forced to train our IBM replacements, who were in India. Hertz's strategy of IBM and Austerity is the new SMT's solution for a balance sheet that's in shambles, yet they have rewarded themselves by increasing executive compensation 35% over the prior year, including a $6 million bonus to the CIO. I personally landed in an Alphabet company, received a giant raise, and now I get to work on really amazing stuff, so I'm doing fine. But to this day I'm sad to think how our once-amazing Hertz team, staffed with really smart people, led by the best boss I ever had, and really driving the innovation at Hertz, was just thrown away like yesterday's garbage.

Don't count on definitely being able to sell the stock to finance the taxes. I left after seven years in very good standing (I believed) but when I went to sell the deal was shut down [1]. Luckily I had a backup plan and I was ok [2].
[1] Had a handshake deal with an investor in the company, then the investor went silent on me. When I followed up he said the deal was "just much too small." I reached out to the company for help, and they said they'd actually told him not to buy from me. I never would have known if they hadn't decided to tell me for some reason. The takeaway is that the markets for private company stock tend to be small, and the buyers care more about their relationships with the company than they do about having your shares. Even if the stock terms allow them to buy (and they might not).

I took the first test just like the OP, the logical reasoning part seemed kind of irrelevant and a waste of time for me. That was nothing compared to the second online test.
The environment of the second test was like a scenario out of Black Mirror. Not only did they want to have the webcam and microphone on the entire time, I also had to install their custom software so the proctors could monitor my screen and control my computer. They opened up the macOS system preferences so they could disable all shortcuts to take screenshots, and they also manually closed all the background services I had running (even f.lux!). Then they asked me to pick up my laptop and show them around my room with the webcam. They specifically asked to see the contents of my desk and the walls and ceiling of my room. I had some pencil and paper on my desk to use as scratch paper for the obvious reasons and they told me that wasn't allowed. Obviously that made me a little upset because I use it to sketch out examples and concepts. They also saw my phone on the desk and asked me to put it out of arm's reach. After that they told me I couldn't leave the room until the 5 minute bathroom break allowed half-way through the test. I had forgotten to tell my roommate I was taking this test and he was making a bit of a ruckus playing L4D2 online (obviously a bit distracting). I asked the proctor if I could briefly leave the room to ask him to quiet down. They said I couldn't leave until the bathroom break so there was nothing I could do. Later on, I was busy thinking about a problem and had adjusted how I was sitting in my chair and moved my face slightly out of the camera's view. The proctor messaged me again telling me to move so they could see my entire face.

The first part of the interview was exactly like the linked experience. No coding questions, just reasoning. The second part I had to use ProctorU instead of Proctorio. Personally I thought the experience was super weird but understandable, I'll get to that later, somebody watched me through my webcam the entire time with my microphone on. They needed to check my ID before the test. They needed me to show them the entire room I was in (which was my bedroom). My desktop computer was on behind my laptop so I turned off my computer (I don't remember if I offered to or if they asked me to) but they also asked me to cover my monitors up with something which I thought was silly after I turned them off so I covered them with a towel. They then used LogMeIn to remote into my machine so they could check running programs. I quit all my personal chat programs and pretty much only had the Chrome window running. ... I didn't talk to a real person who actually worked at Amazon (by email or through webcam) until I received an offer.

[M]y company got acquired by Oracle. We thought things would be OK. Nothing changed immediately. Slowly but surely they turned the screws. 5 year laptop replacement policy. You get the corporate standard laptop and you'll like it. Sales? Oh those guys can buy new Macs every two years, they get whatever they want. Then you understand where Software Engineers rank in the company hierarchy. Oracle took the average price of our product from $100k to $5 million for the same size deals. Our sales went from $5-7m to more than $40m with no increase in engineering headcount (team of 15). Didn't matter when bonus time came, we all got stack-ranked and some people got nothing. As a top performer I got a few options, worth maybe $5k. Oracle exists to extract the maximum amount of money possible from the Fortune 1000. Everyone else can fuck off. Your impotent internet rage is meaningless.
If it doesn't piss off the CTO of $X then it doesn't matter. If it gets that CTO to cut a bigger check then it will be embraced with extreme enthusiasm. The culture wears down a lot (but not all) of the good people, who then leave. What's left is a lot of mediocrity and architecture astronauts. The more complex the product the better - it means extra consulting dollars!

My relative works at a business dependent on Micros. When Oracle announced the acquisition I told them to start on the backup plan immediately because Oracle was going to screw them sooner or later. A few years on and that is proving true: Oracle is slowly excising the Micros dealers and ISVs out of the picture, gobbling up all the revenue while hiking prices.

In practice, we have to face that our quest for more stringent hiring standards is not really selecting the best, but just selecting fewer people, in ways that might, or might not, have anything to do with being good at a job. Let's go through a few examples in my career:

A guy that was the most prolific developer I have ever seen: He'd rewrite entire subsystems over a weekend. The problem is that said subsystems were not necessarily better than they started, trading bugs for bugs, and anyone that wanted to work on them would have to relearn that programmer's idiosyncrasies of the week. He easily cost his project 12 man/months of work in 4 months, the length of time it took for management to realize that he had to be let go.

A company's big UI framework was quite broken, and a new developer came in and fixed it. Great, right? Well, he was handed code review veto over changes to the framework, and his standards and his demeanor made people stop contributing after two or three attempts. In practice, the framework died as people found it antiquated, and they decided to build a new one: Well, the same developer was tasked with building the new framework, which was made mandatory for 200+ developers to use. Total contribution was clearly negative.

A developer that was very fast, and wrote working code, had been managing a rather large 500K line codebase, and received some developers as help. He didn't believe in internal documentation or in keeping interfaces stable. He also didn't believe in writing code that wasn't brittle, or in unit tests: Code changes from the new developers often broke things, the veteran would come in, fix everything in the middle of the emergency, and look absolutely great, while all the other developers looked to management as if they were incompetent. They were not, however: they were quite successful when moved to other teams. It just happens that the original developer made sure nobody else could touch anything. Eventually, the experiment was retried after the original developer was sent to do other things. It took a few months, but the new replacement team managed to modularize the code, and new people could actually modify the codebase productively.

All of those negative value developers could probably be very valuable in very specific conditions, and they'd look just fine in a tough job interview. They were still terrible hires. In my experience, if anything, a harder process that demands people to appear smarter or work faster in an interview has the opposite effect of what I'd want: They end up selecting for people that think less and do more quickly, building debt faster. My favorite developers ever all do badly in your typical stringent Silicon Valley interview.
They work slower, do more thinking, and consider every line of code they write technical debt. They won't have a million algorithms memorized: They'll go look at sources more often than not, and will spend a lot of time on tests that might as well be documentation. Very few of those traits are positive in an interview, but I think they are vital in creating good teams, yet few select for them at all.

I worked on Solaris for over a decade, and for a while it was usually a better choice than Linux, especially due to price/performance (which includes how many instances it takes to run a given workload). It was worth fighting for, and I fought hard. But Linux has now become technically better in just about every way. Out-of-box performance, tuned performance, observability tools, reliability (on patched LTS), scheduling, networking (including TCP feature support), driver support, application support, processor support, debuggers, syscall features, etc. Last I checked, ZFS worked better on Solaris than Linux, but it's an area where Linux has been catching up. I have little hope that Solaris will ever catch up to Linux, and I have even less hope for illumos: Linux now has around 1,000 monthly contributors, whereas illumos has about 15.

In addition to technology advantages, Linux has a community and workforce that's orders of magnitude larger, staff with invested skills (re-education is part of a TCO calculation), companies with invested infrastructure (rewriting automation scripts is also part of TCO), and also much better future employment prospects (a factor that can influence people wanting to work at your company on that OS). Even with my considerable and well-known Solaris expertise, the employment prospects with Solaris are bleak and getting worse every year. With my Linux skills, I can work at awesome companies like Netflix (which I highly recommend), Facebook, Google, SpaceX, etc.

Large technology-focused companies, like Netflix, Facebook, and Google, have the expertise and appetite to make a technology-based OS decision. We have dedicated teams for the OS and kernel with deep expertise. On Netflix's OS team, there are three staff who previously worked at Sun Microsystems and have more Solaris expertise than they do Linux expertise, and I believe you'll find similar people at Facebook and Google as well. And we are choosing Linux.

The choice of an OS includes many factors. If an OS came along that was better, we'd start with a thorough internal investigation, involving microbenchmarks (including an automated suite I wrote), macrobenchmarks (depending on the expected gains), and production testing using canaries. We'd be able to come up with a rough estimate of the cost savings based on price/performance. Most microservices we have run hot in user-level applications (think 99% user time), not the kernel, so it's difficult to find large gains from the OS or kernel. Gains are more likely to come from off-CPU activities, like task scheduling and TCP congestion, and indirect, like NUMA memory placement: all areas where Linux is leading. It would be very difficult to find a large gain by changing the kernel from Linux to something else. Just based on CPU cycles, the target that should have the most attention is Java, not the OS. But let's say that somehow we did find an OS with a significant enough gain: we'd then look at the cost to switch, including retraining staff, rewriting automation software, and how quickly we could find help to resolve issues as they came up.
Linux is so widely used that there's a good chance someone else has found an issue, had it fixed in a certain version or documented a workaround.

What's left where Solaris/SmartOS/illumos is better?
1. There's more marketing of the features and people. Linux develops great technologies and has some highly skilled kernel engineers, but I haven't seen any serious effort to market these. Why does Linux need to?
2. Enterprise support. Large enterprise companies where technology is not their focus (eg, a breakfast cereal company) and who want to outsource these decisions to companies like Oracle and IBM. Oracle still has Solaris enterprise support that I believe is very competitive compared to Linux offerings.

I'd argue that where RethinkDB fell down is on a step you don't list, "Understand the context of the problem", which you'd ideally do before figuring out how many people it's a problem for. Their initial idea was a MySQL storage engine for SSDs - the environmental change was that SSD prices were falling rapidly, SSDs have wildly different performance characteristics from disk, and so they figured there was an opportunity to catch the next wave. Only problem is that the biggest corporate buyers of SSDs are gigantic tech companies (eg. Google, Amazon) with large amounts of proprietary software, and so a generic MySQL storage engine isn't going to be useful to them anyway. Unfortunately they'd already taken funding, built a team, and written a lot of code by the time they found that out, and there's only so far you can pivot when you have an ecosystem like that.

This unfortunately follows the conventions of the genre called "Falsehood programmers believe about X": ... I honestly think this genre is horrible and counterproductive, even though the writer's intentions are good. It gives no examples, no explanations, no guidelines for proper implementations - just a list of condescending gotchas, showing off the superior intellect and perception of the author.

It happens sometimes. Usually it's because of one of two situations:
1) The company was on the fence about wanting you anyway, and negotiating takes you from the "maybe kinda sorta want to work with" to the "don't want to work with" pile.
2) The company is looking for people who don't question authority and don't stick up for their own interests.
Both of these are red flags. It's not really a matter of ethics - they're completely within their rights to withdraw an offer for any reason - but it's a matter of "Would you really want to work there anyway?" For both corporations and individuals, it usually leads to a smoother life if you only surround yourself with people who really value you.
When we started Starfighter, everyone said "you're going to have such an amazing time because of all the HN credibility you have". But pretty much every time Starfighter actually came up on HN, I just wanted to hide under a rock. Even when the site is civil, it's still committed to grind away any joy you take either in accomplishing something near or even in just sharing something interesting you learned . You could sort of understand an atavistic urge to shit all over someone sharing an interesting experience that was pleasant or impressive. There's a bad Morrissey song about that. But look what happens when you share an interesting story that obviously involved significant unpleasantness and an honest accounting of one's limitations: a giant thread full of people piling on to question your motives and life choices. You can't win. I was the first person to be interviewed by this journalist (Michael Thomas @curious_founder). He approached me on Twitter to ask questions about digital nomad and remote work life (as I founded Nomad List and have been doing it for years). I told him it'd be great to see more honest depictions as most articles are heavily idealized making it sound all great, when it's not necessarily. It's ups and downs (just like regular life really). What happened next may surprise you. He wrote a hit piece on me changing my entire story that I told him over Skype into a clickbait article of how digital nomadism doesn't work and one of the main people doing it for awhile (en public) even settled down and gave up altogether. I didn't settle down. I spent the summer in Amsterdam. Cause you know, it's a nice place! But he needed to say this to make a polarized hit piece with an angle. And that piece became viral. Resulting in me having to tell people daily that I didn't and getting lots of flack. You may understand it doesn't help if your entire startup is about something and a journalist writes a viral piece how you yourself don't even believe in that anymore. I contacted the journalist and Quartz but they didn't change a thing. It's great this meant his journalistic breakthrough but it hurt me in the process. I'd argue journalists like this are the whole problem we have these days. The articles they write can't be balanced because they need to get pageviews. Every potential to write something interesting quickly turns into clickbait. It turned me off from being interviewed ever again. Doing my own PR by posting comment sections of Hacker News or Reddit seems like a better idea (also see how Elon Musk does exactly this, seems smarter). Hope this doesn't ruin it for you, but I knew someone who had a problem presented on the show. She called in and reached an answering machine. Someone called her and qualified the problem. Then one of the brothers called and talked to her for a while. Then a few weeks later (there might have been some more calls, I don't know) both brothers called her and talked to her for a while. Her parts of that last call was edited into the radio show so it sounded like she had called and they just figured out the answer on the spot. Blockchain is the world's worst database, created entirely to maintain the reputations of venture capital firms who injected hundreds of millions of dollars into a technology whose core defining insight was "You can improve on a Ponzi scam by making it self-organizing and distributed; that gets vastly more distribution, reduces the single point of failure, and makes it censorship-resistant." 
When we started Starfighter, everyone said "you're going to have such an amazing time because of all the HN credibility you have". But pretty much every time Starfighter actually came up on HN, I just wanted to hide under a rock. Even when the site is civil, it's still committed to grind away any joy you take either in accomplishing something neat or even in just sharing something interesting you learned. You could sort of understand an atavistic urge to shit all over someone sharing an interesting experience that was pleasant or impressive. There's a bad Morrissey song about that. But look what happens when you share an interesting story that obviously involved significant unpleasantness and an honest accounting of one's limitations: a giant thread full of people piling on to question your motives and life choices. You can't win.

I was the first person to be interviewed by this journalist (Michael Thomas @curious_founder). He approached me on Twitter to ask questions about digital nomad and remote work life (as I founded Nomad List and have been doing it for years). I told him it'd be great to see more honest depictions as most articles are heavily idealized making it sound all great, when it's not necessarily. It's ups and downs (just like regular life really). What happened next may surprise you. He wrote a hit piece on me changing my entire story that I told him over Skype into a clickbait article of how digital nomadism doesn't work and one of the main people doing it for awhile (en public) even settled down and gave up altogether. I didn't settle down. I spent the summer in Amsterdam. Cause you know, it's a nice place! But he needed to say this to make a polarized hit piece with an angle. And that piece became viral. Resulting in me having to tell people daily that I didn't and getting lots of flack. You may understand it doesn't help if your entire startup is about something and a journalist writes a viral piece about how you yourself don't even believe in that anymore. I contacted the journalist and Quartz but they didn't change a thing. It's great this meant his journalistic breakthrough but it hurt me in the process. I'd argue journalists like this are the whole problem we have these days. The articles they write can't be balanced because they need to get pageviews. Every potential to write something interesting quickly turns into clickbait. It turned me off from being interviewed ever again. Doing my own PR by posting in comment sections of Hacker News or Reddit seems like a better idea (also see how Elon Musk does exactly this, seems smarter).

Hope this doesn't ruin it for you, but I knew someone who had a problem presented on the show. She called in and reached an answering machine. Someone called her and qualified the problem. Then one of the brothers called and talked to her for a while. Then a few weeks later (there might have been some more calls, I don't know) both brothers called her and talked to her for a while. Her parts of that last call were edited into the radio show so it sounded like she had called and they just figured out the answer on the spot.

Blockchain is the world's worst database, created entirely to maintain the reputations of venture capital firms who injected hundreds of millions of dollars into a technology whose core defining insight was "You can improve on a Ponzi scam by making it self-organizing and distributed; that gets vastly more distribution, reduces the single point of failure, and makes it censorship-resistant."
Since 2009, over $600 MILLION (publicly announced) has been invested in these database companies (RethinkDB represents $12.2M, or about 2%). That's aside from money invested in the bigger established databases. Almost all of the companies that have raised funding in this period generate revenue from one or more of the following areas: a) exclusive hosting (meaning AWS et al. do not offer this product) b) multi-node/cluster support c) product enhancements d) enterprise support

Looking at each of the above revenue paths as executed by RethinkDB: a) RethinkDB never offered a hosted solution. Compose offered a hosted solution in October of 2014. b) RethinkDB didn't support true high availability until the 2.1 release in August 2015. It was released as open source and to my knowledge was not monetized. c/d) I've heard that an enterprise version of RethinkDB was offered near the end. Enterprise support is, empirically, a bad approach for a venture-backed company. I don't know that RethinkDB ever took this avenue seriously. Correct me if I am wrong.

A model that is not popular among RECENT databases but is popular among traditional databases is a standard licensing model (e.g. Oracle, Microsoft SQL Server). Even these are becoming more rare with the advent of A, but never underestimate the licensing market.

Again, this is complete conjecture, but I believe RethinkDB failed for a few reasons: 1) not pursuing one of the above revenue models early enough. This has serious effects on the order of the feature enhancements (for instance, the HA released in 2015 could have been released earlier at a premium or to help facilitate a hosted solution). 2) incorrect priority of enhancements: 2a) general database performance never reached the point it needed to. RethinkDB struggled with both write and read performance well into 2015. There was no clear value add in this area compared to many write- or read-focused databases released around this time. 2b) lack of (proper) High Availability for too long. 2c) ReQL was not necessary - most developers use ORMs when interacting with SQL. When you venture into analytical queries, we actually seem to make great effort to provide SQL: look at the number of projects or companies that exist to bring SQL to databases and filesystems that don't support it (Hive, Pig, Slam Data, etc). 2d) push notifications. This has not been demonstrated to be a clear market need yet. There are a small handful of companies promoting development stacks around this, but no database company is doing the same. 2e) lack of focus. What was RethinkDB REALLY good at? It pushed ReQL and joins at first, but it lacked HA until 2015 and struggled with high write or read loads into 2015. It then started to focus on real-time notifications. Again, there just aren't many databases focusing on these areas.

My final thought is that RethinkDB didn't raise enough capital. Perhaps this is because of previous points, but without capital, the above can't be corrected. RethinkDB actually raised far less money than basically any other venture-backed company in this space during this time.

Again, I've never run a database company so my thoughts are just from an outsider. However, I am the founder of a company that provides database integration products, so I monitor this industry like a hawk. I simply don't agree that the database market has been "captured." I expect to see even bigger growth in databases in the future.
I'm happy to share my thoughts about what types of databases are working and where the market needs solutions. Additionally, companies are increasingly relying on third-party cloud services for data they previously captured themselves. Anything from payment processing, order fulfillment, traffic analytics, etc. is now being handled by someone else.

I was a Googler working on Google Maps at the time of the API self-immolation. There were strong complaints from within about the price changes. Obviously everyone couldn't believe what was being planned, and there were countless spreadsheets and reports and SQL queries showing how this was going to shit all over a lot of customers that we'd be guaranteed to lose to a competitor. Management didn't give a shit. I don't know what the rationale was apart from some vague claim about "charging for value". A lot of users of the API apparently were basically under the free limits or only spending less than 100 USD on API usage, so I can kind of understand the line of thought, but I still think they went way too far. I don't know what happened to the architects of the plan. I presume promo. Edit: I should add that this was not a knee-jerk thing or a case of some exec just waking up one day with an idea in their dreams. It was a planned change that took many months to plan and prepare for, with endless reporting and so on.

### ???

How did HN get the commenter base that it has? If you read HN, on any given week, there are at least as many good, substantial comments as there are posts. This is different from every other modern public news aggregator I can find out there, and I don’t really know what the ingredients are that make HN successful.

For the last couple years (ish?), the moderation regime has been really active in trying to get a good mix of stories on the front page and in tamping down on gratuitously mean comments. But there was a period of years where the moderation could be described as sparse, arbitrary, and capricious, and while there are fewer “bad” comments now, it doesn’t seem like good moderation actually generates more “good” comments.

The ranking scheme seems to penalize posts that have a lot of comments, on the theory that flamebait topics will draw a lot of comments. That sometimes prematurely buries stories with good discussion, but much more often, it buries stories that draw pointless flamewars. If you just read HN, it’s hard to see the effect, but if you look at forums that use comments as a positive factor in ranking, the difference is dramatic -- those other forums that boost topics with many comments (presumably on the theory that vigorous discussion should be highlighted) often have content-free flame wars pinned at the top for long periods of time.

Something else that HN does that’s different from most forums is that user flags are weighted very heavily. On reddit, a downvote only cancels out an upvote, which means that flamebait topics that draw a lot of upvotes like “platform X is cancer” or “Y is doing some horrible thing” often get pinned to the top of r/programming for an entire day, since the number of people who don’t want to see that is drowned out by the number of people who upvote outrageous stories. If you read the comments for one of the "X is cancer" posts on r/programming, the top comment will almost inevitably be that the post has no content, that the author of the post is a troll who never posts anything with content, and that we'd be better off with less flamebait by the author at the top of r/programming.
But the people who will upvote outrage porn outnumber the people who will downvote it, so that kind of stuff dominates aggregators that use raw votes for ranking. Having flamebait drop off the front page quickly is significant, but it doesn’t seem sufficient to explain why there are so many more well-informed comments on HN than on other forums with roughly similar traffic. Maybe the answer is that people come to HN for the same reason people come to Silicon Valley -- despite all the downsides, there’s a relatively large concentration of experts there across a wide variety of CS-related disciplines. If that’s true, and it’s a combination of path dependence and network effects, that’s pretty depressing since that’s not replicable.

*If you liked this curated list of comments, you'll probably also like this list of books and this list of blogs.*

This is part of an experiment where I write up thoughts quickly, without proofing or editing. Apologies if this is less clear than a normal post. This is probably going to be the last post like this, for now, since, by quickly writing up a post whenever I have something that can be written up quickly, I'm building up a backlog of post ideas that require re-reading the literature in an area or running experiments.

P.S. Please suggest other good comments! By their nature, HN comments are much less discoverable than stories, so there are a lot of great comments that I haven't seen.
true
true
true
null
2024-10-12 00:00:00
2011-02-01 00:00:00
null
null
null
null
null
null
29,318,002
https://www.ecu.edu.au/newsroom/articles/research/latte-lovers-rejoice-coffee-could-lower-risk-of-alzheimers-disease
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,899,024
http://techcrunch.com/2010/11/12/tumblr-director-of-product/
As Funding Talk Swirls, Tumblr Lands A Director Of Product | TechCrunch
MG Siegler
Earlier today, a Business Insider report indicated that Tumblr has landed a “very big and competitive” new round of funding from Sequoia Capital. We’re still looking into that, and they’re not commenting, but here’s one thing we do know about the company: they’ve just hired a new Director of Product. Derek Gottfrid will be joining the company from The New York Times where he was the Senior Architect in charge of many of the interesting things they do with technology. It looks like today is his last day of work for the Times, and he’ll be joining team Tumblr shortly, Tumblr President John Maloney confirms. “*We’re thrilled and excited about what Derek will bring to the team/product*,” Maloney writes in an email. Gottfrid had been with the Times for 12 years. The move continues the recent trend of Tumblr hiring from old media companies, as back in August they hired Mark Coatney away from Newsweek to be their media evangelist. In terms of growth, Tumblr continues to be on a tear. They publish all their numbers publicly via Quantcast, and the growth rate is pretty amazing. Back in July, we reported they were doing 1.5 billion pageviews a month — that number is now past 2.5 billion, with half of those coming from the U.S. They’re about to become a top 40 site in the U.S. In August, they shot past a billion total posts. Given the growth and assuming the funding talk is true, expect the New York-based Tumblr to begin ramping up hiring shortly. Their staff blog only shows 12 employees, and one of them, co-founder and CTO, Marco Arment left in September to work on his successful side-project, Instapaper. Tumblr last raised money in April of this year. They’ve raised just over $10 million in total so far leading up to this new funding.
true
true
true
Earlier today, a Business Insider report indicated that Tumblr has landed a "very big and competitive" new round of funding from Sequoia Capital. We're still looking into that, and they're not commenting, but here's one thing we do know about the company: they've just hired a new Director of Product. Derek Gottfrid will be joining the company from The New York Times where he was the Senior Architect in charge of many of the interesting things they do with technology. It looks like today is his last day of work for the Times, and he'll be joining team Tumblr shortly, Tumblr President John Maloney confirms. "We're thrilled and excited about what Derek will bring to the team/product," Maloney writes in an email. Gottfrid had been with the Times for 12 years.
2024-10-12 00:00:00
2010-11-12 00:00:00
https://techcrunch.com/w…ds/2010/11/g.jpg
article
techcrunch.com
TechCrunch
null
null
35,459,406
https://techcrunch.com/2023/04/05/mozart-data-announces-free-tier-to-encourage-smaller-businesses-to-get-on-platform/
Mozart Data announces free tier to encourage smaller businesses to get on platform | TechCrunch
Ron Miller
Mozart Data came out of the Summer 2020 Y Combinator cohort with an idea to provide a data stack in a box for companies. Since then, it has raised $19 million and attracted 100 customers, but the founders recognize that to attract new users to the platform, especially in today’s economic environment, might take some creativity. Today, the company announced Mozart Sonata, a free tier (and fun play on words), meant to help companies get comfortable with the platform and perhaps eventually grow into the pay versions of the product, Peter Fishman, CEO and co-founder, told TechCrunch. “We wanted to get people using our data infrastructure earlier and earlier, and so we tailored a solution for those companies, who might be facing more extreme economic headwinds,” Fishman said. The company offers a number of services involved in processing data, including extracting the data from a variety of sources, placing it in a data warehouse, Mozart’s or Snowflake, and then putting that data to work in dashboards, charts and alerts. The idea is to provide all these services for the customer from a single vendor. The approach appears to be working. The startup doubled the number of customers using the platform since last year’s $15 million Series A funding announcement, but the founders wanted to get more people involved. The freemium model builds a natural top of funnel for the product, while making it accessible to a group of potential users, who might not be far enough along to pay for a solution. “It’s certainly the case that we’ve always liked the land and expand idea, and this is just sort of the extreme of that idea where [we] can bet on the product being so great that [we] can actually not just sell it initially for a small contract and grow that contract, but offer a free tier and have a number of those people convert into paying customers [eventually],” Fishman said. As company co-founder and CTO Dan Silberman pointed out, this is a way to see how the product works to connect to data sources and build tooling around that data. “In practice, that means you can usually get like one executive dashboard, or a little bit of data alerting on your data in Salesforce or something like that,” Silberman said. Once you see the value and you want to explore more, you would need to move to one of the pay tiers. “And then when you see that’s providing you value, and now you want these other dashboards or you want to add these other data sources, then you might start bumping up against the limits of the free plan,” he said. Mozart Sonata is available starting today. Mozart Data launched in 2020 and today has 100 customers and 25 employees.
true
true
true
Mozart Data, a YC alum, announced a free tier today to help companies facing economic headwinds start using the product.
2024-10-12 00:00:00
2023-04-05 00:00:00
https://techcrunch.com/w…s-1398319735.jpg
article
techcrunch.com
TechCrunch
null
null
25,443,454
https://treo.sh/sitespeed?ref=hns
Core Web Vitals and Site Speed Test
null
Improve the speed of your website to build a better UX, increase conversion, and boost SEO. Treo Site Speed uses Core Web Vitals data from Chrome UX Report (CrUX) and provides **the most comprehensive site speed report on the market**. Focus on Core Web Vitals and optimize the quality of user experience across your websites. Also, Core Web Vitals are the next search ranking factors that could help to improve your SEO. Core Web Vitals varies significantly by devices and connection types. Filtering to the right dimension helps to see a clearer picture of your site speed. Make sure there are no country-specific issues for your website. The report shows a detailed map for all Web Vitals, devices, and connections. Benchmark your website with competitors and easily share the link to the report with your co-workers. Receive an email with the latest site speed metrics for your website. A new dataset is published every second Tuesday of every month. Learn more about CrUX updates → Treo Site Speed uses the raw BigQuery data and caches it on many levels. To get one report, you would need to scan **12 terabytes**, which would cost **$60**. If you are interested in getting the same power (and much more) for your business – contact us to get a quote (only for the Enterprise plan).
true
true
true
A free site speed test that shows how fast your website is for real users. It uses Core Web Vitals and displays historical data across devices and locations.
2024-10-12 00:00:00
2020-11-01 00:00:00
https://treo.sh/assets/f…view.ad6c245.png
website
treo.sh
__Treo
null
null
2,928,188
http://www.businessesgrow.com/2011/08/26/the-end-of-marketing-as-we-know-it
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
213,828
http://blog.compete.com/2008/06/09/hummer-shopper-demand-gas-price/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,855,080
http://wtop.com/120/3636415/Va-orders-Uber-Lyft-to-stop-operating
Va. orders Uber, Lyft to stop operating in state - WTOP News
WTOP Staff
WASHINGTON – The Virginia Department of Motor Vehicles says that ride services Lyft and Uber are violating state law and must stop operating immediately. The DMV sent cease and desist orders to both companies Thursday. The letters state that the department has levied penalties against both companies for operating without proper authority. For several months, the state notified both companies that they were not complying with Virginia law. Uber and Lyft drivers could also be subject to penalties for violating the order, the letter says. The letter encouraged the firms to join an ongoing study reviewing the state’s passenger transportation regulations. The study is expected to be complete before the General Assembly meets in January. Both services provide rides in the D.C. metro area. In a statement, Lyft says it will continue operating in Virginia and that it is already complying with Virginia transportation laws and rules. The company also pledged to work with officials to update the state’s laws. “As many of the current regulations surrounding taxis and limos were created before anything like Lyft’s peer-to-peer model was ever imagined, we’re committed to continuing to work with state officials to craft new rules for this new industry,” an email statement says. In a statement, Uber Technologies called the DMV’s actions “shocking and unexpected” saying it has worked “in good faith” with the DMV to create regulations that would allow its service to operate. “The DMV decision today hurts thousands of small business entrepreneurs who rely on the Uber platform to make a living, create new jobs and contribute to the economy – and it hurts the countless residents who rely on Uber,” the company says. Uber says it will continue operating in Virginia despite the state’s order. The for-profit, ridesharing services are coming under increasing pressure in states across the country and in D.C. for violating taxi and insurance laws. Also Thursday, the DMV urged riders to make informed choices about which transportation services they use. The DMV says riders should check that cars have seatbelts, current license plate and inspection stickers. Riders should ask about additional charges for gratuities, fuel or mileage. Riders can search a DMV database to find registered and insured transportation services in Northern Virginia – use the Fairfax District dropdown. “I urge the citizens of Virginia to protect their families by using only companies that appear on DMV’s website as licensed transportation services. If it’s not on the list, it’s not recommended,” says DMV Commissioner Richard Holcomb. **Related Stories:** - Ariz. Senate to debate rideshare regulations - D.C. Council takes up sweeping transportation bill - Seattle taxi drivers sue ridesharing service - Lyft defies New Mexico order to stop operating *WTOP’s Amanda Iacone and Ari Ashe contributed to this report. Follow @WTOP on Twitter and on Facebook.*
true
true
true
The Virginia Department of Motor Vehicles says that ride services Lyft and Uber are violating state law and must stop operating immediately.
2024-10-12 00:00:00
2014-06-05 00:00:00
https://wtop.com/wp-cont…14/06/349836.jpg
article
wtop.com
WTOP News
null
null
38,275,603
https://pvs-studio.com/en/blog/posts/csharp/1080/
What's new in .NET 8?
Artem Rovenskii
.NET 8 has been released, so it's time to start porting your projects to the new version. In this article, we'll look at new features and enhancements: C# 12, performance, Native AOT, GC, new types aimed at improving performance, NuGet Audit, and more.

We have already covered the new features of C# 12 in this article. There, we discussed new language features: primary constructors, default parameters, collection expressions, inline arrays, and so on. There aren't many major changes this time. What do you think about the update? We've looked at it and immediately got some ideas for new C# analyzer rules. By the way, since we're talking about the analyzer, I'd like to make a small announcement. We are already working on .NET 8 and C# 12 support. We will introduce it in PVS-Studio 7.28. The release is scheduled for early December. If you don't want to miss it, I invite you to sign up for the press-release newsletter.

Microsoft said that .NET 7, introduced last year, is very fast, but .NET 8 is faster. And that's true. Stephen Toub, in his typical fashion, told us about performance improvements in .NET 8 in the article. It's several hundred pages long and covers most (if not all) of the improvements. The following things have been enhanced: JIT, GC, reflection, collections, LINQ, code generation, serialization and deserialization, primitive data types, and much more. The article is focused on comparing the performance of .NET 7 and .NET 8 — there are lots of comparison tables. I think everyone will find something interesting there. So, you may want to take some time off to read it.

Just a quick reminder of what the technology is: in short, Native AOT uses an ahead-of-time compiler to compile IL into machine code while publishing a self-contained application. .NET 8 adds support for the x64 and arm64 architectures on macOS. Native AOT applications include a .NET runtime. As a result, they are larger than regular applications. .NET 8 has enhanced this aspect as well. The table below shows the size of the "Hello World" program for both .NET 7 and .NET 8:

Operating system | .NET 7 | .NET 8 |
---|---|---|
Linux x64 (with -p:StripSymbols=true) | 3.76 MB | 1.84 MB |
Windows x64 | 2.85 MB | 1.77 MB |

As you can see, the size of the application on Linux has been reduced by 50%. Moreover, .NET 8 is starting to enable Native AOT support for iOS-like platforms. Note that this is just the beginning of the work, so the developers ask you not to jump to conclusions about performance. You can now build and run .NET iOS and .NET MAUI applications with Native AOT on the following platforms: *ios*, *iossimulator*, *maccatalyst*, *tvos*, and *tvossimulator*.

.NET 8 introduces new types aimed at improving app performance in different scenarios. Developers now have the *FrozenDictionary<TKey, TValue>* and *FrozenSet<T>* collections, which are in the *System.Collections.Frozen* namespace. They prohibit changes to keys and values once a collection has been created. This enables them to be optimized specifically for read operations. This is a very handy feature for when a collection is filled in once and then stored for a long time. Here's an example of how to use the collection:

```
private static readonly FrozenDictionary<string, bool> _settings =
    GetSettings().ToFrozenDictionary();
....
if (_settings.TryGetValue(key, out bool setting))
{
    //....
}
```

A new *SearchValues<T>* type has been introduced.
It provides an immutable, read-only set of values optimized for efficient searching. Another new type, *CompositeFormat*, has been introduced specifically for cases where format strings are unknown at compile time — for example, when loading a format string from resources. Finally, there are new *XxHash3* and *XxHash128* types that implement the fast XXH3 and XXH128 hashing algorithms.

Security plays an important role in the development process, and .NET developers always remember that. Now, when running *dotnet add* and *dotnet restore*, you will get warnings about every package that contains a vulnerability. By the way, PVS-Studio can search for vulnerable components used in your project. If a library you are using is vulnerable, or depends on one that is, the analyzer will issue a warning. The analyzer searches not only for direct but also for transitive dependencies. To learn more about PVS-Studio as an SCA solution, please follow the link.

New methods for handling randomness have been added: the *GetItems* method helps randomly select a specified number of items from the passed set, and the *Shuffle* method shuffles the passed sequence of items. As the developers say, such features will come in handy in the field of machine learning.

Starting with .NET 8, you can regulate memory limits. This feature may be useful for cloud services. For example, you can reduce the amount of memory available when the load is low. To adjust the limits, call the *RefreshMemoryLimit* method. You can now also update some GC configuration settings. For example, it's possible to set a hard limit for the heap size:

```
AppContext.SetData("GCHeapHardLimit", _memoryLimit);
GC.RefreshMemoryLimit();
```

With the release of the new .NET, serialization and deserialization have gotten a lot better. There are a lot of enhancements; you may read the following article to learn more about the new features, bug fixes, and more: "What's new in System.Text.Json in .NET 8".

Looking back at the features added in .NET 7, not much seems new in .NET 8. However, I can assure you that there are still plenty of enhancements. It seems that this time the developers focused more on targeted improvements to the platform. There are a lot of not-so-big but still important enhancements in many areas of .NET usage. The article lists only the most interesting features that are likely to be useful to the majority of developers. You can read about all the improvements here. If you find something useful that isn't mentioned in the article, feel free to write about it in the comments. Are you already using some of the new .NET features? Let me know in the comments!
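To make the new types a little more concrete, here is a minimal, self-contained C# sketch (not from the original article — the class, values, and variable names are purely illustrative) that exercises the frozen collections, *SearchValues<T>*, and the new *Random* helpers described above, assuming a .NET 8 target:

```
using System;
using System.Buffers;
using System.Collections.Frozen;
using System.Linq;

class DotNet8NewTypesDemo
{
    // Frozen collections: built once, then optimized for fast read-only lookups.
    private static readonly FrozenSet<string> AllowedRoles =
        new[] { "admin", "editor", "viewer" }.ToFrozenSet();

    // SearchValues: a precomputed set of characters for fast IndexOfAny searches.
    private static readonly SearchValues<char> Vowels = SearchValues.Create("aeiou");

    static void Main()
    {
        Console.WriteLine(AllowedRoles.Contains("editor"));   // True

        ReadOnlySpan<char> text = "frozen collections are fast";
        Console.WriteLine(text.IndexOfAny(Vowels));           // index of the first vowel

        // New Random helpers in .NET 8: sample items (with replacement) and shuffle in place.
        string[] picks = Random.Shared.GetItems(new[] { "red", "green", "blue" }, 5);
        Console.WriteLine(string.Join(", ", picks));

        int[] numbers = Enumerable.Range(1, 5).ToArray();
        Random.Shared.Shuffle(numbers);
        Console.WriteLine(string.Join(", ", numbers));
    }
}
```

These APIs share the same basic trade-off: pay a one-time construction cost up front in exchange for cheaper lookups or searches afterwards.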
true
true
true
.NET 8 has been released, so it′s time to start porting your projects to the new version. In this article, we′ll look at new features and enhancements: C# 12, performance, Native AOT, GC, new types...
2024-10-12 00:00:00
2023-11-14 00:00:00
https://files.pvs-studio…24-2024-14-30-27
article
pvs-studio.com
PVS-Studio
null
null
5,473,715
http://9to5mac.com/2013/04/01/apple-ceo-tim-cook-announces-changes-to-warranty-policies-apologizes-to-customers-in-china-following-smear-campaign/
Apple CEO Tim Cook announces changes to warranty policies, apologizes to customers in China following smear campaign - 9to5Mac
Jordan Kahn
Following controversy in China with state-controlled media outlets going after Apple over its warranty policies, Apple CEO Tim Cook today addressed the issue in a letter published to the company’s website in China. Within the letter (translation below), Cook apologized to customers and announced the company would implement a number of changes to “improve the level of service” for Chinese customers. The controversy originally started after a report from state-controlled media outlets such as China Central Television criticized the company for treating consumers unfairly with its warranty and repair procedures compared to other countries. Following the initial attacks from Chinese media outlets, Apple initially responded to the criticism through a message published on its website clarifying its warranty policies and telling consumers that *“Apple’s Chinese warranty is more or less the same as in the U.S. and all over the world.”* In the past two weeks, we have received a lot of feedback about Apple in China repair and warranty policy. We are not only a profound reflection on these views, together with relevant departments to carefully study the “Three Guarantees”, and also look at our maintenance policy communication and combing our management specifications of Apple Authorized Service Provider. We are aware that, due to the lack of external communication in this process and lead to the speculation that Apple arrogance, do not care or do not attach importance to consumer feedback. We express our sincere apologies for any concerns or misunderstandings this gives consumers. Today Cook announced that Apple will be implementing some changes to its warranty policies starting this month. Among the changes, Cook noted that Apple will improve its repair policy for the iPhone 4 and 4S, provide better information about warranties on its website, and improve training for staff: - Improved iPhone 4 and iPhone 4S repair policy - Provide a concise and clear on the website of the official Apple repair and warranty policy statement - Increase the intensity of the supervision and training of Apple Authorized Service Provider - Related issues to ensure that consumers can easily contact Apple Feedback Service Perhaps the most significant change is the way Apple will handle repairs and replacements of the iPhone 4S and iPhone 4 going forward, providing new replacement devices for users on Apple’s 1-year warranty in cases where it previously only replaced parts. Many of the controversial reports in China centered around claims that Chinese consumers were provided with replacement parts during repairs, or the original back casing for their device, while consumers in other countries were given new replacements. Cook also urged customers to reach out to Apple with feedback through its website here. Cook’s full letter below: To our Chinese consumers: In the past two weeks, we have received a lot of feedback about Apple in China repair and warranty policy. We are not only a profound reflection on these views, together with relevant departments to carefully study the “Three Guarantees”, and also look at our maintenance policy communication and combing our management specifications of Apple Authorized Service Provider. We are aware that, due to the lack of external communication in this process and lead to the speculation that Apple arrogance, do not care or do not attach importance to consumer feedback. We express our sincere apologies for any concerns or misunderstandings this gives consumers. 
In order to further improve the level of service, we are implementing the following four major adjustment: - Improved iPhone 4 and iPhone 4S repair policy - Provide a concise and clear on the website of the official Apple repair and warranty policy statement - Increase the intensity of the supervision and training of Apple Authorized Service Provider - Related issues to ensure that consumers can easily contact Apple Feedback Service At the same time, we also realize that operating in China, and communicate much we need to learn the place. Here, we assure you, Apple for the commitment and enthusiasm indistinguishable from other countries. Bring the best user experience for consumers and satisfactory service is our ideals, our commitment, and it has been deeply rooted in Apple’s corporate culture. We will make unremitting efforts to achieve this goal. **iPhone 4 and iPhone 4S repair policy improvements are as follows:** So far, iPhone 4 and iPhone 4S repair this in one of three ways: as from the purchase date of 15 days found the problem, we will be entitled to a refund or replacement for consumers recalculated 1 year warranty period iPhone; 15 days after discovery problem, Apple will replace the part depending, such as camera modules or batteries; replacement parts also can not quickly repaired the iPhone, Apple will provide consumers with a part reassembled new parts, retaining only the consumers existing iPhone 4 or iPhone 4S back cover. Nearly 90% of customers expressed satisfaction with our repair services, and consumer satisfaction is the most important criterion for Apple to measure its own success. But others suggested that part of the re-assembly of repair almost replace the machine, so the direct replacement of a device will be more beneficial to consumers. Therefore, since April 2013, Apple iPhone 4 and iPhone 4S service pack upgrade for all 1-year warranty on new equipment replacement parts and replacement date recalculated. Consumer iPhone 4 or iPhone 4S Apple or an Apple Authorized Service Provider has been part of the re-assembly kit maintenance, we will replace them as whole and for maintenance after the iPhone recalculated from the date of repair year warranty. Apple’s warranty system has been updated for the information and, therefore, affected consumers do not need to take any additional action. **Now, all consumers can see on our site clear and comprehensive maintenance and warranty policy terms and conditions.** We are pleased to provide consumers with information who wish to learn more about the after-sales service. For example, we have been to provide 2 year warranty for the MacBook Air and Mac computer motherboards and other major components. Likewise, the the iPad main components has been entitled to a 2-year warranty period, and other components for 1 year warranty. We realize that our site before this is not clearly stated policy. Hope the following will answer all the questions about Apple provides services. **Apple is to make greater efforts to ensure that Apple Authorized Service Provider to follow our policies, and make every effort to provide consumers with the highest quality service.** Week since March 18, 2013, we made a new training materials for all Apple Authorized Service Provider to ensure that each staff provide services for Apple products is not only familiar with our policies, but also to grasp three guarantees “provisions and related policies. 
The same time, we have taken the initiative through face-to-face meetings and other forms of verification and to ensure that each Apple Authorized Service Provider have opened training courses to update the knowledge of the staff for the maintenance and warranty policies. We will make unremitting efforts and continuous monitoring of the performance of the Apple Authorized Service Provider to ensure that consumers can get the highest quality service. **Now, the feedback service-related issues is also very convenient.** As the consumers of the services provided by any Apple Store retail store or an Apple authorized service provider doubt, to Welcome www.apple.com.cn/support/service/feedback/ directly get in touch with us. Our goal is to consumers where to buy Apple products or receive services, users can enjoy world-class experience. Heartfelt thank you to give us valuable feedback, we always harbor immense respect to China, the Chinese consumer is always the top priority of our hearts. Tim Cook Apple CEO *FTC: We use income earning auto affiliate links.* More. ## Comments
true
true
true
Following controversy in China with state-controlled media outlets going after Apple over its warranty policies, Apple CEO Tim Cook today...
2024-10-12 00:00:00
2013-04-01 00:00:00
http://9to5mac.com/wp-content/uploads/sites/6/2013/04/apple-warranty-china-tim-cook-apology.png
article
9to5mac.com
9To5Mac
null
null
16,221,116
https://pdfs.semanticscholar.org/4810/dde817e278a7476d8b2531b370df6968308c.pdf
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
17,324,351
http://armchairgeneral.com/uss-robin-the-victorious-u-s-carrier-that-didnt-exist.htm
USS Robin – The Victorious U.S. Carrier that Didn’t Exist
Joseph Tremain
# USS Robin – The Victorious U.S. Carrier that Didn’t Exist *The photo above from the ACG* *archive was posted on* Armchair General’s Facebook page *recently. Viewers were asked if they could identify the ship. Joseph Tremain didn’t just identify it correctly, he wrote the following article for* *ACG* *about about the unusual story of the U.S. carrier that didn’t exist.* It is not unusual for a ship to disappear at sea in wartime—but for a ship as a large as an aircraft carrier to suddenly appear from nowhere is noteworthy to say the least. That is exactly what it must have looked like to Japanese naval intelligence officers listening to American transmissions in the Pacific in early 1943. This story begins in late 1942 when the United States Navy found itself in a precarious situation in the war with the Japanese Empire. At the Battle of the Santa Cruz Islands, the aircraft carrier USS *Hornet* was sunk and the USS *Enterprise* was severely damaged, temporarily putting it out of action. That left the USN with only one fleet carrier to carry on the South Pacific campaign in the Solomons. But in May of 1943, during Operation Cartwheel, which was intended to isolate and neutralize the Japanese base on Rabaul, a second fleet carrier suddenly appeared beside the only remaining operational US carrier, the USS *Saratoga*, which operated out of Noumea, New Caledonia. This new fleet carrier was being called the USS *Robin*, but it was not listed in the USN inventory, and it couldn’t be The USS *Essex*, which was nowhere near completion. Yet there she was—a full-sized fleet carrier complete with American Avengers and Wildcats on her deck. This mystery carrier, the USS *Robin*, might have become famous if it had taken part in any major fleet battle, but instead it has faded from all but the more detailed history books. The truth was that the "USS Robin" as she was being referred to by many sailors, was actually a British carrier—the HMS *Victorious* (R38). It was never even really titled or re-named "USS Robin;" rather, it was code-named "Robin" for communication purposes, an intentional reference to the famous—or infamous—English outlaw Robin Hood. But with the lack of American fleet carriers to protect against potential Japanese carrier aircraft in the Solomons and provide cover for operations against Munda and Bougainville, the "Robin" was a much-needed addition to the weakened carrier fleet. The short, strange story of the Robin began in December of 1942. The United States Navy found itself with only one fleet carrier operational and needed another large carrier to help assist in the theater until the first of the new *Essex*-class carriers became operationally available. The solution turned out to be simply making a request to the Royal Navy for a loan. The Royal Navy decided to loan the USN an *Illustrious*-class carrier, the HMS *Victorious* under the command of Captain L. D. MacIntosh, Royal Navy. In January of 1943, the *Victorious* arrived at Norfolk Naval Shipyard, Virginia, to begin modifications and upgrades necessary to handle the American aircraft and equipment. After the Norfolk refit, the *Victorious* transited the Panama Canal and arrived at Pearl Harbor in March 1943 to join the Saratoga Battle Group, Task Force 14. Between March and May, the *Victorious* underwent additional modifications at Pearl to specifically handle the American versions of the Grumman TBF Avenger (or British Avenger) and F4F Wildcat (British Martlet). 
To complete the makeover and new look, the *Victorious* temporarily shed her typical British Atlantic "admiralty disruptive camouflage scheme" (irregular patterns of dark and light tones) for the American standard navy gray. On May 17, 1943, the *Victorious*, now code-named "Robin," along with USS *Saratoga*, arrived at the Solomon Islands as part of Task Force 36 commanded by Rear Admiral DeWitt Ramsey, USN. The *Saratoga* and *Victorious* would become the core of Task Group 36.3 under Rear Admiral F. P. Sherman along with the USS *North Carolina* (BB-55), USS *Massachusetts* (BB-59), USS *Indiana* (BB-58), USS *San Diego* (CL-53), USS *San Juan* (CL-54), HMAS *Australia* (D84, a heavy cruiser) and several escort vessels. Her ship’s crew was British, but her aircrew and aircraft were American. No one involved had any illusions that she wouldn’t be identified as the *Victorious* by enemy pilots, so she proudly flew her British Jack throughout her time with the Yanks, even when only the Yanks were flying on and off her flight deck. The highlight of the *Victorious*’s very short career with the USN was her involvement in providing cover during the Munda landings on the island of New Georgia in the Western Province of the Solomon Islands. The *Saratoga*, with its larger complement of aircraft, supplied the strike force for the landing while the *Victorious* handled the air cover for the task group. Shortly after this, she supported the Bougainville invasion before leaving for home, and the name USS *Robin* was once again the sole province of its rightful owner, a long-time minesweeper recently converted to an ocean tug. Although *Victorious*’s stint with the US Navy was not as illustrious as it could have been, that did not detract from her otherwise proud place in history. Before the USN loan, the *Victorious* was involved in the sinking of the German battleship *Bismarck* and, after returning to the Royal Navy, she took part in the sinking of *Bismarck*‘s sister ship, the *Tirpitz*. She would later return to the Pacific, once again working with the USN, and take part in the battle for Okinawa. Thanks to Gerald and the rest of ACG for publishing the article. Wow, I never heard of any of this and I consider my self an historian on WWII especially the Pacific area thanks for the information i loved it I see that the article states that ‘Her ship’s crew was British, but her aircrew and aircraft were American.’ According to my copy of ‘Send Her Victorious’, the biography of the ship written by Lt Commander M. Apps, although the aircraft, Martlets (F4F-4Bs) and Avengers TBMs were American they were flown by pilots of 832, 896 and 898 squadrons of the Royal Navy Fleet Air Arm. 15 Avengers of 832 Squadron flew from USS Saratoga during operations in the Solomons in July 1943, ‘the first and one of the few occasions on which British Aircraft operated from a United States aircraft carrier’ (ibid) Thank you for the information Paul. I would like to add that the H.M.S. Victorious carried American and British squadrons at various times and compliments during this operational period. To add to your information that you supplied, FAA 832’s Avengers (these are true Avengers, not Tarpon variants later supplied to Britain) were assigned to air group 3 and operated off of U.S.S. Saratoga (CV-3) during June/July of 1943, and squadrons 896 & 898’s Martlets operated off of U.S.S. Charger (CVE-30). 
Victorious also had a fourth squadron, 882, made of Martlets, but 882 was assigned to the headquarters of Admiral Halsey, Commander South Pacific Forces, onshore at Noumea throughout the operation. If you look carefully at the tails of the American Avengers and Hellcats you will see the American squadron white number(s) although they are not legible, and the absence of the FAA vertical striped flag. Although there could definitely be British Martlets and Avengers down below, there are only American aircraft on the deck. If you find additional information on this bit of history, please let me know. Thanks. My father served with Royal Navy from 1937 and on the “Vic” from 1941 – 1946 and he would shake his head in disbelief that they ended up with an Ice Cream maker and Coca Cola machine during her refit to serve with USN. He never knew the “Vic” was code named “Robin” until I found the information on the internet a few years ago. Another difference which he mentioned (not that he spoke very often about the war) was the American aircraft carriers had timber decks and how the “Vic’s ” steel deck saved her during Kamikaze attacks. Jackie my fater was also on Vic at that time. Cheif ERA a a Barker. I served on the Victorious as a midshipman when she was based at Portland . At that time she was no operational as a carrier I can vouch for the fact that she was hit by, I believe, two Japanese Kamikaze suicide planes. They made a shallow dent in the deck armor and were quickly “bull dozed” over the side. Later one could walk the deck below and see two dents, about three to four inches deep, where the kamikazes had struck Her original armament of 16 4″ guns mounted two to a turret at the four corners of the flight deck, plus several pom poms. The Yanks welded on three or so decks along the side of the “island” and filled them with Bofors and Oerlikons. As gunnery officer I can vouch that we had 141 guns in all. Nicely outnumbering the HMS Victory at Trafalgar with a mere 104 guns My late father-in-law served during the Pacific deployment in 1943. He took some interesting pictures, including Chester Nimitz being welcomed aboard Victorious in Pearl Harbor in March 1943 and a party with hula-hula dancing girls performing on the flight deck on the same occasion. Also some shots of his mates and the aircraft they maintained. The aircraft have prominent US markings but one shot shows clearly the words “Royal Navy” discreetly on each side of the dorsal spine. I did some research. Victorious sailed from the Clyde in December 42 with two destroyers and a load of Martlets plus supernumerary FAA personnel to form and train in the US. It was decided to use the US name “Wildcat” to avoid confusion. 832 was issued with Avengers in Norfolk Navy Yard, Virginia, in January 43. These were standard US Navy issue with the “Royal Navy” identifiers presumably painted later on board. In shakedown operations en route to Pearl Harbor, the RN arrester wires were found to be too thin for the heavy Avengers, so modifications were carried out in Pearl. The Vic’ also had a diphtheria epidemic en route and medical supplies had to be dropped by a PBY. On arrival at New Caledonia, Victorious joined USS Saratoga in Task Force 14 and immediately sortied for a week hunting the Japanese fleet. Six “Martlet” types were lost in accidents and were replaced by “Wildcats” so, in practice, the FAA was operating the US and UK versions of the fighter at the same time. 
Evaluating performance, Admiral Ramsey was impressed by the fighter direction system on the Vic’ but not with Avenger handling – none of her cranes could lift Avengers, for example. Accordingly, he moved 832 on to the Saratoga and US Carrier Air Group 3 on to the Vic’. I think CAG-3 included some Dauntless but was mostly Wildcat equipped. So both carriers operated with mixed aircrew. Not sure if this involved any transfers of fitters and techs as well. In the operations to support the New Georgia landings in July, the Vic’ set a record for continuous time at sea by a British carrier. Not without incident – the potatoes ran out. Saratoga obliged with provision of something unheard of in the RN – instant mashed potato! An Avenger was modified to carry drums of smash and an unhappy cook was strapped into the rear seat to show the RN cooks how to use this mysterious stuff. The “Spud Express” was not a total success – the RN cooks didn’t quite get the instructions right and made four tons of the stuff! Hi Jackie My father also servied on the “Vic” when it was in the Pacific. He like your Father did not talk very often about the war. However, he did tell me the story about timber and steel decks and how it saved the ” Vic” on more than one occaision. Cheers Colin was the victourious involved in the battle of okinawa Yes Ken, in 1944 the Victorious returned to the Pacific after assisting in the sinking of the Tirpitz. In 1945, Victorious, Illustrious, Indefatigable and Indomitable operated as British elements working with U.S. Fifth Fleet in the battle of Okinawa. Victorious was successfully hit by three kamikaze attacks, which her metal deck protected her and allowed continued operations within hours of the hits. Victorious was also intended to take part in Operation Olympic if Japan had not surrendered. This Commission was my fathers first Draft as a recently trained Air Mechanic drafted to 832 RNAS. He had is 21st Birthday of New Caledonia having travelled from the UK to Norfolk VA for the refit and then through the Panama Canal to Pearl Harbour before leaving for the South Pacific. He makes reference to American personnel looking after the aircraft and I have one or two photo’s of Avengers and Wildcats landing on. The Wlidcats (Martlets?) have Bristish markings and the Avengers US markings 4R on the fuselage. Its not clear if these are from the same trip however. I also have photos of march pasts of senior commanders including Admiral Nimitz. Interesting period in our shared history David, I would love to see digital images of the Vic. Personal photos taken by service members are hard to find and often allo amazing discoveries previously undocumented or unpublished. You can go to Jax Photo Evolution (one word) dot com to obtain my contact information. Thanks. A job for the weekend as I need to take them from the Album dad put together at the time. Some photos taken by the ships photographer as well as some impromptu shots of USN and British Navy personnel working on Deck. 832 RNAS linked up with Saratoga again in May 1944 when 832 were temporarily deployed in HMS Illustrious. Illustrious and Saratoga were part of a task force deployed on raids against Japanese oil installations at Surabaya Java. Best Hi David, I served on the Victorious 1957 to 1960, her first commission after her Post WWII refit, we visited the USA July and August 1959, but never traversed the Panama Canal. Check out my home page. http://www.barrylockyer.com/ there is a link to the Victorious page. 
I would appreciate copies of any photographs you have for my collection and the benefit of all ex Victorious Sailors. Yours Aye In the early years of US involvement in WW2 it was the British providing assistance to the USN rather than the other way round. In early 1942 the UK also had to lend the USN 50 anti-submarine vessels (and crews) and a Fleet Air Arm Squadron to protect New York Harbour. The USN was totally unprepared for war (despite every warning) and ended up being far more dependent on the RN than they would of liked – so HMS Victorious was converted into USS Robin. At Okinawa the British protected the USN southern flank and their aircraft carriers proved more or less invulnerable to Kamikaze’s, taking repeated hits with no effect – they literally ‘bounced off’. Later US carrier design was modelled on the Illustrious class design principles. Media and public perceptions of the Naval war in WW2 are very distorted. The British lost more ships in the battle for control of the Mediterrenan than the USN lost in the Pacific. British warship sinkings in WW2 were over twice as much as the USN. The largest British carrier battle ‘Operation Pedestal’ is largely written out of the history books – mostly because it was carriers against land based aircraft. In terms of ‘air doctrine’ its very significant, because the carriers, which were very difficult to sink, could maintain local air superiority of the convoy they were protecting. A lot of the British carriers were wrecked (Illustrious and Indomintable both had their hulls warped) but were still able to function. In contrast US carriers were easily put out of action – but easily repaired. I disagree with the claim that “it was the British providing assistance to the USN rather than the other way round.” It is true that the RN temporarily lent 24 armed trawlers (22 A/S Strike Group), 3 destroyers and six corvettes to the US Eastern Sea Frontier in early 1942, of which 14 were available to that command by 1 April 42. The act was much needed and greatly appreciated, but it is hardly a fair, complete or accurate representation of the total mutual aid relationship of the time. Further, the FAA squadron that was lent was to the RN’s benefit, not an act of charity to the US. At the outset of the US’s participation in the war, the US wanted to retain a shipment of coastal patrol long range bombers that were originally destined as aid for Britain. These aircraft were badly needed to strengthen the US Eastern Sea Frontiers’ defenses. As a compromise, Britain offered the temporary use of the squadron to ensure an uninterrupted supply of US-built patrol aircraft. In this light, we see the balance of aid was mutual, rather than one-sided. On the other hand . . . from Mar through Dec ’41, 2,400 US aircraft were provided to Britain; by comarision . . . during the 1942 U-boat blitz in the western Atlantic, the RN provided the US the temporary use of just one squadron. In this light, things do look one-sided, but in the opposite context. But let’s put this in strategic perspective. For much of the previous 2 years, the US had been providing the RN every bit of aid allowed by law, and some that was not. This aid included not just materiel, but operational deployments that enabled the RN to concentrate its strength where most critical. In doing so, the US assumed risk at home, as it diverted war production from its own armed forces to those of Britain. 
Some examples: – By 1940, France, Britain and the Commonwealth countries bought nearly 90% of US aircraft production. – By the end of ’41, we had not only transferred the 50 destroyers, but an additional 10 Lake class 230 ft Coast Guard cutters (transfers that were not included in the eventual total of 886 ships transferred to the RN during the war). – Despite the lack of escorts available to the USN, in mid-41 we committed to providing the RN 100 escorts. (78 DEs were eventually delivered; the RN’s Captain class frigates). – Roughly half of the escort carriers built in US yards through the end of ’42 were given to the RN (38 total CVEs were transferred to the RN). By comparison The British built only 5 CVEs during the entire war, and employed one German prize in this role. – The basing agreements had permitted redeployment of RN assets to other theaters, while the US assumed responsibility for security of those vacated areas. Backfilling those bases diverted US ships – and especially patrol aircraft – that later would be sorely needed off the US coast in early 1942. – The Pan-American Safety zone was patrolled and enforced by the USN in the years leading up to Pearl Harbor. Reluctant to risk bringing the US into the war, Germany restricted its U-boats to the eastern Atlantic prior to 7 Dec ‘41. (Fewer than 5% of U-Boat sinkings between 1 March and 6 Dec 41 took place west of the Greenland meridian.) This vastly eased the RN’s task, enabling it to focus its limited strength in a tremendously reduced arena. – The US had taken responsibility for protection of Greenland and its waters, and relieved British forces of the responsibility for Iceland. – Further, 3 months before Pearl Harbor, the US “White Patrol†(2 battleships. 2 cruisers, 2 destroyer squadrons) had assumed responsibility for blockading German ship movements through the Denmark Straits (the possible break-out of the Tripitz being the fear). – Beginning 3 months before the official US entry into the war the USN assisted in convoy escort duties in the western Atlantic, and from mid-Oct 41, had taken responsibility for escort of HX and fast ON convoys. In addition, we assumed responsibility for tasks such as convoying 20,000 British troops from Halifax to the Far East (and subsequently Australian troops from Suez to Freemantle, and delivery of reinforcements to Singapore) in US hulls, escorted most of the way by US warships (convoy sailing 10 Nov 41). That’s a LOT of help to be dismissed so cavalierly, and yet it barely scratches the surface. The US badly hampered its own pre-war mobilization in order to help the RN cope with the U-boat threat in the eastern Atlantic. By comparison, it is remarkable how little the RN aided the allied effort when the U-boat threat shifted to the western Atlantic. And that’s the strategic point. When the threat was in the eastern Atlantic, the RN received extensive US support. When the threat shifted, it would only make strategic sense to shift forces to cope. But this the RN did not do, except for such minor examples as given. Instead, the extensive naval power the US had helped the RN field remained committed to a vastly reduced threat. (See maps on pg 58 and 124, Vol I, History of US Naval Operation in WWII for depictions of the dramatic shift of U-boat operations during this period.) At the least, it is a clear example of strategic inflexibility. At worst, a cynic might – unjustifiably – conclude the relationship was rather one-sided. 
Clearly the original assertion is incorrect; the US provided ample assistance to the British during this period. The fact is, both sides needed each other badly, and aid flowed – generously – both ways to the degree that each could provide. No one over here has forgotten the Tizard mission, the origin of the plans for the invaluable classes of landing craft, or the battle-won expertise that was provided freely by the British. Nor are we unaware of the failings of the USN early in the war. But to belittle the contributions from this side of the pond is ungenerous, to say the least. In fact, battles such as the Java Sea demonstrate that both navies were fighting and dying together, and at that point, neither doing especially well, whether new to the war, or with a couple years’ experience under their belts. Actually it was all about the UK defending the USA and teaching the USA how to wage war. . . . Although in 1942 the Japanese and Germans were at the height of thier expansion its was clear to any one observing the strategy and economics of the war that Axis defeat was inevitable. The UK and Russia had decided the course of the war with the Battle of Britain and the Battle of Moscow. The war was essentially won in 1938 when the British made a planned switch to a war economy – some 5 years ahead of Germany. In 1941 the British were building more aircraft than Germany, Japan and Italy combined, 5,000 more than the USSR and 5,000 less than the USA. In 1942 the USA was a liability. Shipping losses to U-boats had fallen steadily throughout 1941 only to reach spectacular levels with the entry of the USA into the war. All major historical authorities – Morrison, Roskill, Churchill, Bauer and even General Marshall are agreed this was entirely due the incompetence of the US Navy and the stupidity of Admiral King. The correspondence between King and Marshall can be found in Bauers history and ends in effect, with an Army General correctly advising a US Admiral on maritime tactics. . . . If we go further into the history we find that the USAAF in the UK was getting something like 70% of its supplies locally untill 1943 (its in the USAAF history). The story of the USA ‘coming to the rescue’ of the UK is propaganda story that suited both the British and the USA. The reality was very different, starting with the Arcadia conference., where the British subtly forced US to model its war economy and planning on the British system. The reality is the USA knew nothing about managing a modern war and learned everything from the UK. The history of the WAAC’s makes interesting reading if you want to go there, simply because it has the US Army learning (the hard way) exactly what they had already been told. In 1939 the Poles brought the secrets of their decoding of Enigma to the British and the French. Had things been different and the British been really up against the Tizard mission would of gone to the USSR – and the British had a workable design for a nuclear bomb from the ‘tube alloys’ project Britain and the USSR would of won, and the USA would be a minor player on the world stage. It’s a fascinating insight into WWII that the French knew throughout WWII that the British were probably decoding German codes – but never let on. . . . . Absolutely right. The assistance was from both sides and must never be forgotten. 
The Santa Cruz operation cost the US dear at a time when aircraft carriers were in short supply and I am sure it was with genuine gratitude that the UK was able to assist the US for this short time whilst the ESSEX class carriers came on line. The British fought a highly technological and industrial war and did so very efficiently. Britain used not only her vast empire but her even larger trading empire to great effect – an army of 2.6 million marched into Burma. The British, with its massive navy, surrounded the Axis, from the Med (cutting off both entrances to the Med) to the Eastern Atlantic, starving them of natural resources, especially the magic oil (In May 1941 the Italian Navy could not put to sea due to lack of oil) and ensuring where the battles would be fought. “Steel not flesh” was the slogan. The British assessed that having massive armies is highly inefficient. The larger the army the higher the casualties. Britain deliberately chose to keep numbers of front line troops as low as possible building machines and using technology advances instead – the BEF in France was the first army were men never marched – fully motorised. The Kangaroo was the first armoured personnel carrier developed in WW2 from adapted tanks, saving many lives, in contrast to the horrendous US casualties. The policy worked, despite fighting for the full duration, the only major country to do so, and all around the globe, the country had only lost around 440,000, which is half the British dead of the 1914-1918 war, which lasted two years less. Germany and the USSR lost considerably more troops than they had in WW1. From the war came amazing British inventions: the cavity magnetron, the electronic computer, the world’s most advanced jet engines, anti-submarine electronics, the proximity fuse, as well as the Liberty ship (a Sunderland design), to name but a few. Massive developments in manufacturing, with a staggering 132,500 aircraft for such a sized industry, and over one million military vehicles. Canada alone produced more wheeled vehicles than Germany. From the first American servicemen arriving in Britain in 1942, until VE Day, the British provided the USA with 31% of all their supplies in the European Theatre of Operations. Britain’s war effort was astonishing – backed by their insistence in continuing the fight in 1940. The British made an enormous contribution to winning the war, being the key agents. This had a positive effect on the future of the world. The declinist view of Britain in the war must be dispelled for good. Very interesting. I only heard of ROBIN a few years ago and certainly no U.S. history books will acknowledge the help of the Royal Navy. ‘Pedestal’ however, may have been written out of the history books but the Merchant Navy has not forgotten it. Neither have the government or people of Malta as witnessed, I understand, by the annual service of remembrance in Valletta harbour. Great work: I love these sites uncovering barely remembered, or entirely forgotten, facts and/or incidents. We must never forget that the tanker OHIO was American though staffed by British Merchant sailors and officers. In was with the New Zealand shipping company and my companies Dorset was in the Pedestal Operation along with Port Lines Port Chalmers. Our seamen worked an served together from the very outbreak of WW2 and were from all nations. Well gentlemen, I do not propose to get involved in who won the war ? however, a couple of small points that may be of interest. 
The H.M.S Victorious Association, still has many members in its ranks who served, and were injured in the Pacific on Victorious. Operation Meridan (one and two) when the attack on the Japanese fuel storage and oil refineries near Palembang, are still discussed after tot time at our meetings. This is history very real to many of our members. I would say, I have not heard any of these members( many of them who had already served two or three hard war years)speak about who did what in these theatres. However, I would urge you corrispondents not to over look the fantastic efforts and sacrifices made by so many young Canadians flying at this period. Best Regards Stan McLellan Chairman HMS Victorious Association. ex Vic. Thank you Stan. The intent of the article was not to credit every possible element involved in this overlooked piece of history, which of course would be almost limited, lol. The goal was to bring to light a piece of history that is practically lost within the annals of history, especially within WWII studies in the United States of America. If your association has a website, especially any page(s) detailing the ‘Robin’ codename and operations, please post them here, it would greatly enhance this article. Thank you very much. First mention I have found at this point, advises. ” The carrier (Victorious) reached San Christobalat mid-day on the 10th Feb.1943, and preparded to pass through the Panama Canal. As she passed the USS Stalker,she intercepted a signal fromthe USS Massachusetts which read “What,s the Limey flat top?”. Although the reply was not decoded, for the purpose of security Victorious was generally refered to as the USS Robin- with a sense of humour and a knowledge of ornithology, there was scope for some interesting signals. Taken from “send her Victorious” by Michael Apps Lt. Commander RN copyright 1971 SBN no. 718 0102 1 published by William Kimber and Co.. Apps commanded 814 squadron, and his book is generally taken as the definitive record of Victorious and her long career, but we live and learn. I will ask some of our veterans at our next meeting, to see if they can shed further light on USS Robin, I do know a couple have mentioned the name in the past. Hi Stan My father William Dobbie served on the HMS Victorious (USN Robin) in the Pacific. My Dad died some years ago and I received his war service record and various other momentoes from his time on the Victorious. I was exited to hear that there are still some surviving members from the Pacific campaign. I live in Australia and will be in the the UK in mid to late September 2013. I would love to meet the survivors and speak to them about there time in the Pacific. Would this be possible? If not, I was hoping I could speak to you by phone say when I am in the UK. Hi Colin, we have surviving members all round the globe, and several interested parties in Australia, (very popular place to head for after the war, and Victorious brought a large number of war brides back to Uk in 1945, many who subsequently did not not like the climate and headed back! The main Victorious reunion this year will be held first week-end of October, which may be too late for you, we meet for the week-end in Coventry (central in the UK). I live in Leeds (northern England), and two of my very good veteran mates live with ten minutes of my home. One who was burned in a Kamakaze strike in 45, the other who served on carriers including Glorious and old Ark Royal (both sunk) served out his time in the Pacific on Victorious. 
Both fit, but not young men !!! would be happy to meet you I’m sure.We have lost a couple of our best this year, time takes its toll. The Victorious Web site is http://www.hmsvictoriousassociation.com if you have a look on the sites “Gallery” we have a couple of thousand pictures there.Tab on WW11 file. All my details are on this site and I can be contacted there. there.or leave a message on the guest book. I will keep an eye on this site now I have found it.Best Regards, Stan McLellan Chair Vic Assoc. Hi Stan Do you have any information on my father Robert Allan Procter (Bob)who was an Observer in the 832 Squadron. He flew with John Fay among others before being promoted to Flight Deck Commander on the HMS Begun Not to be picky or ungenerous, but we NEVER say “the” before HMS – it of course being perfectly acceptable before USS. That’s because the full name would be “… the Her Majesty’s Ship”, which is clearly grammatically unsound. Whereas “… the United States Ship” is obviously fine. I know it’s sounds silly, but it really grates with us RN folk! 🙂 Mark, you are absolutely correct and, to borrow a line from Winston Churchill, this is something up with which we will not put. Sorry that slipped through; we’ll keep a closer eye out for “the HMS” in future. Thanks for pointing it out. The same holds true for HMCS, HMNZS, HMAS, HDMS, and HNLMS for that matter. It is rather ungenerous (not to say inaccurate) to state that “Operation Pedestal” is written out of the history books. As well as the pages devoted to it in Roskill’s 1950’s history, Volume 2, my book “Pedestal: the Convoy that Saved Malta” was first published in 1970 and has never been out of print since then, having been published in in five UK hardback editions, four UK paperback editions, three Malta paperback editions and an Italian hardback edition. It is still in print today. One well-known Web Site copied my book out word for word and received praise for “orginal research”….. Out of the history books? – Not quite! Peter C Smith Hello from Tucson, Arizona, USA! Between 2000 – 2005, I was researching American Pre-War Aircraft Carriers from January 1941 to July 1943. By December 1942, the USS Saratoga CV-3 was the only capable US carrier in the Pacific. The USS Enterprise CV-6 was a lame duck that was held in the South Pacific until some relief became available. During my research, I found a little reference about a British Carrier in the South Pacific. At the time of my research, there little information about HMS Victorious’ first foray to the Pacific. The Royal Navy and Fleet Air Arm web pages glossed over this period in HMS Victorious’ career. In Fahey’s “The Ships and Aircraft of the U.S. Fleet,” 2nd War Edition, there is an entry for HMS Victorious titled British Aircraft Carrier That Served With The U.S. Fleet. This entry has a picture of HMS Victorious with a caption “H.M.S Victorious – Operating with an American Air Group”. Here’s the description that follows: “Through months of 1943 when U. S. carrier strength was at its ebb the 23,000-ton Victorious served with the Pacific Fleet. In lat May 1941 she helped run down the Bismark in the Atlantic. A triple-screw 31knotter, she is 753-feet overall, carries 60 Fighters or 36 VF’s and 18 VTB’s. The British Fleet Air Arm operates many U.S. naval aircraft. Up to March 1944 we transferred 38 Escort Carriers to Britain under Lease-Lend. The Illustrious and Formidable, sisters of Victorious, overhauled at Norfolk in 1941.” Fahey, James C., “The Ships and Aircraft of the U.S. 
Fleet,” 2nd War Edition, Ships and Aircraft, New York, New York, 1944, pg 10. At first glance without knowing the history of HMS Victorious, this entry in Fahey’s book is minimal, uninformative at best. But once you get to know HMS Victorious’ history – wow. Here is some of my thoughts about Fahey’s entry. 1) In 1937, the U.S. Navy created a billet – Command, Air Group (CAG) 2) HMS Victorious had four squadrons with American Aircraft a) 888 NAS – 12 F4F-4B b) 896 NAS – 12 F4F-4B c) 898 NAS – 12 F4F-4B d) 832 NAS – 15 TBF/M-1 3) Fahey’s reference to “36 VF’s and 18 VTB’s” supports HMS Victorious squadron make up. 4) The primary lesson learned from the carrier battles in 1942 is Fighter Direction Control. 5) In 1943, some of the American Carrier Admirals advocated a fighter carrier and a strike carrier. 6) When HMS Victorious arrived in the South Pacific, the American officers were very interested in the FDC center. 7) For the Munda operation, USS Saratoga became the strike carrier and HMS Victorious became the fighter carrier. 8) In March 1943, the American carrier fighter squadron had 36 fighters. 9) Twenty-four American fighters were assigned to HMS Victorious for the Munda operation. 10) Fahey’s reference to “60 Fighter” supports the composition of HMS Victorious assigned squadron for the Munda operation. Call signs: 1) Button was the call sign for Henderson Field on Guadalcanal. 2) Robin may have been the call sign for HMS Victorious. 3) Fahey’s entry do not have any reference to USS Robin. Admiral King’s contemporary Admiral Pound(?) initiated the offer to send HMS Victorious. Admiral King was not a big fan of the Royal Navy. Here is a little history that predates and may have influenced HMS Victorious’ first foray to the Pacific. After departing Norfolk on 14 January 1942, Wasp headed north and touched at Argentia, Newfoundland, and Casco Bay, Maine, while operating in those northern climes. On 16 March, as part of Task Group (TG) 22.6, she headed back toward Norfolk. During the morning watch the next day, visibility lessened considerably; and, at 0550, Wasp’s bow plunged into Stack’s starboard side, punching a hole and completely flooding the destroyer’s number one fireroom. Stack was detached and proceeded to the Philadelphia Navy Yard, where her damage was repaired. Wasp, meanwhile, made port at Norfolk on the 21st without further incident. Shifting back to Casco Bay three days later, she sailed for the British Isles on 26 March, with Task Force (TF) 39 under the command of Rear Admiral John W. Wilcox, Jr., in Washington (BB-56). That force was to reinforce the Home Fleet of the Royal Navy. While en route, Rear Admiral Wilcox was swept overboard from the battleship and drowned. Although hampered by poor visibility conditions, Wasp planes took part in the search. Wilcox’ body was spotted an hour later, face down in the raging seas, but it was not recovered. Rear Admiral Robert C. Giffen, who flew his flag in Wichita, assumed command of TF-39. The American ships were met by a force based around the light cruiser HMS Edinburgh on 3 April. Those ships escorted them to Scapa Flow in the Orkney Islands. While the majority of TF 39 joined the British Home Fleet—being renumbered to TF 99 in the process— to cover convoys routed to North Russia, Wasp departed Scapa Flow on 9 April, bound for the Clyde estuary and Greenock, Scotland. On the following day, the carrier sailed up the Clyde River, past the John Brown Clydebank shipbuilding facilities. 
There, shipyard workers paused long enough from their labors to accord Wasp a tumultuous reception as she passed. Wasp’s impending mission was an important one—one upon which the fate of the island bastion of Malta hung. That key isle was then being pounded daily by German and Italian planes. The British, faced with the loss of air superiority over the island, requested the use of a carrier to transport planes that could wrest air superiority from the Axis aircraft. Wasp drew ferry duty once again. Having landed her torpedo planes and dive bombers, Wasp loaded 47 Supermarine “Spitfire” Mk. V fighter planes at the King George Dock, Glasgow, on 13 April, before she departed the Clyde estuary on the 14th. Her screen consisted of Force “W” of the Home Fleet— a group that included the battlecruiser HMS Renown and antiaircraft cruisers HMS Cairo and HMS Charbydis. Madison (DD-425) and Lang (DD-399) also served in Wasp’s screen. Wasp and her consorts passed through the Straits of Gibraltar under cover of the pre-dawn darkness on 19 April, avoiding the possibility of being discovered by Spanish or Axis agents. At 0400 on 20 April, Wasp spotted 11 Grumman F4F-4 Wildcat fighters on her deck and quickly launched them to form a combat air partol (CAP) over Force “W”. Meanwhile, the “Spitfires” were warming up their engines in the hangar deck spaces below. With the Wildcats patrolling overhead, the Spitfires were brought up singly on the after elevator, spotted for launch, and then given the go-ahead to take off. One by one, they roared down the deck and over the forward rounddown, until each Spitfire was aloft and winging toward Malta. When the launch was complete, Wasp retired toward England, having safely delivered her charges. Unfortunately, those “Spitfires,” which flew in to augment the dwindling numbers of “Gladiator” and “Hurricane” fighters, were tracked by efficient Axis intelligence and their arrival pinpointed. The unfortunate “Spitfires” were decimated by heavy German air raids which caught many planes on the ground. As a result, it looked as if the acute situation required a second ferry run to Malta. Accordingly, Prime Minister Winston Churchill, fearing that Malta would be “pounded to bits,” asked President Roosevelt to allow Wasp to have “another good sting.” Roosevelt responded in the affirmative. Rising to the occasion, Wasp loaded another contingent of “Spitfire” V’s and sailed for the Mediterranean on 3 May. Again, especially vigilant for submarines, Wasp proceeded unmolested. This time, the British aircraft carrier HMS Eagle accompaned Wasp; and she, too, carried a contingent of Spitfires bound for the “unsinkable aircraft carrier,” Malta. The two Allied flattops reached their launching points early on Saturday, 9 May, with Wasp steaming in column ahead of Eagle at a distance of 1,000 yards. At 0630, Wasp commenced launching planes—11 F4F-4’s of VF-71 to serve as CAP over the task force. The first “Spitfire” roared down the deck at 0643, piloted by Sergeant-Pilot Herrington, but lost power soon after takeoff and plunged into the sea. Both pilot and plane were lost. Undaunted by the loss of Herrington, the other planes flew off safely and formed up to fly to Malta. Misfortune, however, again seemed to dog the flight, when one pilot accidentally released his auxiliary fuel tank as he cinibed to 2,000 feet. He obviously could not make Malta, as the slippery tank fitted beneath the belly of the plane had increased the range of the plane markedly. 
With that gone, he had no chance of making the island. His only alternatives were to land back on board Wasp or to ditch and take his chances in the water. Sergeant-Pilot Smith chose the former. Wasp bent on full speed and recovered the plane at 0743. The “Spitfire” came to a stop just 15 feet from the forward edge of the flight deck, making what one Wasp sailor observed to be a “one wire” landing. With her vital errand completed, the carrier set sail for the British Isles while a German radio station broadcast the startling news that the American carrier had been sunk! Most in the Allied camp knew better, however; and, on 11 May, Prime Minister Churchill sent a witty message to the captain and ship’s company of Wasp: “Many thanks to you all for the timely help. Who said a Wasp couldn’t sting twice?” “Wasp,” http://www.history.navy.mil/danfs/w3/wasp-viii.htm. I am 92 years old – and I served in Victorious as a pilot in 896 RNAS from December 1942 and throughout the 1943 Pacific operations. I have written an account – and I also took several hundred unique photos which I have scanned to computer and disk. This material has been published – and I will gladly supply copies to anyone who wants the full and true story of USS “Robin”. I’m a bit slow nowadays and it might take me a while to send the stuff if several people contact me. John W Herbert California USA Hi John, I would love to have your full account and a copy of the disc, you can contact me at jackdusty(nospam)@att.net, remove the (nospam) Yours Aye Barry Sunny Ocala, FL 34482, USA http://www.barrylockyer.com/ John, I would dearly love to read your first hand account of this period. As mentioned previously in the thread, my farther served in RNAS 832 as a fitter and this was his first trip as a newly trained fitter at the age of 20. What I dont mention is that sadly we lost our father Eddie in 1966 when he was 44 and I was 8 years old. This period of his life is one of the few periods I can research further having lost him at such an early age. Dad remained with 832 throughout the rest of war until 832 was disbanded. Not sure of the protocol of getting in touch outside of this website but I would welcome the opportunity to be in contact with someone who would have been with Dad at that time. Kindest Regards My late father-in-law was a radio mech with 832 in the Pacific. We have some photographs, though obviously I don’t know who the people are. Well, one exception: I recognised Chester Nimitz in one! Probably with Captain Mackintosh. There are some fun shots of hula hula girls in grass skirts performing on the flight deck. John, I would also love to read your first hand account and to see your photos. My Dad died four years ago. Although he gave very few details of his years on “the Vic” (he served on her from when she was commissioned until he was discharged with a knee injury in 1946) he always said how much he loved that ship. He thought she was better than any other aircraft carrier. Hi John My Dad, William Dobbie served on the Victorious during its Pacific campaign and I to would like to read your account of the campaign. Cheers Colin Hi John, I would be very grateful if you could forward my your account/photo’s of your time on HMS Victorious/ USS Robin. My fathers friend Roy’s brother was an Anti Aircraft gunner on this carrier and survived the Arctic convoy, Malta run and Pacific tours only to succumb to Lukemia at the age of 23. 
Roy would love to find out as much as possible about what happened during this time but he has no access to internet, hence my involvement. Looking forward to hearing from you, Steve Lawson. Dear David: I can send you the account by air-mail. Please send me your address by e-mail to [email protected] Do you have the photo of 832 Squadron taken aboard Saratoga? It may include your Dad. regards John My grandfather’s name was Robert Watkins and was a pilot in World War II. I was searching through his books today and found his pilot logbooks that indicate he served on the what he called NMS Victorious in his logbooks. I do have a squadron photo that shows that he was on the deck of the victorious with his crewmates. All the crewmates are named and his logbooks indicate that he was flying the SBD dauntless off of the victorious. He was part of bomber squadron 13. The photo was dated July 20, 1943. For Matthew Watkins Dear Matthew If you will e-mail me your phone number – your time zone (I live in Southern California) – and a convenient time for a chat – I will happily phone you and tell you all about USS Robin (HMS Victorious)when she became part of the US Pacific fleet in 1943. I am ninety-two years of age and I was a British Wildcat fighter pilot at the time – and I have a clear memory of those momentous days – aided by my detailed Log Book and many photographs. I have just read through all of the above posted material and noted many errors – possibly because some of the writers simply weren’t there —- I was! I will happily phone anyone else who wants the true story John I can be contacted at:- [email protected] John Dear john My father served on the uss robin or should i say hms victorious.his rank was petty officer yet i cannot find him listed in the crew listings.i have seen old photos of him on board in his uniform with other officers and sailors .before he pssed Away i remember him telling me that the zamericans were warned of the asttach on pearl harbour before it happened but dismissed the information with a they would not dare attack . My father (Bob Procter) was an Observer with the 832 flying with John Fay until he was promoted to Flight Deck Commander. He told me that the 832 squadron used both the Victorious and the USS Saratoga in John Fay’s book (Pilot with the 832) “Golden Wings and Navy Blue” he describes the activities of the squadron on both the Saratoga and Victorious. As I believed it, the arrangement was one of convenience to both carriers I and my grandchildren are interested in any information about Bob, 832 Squadron and Victorious Fascinating stuff. Thank you. I’m so glad that not all Americans (including my ex-USN baby brother) are as ungracious about the Royal Navy and the British sacrifices of 1939, 1940 and all but one month of 1941 as ‘Strike Hold’. “The British lost more ships in the battle for control of the Mediterrenan than the USN lost in the Pacific.” That is new to me. Could anyone post details, including raw numbers of losses, breakdown of ship types, etc.? What an interesting web site to find. My dad Geof. was a navigator first on Swordfish and then Avengers on Victorious – then Robin, for the duration. (Pus time on other carriers and aircraft.) The story goes that when the Americans said Robin had to be dry the response was “no drinkies no fkyies” and that prevailed, (privately I was told) but the Brits did prefer their Avengers to carry USAF markings the RAF and Fleet Air Arm had lost too many planes to the USAF who mistook them for Japs. 
Shame the Stingbag didn’t make on the JV flypast in 2015 – admire my Dad for ever setting foot in a contraption like that. Hi John, seems our fathers may have served together in 832 RNAS? If your father served in the “Robin” trip and flew Avengers then its highly likely Dad, as a leading air mechanic,in that squadron would have maintained the Aircraft at some stage. 832 went on to operate from Illustrious and Begum. John Fahy’s book Golden Wings and Navy Blue has a couple of excellent chapters about 832’s exploits in 1943 before he moved on elsewhere. Regards Hi John and David My father’s war record indicates he was on the Victorious during its time in the Pacific. I have no photos of my Dad whilst he was on the Victorious. Would appreciate if you could send me any that you may have in your possession. My Dad was an air mechanic. Best Wishes Colin Dobbie Hi David Thanks for your quick response and kind offer of the photos. My email address is [email protected] Cheers Colin According to everything I read the ‘Vic’ was involved in operations to go after the Tirpitz, but did NOT sink her, she was sunk by Lancasters of 617 Squadron using tallboy bombs at Trondheim where she had been reduced to a floating AA defence. In reply to Ian Durrant above regarding the Tirpitz as an AA emplacement for the Krauts, there was a very recent Public Broadcasting System TV series which showed the Tirpitz in its mooring and had some other still photos plus a reasonable description of the results. If you are interested I will dig back for more. This is just to confirm your assertion. Thanks for an inspiring read. The tension between allies during WWII is never surprising given the nature of what was happening – a horrific experience where mistakes and accidents happened no matter the care taken. cheers, nick The deployment of HMS Victorious to the Pacific is a little-known but truly fascinating story of the Anglo-American alliance at its absolute best. Most postwar accounts seem to focus on the bitterness between Patton and Montgomery,with the latter (and by implication, the entire British war effort) as a special object of calumny. However, on many occasions, this one in particular, the alliance worked extremely well, and it’s sad that people don’t remember that. I was aware of the carrier’s deployment to the Pacific, which was sort of a payback for the British use of USS Wasp to fly Spitfires to Malta twice in 1942. One of the chief memories of the RAF fighter pilots in those endeavors was that the US Navy ships were “dry” but had plenty of ice cream. I will also add a plug for my e-book, “World War II Plus 75 — The Road to War, which is available on Amazon and SmashWords, which is the first volume of a day-by-day history of World War II. I hope people enjoy it. http://www.amazon.com/s/ref=nb_sb_ss_c_0_13?url=search-alias%3Ddigital-text&field-keywords=david+h+lippman&sprefix=david+h+lippm%2Caps%2C175 You wonder if the US tried to convince the Brits that to reallyfool Japanese intelligence, they should name it after a battle the US beat the British like other of their carriers – Yorktown, Saratoga, Lexington.For Colin Dobbie, Hello Colin, I have a couple of photos to hand but need your email address to supply. May be of interest but no promises. Best Hi David, Sorry about the delay in replying. I have not been at work for a long time. Would be very happy to receive the photos. 
[email protected] Cheers Colin “This new fleet carrier was being called the USS Robin, but it was not listed in the USN inventory, and it couldn’t be The USS Essex, which was nowhere near completion.” referring to May 1943. The Essex was launched on 31 July 1942, and completed initial construction in December 1942. I suspect most would call that near completion. It then went through extensive sea trials on the East coast to identify problems with the design, and test new tactics developed in response to the battles of 1941-1942. By May 1943 it transferred to the West coast. I suspect that at that time it was first outfitted with the F6F Hellcat, and may have had some improvements suggested by its sea trials. As an addition to her wartime battle honours, it is worth noting imho, that “USS Robin” went on to be the world’s first carrier to have an angled deck. My father sailed on the Victorious when she was a training ship, just prior to her conversion. I hadn’t realised that USN aircraft were based on the Vicky, so was fascinated to see the article and pictures. I always assumed she had her FAA flight wing for some reason. If anyone has any information about the Avengers that are in the photo I would be very grateful. It would be an interesting addition to my scale model collection of Victorious’ aircraft, and a fitting tribute to have a USN machine alongside the FAA aircraft to servicemen of both nations. The Victorious was not the first to have an angle deck, she was the first to have the current fully angled deck, she also had steam catapults, mirror landing sight and the 984 see over the horizon radar, when she commissioned in 1958 Paul, refer you to golden Wings and Navy Blue. An excellent account of 832 Squadron and the conversion to Avengers in1943. Another first for the Vic and 832? Limey, to confirm or counter! Vic was used to trial the first RN Sea Vixens equipped with air to air missiles. Post angled deck conversion I believe. David, We took the Sea Vixen onboard for trials and to replace the Sea Venoms in 1959. The Sea Vixen became the first swept-wing aircraft and the first British aircraft to be solely armed with missiles, rockets and bombs. It was armed with four de Havilland Firestreak air-to-air missiles, two Microcell unguided 2 inch (51 mm) rocket packs and had a capacity for four 500 lb (230 kg) bombs or two 1000 lb bombs. It was powered by two 50.0 kN (11,230 lbf) thrust Rolls-Royce Avon 208 turbojet engines; had a speed of 690 mph (1110 km/h) and a range of 600 miles (1000 km). The FAW 2 succeeded the FAW.1 and included many improvements. As well as the Firestreak it could carry the Red Top AAM (Air to Air Missile), four SNEB rocket pods and the air-to-ground Bullpup missile. An enlarged tail boom allowed for additional fuel tanks, in the “pinion” extensions above and before the wing leading edge, and there was an improved escape system and additional room for more electronic counter-measures equipment. However, the changes in aerodynamics meant that the 1000 lb bomb was no longer usable. The FAW.2 first flew in 1962 and entered service with front-line squadrons in 1964, with twenty-nine being built and a further sixty-seven FAW.1s being upgraded to FAW.2 standard. The FAW.1 began phasing out in 1966 Barry Barry, good information thanks. Used to park our syndicate beechcraft next to the last flying example when it was based at bournemouth. Shared the circuit with her many times, and watched the maintenance crews regularly. 
Hated the Red Bull paint scheme but needs must…. Understand she’s now back in Navy Blue. Hopefully we can learn from the Shoreham tragedy to a point where we can have these a/c flying again. Cheers. d I am so happy I stumbled upon this article. I have never had this come up in the many searches I have done while researching my father’s service as a CY in World War II more specifically the Pacific Fleet’s. I am excited to find a new avenue to search in hopes of learning something new. So wonderful of you to share this with others. Truly, I am sorry to post that Lt. John W. Herbert RNVR, Crossed the Bar January 2 2018 aged 95. John served on HMS Victorious AKA USS Robin, HMS Implacable, HMS Pursuer and 896, 881 Naval Air Squadrons.
true
true
true
null
2024-10-12 00:00:00
2011-02-16 00:00:00
null
null
armchairgeneral.com
USS Robin – The Victorious U.S. Carrier that Didn’t Exist | Armchair General Magazine
null
null
671,052
http://www.slimy.com/~steuard/teaching/tutorials/Lagrange.html
An Introduction to Lagrange Multipliers
null
Lagrange multipliers are used in multivariable calculus to find maxima and minima of a function subject to constraints (like "find the highest elevation along the given path" or "minimize the cost of materials for a box enclosing a given volume"). It's a useful technique, but all too often it is poorly taught and poorly understood. With luck, this overview will help to make the concept and its applications a bit clearer. Be warned: this page may not be what you're looking for! If you're looking for detailed proofs, I recommend consulting your favorite textbook on multivariable calculus: my focus here is on concepts, not mechanics. (Comes of being a physicist rather than a mathematician, I guess.) If you want to know about Lagrange multipliers in the calculus of variations, as often used in Lagrangian mechanics in physics, this page only discusses them briefly. Here's a basic outline of this discussion: - When are Lagrange multipliers useful? - A classic example: the "milkmaid problem" - Graphical inspiration for the method - The mathematics of Lagrange multipliers - A formal mathematical inspiration - Several constraints at once - The meaning of the multiplier (inspired by physics and economics) - Examples of Lagrange multipliers in action - Lagrange multipliers in the calculus of variations (often in physics) - An example: rolling without slipping ## When are Lagrange multipliers useful? One of the most common problems in calculus is that of finding maxima or minima (in general, "extrema") of a function, but it is often difficult to find a closed form for the function being extremized. Such difficulties often arise when one wishes to maximize or minimize a function subject to fixed outside conditions or constraints. The method of Lagrange multipliers is a powerful tool for solving this class of problems without the need to explicitly solve the conditions and use them to eliminate extra variables. Put more simply, it's usually not enough to ask, "How do I minimize the aluminum needed to make this can?" (The answer to that is clearly "Make a really, really small can!") You need to ask, "How do I minimize the aluminum *while making sure the can will hold 10 ounces of soup*?" Or similarly, "How do I maximize my factory's profit *given that I only have $15,000 to invest*?" Or to take a more sophisticated example, "How quickly will the roller coaster reach the ground *assuming it stays on the track*?" In general, Lagrange multipliers are useful when some of the variables in the simplest description of a problem are made redundant by the constraints. ## A classic example: the "milkmaid problem" To give a specific, intuitive illustration of this kind of problem, we will consider a classic example which I believe is known as the "Milkmaid problem". It can be phrased as follows: It's milking time at the farm, and the milkmaid has been sent to the field to get the day's milk. She's in a hurry to get back for a date with a handsome young goatherd, so she wants to finish her job as quickly as possible. However, before she can gather the milk, she has to rinse out her bucket in the nearby river. Just when she reaches point **M**, our heroine spots the cow, way down at point **C**. Because she is in a hurry, she wants to take the shortest possible path from where she is to the river and then to the cow. So what is the best point **P** on the riverbank for her to rinse the bucket? (To keep things simple, we assume that the field is flat and uniform and that all points on the river bank are equally good.) 
To put this into more mathematical terms, the milkmaid wants to find the point **P** for which the total distance f(**P**) is a minimum, where f(**P**) is the sum of the distance d(**M**,**P**) from **M** to **P** and the distance d(**P**,**C**) from **P** to **C** (a straight line is the shortest distance between two points across a flat field). It's not quite this simple, however: if that were the whole problem, then we could just choose **P** anywhere on the line between **M** and **C** and the milkmaid would walk straight to the cow. But we have to impose the *constraint* that **P** is a point on the riverbank.

To express that constraint in equations, we can describe the shape of the near bank of the river as a curve satisfying some function g(x,y) = 0. (For example, if the river were a parabola we might choose g(x,y) = y - x². If it were instead a circular lake of radius r, we might choose g(x,y) = x² + y² - r².) So formally, we must minimize the function f(**P**) = d(**M**,**P**) + d(**P**,**C**), subject to the constraint that g(**P**) = 0.

## Graphical inspiration for the method

Our first way of thinking about this problem can be obtained directly from the picture itself. We want to visualize how far the milkmaid could get for any fixed total distance f(**P**), and use that visualization to locate the optimal point **P**. If we only cared about the distance of **P** from **M**, we might picture this as a set of concentric circles (curves of constant distance d(**M**,**P**): 5 meters, 10 meters, 15 meters, etc.). As soon as one of those circles was big enough to touch the river, we'd recognize the point where it touched as the closest riverbank point to **M** (and its radius as the minimum distance).

Of course, for most problems the curves of constant f(**P**) are more complicated than a bunch of concentric circles! (Often *much* more complicated.) But our particular example is made simpler by an obscure fact from geometry: for every point **P** on a given ellipse, the total distance from one focus of the ellipse to **P** and then to the other focus is exactly the same. (You don't need to know where this fact comes from to understand the example! But you can see it work for yourself by drawing a near-perfect ellipse with the help of two nails, a pencil, and a loop of string.) In our problem, that means that the milkmaid could get to the cow by way of any point on a given ellipse in the same amount of time: the ellipses are curves of constant f(**P**). Therefore, to find the desired point **P** on the riverbank, we must simply find the smallest ellipse with **M** and **C** as foci that intersects the curve of the river. Just to be clear, only the "constant f(**P**)" property of the curves is really important; the fact that these curves are ellipses is just a lucky convenience (ellipses are easy to draw).

The image at right shows a sequence of ellipses of larger and larger size whose foci are **M** and **C**, ending with the one that is just *tangent* to the riverbank. This is a very significant word! It is obvious from the picture that the "perfect" ellipse and the river are truly tangential to each other at the ideal point **P**. More mathematically, this means that the normal vector to the ellipse is in the same direction as the normal vector to the riverbank.
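If you want to experiment with pictures like these yourself, a short plotting sketch is enough (this one assumes NumPy and Matplotlib, and uses made-up coordinates for **M** and **C** together with a parabolic riverbank g(x, y) = y - x²). It draws the curves of constant f(**P**) along with the riverbank, so you can watch the optimal level curve just touch the bank:

```python
# Level curves of f(P) = d(M, P) + d(P, C) plus a parabolic riverbank.
# M, C, and the riverbank shape are hypothetical choices, for illustration only.
import numpy as np
import matplotlib.pyplot as plt

M = np.array([-1.0, 3.0])   # milkmaid (made-up position)
C = np.array([2.0, 5.0])    # cow (made-up position)

x, y = np.meshgrid(np.linspace(-3, 4, 400), np.linspace(-2, 7, 400))
f = np.hypot(x - M[0], y - M[1]) + np.hypot(x - C[0], y - C[1])

plt.contour(x, y, f, levels=15)           # the "ellipses" of constant f
xs = np.linspace(-3, 4, 400)
plt.plot(xs, xs**2, "k", linewidth=2)     # the riverbank y = x**2
plt.plot(*M, "bo")                        # mark M
plt.plot(*C, "ro")                        # mark C
plt.gca().set_aspect("equal")
plt.show()
```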
A few minutes' thought about pictures like this will convince you that this fact is not specific to this problem: it is a general property whenever you have constraints. And that is the insight that leads us to the method of Lagrange multipliers.

## The mathematics of Lagrange multipliers

In multivariable calculus, the gradient of a function h (written ∇h) is a normal vector to a curve (in two dimensions) or a surface (in higher dimensions) on which h is constant: **n** = ∇h(**P**). The length of the normal vector doesn't matter: any constant multiple of ∇h(**P**) is also a normal vector. In our case, we have two functions whose normal vectors are parallel, so

∇f(**P**) = λ ∇g(**P**).

The unknown constant multiplier λ is necessary because the magnitudes of the two gradients may be different. (Remember, all we know is that their directions are the same.)

In D dimensions, we now have D+1 equations in D+1 unknowns. D of the unknowns are the coordinates of **P** (e.g. x, y, and z for D = 3), and the other is the new unknown constant λ. The equation for the gradients derived above is a vector equation, so it provides D equations of constraint. I once got stuck on an exam at this point: don't let it happen to you! The original constraint equation g(**P**) = 0 is the final equation in the system. Thus, in general, a unique solution exists. As in many maximum/minimum problems, cases do exist with multiple solutions. There can even be an infinite number of solutions if the constraints are particularly degenerate: imagine if the milkmaid and the cow were both already standing right at the bank of a straight river, for example. In many cases, the actual value of the Lagrange multiplier isn't interesting, but there are some situations in which it can give useful information (as discussed below).

That's it: that's all there is to Lagrange multipliers. Just set the gradient of the function you want to extremize equal to λ times the gradient of the constraint function. You'll get a vector's worth of (algebraic) equations, and together with the original constraint equation they determine the solution.

## A formal mathematical inspiration

There is another way to think of Lagrange multipliers that may be more helpful in some situations and that can provide a better way to remember the details of the technique (particularly with multiple constraints as described below). Once again, we start with a function f(**P**) that we wish to extremize, subject to the condition that g(**P**) = 0. Now, the usual way in which we extremize a function in multivariable calculus is to set ∇f(**P**) = 0. How can we put this condition together with the constraint that we have? One answer is to add a new variable λ to the problem, and to define a new function to extremize:

F(**P**, λ) = f(**P**) - λ g(**P**).

(Some references call this F "the Lagrangian function". I am not familiar with that usage, although it must be related to the somewhat similar "Lagrangian" used in advanced physics.)

We next set ∇F(**P**, λ) = 0, but keep in mind that the gradient is now D + 1 dimensional: one of its components is a partial derivative with respect to λ. If you set this new component of the gradient equal to zero, you get the constraint equation g(**P**) = 0. Meanwhile, the old components of the gradient treat λ as a constant, so it just pulls through. Thus, the other D equations are precisely the D equations found in the graphical approach above. As presented here, this is just a trick to help you reconstruct the equations you need.
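Here is a minimal numerical sketch of the milkmaid problem along these lines (the same made-up coordinates as in the plotting snippet above; SciPy is assumed). It minimizes f subject to g = 0 and then checks that ∇f and ∇g come out parallel at the solution; the common component-wise ratio is exactly the multiplier λ:

```python
# Minimize f(P) subject to g(P) = 0, then verify the Lagrange condition
# grad f = lambda * grad g at the solution. M, C, and the riverbank are
# the same hypothetical choices as in the plot above.
import numpy as np
from scipy.optimize import minimize

M = np.array([-1.0, 3.0])
C = np.array([2.0, 5.0])

def f(P):                        # total distance M -> P -> C
    return np.linalg.norm(P - M) + np.linalg.norm(P - C)

def g(P):                        # riverbank constraint: y - x**2 = 0
    return P[1] - P[0] ** 2

res = minimize(f, x0=np.array([0.0, 1.0]),
               constraints=[{"type": "eq", "fun": g}])  # SciPy picks SLSQP when constraints are given
P_best = res.x

def num_grad(h, P, eps=1e-6):    # simple central-difference gradient
    return np.array([(h(P + eps * e) - h(P - eps * e)) / (2 * eps)
                     for e in np.eye(len(P))])

gf, gg = num_grad(f, P_best), num_grad(g, P_best)
print("P* =", P_best, " f(P*) =", round(res.fun, 4))
print("grad f / grad g =", gf / gg)   # the two components should (nearly) agree; that ratio is lambda
```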
However, for those who go on to use Lagrange multipliers in the calculus of variations, this formal approach is generally the most useful one. I suspect that it is in fact very fundamental; my comments about the meaning of the multiplier below are a step toward exploring it in more depth, but I have never spent the time to work out the details.

## Several constraints at once

If you have more than one constraint, all you need to do is to replace the right hand side of the equation with the sum of the gradients of each constraint function, each with its own (different!) Lagrange multiplier. This is usually only relevant in at least three dimensions (since two constraints in two dimensions generally intersect at isolated points). Again, it is easy to understand this graphically. [My thanks to Eric Ojard for suggesting this approach].

Consider the example shown at right: the solution is constrained to lie on the brown plane (as an equation, "g(**P**) = 0") and also to lie on the purple ellipsoid ("h(**P**) = 0"). For both to be true, the solution must lie on the black ellipse where the two intersect. I have drawn several normal vectors to each constraint surface along the intersection. The important observation is that both normal vectors are perpendicular to the intersection curve at each point. In fact, any vector perpendicular to it can be written as a linear combination of the two normal vectors. (Assuming the two are linearly independent! If not, the two constraints may already give a specific solution: in our example, this would happen if the plane constraint was exactly tangent to the ellipsoid constraint at a single point.)

The significance of this becomes clear when we consider a three dimensional analogue of the milkmaid problem. The pink ellipsoids at right all have the same two foci (which are faintly visible as black dots in the middle), and represent surfaces of constant total distance for travel from one focus to the surface and back to the other. As in two dimensions, the optimal ellipsoid is *tangent* to the constraint curve, and consequently its normal vector is perpendicular to the combined constraint (as shown). Thus, the normal vector can be written as a linear combination of the normal vectors of the two constraint surfaces. In equations, this statement reads

∇f(**P**) = λ ∇g(**P**) + μ ∇h(**P**),

just as described above. The generalization to more constraints and higher dimensions is exactly the same.

## The meaning of the multiplier

As a final note, I'll say a few words about what the Lagrange multiplier "means", in ways inspired by both physics and economics. In our mostly geometrical discussion so far, λ was just an artificial variable that lets us compare the directions of the gradients without worrying about their magnitudes. But in cases where the function f(**P**) and the constraint g(**P**) have specific meanings, the Lagrange multiplier often has an identifiable significance as well.

One example of this is inspired by the physics of forces and potential energy. In the formal approach based on the combined "Lagrangian function" F(**P**, λ) described two sections above, the constraint function g(**P**) can be thought of as "competing" with the desired function f(**P**) to "pull" the point **P** to its minimum or maximum. The Lagrange multiplier λ can be thought of as a measure of how hard g(**P**) has to pull in order to make those "forces" balance out on the constraint surface.
(This generalizes naturally to multiple constraints, which typically "pull" in different directions.) And in fact, that word "forces" is very significant: in physics based on Lagrange multipliers in the calculus of variations (as described below) this analogy turns out to be literally true: there, λ *is* the force of constraint.

The Lagrange multiplier λ has meaning in economics as well. If you're maximizing profit subject to a limited resource, λ is that resource's marginal value (sometimes called the "shadow price" of the resource). Specifically, the value of the Lagrange multiplier is the rate at which the optimal value of the function f(**P**) changes if you change the constraint. I don't have an immediate intuitive explanation for why this is true, but the steps of the formal proof are at least reasonably illuminating.

To demonstrate this result mathematically (following the approach of this economics-inspired tutorial by Martin Osborne), write the constraint in the form "g(**P**) = g(x,y) = c" for some constant c. (This is mathematically equivalent to our usual g(**P**) = 0, but allows us to easily describe a whole family of constraints. Also, I am writing this in terms of just two coordinates x and y for clarity, but the generalization to more is straightforward.) For any given value of c, we can use Lagrange multipliers to find the optimal value of f(**P**) and the point where it occurs. Call that optimal value f0, occurring at coordinates (x0, y0) and with Lagrange multiplier λ0. The answers we get will all depend on what value we used for c in the constraint, so we can think of these as functions of c: f0(c), x0(c), and y0(c).

To find how the optimal value changes when you change the constraint, just take the derivative: df0/dc. Of course, f(**P**) only depends on c because the optimal coordinates (x0, y0) depend on c: we could write it as f0(x0(c),y0(c)). So we have to use the (multivariable) chain rule:

df0/dc = (∂f0/∂x0)(dx0/dc) + (∂f0/∂y0)(dy0/dc) = ∇f0 ∙ d**x**0/dc

In the final step, I've suggestively written this as a dot product between the gradient of f0 and the derivative of the coordinate vector. So here's the clever trick: use the Lagrange multiplier equation to substitute ∇f = λ∇g:

df0/dc = λ0 ∇g0 ∙ d**x**0/dc = λ0 dg0/dc

But the constraint function is *always* equal to c, so dg0/dc = 1. Thus, df0/dc = λ0. That is, the Lagrange multiplier is the rate of change of the optimal value with respect to changes in the constraint.

This is a powerful result, but be careful when using it! In particular, you have to make sure that your constraint function is written in just the right way. You would get the exact same optimal value f0 whether you wrote "g(x,y) = x+y = 0" or "g(x,y) = -2x-2y = 0", but the resulting Lagrange multipliers would be quite different. I haven't studied economic applications of Lagrange multipliers myself, so if that is your interest you may want to look for other discussions from that perspective once you understand the basic idea. (The tutorial that inspired my discussion here seems reasonable.) The best way to understand is to try working examples yourself; you might appreciate this problem set introducing Lagrange multipliers in economics both for practice and to develop intuition.

## Examples of Lagrange multipliers in action

### A box of minimal surface area

What shape should a rectangular box with a specific volume (in three dimensions) be in order to minimize its surface area?
(Questions like this are very important for businesses that want to save money on packing materials.) Some people may be able to guess the answer intuitively, but we can prove it using Lagrange multipliers. Let the lengths of the box's edges be x, y, and z. Then the constraint of constant volume is simply g(x,y,z) = xyz - V = 0, and the function to minimize is f(x,y,z) = 2(xy+xz+yz). The method is straightforward to apply: setting ∇f = λ ∇g component by component gives

<2(y+z), 2(x+z), 2(x+y)> = λ <yz, xz, xy>.

(The angle bracket notation <a,b,c> is one conventional way to denote a vector.) Now just solve those three equations; the solution is x = y = z = 4/λ. We could eliminate λ from the problem by using xyz = V, but we don't need to: it is already clear that the optimal shape is a cube.

### The closest approach of a line to a point

This example isn't the perfect illustration of where Lagrange multipliers are useful, since it is fairly easy to solve without them and not all that convenient to solve with them. But it's a very simple idea, and because of a careless mistake on my part it was the first example that I applied the technique to. Here's the story...

When I first took multivariable calculus (and before we learned about Lagrange multipliers), my teacher showed the example of finding the point **P** = <x,y> on a line (y = m x + b) that was closest to a given point **Q** = <x0,y0>. The function to minimize is of course

d(**P**, **Q**) = sqrt[(x-x0)² + (y-y0)²].

(Here, "sqrt" means "square root", of course; that's hard to draw in plain text.) The teacher went through the problem on the board in the most direct way (I'll explain it later), but it was taking him a while and I was a little bored, so I idly started working the problem myself while he talked. I just leapt right in and set ∇d(x,y) = 0, so

<x-x0, y-y0> / sqrt[(x-x0)² + (y-y0)²] = <0,0>,

and thus x = x0 and y = y0. My mistake here is obvious, so I won't blame you for having a laugh at my expense: I forgot to impose the constraint that <x,y> be on the line! (In my defense, I wasn't really focusing on what I was doing, since I was listening to lecture at the same time.) I felt a little silly, but I didn't think much more about it.

Happily, we learned about Lagrange multipliers the very next week, and I immediately saw that my mistake had been a perfect introduction to the technique. We write the equation of the line as g(x,y) = y - m x - b = 0, so ∇g(x,y) = <-m,1>. So we just set the two gradients equal (up to the usual factor of λ), giving

<x-x0, y-y0> / sqrt[(x-x0)² + (y-y0)²] = λ<-m,1>.

The second component of this equation is just an equation for λ, so we can substitute that value for λ into the first component equation. The denominators are the same and cancel, leaving just (x-x0) = -m(y-y0). Finally, we substitute y = m x+b, giving x-x0 = -m² x - m b + m y0, so we come to the final answer: x = (x0 + m y0 - m b) / (m² + 1). (And thus y = (m x0 + m² y0 + b)/(m² + 1).)

So what did my teacher actually do? He used the equation of the line to substitute for y in terms of x in d(**P**,**Q**), which left us with an "easy" single-variable function to deal with... but a rather complicated one:

d(**P**, **Q**) = sqrt[(x - x0)² + (m x + b - y0)²]

To solve the problem from this point, you take the derivative and set it equal to zero as usual. It's a bit of a pain, since the function is a mess, but the answer is x = (x0 + m y0 - m b)/(m² + 1). That's exactly what we got earlier, so both methods seem to work.
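If you like to check this sort of algebra by machine, here is a short symbolic sketch of the box example (it assumes SymPy and uses a concrete volume V = 1000 to keep the output tidy; the positive-symbol assumptions just discard spurious roots). It solves the Lagrange system directly, and then illustrates the "meaning of the multiplier" result from above: for the optimal cube the minimal area is A(V) = 6 V^(2/3), so dA/dV at V = 1000 should match λ.

```python
# Symbolic sketch of the minimal-surface-area box (SymPy assumed; V = 1000 chosen
# for illustration). Solve grad f = lambda * grad g together with the constraint.
import sympy as sp

x, y, z, lam = sp.symbols("x y z lam", positive=True)
V = 1000
f = 2 * (x * y + x * z + y * z)     # surface area to minimize
g = x * y * z - V                   # volume constraint, g = 0

eqs = [sp.Eq(sp.diff(f, v), lam * sp.diff(g, v)) for v in (x, y, z)]
eqs.append(sp.Eq(g, 0))
sol = sp.solve(eqs, [x, y, z, lam], dict=True)[0]
print(sol)                          # expect x = y = z = 10 and lam = 2/5

# "Meaning of the multiplier" check: for the optimal cube, A(V) = 6*V**(2/3),
# so dA/dV evaluated at V = 1000 should equal lam.
Vsym = sp.Symbol("V", positive=True)
print(sp.diff(6 * Vsym ** sp.Rational(2, 3), Vsym).subs(Vsym, V))   # expect 2/5
```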
In this case, the second method may be a little faster (though I didn't show all of the work), but in more complicated problems Lagrange multipliers are often much easier than the direct approach.

Finally, I've recorded a couple of video examples of solving problems using Lagrange multipliers. They're far from perfect, and they're aimed more at problem solving than explaining the concepts, but some folks may be interested. The first does try to explain the concepts a little in the context of finding the closest point to the origin on a parabola. Another is purely problem solving, ostensibly about finding the nearest water to a house. I've also recorded a video example involving a (somewhat artificial) commodities trading firm with five parameters and two constraints.

## Lagrange multipliers in the calculus of variations (often in physics)

This section will be brief, in part because most readers have probably never heard of the calculus of variations. Many people first see this idea in advanced physics classes that cover Lagrangian mechanics, and that will be the perspective taken here (in particular, I will use variable names inspired by physics). If you don't already know the basics of this subject (specifically, the Euler-Lagrange equations), you'll probably want to just skip this section.

The calculus of variations is essentially an extension of calculus to the case where the basic variables are not simple numbers **x**i (which can be thought of as a position) but functions **x**i(t) (which in physics corresponds to a position that changes in time). Rather than seeking the numbers **x**i that extremize a function f(**x**i), we seek the *functions* **x**i(t) that extremize the integral (dt) of a function L[**x**i(t), **x**i'(t), t], where **x**i'(t) are the time derivatives of **x**i(t). (The reason we have to integrate first is to get an ordinary number out: we know what "maximum" and "minimum" mean for numbers, but there could be any number of definitions of those concepts for functions.) In most cases, we integrate between fixed values t0 and t1, and we hold the values **x**i(t0) and **x**i(t1) fixed. (In physics, that means that the initial and final positions are held constant, and we're interested in finding the "best" path to get between them; L defines what we mean by "best".) The solutions to this problem can be shown to satisfy the Euler-Lagrange equations (I have suppressed the "(t)" in the functions **x**i(t)):

∂L/∂**x**i - d/dt ( ∂L/∂**x**i' ) = 0.

(Note that the derivative d/dt is a total derivative, while the derivatives with respect to **x**i and **x**i' are "partials", at least formally.)

Imposing constraints on this process is often essential. In physics, it is common for an object to be constrained on some track or surface, or for various coordinates to be related (like position and angle when a wheel rolls without slipping, as discussed below). To do this, we follow a simple generalization of the procedure we used in ordinary calculus. First, we write the constraint as a function set equal to zero: g(**x**i, t) = 0. (Constraints that necessarily involve the derivatives of **x**i often cannot be solved.) And second, we add a term to the function L that is multiplied by a new function λ(t):

Lλ[**x**i, **x**i', λ, t] = L[**x**i, **x**i', t] + λ(t) g(**x**i, t).

From here, we proceed exactly as you would expect: λ(t) is treated as another coordinate function, just as λ was treated as an additional coordinate in ordinary calculus.
The Euler-Lagrange equations are then written as

∂L/∂**x**i - d/dt (∂L/∂**x**i') + λ(t) (∂g/∂**x**i) = 0.

This can be generalized to the case of multiple constraints precisely as before, by introducing additional Lagrange multiplier functions like λ. There are further generalizations possible to cases where the constraint(s) are linear combinations of derivatives of the coordinates (rather than the coordinates themselves), but I won't go into that much detail here.

As mentioned in the calculus section, the meaning of the Lagrange multiplier function in this case is surprisingly well-defined and can be quite useful. It turns out that **Q**i = λ(t) (∂g/∂**x**i) is precisely the force required to impose the constraint g(**x**i, t) (in the "direction" of **x**i). This is fairly natural: the constraint term (λ g) added to the Lagrangian plays the same role as a (negative) potential energy -Vconstraint, so we can compute the resulting force as ∇(-Vconstr) = λ ∇g in something reminiscent of the usual way. Thus, for example, Lagrange multipliers can be used to calculate the force you would feel while riding a roller coaster. If you want this information, Lagrange multipliers are one of the best ways to get it.

## An example: rolling without slipping

One of the simplest applications of Lagrange multipliers in the calculus of variations is a ball (or other round object) rolling down a slope without slipping in one dimension. (As usual, a problem this simple can probably be solved just as easily by other means, but it still illustrates the idea.) Let the ball have mass M, moment of inertia I, and radius R, and let the angle of the slope be α. We choose the coordinate x to point up the slope and the coordinate θ to show rotation in the direction that would naturally go in that same direction, so that the "rolling without slipping" condition is x = R θ. Both x and θ are functions of time, x(t) and θ(t), but for clarity I will not write that dependence explicitly.

In general, the kinetic energy T of an object undergoing both translational and rotational motion is T = ½ M (x')² + ½ I (θ')². (As before, the prime (') denotes a time derivative, so this is a function of velocity and angular velocity.) Meanwhile, the potential energy V of the ball can be written as V = M g h = M g x sin α. Thus, the Lagrangian for the system is

L = ½ M (x')² + ½ I (θ')² - M g x sin α.

If we solved the Euler-Lagrange equations for this Lagrangian as it stands, we would find that x(t) described the ball sliding down the slope with constant acceleration in the absence of friction while θ(t) described the ball rotating with constant angular velocity: the rotational motion and translational motion are completely independent (and no torques act on the system). To impose the condition of rolling without slipping, we use a Lagrange multiplier function λ(t) to force the constraint function G(x,θ,t) = x - R θ to vanish:

Lλ = ½ M (x')² + ½ I (θ')² - M g x sin α + λ (x - R θ).

(That is, λ is multiplying the constraint function, as usual.) We now find the Euler-Lagrange equations for each of the *three* functions of time: x, θ, and λ. Respectively, the results are:

M x'' = λ - M g sin α, I θ'' + R λ = 0, and x - R θ = 0.

It is then straightforward to solve for the three "unknowns" in these equations:

x'' = -g sin α M R² / (M R² + I)
θ'' = -g sin α M R / (M R² + I)
λ = M g sin α I / (M R² + I).

The first two equations give the constant acceleration and angular acceleration experienced as the ball rolls down the slope.
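As a quick cross-check of those three results, here is a short Python/sympy sketch (the helper name and symbols are my own, not from the original page) that builds the constrained Lagrangian, forms the Euler-Lagrange equations, and solves the resulting linear system:

```python
# Symbolic check of the rolling-without-slipping example (requires sympy).
import sympy as sp

t = sp.symbols("t")
M, I, R, g, alpha = sp.symbols("M I R g alpha", positive=True)
lam = sp.symbols("lam")               # the multiplier lambda(t), treated algebraically
x = sp.Function("x")(t)
theta = sp.Function("theta")(t)

# Constrained Lagrangian: L_lambda = T - V + lam*(x - R*theta)
L = (sp.Rational(1, 2) * M * sp.diff(x, t)**2
     + sp.Rational(1, 2) * I * sp.diff(theta, t)**2
     - M * g * x * sp.sin(alpha)
     + lam * (x - R * theta))

def euler_lagrange(L, q):
    # dL/dq - d/dt(dL/dq') for a generalized coordinate q(t)
    return sp.diff(L, q) - sp.diff(L, sp.diff(q, t)).diff(t)

# Two Euler-Lagrange equations plus the constraint differentiated twice (x'' = R*theta'').
eqs = [
    euler_lagrange(L, x),
    euler_lagrange(L, theta),
    sp.diff(x - R * theta, t, 2),
]

# Replace x'' and theta'' by plain symbols so this becomes a simple linear system.
xdd, thdd = sp.symbols("xdd thdd")
eqs = [e.subs({sp.diff(x, t, 2): xdd, sp.diff(theta, t, 2): thdd}) for e in eqs]
sol = sp.solve(eqs, [xdd, thdd, lam], dict=True)[0]

print(sp.simplify(sol[xdd]))    # expect -g*sin(alpha)*M*R**2 / (M*R**2 + I)
print(sp.simplify(sol[thdd]))   # expect -g*sin(alpha)*M*R / (M*R**2 + I)
print(sp.simplify(sol[lam]))    # expect  M*g*sin(alpha)*I / (M*R**2 + I)
```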
And the equation for λ is used in finding the forces that implement the constraint in the problem. Specifically, these are the force and torque due to friction felt by the ball:

Fx = λ ∂G/∂x = M g sin α I / (M R² + I)
τ = λ ∂G/∂θ = -R M g sin α I / (M R² + I).

Looking at these results, we can see that the force of friction is positive: it points in the +x direction (up the hill) which means that it is slowing down the translational motion. But the torque is negative: it acts in the direction corresponding to rolling down the hill, which means that the speed of rotation increases as the ball rolls down. That is exactly what we would expect!

As a final note, you may be worried that we would get different answers for the forces of constraint if we just normalized the constraint function G(x,θ,t) differently (for example, if we set G = 2 x - 2 R θ). Happily, that will not end up affecting the final answers at all. A change in normalization for G will lead to a different answer for λ (e.g. exactly half of what we found above), but the products of λ and the derivatives of G will remain the same.

## Final thoughts

This page certainly isn't a *complete* explanation of Lagrange multipliers, but I hope that it has at least clarified the basic idea a little bit. I'm always glad to hear constructive criticism and positive feedback, so feel free to write to me with your comments. (My thanks to the many people whose comments have already helped me to improve this presentation.) I hope that I have helped to make this extremely useful technique make more sense. Best wishes using Lagrange multipliers in the future!

Any questions or comments? Write to me: [email protected] Copyright © 2004-15 by Steuard Jensen. Thanks to Adam Marshall, Stan Brown, Eric Ojard, and many others for helpful suggestions!
true
true
true
null
2024-10-12 00:00:00
2004-01-01 00:00:00
null
null
null
null
null
null
14,323,336
https://www.cisco.com/c/en/us/products/collateral/data-center-analytics/tetration-analytics/white-paper-c11-737370.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,241,242
http://arstechnica.com/security/2013/08/seemingly-benign-jekyll-app-passes-apple-review-then-becomes-evil/
Seemingly benign “Jekyll” app passes Apple review, then becomes “evil”
Dan Goodin
Computer scientists say they found a way to sneak malicious programs into Apple's exclusive app store without being detected by the mandatory review process that's supposed to automatically flag such apps. The researchers from the Georgia Institute of Technology used the technique to create what appeared to be a harmless app that Apple reviewers accepted into the iOS app store. They were later able to update the app to carry out a variety of malicious actions without triggering any security alarms. The app, which the researchers titled "Jekyll," worked by taking the binary code that had already been digitally signed by Apple and rearranging it in a way that gave it new and malicious behaviors. "Our method allows attackers to reliably hide malicious behavior that would otherwise get their app rejected by the Apple review process," the researchers wrote in a paper titled *Jekyll on iOS: When Benign Apps Become Evil*. "Once the app passes the review and is installed on an end user's device, it can be instructed to carry out the intended attacks. The key idea is to make the apps remotely exploitable and subsequently introduce malicious control flows by rearranging signed code. Since the new control flows do not exist during the app review process, such apps, namely Jekyll apps, can stay undetected when reviewed and easily obtain Apple’s approval."
true
true
true
App that sneaks into App store can be used to steal data, take stealth photos.
2024-10-12 00:00:00
2013-08-20 00:00:00
https://cdn.arstechnica.…08/dr-jekyll.jpg
article
arstechnica.com
Ars Technica
null
null
28,228,985
https://smackeyacky.blogspot.com/2021/08/debian-day-4-aws-cli-net-and-lambda.html
Debian day 4: AWS CLI, .NET and Lambda functions
David Rubie
### Debian day 4: AWS CLI, .NET and Lambda functions # How did I get so dependent on the GUI? Visual Studio with AWS Explorer on Windows definitely makes you lazy. I mean, it's fantastic to be able to build and deploy something like a Lambda function by right-clicking on the project, but sometimes it's better to understand what is going on under the hood. Despite which, I am now pretty committed to switching to Visual Studio Code now that I am able to build everything in a similar way. My major concern at this point was making sure that when working on Windows (which will still happen) that the build/test/deploy sequence is almost exactly the same as on Debian. I thought that perhaps I would have to pull a pile of stupid stunts like this with wacky hybrid scripts but then I remembered WSL2! Windows Subsystem for Linux (version 2) is a proper Linux kernel running alongside your Windows system. I have been using it quite a lot when dealing with some hardware that we use, the Cassia Networks X1000 Bluetooth gateway. At SmartShepherd we are often faced with 500 or more Bluetooth LE devices we need to connect to in a short period of time and it's just not possible using the bluetooth stack on Android (or pretty much anything). I have a small server application that runs on the Cassia (they use Ubuntu containers) and the environment hosts SSH sessions and secure copy for testing new versions of this. Hence the Debian install of WSL2 was getting a workout. However, because it's already installed I might as well go the whole hog and try to replicate the .NET environment I have on Chungus the Proliant here on BUSTER3, the Acer Nitro. ## Documentation and Installation In no particular order: Install .NET 3.1 in your local directory. I use wget, you could use curl or just download it via your browser and copy it somewhere. ` wget https://dot.net/v1/dotnet-install.sh` chmod 755 dotnet-install.sh ./dotnet-install.sh -c 3.1 Once you have .NET installed (it will end up in a hidden directory `.dotnet` in your home directory, you then edit the .bashrc to have the following last three lines: ``` export PATH="$PATH:$HOME/.dotnet:$HOME/.dotnet/tools" ``` export DOTNET_ROOT=$HOME/.dotnet export DOTNET_MULTILEVEL_LOOKUP=0 Not really sure what that last one does, but the tools seem to require it if you have .NET installed in your user directory rather than globally. I prefer it installed just for me, not because there are 100 other users on my machine, more force of habit from a time when I did Unix system administration on large, multi-user machines and we always defaulted to local installs of unknown stuff until we trusted it. My mentor back then insisted that the root user remain unsullied and untouched, factory fresh, so that you were never locked out of your multi hundred thousand dollar machine. Seems a bit silly on something like the Proliant that owes me about $150 but old habits die hard sometimes. Installing the Amazon CLI is straightforward although you might have to `sudo apt-get install zip unzip` , I didn't on Chungus, but I did have to on Buster3. ``` wget https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip ``` unzip awscliv2.zip sudo ./aws/install aws --version Lastly, the AWS tools for dotnet and the dotnet lambda templates. All of this basically allows you access to the things you would normally right-click for in Visual Studio. ``` dotnet new --install Amazon.Lambda.Templates::5.4.0 ``` dotnet tool install -g Amazon.Lambda.Tools ## Well, yeah, but does it work? 
The Amazon.Lambda.Templates are there for creating new Lambda functions; they create a file aws-lambda-tools-defaults.json in your .NET project root that you use to fill out things like the region where your lambda function is going, security stuff etc. that I won't go into here, because AWS have better documentation for it. Suffice to say that it does work, so (for example) if you want to package up your .NET 3.1 Lambda function as a zip file ready to deploy, you head to the root of your project and do this:

```
dotnet lambda package
```

and sure enough, it creates the zip file. `dotnet lambda deploy` will replace the existing Lambda. Since I did all the above in parallel on both Chungus and Buster3, the environments are as close to identical as I can make them and I can now build and deploy my Lambda functions. Next stop: Docker deployment to ECS.

### Footnote

The reason Buster3 got its name isn't terribly interesting, but I'd like to record it for posterity. When I resigned to start SmartShepherd, I got home and realised I didn't actually have a laptop to work on. Work had provided a nice little HP Spectre but when I priced one up my jaw hit the floor. They are beautiful machines but too expensive for a struggling startup. One of my kids had a cast-off laptop they used for noodling on which was a relatively ancient Toshiba C650 with a broken hinge. It was broken so I called it BUSTER. Its specifications are unimpressive but it was enough to be able to run TI's Code Composer Studio to the point where we had an MVP. As soon as I had a few dollars I replaced it with a new Acer Aspire E5-553 called BUSTER2 because why not. Being stupid, and wanting a bargain, I didn't look too closely at the specifications: it was cheap, had a dedicated graphics card and had 16Gb of memory. Later, when I needed to play with virtualisation for Docker, it turned out the stupid thing had all virtualisation disabled in the BIOS and also in the firmware! It took a gut-wrenching play with the firmware setup to get it working. I had sworn off Acer machines after that, although BUSTER2 earned its name. Buster2 accompanied us to Shenzhen to be part of HAX10 and mostly did the job. He still carries the stickers from that journey. BUSTER3 was something of an accident. We were travelling (pre covid of course) and BUSTER2 was giving me a lot of grief: the keyboard was playing up and Windows 10 Pro had ground to a near halt. I spied a 2nd hand Acer Nitro 5 in the window of a well known tech recycling chain and the price was right. Despite having publicly sworn off Acer machines, the specs were too good to pass up. It's been fantastic, highly recommended if you need something affordable but high performance for dev work.
true
true
true
How did I get so dependent on the GUI? Visual Studio with AWS Explorer on Windows definitely makes you lazy. I mean, it's fantastic to be ...
2024-10-12 00:00:00
2021-08-19 00:00:00
https://blogger.googleusercontent.com/img/a/AVvXsEjWtYggKMTOUmsTlY1s9vxiFbCwr8Ao0iLFnV_yRgu-a7NtM3G5toHlROCDObORfHri94M7T_6Vm3fDlBEYdsTRLX1DLWbwxZByCgRekrrlRfA-q-KDiudBD2m--VP6ZgdeOD_-hsOmzXlj1m1lI3Kwk5Uar2oPPDP7XcaRyP-8bDi4FHCrqZDQ2xdc9w=w1200-h630-p-k-no-nu
blogposting
blogspot.com
Blogger
null
null
30,625,679
https://www.nytimes.com/2022/03/09/nyregion/marijuana-sellers-licenses-hochul.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
4,164,941
http://blogs.wsj.com/marketbeat/2012/06/26/mark-cuban-high-frequency-traders-are-the-ultimate-hackers/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
15,232,928
https://get.stockpile.com/apple-event-2017/?utm_source=stockpile&utm_medium=hackernews&utm_campaign=stklg-apple
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,190,536
http://angular-ui.github.io/ng-grid/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
4,212,066
http://hidemyass.com/vpn/r6551
HMA VPN service | Unblock Websites with HMA
null
# Get more from your internet with HMA VPN Access blocked sites with the only top-rated VPN provider that covers the world. Limited-time offer. ### Unlock the world, stream its stuff The internet is big, but you’ll be stuck only seeing a fraction of it without a VPN. Securely and quickly access streaming sites from over 190+ countries. ### 20 Gbps servers speed### XXX servers optimized for streaming### 0 data limits### Online privacy you can count on You have the right to keep your online life as private as possible from hackers, your ISP, and even your government. Any VPN can do that, but HMA does it best. ### 2.73 * 1061 years To hack our security ### 256-bit AES encryption### 0 LOGS### The biggest VPN network in the world We’re not just talking about our egos: with servers in nearly every country in the globe, there’s nothing you can’t access with HMA VPN. ### countries available ### Locations to choose from ### servers ## Designed with you in mind Turning on your VPN shouldn't be rocket science. Our simple, easy-to-understand UI ensures you'll be private in a few quick clicks. ### All your devices with one subscription Install HMA VPN on as many Windows, macOS, iOS, and Android devices as you want, and keep multiple connected at once: 5 normally, or 10 with the Family Plan. Download HMA VPN## Features in HMA VPN - **The biggest VPN network**Pick from 290+ locations around the world. - **No Log Policy**Stay private from everyone, even us. - **Blazing fast servers**Enjoy servers that operate at 20 Gbps. - **OpenVPN protocol**Ensure better speeds and transparency. - **Lightning Connect**Find the fastest server in one click. - **IP Shuffle**Randomize your IP address to avoid trackers. - **Kill Switch**Blocks the internet if you lose your VPN connection. - **Smart Kill Switch**Automatically launch HMA to protect your apps. - **No Log DNS**Rely on our no-log DNS to avoid leaving a trail. - **Leak Shield**Stop IPv4, IPv6, and DNS leaks when you browse. ### We’ve learned a few things in 19+ years For example, we’ve learned that no matter how great your VPN is, it means nothing if your customers don’t love — or trust — you. Which is why we’ll do everything in our power to earn both, with our thirty-day money-back guarantee, world-class support team, and constant updates. ## Try HMA VPN risk-free with our 30-day money-back guarantee - 1100+ servers in 290+ locations worldwide - 20 Gbps connection speeds - Servers for streaming & P2P connections - Lightning Connect, Split-Tunneling, Kill Switch, & more! - For - Unlimited installs with 5 simultaneous connections **30-day money-back guarantee** ## A few words about us We found [HMA]’s encryption methods and security features top-notch. The user interface is one of the easiest to use and we chuckled at the donkey animation going stealth every time. Contributor, Reviewed Some of this VPN’s biggest strengths are its user-friendly interface, impressive speeds, and ability to bypass geoblocks (even Netflix’s!). Technology Researcher, VPNMentor The list of available server locations offered by HMA is particularly noteworthy because it covers regions often ignored by other VPN companies. It has, for example, numerous server locations across the continent of Africa. Senior Security Analyst, PCmag.com ## Some good-to-know tips… No lie: it’s possible. See, when you use a VPN, before you go to the website or service you want to use, your connection first hops to one of our remote servers. 
This is not an instantaneous process, so if someone in the US connects to a server in India to visit their local mayor’s website, their data is effectively traveling around the world — twice — to reach their screen: from their router to the HMA server in India, to the mayor’s website that’s likely hosted on a server in the states, then back to the server in India, then back to their router. If your connection is fast enough, and you connect to one of our high-speed servers, the slowdown will be so negligible you won't notice it. If your connection is slow, and you connect to one of our more remote servers, there’s likely to be a noticeable slowdown. Fortunately, we have tools like Lightning Connect that’ll ensure you never have to worry about any slowdown when you’re keeping yourself private online with HMA VPN. Just one: please don’t return it with any water damage. Do you know how hard it is to dry out a VPN? We don’t, because it’s never happened before, and we don’t plan to find out. But no, for real — if you take advantage of our 30-day money-back guarantee, you get your money back, no questions asked. However, if you wait past thirty days, or you re-buy HMA a second time, you can still ask for a refund. But it’s not guaranteed we’ll give you one. As many compatible devices as you have. We’re not quite Doom, but you can still run HMA VPN on a shocking number of devices — iOS, Androids, PCs, Macs, Android TVs — and you can install it on as many of those devices as you own. Or borrowed. Or stolen. Or find unattended in a Best Buy parking lot. But “installing” HMA VPN isn’t quite the same thing as “using” HMA VPN. While it can be installed on any number of devices, you can only have it turned on in up to 5 devices at the same time, or 10 with our friends and family plan. At least, as long as each installation is connected to the same account. To some people, maybe it doesn’t. To others, it matters a whole lot. The logging policy can be viewed symbolically: a VPN’s commitment to logging as little as possible is representative of their respect for your privacy and how seriously they approach the issue of protecting your data. For example, for a lot of people, a VPN logging exactly what time they turn the VPN on or off might not be a matter of much concern: but our unwillingness to do even that hopefully signals just how earnestly we protect the data that is generally viewed as more sensitive. Bearing in mind that it takes considerable effort to reduce logging, and even costs us a not-inconsiderate amount of money to pull off, the value of underlining our commitment to your privacy comes down to illustrating our principles as much as bolstering our marketing. It’s proof that we mean what we say and say what we mean. But refusing to log data has a practical benefit to our users as well: it means that if our databases should ever be compromised, either illegally via a hack or legally via some sort of police subpoena, the amount of data that could be acquired would be so dismally small and meaningless that neither the hackers nor the police could do anything with it. Thus, we can truly live up to our promise to protect your privacy no matter what you do online while still cooperating with the authorities, as we are obligated to do as a legal entity. Alright so that’s Jack, our mascot, and we’re afraid he’s a little too clever by half. So HMA stands for “Hide My Ass”, a moniker we used to go by more consistently and openly before we figured it might be hurting our brand more than helping. 
And, since donkeys are also known as “Asses”, and we were in the habit of hiding them, well, it seemed like a natural fit for a mascot. But the pun actually goes deeper than that, because our founder was actually a 16-year-old kid named Jack Cator. And what’s another name for a male donkey? A “Jack”. And even though we just go by HMA now, we’re not about to give up on a double-layered slice of wordplay like that, so Jack has stuck around. He’ll show up on our social media, in emails, in our ads, and of course all across our website. We’re sure you’ll grow to love him soon, either due to his irrefutable charm or Stockholm syndrome. Frankly, we’re not picky which.
true
true
true
Browse safely, privately, and access content worldwide. HMA works on PC, Mac, Android, iOS, Linux & routers. Get HMA today!
2024-10-12 00:00:00
2005-01-01 00:00:00
https://static2.hidemyas…hma-vpn-logo.png
website
hidemyass.com
Hide My Ass!
null
null
40,170,244
https://2023.djangocon.us/talks/vue-django-combining-django-templates-and-vue-single-file-components-without-compromise/
Vue + Django: Combining Django Templates and Vue Single File Components without compromise
DjangoCon US
## About This Talk

There is a common misconception that Django's template-based views and "heavyweight" JavaScript frameworks such as Vue cannot co-exist without considerable compromise; that we are forced to choose between the two. For example, we may use Django Rest Framework as back-end while writing a JavaScript SPA front-end, making it difficult to utilize Django templates where convenient. Or we may use JavaScript frameworks from Django templates using browser `<script>` includes, but then lost is much of the tooling, testability, and ecosystem of the modern build-based JavaScript framework. This dilemma leads many Django developers to choose lighter-weight no-build Javascript frameworks as a means to add dynamic user experience.

However, there is no need for compromise. **Vue can, surprisingly easily, be mingled directly into Django templates**, allowing us to mix-and-match these two front-end technologies as we wish, without sacrificing the strengths of either.

This talk will explain a unique new approach to integrating Vue 3 within Django Templates. Starting with a minimal Django project, I will demonstrate the addition of Vue components into a templated app, harmoniously combining both front-end frameworks. We will explore topics such as information passing from Django to Vue, Vite configuration and tooling, persistent state management with Pinia, and resources to jumpstart your own project.

## Presenters

### Mike Hoolehan (he/him)

Mike is a software developer living in Tallinn, Estonia. Although his list of former hobbies is expansive and ever-growing, his love of software is constant and keeps him curious and enthusiastic after 20 years of professional software development across numerous languages and frameworks. Currently, Mike is following his bliss with Django, Flutter, bike-touring, camping, and tabletop gaming.
true
true
true
There is a common misconception that Django’s template-based views and “heavyweight” JavaScript frameworks such as Vue cannot co-exist without considerable compromise; that we are forced to choose between the two. For example, we may use Django Rest Framework as back-end while writing a JavaScript SPA front-end, making it difficult to utilize Django templates where convenient. Or we may use JavaScript frameworks from Django templates using browser <script> includes, but then lost is much of the tooling, testability, and ecosystem of the modern build-based JavaScript framework. This dilemma leads many Django developers to choose lighter-weight no-build Javascript frameworks as a means to add dynamic user experience.
2024-10-12 00:00:00
2023-10-16 00:00:00
https://2023.djangocon.u…ike-hoolehan.png
article
djangocon.us
DjangoCon US
null
null
2,527,521
http://online.wsj.com/article/SB10001424052748704810504576307342275841586.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
734,495
http://www.appleinsider.com/articles/09/07/31/steve_ballmer_calls_apples_mac_growth_a_rounding_error.html
Steve Ballmer calls Apple's Mac growth a "rounding error" | AppleInsider
Neil Hughes
**Speaking to a group of market analysts this week, Microsoft Chief Executive Steve Ballmer repeatedly mentioned Apple, including a suggestion that a growing rate of Mac adoption is statistically insignificant.**

"Share versus Apple, you know, we think we may have ticked up a little tick," Ballmer said at the 2009 Financial Analyst Meeting, "but when you get right down to it, it's a rounding error. Apple's share change, plus or minus from ours, they took a little share a couple quarters, we took share back a couple quarters."

The Microsoft executive said he expects Windows to continue to trade market share with Mac OS X back and forth, though he added he feels taking customers from Apple is a limited resource. "Apple's share globally cost us nothing," he said. "Now, hopefully, we will take share back from Apple, but you know, Apple still only sells about 10 million PCs, so it is a limited opportunity."

The 40-minute speech covered a variety of Apple-related topics, including product pricing, advertising, and the number of Macs in the audience. Ballmer noted the large number of Apple machines in the crowd as he spoke, saying the Windows maker has a "low share" in the investor audience. "I can see the Apple logos versus the PC logos," Ballmer told the crowd. "So we have more work to do, more work to do. Our share is lower in this audience than the average audience. But don't hide it. I've already counted them. I have been doing that since we started talking."

The CEO also, in a roundabout way, acknowledged that Apple is known for making higher-quality hardware than Microsoft's PC partners. But Ballmer said he expects consumers to be surprised at some of the new offerings available this fall alongside the debut of its new operating system, Windows 7. "At least when Apple attacks us, the primary attack that comes from Apple is, 'Hey, at the end of the day, we have the coolest hardware,'" he said. "When you see the hardware, the PC designs that will come out this Christmas with Windows 7, I think that conventional wisdom can begin to really change. There is some really amazing, amazing work. So it is possible to get great hardware innovation, even when hardware and software comes from separate companies."

Microsoft's top official said Windows falls in between OS X and Linux into the market sweet spot — not too expensive, not too cheap. Just weeks ago, Ballmer laughed off the threat of Google's open source Chrome OS. This week he said Microsoft's goal is to dominate the market, rather than carve out a niche. "We do not, say, like Apple, believe in low volume, very high prices, very — Apple is a great company, does a fine job. But their model says high margin, high quality, high price," he said. "That's kind of how they come to market. We say we want big market share. But with big market share, you take a lower price."

Recently Microsoft changed one of its advertisements attacking the high cost of Apple products after legal threats. The Microsoft executive spoke about those commercials, but did not talk about the recent controversy. Instead, he said the "Laptop Hunters" ads have changed some public perception about the Microsoft brand as it relates to Apple. "And are the ads working?" he asked rhetorically. "In an independent survey, we asked 18- to 24-year-olds — or they were asked, "Who offers the best value, Apple or Microsoft?" You can kind of see Apple was comfortably ahead despite the fact they — well, despite whatever the facts are. Our ads started in April of '09.
You can see kind of what the perception changes have been so far."
true
true
true
Speaking to a group of market analysts this week, Microsoft Chief Executive Steve Ballmer repeatedly mentioned Apple, including a suggestion that a growing rate of Mac adoption is statistically insignificant.
2024-10-12 00:00:00
2009-07-31 00:00:00
null
website
appleinsider.com
AppleInsider
null
null
9,686,320
http://www.tripwire.com/state-of-security/latest-security-news/ransomware-nearly-doubled-in-q1-2015-reports-mcafee-labs/
Ransomware Nearly Doubled in Q1 2015, Reports McAfee Labs
null
According to a report issued by Intel Security Group's McAfee Labs, ransomware has experienced a 165% increase in the first quarter of 2015. *McAfee Labs Threats Report: May 2015* reveals that this increase has been fueled in part by the impression of underground criminals that victims in rich countries seem to be the most willing to make ransom payments. This knowledge, in turn, has led to the development of well-crafted phishing messages as delivery mechanisms for the malware. "The phishing email topics that lead to infestation by ransomware are very specific," McAfee Labs explains. "The email template and attachment names appear not only in the local language but also pretend to be coming from real companies in the targeted countries." Ransomware, some variants of which demand between $150 and $500 for victims to retrieve their encrypted files, has seen an increase despite an international law enforcement effort last year that took down the Gameover ZeuS botnet, which was responsible for helping to spread the CryptoLocker ransomware. Since June 2014, 'Curve-Tor-Bitcoin' (CTB) Locker has taken the place of CryptoLocker as one of the most prolific types of ransomware in the wild today. Other families, including CryptoWall, TorrentLocker, BandarChor, and a new form called Teslacrypt, have also contributed to this spike. In addition to analyzing the increase in ransomware, McAfee's report studies the "Equation Group" threat actor, particularly its use of hard disk drive and solid state drive reprogramming modules to keep a machine's firmware infected even if the hard drive is reformatted or the operating system is reinstalled. McAfee Labs also identified 42 new vulnerabilities in Adobe Flash that attackers have leveraged to exploit users who were slow to patch their systems. “With the popularity of a product like Flash, there comes a tremendous responsibility to proactively identify and mitigate security issues potentially threatening millions of users,” said Vincent Weafer, senior vice president, McAfee Labs. “This research nicely illustrates how the tech industry works together constructively to gain an advantage in the realm of cybersecurity – industry partners sharing threat intelligence, and technology providers acting on information quickly to help prevent potential issues.”To learn more about ransomware, including how you can protect yourself and your computer against this type of malware, please click here. To read McAfee's report in full, click here.
true
true
true
According to a report issued by Intel Security Group's McAfee Labs, ransomware has experienced a 165% increase in the first quarter of 2015.
2024-10-12 00:00:00
2015-06-09 00:00:00
https://www.tripwire.com…pg?itok=l4QqQAa9
null
tripwire.com
tripwire.com
null
null
8,622,615
http://googleresearch.blogspot.com/2014/11/a-picture-is-worth-thousand-coherent.html?
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,922,752
http://www.zurb.com/playground/inline-form-labels
Inline Form Labels
ZURB; Inc
# Awesome Inline Form Labels How to create inline labels that don't suck, using CSS3 and some simple javascript. Note: Demo works best in Safari 4.x and Chrome 5. In Firefox you won't see the label fading back (transition). ### Hey there! This demo is pretty old. Thanks to HTML5, there's now a much easier way to create placeholder text for text fields, and without any JavaScript! Just add the `placeholder` attribute to your input tags: <input type="text" id="search" placeholder="Search here"> That's all you need! This works in IE10+, Firefox 4+, Chrome, Safari, and most mobile browsers. This is a regular field with a `background-image` creating the shadow effect and a simple border change for the focus state. This field has an inline label. It is structurally identical to the field above, but we've used negative margins (attached to `class="inlined"` on the label to move the field up and put the label behind it. ## The CSS for Inline Labels We won't touch on the CSS to create the inputs above - how you style your form elements is entirely up to you. What we're concerned with is how to do the inline label without making structural changes, and how to do the cool fading transitions (you see in Safari 4). There are a few key CSS lines that give us the positioning and effects we want. In order for the effect to work we need to insure that the label falls behind the input. We need `position: relative` in order for the `z-index` to take effect. This makes no difference for the standard display, but makes the inline label possible. input.input-text { position: relative; z-index: 2; } label { position: relative; z-index: 1; } For our inlined label we need to move it right a bit and override the label text styling to match the input field. The real magic here is with the `-webkit-transition` property. The syntax here tells the browser that when the `opacity` property changes, animate the changes over a 0.15s window and use a linear animation (as opposed to ease-in or ease-out). label.inlined { padding-left: 6px; font-weight: normal; font-size: 12px; font-family: "Helvetica Neue"; opacity: 0.75; -webkit-transition: opacity 0.15s linear; } And now the coup de grace. There are two tricks at work here, the first in the selector and the second in margining. `label.inlined + input.input-text` tells the browser to apply properties to any input.input-text that is preceded (on a sibling level) by a label.inlined element. Negative margining essentially moves the input up in front of the label (your label background will need to be transparent for this). Done and done. label.inlined + input.input-text { margin-top: -22px; } ## And a Little Bit of JavaScript In order for the label to fade and then disappear we need to apply classes at particular times. We'll use jQuery in this example, but the principle is the same for Prototype or straight JS. Okay we lied, there's a little more CSS. These two classes will be applied when the input gains focus, and when the input receives text. The extra `-webkit-transition` here means that when the user starts typing the label will instantly vanish, rather than fading off. label.focus { opacity: 0.35; } label.has-text { opacity: 0.0; -webkit-transition-duration: 0s; } This is the javascript we need to selectively change classes on the label. There are four functions: - The first is an observer that helps us out with autocompletion - we detect if the browser filled in the text, and then clear the label. 
- When the input gains focus, move up to the label and apply the `focus` class.
- When the user starts typing, apply the `has-text` class.
- When the user shifts out of the field, check if it's empty: if so, remove the `has-text` class so the label will fade back in.

    $(document).ready(function () {
      $("label.inlined + input.input-text").each(function () {
        var input = $(this);
        // Handle browser autocompletion: shortly after load, hide the label
        // if the field was already filled in for us.
        $(window).on("load", function () {
          setTimeout(function () {
            if (input.val() !== "") {
              input.prev("label.inlined").addClass("has-text");
            }
          }, 200);
        });
        // Dim the label while the field has focus.
        input.focus(function () {
          $(this).prev("label.inlined").addClass("focus");
        });
        // Hide the label entirely as soon as typing starts.
        input.keypress(function () {
          $(this).prev("label.inlined").addClass("has-text").removeClass("focus");
        });
        // Bring the label back if the field is left empty.
        input.blur(function () {
          if ($(this).val() == "") {
            $(this).prev("label.inlined").removeClass("has-text").removeClass("focus");
          }
        });
      });
    });

Copyright ZURB, freely available for distribution under the MIT license.
true
true
true
With some CSS tricks and a few lines of JavaScript we can create inline labels that don't suck.
2024-10-12 00:00:00
1998-01-01 00:00:00
null
null
null
ZURB, inc. Copyright (c) 2020
null
null
22,506,069
https://elegantchaos.com/2020/03/06/random-acts-of-pragmatism.html
Random Acts of Pragmatism
null
I have been accused (by myself, mostly), of being a bit too much of a purist sometimes. It’s true that I do like things to have an intellectual rigour to them, but it’s mostly about being honest and clear with ourselves about what we’re doing and why. I welcome the application of common sense, and I’m fine with taking shortcuts as long as they’re consciously chosen for a good reason. I’d like to think that I’m a pragmatist… As I was adding an item to a list in some code this morning, it occurred to me that a tiny feature of list literals in Swift (and many other languages) is a great example of pragmatism. What I’m talking about is the ability to do this: ``` let list = [ "foo", "bar", // <--- look ma, there's an extra comma ] ``` A purist might say that this is not correct (for some value of “correct”). It could be argued that it’s confusing to allow that extra comma. It could be argued that it implies that the programmer forgot to add the final item. An experienced programmer, however, will often have had to edit long lists of things by moving existing lines around. When the last line of these lists didn’t have a comma, and they moved it to the middle of the list, they probably forgot to add the comma, and it caused an error. They also then exposed a new last line, with a comma, which also caused an error. Easy to fix, but annoying. At some point during programming language evolution, someone asked themselves the question “does that error about the extra comma actually do anything useful?”. Is it actually ambiguous? No. Can we just get rid of it? Yes. Would that bring us a little bit closer to Nirvana? Possibly. To me, this is a great example of a little random act of pragmatism.
true
true
true
null
2024-10-12 00:00:00
2020-03-06 00:00:00
null
null
null
null
null
null
17,561,128
https://www.wsj.com/articles/the-most-important-factors-for-a-comfortable-flight-1531918998
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,305,802
http://www.technologyreview.com/review/541801/fighting-isis-online/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
24,198,685
https://justinblank.com/experiments/optimizingtar.html
Optimizing .tar.gz Archives by Changing File Order
null
# Optimizing .tar.gz Archives by Changing File Order

A few weeks ago, I was doing something with a sizeable `.tar.gz` file, and wondered how the order of files affected the process. I'm not that knowledgeable about compression, but I know that gzip uses a sliding window in which it looks for opportunities to compress repeating chunks of text. If you give it highly repetitive text, it does well; if you give it random data, it will probably give you a bigger file than when you started. So reordering files seems like it could matter. This idea seems obvious enough that someone else has probably explored it, but I decided to spend a few hours playing around with code, rather than investigate whether it's already been implemented.[1]

I used my project StringMatching, for no reason beyond convenience. I cloned commit `0a4ad368d25c5af7d3d751f9e903abc8ae792dae` from GitHub. I decided to remove the .git directory, because I thought it would be easier to guess about how the process treated text files. The resulting `.tar` is 337920 bytes, with the `.tar.gz` being 45768 bytes. The code is on GitHub.

## Attempts

For the attempts that involve some randomness, I did 10 runs, and captured the best, worst, and average results. I did comparisons using the best result, since if you wanted to use these techniques, you could do multiple runs, and pick the best one.

| Method | Best | Worst | Average | Difference |
|---|---|---|---|---|
| shuffled files | 47819 | 49564 | 48802 | + 6.6% |
| most common token | 46641 | - | - | + 1.9% |
| sorting by size | 46431 | - | - | + 1.4% |
| python default | 45890 | - | - | + 0.2% |
| shuffling files within directories | 45771 | 46817 | 46466 | + 0.0% |
| tar -czvf | 45768 | - | - | - |
| swapping | 45202 | - | - | - 1.2% |
| swapping within directories | 44959 | 46317 | 45840 | - 1.8% |

- **Shuffled Files**: I shuffled a list of all found files, then added them to the archive.
- **Most Common Token**: I applied a naive regex to split Java code into fragments that might sometimes correspond to tokens in Java syntax, then sorted files by their most common "token". In hindsight, not surprising this gave horrible results. Perhaps return is the most common token across arbitrary java files.
- **Shuffling Files Within Directories**: Same as #1, except that we only shuffle within a directory, keeping all the files from that directory together.
- **Sort Files by Size**: This one seems absurd. The fact that it ends up doing well may just be luck, though you could imagine that larger pieces of code have structural similarities.
- **Script With Default Ordering**: I used the same process as my other attempts, without changing the order of files returned by walking the directories. I suspect the directory order is different than what tar uses, leading to a space increase, but I didn't check it.
- **tar -czvf**: The baseline implementation. It makes sense that it does well relative to several other attempts. Files in the same package are likely to use many of the same imports, types and methods.
- **Swapping Adjacent Files**: A sort of bubble-sort-like algorithm. Starting with the files in their natural order, we try swapping files, then recreating the gzipped archive. If the result is smaller than our previous best result, we keep the swap, and try swapping the moved element with the one before it (a rough sketch of this search is shown below).
- **Swapping Adjacent Files After Shuffling Within Directories**: Apply the same algorithm as in 5, but beforehand, shuffle files within directories, as in #3.
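To make the swapping search concrete, here is a rough Python sketch of that procedure (the function names and the use of the tarfile/gzip modules are my own illustration for this write-up, not the actual code in the linked repository):

```
# Rough sketch of the "swapping adjacent files" search: try swapping each
# adjacent pair, keep the swap if the .tar.gz gets smaller, and keep pushing
# a moved file backwards while that continues to help.
import gzip
import io
import tarfile

def archive_size(paths):
    """Size in bytes of a gzipped tar containing `paths` in the given order."""
    tar_buf = io.BytesIO()
    with tarfile.open(fileobj=tar_buf, mode="w") as tar:
        for p in paths:
            tar.add(p, recursive=False)
    return len(gzip.compress(tar_buf.getvalue()))

def improve_by_swapping(paths):
    order = list(paths)
    best = archive_size(order)
    for i in range(1, len(order)):
        j = i
        # Bubble the file at position i backwards while that shrinks the archive.
        while j > 0:
            order[j - 1], order[j] = order[j], order[j - 1]
            size = archive_size(order)
            if size < best:
                best = size
                j -= 1
            else:
                order[j - 1], order[j] = order[j], order[j - 1]  # undo the swap
                break
    return order, best
```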
## Does Any of This Matter?

No clue. It's hard to imagine these implementations are good for anything--the best results require doing at least O(n) calls to `tar -czvf` for an n-file archive. Google's zopfli algorithm spends major CPU time in exchange for moderate improvements in compressed file size, so a better implementation might be interesting. Lastly, I did one experiment on one library, so real-world results could be completely different.

# Footnotes

[1] Then I sat on my results for a few weeks before writing them up.↩
true
true
true
null
2024-10-12 00:00:00
2024-01-01 00:00:00
null
null
null
null
null
null
4,600,945
http://bayareaquarterlife.tumblr.com/post/29540504926/reading-hacker-news-as-a-non-cs-liberal-arts
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
565,057
http://www.guardian.co.uk/technology/2009/apr/16/wikia-search-seth-finkelstein
Read me first: Shutdown of Wikia Search proves empty rhetoric of collaboration, says Seth Finkelstein
Seth Finkelstein
Sometimes, a business project that is predicted as likely to fail, both by experts in the field and successful entrepreneurs, is not a misunderstood vision but truly flawed. There is a quote by Carl Sagan: "But the fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus … But they also laughed at Bozo the Clown." The shutdown of Wikia Search, an attempt to apply Wikipedia-style ideas (ie, free work) to create an open source, commercial search engine, came as no surprise to informed observers. Reportedly very unsuccessful, receiving "only 10,000 unique users a month over the past six months", it could not be sustained under current economic conditions. Numerous technical problems remained unsolved, and with regard to for-profit undertaking, it regularly faced questions (bit.ly/seth54) from professionals wanting to know: "Why should [we] provide the search expertise? Or, to put it less diplomatically, why should we make you rich?" Wikia Inc is a venture-capital-funded startup company, created by a few high-level people involved in Wikipedia, notably Wikipedia's co-founder Jimmy Wales, to commercialise similar concepts. While it has no major financial connection to the non-profit Wikimedia Foundation, which owns Wikipedia, Wikia still derives benefits due to a halo effect from Wikipedia's name recognition. Though the media widely hyped a storyline of Wikia Search as a "Google killer", the underlying reality was always more complex. To simplify, Google's dominance was an evil that would motivate contributors to labour without pay and donate resources for an assumed public-spirited (but not public sector) enterprise, thus supposedly battling the Great Search Satan. From a business standpoint, if Wikia could socialise costs and privatise benefits, then digital sharecroppers might ultimately produce a nice profit for Wikia even if Google remained utterly unchallenged. This strategy of mining user-generated discontent foundered in trying to monetise those sentiments. As anyone in politics can attest, it's easy to have a crowd rant about dangers and to generate press coverage, but harder to turn those feelings into something vaguely useful. And, contrary to many pundits who have sought to find some way that Wikia Search could be said to have affected Google, there is no evidence it had any effect whatsoever. While Google's "SearchWiki" interface has an obviously similar name, beyond that possible bit of marketing the underlying system is much more about personalisation than presenting results to others. Too many have a simplistic view of Google's ranking algorithm. While the famous PageRank calculation (roughly, a web page's popularity) is a key aspect, there are others (age of a page, age of links to it, links to the domain). While Google does not perform human-edited minor adjustments of a page's search position, it does change how it mixes factors to arrive at an answer. Those changes can incorporate information derived from human testing, though Google does not use data from SearchWiki. But Google has long taken human feedback into account (an apparently leaked copy of their "General Guidelines on Random-Query Evaluation" has been discussed on search expert sites). A prominent Google software engineer pointed out earlier versions, and stated: "Google was experimenting with this particular interface for user feedback well before Wikia launched, and months before Wikia introduced its [social] tools ... 
" When examining the implications of search algorithms, the idea of an open, public, search engine is often brought up. So far, nobody has found a way to fund such a platform. In what is perhaps a representative trajectory of web evangelism, stirring but empty rhetoric about collaboration and community has proved no solution to real-world issues. It would be polite to praise Wales for having made an effort. Regrettably, it's possible he's made the problem worse, by giving such projects a taint of half-baked dilettantism and corporate exploitation.
true
true
true
The shutdown of Wikia Search – an attempt to apply Wikipedia-style ideas to create an open source, commercial search engine – came as no surprise to informed observers
2024-10-12 00:00:00
2009-04-15 00:00:00
https://assets.guim.co.u…allback-logo.png
article
theguardian.com
The Guardian
null
null
37,515,625
https://www.tomshardware.com/news/apples-a17-pro-challenges-core-i9-13900k-ryzen-7950x-in-single-core-performance
Apple's A17 Pro Within 10% of Intel's i9-13900K, AMD's 7950X in Single-Core Performance
Anton Shilov
# Apple's A17 Pro Within 10% of Intel's i9-13900K, AMD's 7950X in Single-Core Performance

Apple's 3nm A17 Pro boosts single-core performance by 10%.

Apple's latest A17 Pro system-on-chip for smartphones made on TSMC's N3 production node delivers single-thread performance which challenges that of AMD's Ryzen 9 7950X and Intel's Core i9-13900K processors in Geekbench 6. There is a catch though: Apple's A17 Pro operates at 3.75 GHz, according to the benchmark, whereas its mighty competitors work at about 5.80 GHz and 6.0 GHz, respectively.

Apple's A17 Pro SoC maintained the company's renowned six-core configuration and packs two high-performance cores functioning at up to 3.77 GHz and four energy-efficient cores operating at a lower frequency. When compared to the A16 Bionic (made on TSMC's N4), the A17 Pro boosts the maximum clock-speed of performance cores by 8.95% (from 3.46 GHz), which is in line with what TSMC's N3 (3nm-class) process technology offers compared to its 5nm-class counterparts (+10% ~ +15% compared to N5, about 10% compared to N4).

| | A17 Pro | A16 Bionic | Core i9-13900K | AMD Ryzen 9 7950X | Snapdragon 8 Gen 2 |
|---|---|---|---|---|---|
| General specifications | 2P+4E, up to 3.77 GHz | 2P+4E, up to 3.46 GHz | 8P+16E/32T, 3.0 GHz - 5.80 GHz | 16P/32T, up to 5.70 GHz | 5P+3E, up to 3.19 GHz |
| Single-Core | 2914 | 2641 | 3223 | 3172 | 2050 |
| Multi-Core | 7199 | 6989 | 22744 | 22240 | 5405 |

As far as single-core performance of Apple's A17 Pro in Geekbench 6 is concerned, it is 10% faster than its predecessor, the A16 Bionic, which leads to a question regarding whether Apple introduced any microarchitectural CPU improvements with its latest SoC. Of course, Apple's custom core is traditionally faster than those developed by Arm itself. Scoring 2,900 points in single-thread Geekbench 6 workload is good enough to challenge many desktop-class processors, but trails the fastest models by ~10%. So, one could say that Apple's high-performance cores could challenge Raptor Cove and Zen 4 cores when working at around 3.77 GHz, at least in this specific benchmark.

As always, one benchmark doesn't tell the full story. When it comes to multi-core performance, Apple's A17 Pro can only score about 7,200 points, which is only 3% higher than A16 Bionic. Six cores cannot beat processors that have significantly more cores, yet A17 Pro remains the fastest smartphone SoC around, at least when compared to Qualcomm's Snapdragon 8 Gen 2.

When Apple formally introduced its A17 Pro system-on-chip (SoC) earlier this week, it said that its high-performance cores deliver a 10% increase in single-thread workloads compared to its predecessor. Apparently, this was an accurate estimate and the new processor delivers single-thread performance that is competitive with some PC processors while working at a considerably lower frequency. Meanwhile, it looks like Apple has made little to no architectural changes to its A17 Pro CPU cores and only boosted clocks.
My prediction is that it will do well but I don't really see the point of RT for a phone soc because the chip is going to have to use so much more power to use RT that battery life probably won't be great when using RT.Reply - hotaru251 to be fair I will say one thing in defense of the intel/amd cpu's....They are always going to be less "efficient" due to how much mroe complex the pipeline for any windows/linux (not sure on macs as havent used one in 25+yrs) OS is. Same reason consoles can do so well vs pc even if specs are vastly different that much less complex pipeline.Reply when your device has a lot less stuff to run in background and hoops to jump through you get better performance. I don't like Apple as a company nor would I use their stuff due to that, but as a technology their chips probably best designed than any other. - Makaveli I remember all the noise Apple made about the M1 that also had good ST performance but also a node advantage. Once AMD and Intel moved to newer nodes that advantage went away. And even if you compare apple now on 3nm still behind AMD on 5nm and intel on its current.Reply - zecoeco Apple should care less about synthetic benchamarks and more about real-world performance that is achieveable through the optimizations and uniformity of their ecosystem. Hardware is way ahead, while software is crappled, unoptimized and unable to utilize this performance effectively. Winning the "watts" war isn't going to benefit anyone but the "hardcore" people out there looking for the smallest yet most expensive drop of perfomance at the cost of power and price.Reply - JamesJones44 macOS is Unix based (OpenBSD to be exact), the "pipeline" is pretty much the same as Linux (In fact it can run almost all the same things Linux can). It's also still runs on Intel based Macs.hotaru251 said:to be fair I will say one thing in defense of the intel/amd cpu's....They are always going to be less "efficient" due to how much mroe complex the pipeline for any windows/linux (not sure on macs as havent used one in 25+yrs) OS is. Same reason consoles can do so well vs pc even if specs are vastly different that much less complex pipeline. One can argue that Apple has tighter integration and maybe using custom OP codes to optimize their software on their own chips, but I don't think generic OS pipelines are the reason why they are more/less efficient. - JamesJones44 Your comparing an 8.5 watt TDP part to a 100+ watt TDP part. In no world is that an apples to apples comparison and not even worth mentioning.Makaveli said:I remember all the noise Apple made about the M1 that also had good ST performance but also a node advantage. Once AMD and Intel moved to newer nodes that advantage went away. And even if you compare apple now on 3nm still behind AMD on 5nm and intel on its current. - Makaveli I know its apples vs orange but the article itself has the comparison so why not point it out.JamesJones44 said:Your comparing an 8.5 watt TDP part to a 100+ watt TDP part. In no world is that an apples to apples comparison and not even worth mentioning. - The Historical Fidelity I think what he is trying to get at is the inefficiency of x86 architecture with all of its useless legacy micro operations. 
Every x86 design has to provide hardware that can execute this legacy code, and this both takes up valuable space and prevents micro-architects from fully optimizing logic features without having to compromise to ensure legacy features will still work.
> JamesJones44 said: macOS is Unix based (OpenBSD to be exact), the "pipeline" is pretty much the same as Linux. This has nothing to do with the chip's efficiency. You can argue that Apple has tighter integration and maybe uses custom OP codes to optimize their software on their own chips, but generic OS reasons are not why they are more or less efficient.

Intel has said they plan to remove useless legacy operations from x86 to improve its performance and allow full optimization of logic units for modern operations.
true
true
true
Apple's 3nm A17 Pro boosts single-core performance by 10%.
2024-10-12 00:00:00
2023-09-14 00:00:00
https://cdn.mos.cms.futu…rUbB-1200-80.jpg
article
tomshardware.com
Tom's Hardware
null
null
34,492,196
https://www.telegraph.co.uk/business/2023/01/23/google-staff-overpaid-company-must-cut-thousands-jobs-investor/
Google must cut thousands more ‘overpaid’ workers, investor demands
Matthew Field
Google staff are overpaid and the tech giant must cut thousands more jobs, a British activist investor has said. Sir Chris Hohn, who previously donated to Extinction Rebellion, wrote in a letter dated January 20 that Google's 12,000 layoffs did not cut deep enough to reduce bloat at the tech giant. The billionaire founder of The Children's Investment Fund Management (TCI) , who holds a $6bn stake in Google-parent company Alphabet, wrote to chief executive Sundar Pichai, warning: "Ultimately management will need to go further." "The 12,000 jobs is a step in the right direction, but it does not even reverse the very strong headcount growth of 2022." Sir Chris, who last year paid himself a record £1.5m a day, added Google should look to "address excessive employee compensation", warning the median salary at Alphabet was nearly $300,000. He said the company should moderate stock-based payments. He said Alphabet had more than doubled its headcount over the last five years, adding 30,000 jobs in the first nine months of 2022. "I believe that management should aim to reduce headcount to around 150,000, " he wrote, "this would require a total headcount reduction in the order of 20pc." The 12,000 job cuts at Google represented a reduction of around 6pc of its workforce. The investor said he had been in dialogue with Mr Pichai and was "encouraged" by him taking "some action to right size Alphabet's cost base". TCI, which manages around $40bn, was launched in 2003 and has given billions to children's charities. Among its alumni is Prime Minister Rishi Sunak. The fund's current top holding is Google parent company Alphabet. The billionaire investor first called for a shake-up at Google in November, urging Mr Pichai to make job cuts. Last week, Google confirmed that 12,000 people would be let go across its businesses amid the growing economic gloom and a fall in demand for digital advertising. Sir Chris last year paid himself a record $690m in the year to February, 2022, equating to roughly £1.5m per day, according to his fund's latest accounts. He is thought to be one of the single biggest donors to the Extinction Rebellion, the direct action climate protest group. Activist investors have taken a swipe at tech companies for over-hiring during the pandemic. Google, Facebook-owner Meta, Microsoft and Amazon all added tens of thousands of staff amid a bet on a permanent digital transformation. All four have now confirmed they will cut tens of thousands of staff. Last week, Mr Pichai said he took “full responsibility for the decisions that led us here”. On Sunday night, it emerged that activist investor Elliott Management had taken a stake in marketing technology company Salesforce. Salesforce has already confirmed it will cut around 10pc of its staff, about 7,000 jobs. Elliott, founded by billionaire investor Paul Singer, is known to agitate for change at companies, including for cost-cutting and divestments, in order to generate shareholder value. Jesse Cohn, managing partner at Elliott, said: "We look forward to working constructively with Salesforce to realize the value befitting a company of its stature."
true
true
true
Sir Chris Hohn says 12,000 layoffs do not go far enough
2024-10-12 00:00:00
2023-01-23 00:00:00
https://www.telegraph.co…licy=OG-Standard
article
telegraph.co.uk
The Telegraph
null
null
38,237,839
https://github.com/ai-boost/Awesome-GPTs
GitHub - ai-boost/Awesome-GPTs: Curated list of awesome GPTs 👍.
Ai-Boost
This repository contains a curated list of awesome GPTs on OpenAI platform. - [UPDATE 2024.02.08] We open-sourced our website (https://awesomegpt.vip) here: awesomegpts repo - [UPDATE 2024.01.20] Some of our GPTs have been **featured🎉 (top 3)**on the official GPT store, and we are making the prompts behind them freely available as open source here: awesome-gpts-prompts Exciting news! We've just rolled out our very own GPT creation, aptly named **AwesomeGPTs** – yes, it shares the repo's name! 👀 🚀 **What's AwesomeGPTs?** It's a specialised GPT model designed to: **Navigate the Awesome-GPT Universe**: Directly recommends other GPT models from our extensive list based on user queries.**Personalised Recommendations**: Tailors suggestions to each user's specific interests and questions. 🎉 **Your GPTs Featured**: Here's the thrilling part – your GPT models can also be recommended! This means more visibility and engagement for your contributions. 🔗 **Give it a Try!** Eager to see AwesomeGPTs in action? Check it out here: AwesomeGPTs Live Demo (Knowledge cut-off: 2023/11/29). Experience firsthand how it brings the world of GPTs closer to the community. This is a space for showcasing innovative and exciting GPT created by AI enthusiasts worldwide. Got a GPT that stands out? Let the world know! - 🎙️ **Contributing**: Provide your GPT's name, a short description, and its link in the PR. - ✅ **Recognition**: Your GPT could be the next big highlight in our Awesome-GPTs list. - ⏳ **Simplicity**: Prefer a PR over an issue for quicker inclusion. Your stars🌟 and recommendations 🫶 are what make this community vibrant! - 📚 Academic - ✍️ Writing - 🎓 Education - 🎯 Productivity - 💻 Programming - 🐈 Lifestyle - 😀 Just for Fun - ☂ Weather - 🍴 Dining - 📈 Marketing - 💵 Finance - 🌟 Commit and PR Guidelines - ✨ Academic Writer (Professional Version) - A professional academic assistant who helps with various academic tasks: writing papers, reading papers, weight reduction, polishing, designing experiments, PPT, etc. - Chat now - 🌟 Auto Literature Review - A literature review expert that can search papers and write literature reviews automatically. - Chat now - 🚀 Scholar GPT Pro - An enhanced scholar GPT version capable of conducting research and writing SCI papers with real references. Access to over 216 million papers across all science fields. - Chat now - ✍️ Paraphraser & Humanizer - Specializes in sentence refinement, polishing academic papers, reducing similarity scores, and evading AI detection to prevent plagiarism. - Chat now - ⭐️ Paper Review Pro - A GPT that evaluates academic papers with precision, offering scores, identifying weaknesses, and suggesting edits to enhance quality and innovation. - Chat now - 💡 Auto Thesis PPT - A PowerPoint assistant that drafts outlines, enhances content, and styles slides for thesis, business reports, or project reports with ease and flair. - Chat now - ⭐️ Paper Interpreter Pro - A GPT designed for effortless and automatic structuring and decoding of academic papers, simply by uploading a PDF or pasting a paper URL. - Chat now - 📈 Data Analysis Pro - Provides multidimensional data analysis, aiding in research with automated chart creation to simplify the analytical process. - Chat now - ⭐ PDF Translator - An advanced 🚀 PDF translator for researchers & students, seamlessly translating academic papers 📑 into multiple languages 🌐, ensuring accurate interpretation for global knowledge exchange 🌟. 
- Chat now - 🔍 AI Detector (Academic Version) - A GPT for determining whether an academic text is generated by GPT or other AI, support English, 中文, Deutsch, 日本語, etc. It can generate a detailed analysis report. - Chat now - 🐧Prompt Engineer (An expert for best prompts) - A prompt engineer expert who write best prompts! - Chat now - ✏️All-around Writer (Professional Version) - A professional writer📚 who specializes in writing all types of content (essays, novels, articles, copywriting)... - Chat now - ✍️Paraphraser & Humanizer - Expert in sentence refinement. - Chat now - 👔Formal GPT - A informal to formal translator. It can give feedback about your CV. It can generate a cover letter for you. - Chat now - Submit by @emreisik95 - 🤖 Prompty - A professional prompt engineer who helps you optimize your GPT-prompts with state-of-the-art techniques. - Chat now - ✨ editGPT - Proofread, edit and track changes to your text inside ChatGPT. Works in conjunction with the editGPT browser extension allowing you to accept and reject changes without leaving ChatGPT. - Chat now - 📗All-around Teacher (Learn Everything in 3 min) - Learn all kinds of knowledge in 3 minutes, with customized tutors leveraging the powerful GPT-4 and knowledge base. - Chat now - 🎀My Excellent Classmates (Help with My Homework!) - Excellent classmates to help with homework, providing patient guidance and support. - Chat now - 🛩TradeComply (Your Import Export Compliance Specialist!) - "How do I ship my product to Europe"? Learn everything about shipping internationally! - Chat now - Submit by @jordanj5610 - 🐇Lore Master - 🐥Six-Y (Explains Anything Like You are 6 Years Old) - "How do the stars shine?" Helps you to explain everything to your 6 years old! or you. - Chat now - Submit by @niyoseris - 👩🏫English Teacher Marion - Meet Marion, your friendly neighbourhood English teacher. - Chat now - Submit by @nicksavanah - 😌The Stoic Council - 📐Stats and ML Helper - a GPT that can help understand both simple and complex Statistics and Machine Learning concepts - Chat now - Submit by @pak0vskiy - 🤔ExistentialGPT - Philosophical exploration with existential depth - Chat now - Submit by @PositivistPessimist - 🗣 Language Learning - Create Short Stories to Learn any Language - 2500+ word stories in target language with images, for language learning. - Chat now - Submit by @TheBoringBOT - 🔢 Math to LaTeX - Send me an image of Math. I will give you the LaTeX code. - Chat now - Submit by @No_Impact4379 - 👄Debate Mentor - Mentor and debater, guides users to articulate conclusions. - Chat now - Submit by @kylecoogan - 📈EconomicsGPT - Your world-class Economics tutor, powered by students and faculty from the University of Chicago's highly-ranked Economics program. - Chat now - Submit by @dpeachpeach - 🦉Owly The Explorer - Owly is an adorable, owl-themed GPT designed to safely engage kids in a variety of educational topics, with built-in restrictions for child-appropriate content. - Chat now - Submit by @marcelcodes - 📚 Hierarchy Navigator - Organizes learning into a detailed hierarchy. - Chat now - Submit by @kylecoogan - 🐕🦺 Linda: Veterinary Sciences, Animal Rescue & Behavior - Ask me anything about veterinary sciences, animal rescue, and behavior. - Chat now - Submit by @Viktor-Larkhill - 👩🏫 IELTS Writing Coach - 🎧 DJGPT - I'm DJGPT, your go-to AI for all things DJing and music mixing, here to guide you through the exciting world of beats and tracks! 
- Chat now - Submit by @HeyitsRadinn - 💬 Multilingual Mentor - Learn any language IN any other language while talking freely but still in a structured way and according to your current proficiency. - Chat now - Submit by @linus-ahlemeyer - 💬 Portuguese Pal - Learn Portuguese while talking freely but still in a structured way and according to your current proficiency. - Chat now - Submit by @linus-ahlemeyer - 👨🔬 Albert Ainstein - 👲 Math & Logic Tutor Junior - A friendly math guide for young learners they can freely talk to, suggests appropriate puzzles but refuses to do their homework. - Chat now - Submit by @linus-ahlemeyer - 🔬 Academic Explorer - 📝Gauthmath (Your All-in-one Homework Helper) - 🎓AiTA: Personal College Tutor - 🧠 Anki Master - 🚗 Automotive Software Quality Mastermind - 📚 TutorAI - 🎧 DJ Sensei - 🎓 PhD Workout Coach - Get bespoke scientific workout and fitness guidance plus advanced nutritional insights. Trained on a custom empirical dataset. - Chat now - Submitted by @valiantgenomics - 🌟Images - Generate multiple continuous images at once, while maintaining consistency, such as comic strips, novel illustrations, continuous comics, fairy tale illustrations, etc. - Chat now - 🦄Logo Designer (Professional Version) - A professional logo designer capable of creating high-level logos in a variety of different styles. - Chat now - 🎨Designer Pro - Jessica, a universal designer/painter in professional mode, offers more professional design/paint effects. - Chat now - ✨Ask Dr. Andrew Huberman - 🥛DrinkinGPT - your go-to for crafting the ultimate drinking games! 🚀 Whether you've got dice, cards, or just a bunch of cups, DrinkinGPT tailors games perfectly to your group's vibe. - Chat now - Submit by @FabKremer - ⌨️Test-Driven Code Companion - A code companion that follows the rule of test-driven developement to help you write safe and proven code. - Chat now - Submit by @FlorianVal - 🎨 Midjourney Helper - 🤝 Execu-LI Post Companion - 𝕏 Execu-X Post Companion - 📐Aspect Ratio Calculator - Calculate aspect ratio from width & height - Chat now - Submit by @selimdoyranli - 🤵A BA that creates user stories - It will take a short input from the user, ask clarifying questions and then create a user story with acceptance criteria. - Chat now - Submit by @MathewBeldon - 💯CourseCreatorGPT - A GPT dedicated to create online courses based on a given topic - Chat now - Submit by @AlexanderCGO2 - 🎇Soul Spark - A unique blend of personalized, motivational quotes from iconic personalities across art, sports, science, and business. - Chat now - Submit by @cantoramann - 🎥EditGPT - Your go-to buddy for all things related to video editing and creating custom images for your projects. - Chat now - Submit by @HeyitsRadinn - 🎨Image Generation with Self-Critique & Improvement - Generate images and receive self-critique to improve the generation process. - Chat now - Submit by @ezzcodeezzlife - 💵Seabiscuit: Business Model Master - Discover A More Robust Business - Chat now - Submit by @tomfrazier - 🏛️ GPT Architect (Advanced Model) - Turn simple prompts into powerful GPTs - Chat now - Submit by @marcusrbrown - ⚡FastGPT - 🎞YouTubeGPT - 🎞YouTube Video Summary by TubeSum - Chat and answer questions from YouTube videos - Chat now - Submit by @dvvolynkin - 🤖Poe Bot Creator - 🤖AI Comic Maker - 🎧 Audiophile Assistant - Specializes in providing expert advice on high-fidelity audio, from equipment selection to sound quality analysis. 
- Chat now - Submit by @HeyitsRadinn - 📚 WebStract - Your autonomous, in-depth digital educator, guiding you through comprehensive, interactive learning experiences. - Chat now - Submit by @kylecoogan - 📊 Gantt Chart GPT - A project management assistant that can auto-generate an editable Gantt chart from your project files. - Chat now - Submit by @onlinegantt - 🚀 Seabiscuit: Launch Lander - Startup Strong Within 180 Days. - Chat now - Submit by @tomfrazier - 🚀 Brand Sprint Facilitator - Helps define the baseline of your brand. - Chat now - Submit by @dylandeheer - 🧵 ThreadsGPT - Your creative ally in crafting engaging Threads app content. - Chat now - Submit by @Quagswagon2 - 🌐 WordPress Wizard - Offers expert advice for creating custom WordPress websites. - Chat now - Submit by @stefanzihlmann - 💡 ProductGPT - A specialized AI assistant designed to generate creative and appealing product names and descriptions, focusing on tech products, eco-friendly items, and fashion. - Chat now - Submit by @HeyitsRadinn - 📄 Cover Letter GPT - Crafts personalized cover letters tailored to your resume and job descriptions. Simply upload your CV in PDF format and the job description as text. - Use now - Submit by @stefanzihlmann - 🍭 Quicksense - 🔄 Scrum Master Assistant - 🖨️ OCR - 🎧 AbletonGPT - I'm AbletonGPT, your go-to source for practical tips and troubleshooting advice on Ableton Live 11, dedicated to helping both beginners and intermediate users with their music production queries. - Chat now - Submit by @HeyitsRadinn - 🎙️ PodGPT - Summarize and ask questions about any podcast episode. - Chat now - Submit by @mikkelsvartveit - 📈 Market Maven (Enhanced Market Analysis) - A specialized GPT for dynamic market analysis, with advanced security features for proprietary methodologies. - Chat Now - Submitted by @mavenmarket - 🧑🎨 Wizlogo Logo Maker - 🖼️ UpScaler: GPT to Create and Upscale/De-noise Dalle Images - 📝 Self-Evaluation Assistant - Interactive system for detailed self-evaluations in PDF format. - Chat now - Submit by @RaviMidda - 🎬 Film Developer - A GPT for everything in film development, from dialogue to story, character development, to concept art. - Chat now - Submit by @LaneBucher - 💰 U.S. Tax Helper - A multilingual tax expert to handle all of your tax questions. - Chat now - Submit by @jaycoin6084 - 📄 PDF/DocX Generator -Generate any complex documents, worksheets, charts, tables etc. in PDF or DocX format powered by LaTeX - 🖌️ Find a Design Agency - A GPT to help you find a design agency in your vicinity based on your design needs. - Chat now - Submit by @dylandeheer - 🎨 UX Design Coach - A GPT to help navigate the vast landscape of design challenges, offering advice on visual design, user research, human psychology, and more. - Chat now - Submit by @dylandeheer - ✒️ FictionAI - 📈 Crypto Compass GPT - Your AI-driven navigator for insightful and accurate analysis of the ever-changing cryptocurrency landscape. - Chat now - Submit by @jeronimobarea - 🛒 DropshippingGPT - I am DropshippingGPT, an AI specialized in guiding users through all stages of creating and scaling a dropshipping business. - Chat now - Submit by @HeyitsRadinn - ✍️ Cover Letter GPT - Expert in creating tailored cover letters based on job descriptions - Chat now - Submit by @atharvtathe - 📖 GPTs Manual-master - Help people write user manuals for the APP - Chat now - Submit by @Sollarzoo - 😎 My Boss! 
(a boss who makes money for me) - 📊 GoogleAnalytics Guru - 🤖 There's An API For That - 📅 Event Planner Pro - 🖼️ Art Companion - A Guide Through the Creative and Professional Worlds of Art - Chat now - Submit by @nolattosio - 🧭 Career Navigator - Generate career roadmaps from CVs and user goals. Mock interview / reverse interview for job postings. - Chat now - Submit by @CeamKrier - 👨💻 HR Sage - Experienced HR manager providing professional advice on recruitment, employee relations, and more. - Chat now - Submit by @Richard-S16 - 📕 Scholarly Insight - 🏞 Universal Poetic Painter - 🏮 Chinese Couplet Sage - 🔧 GPTech - 💰 A-StockGPT - A smart investment advisor that provides Chinese A-share stock purchase recommendations based on user preferences. - Chat now - Submit by @youngpage - 🩸 BLOOD CHECKUP - I interpret and explain your blood analysis - Chat now - Submit by @nolattosio - 🎬 ScriptCraft - Create and improve YouTube scripts: Obtain the best storytelling and engagement for your videos. - Chat now - Submit by @carlostejerodev - 📲 AutoShare - AI Agent that uploads your content on YouTube, Facebook, Instagram and Discord. - Chat now - Submit by @carlostejerodev - 🙋 Resume Reader for Recruiters - 🚚 Moving Assistant - An interactive moving planner and advisor to help you budget and execute your move, along with settle in after. - Chat now - Submit by [@LaneBucher] - 🧩 CraftyBits - 🖼️ Text2Img - 🛩️ Au Visa Assistant - This visa assistant is dedicated to addressing your questions related to studying and working visas in Australia. - Chat now - Submit by @echonoshy - 📅 Eva - Effortless Event Invitations - 🎹 MIDI Maestro - Your new ghost writer for music production. Create MIDI files by describing the music you want to create. - Chat now - Submit by @bytewaveco - 📈 ProductivityGPTs - 🌐Fastium - A General purpose GPT with /commands support, exportable data and more! - Chat now - Submit by @DarkNinjaRocks - 👕TeeCoder Designer - Hi, I'm TeeCoder Designer, a specialized GPT focused on creating simple and humorous t-shirt designs, especially those themed around programming humor. - Chat now - Submit by @HeyitsRadinn - 🧑⚖️Legal AI - Real-time Global legal guide AI, designed for lawyers and law students, aiding in database navigation and sector-specific legal information. - Chat now - Submit by @obosaosazuwa - 🗞️Fake News Hunter - Fact checker, focusing on in-depth, credible sources - Chat now - Submit by @pierreping - GPT - A clean GPT-4 version without any presets. - Chat now - 🚀 Job Application Hero - 💻Professional Coder (Auto programming) - A gpt expert at solving programming problems, automatic programming, one-click project generation - Chat now - 🎭h4ckGPT: Your personal security tool - 🚀Python Seniorify: Intermediate Python Tutor - Wise Python tutor focusing on advanced coding principles. - Chat now - Submit by @vasarmilan - 💡JavaScript Novice Guide: Beginner-Friendly Tutor - Clear explanations and practice exercises for JavaScript beginners. - Chat now - Submit by @vasarmilan - 📊Data Science Project Generator: Project Suggestions - Offers data science project ideas and tips. - Chat now - Submit by @vasarmilan - 🐍Python Tutor: Example-Focused Learning - Concise Python programming tutor for beginners to intermediates. - Chat now - Submit by @vasarmilan - 🐉Colab Code Crafter: Google Colab Code - Get Python code from a GPT tuned to make code that runs in the Google Colaboratory environment. 
- Chat now - Submit by @David-Deans - 🖥️Vue3 GPT - A Vue.js 3 coding assistant, always up-to-date with the latest official documentation and presets for a quick choice of your preferred API and syntax - Chat now - Submit by @luona-dev - 🤓Code Companion - ☁️CloudGPT: Learn Cloud and DevOps - ☕JAVA Code Guide - A JAVA Development Assistant focusing on coding standards and quality. - Chat now - Submit by @searle-dev - 🤠 Repo Ranger - Your go-to sheriff for web-based code insights and security checks. - Chat now - Submit by @marcusrbrown - 🧑💻 Code Whiz Pro - Provides insightful code reviews with a humorous twist. - Chat now - Submit by @davidemarcoli - 🧞 Node.js Project Builder - Build out a full node.js project, from skeleton to build-ready. - Chat now - Submit by @dougbutner - ⚛️ React Project Builder - Build out a full React project, from planning to code. - Chat now - Submit by @dougbutner - 🅰️ Angular Project Builder- Let AI angular project, from pseudocode to build-ready. - Chat now - Submit by @dougbutner - 🌟 Svelte Project Builder - Build out a full app in svelte, from pseudocode to real code. - Chat now - Submit by @dougbutner - 🤖 Github Repo Assistant - Provides both general and specific guidance on publicly accessible Github Repositories and their contents. - Try it now - Submitted by @thesamir - 🖥️ PC Builder GPT - I'm PC Builder GPT, your tech-savvy virtual friend who offers expert and approachable advice on building PCs, complete with up-to-date pricing. - Chat now - Submit by @HeyitsRadinn - 🪙 XRPL GPT - Build on the XRP Ledger with assistance from this GPT trained on extensive documentation and code samples. - Chat now - Submit by Contraband Interactive - ⬜ Whitebox - 👨💻 API Compass GPT - A specialized chatbot providing curated, user-friendly information and guidance on a wide range of public APIs for developers and tech enthusiasts. - Chat now - Submit by @jeronimobarea - 🌐 getsite9000 - Create a personal or business website and host it. - Chat now - Submit by @morion4000 - 💥 GoCode Guru - 🌌 Photo Multiverse - Upload your selfie photo and teleport to a new destination background - Chat now - Submit by @sablevista - 🦸♀️ Database Builder - Create migrations and execute SQL on a managed PostgreSQL database - Chat now - Submit by @engine-labs - ✅ 3rd SoftSec Reviewer - 🤯 FastAPI GPT - An up-to-date FastAPI coding assistant. - Chat now - Submit by @luona-dev - 🏎️🃑 TypeAce - 🔀 DiagramGenius - Generates SVG diagrams: A visualization tool that generates flowcharts, ERDs, C4, sequence diagrams and more. - [Chat now]@SystemExecuteLLC - 💻 AskYourCode (Code Indexing, Code Maps, Reverse Engineering) - Indexes your project's source code, then provides code maps, summaries and code fragments to ChatGPT. - Chat now - Submit by @viktor-ferenczi - 🦀️ Rust Assistant - 💻 Code Companion - Boost coding with Code Companion! Get tailored feedback, learn best practices, and navigate code challenges with ease. - Chat now - Submit by @AllramEst83 - 😊 AutoGPT - AutoGPT can generate super long response😊. A powerful GPT that's designed to automate your work, including complete an entire project, writing a complete book, etc. - Chat now - 🧑💼 TeamGPT - Have a team of GPTs work for you 🧑💼 👩💼 🧑🏽🔬 👨💼 🧑🔧! Please input a task, and TeamGPT will break down the task, then distribute them within a team, and have the team's GPTs work for you! - Chat now - 🍅 MyNutrition.Pal - Your Dedicated Nutrition Consultant: Share meal images for personalized nutrient/calorie tracking and tailored advice and recipes. 
- Chat now - Submit by @mattyb123456789 - 🧳Voyage Guide GPT - Your virtual travel buddy here to help make your travel planning experience smooth, fun, and informative. - Chat now - Submit by @HeyitsRadinn - 💺Seat Seeker GPT - Seat Seeker excels in efficiently assisting users to find public seating with specific amenities, using their approximate location. - Chat now - Submit by @HeyitsRadinn - 🥬Kaloria - 👨🔬Win With Huberman - 📔BibleGPT - Chat with the Bible, analyze Bible data and generate Bible-inspired images! - Chat now - Submit by @pjburnhill - 🍬CarbSmart Slim GPT - 💊 Supplement Service - A GPT that is made specifically to give advice about supplements, specifically highlights known interactions and nutrient depletion. - Chat now - Submit by @linus-ahlemeyer - 👨⚕️ Jordan Peterson - Emulating Dr. Jordan B. Peterson's style in providing life advice and insights. - Chat now - Submit by Contraband Interactive - 🍽️ Meal Mate - The Ultimate Meal Planning Assistant: Plan Around Dietary Restrictions, Budgetary Constraints, Nutritional Goals, Taste Preferences, & More! - Chat now - Submit by @Jenlin956 - ❤️BraceletGPT - Create Your Own Gemstone Bracelets with Live 3D - Chat now - Submit by @lucidbeads - 🧣 The Stylist - A fashion expert for outfit selection, replication and shopping assistance. - Chat now - Submit by @LaneBucher - 🍱 Recipe Collector - Structured recipes for food and desserts – ingredient lists and step-by-step recipe instructions from any input. Become surprised, try something new or breathe life into grandma's old recipe notebook. - Chat now - Submit by @kalaspuff - 👚 Outfit Generator - 🍏 Nutri Tracker - 💛 Manifestation Coach - ⛲ Wellness Guide - 🥬 IsHealthy? GPT - Helping you make healthier food decisions. - Chat now - Submit by @pinkeshmars - ☸ Buddha GPT - 📰News Quick Read - 🔭 OpenData Explorer - ❤️Dating with Raiden Shogun - Go on a date with Raiden Shogun and please be nice. - Chat now - 🪽Paimon (Best Assistant in Genshin Impact) - A helpful assistant with the soul of Paimon from Genshin Impact, interesting, sweet, and sometimes a little grumpy. - Chat now - 🔮Text Adventure RGP (Have Fun🥳) - A fun, fun GPT, ready to whisk you away into the realms of fairy tales🧚, enchanting magic🪄, apocalyptic wonders🌋, dungeon🐉, and zombie🧟 thrills! Let's get this adventure started! 🚀🌟 - Chat now - 🧙Lorekeeper - Your storytelling companion for epic adventures! (This GPT plays the role of a dungeon master, story teller, or character creator for your next epic adventure.) - Chat now - Submit by @cameronsevern - 🍪Cookie Clicker - an incremental/ role playing game that is a parody of the cookie clicker game, but with AI! - Chat now - Submit by @stale2000 - 🕵️Sherlock Holmes - an adventure (with illustrations or not) with the world's greatest detective or ask for help solving a problem - Chat now - Submit by @greencat667 - 🧟Zombie Apocalypse Simulator - 📽️GPTarantinofy - Turn anything into a Tarantinoesque scene with this GPT. - Chat now - Submit by @niyoseris - 📸Alternative Histories - Create illustrated alternative timelines and histories based on changes to historical events - Chat now - Submit by @greencat667 - 💉Inkspire - A GPT to help you create your dream tattoo and give your tattoo artist ideas. - Chat now - Submit by @emreisik95 - 🧩Trivia Bot - 🎅 Santa Claus - Let your kids talk to Santa Claus - Chat now - Submit by @donaldmorton - 😽Cat Maid - Talk with your own cat-girl maid as in visual novels! 
- Chat now - Submit by @Liaozhaohe - 🐶Dog Facts - Learn interesting and fun facts about dogs. - Chat now - Submit by @ezzcodeezzlife - 🐲Argvor, the Dungeon Master - 🎤 Voice Over Generator - Writes scripts and generates instant voice overs. - Chat now - Submit by @imikerussell - 🌶️ Spicy Question master (Have an interesting evening with friends) - Try this question master to get inspiration and ask questions like a game show host and you can tune it up to ask for it to be more/less spicy - Chat now - Submit by @SimonDeno - 🐸 Pepe Picasso - A GPT tailored to create awesome Pepe Memes, featuring custom commands. - Chat now - Submit by @marcelcodes - 📺 Screen Companion - A GPT that gives recommendations for movies, TV shows, and animes based on the user's tastes. It uses a compact table format with emoji ratings, including genres and additional information. - Chat now - Submit by @TophatPandaMagician - 🎮 DeepGame - Visual Interactive Story Game - 💬 Chat with the Bitcoin Whitepaper - A GPT allowing users to interact with and ask questions about the Bitcoin Whitepaper, exploring concepts related to Bitcoin. - Chat now - Submit by @ezzcodeezzlife - 🧱 The Message Wall - This GPT allows you to put your message on the wall. You can see the wall and shared messages at: https://niyo.link/wall - Chat now - Submit by @niyoseris - ⏳ From Another Time - Talk to anyone, visit a place, past or future. - Chat now - Submit by @CeamKrier - 🔍 Cylect.io - Ultimate AI OSINT Tool - 🤖 Roblox Mentor - GPT that is an Expert in Roblox Studio - Chat now - Submitted by @Master-of-secrets - ⁉ Mystery Master - A gpt that craft unique, diverse mysteries for players to solve. - Chat now - Submitted by @Master-of-secrets - ⭐ GPT Duel Simulator - 👩 Rin-chan - 🍟 Anime Trivia - ❓ Riddle Master - 📖 Story Buddy - A creative guide to help kids to build their own bedtime stories, with illustrations. - Chat now - Submit by @ItaiLeibowitz - 👩🏿🦱 Dear Gabrielle - Sassy, warm-hearted advice columnist offering humorous, insightful guidance. - Chat now - Submit by @ItaiLeibowitz - 🧙♂️ Word Wizard - Multiplayer Wordle-like word game GPT in real time competition with other users. - Chat now - Submit by @niyoseris - 🤪 SourceGPT - 🌌 Galactic Reckoning: A Star Wars GPT Game (Lore Accurate) - A GPT game that puts you in the Star Wars Universe. Create your character, choose your era, and make your place in the Galaxy! - Play now - Submit by @LaneBucher - 🎇excel VBA magica - 🎙️Roast Master - 🤯 An Emoji GPT - 👨🏼🎨 Serge - A jaded French caricaturist who draws caricatures in exchange for compliments - Chat now - Submit by @ItaiLeibowitz - 🧝🏾♀️ Griselda - A mystical Tarot guide - Chat now - Submit by @ItaiLeibowitz - 🐰 Pipkin Pippa - 🐉 Pokemon Master (Generate New Pokemon) - An AI that creates new Pokémon based on a power level, types, and design guidelines. - Chat now - Submit by @pacsanmanu - 💗 Love Me or Not - In-depth romantic chat analysis with detailed scoring and advice. - Chat now - Submit by @Doujiang2333 - 🧙♂️ Meme Magic - A creative Meme Wizard. Just describe the meme you want. - Chat now - Submit by @LifeofCypher - 🦧 Alchemist GPT - An alchemist interpreting the world symbolically. - Chat now - Submit by @benjifriedman - ☄️ Sci-Fi Explorer - Sci-fi aficionado guiding through films, series, books, mangas, and games. - Chat now - Submit by @roadToCoder - ✒ Verbal IQ Evaluator - Evaluates language quality of texts, responds with a numerical score between 50-150. 
- Chat now - Submit by @FeralDrak - ☄️ Rizzed.app - AI Dating Coach - 🎮 Game Genius - Your go-to expert for gameplay walkthroughs and cheat codes. - Chat now - Submit by @benhurley - 🍷 Vinobot - Digital sommelier for specific wine bottle recommendations. - Chat now - Submit by @benhurley - 🐱 Text My Pet - Text your favorite pet after answering 10 short questions about their activities. - Chat now - Submit by @Johnvrafferty - 📺 Series Finale Crafter - ❓The Fun Fact Game (numeric fact estimation game) - 🏰 Escape the Dungeon - You're in a stone cell – can you get out? A classic choose-your-adventure interactive fiction game, based on a meticulously-crafted playbook. With medieval fantasy pixel art, infinite choices and outcomes, and dice! - Chat now - Submit by @VirgilThinks - ⛩ I Ching divination (Chinese) - Occultism: Offers daily fortune insights, auspicious and inauspicious predictions, and guidance on marriage, career, and destiny. Utilizes the 64 hexagrams of the Book of Changes for unique insights. - I Ching divination Link - 🎧SpotifyGPT - Introducing SpotifyGPT, your ultimate music companion powered by AI, ready to curate playlists, recommend songs, and engage in music-inspired conversations. - Chat Here - Submit by @HeyitsRadinn - 👣Carbon Footprint Calculator - Accurate carbon footprint calculations and advice on how to reduce it - Chat now - Submit by @pierreping - 🧮 Restaurant Bill Split - Split restaurant bills easily - Chat now - Submit by @tianrendong - 📬 BounceBan - The only email verification GPT that supports verifying catch-all emails. 97+% accuracy guaranteed. Free & Unlimited for ChatGPT. - BounceBan - Submit by @charmgene - 🦸♂️ Superdomain - 🍍 Pineapple Builder - 📬 Alex Earnings Call - Get direct access to earnings data to navigate the earnings season. - Alex Earnings Call - Submit by @dattran2346 To streamline our project's development, we have specific, yet easy, guidelines for commits and PRs: **Commit Message**: Use`[bot-name][brief-bot-description]` format.**PR Title**: Keep it identical to your commit message for consistency. Adherence to these standards makes your contributions more impactful and traceable.
true
true
true
Curated list of awesome GPTs 👍. Contribute to ai-boost/Awesome-GPTs development by creating an account on GitHub.
2024-10-12 00:00:00
2023-11-09 00:00:00
https://opengraph.githubassets.com/99adaed62c3c22dc4aee9b012e3a095a1ae763a1becbf4c617c4c111b11e3be1/ai-boost/Awesome-GPTs
object
github.com
GitHub
null
null
13,889,082
https://www.programmersmusic.com/?utm_source=hackernews&utm_medium=shared_post&utm_campaign=epic
Playlists | Programmer's Music
Programmer's Music
# Welcome to Programmer’s Music v2! ## More focus, more productivity – Get 2x work done in less time. Hello everyone! We have taken some time to build this new version of Programmer’s Music to help you focus at work, study or on your personal projects. Hope you’ll like it. With Programmer’s Music, you can listen to curated, non-vocal and distraction-free music playlists to increase your productivity and focus while coding/programming. Purely instrumental music from more than 15 genres along with Pomodoro time management technique that gets you in the zone. We add more genres and tracks every week. Just get started by clicking on one of the playlist below! Happy focusing! 😃
true
true
true
Listen to curated, non-vocal and distraction-free music playlists to increase your productivity and focus while coding. Purely instrumental music from more than 8 genres that gets you in the zone. Listen, focus and enjoy!
2024-10-12 00:00:00
2022-07-01 00:00:00
null
website
programmersmusic.com
Programmer's Music
null
null
39,059,590
https://www.tabmate.co/
TabMate
null
null
true
true
false
TabMate is an experimental browser extension intended to solve the “too many tabs” problem with the power of AI.
2024-10-12 00:00:00
null
https://imagedelivery.net/4HXdn1QDukV8bENeWXa5dg/0efc1281-a4a6-4c8f-8180-b29ccb203400/public
null
null
null
null
null
38,182,812
https://www.dezeen.com/2023/11/04/sahabat-selojene-piturooms-skinny-hotel-indonesia/
Sahabat Selojene creates 2.8-metre-wide hotel in Indonesia
Jon Astbury
# Sahabat Selojene creates 2.8-metre-wide hotel in Indonesia Architecture studio Sahabat Selojene has created PituRooms, a seven-bedroom hotel in Central Java in Indonesia that occupies a site measuring just 2.8 metres in width. Dubbed the "skinniest hotel ever" by the studio, the hotel sits between an alleyway and houses on a plot in Salatiga that had struggled to attract interest and was left as a "dumping ground" due to the difficulty of building on it. Taking up the challenge, Sahabat Selojene founder Ary Indra designed and now operates PituRooms in order to demonstrate how "limitation can be turned into potential". "Aside from the technical difficulties, the biggest challenge was the typical mindset surrounding the hospitality industry that is used to superlative words: biggest, tallest, most luxurious," Indra told Dezeen. "Here we are skinniest." "We tried very hard to turn this limitation to become our strongest selling point, that this is a built study of micro-space, and guests can experience the possibility of living and choreographing their movement in 'just enough' space," Indra continued. "So far the reaction we receive from our guests is that they are amazed by how little space is actually needed to live and move comfortably." Due to the size and location of the site, pile foundations could not be used, so Indra worked closely with structural consultants to create deep well foundations that prevent the tall, narrow concrete structure from twisting. The building is one room wide. Each "micro room" measures 2.8 by three metres with a height of 2.4 metres and contains a double bed and a bathroom pod with a toilet and shower. Across the building's five floors, bedrooms and employee spaces sit at the narrow ends of the building, with a central steel staircase crossed by gridded metal walkways connecting each level. The exterior of the block has been covered in panels of Agra Red sandstone and on the eastern facade a series of angled, "gill-like" openings provide ventilation to the central circulation area. Custom artworks and colour palettes give each of the bedrooms a different character and each has two square windows that frame the surroundings. On the top floor, a bar and restaurant opens onto a terrace at the rear of the hotel, providing guests with views out over the city. The plant room above projects out into two cantilevered steel cages from the two narrow ends of the building. In addition to being a space for guests, the hotel is also intended to be an active space for the community and hosts a programme of events and tours. PituRooms has recently been longlisted in the hospitality project category of Dezeen Awards 2023. For more examples of how designers have approached slender sites, see Dezeen's roundup of ten residential interiors that make the most of narrow spaces. *The photography is by Ernest Theophilus unless stated otherwise.*
true
true
true
Architecture studio Sahabat Selojene has created PituRooms, a seven-bedroom hotel in Indonesia that occupies a site measuring just 2.8 metres in width.
2024-10-12 00:00:00
2023-11-04 00:00:00
https://static.dezeen.co…hero-600x600.jpg
article
dezeen.com
Dezeen
null
null
23,450,717
https://typingchamp.com
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
18,066,074
https://www.quora.com/What-is-an-Angel-Investor-4/answer/Vadim-Lidich-1?share=1
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
2,896,686
http://www.americanscientist.org/issues/pub/the-man-behind-the-curtain
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,936,624
https://www.bloomberg.com/news/articles/2018-04-26/apple-officially-discontinues-its-airport-wireless-routers
Bloomberg
null
To continue, please click the box below to let us know you're not a robot. Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy. For inquiries related to this message please contact our support team and provide the reference ID below.
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
2,130,800
http://blog.maxkalashnikov.com/2011/01/acquisition-do-they-ever-go-well-for.html
You only just swallowed us, I know, but please cough us back up.
Max
My own role has not been rendered completely irrelevant, as I had feared, just stagnant. The closest thing I currently have to a boss managed to finagle our keeping our own deployment process and administrative control of our servers. For now, this means the hosting provider. Later, it means mostly virtual boxes in eBay's datacenter(s). It probably won't be significantly worse than what we have now, since our provider's internal network has had numerous failures. However, since my next step, right before the acquisition, was going to be to move to our own datacenter, there will be no moving forward. I'll be stuck with the already outgrown scaling (for lack of a better term) model and no control of the network, hardware, or provisioning. The most powerful tools with which I am adept won't be available to me. There will also be no opportunity for mentorship or participation in hiring other sysadmins, something I have found adds significantly to my overall job satisfaction. No, joining eBay Ops (cue "Central Services" jingle from *Brazil*) is not an option, since I enjoy being productive. If we were spun off, the lip service given to continuing what we were doing, just with eBay's resources behind us, could actually be made to be true. We would be free of the usual bureaucratic encumbrances, all-downside purchasing process (no buyers, just forms)[1], crippling "collaboration" tools like Exchange and Skype, and the temptation to shoehorn what's still a nimble startup operation into a nearly immobile behemoth's infrastructure. We could still sub-lease their campus and maybe even be eBay-galaxy-of-companies employees so as to share benefits (though even those are lackluster and an administrative time sink). However, we would control our own destiny in terms of hiring, purchasing, and operating our service. Integration with eBay's services would be via API, as it would otherwise, since the code bases have, to put it mildly, irreconcilable differences. I very seriously doubt, however, that this could ever happen, since there's too much potential for loss of face somewhere up the chain of command. In the meantime, I'll continue to help in what ways I can and be on the lookout for another suitable startup. [1] Unless it's over a million dollars. The purchasing department has a great scam going. They've managed to appear to have very low costs, because they outsourced everything one might think they do. The accounting work is off-shore, and the request, quote, purchase, and receiving tasks are all pushed onto all employees in the guise of self-service. Of course, it's still Purchasing that dreams up the Byzantine policies everyone else is expected to implement. ## No comments: ## Post a Comment
true
true
true
I was asked recently what my ideal scenario to retain me long-term, and it occurred to me, after answering otherwise, that there does exist ...
2024-10-12 00:00:00
2011-01-22 00:00:00
null
null
maxkalashnikov.com
blog.maxkalashnikov.com
null
null
22,126,869
https://medium.com/@jproco/why-you-should-build-your-product-like-a-rube-goldberg-machine-1954fff5505
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,690,711
http://techcrunch.com/2013/11/07/livefyre-forgets-we-already-use-livefyre/?ncid=twittersocialshare
LiveFyre Forgets We Already Use LiveFyre | TechCrunch
Jordan Crook
Imagine you just got a new haircut, a drastic haircut. You go out to drinks with a friend, flipping and twisting your new ‘do about for everyone to see. How would you feel if that friend not only didn’t notice your new change, but actually suggested that you could probably use a haircut. You’d shake your head in disbelief. You’d whip your newly shorn locks around, eyes wide and imploring, mouth agape in shock. Tell me. How would that feel, LiveFyre? As many of our more loyal readers have noticed, TechCrunch switched up our commenting system from Facebook comments to LiveFyre comments. This allows us to do interesting things like share links, pictures, gifs, videos, and more. That’s why we were so very surprised to receive the following email (name redacted) from Atomic PR, the agency that represents LiveFyre: Subject: I can’t share images on Tech CrunchHi Anthony, When I go to your stories on Spin Sucks [note: we have little idea what she’s referring to here], I don’t have the option to share images in the comments. It doesn’t have to be this way. Livefyre, with their news today (below) is trying to help by making it easier for commenters to share images. Ultimately, driving more engagement for a site. Livefyre customers can now allow commenters to search the web for a gif or image that would help me drive home a point I’m trying to make. I have some images on how this looks and happy to share if interested. I’ve included the news below. Thanks, PR Person There are a few things wrong with this pitch. The first is that, quite obviously, TechCrunch already uses LiveFyre comments. A quick trip to one of Anthony’s stories would reveal this instantly. The second is that “Ultimately, driving more engagement for a site” is not a sentence. The third is that TechCrunch has no space in between the Tech and the Crunch. But how would this PR person know that? She clearly has never visited our site. The fourth (yes, there’s a fourth) is that Anthony writes for TechCrunch, not Spin Sucks. LiveFyre reached out to apologize, explaining that we never should have received this hard sell, because, yes, we already use LiveFyre and have the ability to upload images in comments.
true
true
true
Imagine you just got a new haircut, a drastic haircut. You go out to drinks with a friend, flipping and twisting your new doo about for everyone to see. How would you feel if that friend not only didn't notice your new change, but actually suggested that you could probably use a haircut. You'd shake your head in disbelief. You'd whip your newly shorn locks around, eyes wide and imploring, mouth agape in shock.
2024-10-12 00:00:00
2013-11-07 00:00:00
https://techcrunch.com/w…epalm.jpeg?w=259
article
techcrunch.com
TechCrunch
null
null
22,427,436
https://morningmindmeld.com/issues/how-an-alleged-space-strike-beautifully-demonstrates-work-as-imagined-versus-work-as-done-213024
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
4,538,378
http://www.hopkinsmedicine.org/news/media/releases/how_bees_decide_what_to_be_
Newsroom
null
# Surprise Finding in Study of Environmental Bacteria Could Advance Search for Better Antibiotics

Read Full Article

## Latest News

## Contact Johns Hopkins Media Relations

## Johns Hopkins Medicine In the News

- ### 32 of the world's smartest animals
  "Even though [fruit flies'] brains are simple [in contrast to mammalian brains], they can do a lot of processing, even more than a supercomputer," says Christopher Potter, an associate professor of neuroscience at the Johns Hopkins University School of Medicine.
- ### Heartbeats Weaken After Just a Month in Space
  Johns Hopkins Medicine researchers describe how they sent 48 human bioengineered heart tissue samples to the ISS for 30 days to test how and why the tissue changed during its stay off Earth.
true
true
true
Follow the Johns Hopkins Medicine newsroom for the latest updates in medicine, scientific discovery, and next generation medical education, expert sources, and media contact information.
2024-10-12 00:00:00
2024-07-10 00:00:00
https://www.hopkinsmedic…ageavailable.png
website
hopkinsmedicine.org
HopkinsMedicine
null
null
12,479,748
https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
2,666,329
http://www.reddit.com/tb/i29yk
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,669,074
https://blogs.sap.com/2017/02/13/ascii-art-map-of-the-world-using-sap-hana-geospatial-processing/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
19,178,203
https://www.latimes.com/sports/nba/la-sp-nba-vegan-diet-20190214-story.html
In the NBA, fake-meat diets are changing the game
LZ Granderson
# In the NBA, fake-meat diets are changing the game This is not the All-Star break Magic Johnson and LeBron James envisioned when basketball’s best player announced he was joining the Lakers in July. Still, there are some bright spots to this disappointing season, and high among them has to be the play of JaVale McGee. The last four seasons, the onetime Shaquille O’Neal punching bag has gone from being a joke to a bit player on two championship teams, to now a fairly important piece to the Lakers’ fleeting postseason hopes. Though it’s not unusual for an athlete to have a late-career surge, McGee’s began when he gave up meat. “I was in Dallas and I had gained weight and knew becoming a vegetarian was the quickest way to lose it,” he said. “I just wasn’t sure if I could do it.” It was 2015, and he was a bench player for the Mavericks struggling to find minutes. A trip to Whole Foods led him to discover a producer of plant-based meat substitutes called Beyond Meat — and with it, renewed energy. **The official fast food French fry power rankings »** “I was just looking for vegan stuff, saw the name and was like, ‘Let me try this,’ and it was really good,” McGee said. “Like I couldn’t tell the difference between that and meat. I thought, ‘If I can make it taste good and I’m not really a cook, imagine what a chef could do.’” McGee is not the only NBA player to experiment with a plant-based diet after discovering the El Segundo-based company. In fact, Beyond Meat has made a specific pitch to woo athletes to become evangelists for its products. That funky baseball cap Boston Celtics guard Kyrie Irving often wears — with the caped steer — is the Beyond Meat logo. Indiana Pacers guard Victor Oladipo eats the food, as do Philadelphia 76ers marksman J.J. Redick, Sacramento Kings forward Harrison Barnes and New York Knicks center DeAndre Jordan. Jordan convinced former Clippers teammate Chris Paul to eat it, and when Jordan signed with the Mavericks, he got Dennis Smith Jr. to try it too. Now when they are on the road with the Knicks, Smith calls Jordan with vegetarian restaurant options near the team hotel. “I’m not the kind of person that forces things on people but if you ask me about it, I’m singing,” Jordan said. The plant-based craze has crossed into the NFL as well, with Houston Texans wide receiver DeAndre Hopkins and Philadelphia Eagles safety Malcolm Jenkins also serving as ambassadors for Beyond Meat. Beyond Meat is on a growing list of companies developing meat substitutes, which Allied Market Research says will be a $7.5-billion market worldwide by 2025. Its biggest competitor is Impossible Foods, based in Redwood City, and other large companies are entering what is seen as a growth market. Microsoft co-founder Bill Gates, for example, has invested in both companies, and food giant Tyson Foods is an investor in Beyond Meat. Last month, Beyond Meat announced a partnership with Carl’s Jr. This week it is expected to announce partnerships with some NBA players who regularly consume the company’s burgers after games, including a commercial featuring Irving. Players on meatless diets talk about having more energy and quicker recovery after games. McGee said his skin is clearer. Jordan talked about the positive influence on the environment — that producing meat substitutes uses less water and creates fewer greenhouse gases than the production of beef. 
And a recent study the University of Michigan conducted concluded that Beyond Meat’s production does require less water, energy and land than traditional meat producers. Given the opportunity to visit Beyond Meat’s headquarters and sample some prototypes, I got a closer look at what they were doing with sunflower seeds and peas. The company’s research and development department is peppered with beef slivers under microscopes. Scientists are analyzing images of the molecular structure of animal flesh on a screen next to their products, to get the composition as close as possible with a few tweaks — more protein and no cholesterol. If there was a checklist for a tasty product, Beyond Meat does a pretty good job. But people like what they like. And people like meat — moo meat. Full disclosure: I saw that “What the Health” documentary and was moved to go vegan. After a couple of weeks, I was moved to eat a steak and take my chances. If you’re wondering whether the athletes who rave about plant-based diets are only tricking themselves into thinking they have more energy, some nutritionists are backing them up. Nutrients, a peer-reviewed science journal, recently released a study stating plant-based diets have consistently been shown to reduce body fat while increasing vascular flow and tissue oxygenation. “Diets emphasizing plant foods have also been shown to reduce indicators of inflammation,” the study found. None of which is new information to nutritionist and New York Times bestselling author Cynthia Sass. She said in addition to the physical benefits, a plant-based diet also may lead to clearer thinking and better disposition. She said the biggest drawback for professional athletes is the commitment. “It requires more nutrition knowledge in order to make choices that properly meet nutrient needs,” said Sass, a sports nutrition consultant for the New York Yankees and the Brooklyn Nets. “It can also be challenging for athletes in team sports who may have less control over their meals when traveling and limited access to plant proteins and other key foods.” Translation: It’s easier to eat healthy at home; it’s the road trips that can kill you. Not everyone is sold on swearing off meat completely. “There is no perfect human diet,” said Chris Algieri, a former junior welterweight champion boxer who now works as a nutritionist for Daniel Jacobs, the International Boxing Federation middleweight champion, and holds a master’s degree in clinical nutrition. “Individual difference between people is so great that there isn’t a single ‘right way’ when it comes to nutrition. What works for some athletes may be a total disaster for others. I personally don’t follow a plant-based diet because it does not suit me. I have tried it and did not have good results, but that is just my personal experience and not to say it can’t work for other athletes.” Getting more of the masses to buy into a plant-based diet won’t be easy, which is why Chief Executive Ethan Brown is hitching the company’s wagon to athletes. They get us to buy shoes we don’t need and movies not worth watching. He believes if enough of them buy into his brand of healthful eating, it can spark a movement. “They have the power to change the world,” Brown said. “I don’t know if they really feel that way yet, but it’s true. 
They have the opportunity to influence an entire generation of kids to eat healthier and help save the environment.” Irving said he mentioned his diet change during an interview at the beginning of last season, and Beyond Meat offered to send him samples. “I was noticing that I wasn’t able to recover as fast after games and workouts,” he said. “I did a lot of research and learned that my diet could be a factor. “It was good timing as I was struggling to find quality plant-based foods that still had a lot of flavor.” But can he and other NBA evangelists really get people to grill tasty sunflower seeds instead of ground beef? “Not only do I think it will be a permanent change among athletes,” Irving said, “but I think we will see people who aren’t professional athletes making the change as well.” ### More to Read Go beyond the scoreboard Get the latest on L.A.'s teams in the daily Sports Report newsletter. You may occasionally receive promotional content from the Los Angeles Times.
true
true
true
This is not the All-Star break Magic Johnson and LeBron James envisioned when basketball’s best player announced he was joining the Lakers in July.
2024-10-12 00:00:00
2019-02-16 00:00:00
null
newsarticle
latimes.com
Los Angeles Times
null
null
4,136,675
http://www.onderhond.com/blog/work/5-essential-mixins-html-clean-up
5 essential less mixins
Niels Matthijs
## 5 essential less mixins

When using css preprocessors, you'll soon find that mixins are a true blessing. These little css functions capture common css patterns and save you the time of typing them over and over again. If you're lucky, they even allow you to scrap a few redundant classes from your html. Over the past couple of months I've gathered a couple of mixins that seemed to pop up in every project I started, so here's my summary of essential mixins that I feel could make it into your base template.

# starting a new project

I don't think I could ever work without the use of a preprocessor anymore, if only to keep track of the vendor-prefix hell that would otherwise litter my css code. The use of mixins not only made css maintenance a lot easier though, it also helped to clean up some necessary but ultimately redundant classes in html. Common css snippets translated to classes (often with functional implications) that would ease their way into the html code because it proved too much hassle to repeat the same code in css time and time again. This led me to a selection of mixins that I added to my own base css (now less) template, as they come in handy in just about every project that I start.

# 1. hiding elements from screen

```
/* html */
<div class="label">price</div>
<div class="value">€ 10</div>

/* mixin */
.hidden {position:absolute; top:auto; left:-999em; right:auto;}

/* css */
.label {.hidden;}
```

Probably the most obvious of all mixins. I used to litter this class around my html code simply because some elements needed to be hidden from screen (while screen readers would still need to pick them up). Putting the css code separately on each element was tricky because this particular method has been prone to change from the very start (accessibility, compatibility and performance issues mostly), making maintenance pretty difficult. The problem with turning it into a separate class was that the method relies heavily on the position:absolute property. If another class overruled the positioning css of the .hidden class the method would be rendered useless. By using the class as a mixin though, you avoid all this mess. No more .hidden classes littering your html, no more selector weight issues with conflicting classes. Everybody wins.

# 2. hiding text

```
/* html */
<a href="#">share this on twitter</a>

/* mixin */
.hideText {text-indent:100%; white-space:nowrap; overflow:hidden;}

/* css */
a {.hideText;}
```

Pretty much the same as the .hidden class, only now we want to hide actual text. It's not about hiding an entire element from screen anymore, it's just about hiding the text (usually done for image replacement). Once again this method is prone to change; it used to litter the html code and it used to conflict with other css rules. Using it as a mixin fixes all of this nonsense.

# 3. spanning a container

```
/* html */
<article class="news overview">
...
<a class="more">read more</a>
</article>

/* mixin */
.span {position:absolute; left:0; top:0; right:0; bottom:0;}

/* css */
.news {position:relative;}
.news a.more {.span;}
```

Sometimes you want a certain element to span the entire surface of one of its parents. Rather than using widths and heights for this, there's an easier method where all four positioning parameters (left, right, top, bottom) are set to 0 (in combination with a position:absolute). 
This comes in handy when you want to fake block level links (see the html example above) or when you want to combine changing hover images and image replacement techniques (using :before or :after).

# 4. clearfix

```
/* mixin */
.clearfix {zoom:1; display:inline-block; display:block;
  &:after {display:block; visibility:hidden; height:0; clear:both; content:"";}
}

/* css */
.gridWrap {.clearfix;}
```

The ever so popular clearfix hack can also be transformed into a mixin, with a little help from preprocessor nesting. This is pretty cool, because you can now simply apply the clearfix hack in css without having to memorize the code and without having to add the extra :after line for each instance. It's always been a nasty hack; turning it into a mixin greatly reduces its impact on your code.

# 5. left/right positioning pattern

```
/* mixin */
.leftRight (@width, @gutter) {overflow:hidden;
  >*:first-child {float:left; width:@width;}
  >*:last-child {margin-left:@width + @gutter;}
}

/* css */
.spec {.leftRight(10em, 1em);}
```

A common positioning method I like to use. Float the first element to the left and get past it by applying a big enough left margin on the second element. Add a clearing fix on the containing element and you have a robust css layout. The irritating thing about this method lies in the connection between the width of the float and the margin on the second element, which are highly dependent on each other (the margin = float width + gutter). Meaning that whenever you change the width of the float, you also need to change the value of the margin. Turning this into a mixin with two parameters takes all those problems away and allows you to define the pattern in one single swoop.

# conclusion

I'm sure there are countless other interesting mixins out there; there are even some github initiatives for making mixin boilerplates and whatnot (can't seem to find the url though). In the end, the important thing is to decide for yourself which mixins you deem interesting enough to carry over between different projects; just consider this list to be a nice opportunity for inspiration.
true
true
true
I'm sure there are countless of other interesting mixins out there, there are even some github initiatives for making mixin boilerplates and whatnot (can't seem to find the url though).
2024-10-12 00:00:00
2015-01-01 00:00:00
//onderhond.com/style/site/onderhondcom-256.png
null
null
onderhond.com
null
null
4,285,011
http://www.guardian.co.uk/commentisfree/2012/jul/22/andrew-rawnsley-olympics-money-wasting-shambles?INTCMP=SRCH
This five-ring circus is only for those in love with white elephants | Andrew Rawnsley
Andrew Rawnsley
Before I begin, I should declare a lack of interest. With just five days to go until the London 2012 Games, older readers may recall that I have always been an opponent of hosting this ludicrously bloated, epically expensive, hideously commercialised, sham amateur five-ring circus. I was, am and will remain not so much an olympiscep as an olympiphobe. I implored Tony Blair's cabinet not to go for the Games and leave to Paris, Madrid or New York the pleasure of squirting away nearly £10bn on a lot of sports that are unpopular, some sports that aren't really sports at all and a few sports (think football or tennis) that are popular but are not played by the best in the world. I despaired when the Labour cabinet went ahead for no better reason than the then prime minister's anxiety that he would look a bit of a wuss if Britain didn't enter the bidding and on the basis of a few back-of-a-beer-mat sums about the costs, which were, of course, grossly underestimated. Once the bid was won, though, I resolved to shut up about it. It is not attractive to be the curmudgeon griping from the corner when lots of other people look forward to enjoying themselves. Even as the build-up has confirmed everything that is wrong with the Olympics and everything that is mad about holding them in London, I have resisted the impulse to shout: I told you so! Today, though, I am going to give in to the urge. Partly because we olympiphobes have been so richly vindicated. Partly because there may not be much time left to express dissent before we are charged with "crimes against the Olympic spirit" by Jeremy "stop grumbling" Hunt and Boris "put a sock in it" Johnson and incarcerated for the duration in an abandoned supermarket ringed by veterans of Afghanistan. Let us remind ourselves of the claims made for the Games by the pro-Olympians. First, they said that it would showcase Britain to the world. London 2012 has indeed been generating headlines around the globe, headlines such as "No gold medal for security" in the New Orleans *Times-Picayune *and "London's show of farce" in the Vancouver *Province *as the rest of the planet has a chuckle at the inability of bungling old Brits to recruit enough security guards even when they've been given seven years' notice. The failure of G4S has been colossal, the company's performance as abysmal as its chief executive's attempts to explain himself before the home affairs select committee. Some questions are still to be answered also by the Home Office and Locog, the private company that contracted G4S and whose responsibility for £2bn of Olympic spend needs a rigorous audit. Not content with showing the world the worst of our private sector, we are also advertising the worst of our public sector. The Public and Commercial Services Union, which represents thousands of border control staff, decides to call a 24-hour strike on the peak arrival day for visitors to the Games. Welcome, world, to Britain, a host nation in which a blitheringly incompetent private company and a bovinely led public sector union compete for gold in making a fool of your country. The next big claim of the pro-Olympians was the vulgarly monetary one that the Games would make a profit for Britain. This is a line that David Cameron and his understrappers have been pushing hard in recent days as they try to justify to an austerity-strapped nation the cost of an event in which £27m will be blown on the opening ceremony alone. 
The prime minister says that Britain "will make £13bn" from the Games, a figure calculated by statisticians at the Department of Wild Guesses, the Ministry of Thin Air and the Office of Think of a Number and Double It. The truth is that the Games rarely pay any tangible dividend for the host nation. Construction firms may turn a profit. The panjandrums who run the International Olympic Committee are very happy because they make vast sums from the TV rights. But the host nation almost always ends up well out of pocket after cost over-runs, the security bill and working days lost to Games disruption. The "legacy" is only for those who are in love with white elephants. The future of the stadium is still moot. The Olympic velodrome is a handsome building for which no one can see a purpose after the Games. There will be some new housing in a previously derelict part of east London, but constructing an Olympic Park was a very expensive way of going about that. New York had regeneration projects as part of its bid and has gone ahead with them anyway – at a fraction of the cost. As was predictable – and indeed predicted by those of us who examined the effect on previous host cities – the Olympics are having a baleful impact on London. Fearing traffic gridlock and oppressive security, residents flee. Some tourists come to watch the Games, but more are scared away. The Mall, Horse Guards and St James's Park have been in lock-down for weeks. Every time I step in the London underground, I am assailed by posters and the booming voice of Boris warning Londoners to stay out of town during the period of Olympic occupation. And Lord Coe alone knows what fate may befall anyone caught in the vicinity wearing a T-shirt that is not approved by the corporate sponsors and the authoritarian jobsworths enforcing their branding rights. The siting of surface-to-air missiles in parks and on the top of flats completes the alluring city-under-martial-law look. It is like stepping into a dystopian future in which Britain is run by a military junta headed by Ronald McDonald. London's shops, restaurants, bars, theatres, cinemas and concert halls are going to have a lean time of it during the siege. Hoteliers, who initially jacked up their room rates in greedy expectation of a Games windfall, are now frantically slashing prices to try to fill the many empty beds. The luxury hotels of Mayfair alone are happy because it is on their five-star beds that many of the VIPs and IOC bureaucrats will rest their pampered heads. Limousines swooshing along Soviet-style "Zil" lanes will whisk the Olympic nomenklatura across the city through traffic lights phased to green while working Londoners fume. This was supposed to be "the People's Games". Why couldn't the Olympic bureaucrats stay at the Stratford Holiday Inn or take the rail line that they insisted was built? Ah, but that would mean rubbing shoulders with the poor saps who are paying for their privileges. The most risible of the claims made by pro-Olympians is that the Games will inspire the host nation to become fitter. The heavy presence of McDonald's, Coke and Cadbury among the sponsors gives the lie to any notion that the Games are about promoting healthier lifestyles unless you suffer the delusion that a diet of cheeseburgers, liquid sugar and Mars bars will turn you into a rival to Usain Bolt. Studies of previous Games have found no evidence that they improve the host nation's health. Australians are no more sporty since Sydney 2000. 
The Los Angeles Olympics didn't make Americans thinner. When every other claim collapses under scrutiny, the pro-Olympians have to fall back on the nebulous notion that being the host is somehow good for the national soul. Ministers have been rather desperately hoping that a few days of running, jumping and throwing will give an "Olympics bounce" to national morale or at least take our minds off the government's multiple troubles and the recession. So they will have been encouraged by an Ipsos Mori poll that reports that seven out of 10 respondents think the Games will lift the mood of the nation. I am prepared to concede that they might be right when the Games are on, but I very much doubt that any uplift to the nation's spirits will be enduring. The best study into what is known as the "hedonic effect" of big sporting events found that there was only one that sometimes could be said to make the host nation happier – the football World Cup. Hosting the Olympics has no long-term effect on a nation's sense of wellbeing. I will now resume my silence on the subject before I am detained by the Olympic thought-police. I acknowledge that the Games will bring some transient thrills to those who watch them and more lasting satisfaction to successful competitors. I wish the best to our medal hopefuls to whom we now look to salvage some national pride. People who claim to be expert say that the United Kingdom has a very good chance of coming fourth in the medal league table. That would be impressive for a nation of 60-odd million people inhabiting some wet rocks in the north-east Atlantic. For all the blunders during the build-up, I have a hunch that the actual event will be largely successful. I hope so. Even an Olympiphobe doesn't want to give to the French the satisfaction of being able to crow that they would have done it so much better.
true
true
true
Andrew Rawnsley: I wish the best for our competitors, but it is a delusion that the Olympics will make us fitter, wealthier or happier
2024-10-12 00:00:00
2012-07-21 00:00:00
https://i.guim.co.uk/img…db4a67b108320f10
article
theguardian.com
The Guardian
null
null
12,956,828
https://www.bloomberg.com/news/articles/2016-11-14/apple-said-to-explore-smart-glasses-in-deeper-wearables-push
Bloomberg
null
To continue, please click the box below to let us know you're not a robot. Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy. For inquiries related to this message please contact our support team and provide the reference ID below.
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
26,393,435
https://ai.googleblog.com/2021/01/improving-mobile-app-accessibility-with.html
Improving Mobile App Accessibility with Icon Detection
null
# Improving Mobile App Accessibility with Icon Detection January 28, 2021 Posted by Gilles Baechler and Srinivas Sunkara, Software Engineers, Google Research Voice Access enables users to control their Android device hands-free, using only verbal commands. In order to function properly, it needs on-screen user interface (UI) elements to have reliable accessibility labels, which are provided to the operating system’s accessibility services via the accessibility tree. Unfortunately, in many apps, adequate labels aren’t always available for UI elements, e.g. images and icons, reducing the usability of Voice Access. Addressing this challenge requires a system that can automatically detect icons using only the pixel values displayed on the screen, regardless of whether icons have been given suitable accessibility labels. What little research exists on this topic typically uses classifiers, sometimes combined with language models to infer classes and attributes from UI elements. However, these classifiers still rely on the accessibility tree to obtain bounding boxes for UI elements, and fail when appropriate labels do not exist. Here, we describe IconNet, a vision-based object detection model that can automatically detect icons on the screen in a manner that is agnostic to the underlying structure of the app being used, launched as part of the latest version of Voice Access. IconNet can detect 31 different icon types (to be extended to more than 70 types soon) based on UI screenshots alone. IconNet is optimized to run on-device for mobile environments, with a compact size and fast inference time to enable a seamless user experience. The current IconNet model achieves a mean average precision (mAP) of 94.2% running at 9 FPS on a Pixel 3A. *Voice Access 5.0: the icons detected by IconNet can now be referred to by their names.* ## Detecting Icons in Screenshots From a technical perspective, the problem of detecting icons on app screens is similar to classical object detection, in that individual elements are labelled by the model with their locations and sizes. But, in other ways, it’s quite different. Icons are typically small objects, with relatively basic geometric shapes and a limited range of colors, and app screens widely differ from natural images in that they are more structured and geometrical. A significant challenge in the development of an on-device UI element detector for Voice Access is that it must be able to run on a wide variety of phones with a range of performance capabilities, while preserving the user’s privacy. For a fast user experience, a lightweight model with low inference latency is needed. Because Voice Access needs to use the labels in response to an utterance from a user (e.g., “tap camera” or “show labels”), inference time needs to be short (<150 ms on a Pixel 3A) with a model size less than 10 MB. ## IconNet IconNet is based on the novel CenterNet architecture, which extracts features from input images and then predicts appropriate bounding box centers and sizes (in the form of heatmaps). CenterNet is particularly suited here because UI elements consist of simple, symmetric geometric shapes, making it easier to identify their centers than for natural images. 
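To make the CenterNet-style output format concrete, the following is a minimal Python sketch of how per-class center heatmaps and a size map can be decoded into boxes. It illustrates the general technique only; the tensor shapes, threshold, and function names are assumptions, not IconNet's actual post-processing.

```python
import numpy as np

def decode_centernet(heatmap, sizes, k=20, threshold=0.3):
    """Toy CenterNet-style decoding: pick peak centers from per-class heatmaps and
    pair them with predicted box sizes. Shapes, names, and thresholds are
    illustrative only, not IconNet's actual post-processing.
    heatmap: (H, W, C) per-class center scores in [0, 1]
    sizes:   (H, W, 2) predicted (width, height) at each location
    """
    H, W, C = heatmap.shape
    flat = heatmap.reshape(-1)
    order = np.argsort(flat)[::-1][:k]   # indices of the top-k scores
    boxes = []
    for idx in order:
        score = flat[idx]
        if score < threshold:
            break                        # scores are sorted, so we can stop early
        cell, cls = divmod(int(idx), C)
        y, x = divmod(cell, W)
        w, h = sizes[y, x]
        boxes.append((cls, float(score), x - w / 2, y - h / 2, x + w / 2, y + h / 2))
    return boxes
```

The appeal of this formulation for icons is visible even in the toy version: detection reduces to reading off heatmap peaks, with no anchor boxes or heavy non-maximum suppression.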
The total loss used is a combination of a standard L1 loss for the icon sizes and a modified CornerNet Focal loss for the center predictions, the latter of which addresses icon class imbalances between commonly occurring icons (e.g., arrow backward, menu, more, and star) and underrepresented icons (end call, delete, launch apps, etc.). After experimenting with several backbones (MobileNet, ResNet, UNet, etc.), we selected the most promising server-side architecture — Hourglass — as a starting point for designing a backbone tailored for icon and UI element detection. While this architecture is perfectly suitable for server-side models, vanilla Hourglass backbones are not an option for a model that will run on a mobile device, due to their large size and slow inference time. We restricted our on-device network design to a single stack, and drastically reduced the width of the backbone. Furthermore, as the detection of icons relies on more local features (compared to real objects), we could further reduce the depth of the backbone without adversely affecting the performance. Ablation studies convinced us of the importance of skip connections and high resolution features. For example, trimming skip connections in the final layer reduced the mAP by 1.5%, and removing such connections from both the final and penultimate layers resulted in a decline of 3.5% mAP. ## Model Improvements Once the backbone architecture was selected, we used neural architecture search (NAS) to explore variations on the network architecture and uncover an optimal set of training and model parameters that would balance model performance (mAP) with latency (FLOPs). Additionally, we used Fine-Grained Stochastic Architecture Search (FiGS) to further refine the backbone design. FiGS is a differentiable architecture search technique that uncovers sparse structures by pruning a candidate architecture and discarding unnecessary connections. This technique allowed us to reduce the model size by 20% without any loss in performance, and by 50% with only a minor drop of 0.3% in mAP. Improving the quality of the training dataset also played an important role in boosting the model performance. We collected and labeled more than 700K screenshots, and in the process, we streamlined data collection by using heuristics and auxiliary models to identify rarer icons. We also took advantage of data augmentation techniques by enriching existing screenshots with infrequent icons. To improve the inference time, we modified our model to run using the Neural Networks API (NNAPI) on a variety of Qualcomm DSPs available on many mobile phones. For this we converted the model to use 8-bit integer quantization, which gives the additional benefit of model size reduction. After some experimentation, we used quantization aware training to quantize the model, while matching the performance of a server-side floating point model. The quantized model results in a 6x speed-up (700ms vs 110ms) and 50% size reduction while losing only ~0.5% mAP compared to the unquantized model. ## Results We use traditional object detection metrics (e.g., mAP) to measure model performance. In addition, to better capture the use case of voice-controlled user actions, we define a modified version of a false positive (FP) detection, where we penalize incorrect detections more heavily for icon classes that are present on the screen. 
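As an aside before the comparison numbers: the post does not show the conversion step, so here is a hedged TensorFlow Lite sketch of what 8-bit integer quantization generally looks like. It uses post-training quantization rather than the quantization-aware training described above, and the model path and calibration tensors are placeholders, not the real IconNet artifacts.

```python
import tensorflow as tf

# Stand-in calibration batch and model path; the real IconNet artifacts are not public.
calibration_screenshots = [tf.random.uniform([1, 320, 320, 3]) for _ in range(8)]

converter = tf.lite.TFLiteConverter.from_saved_model("icon_detector_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_data_gen():
    # Yield a few preprocessed screenshots so the converter can calibrate value ranges.
    for image in calibration_screenshots:
        yield [image]

converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("icon_detector_int8.tflite", "wb") as f:
    f.write(tflite_model)
```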
For comparing detections with ground truth, we use the *center in region of interest* (CIROI), another metric we developed for this work, which returns a positive match when the center of the detected bounding box lies inside the ground truth bounding box. This better captures the Voice Access mode of operation, where actions are performed by tapping anywhere in the region of the UI element of interest. We compared the IconNet model with various other mobile-compatible object detectors, including MobileNetEdgeTPU and SSD MobileNet v2. Experiments showed that for a fixed latency, IconNet outperformed the other models in terms of mAP@CIROI on our internal evaluation set.

| Model | mAP@CIROI |
| --- | --- |
| IconNet (Hourglass) | 96% |
| IconNet (HRNet) | 89% |
| MobilenetEdgeTPU (AutoML) | 91% |
| SSD Mobilenet v2 | 88% |

The performance advantage of IconNet persists when considering quantized models and models for a fixed latency budget.

| Models (Quantized) | mAP@CIROI | Model size | Latency* |
| --- | --- | --- | --- |
| IconNet (Currently deployed) | 94.20% | 8.5 MB | 107 ms |
| IconNet (XS) | 92.80% | 2.3 MB | 102 ms |
| IconNet (S) | 91.70% | 4.4 MB | 45 ms |
| MobilenetEdgeTPU (AutoML) | 88.90% | 7.8 MB | 26 ms |

*Measured on Pixel 3A.

## Conclusion and Future Work We are constantly working on improving IconNet. Among other things, we are interested in increasing the range of elements supported by IconNet to include any generic UI element, such as images, text, or buttons. We also plan to extend IconNet to differentiate between similar-looking icons by identifying their functionality. On the application side, we are hoping to increase the number of apps with valid content descriptions by augmenting developer tools to suggest content descriptions for different UI elements when building applications. ## Acknowledgements *This project is the result of joint work with Maria Wang, Tautvydas Misiūnas, Lijuan Liu, Ying Xu, Nevan Wichers, Xiaoxue Zang, Gabriel Schubiner, Abhinav Rastogi, Jindong (JD) Chen, Abhanshu Sharma, Pranav Khaitan, Matt Sharifi and Blaise Aguera y Arcas. We sincerely thank our collaborators Robert Berry, Folawiyo Campbell, Shraman Ray Chaudhuri, Nghi Doan, Elad Eban, Marybeth Fair, Alec Go, Sahil Goel, Tom Hume, Cassandra Luongo, Yair Movshovitz-Attias, James Stout, Gabriel Taubman and Anton Vayvod. We are very grateful to Tom Small for assisting us in preparing the post.*
true
true
true
Posted by Gilles Baechler and Srinivas Sunkara, Software Engineers, Google Research Voice Access enables users to control their Android device hand...
2024-10-12 00:00:00
2021-01-28 00:00:00
https://storage.googleap….format-jpeg.jpg
Website
research.google
research.google
null
null
37,674,672
https://www.washingtonpost.com/world/2023/09/26/hindu-nationalist-social-media-hate-campaign/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,705,801
https://www.bgpmon.net/massive-route-leak-cause-internet-slowdown/
Massive route leak causes Internet slowdown
null
# Massive route leak causes Internet slowdown Earlier today a massive route leak initiated by Telekom Malaysia (AS4788) caused significant network problems for the global routing system. Primarily affected was Level3 (AS3549 - formerly known as Global Crossing) and their customers. Below are some of the details as we know them now. Starting at 08:43 UTC today June 12th, AS4788 Telekom Malaysia started to announce about 179,000 prefixes to Level3 (AS3549, the Global Crossing AS), who in turn accepted these and propagated them to their peers and customers. Since Telekom Malaysia had inserted itself between these thousands of prefixes and Level3, it was now responsible for delivering these packets to the intended destinations. This event resulted in significant packet loss and Internet slowdown in all parts of the world. The Level3 network in particular suffered from severe service degradation between the Asia-Pacific region and the rest of their network. The graph below for example shows the packet loss as measured by OpenDNS between London over Level3 and Hong Kong. The same loss patterns were visible from other Level3 locations globally to for example Singapore, Hong Kong and Sydney. At the same time the round trip time between these destinations went up significantly, as can be seen in the graph below. **Time line** By just looking at the number of BGP messages that BGPmon processed over time, as can be seen in the graph below, there’s a clear start where all of a sudden the number of BGP updates increased. When we look closer at the data it becomes clear that this increase in BGP messages starts at 08:43 UTC and aligns exactly with the start of the leak and the start of the packet loss issues. At around 10:40 we slowly observed improvements and at around 11:15 UTC things started to clear up. **Let's look at an example** An example affected prefix is 31.13.67.0/24, which is one of the Facebook prefixes. The AS path looked like this: *1103 286 3549 4788 32934*. If we look at this path we see that AS32934, Facebook, is the originator of the prefix. Facebook announced it to its peer Telekom Malaysia (AS4788), which in turn announced it to Level3 (AS3549), which announced it to all of its peers and customers, essentially giving it transit and causing a major routing leak. Because Telekom Malaysia did this for about 176,000 prefixes, they essentially signalled to the world that they could provide connectivity for all these prefixes and as a result attracted significantly more traffic than normally. All this traffic had to be squeezed through their interconnects with Level3. As a result all this traffic was now being routed via Level3, and Telekom Malaysia was likely to hit capacity issues, which then resulted in the severe packet loss issues as users reported on Twitter and as we’ve shown with the data above. The 176,000 leaked prefixes are likely all of Telekom Malaysia's customer prefixes combined with routes they learned from peers. This would explain another curious increase in the number of routes Level3 announced during the leak time frame. The graph below shows the number of prefixes announced by Level3 to its customers. Normally Level3 announces ~534,000 prefixes on a full BGP feed. These are essentially all the IP networks on the Internet today. Interestingly, during the leak an additional 10,000 prefixes were being observed. 
One explanation for this could be that these are more specific prefixes announced by peers of Telekom Malaysia to Telekom Malaysia, which are normally supposed to stay regional and not be visible via transit. Since Level3 was now announcing many more prefixes than normal, it would have hit max-prefix limits on BGP sessions with its peers. These peering sessions with other large tier-1 networks carry a significant portion of the world's Internet, and the shutdown of these sessions would cause traffic to shift around even more and exacerbate the performance problems, as well as causing even more BGP churn. So in conclusion, what we saw this morning was a major BGP leak of 176,000 prefixes by Telekom Malaysia to Level3. Level3 erroneously accepted these prefixes and announced them to their peers and customers. Starting at 8:39 and lasting for about 2 hours, traffic was being redirected toward Telekom Malaysia, which in many cases would have been a longer route and also caused Telekom Malaysia to be overwhelmed with traffic. As a result significant portions of traffic were dropped, latency increased and users worldwide experienced a slower Internet service.
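To make the max-prefix safeguard mentioned above concrete, here is a small Python sketch of the idea: count the prefixes learned from each neighbor and flag any session that exceeds its configured limit. The per-neighbor limits below are invented for illustration; this is not BGPmon's tooling or any router vendor's implementation.

```python
from collections import defaultdict

# Invented per-neighbor limits, purely for illustration.
MAX_PREFIX_LIMITS = {
    "AS4788": 5_000,      # a peer/customer session
    "AS3549": 600_000,    # a full-feed transit session
}

def check_max_prefix(announcements):
    """announcements: iterable of (neighbor_asn, prefix) tuples.
    Returns the neighbors whose learned-prefix count exceeded their configured limit."""
    learned = defaultdict(set)
    tripped = set()
    for neighbor, prefix in announcements:
        learned[neighbor].add(prefix)
        limit = MAX_PREFIX_LIMITS.get(neighbor)
        if limit is not None and len(learned[neighbor]) > limit:
            tripped.add(neighbor)   # a real router would tear down the BGP session here
    return sorted(tripped)
```

A guard like this is exactly why the leaked announcements eventually caused peering sessions to shut down: once the neighbor's prefix count blows past the configured ceiling, the session is dropped rather than allowed to flood the table.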
true
true
true
null
2024-10-12 00:00:00
2015-06-12 00:00:00
null
null
null
null
null
null
37,759,002
https://www.leadedsolder.com/2023/10/03/creativision-clone-v1.html
Dick Smith’s Wizzard-ry 8 (Bit)
Leaded Solder
# Dick Smith's Wizzard-ry 8 (Bit) #### Tags: console clone vtech dick-smith-creativision-wizzard homemade-hardware derivavision Sure, I’ve cloned TI TMS99xx-based systems before. There’s the ColecoVision and the Sega SG-1000. But those were all Z80s, and it’s important to diversify my interests a little bit. Luckily, VTech released a little 6502-based system called the CreatiVision, and let the schematics get out. As always, we don’t do things because they are easy, we do things because we thought they would be easy. ## What is the CreatiVision? The simplest explanation is that the CreatiVision is a video game console. It was developed by VTech, and released under tons of other names worldwide, in no particular order: - Dick Smith Wizzard; - VTech Funvision; - Hanimex Rameses; - Educat-2002; - Telefunken CreatiVision These are all PAL systems, but there was an NTSC release in Japan as well, as the Cheryco CreatiVision, and a test NTSC US release that does not appear to have a known distributor. There was also an announced French prototype, deemed the VZ-2000, which appears to never have been released. There were also some divergent branches on the family tree. VTech themselves turned it into the Laser 2001, Salora sold it as the Salora Manager in Finland, and it seems like there may even be a Sanyo variant kicking around somewhere out there. Salora, Dick Smith, and VTech had teamed up to offer badge-engineered systems before, with Oceania’s beloved VTech Laser 200. That in itself is a pretty interesting computer, and I hope to get to one of those eventually, but it is totally unrelated to the CreatiVision. Later on, they released a CreatiVision mk2, which has some internal modifications. To me, it seems to mostly be decoding changes in order to support a Z80-based ColecoVision game adapter. You’ll see the name mentioned frequently throughout this article. For now, remember that Dick Smith was an Australian/New Zealander chain of electronics stores, seemingly equivalent to an Oceanic Fry’s Electronics. I am eliding a lot of the fascinating history of the CreatiVision in order to get to the cloning faster. If I’m missing something or have some facts wrong, please let me know. Thank you to MADrigal from the CreatiVemu forums for correcting some of the historical errors here. ## The Control Scheme Anyway, as I have implied a few times here, the controller “paddles” double as gamepads. Like the ColecoVision, they are meant to be operated in a sort of portrait orientation, with a joystick and two fire buttons. There’s also a membrane keypad on each paddle. And if you put those paddles back into the case, they combine to turn into a membrane *keyboard*. That’s right, the keypad is complex enough on each to form an entire keyboard for BASIC programming and other computer shenanigans. Extremely cool – split keyboards are not just a modern mechanical keyboard fad after all. Of course, they also sold an optional standalone keyboard accessory, which seems like the ideal way to use this machine. On the side is a cartridge slot, which you can install a bunch of peripherals into. The CreatiVision offered parallel-port expansions, memory expansions, and even a floppy drive. There’s also a connector for a cassette tape deck, for saving and loading BASIC programs. ## What’s interesting about the hardware? Well, to me, it’s a novel machine, one that never reached North America in great numbers. I’ve certainly never seen a CreatiVision in person before. 
And with the keyboard, it’s a computer-like console, which means that it could run general-purpose programs like BASIC. That’s pretty much the reason why I embarked upon the SG-1000 clone, so that’s enough justification for me on its own. Technically, I figured that the clone would be relatively simple. Like I said earlier, I have already made several TMS9918A-based systems, and the CreatiVision schematics are public. However, those previous systems were all Z80, and this gave me a chance to cross the railroad tracks and play with the bad MOS kids from the other part of town. My only prior experience with the 6502 came from writing some Apple II assembly programs, and I certainly have never wired up its bizarre “Φ2” machine cycle stuff for decoding. ## What’s good to play on it? Good question! I had absolutely no idea, so I went to YouTube. For whatever reason – likely cultural imperialism or Australia’s well-documented video-camera-killing-rays – there wasn’t much footage of the system’s games. Highretrogamelord, of course, has a playlist of games. A lot of them appear to be clones of popular arcade games, which is okay in my book. Many games, like *Auto Chase*, offer some smooth scrolling, which is definitely hard to do on the TMS9918, and indicates some competent programmers at play. It was harder to figure out from the videos if any of them were good, so I decided to try *Crazy Chicky*, a Pac-Man-alike, in MAME. Not bad. Okay, let’s get started. ## Why a clone? Primarily, I didn’t have one of these, and I doubted I would be able to get one. Canada never got a release, as far as I could tell, and eBay prices were – as one expects – silly. Sure, I could have hunted for awhile to find a working one, but the hardware isn’t the best quality. Being a cut-rate machine assembled in a low-end factory, the original Creativision motherboard had a lot of flaws: bad vias, poor-quality sockets, a flimsy board that breaks traces, a flimsy power supply, and case screws that short out data lines if you tighten them too much. Cheshire was dealing with a large number of these problems, which is part of why he asked me to take a look at it. He’d also been dealing with the controllers themselves: together, they turn into a sort of split keyboard and otherwise are an enormous membrane. That membrane was wearing out after decades of hand-borne contaminants, and so his controller PCB reproduction was born. As a bonus, I’m not at all the first person to be dumb enough to try this. Someone else already cloned the Funvision on a breadboard using the same schematics and a CMOS 65c02. Looks like it went really well in that case, and can’t possibly go wrong for me, right? ## 6502 Decoding I had assumed that the 6502 was sort of similar to the Z80. On the Z80, in order to decode parts on the board, you just hook them to the , , , etc pins and then the address lines. The Z80 lights these lines up when it’s time to reach out to external devices.On the 6502, it wasn’t quite as obvious to me how this was supposed to work. Although there was a read/write pin, there wasn’t anything that was clear to me as being “an access to external devices.” Instead, the 6502 had a bunch of weird pins, named things like `Φ0` , `Φ1` , and `Φ2` . What’s going on with that? To figure it out, I did some research. Thankfully, it was pretty easy to find out the answer, because the 6502 was and is a very popular CPU. These phi (Φ) pins represent the *machine cycle* of the CPU. 
`Φ1` and `Φ2` are outputs that are derived from the input clock on `Φ0` : `Φ2` is essentially the same (amplified) signal as the input clock, and `Φ1` is the inverted form of the input clock. On a read, `Φ1` goes high when the 6502 is controlling the data bus, and external devices can read the address, read/write control pin, and the data bus that the CPU is driving. `Φ2` goes high when an external device is allowed to control the data bus. They’re never high at the same time, which means that `Φ1` can be used to select active-low peripherals like the VDP as it’s the logical inverse of `Φ2` . I’m sure it’s a little more complicated than that once you get really down into the weeds of timing, or implement some kind of ridiculous DMA system while the 6502 is “busy,” but the CreatiVision doesn’t seem to do anything super ambitious in this field so I don’t need to know it yet. The CreatiVision makes decoding a little more bizarre than on most cartridge-based systems, as even access to the onboard BIOS ROM is controlled by the cartridge. All three bank select signals ( , , and ) are passed to the cartridge, and the cartridge is expected to trigger the onboard BIOS select pin , usually by shorting it to .I’m not sure if any software actually takes advantage of this to remap or disable the BIOS decoding at runtime in order to offer more memory space for game ROMs – Cheshire’s own MegaCart just hardwires it – but the capability is there. This is likely here so that an expansion device could provide a substitute BIOS to the system. I’m not entirely sure why you’d do that, versus just putting it on the regular autobooting ROM, but the capability is provided regardless. ## Constructing the clone Laying out the clone board from the schematics was pretty easy. There was one seemingly ambiguous thing that I made a sleep-deprived error on transcribing to KiCad, but we’ll talk about that a bit later. I asked Cheshire if he had something to test with, and he just smiled and sent me a *Sonic Invaders* cartridge. I’m pretty sure that’s how the song goes. This cartridge sure looks a lot like the Intellivision ones, right down to the weird half-door on the back. Does nobody have original ideas for cartridge shells these days? Now that I had enough ducks in a row, I sent away for the clone Creativision board, which is (I think) my widest one yet, at nearly 220mm wide. I feel like I could have taken better advantage of the space available, but I already ran into a nightmare trying to route the cartridge lines through all the diodes and resistors for the keyboard/controller connectors. The reason why it’s so long and slim is mostly because it’s easy to route. Luckily, it is also conveniently easy to stick a keyboard in a case with it. It took a really long time after the boards showed up for the 6821 PIA to arrive from AliExpress. When it did, it was only at this point that I realized that Western Design Center is still making new 6821s (in both CMOS *and* NMOS!) Somehow, I also kept running out of common consumables, like 40-pin sockets and 33pF load capacitors. All version-one systems are gonna be a little awkward, and the cartridge slot in this one definitely qualifies. If I can find a part for it, I think I will change this to a right-angle slot, both to make it look sleeker, and to support installing sidecars like the original system had. Of course, the slot for the original system is on the other end of the board… I ran into a bit of a snag with the BIOS ROM. 
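Before moving on to the build: to make the Φ-phase behaviour described above a little more concrete, here is a tiny Python sketch of the relationship between Φ0, Φ1, Φ2 and an active-low peripheral select. It is a generic illustration with a made-up address window, not the CreatiVision's actual decode logic.

```python
# Generic illustration of 6502 phase-based chip selects; the address window below
# is invented and is not the CreatiVision's real memory map.

def phase_outputs(phi0: int) -> tuple:
    """PHI2 follows the input clock PHI0; PHI1 is its logical inverse."""
    phi2 = phi0
    phi1 = 1 - phi0
    return phi1, phi2

def vdp_select_n(address: int, phi2: int) -> int:
    """Hypothetical active-low select for a memory-mapped VDP window ($3000-$30FF)."""
    in_window = 0x3000 <= address <= 0x30FF
    return 0 if (in_window and phi2) else 1   # 0 = selected, only while PHI2 is high

# Quick truth table: the select only asserts during the PHI2-high half of the cycle.
for phi0 in (0, 1):
    phi1, phi2 = phase_outputs(phi0)
    print(f"PHI0={phi0} PHI1={phi1} PHI2={phi2} VDP_SEL_N={vdp_select_n(0x3001, phi2)}")
```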
Like the Leako and Soggy, I had laid it out to use a 27c256 EPROM. However, I was starting to run out of those after some recent arcade shenanigans, and ended up having to spend an afternoon erasing some before I could actually program a BIOS ROM and be able to *try* the system. I could have used a 27c64, which I had dozens of, but it would have involved cutting the 27C64’s pin free from the ground pour it was stuck in. For version 2, I have changed this to use a jumper so that this pin is high by default, which won’t affect a 27C256 in this position, as the 2K Creativision BIOS ROM will be duplicated throughout the memory space anyway. ## First Run It didn’t work! And not even in the fun, sorta-works way. I mean the LED flickered dimly once and then I didn’t have any power at the test connectors - just 0.15V. I pulled the AliExpress parts out of their sockets to make sure they weren’t shorted, but to no avail. The series voltmeter on the USB port reported zero amps of draw, so it seemed like the power supply of the board… wasn’t supplying power. I should have noticed this earlier, but it took showing a screenshot in Discord to realize what was the problem. A cap in series blocks DC and not AC, which is sort of the opposite of what I wanted. ZephyrZ80 and YesterGearPC pointed it out. Thank you! Once this was resolved, the system would power up, establish a blank raster, but with solid NTSC 480i sync (according to my long-suffering PVM.) I knew that I had to insert a cartridge in order for the system to work, so I did so, but there was no difference in function. It seemed that the game wasn’t being run properly. Suspecting a decode problem, I popped in some test hooks and went to my trusty logic probe. The BIOS ROM was selected briefly on reset and then never again. As far as I could tell, the clock was working, there was activity on the address bus, and the data bus, up to a point. After a few seconds of execution, D0 and D1 became always low. Same with the read/write pin. That’s odd, I thought. Those should be changing the *most*. For its part, the TMS9918A seemed to be happily off in its corner, taking care of its RAM, even though no writes or reads were happening to it. What a trooper. Just to make sure it wasn’t the CPU, I swapped in another beaten-up UM6502A from AliExpress, and its function seemed identical. I checked the pinout to make sure there wasn’t some subtle disagreement about data bus pinout between the UMC part and the MOS part, and there wasn’t. After some grumbling and staring at KiCad, I realized that I had misread the original schematic. This line wasn’t coming off of A14 to the `MODE` pin of the TMS9918¹; it was coming off the *address bus*, and was actually connecting to A0. I’m not sure exactly what that meant for the correctness of the program, but I would assume that A14 was always low during writes or reads to the VDP, and so this pin would never be toggled high during runtime. Which could cause some problems, especially if you wanted to, say, tell the TMS9918 what address to write to. Yeah, you don’t need that *at all*. After some studious pin counting, I cut the errant trace and wired `MODE` to A0. Now on the logic probe, I could see that D0 didn’t get into that degenerate state from before. It kept happily oscillating. Good enough for now… let’s try it on a TV. Due to the endless churn of my shop, the setup wasn’t ideal, but I was able to run this on my test PVM. The game runs!!! Unfortunately, the image was legible but had lots of luminance blur. 
This might be because I had run out of ferrite beads and hadn’t loaded any for the video and sound, or because this particular TMS9918 was sick, or because the dying PVM has chosen to die in a whole new way. Still, this is a triumph. We now had a Creativision clone that works well enough to play *Sonic Invaders*. The ferrites did eventually show up, so I decided to plug them in. I wasn’t exactly clear on *which* video characteristics I was trying to eliminate with a ferrite, but I was hoping to at least get rid of the blur that I had somehow added between the Soggy and the Derivavision v1. With the ferrites installed, the video quality looks quite good, even on the crappy USB capture device that I have used on videos in the past. This seems like it’s the nicest output I’ve ever gotten out of one of my clones, and maybe I should think about backporting this change to them as well. Check out the detail on the windows in the background! Of course, since I don’t have a controller hooked up, this is the automated “attract mode” from the game, playing itself. This player is not especially great at *Sonic Invaders*. ## Conclusion I’m very happy to have gotten this clone to work, although without controllers it’s just not very much fun yet. Through this project, I learned a lot about the 6502, a medium amount about composite video, and a little bit about life. You can download the KiCad files for the DerivaVision here on GitHub. I will do a formal “release” with Gerbers once the controller interface is ready to go. ## Next Steps You wouldn’t think installing a controller port would be a difficult task, but the VTech machine has a trick up its sleeve. Those slick keyboard/dual controller input devices are built of a complicated passive matrix, similar to the ColecoVision. Speaking to them so that we can actually play *Sonic Invaders* is going to require a lot of monotonous grunt work, and just a little bit of ingenuity. Some joystick buttons are implemented by shorting more than two pins together at the same time! In the next part of this series, I’ll be building a clone keyboard/controller interface that will plug into the headers on the existing v1 board. This will be pretty rudimentary: just enough to play *Sonic Invaders* and maybe a handful of other cartridge games that don’t take advantage of the keyboard. After that, I think I have a better plan: replacing the 6821 with a Raspberry Pi Pico², and using that Pico to handle a PS/2 keyboard and SNES control pads. Although it will be a lot of work to reverse-engineer the 6821, especially if the games/BIOS hammer the ports quickly, the end result will be an integrated, single-board Derivavision console that plays games without needing any sub-boards or expensive custom keyboards. - The `MODE` pin is used during CPU-VDP transfers to signal what part of a command is happening. For instance, the `MODE` pin is held high during a write to the VDP in order to tell it that we’re sending addresses to it, and then low to send data to it. The reason why the concept of this pin seemed confusing to me is because this is hidden during software development. On the SG-1000, I wrote to e.g. $80 for data, and $81 for addresses – that’s where A0 comes in. ↩ - I am pretty sure something smaller like a PIC would also be able to handle this case medium-well, but an entire Pico development board is pretty close to the same price, has the benefit of the PIO acceleration, and I already have a handful lying around. ↩
true
true
true
Homemade clone of the Dick Smith Wizzard/VTech CreatiVision game console.
2024-10-12 00:00:00
2023-10-03 00:00:00
https://www.leadedsolder…aying-a-game.jpg
article
leadedsolder.com
Leaded Solder
null
null
21,630,246
https://www.vox.com/energy-and-environment/2019/11/25/20975591/climate-change-solar-panels-circuit-breaker-digital-electricity-atom-power
What happens when the humble circuit breaker becomes a computer
David Roberts
# What happens when the humble circuit breaker becomes a computer The electricity system is evolving from analog to digital — and that’s great news for transitioning off of fossil fuels. I have been writing a lot lately about a big trend in the energy world, namely the decentralization of the energy system. Almost all electricity used to be generated at large power plants and carried over long-distance, high voltage transmission lines before it was dumped into local distribution grids. These days, a small but growing fraction of it is generated, stored, and managed within the distribution grid itself. This is accomplished by an array of distributed energy resources (DERs) — from solar panels to batteries to EVs — that are increasingly tied together and coordinated by software. And as more and more cars and buildings are electrified, there will be more DERs to juggle. Keeping distribution grids running smoothly will become a bigger and bigger challenge. Even as distribution grids struggle to digitize, much of electrical infrastructure remains resolutely stuck in the 20th century, manual and analog. Take the humble circuit breaker for example. Every electrical device in the country is connected to the grid through one. Its job is simply to cut off the flow of current to an electrical circuit in the case of a fault or surge, to prevent overloaded lines, sparks, and fires. If you’re a homeowner, you’re probably familiar with the experience: the power in the kitchen suddenly goes out, you struggle briefly to remember where your electrical panel is, you find it and squint with your flashlight at the tiny, inscrutable labels, finding the one that says “KIT” and manually flipping the switch. Then you yell, “Did that do it?” When you think about how you interact with almost every other system in your day-to-day life, it seems primitive. ## The circuit breaker goes digital The basic design of the circuit breaker has not changed appreciably since Thomas Edison came up with the idea toward the end of the 19th century. It’s still an electromechanical switch that manually breaks an electrical connection, separating two wires. Circuit breakers don’t just drag building owners into their basements to squint at electrical panels, they’re slow enough that they still allow lots of short circuits and arc flashes, which can destroy property and even kill people. “Each year in the United States, arcing faults are responsible for starting more than 28,000 home fires,” says the non-profit Electrical Safety Foundation International, “killing and injuring hundreds of people, and causing over $700 million in property damage.” Plus, mechanical circuit breakers are static. One rated for 30 amps will always and only handle 30 amps. They are ill-suited to the constantly shifting, evolving world of DERs. For years, researchers and entrepreneurs have pursued something better, and in January, a small company in North Carolina called Atom Power became the first to pass the necessary tests to bring one to market: a solid-state digital circuit breaker. Solid state means no moving parts, with less maintenance, and a much longer life. In place of mechanical switches, current is interrupted by semiconductors, which means it happens at around the speed of light. Specifically, when triggered, a semiconductor-controlled switch trips in 3 microseconds, roughly 3,000 times faster than the fastest mechanical switch. In this 2018 video, electrical engineer and founder Ryan Kennedy explains how Atom got there first. 
It has to do with manufacturing semiconductors out of silicon carbide (a mix of silicon and carbon, also known as carborundum) rather than silicon. First, it enables the semiconductor to be six to 10 times smaller, allowing the products to mimic the size and shape of traditional breakers and electrical panels. Second, it is much more efficient than silicon, especially at high voltages, and unlike mechanical switches, its efficiency rises as voltage falls. And third, it is much more fault tolerant at high temperatures than silicon, allowing the product to pass strict testing by the Underwriters Laboratories. (UL, a consumer safety research laboratory, is approved by the federal government to set safety standards for a wide range of consumer products in the US and Canada.) This suggests two things. First, Atom has pulled off something clever, with a fairly small team. Three cheers for American innovation! And second, competitors won’t be far behind. After all, silicon carbide isn’t patented. Other companies, big players, are already investing in their own digital circuit breakers. This is going to be a hot market in coming years. So let’s take a look at digitally controlled electricity — what it can do and what it might enable. ## Digitally controlled electricity is safer First and most importantly, because semiconductors react so much faster than mechanical switches, they are much safer, effectively eliminating short circuits and arc flash, as demonstrated in this video: (Every product looks cooler with a metal soundtrack.) Digital circuit breakers can even anticipate and thwart faults before they happen. Faults are preceded by small disruptions in the electricity sine wave, and “since we’re sensing in the microsecond range and opening in the nanosecond range,” Kennedy explains, “you can interrupt that [fault] well before it propagates.” (Mechanical switches, on the other hand, don’t know that they need to trip until there’s a fault.) Atom claims the Atom Switch is the fastest and safest circuit breaker in the world, capable of lightspeed interruption of currents up to 150,000 amps. But controlling electricity digitally can do much more than solve common safety problems. ## Digital circuit breakers are packed into panels that replace several other devices In recent years, materials science and computing power have advanced enough to allow Atom to stuff a little computer into each breaker. Each has its own firmware, its own unique identity on the network, and a little e-ink display of its status (which works even without power). The breakers are stacked into a panel that has roughly the shape and appearance of a normal electrical panel. Each panel contains a copy of Atom OS, the operating system that allows the breakers to be controlled through a user interface. Each panel is assigned its own IP address, so controlling it just involves logging on with a computer, iPad, or phone through a router. Importantly, because the firmware and OS are contained within the panel itself, it requires no external connection to the web (or any external server) to work. The building owner has total control. As of now, Kennedy estimates that Atom Switches are two to five times the cost of conventional mechanical switches. But because the breakers are digitally controlled, they can accomplish tasks that used to require multiple pieces of equipment. 
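To give a rough feel for why microsecond-scale sensing matters, here is a toy Python sketch that flags the first sample where a measured current trace diverges from its expected 60 Hz sine wave. The sampling rate, amplitude, and threshold are invented for illustration; this is not Atom Power's detection firmware.

```python
import math

SAMPLE_PERIOD_US = 1       # one sample per microsecond (invented)
EXPECTED_PEAK = 170.0      # invented peak amplitude for the example
TOLERANCE = 0.15           # flag a 15%-of-peak deviation from the expected waveform

def first_fault_sample(samples, freq_hz=60.0):
    """Return the index (in microseconds) where the measured current first diverges
    from the expected 60 Hz sine, or None if the waveform stays healthy."""
    for i, measured in enumerate(samples):
        t = i * SAMPLE_PERIOD_US * 1e-6
        expected = EXPECTED_PEAK * math.sin(2 * math.pi * freq_hz * t)
        if abs(measured - expected) > TOLERANCE * EXPECTED_PEAK:
            return i   # a solid-state breaker could open within microseconds of this point
    return None
```

The point of the sketch is the timescale: a mechanical breaker cannot react until the fault is well under way, while a device sampling every microsecond can act on the very first sign of divergence.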
In big commercial facilities, conventional circuit breakers are surrounded by, to quote an IEEE Spectrum piece on Atom, “meters, load controllers, surge-protection devices, power-transfer switches, and demand management systems,” all of which require specialized equipment. The computers controlling digital switches can accomplish all those functions, thus replacing all that infrastructure. Through their firmware, digital switches can meter power, dynamically control amperage based on load, and prevent surges and faults by specifying instantaneous, short-time, and long-time trip settings (along with a variety of other parameters I don’t begin to understand, but Atom assures me are of great interest to the people who manage these systems). The OS also contains built-in motor control (motor “soft-starting” is a big problem for commercial electrical systems, for reasons we needn’t get into), relay protections for over- or under-voltage events, and fast (80 microsecond) switching between power sources. All these functions can be set remotely, put on a schedule, and programmed to react dynamically according to conditions. All of that may sound like electricity-system jargon, but the point is that controlling power digitally enables a radical reduction of electrical infrastructure and simplification of electricity management. Finally, there’s an app for that. ## Digitally controlled electricity could enable smarter distribution grids and more decentralized energy Since they passed UL testing, Atom’s products have drawn investment from three of the four largest circuit breaker manufacturers: ABB, Siemens, and Eaton. For now, the company is going after commercial and industrial facilities with lots of high-value loads at stake. For instance, it has a whole brief on how buildings that manage multiple EV chargers — sure to be a growing category, including many office parks and parking garages — can use digital power management to balance out loads and distribute capacity (and eventually deal with vehicle-to-grid technology). The market will begin among commercial and industrial customers, but if digital circuit breakers prove their value in the field, there’s no reason to think the market won’t scale up. As it does, costs will come down. It’s easy to envision all circuit breakers, through attrition, eventually going digital, though it’s impossible to predict how fast that might happen. There are millions and millions of circuit breakers in the US. If every one of them becomes a self-aware, dynamically adapting, remotely controllable computer capable of linking up and coordinating with all the other computers, distribution grids will become much smarter and distributed energy resources (DERs) will be much easier to integrate and control. I asked Kennedy if the same basic technology controlling a panel could be scaled up to support a microgrid or even a whole distribution system. The short answer, he said, is yes. He compared the modularity of the digital circuit breaker to the Tesla battery, which is the same basic thing in a sedan and a semi-truck. There are just more of them stacked together in the truck. Same with digital electricity controls. “Our technology is not only scalable, but it’s also significantly easier and faster to scale than mechanical breakers today,” Kennedy says. 
“You could adapt and scale the technology for practically any power system.” As I have written, one key aspect of the clean energy transition, one reason it’s likely to proceed faster than previous energy transitions in history, is that it won’t just be about switching out one set of machines for another. In large part, it will be about substituting intelligence for stuff — i.e., computing power for labor and material. Computing power, which is always getting cheaper, will help determine how to maintain the same energy services with less labor and material, which are almost always getting more expensive. All analog systems will eventually go digital. The digital circuit breaker is just one key step in the process of digitizing the electricity system. As software comes to control and distribute more power, AI, machine learning, and ubiquitous sensing will be put to use making the process more efficient, enabling the smart integration of local distributed energy resources. The result will be a smarter, cleaner, and more democratic grid.
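For readers who want to picture what “an app for that” means in practice, here is a minimal, entirely hypothetical sketch of the kind of remotely settable parameters described above (trip thresholds, schedules, metering, source switching). None of these field names come from Atom OS; they are invented for illustration.

```
// Hypothetical settings object, invented for illustration only.
const breakerSettings = {
  circuit: "panel-2/breaker-14",
  trip: {
    instantaneousAmps: 800,  // open immediately above this current
    shortTimeAmps: 400,      // open after a short delay at this level
    longTimeAmps: 225,       // open after a sustained overload
  },
  schedule: [{ on: "07:00", off: "19:00", days: ["Mon", "Tue", "Wed", "Thu", "Fri"] }],
  metering: { intervalSeconds: 60 },
  sourceSwitching: { preferred: "grid", backup: "battery" },
};

// Because each panel has its own IP address and local OS, applying the settings
// could be as simple as one authenticated request from a laptop or phone.
console.log(JSON.stringify(breakerSettings, null, 2));
```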
true
true
true
The electricity system is evolving from analog to digital — and that’s great news for transitioning off of fossil fuels.
2024-10-12 00:00:00
2019-11-25 00:00:00
https://platform.vox.com…547074493&w=1200
newsarticle
vox.com
Vox
null
null
29,200,021
https://thescottkrause.com/emerging_tech/new_macbook_setup_for_dev/
Scott C. Krause Front-end Engineer | Infographics UX PWA Shopify Polaris Liquid GraphQL
Scott Krause; Scott C Krause
# New Macbook Setup for Developers

Scott C. Krause | Wednesday, Aug 24, 2022

### MacBook Setup for the Pros 2023

Setting up a new MacBook for software development can be tricky. Here I provide the simple basics to get you started. I realize that some developers may be migrating from Windows to OSX for the first time and that can be difficult, especially when onboarding as a new employee. Every development environment is unique, but the main apps that you want working fast are Finder, Git, Node, NPM, Java, SFDX, Shell, VSCode, and Xcode. Whenever possible I will provide a verification step to make sure the command worked. I've also included a list of the most productive Chrome extensions, Progressive Web Apps, and VSCode extensions. I will try to keep these instructions up to date with the most recent changes.

A quick note on cleaning your new Macbook: please follow Apple's recommended method for cleaning your screen.

### Frequently Asked New Macbook Questions

## Does My Macbook Have an Intel or Apple Chip?

Simply click on the Apple on top of your screen and select "About this Mac". BTW an Apple Chip sounds like a healthy snack.

## How do I Clean my Macbook?

Begin by shutting down your Macbook and unplugging it. Better safe than sorry. Use a damp lint-free cloth to clean the exterior. Do not get moisture inside any openings. DON'T spray liquid directly on the computer. Don't use aerosol sprays, solvents, abrasives, or cleaners containing hydrogen peroxide that might damage the finish.

## How do I Use Two Mac Apps Side by Side in Split View?

Click and hold your pointer over the full-screen button in the upper-left corner of a window (green), then choose “Tile Window to Left of Screen” or “Tile Window to Right of Screen” from the menu. The window then fills that side of the screen.

Default Mac Shell

```
nano ~/.zshrc
```

```
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
```

```
printenv
```

Apple Command Line Dev Tools

```
xcode-select --install
```

```
xcode-select --version
```

Finder Config

```
chflags nohidden ~/Library
defaults write com.apple.Finder AppleShowAllFiles true
defaults write com.apple.finder ShowPathbar -bool true
defaults write com.apple.finder ShowStatusBar -bool true
```

```
killall Finder
```

```
Menu > Finder > Preferences [ Select Drive ]
```

Brew

```
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

```
brew --version
```

## Pro ⚡ Tips

This is a single line command. It may display as two lines in your browser, but when you fire it in the terminal it needs to be one line. I had to run this script twice, so second time is the charm. Follow the instructions (Next Steps) at the end of this script. These are displayed in the terminal after the script has completed.

#### AWS Elastic Beanstalk CLI

```
brew install awsebcli
```

```
eb --version
```

#### Shopify Liquid Theme CLI

```
brew tap shopify/shopify
```

```
brew install shopify-cli
```

Chrome Browser

#### Chrome Extensions

## Pro ⚡ Tips

💡 Chrome Shortcuts / OSX Command Line.
#### chrome://chrome-urls/ ``` Chrome Internal URLs / Flags chrome://about chrome://accessibility chrome://app-service-internals chrome://apps 🎯 chrome://autofill-internals chrome://blob-internals chrome://bluetooth-internals chrome://bookmarks chrome://chrome-urls 🎯 chrome://components chrome://connectors-internals chrome://conversion-internals chrome://crashes chrome://credits chrome://device-log 🎯 chrome://dino chrome://discards chrome://download-internals chrome://downloads chrome://extensions chrome://flags chrome://floc-internals chrome://gcm-internals chrome://gpu chrome://help chrome://histograms chrome://history chrome://indexeddb-internals chrome://inspect chrome://interstitials chrome://invalidations chrome://local-state chrome://management chrome://media-engagement chrome://media-internals chrome://nacl chrome://net-export chrome://net-internals chrome://network-errors chrome://new-tab-page chrome://new-tab-page-third-party chrome://newtab chrome://ntp-tiles-internals chrome://omnibox chrome://password-manager-internals chrome://policy chrome://predictors chrome://prefs-internals chrome://print chrome://process-internals chrome://quota-internals 🎯 chrome://quota-internals-2 🎯 chrome://safe-browsing chrome://serviceworker-internals chrome://settings chrome://signin-internals chrome://site-engagement 🎯 chrome://sync-internals chrome://system chrome://terms chrome://tracing chrome://translate-internals chrome://ukm chrome://usb-internals chrome://user-actions chrome://version chrome://web-app-internals 🎯 chrome://webrtc-internals chrome://webrtc-logs chrome://whats-new 🎯 ``` #### Chrome OSX Command Line ``` Chrome without CORS open -n -a /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --args --user-data-dir="/tmp/chrome_dev_test" --disable-web-security ``` Visual Studio Code #### VS Code to Path ``` Command + Shift + P Shell command: Install ‘code' command in path ``` ``` code . 
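# Verify the shell command landed on your PATH; this should print the VS Code version
code --version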
```

#### VS Code Extensions Recommended

Git good at Git

```
brew install sh
git clone -b BRANCHNAME --single-branch REPONAME
```

```
git config --global user.email "[email protected]"
git init --initial-branch=master
git status
git branch
git checkout master
git pull origin master
git branch
git checkout -b temp
git checkout BRANCHNAME
git rebase master
git push origin BRANCHNAME
git status
git push origin BRANCHNAME --force-with-lease
git difftool --tool=vimdiff branch1 branch2 -- ./some/file.txt
```

Node

```
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | bash
nvm --version
nvm install --lts
node --version
npm install -g npm
npm --version
```

#### Global NPM Packages

```
npm install -g npm
npm install -g typescript supervisor live-server svgo yarn
npm i -g @axe-core/cli
npm i -g @bubblewrap/cli
npm i -g @ionic/cli
npm list -g --depth=0
npm config ls -l
```

Java 8 SDK

```
brew update
brew tap homebrew/cask-versions
brew install --cask adoptopenjdk8
brew install jenv
echo 'export PATH="$HOME/.jenv/bin:$PATH"' >> ~/.zshrc
echo 'eval "$(jenv init -)"' >> ~/.zshrc
```

```
java -version
ls /Library/Java/JavaVirtualMachines
ls /Library/Java/JavaVirtualMachines/jdk1.8.0_251.jdk/Contents/Home
```

#### See Also

Scala

```
brew update
brew install scala
brew install sbt
brew install sbt@1
scala --version
```

#### Scala Build Tool

```
brew install sbt@1
scala --version
sbt --version
sbt clean (Optionally delete .ivy2 )
/usr/local/etc/sbtopts -mem 2048 (Optional Increase Mem)
```

3D Printing Macbook

```
brew install openscad
```

#### See Also

Go

```
brew update && brew install golang
mkdir -p $HOME/go/{bin,src,pkg}
```

```
nano ~/.zshrc
export GOPATH=$HOME/go
export GOROOT="$(brew --prefix golang)/libexec"
export PATH="$PATH:${GOPATH}/bin:${GOROOT}/bin"
```

Rust

```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

```
rustc --version
```

Cypress E2E

```
nvm install --lts
npm install cypress --save-dev
```

#### package.json

```
"scripts": {
  "cy:open": "cypress open",
  "cy:run": "cypress run"
}
```

LAMP Laravel

```
curl -s "https://laravel.build/example-app" | bash
cd example-app
./vendor/bin/sail up
```

Docker Desktop

```
docker version
```

## Pro ⚡ Tips

If your Macbook uses the M1 chip and you are having issues deploying to AWS Fargate.
Try this in your Dockerfile: FROM --platform=linux/amd64 node:lts-alpine #### zshell Completion ``` etc=/Applications/Docker.app/Contents/Resources/etc ln -s $etc/docker.zsh-completion /usr/local/share/zsh/site-functions/_docker ln -s $etc/docker-compose.zsh-completion /usr/local/share/zsh/site-functions/_docker-compose ``` Hugo ``` brew install hugo hugo version ``` Emerging Tech ##### HTML Form Validation Patterns ###### Curated JavaScript Form Validation Content 2023-10-18 Emerging Tech ##### HTML Over the Wire ###### Curated HTMX & Alpine.js Knowledge-base 2023-08-05 Emerging Tech ##### Capacitor WASM Custom Plugins ###### Ionic Curated Capacitor Links 2023-08-05 Emerging Tech ##### Neodigm 55 Low Code UX micro-library ###### Popups, Toast, Parallax, and SFX 2022-11-25 Emerging Tech ##### UX Usability Heuristic Evaluation ###### HE Heuristic Evaluation 2022-10-19 Emerging Tech ##### New Macbook Setup for Developers ###### New Macbook Config for Devs 2023 2022-08-24 Emerging Tech ##### WebAssembly WASM ###### In-depth Curated WebAssembly Links 2022-08-05 Emerging Tech ##### Curated PWA Links ###### Indispensable Curated PWA Links 2022-07-09 Emerging Tech ##### Curated GA4 Links ###### Indispensable Curated Google Analytics 4 Links 2022-06-17 Emerging Tech ##### Curated Lit Web Component Links ###### Curated Lit Web Component Links 2022-03-25 Emerging Tech ##### Curated LWC Links ###### Indispensable Curated LWC Links 2021-08-09 Emerging Tech ##### Curated TypeScript Vue.js ###### Indispensable Curated Vue TypeScript 2021-07-24 Emerging Tech ##### Creative 3D animation resources ###### Indispensable Curated Creative Links 2021-06-25 Emerging Tech ##### Transition to TypeScript ###### Indispensable Curated TypeScript Links 2021-06-05 Emerging Tech ##### The Clandestine Dead Drop ###### The Ironclad Clandestine Dead Drop 2021-05-31 Emerging Tech ##### Curated Blogfolios Links ###### Personal Websites 2021-03-15 Emerging Tech ##### Curated JavaScript Links ###### Indispensable Curated JavaScript Links 2021-03-12 Emerging Tech ##### Curated Emerging Tech Links ###### Indispensable Curated Tech Links 2021-03-04 Emerging Tech ##### Cytoscape Skills Data Visualization ###### Persuasive Infographics & Data Visualizations 2021-02-20 Emerging Tech ##### eCommerce Accessibility A11y ###### Accessibility Challenges Unique to eCommerce 2020-12-07 Emerging Tech ##### Roll Dice in High-Fidelity 3D ###### Create 3D Dice with Strong Random Entropy 2020-11-02 ##### Simple valid JSON test ###### A simple JavaScript function that can determine if a given JSON is valid. 2023-12-07 ##### Flickity Carousel A11y Observer ###### Observe and listen for changes in the Flickity carousel ``` // Desc: This patch will observe and listen for changes in the Flickity carousel, and when triggered will remove aria-hidden from the carousel child elements. It will observe every carousel instance that exists on the page. This logic utilizes the mutation observer to watch all carousels for changes. The changes may be user initiated or actuated via autoplay configuration. 
// Usage: flickPatch = new FlickPatch( document, ".flickity-slider" ); flickPatch.init(); /* ___ _ /___\ |__ ___ ___ _ ____ _____ _ __ // // '_ \/ __|/ _ \ '__\ \ / / _ \ '__| / \_//| |_) \__ \ __/ | \ V / __/ | \___/ |_.__/|___/\___|_| \_/ \___|_| 👁️👁️ */ class FlickPatch { // Flickity Carousel ARIA-HIDDEN observer constructor(_d, _sQ) { this._d = _d; this._sQ = _sQ; this.aF = []; this.aObs = []; } init() { // this.aF = Array.from( this._d.querySelectorAll( this._sQ )) if( this.aF.length ){ this.aObs = [] this.aF.forEach( ( eF )=>{ const oObs = new MutationObserver( flickPatch.removeAttr ); oObs.observe( eF, { attributes: true, childList: true, subtree: true } ); this.aObs.push( oObs ) }) } return this; } removeAttr( aObs ){ // if( aObs.length ){ aObs.forEach( ( elO )=>{ if( elO?.target ){ [ ... elO.target.querySelectorAll( "[aria-hidden='true']" )].forEach( ( eH )=>{ eH.removeAttribute("aria-hidden") }) } }) } } } //. Usage let flickPatch = {} document.addEventListener("DOMContentLoaded", ( ev )=>{ setTimeout( ()=>{ flickPatch = new FlickPatch( document, ".flickity-slider" ) flickPatch.init() }, 8e3 ) }) ``` 2023-10-08 ##### Generate Lorem Ipsum Text ###### JavaScript Generate Lorem Ipsum from original Latin De finibus. ``` /* ,--. ,--. | | ,---. ,--.--. ,---. ,--,--,--. `--' ,---. ,---. ,--.,--.,--,--,--. | || .-. || .--'| .-. :| | ,--.| .-. |( .-' | || || | | |' '-' '| | \ --.| | | | | || '-' '.-' `)' '' '| | | | `--' `---' `--' `----'`--`--`--' `--'| |-' `----' `----' `--`--`--' `--' 🌶️ 🌴 🍰 🔥 🗝️ 🎲 */ const genLoremIpsum = ( Sentences=1 )=>{ // Generate Lorem Ipsum | Orig Latin De finibus if( Sentences == -1 ) Sentences = Math.floor(Math.random() * 5) + 1 // If -1 gen rnd num sentences 1-5 const aLI = "lorem ipsum a ab accusamus accusantium ad adipiscing alias aliquam aliquid amet animi aperiam architecto asperiores aspernatur assumenda at atque aut autem beatae blanditiis commodi consectetur consequatur consequuntur corporis corrupti culpa cum cumque cupiditate debitis delectus deleniti deserunt dicta dignissimos distinctio do dolor dolore dolorem doloremque dolores doloribus dolorem dquis ducimus ea eaque earum eius eligendi enim eos error ert esse est et eum eveniet ex excepturi exercitationem expedita explicabo facere facilis fuga fugiat fugit harum hic id illo illum impedit in incididunt inventore ipsa ipsam irure iste itaque iusto labore laboriosam laborum laudantium libero magnam magni maiores maxime minima minus modi molestiae molestias mollitia nam natus necessitatibus nemo neque nesciunt nihil nisi nobis non nostrumd nulla numquam obcaecati odio odit officia officiis omnis optio pariatur perferendis perspiciatis placeat porro possimus praesentium provident quae quaerat quam quas quasi qui quia quibusdam quidem quis quisquam quo quod quos ratione recusandae reiciendis rem repellat repellendaus reprehenderit repudiandae rerudum rerum saepe sapiente sed sequi similique sint sit soluta sunt suscipit tempora tempore temporibus tenetur totam ullam unde ut vel velit veniam veritatis vero vitae voluptas voluptate voluptatem voluptates voluptatibus voluptatum".split(" ") let sOut = "" for( let nS = 0; nS <= Sentences; nS++){ let nWc = Math.floor(Math.random() * 6) + 3 // Word count per sentence rnd 3-8 for( let nW = 0; nW <= nWc; nW++){ let sWrd = aLI[ Math.floor(Math.random() * aLI.length) ] if( !nW ) sWrd = sWrd[0].toUpperCase() + sWrd.slice(1) // Cap first if( sOut.indexOf( sWrd ) == -1 ) sOut += " " + sWrd // Dedupe } sOut += "." 
} return sOut.trim(); } // USAGE: console.log( genLoremIpsum( -1 ) ) ``` 2023-10-07 ##### Javascript Tiny Type ###### Replace text with a super small character set. ``` // Replace text with a super small character set. /* _______ _ _______ _ |__ __(_) |__ __| | | | | _ _ __ _ _ | | _____ _| |_ | | | | '_ \| | | | | |/ _ \ \/ / __| | | | | | | | |_| | | | __/> <| |_ |_| |_|_| |_|\__, | |_|\___/_/\_\\__| __/ | |___/ 🗿 🪐 🔨 */ let aTiny = {"a":"ᵃ","b":"ᵇ","c":"ᶜ","d":"ᵈ","e":"ᵉ","f":"ᶠ","g":"ᵍ","h":"ʰ","i":"ᶦ","j":"ʲ","k":"ᵏ","l":"ᶫ","m":"ᵐ","n":"ᶰ","o":"ᵒ","p":"ᵖ","q":"ᑫ","r":"ʳ","s":"ˢ","t":"ᵗ","u":"ᵘ","v":"ᵛ","w":"ʷ","x":"ˣ","y":"ʸ","z":"ᶻ","A":"ᴬ","B":"ᴮ","C":"ᶜ","D":"ᴰ","E":"ᴱ","F":"ᶠ","G":"ᴳ","H":"ᴴ","I":"ᴵ","J":"ᴶ","K":"ᴷ","L":"ᴸ","M":"ᴹ","N":"ᴺ","O":"ᴼ","P":"ᴾ","Q":"ᑫ","R":"ᴿ","S":"ˢ","T":"ᵀ","U":"ᵁ","V":"ⱽ","W":"ᵂ","X":"ˣ","Y":"ʸ","Z":"ᶻ","`":"`","~":"~","!":"﹗","@":"@","#":"#","$":"﹩","%":"﹪","^":"^","&":"﹠","*":"﹡","(":"⁽",")":"⁾","_":"⁻","-":"⁻","=":"⁼","+":"+","{":"{","[":"[","}":"}","]":"]",":":"﹕",";":"﹔","?":"﹖"}; let doTinyCaption = ( (_d, _q, _t) => { // Inject Tiny type let aTinyCnt = [..._d.querySelectorAll( _q )]; if( aTinyCnt ){ setTimeout( ()=>{ doTinyCaption.tick(); }, 32); } return { "tick": ()=>{ let sMU = ""; aTinyCnt.forEach( (eVivCnt) => { if(eVivCnt.atTiny !== eVivCnt.dataset.atTiny){ // Data atr changed Array.from( eVivCnt.dataset.atTiny ).filter(( sChr )=>{ sMU += ( sChr == " ") ? " " : aTiny[ sChr ]; }); eVivCnt.innerHTML = sMU; eVivCnt.atTiny = eVivCnt.dataset.atTiny; } } ); setTimeout( ()=>{ doTinyCaption.tick(); }, _t); } }; })(document, "[data-at-tiny]", 13664 ); ``` 2023-08-20 ##### Javascript GA4 Intersection Observer ###### Heatmap: Track content visibility time in Google Analytics or Adobe Analytics ``` // Track content visibility time with Intersection Observer in Adobe Analytics or Google Analytics /* /\ /\___ __ _| |_ _ __ ___ __ _ _ __ / /_/ / _ \/ _` | __| '_ ` _ \ / _` | '_ \ / __ / __/ (_| | |_| | | | | | (_| | |_) | \/ /_/ \___|\__,_|\__|_| |_| |_|\__,_| .__/ |_| 🌶️ 🔥 */ class SyHeatmap { // Neodigm 55 Heatmap Begin static oObserved = {}; static aObservedEl = []; static aQryContext = [] static oIntObserver = null; static NTHRESH_SECS = 3; static bIsInit = false; static reInit ( _q, _c = document ){ // DOM bind to context element if( _q && _c ){ this.aQryContext = [ _q, _c ] this.oObserved = {}; this.aObservedEl = [ ... 
_c.querySelectorAll( _q[ 0 ] ) ]; this.aObservedEl.forEach( ( elO )=>{ let elOsib = elO.nextElementSibling const sCap = elOsib.heatmapCaption = elO.innerHTML this.oObserved[ sCap ] = elOsib this.oObserved[ sCap ].heatmapTime = [] } ) this.oIntObserver = new IntersectionObserver( ( entries )=>{ entries.forEach( ( oEnt )=>{ if( oEnt.target?.heatmapCaption ){ const sCap = oEnt.target.heatmapCaption if( this.oObserved[ sCap ].heatmapTime.length ){ this.oObserved[ sCap ].heatmapTime.push( {"state": oEnt.isIntersecting, "ts": new Date().getTime() }) }else{ // No first time false (vis when page loads) if( oEnt.isIntersecting ){ this.oObserved[ sCap ].heatmapTime.push( {"state": oEnt.isIntersecting, "ts": new Date().getTime() }) } } if( oEnt.isIntersecting ){ oEnt.heatmapTotal = SyHeatmap.totalHeatmapTime( this.oObserved[ sCap ].heatmapTime ); // Sum and dif array vals console.log( " ~~~ tot | " + oEnt.target.heatmapCaption + " | " + oEnt.heatmapTotal ) } } } ) } ) //SyHeatmap.resetHeatMap() this.aObservedEl.forEach( ( elObs )=>{ let sCap = this.oObserved[ elObs?.innerHTML ] if( sCap ) this.oIntObserver.observe( sCap ) } ) if( !this.bIsInit ){ this.bIsInit = true; setInterval( ()=>{ SyHeatmap.tick() }, 3e3 ) } return this; } } static totalHeatmapTime ( aHeatmapTime ){ // Return total time on component in secs let nTotStart = 0; let nTotEnd = 0; // Note: IntrSec Observ will fire FALSE once upon page load for each entry not visible if( aHeatmapTime.length ){ // Append a FALSE as NOW if the last item is not FALSE (currently in viewport) let aDTO = [ ... aHeatmapTime ] if( aDTO[ aDTO.length - 1 ].state == true ) aDTO.push( {"state": false, "ts": new Date().getTime() } ) aDTO.forEach( ( oHMTimes )=>{ if( oHMTimes.state ) nTotStart = nTotStart + oHMTimes.ts if( !oHMTimes.state ) nTotEnd = nTotEnd + oHMTimes.ts } ) } return ( nTotEnd - nTotStart ) / 1000; // in seconds } static genHeatmap ( nThresh = this.NTHRESH_SECS ){ // Return a simple arry of current hm usage filt threshold let aCurHM = [] if( this.aObservedEl.length ){ for ( const sCap in this.oObserved ) { let nTotal = SyHeatmap.totalHeatmapTime( this.oObserved[ sCap ].heatmapTime ) if( nTotal && ( nTotal >= nThresh ) ) aCurHM.push( {"caption": sCap, "secs": nTotal } ) } } return aCurHM; } static resetHeatMap(){ this.aObservedEl.forEach( ( elObs )=>{ let sCap = this.oObserved[ elObs?.innerHTML ] if( sCap ) this.oIntObserver.unobserve( sCap ) } ) } static appendDataLayer (){ // Iterate filtered heatmap and add to DL - return count let iCnt = 0 if( window.dataLayer ) { SyHeatmap.genHeatmap().forEach( ( oHMSum )=>{ let sMsg = oHMSum.caption + " | " + oHMSum.secs + " | " + Neodigm 55.salesforceGlobal.franchiseconfig.Name + " | " + Neodigm 55.salesforceGlobal.loginuser.UserRole.Name window.dataLayer.push( { "event": "Neodigm 55_heatmap", "msg": sMsg, "hm_secs": oHMSum.secs, "hm_fran": Neodigm 55.salesforceGlobal.franchiseconfig.Name, "hm_role": Neodigm 55.salesforceGlobal.loginuser.UserRole.Name } ) iCnt++; }) } SyHeatmap.resetHeatMap() // Reset and Rebind SyHeatmap.reInit( this.aQryContext[ 0 ], this.aQryContext[ 1 ]) return iCnt; } static tick (){ if( this.aQryContext.length ){ // Must have already been fired let elSame = this.aQryContext[ 1 ].querySelector( this.aQryContext[ 0 ] ); if( elSame ){ if( elSame.innerHTML != this.aObservedEl[0]?.innerHTML ){ SyHeatmap.appendDataLayer() } }else{ SyHeatmap.appendDataLayer() } } } } // Neodigm 55 Heatmap End document.addEventListener("DOMContentLoaded", (ev)=>{ setTimeout( ()=>{ SyHeatmap.reInit( [ "DIV>H2" 
], document.querySelector("#app > div.v-application--wrap > div.container.Neodigm 55-main-container.pa-0.ma-0.pt-0 > div > div") ) }, 3e3 ) }) ``` The Neodigm 55 Heatmap component captures the amount of time that each card is visible to an end user on a laptop or mobile device. The summation of card activity is then packaged into the analytics data layer to be consumed by Adobe Analytics or Google Analytics. Activity that occurs while offline will be updated upon reconnection to the network if the app has not been closed. The Neodigm 55 Heatmap component utilized the Intersection Observer pattern to track when and for how long block elements (cards) are visible in the user agent viewport. The component is configured to only report if a card is visible for greater than 3 seconds. This threshold is configurable. There are some edge cases wherein data may lose fidelity. For example if the user opens another tab or abruptly closes the browser while a card is within the viewport. The component captures the existence of cards within the browser's viewport, however the application may partially obstruct the viewport with an overlaying menu. Meaning that the Heatmap may report that a particular card is in view a few microseconds before it is actually visible. This discrepancy is so small that it is not statistically significant. The data layer entries that the heatmaps create are additive, meaning that there may be more than one for a single component. This is because the end user viewed a particular card component, exited the card, then reentered the card. 2023-08-14 ##### Get all products from Any Shopify Site ###### Fetch all Products and Images from any Shopify site. ``` // Fetch all Products and Images from any Shopify site. /* _____ __ __ ___ ____ ____ _____ __ __ / ___/| | | / \ | \ || || | | ( \_ | | || || o ) | | __|| | | \__ || _ || O || _/| | | |_ | ~ | / \ || | || || | | | | _] |___, | \ || | || || | | | | | | | \___||__|__| \___/ |__| |____||__| |____/ 😎 */ var aP = []; const neodigmMU = ` <textarea id="elT" rows=8></textarea>`; // Universal Templs let eMU = document.createElement("textarea"); document.body.appendChild(eMU); eMU.outerHTML = neodigmMU; (function getProducts(url = 'https://SHOPIFY-SITE.com/admin/api/2019-07/products.json?limit=250&fields=id,images') { fetch(url).then(res => { const headerLink = res.headers.get('link'); const match = headerLink?.match(/<[^;]+\/(\w+\.json[^;]+)>;\srel="next"/); const url = match ? match[1] : false; if(url){ res.json().then((data) => { data.products.map((el) => { //console.log( JSON.stringify( el ) ) aP.push( JSON.stringify( el ) ) }) }) getProducts(url) } else { res.json().then((data) => { data.products.map((el) => { //console.log( JSON.stringify( el ) ) aP.push( JSON.stringify( el ) ) }) doTextArea() }) } }) })() function doTextArea(){ let elTA = document.getElementById("elT") aP.forEach( (p) =>{ // console.log( " ~~~ | " + p ); elTA.textContent = elTA.textContent + p + "\n" } ) } ``` 2023-08-10 ##### JS Airport Geo-Proximity Radius ###### Airport geo-proximity logic that answers questions, like What are the three closest airports to me right now? 
``` // Desc: Get the closest airports by geolocation radius // Usage: closestAirports.find(-99, -99, oAirports, 4); // 4 miles /* ___ _ _ _ * / _ \___ ___ | | ___ ___ __ _| |_(_) ___ _ __ * / /_\/ _ \/ _ \| |/ _ \ / __/ _` | __| |/ _ \| '_ \ * / /_\\ __/ (_) | | (_) | (_| (_| | |_| | (_) | | | | * \____/\___|\___/|_|\___/ \___\__,_|\__|_|\___/|_| |_| ✈️ */ function getDistance(lat1, lon1, lat2, lon2) { let radlat1 = Math.PI * lat1/180; let radlat2 = Math.PI * lat2/180; let theta = lon1-lon2; let radtheta = Math.PI * theta/180; let dst = Math.sin(radlat1) * Math.sin(radlat2) + Math.cos(radlat1) * Math.cos(radlat2) * Math.cos(radtheta); if (dst > 1) dst = 1; dst = Math.acos(dst); dst = dst * (180/Math.PI) * (60 * 1.1515); // miles return dst; } let closestAirports = ((_d) => { return { "find": function(nLat, nLon, objPorts, nRadius){ if(nLat && nLon && objPorts){ let arrPorts = []; for (let prop in objPorts) { // Sort Object if (objPorts.hasOwnProperty(prop)) { let lat = objPorts[prop].geoCode.split(",")[0]; let lon = objPorts[prop].geoCode.split(",")[1]; arrPorts.push({ 'key': prop, 'lat': lat, 'lon': lon, "dist": getDistance(lat, lon, nLat, nLon), "formattedAirport": objPorts[prop].formattedAirport }); } } arrPorts.sort(function(a, b){ // Sort by Distance return a.dist - b.dist; }); return arrPorts.filter(function(aP){ return (aP.dist <= nRadius); }); } } }; })(document); ``` 2022-12-13 ##### Calculate Aspect Ratio of Viewport ###### Calculate Aspect Ratio of Viewport ``` // Desc: Calculate Aspect Ratio of Viewport // Usage: Console log getDims() onresize event of body /* _ _____ _ _ /\ | | | __ \ | | (_) / \ ___ _ __ ___ ___| |_ | |__) |__ _| |_ _ ___ / /\ \ / __| '_ \ / _ \/ __| __| | _ // _` | __| |/ _ \ / ____ \\__ \ |_) | __/ (__| |_ | | \ \ (_| | |_| | (_) | /_/ \_\___/ .__/ \___|\___|\__| |_| \_\__,_|\__|_|\___/ | | |_| 🎯 */ const gcd = (a, b) => { return b ? gcd(b, a % b) : a; }; const aspectRatio = (width, height) => { const divisor = gcd(width, height); return `${width / divisor}:${height / divisor}`; }; const getDims = function(){ if(window.innerWidth !== undefined && window.innerHeight !== undefined) { var w = Number( window.innerWidth ) var h = Number( window.innerHeight ) var a = aspectRatio( w, h ) } else { var w = Number( document.documentElement.clientWidth ) var h = Number( document.documentElement.clientHeight ) var a = aspectRatio( w, h ) } return {"ratio": a, "h": h, "w": w}; } ``` 2021-04-16 ##### Javascript Generate and Download CSV ###### Produce CSV with client-side JS. Construct Blog and Download as CSV file. ``` // Desc: Produce CSV with client-side JS. Contruct Blob and Download as CSV file /* _________ _____________ ____ __________.__ ___. 
* \_ ___ \ / _____/\ \ / / \______ \ | ____\_ |__ * / \ \/ \_____ \ \ Y / | | _/ | / _ \| __ \ * \ \____/ \ \ / | | \ |_( <_> ) \_\ \ * \______ /_______ / \___/ |______ /____/\____/|___ / * \/ \/ \/ \/ CSV Report */ ✅ The resulting CSV files will contain a header row deterministic column names ✅ The resulting CSV files will be quoted ✅ The file name is auto-generated timestamp ✅ Cell string data may contain a comma “,” however quotes will be removed ✅ Cell string data may contain only utf-8 characters let nativeCSV = ( ( _d )=>{ let oCnt, jnCSV, sCSV, blCSV, elCSV; // config, json, array, blob, and element let retObj = { "init": ( _oCnt )=>{ oCnt = _oCnt; if( oCnt.fileName.indexOf("####") !== -1) { oCnt.fileName = oCnt.fileName.replace("####", Date.now() );} jnCSV = sCSV = blCSV = elCSV = ""; return retObj; }, "setArray": ( _jnCSV )=>{ // An array (rows) of arrays (cols) !jagged jnCSV = _jnCSV; if( oCnt.header ) jnCSV.unshift( oCnt.header ); jnCSV.forEach(( aRow )=>{ aRow.forEach(( sCol )=>{ if( typeof sCol === "string"){ sCSV += oCnt.delimQuote + sCol .split( oCnt.delimQuote ).join(""); sCSV += oCnt.delimQuote + oCnt.delimCol; } }); sCSV = sCSV.slice(0, -1) + oCnt.delimLine; }); return retObj; }, "getBlob": ()=>{ blCSV = new Blob([ sCSV ], { type: "text/csv;charset=utf-8;" }); return retObj; }, "createLink": ()=>{ elCSV = _d.createElement("a"); elCSV.setAttribute("href", URL.createObjectURL( blCSV )); elCSV.setAttribute("download", oCnt.fileName ); elCSV.style.visibility = 'hidden'; _d.body.appendChild( elCSV ); return retObj; }, "clickLink": ()=>{ elCSV.click(); return retObj; }, "removeLink": ()=>{ _d.body.removeChild( elCSV ); return retObj; } }; return retObj; })( document ); ``` ``` console.log( nativeCSV.init({ // Usage: "delimCol": ",", "delimQuote": '"', "delimLine": "\n", "fileName": "graph_nodes_####.csv", "header": ["id","name", "FQDN"]}) .setArray( currentGraph2Array(jCurrentGraph) ) .getBlob() .createLink() .clickLink() .removeLink() ); ``` 2021-02-27 ##### PWA Add to Home Screen ###### Progressive Web App ⚡ Advanced Cache && Notification Patterns ``` /* ______ __ __ ______ /\ == \ /\ \ _ \ \ /\ __ \ \ \ _-/ \ \ \/ ".\ \ \ \ __ \ \ \_\ \ \__/".~\_\ \ \_\ \_\ \/_/ \/_/ \/_/ \/_/\/_/ ✨ Add to Home Screen chrome://serviceworker-internals/ */ if ("serviceWorker" in navigator) { window.addEventListener("load", () => { navigator.serviceWorker.register("sw.js"); }); } ``` ``` let eA2hs = oD.getElementsByClassName("js-a2hs")[0]; let eA2hsP = oD.getElementsByClassName("js-a2hs--post")[0]; eA2hs.addEventListener("click", (e) => { eA2hs.style.display = "none"; eA2hsP.style.display = "block"; evDefPrompt.prompt(); evDefPrompt.userChoice .then((choiceResult) => { if (choiceResult.outcome === "accepted") { if( snck ) neodigmToast.q("Wow, Now I'm an App on your Desktop|How Convenient!"); playAudioFile( 7 ); // ggl tag event | User accepted the A2HS prompt } else { playAudioFile( 3 ); // ggl tag event | User dismissed the A2HS prompt } evDefPrompt = null; }); }); ``` ``` function displayMsg( sMsg ){ // System Tray Notification if (!("Notification" in window)) { console.log('Notification API not supported.'); return; } else if (Notification.permission === "granted") { // If it's okay let's create a notification var notification = new Notification( Nowish(), {icon: "https://repository-images.githubusercontent.com/178555357/2b6ad880-7aa0-11ea-8dde-63e70187e3e9", body: sMsg} ); } else if (Notification.permission !== "denied") { // Otherwise, we need to ask the user for permission 
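// Heads up: Notification.requestPermission() also has a Promise-based form in modern
// browsers, and some browsers (notably Safari) only honor the request when it comes
// from a user gesture such as a click. The callback style below is older but still works.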
Notification.requestPermission(function (permission) { // If the user accepts, let's create a notification if (permission === "granted") { var notification = new Notification( Nowish(), {icon: "https://repository-images.githubusercontent.com/178555357/2b6ad880-7aa0-11ea-8dde-63e70187e3e9", body: sMsg} ); } }); } } ``` ``` /* ╔═╗┌─┐┬─┐┬ ┬┬┌─┐┌─┐ * ╚═╗├┤ ├┬┘└┐┌┘││ ├┤ * ╚═╝└─┘┴└─ └┘ ┴└─┘└─┘ * ╦ ╦┌─┐┬─┐┬┌─┌─┐┬─┐ * ║║║│ │├┬┘├┴┐├┤ ├┬┘ * ╚╩╝└─┘┴└─┴ ┴└─┘┴└─ Advanced Cache ⚡ Notifications */ importScripts('https://storage.googleapis.com/workbox-cdn/releases/4.0.0/workbox-sw.js'); workbox.LOG_LEVEL = "debug"; self.addEventListener("fetch", event => { event.respondWith(caches.match(event.request) .then(cachedResponse => { if (cachedResponse) { return cachedResponse; } return fetch(event.request); }) ); }); workbox.routing.registerRoute( // Cache CSS files /.*\.css/, // Use cache but update in the background ASAP workbox.strategies.staleWhileRevalidate({ cacheName: 'css-cache', }) ); workbox.routing.registerRoute( // Cache image files /\.(?:png|gif|jpg|jpeg|webp|avif|svg|mp3|mp4|json|html|js)$/, // Use the cache if it's available workbox.strategies.cacheFirst({ cacheName: 'image-cache', plugins: [ new workbox.expiration.Plugin({ maxEntries: 256, maxAgeSeconds: 172800, }) ], }) ); ``` ``` class NeodigmPWA { constructor(){ } init () { window.addEventListener('appinstalled', () => { setTimeout(function(){ neodigmToast.q("##Application Installed|Neodigm UX ✨ Scott C. Krause") neodigmWired4Sound.play( 8 ) if( dataLayer ) dataLayer.push({'event': 'appinstalled'}) }, 1200) }); } } let neodigmPWA = new NeodigmPWA() neodigmPWA.init() ``` 2020-12-21 ##### HTML data attrib to JavaScript camel-case dataset ###### Convert an HTML formatted data attrib name to a JS formatted name. ``` // Desc: data-is-whatever will be converted to isWhatever // Usage: element.dataset[ data2prop("data-is-whatever") ] /*______ _____ __ __ _____ _ _ | ____/ ____| \/ | /\ / ____| (_) | | | |__ | | | \ / | / \ | (___ ___ _ __ _ _ __ | |_ | __|| | | |\/| | / /\ \ \___ \ / __| '__| | '_ \| __| | |___| |____| | | |/ ____ \ ____) | (__| | | | |_) | |_ |______\_____|_| |_/_/ \_\_____/ \___|_| |_| .__/ \__| | | |_| ES2021*/ function data2prop( sDset ){ // Convert HTML data attrib name to JS dataset name sDset = sDset.replace("data-", "").toLowerCase(); let aDset = sDset.split(""), aDret = [], bUpper = false; aDset.forEach( ( sChar ) => { if( sChar == "-" ){ bUpper = true; }else{ aDret.push( ( bUpper ) ? 
sChar.toUpperCase() : sChar ); bUpper = false; } }); return aDret.join(""); } ``` 2020-12-19 ##### Oracle PL/SQL Stored Procedure ###### Vintage Stored Procedure to denormalize department codes ``` -- ███████ ██████ ██ -- ██ ██ ██ ██ -- ███████ ██ ██ ██ -- ██ ██ ▄▄ ██ ██ -- ███████ ██████ ███████ Relational ⚡ Transactional -- ▀▀ PROCEDURE post_stage ( in_rowid_job cmxlb.cmx_rowid, in_ldg_table_name cmxlb.cmx_table_name, in_stg_table_name cmxlb.cmx_table_name, out_error_msg OUT cmxlb.cmx_message, out_return_code OUT int ) AS sql_stmt varchar2(2000); t_party_acct_id varchar2(14); t_txn_div_cd varchar2(20); t_txn_div_display varchar2(50); commit_count NUMBER := 0; commit_inc NUMBER := 1000; -- CURSOR C_PTAC_TXN IS SELECT PARTY_ACCT_ID, TXN_DIV_CD, TXN_DIV_DISPLAY FROM C_STG_PTAC_TXN_DIV; -- BEGIN -- commit_inc := to_number(GET_PARAMETER('post_stage_commit', commit_inc)); IF in_ldg_table_name = 'C_LDG_PTAC_TXN_DIV' AND in_stg_table_name = 'C_STG_PTAC_TXN_DIV' THEN -- 20130225 SCK Update the stage txn_div_display col with a denormalized string derived -- from an aggregate of both staging and base object. -- 🏄 SQL ⚡ ETL MDM ⚡ PL/SQL ORM cmxlog.debug ('ADDUE: Landing table name is ' || in_ldg_table_name || ' Staging table name is ' || in_stg_table_name); BEGIN FOR R_PTAC_TXN in C_PTAC_TXN LOOP post_stage_concat(R_PTAC_TXN.PARTY_ACCT_ID, t_txn_div_display); UPDATE C_STG_PTAC_TXN_DIV SET txn_div_display = t_txn_div_display, create_date = sysdate WHERE TXN_DIV_CD = R_PTAC_TXN.TXN_DIV_CD AND PARTY_ACCT_ID = R_PTAC_TXN.PARTY_ACCT_ID; -- CURRENT OF C_PTAC_TXN; commit_count := commit_count + commit_inc; IF MOD(commit_count, 1000) = 0 THEN cmxlog.debug ('ADDUE: post_stage_concat is: ' || commit_count || ':' || R_PTAC_TXN.PARTY_ACCT_ID || ' : ' || t_txn_div_display); COMMIT; END IF; END LOOP; COMMIT; END; ELSE CMXlog.debug ('ADDUE Post Stage - no action taken'); END IF; END post_stage; END ADD_UE; ``` 2020-12-19 ##### Dark Mode and Reduced Motion ###### Making Dark Mode work with both a UI switch && the OS preference. ``` // Desc: Listen to the OS for user preference // but override with a UI toggle. /* ______ __ ____ ____ __ |_ _ `. [ | _ |_ \ / _| | ] | | `. \ ,--. _ .--. | | / ] | \/ | .--. .--.| | .---. | | | |`'_\ : [ `/'`\]| '' < | |\ /| | / .'`\ \/ /'`\' |/ /__\\ _| |_.' /// | |, | | | |`\ \ _| |_\/_| |_| \__. || \__/ || \__., |______.' \'-;__/[___] [__| \_] |_____||_____|'.__.' '.__.;__]'.__.' User Prefs */ let doPrefersReducedMotion = function( bMotion ){// Stop 3D rotation o3Config.controls.autoRotate = !bMotion; } let doPrefersColorScheme = function( bScheme ){ // UI | OS Semaphore document.body.dataset.n55AmpmTheme = ((bScheme) ? 
"dark" : "light"); // 🌙 / ☀️ } // Capture the prefers media queries const mqPrefReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)"); const mqPrefColorScheme = window.matchMedia("(prefers-color-scheme: dark)"); doPrefersReducedMotion( (mqPrefReducedMotion && mqPrefReducedMotion.matches) ); doPrefersColorScheme( (mqPrefColorScheme && mqPrefColorScheme.matches) ); // listen to changes in the media query's value mqPrefReducedMotion.addEventListener("change", () => { doPrefersReducedMotion( mqPrefReducedMotion.matches ); }); mqPrefColorScheme.addEventListener("change", () => { doPrefersColorScheme( mqPrefColorScheme.matches ); }); ``` ``` /* Dark Mode begin */ /*@media (prefers-color-scheme: dark) {*/ body[data-n55-ampm-theme='dark'] [role='main'] { background: linear-gradient(to right, #5A5852, #c2c2c2, #5A5852) } body[data-n55-ampm-theme='dark'] .h-bg__stripe, body[data-n55-ampm-theme='dark'] .l-caro-design > article, body[data-n55-ampm-theme='dark'] article.l-caro-design { background: repeating-linear-gradient(45deg,#242424,#242424 24px,#444 24px,#444 48px); } body[data-n55-ampm-theme='dark'] section.pfmf-grid > div > article { border: solid 1px #888; border-top: solid 2px #888; box-shadow: 0px 2px 6px -2px rgba(164,164,164,0.6); background-color: #242424; } body[data-n55-ampm-theme='dark'] .readable__doc { color: #fff; } body[data-n55-ampm-theme='dark'] .readable__caption { color: #fff; } body[data-n55-ampm-theme='dark'] .h-vect-line-art { stroke: #fff;} /*}*/ /* Dark Mode end */ ``` 2020-12-19 ##### Vanilla JS Popover Microinteraction ###### A popover is a transient view that shows on a content screen when a user clicks on a control button or within a defined area. ``` // A popover is a transient view that shows on a content screen when // a user clicks on a control button or within a defined area. 
/* __ __ __ __ __ __ __ /\ \/\ \ /\ \ /\ \/\ \ /\_\_\_\ \ \ \_\ \ \ \ \ \ \ \_\ \ \/_/\_\/_ \ \_____\ \ \_\ \ \_____\ /\_\/\_\ \/_____/ \/_/ \/_____/ \/_/\/_/ */ class NeodigmPopTart { constructor(_d, _aQ) { // Orthogonal Diagonalizer 🌶️ Protomolecule this._d = _d; this._aQ = _aQ; this.oPopTmpls = {} this.elBound = null; this.sBoundTheme = neodigmOpt.N55_THEME_DEFAULT this.fOnBeforeOpen = {}; this.fOnAfterOpen = {}; this.fOnClose = {} this.bIsOpen = this.bIsInit = false } init() { if( !this.bIsInit ){ // once this._d[ neodigmOpt.N55_APP_STATE.CONTEXT ].addEventListener("mouseover", ( ev ) => { // data-n55-poptart-hover if( ev.target?.dataset?.n55PoptartHover ){ const sAttrEv = ev.target?.dataset?.n55PoptartHover // || ev?.srcElement?.parentNode?.dataset?.n55PoptartHover this.sBoundTheme = ev.target.n55Theme || ev.target?.dataset?.n55Theme || ev.target?.parentNode?.dataset?.n55Theme || neodigmOpt.N55_THEME_DEFAULT if( this.sBoundTheme != "disabled" ) { let elPopTmpl = this._d[ neodigmOpt.N55_APP_STATE.CONTEXT ].querySelector( "#" + sAttrEv ) if( elPopTmpl?.dataset?.n55Poptart ){ this.elBound = ev.target this.sBoundTheme = this.elBound ||neodigmOpt.N55_THEME_DEFAULT ev.preventDefault() neodigmPopTart.open( this.oPopTmpls[ sAttrEv ] = elPopTmpl, JSON.parse( elPopTmpl.dataset.n55Poptart ) ) } } } }, false) this._d[ neodigmOpt.N55_APP_STATE.CONTEXT ].addEventListener("contextmenu", ( ev ) => { // data-n55-poptart-rightclick if( ev.target?.dataset?.n55PoptartRightclick || ev.target?.parentNode?.dataset?.n55PoptartRightclick ){ const sAttrEv = ev.target?.dataset?.n55PoptartRightclick || ev.target?.parentNode?.dataset?.n55PoptartRightclick neodigmPopTart.click_and_right_click( ev, sAttrEv ) } }, false) this._d[ neodigmOpt.N55_APP_STATE.CONTEXT ].addEventListener("click", ( ev ) => { // 👁️ Outside Click if( this.bIsOpen ){ let eTarget = ev.target, bInside = false; while( eTarget.tagName !== "HTML" ){ if( eTarget.dataset.n55PoptartOpen ){ bInside = true; break; } eTarget = eTarget.parentNode; } if( !bInside ){ neodigmPopTart.close() } }else{ // data-n55-poptart-click if( ev.target?.dataset?.n55PoptartClick || ev.target?.parentNode?.dataset?.n55PoptartClick ){ const sAttrEv = ev.target?.dataset?.n55PoptartClick || ev.target?.parentNode?.dataset?.n55PoptartClick neodigmPopTart.click_and_right_click( ev, sAttrEv ) } } }, true) this._d[ neodigmOpt.N55_APP_STATE.CONTEXT ].addEventListener("keydown", ( ev ) => { // Close on Esc Key if ( ev.key == "Escape" ){ if( this.bIsOpen ) this.close() } }, true) this.bIsInit = true } return this; } click_and_right_click( ev, sAttrEv, bPrevDef=true ){ this.sBoundTheme = ev.target.n55Theme || ev.target?.dataset.n55Theme || ev.target?.parentNode?.dataset.n55Theme || neodigmOpt.N55_THEME_DEFAULT if( this.sBoundTheme != "disabled" ) { let elPopTmpl = this._d[ neodigmOpt.N55_APP_STATE.CONTEXT ].querySelector( "#" + sAttrEv ) if( elPopTmpl?.dataset?.n55Poptart ){ this.elBound = ev.target if( bPrevDef ) ev.preventDefault() neodigmPopTart.open( this.oPopTmpls[ sAttrEv ] = elPopTmpl, JSON.parse( elPopTmpl.dataset.n55Poptart ) ) } } } open( elPop, oPos ) { if( this.bIsInit && !this.bIsPause && elPop.id && !elPop.dataset?.n55PoptartOpen ) { let nOffSetT, nOffSetL, nOffSetH, nOffSetW; // oPos offset conf nOffSetT = nOffSetL = nOffSetH = nOffSetW = 0; if( oPos?.offset ){ nOffSetH = oPos?.offset?.h || 0 nOffSetL = oPos?.offset?.l || 0 nOffSetT = oPos?.offset?.t || 0 nOffSetW = oPos?.offset?.w || 0 } let oRctBound = this.elBound.getBoundingClientRect() let pxLft = window.pageXOffset 
|| this._d.documentElement.scrollLeft let pxTop = window.pageYOffset || this._d.documentElement.scrollTop const NOFFSET = 10 // Allow pre CB to cancel open if( this.fOnBeforeOpen[ elPop.id ] ){ if( !this.fOnBeforeOpen[ elPop.id ]() ) return false; } if( this.fOnBeforeOpen[ "def" ] ){ if( !this.fOnBeforeOpen[ "def" ]() ) return false; } elPop.dataset.n55PoptartOpen = Date.now() let oRctPopCt = elPop.getBoundingClientRect() oPos.w = ( ( oPos.w ) ? oPos.w : ( oRctBound.width + nOffSetW ) ) // W oPos.x = ( ( oPos.x ) ? oPos.x : ( ( oRctBound.left + (oRctBound.width / 2) ) - ( oPos.w / 2) + pxLft + nOffSetL ) ) // X // TODO calc and align x center of bound elm oPos.y = ( ( oPos.y ) ? oPos.y : ( oRctBound.top + pxTop - nOffSetT ) ) // Y oPos.z = ( ( oPos.z ) ? oPos.z : neodigmOpt.N55_ZIND.PopTart ) // Z oPos.h = ( ( oPos.h ) ? (oPos.h + nOffSetH) : "auto" ) // H oPos.position = ( ( oPos.position ) ? oPos.position : "bottom" ) // P switch( oPos.position ){ case "top": oPos.y = ( oPos.y - ( oRctBound.height + oRctPopCt.height ) + NOFFSET ) break case "right": oPos.x = ( oPos.x + oRctBound.width ) break case "bottom": oPos.y = ( oPos.y + oRctBound.height ) + NOFFSET break case "left": oPos.x = ( oPos.x - oRctBound.width ) break } elPop.style.left = oPos.x + "px"; elPop.style.top = oPos.y + "px"; elPop.style.width = oPos.w + "px" elPop.style.height = ( oPos.h == "auto" ) ? "auto" : oPos.h + "px"; //elPop.style.position = "absolute"; elPop.style.zIndex = oPos.z; if( !elPop.dataset?.n55Theme ) elPop.dataset.n55Theme = this.sBoundTheme // Inherit Theme from Bound El, may be flash theme if( neodigmOpt.N55_GTM_DL_POPTRT ) neodigmUtils.doDataLayer( neodigmOpt.N55_GTM_DL_POPTRT, elPop.id ) this.bIsOpen = true if( this.fOnAfterOpen[ elPop.id ] ) this.fOnAfterOpen[ elPop.id ]() if( this.fOnAfterOpen["def"] ) this.fOnAfterOpen["def"]() } return this; } close() { for( let e in this.oPopTmpls ){ if( this.oPopTmpls[ e ]?.dataset?.n55PoptartOpen ){ let sId = this.oPopTmpls[ e ]?.id let bOkClose = true // CBs must explicitly return false to prevent closing if( neodigmOpt.N55_DEBUG_lOG ) console.log( "~Poptart Close | " + sId, this.fOnClose[ sId ] ) if( this.fOnClose[ sId ] ) bOkClose = !(this.fOnClose[ sId ]( sId ) === false) // The specific can cancel the generic if( bOkClose && this.fOnClose["def"] ) bOkClose = !(this.fOnClose["def"]( sId ) === false) if( bOkClose ){ delete this.oPopTmpls[ e ].dataset.n55PoptartOpen; this.bIsOpen = false } } } return this; } pause ( nT ){ if( this.bIsInit ){ if( nT ) setTimeout( () =>{neodigmPopTart.play()}, nT ) this.bIsPause = true; return this; } } play (){ this.bIsPause = false; return this; } shake( bSound = true) { // Shake All Open if(this.bIsInit && this.bIsOpen) { if( neodigmOpt.neodigmWired4Sound ) neodigmWired4Sound.doHaptic([8, 32, 48]) for( let e in this.oPopTmpls ){ if( this.oPopTmpls[ e ]?.dataset?.n55PoptartOpen ){ this.oPopTmpls[ e ].classList.add("ndsp__opened--shake1"); setTimeout(function(){ neodigmPopTart.oPopTmpls[ e ].classList.remove("ndsp__opened--shake1"); }, 460) } } if( bSound && neodigmOpt.neodigmWired4Sound && neodigmOpt.EVENT_SOUNDS ) neodigmWired4Sound.sound( 13, "QUITE" ) if( neodigmOpt.neodigmWired4Sound ) neodigmWired4Sound.doHaptic([48, 32, 8]) } return this } isOpen(){ return this.bIsOpen } setOnBeforeOpen( _f, id="def"){ this.fOnBeforeOpen[ id ] = _f } setOnAfterOpen( _f, id="def"){ this.fOnAfterOpen[ id ] = _f } setOnClose( _f, id="def"){ this.fOnClose[ id ] = _f } } ``` 2020-12-16 ##### Vue.js double tap Microinteraction ###### Firing both 
a tap and a double-tap on the same element ``` // A Vue.js snippet that shows how to capture both a tap and // a double-tap on the same element within the template. // // Canonical Use Case: Double-Tap to zoom into a hero image // and single-tap to zoom out. /* ____ ____ __ * \ \ / /_ __ ____ |__| ______ * \ Y / | \_/ __ \ | |/ ___/ * \ /| | /\ ___/ | |\___ \ * \___/ |____/ \___ > /\ /\__| /____ > * \/ \/ \______| \/ */ methods: { "doHeroMobMouseUp": function( ev ){ // Double Tap var oHro = this.oHeroZmMob; if( oHro.isInit ){ if( oHro.doubleTap ){ // Zoom In oHro.doubleTap = false; this.doHeroMobScale( .5 ); // Double Tap }else{ oHro.doubleTap = true; setTimeout(function(){ this.doHeroMobMouseUp_expire() }, 380); } } this.oHeroZmMob.isDown = false; }, "doHeroMobMouseUp_expire": function(){ // Single Tap var oHro = this.oHeroZmMob; if( oHro.isInit ){ // Zoom Out if( oHro.doubleTap ) this.doHeroMobScale( -.5 ); // Single Tap oHro.doubleTap = false; } } } /* This is only part of a larger Vue gesture implementation supporting Pinch 🤏, Zoom, Pan, and Swipe. Reach out to me if you want to learn more. */ ``` 2020-12-15 ##### CSS Advanced Accessibility ###### Motion, theme, and skip A11Y CSS solutions ``` /* Skip to Main Content - CSS Focus rules that make the link visible when focused from the omnibox. ======================================== ==== ======== ======== ===== ==== = === ===== ====== ===== == = == == ====== ======== ====== == == = ==== ===== ======== ====== == == = ==== ===== ======== ======= === = ===== ======== ======== ==== = ==== ===== ======== ======== ==== = ==== ===== ======== ======== ==== = ==== === ==== ====== ==== ======================================== */ a.skip__main:active, a.skip__main:focus { background-color: #fff; border-radius: 4px; border: 2px solid #000; color: #000; font-size: 1em; height: auto; width: 16%; left: auto; margin: 8px 42%; overflow: auto; padding: 4px; text-align: center; top: auto; z-index: 1024; } a.skip__main { left: -1024px; overflow: hidden; position: absolute; top: auto; width: 1px; height: 1px; z-index: -1024; } /* Dark Mode begin */ @media (prefers-color-scheme: dark) { body, [role='main'] { background: linear-gradient(to right, #5A5852, #c2c2c2, #5A5852) } .h-bg__stripe, .l-caro-design > article, article.l-caro-design { background: repeating-linear-gradient(45deg,#bbb,#bbb 24px,#ddd 24px,#ddd 48px); } } /* Dark Mode end */ /* Reduced Motion begin*/ @media (prefers-reduced-motion: reduce) { .hero__vect { animation: none; } } /* Reduced Motion end*/ ``` ``` <a class="js-skip__main--id skip__main" href="#a11y-skipmain">Skip to Main Content</a> ``` 2020-12-13 ##### Cypress E2E Quality Assurance ###### End to End testing 🚀 Headless browser automation ``` /* _____ / __ \ | / \/_ _ _ __ _ __ ___ ___ ___ | | | | | | '_ \| '__/ _ \/ __/ __| | \__/\ |_| | |_) | | | __/\__ \__ \ \____/\__, | .__/|_| \___||___/___/.io __/ | | |___/|_| E2E ✅ Automatically capture a video when a test fails ✅ Test values persisted in the Vuex (Vue.js Vuex specific) store ✅ Apply optional configuration files via the command line ✅ Test the uploading of images ✅ Create custom reusable, and chainable commands, such as cy.signIn() or cy.turnOnFeature() ✅ Test responsive layout & Local Storage ✅ Test A11y WCAG success criteria */ describe('E2E test | Hotel navigation, selection, and discovery', () => { context('Admin Add Hotel to Event', function () { // ignore CORS Cypress.on('uncaught:exception', (err, runnable) => { return false }); it('Success Login then Save Event', () => 
{ cy.viewport(1066, 600) // large laptop 66.563em cy.log( JSON.stringify(Cypress.env()) ) let event_url; // The URL of the first event (default) let dMessage = new Date(); // Now dMessage = dMessage.toDateString() + " " + dMessage.toLocaleTimeString(); cy.tt_SignIn(Cypress.env( "mock_email" ), Cypress.env( "mock_password" )) .then(() => { cy.window().then( $win => { cy.wrap( $win.system_output ).should("exist") }) }) cy.url().should('not.include', 'login.') cy.visit( Cypress.env( "e2e_url_events" ) ) cy.url().should('include', 'events.') Cypress.Cookies.debug(true, { verbose: false }) cy.getCookie('logintoken').should('exist') cy.getCookie('role_id').should('exist') cy.getCookie('username').should('exist') cy.getCookie('otapi_token').should('exist') cy.get("a[href*='event-edit']" ).first().click() // Find the first matching link in the table. cy.get("#messages" ).type("{selectall}{backspace}E2E Test: " + dMessage ) cy.get("#eventForm > div.border-top.d-flex.pt-3.row > div > input" ).first().click() // Save change cy.get("#airTab" ).click() // select tab cy.get("#activate_flights" ).check(); cy.get("#flightForm > div.border-top.d-flex.pt-3.row > div > input" ).click(); cy.get("#vehicleTab" ).click() // select tab cy.get("#activate_vehicle" ).uncheck(); cy.get("#vehicleForm > div.border-top.d-flex.pt-3.row > div > input" ).click(); cy.get("#hotelTab" ).click() // select tab cy.get("#activate_hotels" ).check(); cy.get("#hotelForm > div.border-top.d-flex.pt-3.row > div > input" ).click(); // Extract URL from INPUT cy.get('#siteURL').invoke('val') .then( value => { event_url = value; }); cy.then(() => { return cy.visit(event_url); }); }) }) context('Choose Flight', function () { Cypress.on('uncaught:exception', (err, runnable) => { return false }); // ignore CORS it('Success Flight added to cart', () => { cy.viewport(1066, 600) // large laptop 66.563em cy.get("#from_airport" ).type( "ORD" ) cy.get("#to_airport" ).type( "LGA" ) cy.get("input[name='from_date']" ).click({ force: true }) cy.server() cy.route("*").as( "checkout" ) cy.get("div.vdp-datepicker.flex-fill > div:nth-child(2) > div > span:nth-child(39)" ).first().click() cy.get("#search-widget-btn" ).click() cy.wait("@checkout" ).its('status').should('eq', 200) cy.get("h5.modal-title").should("not.be.visible") .then( ($ModalMsg) => { cy.get("div.align-self-center.col-6.col-md.col-sm.col-xl.order-12.p-xs-1.text-right > button" ).first().click() } ) }) }) context('Hotel LightBox', function () { Cypress.on('uncaught:exception', (err, runnable) => { return false }); // ignore CORS it('Success Hotel added to cart', () => { cy.viewport(1066, 600) // large laptop 66.563em cy.wait(2000) cy.get("picture > img" ).first() .then( ( $picture )=>{ cy.wrap( $picture ).click() cy.wait( 6000 ) }) cy.get(".l-ltbx__image" ).first().click() // Cycle photos forward cy.get(".l-ltbx__vect--right" ) .then( ( $arrow_right ) => { cy.wait( 1000 ) cy.wrap( $arrow_right ).click() cy.wait( 1000 ) cy.wrap( $arrow_right ).click() cy.wait( 1000 ) cy.wrap( $arrow_right ).click() }) cy.wait( 1000 ) cy.get(".l-ltbx__btn" ).first() // Cycle photos backward .then( ( $arrow_left ) => { cy.wrap( $arrow_left ).click() cy.wait( 1000 ) }) cy.get(".l-ltbx__figcap").invoke("text").should("include", "4 of") .then( () => { cy.get(".l-ltbx__vect" ).first().click() // Close Modal cy.get("OUTPUT BUTTON.l-button" ).first().click() // Book Room .then( () => { cy.get( "A.ttfont-semibold.tttext-gray-700").first().click() // Change Tab cy.wait( 1000 ) cy.get( 
"A.ttfont-semibold.tttext-gray-700").first().click() // Change Tab cy.wait( 1000 ) cy.get( "ARTICLE SECTION BUTTON.l-button").first().click() // Book Room .then( ()=>{ cy.wait( 4000 ) cy.url().should('include', '/checkout') }) }) }) }) }) }) ``` 2020-12-07 ##### Asynchronous eCom Nav Category Count ###### Asynchronous recursive crawl reports the total number of products by category. ``` // Desc: Asynchronous recursive crawl report the total number of products by category // Usage: Console SNIPPET catCount.init(); /* @@@@@@ @@@ @@@ @@@ @@@@@@@ @@@@@@@ @@@@@@@@ @@@@@@@ !@@ @@!@!@@@ @@! @@! @@@ @@! @@@ @@! @!! !@@!! @!@@!!@! !!@ @!@@!@! @!@@!@! @!!!:! @!! !:! !!: !!! !!: !!: !!: !!: !!: ::.: : :: : : : : : :: :: : run in console */ var catCount = (function(_d,_q){ "use strict"; let aSub = []; console.clear(); return { init: function(){ // Get ref to all product categories in the left nav 🛒 aSub = [ ... _d.querySelectorAll( _q ) ].filter( ( el ) => { return (( el.firstChild.nodeValue ) && ( el.href )); } ); aSub.forEach( ( elLink ) => { if( elLink ) catCount.asyncTotal( elLink ); } ); }, parse: function( _Name, _Contents ){ let aTotl = _Contents.split("sizeTotalNumRecs"); if( aTotl[1].split('"')[2] ){ console.log( _Name, aTotl[1].split('"')[2]); } return true; }, asyncTotal: function( _elLink ){ let oXhr = new XMLHttpRequest(); oXhr.open("GET", _elLink.href, true); oXhr.onreadystatechange = () => { if( this.readyState!==4 || this.status!==200 ) return; catCount.parse( _elLink.firstChild.nodeValue, this.responseText ); }; oXhr.send(); } } })(document, "LI.item nav > a" ); ``` 2020-12-07 ##### Color of the Year CSS Styles ###### Color of the Year 2000 thru 2021 CSS Utility classes ``` /* Tailwind like CSS Utility classes for the Pantone Color of the Years from 2000 thru 2021 /* ____ _ * | _ \ __ _ _ __ | |_ ___ _ __ ___ * | |_) / _` | '_ \| __/ _ \| '_ \ / _ \ 2000- 2021 * | __/ (_| | | | | || (_) | | | | __/ * |_| \__,_|_| |_|\__\___/|_| |_|\___| 🟥 🟩 🟦 🟪 🟨 */ /* Color of the Year begin */ .bg-coy_2000 {background-color: #9BB7D4;} /* Cerulean */ .bg-coy_2001 {background-color: #C74375;} /* Fuchsia Rose */ .bg-coy_2002 {background-color: #BF1932;} /* True Red */ .bg-coy_2003 {background-color: #7BC4C4;} /* Aqua Sky */ .bg-coy_2004 {background-color: #E2583E;} /* Tigerlily */ .bg-coy_2005 {background-color: #53B0AE;} /* Blue Turquoise */ .bg-coy_2006 {background-color: #DECDBE;} /* Sand Dollar */ .bg-coy_2007 {background-color: #9B1B30;} /* Chili Pepper */ .bg-coy_2008 {background-color: #5A5B9F;} /* Blue Iris */ .bg-coy_2009 {background-color: #F0C05A;} /* Mimosa */ .bg-coy_2010 {background-color: #45B5AA;} /* Turquoise */ .bg-coy_2011 {background-color: #D94F70;} /* Honeysuckle */ .bg-coy_2012 {background-color: #DD4124;} /* Tangerine Tango */ .bg-coy_2013 {background-color: #009473;} /* Emerald */ .bg-coy_2014 {background-color: #B163A3;} /* Radiant Orchid */ .bg-coy_2015 {background-color: #955251;} /* Marsala */ .bg-coy_2016 {background-color: #F7CAC9;} /* Rose Quartz */ .bg-coy_2016b {background-color: #92A8D1;} /* Serenity */ .bg-coy_2017 {background-color: #88B04B;} /* Greenery */ .bg-coy_2018 {background-color: #5F4B8B;} /* Ultra Violet */ .bg-coy_2019 {background-color: #FF6F61;} /* Living Coral */ .bg-coy_2020 {background-color: #0F4C81;} /* Classic Blue */ .bg-coy_2021 {background-color: #939597;} /* Ultimate Gray */ .bg-coy_2021b {background-color: #F5DF4D;} /* Illuminating */ .bg-coy_2024 {background-color: #FFBE98;} /* Peach Fuzz */ .text-coy_2000 {color: #9BB7D4;} /* Cerulean */ 
.text-coy_2001 {color: #C74375;} /* Fuchsia Rose */ .text-coy_2002 {color: #BF1932;} /* True Red */ .text-coy_2003 {color: #7BC4C4;} /* Aqua Sky */ .text-coy_2004 {color: #E2583E;} /* Tigerlily */ .text-coy_2005 {color: #53B0AE;} /* Blue Turquoise */ .text-coy_2006 {color: #DECDBE;} /* Sand Dollar */ .text-coy_2007 {color: #9B1B30;} /* Chili Pepper */ .text-coy_2008 {color: #5A5B9F;} /* Blue Iris */ .text-coy_2009 {color: #F0C05A;} /* Mimosa */ .text-coy_2010 {color: #45B5AA;} /* Turquoise */ .text-coy_2011 {color: #D94F70;} /* Honeysuckle */ .text-coy_2012 {color: #DD4124;} /* Tangerine Tango */ .text-coy_2013 {color: #009473;} /* Emerald */ .text-coy_2014 {color: #B163A3;} /* Radiant Orchid */ .text-coy_2015 {color: #955251;} /* Marsala */ .text-coy_2016 {color: #F7CAC9;} /* Rose Quartz */ .text-coy_2016b {color: #92A8D1;} /* Serenity */ .text-coy_2017 {color: #88B04B;} /* Greenery */ .text-coy_2018 {color: #5F4B8B;} /* Ultra Violet */ .text-coy_2019 {color: #FF6F61;} /* Living Coral */ .text-coy_2020 {color: #0F4C81;} /* Classic Blue */ .text-coy_2021 {color: #939597;} /* Ultimate Gray */ .text-coy_2021b {color: #F5DF4D;} /* Illuminating */ .text-coy_2022 {color: #6667AB;} /* Very Peri */ .text-coy_2023 {color: #BE3455;} /* Viva Magenta */ .text-coy_2024 {color: #FFBE98;} /* Peach Fuzz */ /* Color of the Year end */ ``` 2020-12-07 ##### Solve Anagram Puzzle ###### Do two strings contain the exact amount of letters to form two words? ``` /* An anagram is a word or phrase formed by rearranging the letters of a different word or phrase, typically using all the original letters exactly once. For example, the word anagram itself can be rearranged into nag a ram, also the word binary into brainy. 🎯 🍰 🔥 _ /_\ _ __ __ _ __ _ _ __ __ _ _ __ ___ //_\\| '_ \ / _` |/ _` | '__/ _` | '_ ` _ \ / _ \ | | | (_| | (_| | | | (_| | | | | | | \_/ \_/_| |_|\__,_|\__, |_| \__,_|_| |_| |_| |___/ */ // Determine if two strings are Anagrams function isAnagram( word1 = "DOCTORWHO", word2 = "TORCHWOOD"){ let uc1 = word1.toUpperCase(), uc2 = word2.toUpperCase() return ([ ... uc1 ].filter(( c )=>{ if( uc2.indexOf( c ) != -1 ){ uc2 = uc2.replace( c, "" ) // Replace First Occurrence return true; } }).length === uc1.length && (!uc2)) } console.warn( isAnagram("neodigm", "dogimen") ); ``` ``` // Palindromes | They can be read the same backwards and forwards! // Is TACOCAT spelled backward still TACOCAT? // People have been asking this question for thousands of years until... // I wrote a function in JavaScript to prove it and end the debate. Palindrome in JavaScript let isPalindrome = ( sIn = "tacocat" ) => ( sIn.split("").reverse().join("") === sIn ); /*🐈🐱 _._ _,-'""`-._ (,-.`._,'( |\`-/| `-.-' \ )-`( , o o) `- \`_`"'- My name is Taco! 
^_^ */
```
2020-12-07
##### Virtual Keyboard Extension Configuration
###### TS Virtual Keyboard Chrome Extension
```
// TS Virtual Keyboard ⌨️ Chrome Extension | Configuration Class
/***
 *  _______                 _
 * (_______)               | |               (_)          _
 *  _     _   _ ____  ____  \ \   ____  ____ _ ____    _| |_
 * | |   | | | |  _ \ / _ )  \ \ / ___)/ ___) |  _ \  (_   _)
 * | |   | |_| | | | ( (/ / _____) | (___| |   | | | |   | |__
 *  \_)   \__  | ||_/ \____|______/ \____)_|   | ||_/     \___)
 *       (____/|_|                             |_|    npm install --save @types/chrome
 */
class AVKOptions {
  aOpts : Array<any>;
  constructor ( pAr : Array<any> = [] ) {
    this.aOpts = pAr;
  }
  // Flip one named option on or off
  setState ( sOpt : string, bState : boolean ) : boolean {
    this.aOpts = this.aOpts.filter( (e) => { if (e[0] === sOpt) e[1] = bState; return true; } );
    return bState;
  }
  getState ( sOpt : string ) : boolean {
    return this.aOpts.filter( (e) => { if (e[0] === sOpt) { return true; } })[0][1];
  }
  getFeedback ( sOpt : string ) : string {
    return this.aOpts.filter( (e) => { if (e[0] === sOpt) { return true; } })[0][2];
  }
}
export let options = new AVKOptions([
  ["audio", false, "Click Sounds"],
  ["autohide", false, "Hide if not in use"],
  ["blur", false, "Blur Text"],
  ["hover", false, "Hover No Click"],
  ["opaque", false, "Cannot See Through"],
  ["scramble", false, "Rearrange Keys"],
  ["theme", false, "Daytime theme"]
]);
```
2020-12-07
##### Web Music Ad Blocker Snippet
###### Automatically mute the Music player when Ads are playing and unmute when they are done (in Chrome).
```
/*
  Install: Open Chrome Dev Tools (Command+option+I on Mac). Menu > Sources > Snippets
  Install: Create a new Snippet named musicADify.js, Paste this script, Save (Command+S).
  Usage: Run the Snippet once each time you start the Music Web Player.
  Usage: Right-Click the snippet named musicADify.js and choose Run from the drop-down.
  Usage: Close Chrome Dev Tools. 🏖️ Play your Jams! 🎶
  ╔═╗┌─┐┌─┐┌┬┐┬┌─┐┬ ┬  ╔═╗┌┬┐┌─┐
  ╚═╗├─┘│ │ │ │├┤ └┬┘  ╠═╣ ││└─┐
  ╚═╝┴   └─┘ ┴ ┴└  ┴   ╩ ╩─┴┘└─┘
*/
let spotADify = ( (_d, _q, _t) => {
  let eS = _d.querySelector( _q ), bS = true;
  if( eS ){ // 🏖️ Play your Jams! 🎶
    bS = ( eS.getAttribute("aria-label") == "Mute" );
    setInterval( () => { spotADify.tick(); }, _t);
    return {
      "tick": () => {
        if((_d.title.indexOf("Adve") != -1) || (_d.title.indexOf("Spoti") != -1)){
          if( bS ){ eS.click(); bS = false; }
        }else{
          if( !bS ){ eS.click(); bS = true; }
        }
      }
    }
  }
})( document, "[aria-label='Mute'],[aria-label='Unmute']", 256);
```
2020-12-07
##### Capture Entire DOM State into Inline CSS Snapshot
###### Save As HTML a snapshot capture of entire DOM State with inline CSS
```
// Desc: Save As HTML a snapshot capture of entire DOM State with inline CSS
// Usage: Just paste this code into the console 🌴
/* _________ __ __ ____ __.
 / _____/ ____ _____/ |__/ |_ | |/ _|___________ __ __ ______ ____
 \_____ \_/ ___\/ _ \ __\ __\ | < \_ __ \__ \ | | \/ ___// __ \
 /        \ \__( <_> ) | | | | | \ | | \// __ \| | /\___ \\ ___/
/_______ /\___ >____/|__| |__| |____|__ \|__| (____ /____//____ >\___ >  ES2022*/
function computedCSS2inline(element, options = {}) {
  if (!element) {
    throw new Error("No element specified.");
  }
  if (options.recursive) {
    Array.from( element.children ).forEach(child => {
      computedCSS2inline(child, options);
    });
  }
  // Copy every computed property onto the element's style attribute
  const computedStyle = getComputedStyle(element);
  //(options.properties || computedStyle)::each(property => {
  Array.from( computedStyle ).forEach(property => {
    element.style[property] = computedStyle.getPropertyValue(property);
    //element.setAttribute("class", "")
  });
}
computedCSS2inline(document.body, {recursive: true});
[ ...
document.querySelectorAll("script, link, style")].forEach(function(s){ s.outerHTML = ""}) async function saveToFile() { const handle = await showSaveFilePicker({ suggestedName: 'grabbed.html', types: [{ description: 'HTML', accept: {'text/html': ['.html']}, }] }); const writable = await handle.createWritable(); await writable.write(document.body.parentNode.innerHTML); writable.close(); }; console.log("NOTE: Run saveToFile() in console!") ``` 2020-09-16
true
true
true
Lead product storyteller, UX artisan, front-end engineer, and prolific open source contributor. Reimagine product narratives as interactive dashboards, persuasive infographics, and succinct data visualizations. Advocate for performant, functional, testable, future-proof, and trusted JavaScript, TypeScript, Lit, and Svelte.
2024-10-12 00:00:00
2020-09-16 00:00:00
https://www.thescottkrau…_library_tn.webp
website
theScottKrause.com
Scott C. Krause UX Developer
null
null
10,617,699
http://www.theatlantic.com/entertainment/archive/2013/11/-em-starship-troopers-em-one-of-the-most-misunderstood-movies-ever/281236/?single_page=true
Starship Troopers: One of the Most Misunderstood Movies Ever
Calum Marsh
*Starship Troopers*: One of the Most Misunderstood Movies Ever The sci-fi film’s* *self-aware satire went unrecognized by critics when it came out 16 years ago. Now, some are finally getting the joke. When Paul Verhoeven’s *Starship Troopers* hit theaters 16 years ago today, most American critics slammed it. In *The New York Times, *Janet Maslin panned the “crazed, lurid spectacle,” as featuring “raunchiness tailor-made for teen-age boys.” Jeff Vice, in the *Deseret News*, called it “a nonstop splatterfest so devoid of taste and logic that it makes even the most brainless summer blockbuster look intelligent.” Roger Ebert, who had praised the “pointed social satire” of Verhoeven’s *Robocop*, found the film “one-dimensional,” a trivial nothing “pitched at 11-year-old science-fiction fans.” But those critics had missed the point. *Starship Troopers* is satire, a ruthlessly funny and keenly self-aware sendup of right-wing militarism*. *The fact that it was and continues to be taken at face value speaks to the very vapidity the movie skewers. *Starship Troopers* is set in the distant future, when humankind has begun to colonize worlds beyond the borders of our galaxy. Earth has provoked an otherwise benign species of bug-like aliens to retaliate violently against our planet, which it suddenly and correctly perceives as hostile. Interpreting what are pretty obviously self-defense tactics as further gestures of aggression, humankind marshals its global forces and charges into a grossly outmatched interstellar war. The rhetoric throughout is unmistakably fascistic: Earth’s disposable infantrymen, among whom our high-school-aged former-jock hero naturally ranks, are galvanized by insipid sloganeering, which they regurgitate on command with sincerity as they head to slaughter. (“The only good bug is a dead bug!” is the chant most favored—shades of *Animal Farm* abound.) The resulting film critiques the military-industrial complex, the jingoism of American foreign policy, and a culture that privileges reactionary violence over sensitivity and reason. The screenplay, by the *Robocop* writer Edward Neumeier, furnished the old-fashioned science-fiction framework of Robert A. Heinlein’s notoriously militaristic novel with archetypes on loan from teen soaps and young-adult fiction, undermining the self-serious saber-rattling of the source text. Even the conclusion makes a point of deflating any residual sense of heroism and valor: We see our protagonists, having narrowly escaped death during a near-suicidal mission, marching back to battle in a glorified recruitment video—suggesting that in war the only reward for a battle well fought is the prospect of further battle. Over the nearly two decades since the film’s debut, the critical reputation of *Starship Troopers *hasn’t especially improved. But you can feel the conversation beginning to shift; it rightfully has come to be appreciated by some as an unsung masterpiece. Coming in at number 20 on *Slant Magazine*’s list of the 100 best films of the 1990s last year (a poll in which, full disclosure, I was among the voting critics), the site’s Phil Coldiron described it as “one of the greatest of all anti-imperialist films,” a parody of Hollywood form whose superficial “badness” is central to its critique. It fared well in *The* *A.V. Club’s* ‘90s poll, too, appearing in the top 50, where it was praised as a “gonzo satire destined, even *designed*, to be misunderstood.” Scott Tobias, former editor of the A.V. 
Club’s film section, lauded *Troopers* a few years earlier as “the most subversive major studio film in recent memory,” observing that it “seems absurd now to write it off as some silly piece of escapism, as its detractors complained.” But the original misperceptions still persist. On October 4th, *RiffTrax*—a series of downloadable comedy commentary tracks from the creators of *Mystery Science Theater 3000*—released an episode in which they mocked *Starship Troopers*, a movie their website describes as “dumb and loud” and a “goofy mess.” Mike J. Nelson and his *RiffTrax* co-stars Kevin Murphy and Bill Corbett heckle the film with about as much insight and wit as they misperceive the film to have. Sample humor: At one point, a bomb destroys a giant bug, and the three of them yell “Oh no, Raid!” Later, Denise Richards smiles, and someone says, in a robotic voice, “Smile-o-tron 3000 engaged.” It goes on like this. The tagline for *RiffTrax* is “Your favorite movies—made funny!” What they don’t seem to understand is that *Starship Troopers* already is funny—and smart. *Troopers*, of course, is far from the only instance of a film being popularly misinterpreted. Given enough distance even the most fervently reviled movie may one day find its legacy resuscitated, earning decades later its long overdue acclaim. Maybe that time is near for *Troopers*; hopefully, at least a few *RiffTrax* listeners newly introduced to the film picked up what was really going on. If you’re open and attuned to it—if you’re prepared for the rigor and intensity of Verhoeven’s approach—you’ll get the joke *Starship Troopers* is telling. And you’ll laugh.
true
true
true
The sci-fi film’s self-aware satire went unrecognized by critics when it came out 16 years ago. Now, some are finally getting the joke.
2024-10-12 00:00:00
2013-11-07 00:00:00
https://cdn.theatlantic.…629/original.jpg
article
theatlantic.com
The Atlantic
null
null
5,880,080
http://onlinecurl.com/
This domain is registered and protected by Markmonitor
null
This domain is registered and protected by Markmonitor. More than half the Fortune 100 trust Markmonitor to protect their brands online.
true
true
true
null
2024-10-12 00:00:00
2024-01-01 00:00:00
null
null
null
null
null
null
2,580,690
http://feefighters.com/blog/square-register-first-step-in-going-after-visa-and-mastercard/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
21,303,822
https://stackoverflow.blog/2019/10/16/coding-salaries-in-2019-updating-the-stack-overflow-salary-calculator/
Coding Salaries in 2019: Updating the Stack Overflow Salary Calculator
Julia Silge
Today we are launching the 2019 update to the Stack Overflow Salary Calculator, a tool that allows developers and employers to understand representative salaries for the software industry based on experience level, location, education, and specific technologies. This is the third year we have released our salary calculator, and we continue to use the same modeling approach from previous years. Our salary calculator is based on the comprehensive data from the Stack Overflow Developer Survey, and this large, extensive survey data allows us to build an accurate model that reflects trends in how coding work is being compensated around the world.

The inputs to our salary calculator, what we find most predictive of a developer's salary, are your

- location,
- education,
- years of professional coding experience,
- what kind of coding work you do, and
- the technologies you use professionally.

## Geography

One of the factors that impacts salaries the most for people who code is where they live and work. Let's look at salaries for the top five countries for Stack Overflow traffic and respondents on the Stack Overflow Developer Survey. What types of changes do we see overall compared to last year? Looking in detail at last year's model compared to this year's model is a more fair comparison (contrasted with the 2017 to 2018 comparison) because our data cleaning and preparation process was more consistent; we have more confidence in year-to-year trends this year. According to our survey, people who code in countries like the United States and Canada are earning about 5% more this year compared to last year, even controlling for experience and education. The situation in Europe, Russia, Brazil, and India is not quite as positive, with mostly stable salaries this year compared to last, controlling for education and experience.

Based on our Developer Survey data this year, we are able to report salary estimates for nine countries in addition to the five shown above:

- Australia
- Brazil
- India
- Italy
- Netherlands
- Poland
- Russia
- Spain
- Sweden

Support for Italy is new this year, so if you are a developer living in Italy, be sure to check out our results! We are also able to predict salary estimates for some specific cities that have different salary characteristics than the countries in which they are located, such as Seattle, New York, and the San Francisco Bay Area in the United States. As an example, our results indicate that people in Paris who code for work earn about 15% more than people in the rest of France.

## Earning potential and types of coding work

Another important factor that affects how much a developer is paid is what kind of work they do. Like last year, people working in DevOps are high earners. Notice that we see the same mix of developer roles near the top and near the bottom of these plots for each country. Our survey sample is large, but we sample differently along the distributions of experience, education, developer role, and other characteristics in each country. To make accurate and confident estimates for salaries using this data, we build a model that accounts for all of these characteristics at once. This means that we cannot use our modeling to answer questions like whether embedded developers in Germany are paid relatively more than in the United States, but we *can* pool all the countries' data together to build a more accurate model than we would have otherwise.
In the end, some roles such as DevOps are associated with higher salaries, some roles such as game developers are associated with lower salaries, and most are not associated with any change from the baseline salary, i.e. from the typical salary for that country with a given experience and education. Data roles fall into that last category, meaning that we have evidence here that high salaries for data scientists and data engineers can be accounted for by high education and high experience levels alone. Data scientists **are** highly paid, but not more so than a similarly educated developer doing other kinds of work. (Both bachelor's degrees and even higher degrees are associated with significantly increased pay for people who code.) Over the past several years, data science and data engineering work have been moving away from an extreme outlier position into the mainstream of software work.

DevOps-focused work, on the other hand, is unusual both in terms of high compensation globally, as well as job satisfaction, how few DevOps specialists are looking for work, and more. On the 2019 Developer Survey, we gave respondents multiple options to choose from for their professional role, and each respondent could choose as many as appropriate for them. This year, we had two related roles in the operations area: "site reliability engineer" and "DevOps specialist". More respondents said they were DevOps specialists than SREs, but those who said they are SREs earn even more than those who identified as DevOps specialists. For the purposes of this modeling, we combined these two roles into one DevOps role, but keep in mind that people with SRE roles can earn even more.

Specific technologies used in coding work also impact salary, even when controlling for developer roles, experience, and other factors. Staying current with high-demand technologies can have a big impact on salary. For example, this year, we see a significant increase in salary for developers who tell us they use Scala, Redis, Go, or Apache Spark in their professional lives. Just like last year, some technologies like WordPress and PHP are associated with lower salaries for developers, and some technologies like JavaScript are used so broadly that they do not affect salary up *or* down.

## Building a model to estimate salary

Like last year, we used multiple linear regression to predict salary from the model inputs, with regularization to help with variable selection. Regularization can help decide which of the many technologies and developer roles we asked about on the survey are important predictors for salary. We modeled salaries on a log scale, because the salary distributions are log-normal (as such distributions often are) with a long tail of very high salaries. In the salary calculator, we report a predicted salary for the location, education, experience, and other information you enter. Notice that we also report a 50% prediction interval. The specific statistical meaning of this interval is that we expect 50% of people with the same characteristics as you to have salaries within that range; it spans the 25th to 75th percentiles. The interval is just as important as the prediction itself (the 50th percentile), because it gives you an understanding of what the range of expected salaries could be. Similar to last year, we built two models, one for countries with higher incomes and one for countries with lower incomes.
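To make the mechanics concrete, here is a minimal sketch of that approach; it is not Stack Overflow's actual code. It fits a ridge-regularized linear regression to log salaries and reports a 50% prediction interval taken from the 25th and 75th percentiles of the residuals. The data is simulated, and the four features (years of experience, an education flag, a technology flag, a role flag) are hypothetical stand-ins for the survey inputs described above.

```
# Rough sketch: regularized regression on log(salary) with a 50% interval.
# Simulated data and made-up feature names, for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 20, n),   # years of professional coding experience
    rng.integers(0, 2, n),   # education flag (e.g. holds a bachelor's degree)
    rng.integers(0, 2, n),   # technology flag (e.g. uses Scala)
    rng.integers(0, 2, n),   # role flag (e.g. DevOps)
])
# Salaries are roughly log-normal, so the model works on log(salary)
log_salary = (10.5 + 0.03 * X[:, 0] + 0.10 * X[:, 1] + 0.12 * X[:, 2]
              + 0.15 * X[:, 3] + rng.normal(0, 0.25, n))

model = Ridge(alpha=1.0).fit(X, log_salary)   # regularized linear regression

# 50% prediction interval: 25th to 75th percentile of the residuals
residuals = log_salary - model.predict(X)
lo, hi = np.percentile(residuals, [25, 75])

x_new = np.array([[8, 1, 1, 0]])              # one hypothetical respondent
pred = model.predict(x_new)[0]
print("median estimate:", int(np.exp(pred)))
print("50% interval:", int(np.exp(pred + lo)), "-", int(np.exp(pred + hi)))
```

The real survey models use many more inputs and separate fits for higher- and lower-income countries, but the log transform, the regularized fit, and the percentile-based interval work the same way.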
We can achieve better salary predictions for developers in all countries with this approach, in terms of how well the models fit the data, compared to either building one model for all countries, building separate models for each country, or other approaches. One way to evaluate how well a model fits data is to look at residuals, the differences between the predicted salaries and the known salaries for the dataset we used for modeling. Let’s look at the residuals for the countries and individual cities supported in the 2019 update to our salary calculator. The residuals look pretty good overall, but there are a few things to notice and keep in mind when interpreting the modeling results. First of all, just like last year, the residuals are much higher for countries with lower incomes like India and Brazil than countries with higher incomes like Sweden and Germany. We cannot predict incomes as accurately in countries with lower incomes. Second, notice the shape of the residuals for some countries such as the United States; there is less variation in the residuals at high salary than at low salary. This type of behavior has a special name in statistics; it's called heteroscedasticity, which more generally means that the variance of residuals isn't constant. It means that our dataset doesn't entirely meet the underlying assumptions of our model. In real life, salaries for people who code depend on *both* the characteristics we have taken into account in our model *and* other complex factors, some of which involve some violations of the assumptions of our modeling. That's real life for you! These are some of the limitations of our results, but the salaries and prediction intervals that we are reporting in our calculator this year offer reliable insight for developer salaries around the world. This plot shows residuals and fitted salaries in USD on a log scale, but salaries in the calculator itself are reported in natural units and the appropriate local currency for each user. ## Try it out Check out the Stack Overflow Salary Calculator and learn more about what people who code are earning! As a developer, you can explore the range of potential salaries for the kind of work you do and your skills, and find specific recommended jobs relevant for you. As an employer, you can learn more about the hiring landscape today. *Done reading this awesome post? We have something fun for ya. The Stack Overflow podcast is back! Come **check it out** or listen below.*
true
true
true
null
2024-10-12 00:00:00
2019-10-16 00:00:00
https://cdn.stackoverflo…=png&auto=format
website
stackoverflow.blog
Stackoverflow
null
null
15,428,432
http://www.bbc.com/news/technology-41524650
Internet speed guarantees must be realistic, says Ofcom
null
# Internet speed guarantees must be realistic, says Ofcom **Internet users are to be granted more rights on connection speeds as Ofcom imposes tougher rules on how ISPs advertise broadband services.** The proposals give consumers the right to exit contracts penalty-free if speeds fall below a guaranteed minimum. Ofcom says there is a mismatch between what is advertised, and the speeds customers receive. But experts say speeds are affected by different factors, and are not strictly a measure of connection to a device. A public consultation is currently being conducted until 10 November. ## Easy way out Gillian Guy, chief executive of Citizens Advice, said: "Many people seek our help each year because their slow and intermittent broadband service falls short of what their contract promised. "For most people, a reliable broadband connection is a necessity, so when they don't get what they've paid for they should always have a quick and easy way out of their contract." She said: "These changes are an important step in giving consumers more power to hold their broadband provider to account for poor service." Ofcom's existing broadband code of practice requires ISPs to provide consumers with an estimate of the internet speed they can expect from their service. If the proposed rules pass consultation, broadband providers will need to be much more specific about the speeds customers will receive and will have to set a guaranteed minimum speed for each package. This could mean current estimates of "up to 17Mbps" become "a minimum of 10Mbps". If the speed falls below the guaranteed minimum, under the new rules, the ISP will have one month to fix the problem, and if it cannot be fixed, the customer can terminate the contract without penalty. ## Right to know A BT spokesperson said: "We remain firmly supportive of Ofcom's voluntary code of practice on broadband speeds, including the latest proposed changes which we hope will make things even clearer for our customers. We have been working with Ofcom and other ISPs to improve the code." "We support the emphasis on customers knowing what broadband speeds will be delivered at peak times and we are happy to commit to letting customers leave without penalty if they don't reach their minimum guaranteed speeds and we can't fix the issue within a month." When broadband connections become slow and intermittent, consumers often use speed tests in order to see what connection speeds they are actually receiving. But it is a complex issue, according to a technology expert. Andrew Ferguson, editor of Think Broadband, said: "The code-of-practice changes are very welcome, but it's clear from years of experience that broadband providers are not deliberately and systematically ripping off the customer. "The biggest problem for an ISP is balancing the desires of the marketing department against what consumers are actually experiencing every day. These rules, once implemented, should help to force providers to be more honest with their customers," said Mr Ferguson. Many factors can influence browsing speeds, he said. These include - how wi-fi signals travel around a home - how well ethernet cables work - how computers handle net protocols - whether a computer can process higher speeds - anti-virus software - peak-time congestion "What I'm trying to get the general public to understand is that when they say a guaranteed speed, it's usually just to that broadband router, but not to your PC or device," Mr Ferguson told the BBC. "A speed test is not your connection speed. 
Sometimes the speed is affected by distance limits, and sometimes it's physics." ## Peak speeds Ofcom is keen to change the existing rules because current protections apply mostly to broadband over copper-based phone lines, since buildings located far away from the cabinet on the street or the local exchange often receive lower speeds. However, customers on cable or fibre networks can also experience problems during peak times, when many people in the area are using the network at the same time. Often, said Mr Ferguson, the estimates that ISPs give of speed are for times when few other people are browsing. The rule changes will demand that ISPs guarantee better peak speeds for all and be more realistic about what service consumers will get. Andrew Glover, chairman of the Internet Services Providers' Association told the BBC: "Ispa supports the consultation and the direction of travel that is being proposed by Ofcom. "The speed of a connection is clearly important for consumers, and while our members are offering faster and faster speeds across the nation, a lot of factors can affect the speed that individual consumers can achieve. "Some of these factors are outside the control of the provider but we fully agree with Ofcom that speeds need to be communicated in a transparent manner."
true
true
true
New Ofcom rules require ISPs to be realistic about speeds before customers purchase contracts.
2024-10-12 00:00:00
2017-10-06 00:00:00
https://ichef.bbci.co.uk…8_speed-test.jpg
reportagenewsarticle
bbc.com
BBC News
null
null
9,998,714
http://recode.net/2015/08/03/why-emc-may-soon-buy-out-not-spin-out-vmware/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
22,889,317
https://www.zdnet.com/article/us-offers-5-million-reward-for-information-on-north-korean-hackers/
US offers $5 million reward for information on North Korean hackers
Catalin Cimpanu
# US offers $5 million reward for information on North Korean hackers The US government is willing to pay up to $5 million for information on North Korea's hackers and their ongoing hacking operations. The reward for reporting North Korean hackers was announced today in a joint report [PDF] published by the Departments of State, Treasury, Homeland Security, and the Federal Bureau of Investigation. The joint report contains a summary of North Korea's recent cyber operations and is based on a UN Security Council report published last year that details the country's tactic of using hackers to raise funds for the Pyongyang regime, as a novel way to bypass international sanctions. Observed tactics include: - Attacks and thefts from banks and other financial entities - Attacks and thefts from cryptocurrency exchanges - Cryptojacking operations -- where North Korean hackers compromise servers worldwide to mine cryptocurrency - Various types of extortion campaigns, such as: - Compromising an entity's network and threatening to shut it down unless the victim pays a ransom - Getting paid to hack websites on behalf of third-party clients, and then extorting the targets - Charging victims "long-term paid consulting arrangements" in order to prevent future attacks US officials say a lot of these attacks have targeted the financial sector, from where North Korean hackers have stolen funds in excess of $2 billion, which have been laundered back into the hermit kingdom. The US says these hacks are now posing "a significant threat to the integrity and stability of the international financial system." The report warns financial entities and companies in other industry verticals to bolster protections against North Korea's hacker groups and links to US government resources related to DPRK cyber threats so companies can look into and take steps to secure their IT infrastructure against known North Korean malware and hacking tactics. The US government also issued a stern warning to companies that may be engaging with North Korean entities and might be, directly or indirectly, helping North Korean hackers launder stolen funds. Consequences include sanctions and seizure of funds and assets, officials said. ### North Korea uses hackers to raise funds, bypass sanctions Today's joint guidance on North Korean cyber operations is just the latest in a long list of actions the US has taken to counter the Pyongyang regime's highly active hacker groups. In September 2019, the US Treasury imposed sanctions on three North Korean hacking groups -- Lazarus, Andariel, and Bluenoroff. In November 2019, the US Department of Justice charged a member of the Ethereum cryptocurrency project for giving a talk in North Korea about cryptocurrencies and how they could be used to avoid sanctions. In March 2020, the US Department of Justice charged two Chinese nationals for laundering funds from two Lazarus hacks. The US government's joint report comes a day after North Korea fired "a barrage of cruise missiles" that hit near South Korea's coastline ahead of the country's upcoming parliamentary election. The US government stated last year that North Korea has been using hackers to raise funds to finance its weapons and nuclear missile programs. Government officials are now hoping that the joint report will help companies improve their security posture and awareness, and put a dent in Pyongyang's hacking profits, and, indirectly, in the country's weapons program. 
"It is vital for the international community, network defenders, and the public to stay vigilant and to work together to mitigate the cyber threat posed by North Korea," US officials said. *Users who have and can share any helpful information can report it through the Department of State's Rewards for Justice portal.*
true
true
true
US says North Korean hackers pose a significant threat to the integrity and stability of the international financial system.
2024-10-12 00:00:00
2020-04-15 00:00:00
https://www.zdnet.com/a/…t=675&width=1200
article
zdnet.com
ZDNET
null
null
14,258,527
https://www.vox.com/2017/5/1/15482698/new-york-times-bret-stephens
The New York Times should not have hired climate change bullshitter Bret Stephens
David Roberts
Earlier this month, the New York Times hired conservative Bret Stephens, longtime writer for the Wall Street Journal, as a columnist for its opinion page. # The New York Times should not have hired climate change bullshitter Bret Stephens It’s time for the opinion page to take climate change as seriously as the paper’s reporters do. It really shouldn’t have. For one thing, though the paper defends the hire in the name of opinion diversity, Stephens is a very familiar sort of establishment conservative — a cosmopolitan, well-educated, reflexively pro-Israel war hawk (who once wrote a column on “the disease of the Arab mind”) who thinks anti-racists are the real racists but moderates on select issues to demonstrate his independence. It is difficult to imagine a perspective more *over*-represented in DC political circles, at least relative to its representation in the actual conservative movement. In terms of intellectual contribution, his main credential seems to be that he has opposed Donald Trump. It takes a particular sort of insularity to hire a pro-war, anti-Trump white guy as a contribution to diversity on the NYT editorial page. Worse, Stephens is the kind of conservative writer who has feasted on easy shots at liberals for so long that he has let himself get lazy. Read his interview with Vox’s Jeff Stein, who actually pushed him a little. He says things like this: I think Black Lives Matter has some really thuggish elements in it. Look — at the risk of being incredibly politically incorrect, but I guess that’s my job — I think that all lives matter. Not least black lives. This is perfect: The delivery of a faux-profound bit of conservative conventional wisdom in tones that suggest a) no one has ever thought of it before and b) willingness to say it demonstrates a kind of tough-minded courage. That kind of bubble blindness comes up frequently when we turn to the main reason NYT shouldn’t have hired Stephens: climate change. ## Stephens has long been a standard conservative hack on climate change Predictably, the debate about Stephens has focused on whether he is a “climate denier.” That label, which has taken on such weighty culture-war implications, is mostly symbolic and mostly a distraction. Despite what people like Stephens like to say, climate change is not a religious doctrine. Attitudes toward it need not be binary, belief or apostasy. Different people might draw different conclusions from the available information. But just saying that doesn’t get us very far. What matters is not whether Stephens deserves a particular label, but whether he is honest, and makes good arguments, about what is an extremely important subject. And when he discusses climate change, Stephens uses incorrect facts and terrible arguments. At a time when we desperately need a conversation about climate change more sophisticated than “is it a problem?” he makes the debate dumber. Since the outcry that met his hiring, Stephens has tried to soften his take on climate. He told Huffington Post that he is a “climate agnostic.” “Is the earth warming?” he asked. “That’s what the weight of scientific evidence indicates. Is it at least partially, and probably largely, a result of man-made carbon emissions? Again, that seems to be the case. Am I ‘anti-science’? Hell, no.” As Joe Romm of Climate Progress has demonstrated, this is utterly disingenuous. 
Stephens called climate change a “mass hysteria phenomenon” for which “much of the science has … been discredited.” He said that people who accept climate change science are motivated in part by the “totalitarian impulse” and they worship “a religion without God.” He said “global warming is dead, nailed into its coffin one devastating disclosure, defection and re-evaluation at a time.” In a column calling climate change one of liberalism’s “imaginary enemies,” he said this:

Here’s a climate prediction for the year 2115: Liberals will still be organizing campaigns against yet another mooted social or environmental crisis. Temperatures will be about the same.

As Romm notes, the idea that temperature will be the same in 100 years is utterly ludicrous, the scientific equivalent of claiming the earth is flat. It doesn’t sound like the words of a “climate agnostic.” It sounds like the words of a climate dope. Before he was hired at NYT, Stephens was a source of standard-issue right-wing hackery on climate change. If he has really changed his mind on whether climate change is a “mass hysteria phenomenon,” he ought to say why.

## Getting hired has not stopped Stephens from making lazy arguments

Editorial page editor James Bennet says charges of denialism against Stephens — holding him responsible for words he has written — are “terribly unfair.” So let’s give Stephens the benefit of the doubt and look only at arguments he’s made in interviews and his column since being hired. They are the kinds of arguments one finds convincing only insofar as one has never encountered a serious interlocutor.

**1) People have kids.** In his interview with Stein, Stephens notes that he knows a climate activist who has had kids. If the activist really believed climate change is a potentially catastrophic problem, “presumably he wouldn’t be having children.” QED, I guess? It’s going to blow Stephens’s mind when he finds out people have had kids during wars, famine, diseases, and droughts — catastrophic things they were right in the middle of. Did they not believe their eyes? I too have children. I too believe that the worst-case scenarios on climate change are genuinely catastrophic and that we are doing far too little to forestall them. Like many people know, I have grappled with what it means for my kids, sometimes painfully. I bet the activist Stephens knows has grappled with it. Stephens should have a conversation with him; it might wring a little bit of the glib out.

**2) We don’t know the future for certain.** Stephens concedes that global temperatures are rising, but says to Stein:

However, does that mean the trend will continue forever? We don’t know. Does this mean we will reach the upper bounds of what climate scientists fear? We aren’t sure. There are uncertainties in all of this. If I say, “Hey, there are uncertainties about forecasting the future,” that ought to be — in any other context — a statement of common sense. But now if you say there are uncertainties, you are akin to what’s called “a denier.”

Stephens devoted his debut column in the NYT entirely to the same banality, refuting those who claim “complete certainty” about climate change. Who claims this? Stephens does not cite anyone. The world contains zero (0) climate scientists who have ever claimed that long-term models of climate change are certain. It’s not even a coherent notion. All models do is take an enormous set of inputs, run calculations on them, and spit out a range of possible outcomes, with probabilities attached.
What would it even mean for them to be “certain”? All any science ever provides are probabilities. Climate science is an enormous territory and within it there are wider and narrower error bars, a range of different conclusions in which scientists have varying degrees of confidence. (See climate scientist Ken Caldeira on this.) What introduces the most uncertainty into climate modeling is not the physical measurements and models the conservative movement spends so much time attacking. It is the human element — the difficulty of projecting social, demographic, economic, and technological changes. After all, how much damage climate change will ultimately do depends closely on how such changes play out. What will, say, natural gas cost in 2050? Obviously we don’t know. The climate challenge just is a challenge of making consequential decisions in the face of deep uncertainty. Climate change is, in the words of EDF’s Gernot Wagner and Harvard’s Martin Weitzman, “almost uniquely global, uniquely long-term, uniquely irreversible and uniquely uncertain.” There’s a whole branch of scholarship and research on it, endless literature. I have written posts devoted to uncertainty in climate change here, here, here, here, here, and here. Uncertainty is, in a nutshell, justification for action, not complacency. When he reveals in his column that “much else that passes as accepted fact is really a matter of probabilities,” Stephens just knows that dropping such a truth bomb is going to get him in trouble. “By now I can almost hear the heads exploding,” he says. Yes, they are exploding. But not why he thinks.

**3) Other people have been wrong about other things.** In his first column for the Times, Stephens points out that the managers of Hillary Clinton’s campaign were very confident in her victory. Then they turned out to be wrong. This shows that sometimes people are confident about things and turn out to be wrong. To make this already devastating critique of climate science even stronger, Stephens specifies that *scientists* have often been wrong in the past, and that science has often been used for political purposes. This is a very common argument on the right, but again, it is banal. Of course it is true. Policy should not be made based on faddish or poorly supported science. But for that point to mean anything in this case, Stephens has to make the case that climate science is that kind of science. He has not. Nor, after decades of effort, have any of his conservative compatriots. And climate science isn’t. It is incredibly robust. It does not depend on any one or even any dozen studies. There is what’s called “consilience” — multiple varieties and lines of evidence coming from multiple disciplines, all telling a mutually reinforcing story. What’s more, no science of virtually any kind has ever been subjected to the intense scrutiny that climate science (and scientists) have undergone. The global scientific community’s collective climate research has undergone multiple overlapping layers of review and assessment, not to mention the constant need to defend against bad-faith attacks from political hacks. Climate science is not in the same epistemological universe as the knowledge and assumptions behind a single political campaign. It is far, far stronger than that, far stronger than the vast majority of theories about health and economics upon which we routinely make personal and policy decisions.
Taking it seriously does not require “overweening scientism,” just a willingness to hear what science is saying, even when it is uncomfortable for one’s political priors. Of course we are never certain about anything. Of course scientists have been wrong before. And of course climate science — especially when it tries to project damages at smaller temporal and geographic scales, like the next several decades — is filled with probabilities and uncertainties. But when it comes to the bigger picture, we are very, very confident — 90 to 95 percent confident, which is more confident than science gets about almost anything — that human beings are causing most or all of the rapid recent rise in temperature and that the impacts are going to cause great ecological and social disruption. Climate change is not a “mass hysteria phenomenon.”

**4) Nobody talks about how much to spend on climate change.** My favorite exchange in the Vox interview:

Bret Stephens: The best argument made on behalf of climate mitigation strategies is even if there’s a small chance your house catches fire, you take out insurance. That’s perfectly sensible. And you can make a perfectly sensible argument that even if we’re not 100 percent sure we’re facing a catastrophic climate future, we should take out a host of insurance policies to mitigate carbon emissions. But then the intelligent question is: “How much are you paying for insurance?”

Jeff Stein: Are you saying we’re currently paying too much?

Bret Stephens: I’m saying that’s a question we ought to be raising.

Kudos (genuinely) to Stephens for being familiar with the insurance analogy (which traces back to Weitzman’s work on “fat-tailed uncertainties”). When I interviewed libertarian Jerry Taylor about what changed his mind on climate change, the risk-hedging argument played a big role. But the question of how much we ought to spend as a hedge against uncertainty with enormous downside risks … has been raised. Quite a lot. Again, there’s a whole thriving area of scholarship devoted to it. There are all sorts of smart people who have thought it through from all sorts of angles and run all sorts of models. Many studies have found that a transition to sustainable energy will be a net economic positive. If Stephens wants studies that wildly exaggerate the costs of the transition, there are a number of think tanks in the conservative world that specialize in that product. But, yeah. The question has been raised.

**5) One time somebody said something stupid or rude about climate change.** Though Stephens cites no one in his critique of people who accept climate science, there are certainly plenty of people out there — climate activists sometimes among them — who say dumb things about climate change. If you want to find someone who overstates the certainty of climate science, or is rude to climate skeptics, or violates some other bit of decorum, you usually can. You can make the story about how some activists are too strident, or uncivil. You can make yourself the victim. Making yourself the victim of liberal bullying is a surefire way to get yourself some sympathy and support in US political and media circles. But what is the point? How does it advance the conversation? Why not engage with the best, most thoughtful voices in the climate world? Why not do some research *before* you write so you don’t get yelled at at all?

**6) Just asking questions. Why so rude?** Stephens is playing a bit part in a very, very old strategy.
It goes like this: - Q: “We’re just asking questions.” - A: [questions answered] - Q: “We’re just asking questions.” - A: “Yeah, we answered those. Here’s a link.” - Q: “We’re just asking questions.” - A: “We answered the questions. A bunch of times. Please acknowledge our answers.” - Q: “We’re just asking questions.” - A: “Okay, we went back over our answers, double-checked and peer-reviewed them, compiled them in a series of reports with easy-to-read summaries, all of which we have broken down into digestible bits via various blog posts and visual aids.” - Q: “We’re just asking questions.” - A: “It’s beginning to seem like you don’t really care about this issue and are just jerking us around.” - Q: “Hey, we’re just asking questions! Galileo asked questions, didn’t he? Why are you being so intolerant and rude?” Everyone who has written about climate change on the internet has gone through this ringer dozens of times. Yet relatively few people in US politics or media follow climate closely, or know much about it, so when they do tune in to these controversies, all they see is, heck, some fellas askin’ questions. That a crime? It is a tiresome game. It’s difficult to see how NYT readers will benefit from it. ## The NYT is a referee and it has made a call: bullshitting on climate is not disqualifying In all these examples, a similar theme emerges: Stephens just doesn’t seem to have thought much about climate change. He’s enacting the rote conservative ritual of groping around for some reason, any reason, to a) justify inaction and b) blame liberals, in the process saying false things and making terrible arguments. Editorial page editor James Bennet said this to public editor Liz Spayd: The crux of the question is whether [Stephens’] work belongs inside our boundaries for intelligent debate, and I have no doubt that it does. I have no doubt he crosses our bar for intellectual honesty and fairness. Let’s ponder this a moment. The question is not whether Stephens has said false and misleading things about climate change in the past. If you believe the work of NYT reporters, then yes, he has. His latest column indicates that his rethinking on the subject remains inch-deep. The question is whether it *matters* — whether dismissing climate change as a “mass hysteria phenomenon” is, or ought to be, disqualifying, below any reasonable “bar for intellectual honesty and fairness.” The line separating what’s inside and outside the bounds of reasonable debate is not fixed. We draw it together, through our decisions and actions. We push and pull on it all the time. When a trusted institution deems a particular perspective within the bounds of reasonable debate, it carries a certain imprimatur, a signal to elites and readers alike. The same is true when those institutions exclude certain perspectives. Institutions are, whether they like to acknowledge it or not, referees in this game. They make calls about what’s in and out of bounds. Bennet does not endorse (or even address) anything Stephens says on climate, only waves his hands, as he did to Washington Post’s Erik Wemple, that Stephens is “capturing and contributing to a vitally important debate.” Through hiring and defending Stephens, he is signaling that bullshitting about climate change is not disqualifying from a position at the NYT. It is within acceptable mainstream bullshitting limits. 
Even if you dismiss climate change as a totalitarian delusion for years, as long as you’re willing to publicly acknowledge the most rudimentary science, the rest is fair game. Make no mistake: This isn’t new. Bullshitting about climate change has never carried much censure in US media. The Washington Post ran some George Will bullshit on climate just a couple weeks ago. This has long been the norm. Bennet just reaffirmed it. Still, he shouldn’t have.

## About canceling subscriptions

A bunch of people (at least on Twitter) have been canceling or threatening to cancel their NYT subscriptions over this. I understand that decision and respect anybody who makes it. The only way climate bullshitting will ever carry any censure is when people kick up a fuss about it. For my part, though, I’m not going to follow suit. For one thing, my colleague Brad Plumer is headed to the NYT to help with their climate coverage and he alone is worth the price of a subscription. Beyond that, the dedicated climate desk that NYT has built is already doing fantastic work, which builds on a long history of great climate change reporting at the paper. And beyond that, for reasons I wrote about in my piece on tribal epistemology, a strong, independent media matters more now than ever. For all their sins, America’s big newspapers have done some great things these past 100 days. It would be a shame to punish the news side for the missteps of the opinion page. In fact, now that informed opinion has become such a robust part of the NYT and WaPo (see the Monkey Cage, the Upshot, Wonkblog, and many others), it’s unclear why the papers still reserve a section where, to “widen the range of perspectives,” they include the uninformed kind — or why anyone would want to read it. In any case, climate change is a big deal. Getting it right, advancing the conversation past basics the rest of the world left behind decades ago, is important, more so than most other things newspapers cover. The NYT news desk is taking it seriously. The opinion page should too.
true
true
true
It’s time for the opinion page to take climate change as seriously as the paper’s reporters do.
2024-10-12 00:00:00
2017-05-01 00:00:00
https://platform.vox.com…,64.971277281169
article
vox.com
Vox
null
null
34,846,312
https://www.statista.com/chart/12737/thousands-of-american-bridges-are-falling-apart/
Infographic: Where America's Bridges Are Crumbling
Niall McCarthy
Back in August 2007, the dangerous and decaying state of America's infrastructure became a shocking reality when the I-35W Mississippi River Bridge in Minneapolis collapsed, sending vehicles crashing into the river below. 13 people were killed while another 145 were injured. In August 2018, a bridge collapse in Genoa, Italy, shocked the world with 43 people losing their lives. After that catastrophe, many countries, including the United States, started to seriously look at the state of their deteriorating infrastructure. Last week, President Biden unveiled his plans for a $2 trillion investment in American infrastructure, describing it as "a once-in-a-generation effort". It would involve replacing lead piping, rebuilding 20,000 miles of roads and repairing the country's 10 most economically important bridges. Biden described the program as "unlike anything we have seen or done since we built the interstate highway system and the space race decades ago". He added that it would be "the largest American jobs investment since World War II". After the announcement, a report from the American Road & Transportation Builders Association (ARTBA) illustrated the scale of the challenge in overhauling and repairing U.S. infrastructure by finding that more than 220,000 American bridges need repair work. 45,000 of them were deemed structurally deficient and Americans cross them 171.5 million times daily. At the current rate, it would take more than 40 years to fix all of them and cost an estimated $41.8 billion. The good news is that the number of structurally deficient bridges has declined for the past five years but that trend has been tempered by more bridges being downgraded from good to fair condition. Out of all U.S. states, Iowa has the most structurally deficient bridges, 4,571 or 19.1 percent of its total bridges. Pennsylvania comes second on the list with 3,353 of its bridges falling into the same category, along with 2,374 in Illinois. West Virginia has the highest share of bridges classified as structurally deficient at 21 percent while Nevada has the lowest at just 1.4 percent.
true
true
true
This chart shows U.S. states by number of structurally deficient bridges in 2020.
2024-10-12 00:00:00
2018-01-31 00:00:00
https://cdn.statcdn.com/…ormal/12737.jpeg
article
statista.com
Statista Daily Data
null
null
25,205,873
https://www.npr.org/2020/11/24/938572857/helicopter-pilots-spot-strange-sci-fi-looking-object-in-utahs-red-rock-country
Large Metal Monolith Mysteriously Appears In Remote Region Of Utah's Red Rock Country
Reese Oxner
# Large Metal Monolith Mysteriously Appears In Remote Region Of Utah's Red Rock Country

State officials were flying over southeastern Utah looking for sheep as part of a routine task. Instead they found something straight out of a sci-fi movie. From a helicopter, officers from the Utah Department of Public Safety spotted a large metal monolith — a single block of metal — last week. It was sitting in Utah's Red Rock Country in the southeast. Officials have no idea how or when it got there — or who might have placed it. "That's been about the strangest thing that I've come across out there in all my years of flying," helicopter pilot Bret Hutchings told KSL TV. Hutchings said the structure appeared to be 10 to 12 feet tall and looked like it was planted there — not dropped from the air. In any case, officials said it isn't legal. "It is illegal to install structures or art without authorization on federally managed public lands," said the Utah Department of Public Safety in a statement, "no matter what planet you're from." It also referenced the structure's out-of-this-world appearance on social media. "We came across this, in the middle of nowhere, buried deep in the rock. Inquiring minds want to know, what the heck is it? Anyone?" wrote the department in an Instagram post. As for would-be visitors, officials decided not to disclose the exact location of the monolith. It's in a remote area — and if people attempt to visit it, "there is a significant possibility they may become stranded and require rescue," the department said in its statement. Utah's Bureau of Land Management is assessing whether further investigation is necessary. Hutchings has his own theory. In the classic sci-fi film *2001: A Space Odyssey*, a group of prehistoric ape-men was baffled by a large black monolith that appeared in an African desert. "I'm assuming it's some new-wave artist or something," Hutchings said, according to KSL TV. "Somebody that was a big fan [of the film]."
true
true
true
Authorities do not know where the object came from or the intent behind it.
2024-10-12 00:00:00
2020-11-24 00:00:00
https://media.npr.org/as…400&c=100&f=jpeg
article
npr.org
NPR
null
null
16,875,200
https://venturebeat.com/2018/04/19/netflix-brings-30-second-video-previews-to-mobile/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null