id | url | title | author | markdown | downloaded | meta_extracted | parsed | description | filedate | date | image | pagetype | hostname | sitename | tags | categories |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6,276,051 |
http://www.youtube.com/watch?v=4ErFYulSyyM
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,857,268 |
https://www.nextplatform.com/2020/04/07/changing-conditions-for-neural-network-processing/
|
Changing Conditions for Neural Network Processing
|
Nicole Hemsoth Prickett
|
Over the last few years the idea of “conditional computation” has been key to making neural network processing more efficient, even though much of the hardware ecosystem has focused on general purpose approaches that rely on matrix math operations to brute-force the problem instead of selectively operating on only the required pieces. In turn, given that this is the processor world we live in, innovation on the frameworks side kept time with what devices were readily available.
Conditional computation, which was proposed (among other places) in 2016, was at first relegated to model development conversations. Now, however, there is a processor corollary that blends the best of those concepts with a novel packet processing method to displace traditional matrix math units. The team behind this effort blends engineering expertise from previous experiences building high-end CPU cores at ARM, graphics processors at AMD and Nvidia, and software and compiler know-how from Intel/Altera, and might have something capable of snatching us out of those “yet another AI chip startup” doldrums.
At this point in the chip startup space it takes actual differentiation to make waves in the neural network processor pond. What’s interesting about this effort, led by the startup in question, Tenstorrent, is that what they’ve developed is not only unique, it also answers the calls from some of the most recognizable names in model development (LeCun, Bengio, and Hinton, among others) for a reprieve from the matrix-math-unit-stuffing approach that is not well-suited to neural network model sizes that are growing exponentially. The calls from this community have been for a number of things, including the all-important notion of conditional computation, freedom from matrix operations and batching, the ability to deal with sparsity, and of course, scalability.
In short, relying on matrix operations is *so* 2019. Doing fine-grained conditional computation instead of taking big matrices and running math on them does run into some walls on current hardware. The real value is in being able to cut into the matrix and put an “if” statement in, removing the direct tie between the amount of required computation and the model size. Trying to do all things well (or just select things like inference or training) on a general purpose device (and even, in some cases, on specialty ones focused on just inference for specific workloads) leaves much on the table, especially with what Tenstorrent CEO Ljubisa Bajic calls the “brute force” approach.
But how to get around brute force when it’s…well, the most forceful? The answer is surprisingly simple, at least from the outside, and it’s a wonder this hasn’t been done before, especially given all the talk about conditional computation for (increasingly vast) neural networks and what they do, and don’t, need. For nearly every task in deep learning, some aspects are simple, others complex, but current hardware is built without recognition of that diversity. Carving out the easy stuff is not difficult; it’s just not simple with current architectures based on fixed-size inputs with heavy padding.
Tenstorrent has taken an approach that dynamically eliminates unnecessary computation, thus breaking the direct link between model size growth and compute/memory bandwidth requirements. Conditional computation lets both inference and training adapt to the exact input that was presented, like adjusting NLP model computations to the exact length of the text, and dynamically pruning portions of the model based on input characteristics.
Tenstorrent’s idea is that for easier examples, depending on the input, not all of the neurons need to run; rather, one should be able to feed in an image and run a different type of neural network that’s determined at runtime. Just doing this can double speed in theory without hurting the quality of results much, but even better, if you’re redesigning the model entirely for conditional computation it can go far beyond that. It’s there where the solution gets quite a bit more complicated.
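To make the concept concrete, here is a minimal sketch of input-dependent (early-exit) computation: a cheap path runs first and a confidence check decides at runtime whether the expensive path is needed at all. This is only an illustration of the general idea, not Tenstorrent's implementation, and the weights, sizes, and threshold are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights: a cheap "easy path" classifier and a larger "hard path" network.
W_small = rng.standard_normal((784, 10)) * 0.01
W_big1 = rng.standard_normal((784, 256)) * 0.01
W_big2 = rng.standard_normal((256, 10)) * 0.01

def top_confidence(logits):
    """Softmax probability of the top class, used as a crude gating signal."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return p.max()

def conditional_forward(x, threshold=0.9):
    """Run the cheap path first; only fall through to the big network if unsure."""
    logits = x @ W_small
    if top_confidence(logits) >= threshold:
        return logits                       # easy example: skip the expensive layers
    h = np.maximum(x @ W_big1, 0.0)         # hard example: run the full network
    return h @ W_big2

x = rng.standard_normal(784)
print(conditional_forward(x).shape)         # (10,)
```

The amount of work done now depends on the input rather than only on the model, which is exactly the link the article says current matrix-math hardware cannot break.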
Each of the company’s “Grayskull” processors is essentially a packet processor that takes a neural network and factors the groups of numbers that make it up into “packets” from the beginning.
Imagine a neural network layer with two matrices that need to be multiplied. With this approach, from the very beginning these are broken into “Ethernet-sized chunks” as Bajic describes. Everything is done on the basis of packets from that point forward, with the packets being scheduled onto a grid of these processors connected via a custom network, partially a network on chip, partially via an off-chip interconnect. From the beginning point to the end result everything happens on packets without any of the memory or interconnect back and forth. The network is, in this case, the computer.
In Tenstorrent’s diagram, the blue square at the top is basically a tensor of numbers, like the input to a neural network layer. That gets broken into sub-tensors (those “Ethernet-sized chunks” Bajic referred to), which are then framed into a collection of packets. Their compiler then schedules movement of those packets between cores, on one or multiple chips, and DRAM. The compiler can target one chip or several when the configuration is fed into it, to create a right-sized, scalable machine.
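The packetization step itself is easy to picture with a short sketch. The packet size and header fields below are invented for illustration; they are not Tenstorrent's actual wire format.

```python
import numpy as np

PACKET_ELEMS = 512   # hypothetical "Ethernet-sized" payload, in fp16 elements

def packetize(tensor, tensor_id=0):
    """Split a tensor into fixed-size packets, each carrying enough header
    information for a receiving core to reassemble the original tensor."""
    flat = tensor.astype(np.float16).ravel()
    packets = []
    for seq, start in enumerate(range(0, flat.size, PACKET_ELEMS)):
        header = {"tensor_id": tensor_id, "seq": seq, "shape": tensor.shape}
        packets.append((header, flat[start:start + PACKET_ELEMS]))
    return packets

def reassemble(packets):
    shape = packets[0][0]["shape"]
    ordered = sorted(packets, key=lambda p: p[0]["seq"])
    flat = np.concatenate([payload for _, payload in ordered])
    return flat[: int(np.prod(shape))].reshape(shape)

A = np.random.rand(64, 64)
pkts = packetize(A)
assert np.allclose(reassemble(pkts), A, atol=1e-2)   # fp16 rounding tolerance
print(f"{len(pkts)} packets of up to {PACKET_ELEMS} elements each")
```

From that point on, scheduling is a matter of routing packets between cores and DRAM rather than issuing bulk matrix operations.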
All of this enables changing the way inference runs at runtime based on what’s been entered, a marked difference from the normal compile-first approaches. In that world, we have to live with whatever the compiler produces: it can accept anything (including inputs of varying size, an increasingly important concern), but only at the cost of a lot of wasted cycles spent on padding. Bajic uses a BERT example and says that input can vary from 3-33 words. Currently, everyone has to run at a superset length, and the brute force of making one size fit all leaves quite a bit of performance and efficiency on the table.
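A quick back-of-the-envelope example of the padding cost Bajic describes, using the article's 3-33 word range. The batch contents and the assumption that compute scales linearly with padded length are deliberately crude simplifications.

```python
# Assume (crudely) that per-example compute scales with the padded sequence length.
MAX_LEN = 33                       # the superset length everyone pads to
inputs = [3, 8, 12, 20, 33, 5]     # actual word counts of a hypothetical batch

padded_cost = len(inputs) * MAX_LEN    # one-size-fits-all execution
exact_cost = sum(inputs)               # right-sized, input-dependent execution
print(f"padded tokens:  {padded_cost}")                        # 198
print(f"useful tokens:  {exact_cost}")                         # 81
print(f"wasted compute: {1 - exact_cost / padded_cost:.0%}")   # ~59%
```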
The Tenstorrent architecture is based on “Tensix” cores, each comprising a high utilization packet processor and a powerful, programmable SIMD and dense math computational block, along with five efficient and flexible single-issue RISC cores. The array of Tensix cores is stitched together with a custom, double 2D torus network on a chip (NoC), at the core of the company’s emphasis on minimal software burden for scheduling coarse-grain data transfers. Bajic says flexible parallelization and complete programmability enable runtime adaptation and workload balancing for the touted power, runtime, and cost benefits.
“Grayskull” has 120 Tensix cores with 120MB of local SRAM, plus 8 channels of LPDDR4 supporting up to 16GB of external DRAM and 16 lanes of PCI-E Gen 4. At the chip thermal design power set-point required for a 75W bus-powered PCI-E card, Grayskull achieves 368 TOPS. The training device is 300W. As clarification, each chip has 120 cores, each with a single MB of SRAM. Some workloads can fit comfortably in that; others can spill out to the 16GB of external DRAM.
“In addition to doing machine learning efficiently, we want to build a system that can break the links between model size growth and memory and compute limits, which requires something that is full stack, that’s not just about the model but the compiler for that model. With current hardware, even if you manage to reduce the amount of computation you still need huge machines, so what is needed is an architecture that can enable more scaled-out machines than what we have today. Anyone serious about this needs to scale like a TPU pod or even better. The quick changes in models people are running now versus a few years ago demonstrate that zooming in on a particular workload is a risky strategy, just ask anyone who built a ResNet-oriented machine,” Bajic says.
“The past several years in parallel computer architecture were all about increasing TOPS, TOPS per watt and TOPS per cost, and the ability to utilize provisioned TOPS well. As machine learning model complexity continues to explode, and the ability to improve TOPS oriented metrics rapidly diminishes, the future of the growing computational demand naturally leads to stepping away from brute force computation and enabling solutions with more scale than ever before.”
There is still plenty to watch in the AI chip startup world, but differences are often minute and nuanced. This is indeed also nuanced but the concept is valid and has been demanded for years from hardware makers. What is also notable here, in addition to the team (Bajic himself worked on early Hypertransport, early Tegra efforts for autonomous vehicles on the memory subsystems side at Nvidia, and more, for instance) is that they have brought two chips to bear for $34 million in current funding (Eclipse and Real Ventures with private backing from what Bajic says is a famous name in the industry and in hardware startups) with “quite a bit” left over to keep rolling toward their first production chips in fall 2020.
Definitely one to watch. Bajic says he knows that the hyperscale companies are ready to try anything that might be promising, and knows that the AI chip startup space is crowded, but thinks this approach is different enough in terms of what lies ahead for model size growth and complexity, and what will be in demand when general purpose or matrix-based processors aren’t up to the task.
| true | true | true |
Over the last few years the idea of “conditional computation” has been key to making neural network processing more efficient, even though much of the
|
2024-10-12 00:00:00
|
2020-04-07 00:00:00
|
article
|
nextplatform.com
|
The Next Platform
| null | null |
|
34,303,386 |
https://www.setphaserstostun.org/posts/monitoring-ecc-memory-on-linux-with-rasdaemon/
|
Monitoring ECC memory on Linux with rasdaemon
|
Gabriele Svelto
|
# Monitoring ECC memory on Linux with rasdaemon
If you have a workstation built around an AMD Ryzen/Threadripper or Intel Xeon processor chances are you're using ECC memory. ECC memory is a worthy investment to improve the reliability of your machine and if properly monitored will allow you to spot memory problems before they become catastrophic.
On recent Linux kernels the rasdaemon tools can be used to monitor ECC memory and report both correctable and uncorrectable memory errors. As we'll see with a little bit of tweaking it's also possible to know exactly which DIMM is experiencing the errors.
## Installing rasdaemon
First of all you'll need to install **rasdaemon**; it's packaged for most Linux
distributions:

- **Debian/Ubuntu**: `# apt-get install rasdaemon`
- **Fedora**: `# dnf install rasdaemon`
- **openSUSE**: `# zypper install rasdaemon`
- **Gentoo**: The package is currently marked as unstable so you'll need to unmask it first:

        # echo "app-admin/rasdaemon ~amd64" >> /etc/portage/package.keywords

    Then I recommend enabling sqlite support; this makes rasdaemon record events to disk and is particularly useful for machines that get rebooted often:

        # echo "app-admin/rasdaemon sqlite" >> /etc/portage/packages.use

    Finally, install rasdaemon itself:

        emerge rasdaemon
## Configuring rasdaemon
Then we'll set up **rasdaemon** to launch at startup and to record events to
an on-disk sqlite database.
Note that when booting with Secure Boot enabled, using the kernel lockdown
facility in **confidentiality** mode will prevent rasdaemon from running. To
use **rasdaemon** you'll have to use a different lockdown mode, disable
lockdown entirely or disable Secure Boot. You'll find more information in the
Troubleshooting section.
- **Debian/Ubuntu/Fedora/openSUSE and other systemd-based distros**:

        # systemctl enable rasdaemon
        # systemctl start rasdaemon

- **Gentoo with OpenRC**: Add the following line to `/etc/conf.d/rasdaemon`:

        RASDAEMON_ARGS=--record

    Then add `rasdaemon` to the **default** run-level and start it:

        # rc-config add rasdaemon default
        # rc-config start rasdaemon
## Configuring DIMM labels
At this point **rasdaemon** should already be running on your system. You can
now use the **ras-mc-ctl** tool to query the errors that have been detected.
From now on I will use data from my machine to give an example of the output.
    # ras-mc-ctl --error-count
    Label                 CE    UE
    mc#0csrow#2channel#0  0     0
    mc#0csrow#2channel#1  0     0
    mc#0csrow#3channel#1  0     0
    mc#0csrow#3channel#0  0     0
The CE column represents the number of corrected errors for a given DIMM; UE
represents uncorrectable errors that were detected. The label on the left
shows the EDAC path under `/sys/devices/system/edac/mc/`
of every DIMM.
This is not very readable. Since the kernel has no idea of the physical layout of your motherboard it will print the EDAC paths instead of the names of the DIMM slots. We can confirm that the labels are missing with this command:
    # ras-mc-ctl --print-labels
    ras-mc-ctl: Error: No dimm labels for ASUSTeK COMPUTER INC. model PRIME B450-PLUS
To identify which DIMM slot corresponds to which EDAC path you will have to
reboot your system with only one DIMM inserted, write down the name of the
slot you inserted it in, and then print out the paths with
`ras-mc-ctl --error-count`.
In my case this was the mapping:
    mc#0csrow#0channel#0  DIMM_A1
    mc#0csrow#0channel#1  DIMM_A2
    mc#0csrow#1channel#1  DIMM_A2
    mc#0csrow#1channel#0  DIMM_A1
    mc#0csrow#2channel#0  DIMM_B1
    mc#0csrow#2channel#1  DIMM_B2
    mc#0csrow#3channel#1  DIMM_B2
    mc#0csrow#3channel#0  DIMM_B1
Note that there's more than one path per DIMM label, that's fine.
With this data at hand create a text file under `/etc/ras/dimm_labels.d/`.
You will need to fill it up with the mapping data in the following format:

    Vendor: <motherboard vendor name>
    Model: <motherboard model name>
      <label>: <mc>.<row>.<channel>
You can obtain the motherboard vendor and model name with the following command:
    # sudo ras-mc-ctl --mainboard
    ras-mc-ctl: mainboard: ASUSTeK COMPUTER INC. model PRIME B450-PLUS
The label lines take a string (the name of the physical DIMM slot), then the
numbers in the EDAC path corresponding to the physical slot. You can put
more than one label entry per line by separating them with a semicolon. If a
given label is associated with more than one EDAC path you can add the separate
`<mc>.<row>.<channel>`
sequences by separating them with a comma.
In my case the resulting file (`/etc/ras/dimm_labels.d/asus`) looks like this:

    Vendor: ASUSTeK COMPUTER INC.
    Model: PRIME B450-PLUS
      DIMM_A1: 0.0.0, 0.1.0;
      DIMM_A2: 0.0.1, 0.1.1;
      DIMM_B1: 0.2.0, 0.3.0;
      DIMM_B2: 0.2.1, 0.3.1;
You can find another example of this, with configuration entries for a bunch of other motherboards, in the edac-utils repo.
Once the file is ready it's time to load the labels in the kernel with the following command:
# ras-mc-ctl --register-labels
Printing out labels and error counts will now use the physical DIMM slot names. This is much better if you need to figure out which of your DIMMs is faulty and needs to be replaced:
    # ras-mc-ctl --print-labels
    LOCATION                CONFIGURED LABEL  SYSFS CONTENTS
    DIMM_A1                 0:0:0             missing
    DIMM_A2                 0:0:1             missing
    DIMM_A1                 0:1:0             missing
    DIMM_A2                 0:1:1             missing
    mc0 csrow 2 channel 0   DIMM_B1           DIMM_B1
    mc0 csrow 2 channel 1   DIMM_B2           DIMM_B2
    mc0 csrow 3 channel 0   DIMM_B1           DIMM_B1
    mc0 csrow 3 channel 1   DIMM_B2           DIMM_B2

    # ras-mc-ctl --error-count
    Label     CE    UE
    DIMM_B2   0     0
    DIMM_B1   0     0
    DIMM_B1   0     0
    DIMM_B2   0     0
To persist the DIMM names across reboots load the `ras-mc-ctl` service at
startup:
- **Debian/Ubuntu/Fedora and other systemd-based distros**:

        # systemctl enable ras-mc-ctl
        # systemctl start ras-mc-ctl

- **Gentoo with OpenRC**:

        # rc-config add ras-mc-ctl default
        # rc-config start ras-mc-ctl
You're done! After rebooting your system rasdaemon will be continually running
and recording errors. You can use `ras-mc-ctl`
to print out a summary of all
the errors that have been seen and recorded. Since the counts are stored on
disk they will be persisted across reboots. Here's some example output from my
machine:
    # ras-mc-ctl --summary
    Memory controller events summary:
        Corrected on DIMM Label(s): 'DIMM_B1' location: 0:2:0:-1 errors: 5

    PCIe AER events summary:
        1 Uncorrected (Non-Fatal) errors: BIT21

    No Extlog errors.
    No devlink errors.
    Disk errors summary:
        0:0 has 6646 errors
    No MCE errors.
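If you'd rather scrape the raw counters yourself, for example from a monitoring script, the EDAC sysfs tree under `/sys/devices/system/edac/mc/` mentioned earlier can be read directly. The sketch below assumes the legacy csrow-based layout shown in the labels above; newer kernels may expose a dimm*-based layout instead, so treat the exact paths as an assumption to verify on your machine.

```python
#!/usr/bin/env python3
"""Print corrected (CE) and uncorrectable (UE) error counts from EDAC sysfs."""
from pathlib import Path

EDAC_MC = Path("/sys/devices/system/edac/mc")

for mc in sorted(EDAC_MC.glob("mc*")):
    for csrow in sorted(mc.glob("csrow*")):
        ce = (csrow / "ce_count").read_text().strip()
        ue = (csrow / "ue_count").read_text().strip()
        print(f"{mc.name}/{csrow.name}: CE={ce} UE={ue}")
```

This reports the same counters rasdaemon aggregates, just without the on-disk history or the DIMM label mapping.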
## Troubleshooting
- `ras-mc-ctl --status` prints out `ras-mc-ctl: drivers are not loaded`

    For **rasdaemon** to work the EDAC kernel drivers for your particular machine need to be loaded. They are usually loaded automatically at boot. You can check out which ones are loaded with this command:

        # lsmod | grep edac
        amd64_edac_mod    32768  0
        edac_mce_amd      28672  1 amd64_edac_mod

    If the EDAC drivers haven't been loaded automatically, either your kernel doesn't provide one for your machine or you need to load it manually. Check the EDAC kernel documentation for more details.
- `rasdaemon` fails to start, complaining it can't access the debugfs filesystem

    You're likely using the kernel lockdown module in **confidentiality** mode. When Secure Boot is enabled this will prevent **rasdaemon** from reading the files it needs to gather its statistics. **rasdaemon** can work with kernel lockdown when using the **integrity** mode. To switch to **integrity** mode add the `lockdown=integrity` option to the Linux kernel command line in your boot loader.

    When using **GRUB** this can usually be achieved by editing `/etc/default/grub` and changing the `GRUB_CMDLINE_LINUX_DEFAULT` variable to include the option, e.g.:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash lockdown=integrity"

    Keep in mind that **integrity** mode is less strict than **confidentiality** mode, as it permits userspace applications to access a fair amount of information that lives in the kernel. This might not be suitable for some deployments - such as those that must run untrusted userspace code.
| true | true | true |
Monitoring ECC memory errors on Linux with rasdaemon
|
2024-10-12 00:00:00
|
2020-02-13 00:00:00
| null |
article
|
setphaserstostun.org
|
Just another blog
| null | null |
17,083,799 |
https://medium.com/@AugustinLF/on-learning-rust-69ba956a63e3
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,061,690 |
http://www.flowreader.com
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,313,328 |
http://allthingsd.com/20130901/exclusive-japans-rakuten-acquires-viki-video-site-for-200-million/
|
Exclusive: Japan’s Rakuten Acquires Viki Video Site for $200 Million
|
Kara Swisher
|
# Exclusive: Japan’s Rakuten Acquires Viki Video Site for $200 Million
Japan’s Internet e-commerce giant Rakuten is set to purchase Viki, a premium video site that has been described as “Hulu for the rest of the world.”
The deal was set to be announced next week, but after I queried the company about it, its CEO, Hiroshi Mikitani (whose nickname is Mickey), confirmed the transaction, although he declined to give the price.
That would be $200 million, according to sources with knowledge of the situation.
“Our foundation is not only limited to e-commerce, but an intention to strengthen our ecosystem in Japan and worldwide,” he said in an interview. “We have been looking into finding a global solution for video.”
It is, in my humble estimation, an exciting and innovative purchase by Rakuten, which has been aggressively expanding globally over the last two years to compete with Amazon and others. It is currently in 13 markets worldwide.
That includes a big investment in social bookmarking phenom Pinterest, as well as the $315 million purchase of the Kobo e-book business and Wuaki, a Spanish video-on-demand service.
Viki — a combo of “video” and “wiki” — is a fascinating addition to these, a startup that began in Singapore, offering premium television, movies and music videos that are made globally accessible via crowdsourced subtitling, on PCs and mobile devices.
“Community members subtitle their favorite videos into their preferred languages under a Creative Commons license using Viki’s subtitling technology, which enables thousands of fans around the world to collaborate in dozens of languages at once,” said Viki on its website.
Viki’s videos have been translated into close to 170 languages, or about 500 million words. It is growing fast in Southeast Asia, but is currently aiming at expanding into Latin America and Europe.
For example, I am currently enjoying “Amaya” from the Philippines, which has this can’t-miss come-on: “Can a young princess raised in secrecy rise to fulfill her prophecy as the most powerful woman in Philippine history?”
(*Well, can she?!?*)
Viki makes its money much like Hulu does, with in-stream advertising revenue that it shares with content producers. There is a subscription and data play there, too, but a non-paid content model powers its business for now.
Investors have poured only about $25 million into Viki; they include Andreessen Horowitz, SK Telecom, Greylock Partners, Omidyar Network, Charles River Ventures and Neoteny Labs. Neoteny’s Joi Ito, who also heads MIT’s Media Lab, is on its board.
The most recent additions to that Silicon Valley power group are Microsoft exec and Sling Media founder Blake Krikorian and Survey Monkey CEO Dave Goldberg — both of whom have much digital entertainment experience — who put an undisclosed amount into Viki in late July.
The company had been in the process of raising another round, before the intense interest from Rakuten’s Mikitani. While other companies — such as Yahoo — looked at a Viki acquisition, one source noted they were too slow.
Viki has offices in San Francisco, Singapore and Seoul, and describes itself as “born as a joint class project between Harvard and Stanford graduate students who wanted to remove barriers to popular entertainment, regardless of language or country of origin. Today, our team is as diverse as Viki itself — entrepreneurs, technologists and pop-culture fans from Canada, China, Egypt, England, Hungary, India, Indonesia, Japan, Korea, Mongolia, Philippines, Singapore, Spain, Taiwan, Thailand, Ukraine, Venezuela, Vietnam and the United States.”
“We look at this as a hybrid of social network and a digital content distribution network that is unique and different from others … in a country-by-country approach,” said Mikitani. “A multi-language video offering is one of smartest ways for us to expand.”
And he said, after he saw it: “I basically fell in love with the service.”
Here’s a very clever video from Viki, which explains a whole lot, and in a lot of languages:
| true | true | true |
Crowdsourced premium video site gets bought for global expansion by the Amazon of Japan.
|
2024-10-12 00:00:00
|
2013-09-01 00:00:00
|
/wp-content/themes/atd-2.0/images/staff/kara-swisher-170x170.jpg
|
article
|
allthingsd.com
|
AllThingsD
| null | null |
1,779,649 |
http://www.wired.com/beyond_the_beyond/2010/10/augmented-reality-the-incredible-world-of-diminished-reality/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
19,314,322 |
https://www.washingtonpost.com/health/2019/03/05/fda-commissioner-gottlieb-who-raised-alarms-about-teen-vaping-resigns/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
11,907,977 |
http://semiaccurate.com/2016/06/13/intels-broadwell-e-not-released/
|
Intel’s Broadwell-E should not have been released
|
Charlie Demerjian
|
At Computex Intel launched Broadwell-E to no fanfare for a good reason. This is a chip that has SemiAccurate reaching for reasons to justify its existence but Intel had to stretch far further.
As you might have figured out from the name, Broadwell-E is the consumer version of the excellent Broadwell-EP/EX chips. The silicon changes are zero, it is the same part with some functionality fused off but as you will see, what makes a world-class server part does not translate into a passable consumer chip. What little enthusiasm we had for the part was shattered by Intel’s messaging around the part, they flat-out should not have launched this part.
What is Broadwell-E? It is the smallest of the three Broadwell-EP dies, called LCC; MCC and HCC are the medium and large variants. It measures 16.2×15.2mm (246.24mm²) with a total of 3.2B transistors. Please note that Intel would not disclose these numbers, or that Broadwell-E was the LCC die, saying, “The die size and transistor density are proprietary to Intel and we are no longer disclosing that detail.” Intel management seems determined to make the company irrelevant to consumers and this new attitude is a wonderful start. In any case this information was already public and released by Intel, which makes their stance all the more curious.
**If you look closely, you will see changes**
As you can see, there are now four BDW-E SKUs, up from three in the previous few generations. The 6xxx naming is an unfortunate carry-over from their past marketing games where they would increase the model number of the -E parts to differentiate them from their consumer die based brethren. These games have now caught up to Intel, and Broadwell based chips are in the same ‘family’ as Skylake consumer parts. The two share nothing and the Skylake based 6xxx parts are much better suited to the task at hand, gaming. Why? Let’s start with a look at the previous Haswell-E parts.
**Haswell-E is almost the same**
What does BDW-E bring to the table over HSW-E? How about a massive 100MHz clock boost, yes you read that right, 100 freaking MHz worth of pure upside!!! Yes, if you abandon your expensive Haswell-E and spend nearly $600 more, you too can burn through games 2.857% faster (base) or 2.703% faster under best case turbo conditions! Wow, if that isn’t worth $412-1569, I don’t know what is. Now you see why SemiAccurate is calling Broadwell-E a joke for consumers.
Let’s take a look at the four SKUs in a bit more detail though. The lowest end part is a 6-core device that is artificially crippled by removing 12 PCIe lanes. This is important because it precludes the one actual benefit a -E part has over its consumer die brethren: two GPU support. If you want to do 2x PCIe3 x16, the Intel -E lines are the only realistic way to accomplish that goal.
The $412 6800K exists for one reason, to make the whole line look less expensive. Intel was very direct in messaging that BDW-E 6800K starts close to where the 6700K is priced, doesn’t that sound better than the 6850K being almost twice the price of the 6700K? That said both realistically support one GPU and the Skylake cores on the 6700K are measurably better than the Broadwell ones in the -E parts. And they have a 400MHz higher base and turbo clock. Did we mention the overwhelming majority of games have an indivisible thread that is clock bound but almost none ever use a full four cores much less 8 threads? For the non-technical out there, this is the long way of saying 6/12 cores/threads is idiotic for gaming. Luckily BDW-E offers up to 10/20 for a mere $1599.
So the Skylake based 6700K is cheaper, faster, and better in every regard but PCIe lanes than the lowest end 6800K. What about the more expensive devices? The 6-core 6850K is the sweet spot for the line, it is ‘reasonably’ priced, for some definitions of reasonable, at $587 and clocked at 3.6/3.8GHz base/turbo. It offers the full 40 PCIe lanes so you can have two 16x GPUs and a couple of NVMe 4x drives for good measure. From there the core/thread counts go up to 8/16 and 10/20 but the clocks decrease steadily. Since Intel chooses to fuse off L3 cache with cores, the cache does go up but not enough to paper over the lower clocks.
Since gaming performance is almost linearly tied to clock rate, the 8 and 10 core BDW-Es are going to be worse for their intended market than the 6-core non-crippled parts. Luckily for no one at all, the slower worse parts are significantly more expensive at an eye-watering $999 and $1569. If you crave 5 or 10MB more L3 and are willing to pay double or triple the price of a 6850K, these are your next chips. For those who want gaming performance, the only SKU in this new quartet worth considering is the 6850K, all the others are either crippled or slower. Better yet buy a 6700K and a single card dual GPU. Or wait for the big AMD Zen but I can’t say why yet, heh, Zepplin.
Luckily for the Intel marketeers, there is a solution to this clock speed mess; having a vastly cheaper part that is better in every way than your flagship simply won’t do. To rectify this, Intel invented “Intel Turbo Boost Max Technology 3.0”, something that is called out separately from Turbo 2.0. Why? Because it isn’t real, it is a complete marketing sham that does next to nothing for the user.
Intel promised the moon for the new Turbo Max 3.0 and then didn’t go into how it works. SemiAccurate asked about how it worked and was given a useless quote from a technical guide Intel sent to others but not SemiAccurate. We asked, “The new turbo, when it ID’s the fastest core, how does it assign critical processes to that core? Software? Firm/hardware? Is it Windows only or OS agnostic?”
The reply was:
*“Intel Turbo Boost Max Technology 3.0 works by using a driver coupled with information stored in the CPU to identify and direct workloads to the fastest core on the die first, allowing workloads to run more quickly and efficiently. The driver also allows for custom configuration via a whitelist that gives end users the ability to set priority to preferred applications. The driver MUST be present on the system and configured correctly in order for this benefit to be realized as current operating systems cannot effectively route workloads to ordered cores. *
*People can then use Intel Turbo Boost Max Technology 3.0 Application to further optimize. I’ve attached an addendum that further explains the options once the application is running.”*(Note: Intel didn’t actually attach anything to the email.)
Since Intel declined to answer the questions we asked, SemiAccurate dug in. Max 3.0 is a driver that only works with Windows, it isn’t actually a hardware or board feature like Intel wants you to believe. It only works with specific boards, specific firmware builds, and with specific drivers, none of which are available as a single package. Good luck to the average user trying to get it all to work, and don’t expect it to be supported once the launch party is over.
Should it work, you are given a text box into which you can input a list of programs that get priority. Intel says the program, sorry, feature in marketspeak, will identify the highest clocking core and effectively pin the hardest working thread to it for maximum performance and boost. How Intel identifies which thread should be pinned would be interesting to hear about, but Intel refuses to actually talk tech so we can’t explain it. As far as we can tell you can only list the process, which may or may not have the intended results. In any case, if you don’t, the ‘technology’ will pin the task in focus automatically.
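For readers unfamiliar with the mechanism, "pinning" a task to a core is an ordinary OS-level operation; the sketch below uses Linux's generic CPU-affinity API from Python to restrict a process to one core. It only illustrates what pinning means, it is not Intel's Turbo Max 3.0 driver, and the choice of core 3 as the "fastest" core is an arbitrary assumption.

```python
import os

FASTEST_CORE = 3   # assumption: whichever core a Turbo Max-style tool identifies as fastest

# Restrict the current process (pid 0 = self) to that single core, so its
# hottest thread always runs on the core that clocks the highest. Linux only.
os.sched_setaffinity(0, {FASTEST_CORE})
print("now allowed on cores:", os.sched_getaffinity(0))
```

The interesting part of Intel's feature is not the pinning itself but picking the right core and the right thread automatically, which is exactly the part Intel declined to explain.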
Intel claims that it is, “more than 15% faster” based on SPECInt_base2006 running on a 6950X vs a 5960X. We feel obliged to point out two things here clocks and marketing. Since SPECInt_base2006 is a single threaded test and both listed cores clock to 3.5GHz, that number should reflect the IPC differences between the Haswell and Broadwell cores. That difference is maybe 5%, 15% is right out. If you read the fine print on the slide, you will see that Intel recompiled the binaries so that 15% difference that they are attributing to Max 3.0 is likely 2/3 due to compilers, 1/3 due to core microarchitecture, and 0/3 due to Turbo Max 3.0.
Secondly, we will point out that the relevance of SPECInt_base2006 to gaming is a little less than zero; Intel didn’t use a game to promote this ‘feature’ because it wouldn’t show any gains in the real world. They had to resort to an out-of-date, deprecated server core bench to show any gains, how ethical. Intel of old would never have resorted to such underhanded shenanigans. Max 3.0 is a piece of software that could be ported to any Intel CPU, past or present, and would do the same ‘good’ there too, namely nothing. It is as much of a ‘valuable feature’ as vendor bloatware on a new PC is.
Luckily Intel has a justification as to why 10 slower cores are better than 6 faster ones when less than four are actually needed. That justification? Mega-Tasking! Yes Intel made up a new word to promote a less useful product that costs 3x as much as a better suited one. You know all those gamers who want to game, render 4K movies without hardware acceleration on the CPU in the background, stream their videos with CPU encoding, and do 17 other things at once, the 10-core 6950X is for you. You can Mega-Task!!!! Granted your actual game will run slower even if none of these background tasks take resources it needs, but for some reason this is a good thing. It was painful to hear Intel pitch this but you knew they had no other way to justify the 8 and 10 core parts. I honestly felt bad for the product spokespeople on the call.
**They don’t want you to read the fine print**
More hilarity on this front came from a slide on the raw multi-threaded performance of the new 6950X using Cinebench R15 to showcase the raw grunt. Intel claims up to 35% better performance versus the previous generation and up to 2x better performance than 4-cores. Let’s take a look at these claims in more detail; it shows how a desperate Intel can bend the press into repeating borderline irresponsible claims. Please email the author with examples of sites that repeated their bullet points if you find them.
First comes the 35% number, 25% of which comes from two more cores; Cinebench is trivially threaded. Throw in another 5% or so for HSW->BDW core IPC improvements and you are left with maybe 5% of unexplained gain. That could be easily explained by slightly higher memory clocks (DDR4-2400 vs DDR4-2133). In short, if you can utilize 10/20 cores/threads full-time, the 6950X is the part for you. Actually a dual socket BDW-EP Xeon is a much better buy, but that would ruin the messaging.
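The decomposition is easy to sanity-check by compounding the factors; the IPC figure and memory clocks are the rough estimates from the paragraph above, not measured numbers.

```python
cores = 10 / 8        # 6950X vs 5960X core count advantage
ipc = 1.05            # assumed Haswell -> Broadwell per-core uplift (~5%)
claimed = 1.35        # Intel's "up to 35%" Cinebench R15 claim

explained = cores * ipc
print(f"cores + IPC alone:          {explained:.3f}x ({explained - 1:.1%})")
print(f"left for memory clocks etc: {claimed / explained - 1:.1%}")
print(f"DDR4-2400 vs DDR4-2133:     {2400 / 2133:.3f}x headroom")
```

Compounded rather than added, the extra cores and IPC alone land at roughly +31%, leaving only a few percent to be covered by the faster memory, which is consistent with the point that the claim has little to do with Turbo Max 3.0.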
Then we have the “up to 2x” better than 4-cores when compared against the aforementioned Skylake based 6700K. Note that 10 cores is 2.5x more than 4 cores meaning the Skylake core is notably better per core than the Broadwell ones. If your game is bound by a single indivisible thread like 99.9+% of the modern games are, you would be better off with a Skylake even if you don’t consider the $1200 price difference. Multi-threaded performance is not a really good proof point for BDW-E now is it?
Intel lists a bunch of similar numbers which show that the new BDW-E parts are at best a tepid gain over the existing parts and will often lose to them in the real world. This conclusion takes a little thinking to understand, you have to do the math rather than take the skewed bullet points Intel calls out in big yellow numbers. Intel is desperate to show that BDW-E is a step up but they fail in every way, the part is borderline useless for the markets they claim it excels at.
**Something here doesn’t hold water**
One claim that bothers SemiAccurate is that the new BDW-E is “up to 25% faster vs. previous gen” at Twitch + 4K gaming + encode. Since this encoding and streaming is the sole domain of GPU acceleration, something the 6700K has but BDW-E lacks, we asked how this technically can be. Here is our question and Intel’s answer.
**S|A** *You claim BDW-E is better for livestreaming, why is a very expensive CPU better for this than a cheap GPU with H.265 encode? What am I missing here? New hardware?*
**Intel:** *It’s the combination of workloads and scenarios that gamers and content creators are looking to do at the same time. This is where we talked about the concept of mega-tasking or the experience of simultaneous, compute-intensive, multi-threaded workloads as part of their typical PC usage.*
*For gamers, they’re gaming in 4k, livestreaming, encoding their gameplay to the system for editing and uploading later, and communicating with their eSports teams all at the same time.*
*For content creators, they’re editing video footage, doing image retouching, creating and rendering graphics, downloading footage from cameras, and working on soundtracks.*
*The apps are all active and working simultaneously and require high performance systems to satisfy enthusiasts.*
Yes, they didn’t answer the question because their claim is completely without merit. Encoding and streaming on a CPU will take any of the four new BDW-Es to their knees and still not deliver adequate throughput, two added cores or not. This job is unquestionably the domain of the GPU and its hardware-accelerated encoders. Intel dodged the question because they simply could not justify their claims; two more cores do nothing to speed up the encode scenario they offered, period.
One glaring lack of data is gaming for which there were two numbers, Firestrike and The Division. Firestrike scores 30% better than a 5960X because it is effectively a physics benchmark. Fair enough but see the comments above about 25% more cores and architecture. Firestrike is a best case scenario that is not a game benchmark, it is a compute benchmark. Similarly with The Division which is a game, Intel claims, “>85 Frames Per Second vs. 4 Cores”. If this doesn’t parse for you, that is correct, it is a nonsense number, Intel doesn’t list the competition nor what it scores. All you can tell is that a 6950X scored >85FPS. Any guesses as to why they didn’t put in a single actual game benchmark in their briefing?
All this nonsense aside there are actually three new features for overclockers. Those are per core overclocking, AVX ratio offset, and VccU voltage control. The first and the last are self-explanatory but the AVX ratio offset is not. The BDW-EP/EX parts have an AVX clock that lowers the frequency of the core when AVX instructions are detected to prevent overtaxing of power delivery circuitry. This is a good thing and for servers it means higher performance all around without compromising reliability.
On the consumer cores it means, well we are not sure so we asked. Instead of a technical answer we once again got marketing BS that barely parses in English. While we suspect it is just a new name for the server AVX frequency drop, we can’t say for sure. Here is our question and Intel’s answer.
**S|A:** *Can you go more into the AVX offset ratio? Is this the same as the AVX clocks on HSW and BDW-EP?*
**Intel:** *This feature allows operating the processor at different turbo frequencies (ratio) based on the workloads. SSE based applications can be enabled to run at higher frequencies by allowing AVX2 instructions to run at lower frequencies. Benefits include more stable overclocking and higher Over clocking frequencies for SSE based applications.*
So in the end, other than marketing doublespeak, what does Intel’s new Broadwell-E family bring to the table? <3% clock gains mainly, the rest is either spin so powerful that it will be studied by physicists for years to come or flat-out nonsense. The 10-core version is utterly pointless, the 8-core is a regression from the non-crippled 6-core, and all are less suited to their main tasks than the Skylake based 6700K. If you have any Sandy Bridge based -E part or newer, there is absolutely no reason for you to upgrade. While there is an off chance that a minor feature or two is lurking under the marketing spin, Intel’s refusal to promote their own leadership features leaves us unable to recommend these new CPUs for any reason.**S|A**
| true | true | true |
At Computex Intel launched Broadwell-E to no fanfare for a good reason.
|
2024-10-12 00:00:00
|
2016-06-13 00:00:00
|
http://semiaccurate.com/assets/uploads/2013/06/4th-Generation-Intel«-CoreÖ-i7-Processor-Badge.png
|
article
|
semiaccurate.com
|
SemiAccurate
| null | null |
13,489,063 |
https://geo.itunes.apple.com/us/podcast/dan-carlins-hardcore-history/id173001861?ign-mpt=uo%25253D4&at=1010lsT4&ct=dancarlin&mt=2
|
Dan Carlin's Hardcore History
|
Deagh Dia
|
# Dan Carlin's Hardcore History
In "Hardcore History" journalist and broadcaster Dan Carlin takes his "Martian", unorthodox way of thinking and applies it to the past. Was Alexander the Great as bad a person as Adolf Hitler? What would Apaches with modern weapons be like? Will our modern civilization ever fall like civilizations from past eras? This isn't academic history (and Carlin isn't a historian) but the podcast's unique blend of high drama, masterful narration and Twilight Zone-style twists has entertained millions of listeners.
## Hosts & Guests
### Simply the best history podcast
06/24/2007
As an avid student of history and folklore, I have listened to most of the history podcasts on iTunes, but am usually disappointed by the quality and content. Dan Carlin, however, has done a masterful job of creating an enthralling and extremely informative podcast. Many history podcasts sound like they are simply read from a book or are scattered and incoherent, but Dan sounds as though he is actually having a conversation (albeit a very Socratic one) with the listener. Not only does he give a passionate and accurate account of the facts, but also provides valuable and intriguing insight, focusing on trying to make the listener understand the contemporary perspective as well as the historical. This is simply the best history podcast on iTunes and I would highly recommend it to anyone.
### Why didn't they teach history like this in school?
02/28/2007
Why didn't they teach history like this in school? Carlin makes the story relevant and meaningful like I've never heard or read. I'm not a history buff or generally even been that interested. This is way different. Check it out...
### Good job by you Dan
Sep 11
Best storyteller and great content. We see the hard work and appreciate it. Thanks
### Best work
Oct 3
This is where Dan is at his best. This is where he seems to do his best work although I don’t think he is any kind of super-historian but at a basic level he works at his better analysis and isn’t bad to listen to. Nobody should ever consider only one source on history however Dan is not the worst place to start study of specific issues and this is where his “amateur” status disables him the least because he seems to do his best to do the work of that discipline (historiograghic research). Dan is a dramatic guy and does some Bill Shatner level pauses for over-dramatic filler and it really pads the episodes in that way and if you don’t like it, don’t keep listening because he isn’t going to stop.
## About
## Information
- CreatorDan Carlin
- Years Active2006 - 2024
- Episodes16
- RatingClean
- Copyright© dancarlin.com
- Show Website
| true | true | true |
History Podcast · @@episodeCount@@ Episodes
|
2024-10-12 00:00:00
|
2007-06-24 00:00:00
|
website
|
apple.com
|
Apple Podcasts
| null | null |
|
7,946,505 |
http://blog.erratasec.com/2014/06/riley-v-california-support-cloud.html
|
Riley v California: support cloud privacy too?
|
Robert Graham
|
Today, in *Riley v California*, SCOTUS struck down warrantless cellphone searches during arrest. However, I think more importantly, they are setting things up for a future battle over cloud privacy. They could have decided the case on narrow grounds (such as in Alito's concurring opinion), but they chose instead broad grounds.
Today, the police can grab your old emails stored in the cloud. This is based on two existing decisions.
In
*Smith v Maryland*, SCOTUS said government can grab your phone records (who you dialed) without a warrant. The idea is the "third party doctrine", that you gave up your "reasonable expectation of privacy" when you gave the information to a third party.
In
*US v Miller*, SCOTUS said the same thing about bank records. Old emails, stored on a server (rather than in transit between two parties), are considered the same sort of "record". Other information in the cloud, such as your photos backed up on Apple, Google, or Amazon cloud, likewise are mere "business records". Storing things in the cloud forfeits Fourth Amendment rights.
Today's decision, Riley v California, gives a lot of ammunition to overturn these decisions with regard to cloud information. The court says the following on page 21:
Cell phone users often may not know whether particular information is stored on the device or in the cloud, and it generally makes little difference.
It's meant one way, to show how a search incident to an arrest doesn't automatically extend to servers in, say, Germany. But this is worded in a way that (one could argue) now goes the other way: the expectation of privacy for the device in their pocket extends to the data in the cloud.
On page 23 is an even better nugget for us privacy weenies:
The sources of potential pertinent information are virtually unlimited, so applying the Gant standard to cell phones would in effect give "police officers unbridled discretion to rummage at will among a person's private effects."
That is the entirety of the "cloud privacy" argument.
*Smith v Maryland* was predicated on the idea that a pen register had limited utility to law enforcement. In today's "cloud", the opposite is true. All a person's effects are in the cloud, including not only data mentioned here (such as whether they were texting while driving), but personal correspondence, photos with EXIF location data, that novel they've been writing, their search history, and so on. The power of the government to rummage through a person's effects is unrivaled in history -- and is why searches of cloud information should not be allowed without a warrant.
On page 24, the court talks about
*Smith v Maryland*, and how the call log on the phone differs from a pen register list of phone numbers dialed:
call logs typically contain more than just phone numbers; they include any identifying information that an individual might add, such as the label "my house" in Wurie's case
This circumscribes *Smith v Maryland* to *just* phone numbers -- signaling it might not apply to more extensive information, like names associated with numbers.
On page 3, the court compares a modern cellphone to the objects that historically might be on a person:
Cell phones differ in both a quantitative and qualitative sense from other objects that might be carried on an arrestee's person. Notably, modern cell phones have an immense storage capacity. Before cell phones, a search of a person was limited by physical realities and generally constituted only a narrow intrusion on privacy.
That, and the continuing discussion, applies equally to cloud data compared to pen-register and bank records. Historically, they were limited and only constituted a narrow intrusion on privacy. Today, arbitrary police access to the cloud would represent a wide intrusion on privacy.
Further on page 3, SCOTUS says:
A decade ago officers might have occasionally stumbled across a highly personal item such as a diary, but today many of the more than 90% of American adults who own cell phones keep on their person a digital record of nearly every aspect of their lives.
...and by extension, the same argument applies to the cloud. American adults have a reasonable expectation of privacy over a digital record that covers 90% of their lives.
On page 25 is this wonderful statement:
We cannot deny that our decision today will have an impact on the ability of law enforcement to combat crime. ... Privacy comes at a cost.
Part of the justification for the "third-party doctrine" is that technology allows criminals to move things around to whichever medium has the most protection. If warrants are required to search cellphones, then criminals will move information onto this safe haven. Therefore, the argument goes, we shouldn't give them safe havens. I think SCOTUS is arguing against this -- signaling that they don't mind making the police's job harder.
Many commentators have pointed out this statement:
That is like saying a ride on horseback is materially indistinguishable from a flight to the moon.
What SCOTUS is signaling here is that technology obsoletes previous decisions. In other words, modern cellphone records that the NSA has been monitoring may actually be substantially different than the pen register taps in
*Smith v Maryland*.
And finally, damnit, SCOTUS said this:
Our cases have recognized that the Fourth Amendment was the founding generation's response to the reviled "general warrants" and "writs of assistance" of the colonial era, which allowed British officers to rummage through homes in an unrestrained search for evidence of criminal activity. Opposition to such searches was in fact one of the driving forces behind the Revolution itself. In 1761, the patriot James Otis delivered a speech in Boston denouncing the use of writs of assistance. A young John Adams was there, and he would later write that "every man of a crowded audience appeared to me to go away, as I did, ready to take arms against writs of assistance."
I'm not a lawyer, but a revolutionary. I don't care about precedent. I believe a Right to Cloud Privacy exists even if I believe that a logical adherence to precedent means that SCOTUS can't find such a right. That government can rummage through 90% of our personal effects in an unrestrained search for evidence of criminal activity is intolerable. I'm heartened by the fact that SCOTUS seems, actually, ready to agree with me.
| true | true | true |
Today, in Riley v California , SCOTUS struck down warrantless cellphone searches during arrest. However, I think more importantly, they are ...
|
2024-10-12 00:00:00
|
2014-06-25 00:00:00
| null | null |
erratasec.com
|
blog.erratasec.com
| null | null |
8,165,238 |
http://inc.com/jill-krasny/joshua-shenk-why-creative-pairs-spark-innovation.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
17,136,375 |
https://gnunet.org/reclaimID
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
34,611,275 |
https://www.aboutamazon.com/news/sustainability/amazon-sets-a-new-record-for-the-most-renewable-energy-purchased-in-a-single-year
|
Amazon sets a new record for the most renewable energy purchased in a single year
|
Amazon Staff
|
Amazon has announced that in 2022, it grew its renewable energy capacity by 8.3 gigawatts (GW) through 133 new projects in 11 countries. This brings Amazon’s total portfolio to more than 20 GW—enough to generate the energy to power 5.3 million U.S. homes—across 401 renewable energy projects in 22 countries.
The company’s renewable energy purchases continue to add new wind and solar projects on the grids that power Amazon’s operations, including Amazon Web Services (AWS) data centers, Amazon fulfillment centers, and physical stores around the world.
With these continued investments, Amazon set a new corporate record for the most renewable energy announced by a single company in one year. The company remains the largest corporate buyer of renewable energy—a position it’s held since 2020, according to Bloomberg New Energy Finance. Amazon’s continued investment in renewable energy helps to accelerate growth in new regions through innovative deal structures, technologies, and cloud solutions.
These purchases also bring Amazon closer to powering its operations with 100% renewable energy by 2025—five years ahead of its original 2030 target. In 2022, the company announced new projects in Australia, Canada, Finland, France, Germany, Japan, Poland, Singapore, Spain, and the U.S., and broke ground on projects in Brazil, India, and Indonesia.
With 25 new renewable energy projects secured to close out the year, the company now has 401 projects globally, including 164 wind farms and solar farms, and 237 rooftop solar projects on Amazon facilities. Once operational, Amazon’s global renewable energy projects are expected to generate 56,881 gigawatt-hours (GWh) of clean energy each year.
“As we continue to launch new renewable energy projects around the world, we’re pleased to be on track to power our operations with 100% renewable energy, five years ahead of our original target. With 133 projects in 11 countries announced in 2022, Amazon had another record year,” said Adam Selipsky, CEO of AWS. “These projects highlight the diversity of our renewable energy sources and showcase our ability to bring new technologies to new markets and further reduce the impacts of climate change.”
In addition to the 108 clean energy projects the company announced in 2022, Amazon is now announcing 25 additional 2022 clean energy projects. These include:
- Eleven new projects in Europe, including in Finland, Germany, Italy, Spain, and the United Kingdom, totaling 372 megawatts (MW) of capacity. Tapping into one of the world’s best renewable energy resources, Amazon continued to add to its portfolio of offshore wind projects, investing in two new offshore wind projects in Europe totaling 280 MW of capacity.
- Four new projects in North America, totaling 918 MW of energy in Arizona, California, and Texas. A new solar project paired with energy storage in California allows Amazon to store clean energy produced by its solar projects and deploy it when solar energy is not available, such as in the evening hours or during periods of high demand. Also in California, Amazon added its first on-site solar project at the Amazon Air Hub where employees pack and handle freight and conduct planeside operations.
- Ten new renewable energy projects in India, Indonesia, and Japan. In India, a third 200 MW wind-solar hybrid project was added to Amazon’s first two wind-solar hybrid projects. Renewable hybrid energy systems can play a key role in helping India accelerate the decarbonization of power generation, lowering the cost of electricity in the medium term. These hybrid energy systems also maximize clean energy use on the grid by combining two technologies with different generation profiles, reducing variability in renewable generation, and improving grid stability. In Indonesia, Amazon invested in its first renewable energy projects, securing a first-of its-kind agreement for corporations to access additional utility-scale solar projects. In Japan, Amazon added three on-site solar projects and a new 38 MW utility-scale solar project.
Rapidly scaling renewable energy is one of the most effective strategies to fight climate change. To ensure that organizations’ renewable energy purchases have the greatest impact on emissions reductions, Amazon recently led the creation of the new Emissions First coalition. This coalition is leading advocacy efforts to modernize the world’s leading carbon-accounting standard, helping to reduce carbon from global electricity grids as quickly and cost-effectively as possible.
"Amazon's clean energy portfolio doesn't just top the corporate charts—it is now among the leading utilities globally, as well,” said Kyle Harrison, head of sustainability research at Bloomberg New Energy Finance. “The fact that it announced a new annual record of clean energy in a year mired by a global energy crisis, supply chain bottlenecks, and high interest rates speaks to its forward planning and expertise in navigating power markets and executing long-term contracts."
“Amidst the market uncertainty of 2022, Amazon led clean energy buyers and doubled down on its commitment to renewable energy,” said Miranda Ballentine, CEO of the Clean Energy Buyers Association (CEBA). “Amazon’s commitment to decarbonization is demonstrated through its leading placement on CEBA’s Deal Tracker Top 10, within our member community, and on a global scale.”
Sam Kimmins, director of energy at the Climate Group and spokesperson for the Asia Clean Energy Coalition (ACEC), added: “As Asia continues to transition away from coal and gas, these investments by Amazon in wind and solar are further evidence that there is a large and growing corporate renewable electricity demand in this region. We look forward to continuing to work with Amazon and our other ACEC members to rapidly increase the supply of renewables and to achieve our shared 100% renewable ambitions in the region.”
Amazon co-founded The Climate Pledge in 2019, committing to reach net-zero carbon by 2040—10 years ahead of the Paris Agreement. The Pledge now has nearly 400 signatories, including Best Buy, IBM, Microsoft, PepsiCo, Siemens, Unilever, Verizon, and Visa. Amazon continues to transform its transportation network, including electrifying its delivery fleet and sourcing alternatives to fossil fuels—it currently has thousands of electric delivery vehicles from Rivian in more than 100 cities and regions in the U.S., more than 3,000 electric vans delivering packages to customers in Europe, and several electric vehicle partnerships in APAC.
The company is also investing $2 billion in the development of decarbonizing services and solutions through The Climate Pledge Fund. Learn more about sustainability at Amazon.
How to Build a $5 Million Software Business from a Spreadsheet File - with Jesse Mecham [020]
Omer Khan
http://www.conversionaid.com/podcast/jesse-mecham-ynab/
### How to Build a $5 Million Software Business from a Spreadsheet File
Jesse is the founder of “You Need a Budget”, a software product that helps you with your personal budgeting. Jesse is a former Certified Public Accountant (CPA) turned software entrepreneur. He founded YNAB in 2004 and has successfully grown it into a multi-million dollar business with tens of thousands of users from all over the world.
### Key Talking Points
- How Jesse started this business by selling an Excel spreadsheet that he'd initially created for himself and his wife.
- How the Excel spreadsheet wasn't selling until Jesse started charging MORE money for it.
- Why content creation and education are the top priority for YNAB and why software plays ‘second fiddle’.
- The key essential lessons that Jesse learned about copywriting and landing pages which helped to double his sales.
- How Jesse's goal with his business was to simply generate $350 per month to cover his rent.
- How Jesse grew YNAB from its modest beginnings into a business that he could work on full-time.
- Why Jesse scrapped software that he'd spent over $60,000 on and rebuilt it from scratch.
- Why Jesse kept working at his full-time job even when his business was generating about double his salary.
### Success Quote
- “The single-minded ones, the monomaniacs, are the only true achievers. The rest, the ones like me, may have more fun; but they fritter themselves away…. Whenever anything is being accomplished, it is being done, I have learned, by a monomaniac with a mission.” –
**Peter Drucker**
### Book Recommendation
- “The Millionaire Next Door” by
**Thomas J. Stanley**
### Links & Resources Mentioned
### Contact Information
- Jesse Mecham – @jessemecham or jesse [at] ynab [dot] com
- Omer Khan – @omerkhan
### Recommended Podcast Episodes
- Qwilr: SaaS Freemium Pitfalls and How to Avoid Them - with Mark Tanner [261]
- Calendly's Founder: Building a $30M SaaS After 3 Failed Startups - with Tope Awotona [213]
- Superhuman: The Power of Data-Driven Product Market Fit - with Rahul Vohra [342]
- Boast.ai: The Tough Road to SaaS Success and Beyond - with Lloyed Lobo [377]
- How Carrd's Founder Turned a Side Project into a Profitable SaaS - with AJ [225]
- ITProTV: From Brick & Mortar to $9M SaaS Training Company - with Tim Broom [150]
- Thinkific: From $29 Online Course to $60M ARR SaaS Company - with Greg Smith [380]
- PSPDFKit: Bootstrapping a SaaS $20K to $1M ARR in 8 Months - with Jonathan Rhyne [407]
- LeanData: Overcoming New Category Challenges to 8-Figures - with Evan Liang [399]
- Subly: Bootstrapping a SaaS from Zero to 60,000 Users - with Holly Stephens [289]
Elon Musk Is Launching a New University in Texas
Lucas Ropek
https://gizmodo.com/elon-musk-is-reportedly-launching-a-new-university-in-t-1851097464
The legions of Elon Musk stans who have long dreamt of following in the unhinged billionaire’s footsteps will soon have an opportunity to learn directly from the master. Or, rather, they’ll have an opportunity to attend a university set up by Musk. And learn science. Or something. Maybe.
Tax filings first spotted by Bloomberg show that the billionaire is intent on setting up a primary and secondary school in Austin, Texas, that will focus primarily on STEM. An application for tax-exempt status filed with the IRS last year and approved in March shows that Musk is using $100 million of his own money to get the school up and running. The school plans to pursue accreditation from the Southern Association of Colleges and Schools Commission on Colleges and will eventually “expand,” developing into a university “dedicated to education at the highest levels.”
In short: Musk appears to be taking after the cadres of other intellectually disgruntled figures who want to remake higher education in their own image.
Indeed, the “intellectual dark web” crowd (to which Musk is tangentially related) has long been critical of the too “woke” ways of traditional universities and, as such, has tried—again and again—to set up new educational systems that are more attuned to their ideological preferences. For the most part, those attempts have not been particularly successful.
Musk’s motivation for setting up his own university hasn’t been made clear at this point, so there’s no telling why he thinks this is a good idea. Knowing Musk, I’d imagine he has some grand, “humanity helping” rationale in mind.
In truth, all of the folks hellbent on remaking American education should really team up and make one giant, super weird, mega-college. Bari Weiss can handle the humanities. Musk will deal with STEM. Jordan Peterson can handle…whatever it is he’s doing these days. I’m sure the kids will love it.
The English Style
Margarette Lincoln
https://www.laphamsquarterly.org/roundtable/english-style
The Restoration of the Stuart monarchy in England in 1660 brought a great revival of interest in fashion, both at court and among those who could afford to imitate court dress. Clothes not only indicated status but sometimes political allegiances and religious beliefs as well. During the civil war period they proclaimed whose side you were on, since Cavaliers—members of the Established Church who recognized the king as its head, supported bishops and the *Book of Common Prayer*, and ardently upheld the sacredness of the king’s majesty—and Roundheads—who positioned themselves as defenders of historic English freedoms in the face of royal tyranny during the English Civil War—dressed differently. Broadly, the Interregnum had been a time of austerity and now people coveted luxury, but clothes continued to have political meaning and the cloth they were made from reflected national, commercial interests.
King Charles II consciously fashioned his image with his tailors, understanding the value of clothes in affirming the status of monarchy. When he arrived in England after the Restoration, he was dressed in the height of French fashion. Although this vogue was widely admired, it had its critics too: Why ape Paris fashions when France’s Catholicism and system of absolute monarchy were loathed and feared? Numerous London trades were making money from royalty by turning out commemorative items; cloth dealers also sought to profit from royalty, urging the king to wear textiles made in England. In October 1665 Charles duly announced that he and the queen would wear only English cloth, except linen and calicoes, and encouraged the court to do the same.
The royals soon saw that it would be expedient to claim to break with French fashion and create an “English” style. Samuel Pepys recorded how, in October 1666, Charles ostentatiously began to wear a vest, or long waistcoat. It may well have been influenced by examples of Turkish dress worn on stage; John Evelyn credited the new fashion with being of Persian origin:
Thence to Court, it being the first time of his Majesties putting himselfe solemnly into the Eastern fashion of vest, changing doublet, stiff collar, [bands] & cloake &c: into a comely vest, after the Persian mode with girdle or shash…resolving never to alter it, & to leave the French mode.
Suits were now usually of somber-colored velvets, although vests were in brighter, contrasting colors; male dress gradually settled on coat, vest, and breeches, foreshadowing the modern suit. Charles I had mostly worn flashy satin, but Charles II preferred woven cloth: camlets, made of wool and silk, or velvet. Although he wore English cloth, he augmented it with luxury fabrics from France and Italy. So in 1668 Parliament petitioned the king and his brother James to set an example by wearing only English manufactures. The king’s personal taste was for casual, quite plain clothes for everyday wear, but fashionable court dress ran to extremes. For women, dresses became more structured, with a tight, low-waisted bodice that was very low-cut; full, high-waisted gowns went quite out of fashion.
Among elite men, extravagant, long, curling wigs became the norm, because when Charles began to go gray toward the end of 1663, he started to wear them. Possibly he was already trapped by his royal image. Pepys cut his hair short and put on a periwig for the first time on September 3, 1665; he was rather nervous in case he was teased. Many others followed court fashions and bought royalist accessories for tactical reasons, as a demonstration of loyalty. Such behavior prompted cynicism: turncoats, for example, who with regime change had switched political and perhaps religious allegiances, found it politic to sport snuffboxes bearing the king’s portrait.
The Restoration brought prosperity to some and encouraged demonstrations of wealth and status after the leveling effects of the Interregnum. Social climbers eagerly purchased items of dress and domestic goods that would convey prestige as well as patriotism. Consequently, in December 1662, Pepys recorded how ashamed he was that his wife was still wearing her taffeta dress when all fashionable women had purchased warmer mohair gowns. The number of Londoners who could afford luxury fabrics remained small, so novelty was at a premium among this elite. Nobody wanted to be wearing the same patterned fabric as everyone else.
Fashion trends posed challenges for cloth importers. Alderman Sir William Turner, a textiles merchant based in St. Paul’s Churchyard who mostly imported fabrics from Genoa and Paris, found that he had to satisfy new trends quickly and not overstock. At the beginning of September 1664, he wrote to his chief foreign supplier, Boquelin and Sons of Paris, to explain “there will be newer fashions for winter and therefore I will not order any tabys [silks] yet till the fashions are settled for winter.” He later added, “As for the Brocades which you would fain persuade me to keep I have already said enouf to make you understand the nature of flowred silks the fashion changing so often that they are worth nothing when something newer comes.” Turner certainly profited from the extravagance of others: already in 1660 his estimated annual income was £2,000. But he was a Puritan and devoted much of his earnings to charitable works, winning a knighthood for public service in 1662.
**I**n Restoration times, as now, fashion was big business. In 1671 a hot news item was that French drapers had come to London and bought up all English light-colored broadcloth to the value of many thousand pounds. It would be the only material worn in the French court that winter. This was welcome news, unless Londoners were determined to follow the French style. All the while, fashion trends were becoming more complex with the expansion of trade routes. Sir Joseph Williamson, royalist MP and master of the Clothworkers’ Company, noted at a committee for trade in 1676 that “fashions of ribbons, stuffs, etc. run round the world.” Outdated fashions from France were brought to England as new, and then passed to Ireland and the West Indies. When the manufacture of one sort of goods was established in England, the French would set up another fashion, and when that too was adopted in England, they would change it for a third. Canny merchants bought English cloth direct from weavers, then sold it as French at a higher price. Williamson warned that England imported so much linen, silk, wine, and brandy from across the Channel, the balance of trade favored France. To encourage English manufactures and limit French imports at a time of depressed trade, ministers again proposed that the king should wear only English textiles, and that ladies of the court should wear only English silks or silk mixtures. Because the king’s mistresses were Catholics (excepting only Nell Gwyn), their fondness for French fashions rankled. Prudently, Charles did now adopt English medley cloth, which was made of multicolored fibers woven together.
London’s parks were places where people could display their fine dress. Charles made improvements to parks so that they offered even more attractive walks, although John Wilmot, the libertine Earl of Rochester, remarked that after dusk they were used mostly for sex. In October 1660 the king ordered a river to be cut in St. James’ Park and a new alley for the game pall mall, an early form of croquet, to run along the south side of the park wall. These alterations, together with the improvements to Whitehall Palace, employed some three hundred men. Unfortunately, on the other side of the park wall lay the highway between Charing Cross and St. James’ Palace; carriages created so much dust that players often found they could not continue with their game. At some expense Charles was forced to move the highway further north, creating today’s Pall Mall.
Nonconformists, Quakers especially, still wore sober costumes that set them apart, although they dropped the uncompromising Puritan habit of the earlier period. They might not wear black, which was associated with mourning, but still favored muted colors. They rejected ornament and despised fashion trends. The poor struggled to appear respectable enough to go to church, often pawning Sunday clothes to tide them over for the rest of the week. Because clothing was expensive, London had a thriving secondhand clothes market, with many dealers to the east of Tower Hill, the place of execution for high-profile traitors. Because garments could so easily be turned into ready cash, street robbers targeted victims for their clothes as well as their valuables. In 1661 a gang of nine laborers assaulted the elderly Sir John Scudamore in St. Giles-in-the-Fields, taking not only his money and jewels but also his hat, cloak, breeches, doublet, shirt, and gloves. The estimated value of his clothes alone was £17 2s. All the robbers were arrested and sentenced to death.
The sumptuous costumes of the aristocracy, soon associated with the sensual pleasures of the Restoration court, elicited disapproval tinged with envy. The outrageous behavior of some aristocratic rakes hardened criticism. Lord Rochester killed a waterman merely for commenting to another that Rochester was a handsome man. Hearing this as he passed by, Rochester turned back, swore, and asked the waterman who made him a judge of beauty. He gave the man a box on the ear and when the waterman seemed about to retaliate, drew his sword and ran him through. Gentlemen routinely wore swords—usually rapiers—as a mark of their honor and status, almost as a decorative element of dress, so minor disputes easily escalated. They set a poor example; London’s homicide rate, although decreasing, was higher than it is today. The violence was disturbing; one resident of St. Martin-in-the-Fields was prosecuted for wishing “all the gentry in the land would kill one another, so that the comminalty might live the better.”
The fascination that the court inspired slowly came to be mixed with repugnance at its immorality; people understood that silks and fine wigs often concealed the ravages of syphilis. Contemporary advertisements promising discreet cures pulled no punches in the description of sexually transmitted diseases. One medicine promised to:
Cure a Gonorrhaea, Running of the Rein or Clap, Pain, Heat, Scalding in making Water, &c. in a few Days: And very speedily take away all Aches or Pains in the Head, Back, Shoulders, Arms, Legs, or Night-Pains; Sores or Ulcers in the Mouth or Throat; All Sorn and stubborn Scabs, or breaking out in any part of the Body; preserve the Palate and Bridge of the Nose; purifie the Blood; perfectly free the Body of the Remains of any Pox or Clap formerly ill Cured.
Prince Rupert, the king’s cousin, reputed to be suffering from chronic syphilis, finally submitted to operations on his skull to release corrupted matter. Pepys saw him at Whitehall Palace less than two months afterward, “pretty well as he used to be, and looks well; only, something appears to be under his periwig on the crown of his head.” The prince may simply have undergone treatment for an old injury, since he was once shot in the head, but this was not Pepys’ understanding. Many writers who brooded on appearance and reality developed a corrosive cynicism; it was a theme that permeated through to popular literature and colored the spirit of the age. The ballad “News from Hide Park” describes a country gentleman’s encounter with a prostitute, who seemed an “armful of satin” but divested of her wig and fine clothes became “like a Lancashire witch of fourscore and ten.”
From *London and the Seventeenth Century: The Making of the World’s Greatest City* by Margarette Lincoln, published by Yale University Press. Copyright © 2021 by Margarette Lincoln. All rights reserved.
Moddable Display
https://moddable.com/display
# Display 6
Display 6 is powered by Moddable Six, a powerfully beautiful color touch screen with Wi-Fi and BLE. The dual-core ESP32-S3 has plenty of power and memory for your project's graphics and communication needs. Create compelling user experiences with the integrated speaker.
- Espressif ESP32-S3
- Wi-Fi
- BLE
- 240 x 320 IPS display
- Audio amp & integrated speaker
- Capacitive touch
- 20 expansion pins
- Portrait, landscape and touch
Purchase $49.99
**Introductory Price**
# Display 4
Display 4 is powered by Moddable Four to deliver year-long operation on a coin cell with BLE and a fast, always-on screen. Because it runs without a power cable, you can put Display 4 just about anywhere.
- Nordic nRF52
- BLE
- Battery powered
- 128 x 128 reflective memory display
- 12 expansion pins
- Portrait and landscape
# Display 3
Display 3 is powered by Moddable Three, a versatile, affordable ePaper display with Wi-Fi. ePaper is perfect for subtly displaying information around your home or office.
- Espressif ESP8266
- Wi-Fi
- 250 x 122 ePaper display
- 11 expansion pins
- Portrait and landscape
https://www.varnish-cache.org/docs/trunk/phk/http20.html
# Why HTTP/2.0 does not seem interesting
This is the email I sent to the IETF HTTP Working Group:
```
From: Poul-Henning Kamp <[email protected]>
Subject: HTTP/2 Expression of luke-warm interest: Varnish
To: HTTP Working Group <[email protected]>
Message-Id: <[email protected]>
Date: Thu, 12 Jul 2012 23:48:20 GMT
```
This is Varnish’ response to the call for expression of interest in HTTP/2[1].
## Varnish
Presently Varnish[2] only implements a subset of HTTP/1.1 consistent with its hybrid/dual “http-server” / “http-proxy” role.
I cannot at this point say much about what Varnish will or will not implement protocol wise in the future.
Our general policy is to only add protocols if we can do a better job than the alternative, which is why we have not implemented HTTPS for instance.
Should the outcome of the HTTP/2.0 effort result in a protocol which gains traction, Varnish will probably implement it, but we are unlikely to become an early implementation, given the current proposals at the table.
## Why I’m not impressed
I have read all, and participated in one, of the three proposals presently on the table.
Overall, I find all three proposals are focused on solving yesteryears problems, rather than on creating a protocol that stands a chance to last us the next 20 years.
Each proposal comes out of a particular “camp” and therefore all seem to suffer a certain amount from tunnel-vision.
It is my considered opinion that none of the proposals have what it will take to replace HTTP/1.1 in practice.
## What if they made a new protocol, and nobody used it?
We have learned, painfully, that an IPv6 which is only marginally better than IPv4 and which offers no tangible benefit for the people who have the cost/trouble of the upgrade, does not penetrate the network on its own, and barely even on governments mandate.
We have also learned that a protocol which delivers the goods can replace all competition in virtually no time.
See for instance how SSH replaced TELNET, REXEC, RSH, SUPDUP, and to a large extent KERBEROS, in a matter of a few years.
Or I might add, how HTTP replaced GOPHER[3].
HTTP/1.1 is arguably in the top-five most used protocols, after IP, TCP, UDP and, sadly, ICMP, and therefore coming up with a replacement should be approached humbly.
## Beating HTTP/1.1
Fortunately, there are many ways to improve over HTTP/1.1, which lacks support for several widely used features, and sports many trouble-causing weeds, both of which are ripe for HTTP/2.0 to pounce on.
Most notably HTTP/1.1 lacks a working session/endpoint-identity facility, a shortcoming which people have pasted over with the ill-conceived Cookie hack.
Cookies are, as the EU commission correctly noted, fundamentally flawed, because they store potentially sensitive information on whatever computer the user happens to use, and as a result of various abuses and incompetences, EU felt compelled to legislate a “notice and announce” policy for HTTP-cookies.
But it doesn’t stop there: The information stored in cookies have potentially very high value for the HTTP server, and because the server has no control over the integrity of the storage, we are now seeing cookies being crypto-signed, to prevent forgeries.
The term “bass ackwards” comes to mind.
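To make the cookie-signing point concrete, here is a minimal sketch of the usual approach: attach an HMAC so the server can detect client-side tampering. The key and cookie value below are made up for illustration.

```python
import hmac
import hashlib

SECRET_KEY = b"server-side-secret"  # made-up key; kept on the server, never sent to clients

def sign_cookie(value: str) -> str:
    """Attach an HMAC tag so the server can later detect tampering."""
    tag = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{tag}"

def verify_cookie(signed: str):
    """Return the original value if the tag checks out, otherwise None."""
    value, _, tag = signed.rpartition(".")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(tag, expected) else None

cookie = sign_cookie("user_id=42")      # hypothetical cookie value
assert verify_cookie(cookie) == "user_id=42"
assert verify_cookie(cookie + "0") is None
```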
Cookies are also one of the main wasters of bandwidth, disabling caching by default, sending lots of cookies where they are not needed, which made many sites register separate domains for image content, to “save” bandwidth by avoiding cookies.
The term “not really helping” also comes to mind.
In my view, HTTP/2.0 should kill Cookies as a concept, and replace it with a session/identity facility, which makes it easier to do things right with HTTP/2.0 than with HTTP/1.1.
Being able to be “automatically in compliance” by using HTTP/2.0 no matter how big dick-heads your advertisers are or how incompetent your web-developers are, would be a big selling point for HTTP/2.0 over HTTP/1.1.
However, as I read them, none of the three proposals try to address, much less remedy, this situation, nor for that matter any of the many other issues or troubles with HTTP/1.x.
What’s even worse, they are all additive proposals, which add a new layer of complexity without removing any of the old complexity from the protocol.
My conclusion is that HTTP/2.0 is really just a grandiose name for HTTP/1.2: An attempt to smooth out some sharp corners, to save a bit of bandwidth, but not get anywhere near all the architectural problems of HTTP/1.1 and to preserve faithfully its heritage of badly thought out sedimentary hacks.
And therefore, I don’t see much chance that the current crop of HTTP/2.0 proposals will fare significantly better than IPv6 with respect to adoption.
## HTTP Routers
One particular hot-spot in the HTTP world these days is the “load-balancer” or as I prefer to call it, the “HTTP router”.
These boxes sit at the DNS resolved IP numbers and distributes client requests to a farm of HTTP servers, based on simple criteria such as “Host:”, URI patterns and/or server availability, sometimes with an added twist of geo-location[4].
HTTP routers see very high traffic densities, the highest traffic densities, because they are the focal point of DoS mitigation, flash mobs and special event traffic spikes.
In the time frame where HTTP/2.0 will become standardized, HTTP routers will routinely deal with 40Gbit/s traffic and people will start to architect for 1Tbit/s traffic.
HTTP routers are usually only interested in a small part of the HTTP request and barely in the response at all, usually only the status code.
The demands for bandwidth efficiency has made makers of these devices take many unwarranted shortcuts, for instance assuming that requests always start on a packet boundary, “nulling out” HTTP headers by changing the first character and so on.
Whatever HTTP/2.0 becomes, I strongly urge IETF and the WG to formally recognize the role of HTTP routers, and to actively design the protocol to make life easier for HTTP routers, so that they can fulfill their job, while being standards compliant.
The need for HTTP routers does not disappear just because HTTPS is employed, and serious thought should be turned to the question of mixing HTTP and HTTPS traffic on the same TCP connection, while allowing a HTTP router on the server side to correctly distribute requests to different servers.
One simple way to gain a lot of benefit for little cost in this area, would be to assign “flow-labels” which each are restricted to one particular Host: header, allowing HTTP routers to only examine the first request on each flow.
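As a rough illustration of that flow-label idea (a hypothetical sketch, not any proposal's actual wire format): a router could parse the Host header only on the first request of a flow, cache the backend decision, and forward every later request on that flow without touching its headers.

```python
# Hypothetical sketch of per-flow routing; flow labels, backends, and hosts are invented.
BACKENDS = {"www.example.com": "10.0.0.10", "img.example.com": "10.0.0.20"}
DEFAULT_BACKEND = "10.0.0.10"
flow_to_backend = {}  # flow label -> backend chosen from the flow's first request

def route(flow_label, raw_request: bytes) -> str:
    """Parse the Host: header only on a flow's first request; reuse the decision afterwards."""
    if flow_label in flow_to_backend:
        return flow_to_backend[flow_label]
    host = ""
    for line in raw_request.split(b"\r\n")[1:]:   # skip the request line
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode()
            break
    backend = BACKENDS.get(host, DEFAULT_BACKEND)
    flow_to_backend[flow_label] = backend
    return backend

first = b"GET /logo.png HTTP/1.1\r\nHost: img.example.com\r\n\r\n"
later = b"GET /photo.jpg HTTP/1.1\r\nX-Whatever: ignored\r\n\r\n"
assert route(7, first) == "10.0.0.20"
assert route(7, later) == "10.0.0.20"   # routed without examining its headers
```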
## SPDY
SPDY has come a long way, and has served as a very worthwhile proof of concept prototype, to document that there are gains to be had.
But as Frederick P. Brooks admonishes us: Always throw the prototype away and start over, because you will throw it away eventually, and doing so early saves time and effort.
Overall, I find the design approach taken in SPDY deeply flawed.
For instance identifying the standardized HTTP headers, by a 4-byte length and textual name, and then applying a deflate compressor to save bandwidth is totally at odds with the job of HTTP routers which need to quickly extract the Host: header in order to route the traffic, preferably without committing extensive resources to each request.
It is also not at all clear if the built-in dictionary is well researched or just happens to work well for some subset of present day websites, and at the very least some kind of versioning of this dictionary should be incorporated.
It is still unclear for me if or how SPDY can be used on TCP port 80 or if it will need a WKS allocation of its own, which would open a ton of issues with firewalling, filtering and proxying during deployment.
(This is one of the things which makes it hard to avoid the feeling that SPDY really wants to do away with all the “middle-men”)
With my security-analyst hat on, I see a lot of DoS potential in the SPDY protocol, many ways in which the client can make the server expend resources, and foresee a lot of complexity in implementing the server side to mitigate and deflect malicious traffic.
Server Push breaks the HTTP transaction model, and opens a pile of cans of security and privacy issues, which would not be sneaked in during the design of a transport-encoding for HTTP/1+ traffic, but rather be standardized as an independent and well analysed extension to HTTP in general.
## HTTP Speed+Mobility
Is really just SPDY with WebSockets underneath.
I’m really not sure I see any benefit to that, except that the encoding chosen is marginally more efficient to implement in hardware than SPDY.
I have not understood why it has “mobility” in the name, a word which only makes an appearance in the ID as part of the name.
If the use of the word “mobility” only refers only to bandwidth usage, I would call its use borderline-deceptive.
If it covers session stability across IP# changes for mobile devices, I have missed it in my reading.
## draft-tarreau-httpbis-network-friendly-00
I have participated a little bit in this draft initially, but it uses a number of concepts which I think are very problematic for high performance (as in 1Tbit/s) implementations, for instance variant-size length fields etc.
I do think the proposal is much better than the other two, taking a much more fundamental view of the task, and if for no other reason, because it takes an approach to bandwidth-saving based on enumeration and repeat markers, rather than throwing everything after deflate and hope for a miracle.
I think this protocol is the best basis to start from, but like the other two, it has a long way to go, before it can truly earn the name HTTP/2.0.
## Conclusion
Overall, I don’t see any of the three proposals offer anything that will make the majority of web-sites go “Ohh we’ve been waiting for that!”
Bigger sites will be enticed by small bandwidth savings, but the majority of the HTTP users will see scant or no net positive benefit if one or more of these three proposals were to become HTTP/2.0
Considering how sketchy the HTTP/1.1 interop is described it is hard to estimate how much trouble (as in: “Why doesn’t this website work ?”) their deployment will cause, nor is it entirely clear to what extent the experience with SPDY is representative of a wider deployment or only of ‘flying under the radar’ with respect to people with an interest in intercepting HTTP traffic.
Given the role of HTTP/1.1 in the net, I fear that the current rush to push out a HTTP/2.0 by purely additive means is badly misguided, and approaching a critical mass which will delay or prevent adoption on its own.
At the end of the day, a HTTP request or a HTTP response is just some metadata and an optional chunk of bytes as body, and if it already takes 700 pages to standardize that, and HTTP/2.0 will add another 100 pages to it, we’re clearly doing something wrong.
I think it would be far better to start from scratch, look at what HTTP/2.0 should actually do, and then design a simple, efficient and future proof protocol to do just that, and leave behind all the aggregations of badly thought out hacks of HTTP/1.1.
But to the extent that the WG produces a HTTP/2.0 protocol which people will start to use, the Varnish project will be interested.
Poul-Henning Kamp
Author of Varnish
[1] http://trac.tools.ietf.org/wg/httpbis/trac/wiki/Http2CfI
[2] https://www.varnish-cache.org/
[3] Yes, I’m that old.
[4] Which is really a transport level job, but it was left out of IPv6 along with other useful features, to not delay adoption[5].
[5] No, I’m not kidding.
Strength Level - Weightlifting Calculator (Bench/Squat/Deadlift)
https://strengthlevel.com/
# Strength Level Calculator (Bench/Squat/Deadlift)
## Join 13283317+ Lifters and Calculate your Relative Strength:
## Strength Level calculates your performance in compound exercises like bench press, deadlift and squat.
Enter your one-rep max and we will rank you against other lifters at your bodyweight. This will give you a level between Beginner and Elite.
If you don't know your current one-rep max, change the number of repetitions and enter your most recent workout set where you went to failure.
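The exact formula behind the site's estimate isn't stated here, but the widely used Epley equation gives a sense of how a set taken to failure maps to an estimated one-rep max; the numbers below are purely illustrative.

```python
def estimate_one_rep_max(weight: float, reps: int) -> float:
    """Epley estimate: 1RM ~= weight * (1 + reps / 30); a single rep is already a true max."""
    return weight if reps == 1 else weight * (1 + reps / 30)

# e.g. a 185 lb bench press taken to failure at 8 reps suggests roughly a 234 lb max
print(round(estimate_one_rep_max(185, 8)))
```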
You can compare your scores against heavier or lighter friends, and across genders.
## Strength Standards
Our strength standards are based on over 134,541,000 lifts entered by Strength Level users.
We have male and female standards for these gym exercises and more: bench press, squat, deadlift, shoulder press, pull ups, dumbbell bench press, dumbbell curl, push ups, barbell curl, dumbbell shoulder press.
## Fitness Standards
We have fitness calculators and cardio standards for running, rowing and cycling. How fit are you?
## Looking to take your strength to the next level?
Boostcamp is the last lifting app you'll ever need. Follow proven programs, create custom programs, and track workouts–all for free.
Get Boostcamp for free on iOS and Android:
★★★★★ 4.8 Stars with 10,000+ Ratings
## Track your workouts
Join the Strength Level community today! Track your workouts and compare with your friends and other lifters.
## One Rep Max Calculator
If you want to calculate your one rep max, or see percentages of your 1RM, check out our one rep max calculator:
## Strength Level's Paper Workout Log
A paperback book for tracking your workouts in the gym.
With extra pages for tracking goals and personal bests.
http://336699.org/GrowlRetirement
# Growl in Retirement
Growl is being retired after surviving for 17 years. With the announcement of Apple’s new hardware platform, a general shift of developers to Apple’s notification system, and a lack of obvious ways to improve Growl beyond what it is and has been, we’re announcing the retirement of Growl as of today.
It’s been a long time coming. Growl is the project I worked on for the longest period of my open source career. However at WWDC in 2012 everyone on the team saw the writing on the wall. This was my only WWDC. This is the WWDC where Notification Center was announced. Ironically Growl was called Global Notifications Center, before I renamed it to Growl because I thought the name was too geeky. There’s even a sourceforge project for Global Notifications Center still out there if you want to go find it.
We’ve had a lot of support over the years: from our hosting providers at Network Redux, CacheFly and others, to all of the apps using our framework, bindings, or any other integration. Special thanks go to Adium and Colloquy. Without these two projects having developers who wanted different types of notifications, Growl wouldn’t have existed. Without Growl I do not know that we would have any sort of decent notification system in OS X, iOS, Android or who knows what else.
Special thanks goes to Transifex who made localizing into 24 languages a lot easier than anything else we tried. It’s a fantastic product, if you make software please try it. Our localizers were fantastic people and should all be commended for their work.
For developers we recommend transitioning away from Growl at this point. The apps themselves are gone from the App Store; however, the code itself still lives. Everything from our rake build system to our code is available for use on our GitHub page.
The Dehogaflier: Skywalker with FLIR thermal camera.
http://www.rcgroups.com/forums/showthread.php?t=1478852
[Video: FPV - Feral Hog Damage to Rice Crop (4 min 13 sec)]
The only major bit of rework I had to do was a tilt mount for the camera. Considering the price tag for this thing we didn't want to mount it right on the nose without any kind of protection. We had to be able to look down as well. In fact we plan on looking down almost all the time. So I decided to mount it inside the nose with a slot cut to see out of.
The camera and mount and control rods made it a little too big for the Skywalker nose so I made my own out of balsa. Then I chopped most of the Skywalker's nose off and glued the wooden frame in place. After this I filled the gap with expanding foam and trimmed it back to shape. The expanding foam is a little more spongy and crumbly than EPP so I will probably glass over it later. I wish I had mounted it an inch or two further out. I may try again and do pan and tilt on my other skywalker frame.

More on the camera. http://www.flir.com/uploadedFiles/Ta...uideRev120.pdf We went with a Tau 320 because the PathFindIR supposedly didn't have decent adaptive auto gains and other settings like the Tau's have. Also the Taus are way lighter and smaller. We also couldn't afford the Tau 640 and I wasn't sure how well the extra resolution would make it past the video Tx anyway. (anyone know the answer to this?) The 320 was around $4500, the 640 was around $9000 if I remember correctly.

We did a little pixel size math based on the size of a pig and the desired flying height etc and decided that a 19mm lens was our best bet. A wider view would have me flying too low, a tighter lens would make it hard to land. So far it looks like I could possibly spot an adult pig at up to 1000ft up if I'm looking directly down on him. At one point during the maiden flight the ship was around 1400ft away LOS and you could still see me sitting by the ground station.

Will type more later... Need to sleep.
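To give a sense of that pixel size math: the ground footprint of one pixel is roughly altitude times detector pitch divided by focal length. The sketch below assumes a 25 µm pitch for the Tau 320 (worth checking against the datasheet) and a roughly 1.5 m adult hog; both numbers are guesses.

```python
# Rough ground-resolution math; the 25 um pitch and 1.5 m hog length are assumptions.
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Approximate ground footprint of one pixel, camera pointing straight down."""
    return altitude_m * pixel_pitch_m / focal_length_m

altitude = 300.0   # about 1000 ft
pitch = 25e-6      # assumed detector pitch (check the Tau 320 datasheet)
focal = 19e-3      # the 19mm lens
gsd = ground_sample_distance(altitude, pitch, focal)
print(f"{gsd:.2f} m per pixel, ~{1.5 / gsd:.0f} pixels across a 1.5 m hog")
```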
Reserved space...
Here's the video just in case I fall asleep before I get finished typing all the info above....
[Videos: Test video from ground; Test video from ground vs deer in a bean field; Maiden Thermal Flight]
The problem is that this is waaaaay out in the country with no city lighting etc. I could probably get it done on a moonlit night but I'm not familiar enough with the low light cameras to know if it would work with starlight only.
Dude that rules! Are you gonna have a spotter in the plane to guide in the hunter on the ground by radio?
Such a massive difference between states in this country... Using mil grade tech to find pot growers over there while here it's grown in warehouses and sold in glass windowed shops.
This is how things have gone so far...
After the first thermal flight posted here we went over to the farm to try and spot for feral hogs. We launched and everything was great except for the camera being tilted up slightly which made me fly tilted down all the time and well... you get the idea. Nothing broke, rice fields make a great cushion. After that I made some major changes to the airplane. Better Dragonlink antenna setup. Better RVOSD anti vibration mount. And we made a camera mount for the daylight camera so we could quick change between it and the thermal cam. I did lots of test flights with the daytime camera getting things trimmed and tuned and comfortable. The problem with the thermal camera is that it has a very narrow field of view compared to what we all fly AND it just makes the horizon a nasty blur. So I had to get used to flying with the RVOSD data on screen and trusting it. We made another trip to the farm Friday and everything went ok. Did a daytime flight to survey damage.
Then we did a dusk flight with the thermal. Very stressful. There isn't anything out there but fields and trees and they are all pretty much the same temp! Add the tight field of view and it is very easy to get disoriented. Would be totally impossible without an OSD. After that we landed, charged the batteries and had a smoke to calm my nerves. We launched again at around 10 pm. My copilot/spotter/communications officer used a spot light to help me take off. We have LEDs on the wings and reflective tape so it wasn't too hard. I found the flying much easier this time, I think the cool of night was helping. There was a decent wind blowing and at one time I was actually flying backwards according to the GPS. We spotted a pig on the camera and shortly afterward my camera tilt servo died (stuck looking down!) so using RTH and the spotlight I landed it asap.
We loaded up and converged on the pig's position a few mins later with intent to do violence but it turned out to be a little deer taking a nap. It was an exciting and exhausting night. Flying with that camera is HARD. Later on everyone else went home but I stayed to try and find the pigs the old fashioned way. I ended up falling asleep on the road only to be woken up an hour later by some big giant beast of a pig wallowing in the ditch a few yards from me. He ran away before I could get a shot off.
Here's our second successful outing. Spotted some piggers this time..
RVOSD is still giving me fits with vibrations.
Yeah 4S 5000mAh. Not sure of the weight, will have to measure but I'm sure it's on the heavy side as far as Skywalker builds go.
Where Does Data Come From?
Emily Oster
https://www.parentdata.org/p/where-does-data-come-from
I teach a lot of classes on data, to a lot of different audiences: college students, graduate students, businesspeople, people who find me through the newsletter or Instagram. Once, I taught a data class to a middle-school assembly. My favorite way to start these classes — an opening that I’ve used with all of these audiences — is this:
Here is a fact from the CDC: In 2017-2018, 42.7% of Americans were obese. And here is my question: How do they know?
This is a simple question, and one that many people have a knee-jerk reaction to: “They weighed people.” But of course, it’s not that simple. They do weigh people. But which people? And who does the weighing? And how do you go from weighing some people to a statement about all Americans?
Today I want to unpack the answers to these questions, a bit like the way I would in class. Understanding the answers here is a lens into thinking more generally about where data comes from, and whether we can be confident in what it tells us.
*Important note: I’m using obesity rate throughout today’s post because it is a number that is often cited in policy discussions, and it’s a good illustration of general principles. There are many good arguments for why BMI-based measures like this do not reflect a person’s health and why we should move away from a focus on them. (For more, I highly recommend Virginia Sole-Smith — we’ll be discussing her upcoming book, *Fat Talk: Parenting in the Age of Diet Culture*, in the newsletter early next year.) I’m using this example to talk about analytic concepts, not because there’s great value in the data itself. *
## The data source
Let’s start by asking what the *ideal* way to measure this would be, from a data standpoint. If you wanted to know the exact right number at all times, you’d want to basically force everyone to weigh themselves every day and have that data be uploaded to some government server. From this, we’d be able to precisely calculate the share of people at any particular weight. Data-wise, great. In all other ways, terrifying and horrible. It’s not *The Handmaid’s Tale* over here (yet), so that’s not the way this is done.
Slightly more realistically, there are a number of existing data sources for this information. A large-scale example is anonymized medical records, which would provide weight from yearly check-ups. Apps are another possibility. People who use a fitness or diet tracker may enter their weight as part of that; certain diet trackers will link electronically to a scale, so this information is entered automatically. Some states will collect your weight when you apply for or renew a driver’s license.
In principle, these may seem like a good way to measure America’s weight. They’re easily accessible, in the sense that the information doesn’t have to be collected anew, and possibly cover many, many people. However: there are significant issues with these sources. One is that, in some cases, they rely on self-reports, which may not be accurate. A more pernicious issue is that all of these samples are *selected* in various ways. That is to say: none are representative of the full U.S. population.
This lack of representativeness is obvious if we think about dieters or the wearers of fitness trackers. But even a population of medical records — while better — may not be representative. People who are engaged with their health such that they go to the doctor for well visits are, at least in some ways, different from those who do not. Using only that population, we’ll get a biased estimate of what we want to know.
In order to make a statement like the one that I attributed to the CDC above, we need to have data from a representative sample of Americans — the simplest way to think about this is a *random sample*. It is important to say that we do not need to sample *all* Americans. One of the wondrous, magic things about statistics and sampling is that we can sample a subset of people — even quite a small share — and make statements about the whole population. These statements will come with some possible error, but we can quantify that error. But this is only possible if our sample is representative of the whole population.
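To put a rough number on that error: for a genuinely random sample, the uncertainty in an estimated proportion shrinks with the square root of the sample size, so even 5,000 people pin down a national rate fairly tightly. A small sketch of the standard calculation:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% confidence half-width for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(0.427, 5000)   # a 42.7% rate measured in a random sample of 5,000
print(f"42.7% +/- {moe * 100:.1f} percentage points")   # roughly +/- 1.4 points
```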
The way the CDC actually gets these data on obesity (and on much other health data) is with a survey called the National Health and Nutrition Examination Survey (NHANES). The NHANES survey began in the 1960s and has run in more or less its current form since 1999. It includes roughly 5,000 individuals each year, and it’s run continuously with data released every two years.
The NHANES has two components. There is a survey component, which asks questions about demographics (race, income, education), health conditions, and diet. This is the source for a lot of data on American dietary patterns; participants do a one- or two-day “dietary recall” in which they list everything they ate during those days. We also get detailed information about any existing health conditions, and health behaviors.
The second component is the examination. This consists of a series of measurements, including medical and dental tests and laboratory tests. It is in this survey segment that information is collected on weight, along with blood pressure, laboratory measurements like triglycerides, and so on. This examination portion of the survey is done by NHANES survey people in a series of carefully designed mobile examination units.
The NHANES is designed as a representative sample. In an ideal world, the way we’d do that is to randomly choose 5,000 people from the American population of 300 million and survey them. This is infeasible with a study like this for many reasons, most notably that you’d need to get your mobile examination units all over the country. Instead, what the survey does is choose 15 random counties each year and then choose random households from within these counties, and then random people within those households. This approach allows the researchers to have the mobile units in a smaller number of locations. It also allows them to advertise the existence of the survey and to let people know what is going on.
Again, it might seem like magic, but actually this approach to sampling — when done randomly — will give you a representative sample that you can use to reflect the U.S. population. Statistics is cool! That magic, though, happens when you are able to actually survey and examine everyone you sample. That is: the survey picks a set of people within each county, and the ability to draw conclusions based on that subset of the population is reliant on them *actually surveying and examining* those people they picked.
The main issue with this is non-response. Not everyone you contact wants to be surveyed, and even fewer people want to be weighed and have their blood drawn. It takes time, and also can be invasive. In the data, about half of the people contacted are willing to be surveyed, and slightly fewer are willing to undergo the examination. If the refusal was random, this would be okay — you’d need to start with twice as many people, but you’d still do all right on being representative. The problem is that refusal is not random.
For example: Likely due to long-standing issues of mistreatment by the medical system, Black individuals are less likely to opt into the survey than those of other races. More-educated individuals are more likely to agree to be surveyed, on average, as are richer people. This means that the sample that you get is not random, and the data cannot simply be used as it is. The NHANES approaches (say) 10,000 people to get 5,000 responses; but even though the 10,000 people were randomly selected, the 5,000 are not.
At the end of the NHANES process, there is a data set of 5,000 individuals. On average, there are more white people in the sample than in the overall population, and more people with more education (among other imbalances). Reporting the obesity rate in the observed data would not be representative of the overall population.
So… what do you do?
## Reweighting data
The short answer is that you “reweight” the data. Imagine that your data is on 10 people: 9 white people and 1 Black person. But your overall population has 7 white people and 3 Black people. If you want your data to represent your population in terms of race, you need to count your one Black person three times and each of your white people only 7/9ths of a time. In doing this, you are giving more weight to the person representing the group you do not have enough of and less weight to the people who you have too many of.
This reweighting can get very complicated when the sample is imbalanced in a lot of ways, as it is in the NHANES. Typically, the way it is done is by grouping people based on a set of characteristics (e.g. 20-to-39-year-old non-Hispanic white women living in urban areas with an intermediate median income) and then asking how the share of the people in the survey with that set of characteristics compares with the share of the overall U.S. population. Participants in this group are then assigned a survey weight, which tells researchers whether to up- or down-weight them in any overall statistics.
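Here is the toy example above written out as a small sketch; the group shares mirror the 9-white/1-Black illustration, the measurements are invented, and a real survey like the NHANES does this across many more cells.

```python
# Post-stratification sketch: weight each group by population share / sample share.
population_share = {"white": 0.7, "black": 0.3}          # the toy population above
sample = [("white", 26.5), ("white", 31.2), ("white", 24.8), ("white", 29.9), ("white", 27.4),
          ("white", 33.0), ("white", 25.1), ("white", 28.6), ("white", 30.3), ("black", 27.8)]
# nine white respondents, one Black respondent; the BMI-style values are invented

n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}
# white -> 0.7 / 0.9 = 7/9, black -> 0.3 / 0.1 = 3, matching the text

weighted_mean = sum(weights[g] * v for g, v in sample) / sum(weights[g] for g, _ in sample)
print(weights, round(weighted_mean, 1))
```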
There are two important subtleties to this. The first is that in order to do this, you need a good number of people in each group. If there is literally one Black person in your entire sample, you cannot count them for the entire Black population of the U.S. One implication of this is that a survey like the NHANES starts out by “oversampling” smaller population groups, to make sure they have enough people to do their weighting.
A second issue, more pernicious, is that you can only weight based on things you see. This was brought home to me by a reader email, which was about a survey in the U.K. but has a similar feel:
I’ve just signed up for the U.K. future health study:
I live in an ethnically diverse city with significant areas of deprivation.
However, at the initial screening I was struck by how similar the cohort was, mostly white middle-aged professionals who worked in The City (or at least had a job where they could get time off to attend). I’d say 75%+ had a Garmin [fitness watch] or equivalent, and we were all quite excited to get a free cholesterol check.
Some of the issues this person identifies (imbalances in race or age) are things we can deal with using the weighting procedures described. But some of these features — like Garmin ownership — are not things we measure, and therefore not things we can base our weights on.
The bottom line is that if the non-response to a survey like the NHANES is in part a function of unobservable differences across people (which it surely is), then we retain concerns about lack of representativeness. It can be difficult to know how important this is in magnitude, or even in what direction it biases our conclusions. We can sometimes speculate, but we cannot be sure.
I wish I could tell you that there is a good way to address these problems! It’s a topic I’ve studied in my academic work (see “A simple approximation for evaluating external validity bias,” written with the incomparable Isaiah Andrews), but we do not come up with any airtight solutions. The fundamental problem is that if your sample is selected based on features you cannot see — unobservables — then you’re kind of out of luck for making precise conclusions.
## Summary and other contexts
When we want to make statements about characteristics of whole populations, be they the entire U.S. or something smaller, or larger, we must use representative samples. If my goal was to get the weight of 5,000 people, there are much simpler ways to collect that data than mobile clinics across the U.S. I could weigh people at the New York City Marathon, or people who go to a Packers game. But those approaches will yield a biased sample.
Yet even when we do our very, very best to sample in a representative way, we still run into problems if not everyone responds (and they do not!). We can up-weight and down-weight, and even when we do that, we are still not usually all the way there. It’s better than the Packers game! Not perfect, though.
The issues here come up all the time if you’re looking for them. Political polling, for example. Pollsters randomly sample people to call, but they definitely do not get a random sample of people answering them. There are many reweighting approaches to addressing these imbalances, but they do all run into the problem of unobservable selection (also, lying, but that’s for another day).
It *is* worth looking for these issues. We spend a lot of time, in this newsletter and in media in general, talking about issues like correlation versus causation. Those are important! But the more mundane question of where data comes from, and what it really measures — this is crucially important too.
*Source: “Weight and weighting,” ParentData by Emily Oster (parentdata.org), December 5, 2022.*
# How Jared Kushner’s Secret Testing Plan “Went Poof Into Thin Air”

This spring, a team working under the president’s son-in-law produced a plan for an aggressive, coordinated national COVID-19 response that could have brought the pandemic under control. So why did the White House spike it in favor of a shambolic 50-state response?

*By Katherine Eban, Vanity Fair, July 30, 2020 (https://www.vanityfair.com/news/2020/07/how-jared-kushners-secret-testing-plan-went-poof-into-thin-air)*
On March 31, three weeks after the World Health Organization designated the coronavirus outbreak a global pandemic, a DHL truck rattled up to the gray stone embassy of the United Arab Emirates in Washington, D.C., delivering precious cargo: 1 million Chinese-made diagnostic tests for COVID-19, ordered at the behest of the Trump administration.
Normally, federal government purchases come with detailed contracts, replete with acronyms and identifying codes. They require sign-off from an authorized contract officer and are typically made public in a U.S. government procurement database, under a system intended as a hedge against waste, fraud, and abuse.
This purchase did not appear in any government database. Nor was there any contract officer involved. Instead, it was documented in an invoice obtained by *Vanity Fair*, from a company, Cogna Technology Solutions (its own name misspelled as “Tecnology” on the bill), which noted a total order of 3.5 million tests for an amount owed of $52 million. The “client name” simply noted “WH.”
Over the next three months, the tests’ mysterious provenance would spark confusion and finger-pointing. An Abu Dhabi–based artificial intelligence company, Group 42, with close ties to the UAE’s ruling family, identified itself as the seller of 3.5 million tests and demanded payment. Its requests were routed through various divisions within Health and Human Services, whose lawyers sought in vain for a bona fide contracting officer.
During that period, more than 2.4 million Americans contracted COVID-19 and 123,331 of them died of the illness. First in New York, and then in states around the country, governors, public health experts, and frightened citizens sounded the alarm that a critical shortage of tests, and the ballooning time to get results, were crippling the U.S. pandemic response.
But the million tests, some of which were distributed by the Federal Emergency Management Agency to several states, were of no help. According to documents obtained by *Vanity Fair*, they were examined in two separate government laboratories and found to be “contaminated and unusable.”
Group 42 representatives did not respond to repeated requests for comment.
**TEAM JARED**
The secret, and legally dubious, acquisition of those test kits was the work of a task force at the White House, where **Jared Kushner**, President **Donald Trump**’s son-in-law and special adviser, has assumed a sprawling role in the pandemic response. That explains the “WH” on the invoice. While it’s unclear whether Kushner himself played a role in the acquisition, improper procurement of supplies “is a serious deal,” said a former White House staffer. “That is appropriations 101. That would be *not good*.”
Though Kushner’s outsized role has been widely reported, the procurement of Chinese-made test kits is being disclosed here for the first time. So is an even more extraordinary effort that Kushner oversaw: a secret project to devise a comprehensive plan that would have massively ramped up and coordinated testing for COVID-19 at the federal level.
Six months into the pandemic, the United States continues to suffer the worst outbreak of COVID-19 in the developed world. Considerable blame belongs to a federal response that offloaded responsibility for the crucial task of testing to the states. The irony is that, after assembling the team that came up with an aggressive and ambitious national testing plan, Kushner then appears to have decided, for reasons that remain murky, to scrap its proposal. Today, as governors and mayors scramble to stamp out epidemics plaguing their populations, philanthropists at the Rockefeller Foundation are working to fill the void and organize enough testing to bring the nationwide epidemic under control.
Inside the White House, over much of March and early April, Kushner’s handpicked group of young business associates, which included a former college roommate, teamed up with several top experts from the diagnostic-testing industry. Together, they hammered out the outline of a national testing strategy. The group—working night and day, using the encrypted platform WhatsApp—emerged with a detailed plan obtained by *Vanity Fair.*
Rather than have states fight each other for scarce diagnostic tests and limited lab capacity, the plan would have set up a system of national oversight and coordination to surge supplies, allocate test kits, lift regulatory and contractual roadblocks, and establish a widespread virus surveillance system by the fall, to help pinpoint subsequent outbreaks.
The solutions it proposed weren’t rocket science—or even comparable to the dauntingly complex undertaking of developing a new vaccine. Any national plan to address testing deficits would likely be more on the level of “replicating UPS for an industry,” said Dr. **Mike Pellini**, the managing partner of Section 32, a technology and health care venture capital fund. “Imagine if UPS or FedEx didn’t have infrastructure to connect all the dots. It would be complete chaos.”
The plan crafted at the White House, then, set out to connect the dots. Some of those who worked on the plan were told that it would be presented to President Trump and likely announced in the Rose Garden in early April. “I was beyond optimistic,” said one participant. “My understanding was that the final document would make its way to the president over that weekend” and would result in a “significant announcement.”
But no nationally coordinated testing strategy was ever announced. The plan, according to the participant, “just went poof into thin air.”
In a statement, White House press secretary Kayleigh McEnany said, “The premise of this article is completely false.”
This summer has illustrated in devastating detail the human and economic cost of not launching a system of national testing, which most every other industrialized nation has done. South Korea serves as the gold standard, with innovative “phone booth” and drive-through testing sites, results that get returned within 24 hours, and supportive isolation for those who test positive, including food drop-offs.
In the U.S., by contrast, cable news and front pages have been dominated by images of miles-long lines of cars in scorching Arizona and Texas heat, their drivers waiting hours for scarce diagnostic tests, and desperate Sunbelt mayors pleading in vain for federal help to expand testing capacity. In short, a “freaking debacle,” as one top public health expert put it.
We are just weeks away from dangerous and controversial school reopenings and the looming fall flu season, which the aborted plan had accounted for as a critical deadline for establishing a national system for quickly identifying new outbreaks and hot spots.
Without systematic testing, “We might as well put duct tape over our eyes, cotton in our ears, and hide under the bed,” said Dr. **Margaret Bourdeaux**, research director for the Harvard Medical School Program in Global Public Policy.
Though President Trump likes to trumpet America’s sheer number of tests, that metric does not account for the speed of results or the response to them, said Dr. **June-Ho Kim**, a public health researcher at Ariadne Labs, a collaboration between Harvard’s T.H. Chan School of Public Health and Brigham and Women’s Hospital, who leads a team studying outlier countries with successful COVID-19 responses. “If you’re pedaling really hard and not going anywhere, it’s all for naught.”
With no bankable national plan, the effort to create one has fallen to a network of high-level civilians and nongovernmental organizations. The most visible effort is led by the Rockefeller Foundation and its soft-spoken president, Dr. **Rajiv Shah**. Focused and determinedly apolitical, Shah, 47, is now steering a widening and bipartisan coalition that includes three former FDA commissioners, a Nobel Prize–winning economist, a movie star, and 27 American cities, states, and tribal nations, all toward the far-reaching goal of getting to 30 million COVID-19 tests a week by autumn, up from the current rate of roughly 5.5 million a week.
“We know what has to be done: broad and ubiquitous testing tied to broad and effective contact tracing,” until a vaccine can be widely administered, Shah told *Vanity Fair.* “It takes about five minutes for anyone to understand that is the only path forward to reopening and recovering.” Without that, he said, “Our country is going to be stuck facing a series of rebound epidemics that are highly consequential in a really deleterious way.”
**AN ABORTED PLAN**
Countries that have successfully contained their outbreaks have empowered scientists to lead the response. But when Jared Kushner set out in March to solve the diagnostic-testing crisis, his efforts began not with public health experts but with bankers and billionaires. They saw themselves as the “A-team of people who get shit done,” as one participant proclaimed in a March *Politico* article.
Kushner’s brain trust included **Adam Boehler**, his summer college roommate who now serves as chief executive officer of the newly created U.S. International Development Finance Corporation, a government development bank that makes loans overseas. Other group members included **Nat Turner**, the cofounder and CEO of Flatiron Health, which works to improve cancer treatment and research.
A Morgan Stanley banker with no notable health care experience, **Jason Yeung** took a leave of absence to join the task force. Along the way, the group reached out for advice to billionaires, such as Silicon Valley investor **Marc Andreessen.**
The group’s collective lack of relevant experience was far from the only challenge it faced. The obstacles arrayed against any effective national testing effort included: limited laboratory capacity, supply shortages, huge discrepancies in employers’ abilities to cover testing costs for their employees, an enormous number of uninsured Americans, and a fragmented diagnostic-testing marketplace.
According to one participant, the group did not coordinate its work with a diagnostic-testing team at Health and Human Services, working under Admiral **Brett Giroir**, who was appointed as the nation’s “testing czar” on March 12. Kushner’s group was “in their own bubble,” said the participant. “Other agencies were in their own bubbles. The circles never overlapped.”
In the White House statement, McEnany responded, “Jared and his team worked hand-in-hand with Admiral Giroir. The public-private teams were embedded with Giroir and represented a single and united administration effort that succeeded in rapidly expanding our robust testing regime and making America number one in testing.”
As it evolved, Kushner’s group called on the help of several top diagnostic-testing experts. Together, they worked around the clock, and through a forest of WhatsApp messages. The effort of the White House team was “apolitical,” said the participant, and undertaken “with the nation’s best interests in mind.”
Kushner’s team hammered out a detailed plan, which *Vanity Fair* obtained. It stated, “Current challenges that need to be resolved include uneven testing capacity and supplies throughout the US, both between and within regions, significant delays in reporting results (4-11 days), and national supply chain constraints, such as PPE, swabs, and certain testing reagents.”
The plan called for the federal government to coordinate distribution of test kits, so they could be surged to heavily affected areas, and oversee a national contact-tracing infrastructure. It also proposed lifting contract restrictions on where doctors and hospitals send tests, allowing any laboratory with capacity to test any sample. It proposed a massive scale-up of antibody testing to facilitate a return to work. It called for mandating that all COVID-19 test results from any kind of testing, taken anywhere, be reported to a national repository as well as to state and local health departments.
And it proposed establishing “a national Sentinel Surveillance System” with “real-time intelligence capabilities to understand leading indicators where hot spots are arising and where the risks are high vs. where people can get back to work.”
By early April, some who worked on the plan were given the strong impression that it would soon be shared with President Trump and announced by the White House. The plan, though imperfect, was a starting point. Simply working together as a nation on it “would have put us in a fundamentally different place,” said the participant.
But the effort ran headlong into shifting sentiment at the White House. Trusting his vaunted political instincts, President Trump had been downplaying concerns about the virus and spreading misinformation about it—efforts that were soon amplified by Republican elected officials and right-wing media figures. Worried about the stock market and his reelection prospects, Trump also feared that more testing would only lead to higher case counts and more bad publicity. Meanwhile, Dr. **Deborah Birx**, the White House’s coronavirus response coordinator, was reportedly sharing models with senior staff that optimistically—and erroneously, it would turn out—predicted the virus would soon fade away.
Against that background, the prospect of launching a large-scale national plan was losing favor, said one public health expert in frequent contact with the White House’s official coronavirus task force.
Most troubling of all, perhaps, was a sentiment the expert said a member of Kushner’s team expressed: that because the virus had hit blue states hardest, a national plan was unnecessary and would not make sense politically. “The political folks believed that because it was going to be relegated to Democratic states, that they could blame those governors, and that would be an effective political strategy,” said the expert.
That logic may have swayed Kushner. “It was very clear that Jared was ultimately the decision maker as to what [plan] was going to come out,” the expert said.
In her statement, McEnany said, “The article is completely incorrect in its assertion that any plan was stopped for political or other reasons. Our testing strategy has one goal in mind—delivering for the American people—and is being executed and modified daily to incorporate new facts on the ground.”
On April 27, Trump stepped to a podium in the Rose Garden, flanked by members of his coronavirus task force and leaders of America’s big commercial testing laboratories, Quest Diagnostics and LabCorp, and finally announced a testing plan: It bore almost no resemblance to the one that had been forged in late March, and shifted the problem of diagnostic testing almost entirely to individual states.
Under the plan released that day, the federal government would act as a facilitator to help increase needed supplies and rapidly approve new versions of diagnostic-testing kits. But the bulk of the effort to operate testing sites and find available labs fell to the states.
“I had this naive optimism: This is too important to be caught in a partisan filter of how we view truth and the world,” said **Rick Klausner**, a Rockefeller Foundation adviser and former director of the National Cancer Institute. “But the federal government has decided to abrogate responsibility, and basically throw 50 states onto their own.”
**THE SUMMER OF DISASTER**
It soon became clear that ceding testing responsibility to the states was a recipe for disaster, not just in Democratic-governed areas but across the country.
In April, Phoenix, Arizona, was struggling just to provide tests to its health care workers and patients with severe symptoms of COVID-19. When Mayor **Kate Gallego** reached out to the federal government for help, she got an unmistakable message back: America’s fifth-largest city was on its own. “We didn’t have a sufficient number of cases to warrant” the help, Gallego told *Vanity Fair.*
Phoenix found itself in a catch-22, which the city’s government relations manager explained to lawyers in an April 21 email obtained by *Vanity Fair* through a public records request: “On a call with the county last week the Mayor was told that the region has [not] received FEMA funds related to testing because we don’t have bad numbers. The problem with that logic is that the Mayor believes we don’t have bad numbers because [of] a lack of testing.”
In June, Phoenix’s case counts began to rise dramatically. At a drive-through testing site near her house, Gallego saw miles-long lines of cars waiting in temperatures above 100 degrees. “We had people waiting 13 hours to get a test,” said Gallego. “These are people who are struggling to breathe, whose bodies ache, who have to sit in a car for hours. One man, his car had run out of gas and he had to refill while struggling to breathe.”
Gallego’s own staff members were waiting two weeks to get back test results, a period in which they could have been unwittingly transmitting the virus. “The turnaround times are way beyond what’s clinically relevant,” said Dr. **James Lawler**, executive director of international programs and innovation at the Global Center for Health Security at the University of Nebraska Medical Center.
By July 5, Gallego was out of patience. She went on ABC News, wearing a neon-pink blouse, and politely blasted the federal response: “We’ve asked FEMA if they could come and do community-based testing here. We were told they’re moving away from that, which feels like they’re declaring victory while we’re still in crisis mode.”
Three days later, at a press conference, the White House’s testing czar, Admiral Giroir, blasted her back by name. Claiming that the federal government was already operating or contributing support for 41 Phoenix testing sites, he said: “Now, two days ago, I heard that Mayor Gallego was unhappy because there was no federal support…. It was clear to me that Phoenix was not in tune with all the things that the state were doing.”
Gallego recounted how her mother “just happened to catch this on CNN. She sent me a text message saying, ‘I don’t think they like you at the White House.’”
Despite Giroir’s defensiveness, however, Gallego ultimately prevailed in her public demand for help: Health and Human Services agreed to set up a surge testing site in Phoenix. “The effect was, we had to be in a massive crisis before they would help,” said Gallego.
And that is where the U.S. finds itself today—in a massive testing crisis. States have been forced to go their own way, amid rising case counts, skyrocketing demand for tests, and dwindling laboratory capacity. By mid-July, Quest Diagnostics announced that the average time to turn around test results was seven days.
It is obvious to experts that 50 individual states cannot effectively deploy testing resources amid vast regulatory, financial, and supply-chain obstacles. The diagnostic-testing industry is a “loosely constructed web,” said Dr. Pellini of Section 32, “and COVID-19 is a stage five hurricane.”
Dr. Lawler likened the nation’s balkanized testing infrastructure to the “early 20th century, when each city had its own electrical grid and they weren’t connected.” If one area lost power, “you couldn’t support it by diverting power from another grid.”
Experts are now warning that the U.S. testing system is on the brink of collapse. “We are at a very bad moment here,” said Margaret Bourdeaux. “We are about to lose visibility on this monster and it’s going to rampage through our whole country. This is a massive emergency.”
**THE PLOT TO SAVE AMERICA**
In late January, Rajiv Shah, president of the Rockefeller Foundation, went to Davos, Switzerland, and served on a panel at the World Economic Forum with climate activist **Greta Thunberg**. There, he had coffee with WHO Director-General Dr. **Tedros Adhanom Ghebreyesus**, whom he’d known from his years working in global public health, first at the Gates Foundation and then as director of USAID, an international development agency within the U.S. government.
Shah returned to New York, and to the Rockefeller Foundation headquarters, with a clear understanding: *SARS-CoV-2* was going to be the big one.
The Rockefeller Foundation, which aims to address global inequality with a $4.4 billion endowment, helped create America’s modern public health system through the early work of the Rockefeller Sanitary Commission to eradicate hookworm disease. Shah immediately began to refocus the foundation on the coming pandemic, and hired a worldwide expert, Dr. **Jonathan Quick**, to guide its response.
Meanwhile, he kept watching and waiting for what he assumed would be a massive federal mobilization. “The normal [strong] federal emergency response, protocols, guidance, materials, organization, and leadership were not immediately taking form,” he said. “It was pretty obvious the right things weren’t happening.”
As director of USAID from 2009 to 2015, Shah led the U.S. response to both the Haiti earthquake and the West African Ebola outbreak, and knew that the “relentless” collection of real-time metrics in a disaster was essential.
During the Ebola outbreak, which he managed from West Africa, he brought in a world-famous European epidemiologist, **Hans Rosling**, and President **Barack Obama**’s chief information officer to develop a detailed set of metrics, update them continuously in a spreadsheet, and send them daily to 25 top U.S. government officials. When it comes to outbreaks, said Shah, “If you don’t get this thing early, you’re chasing an exponentially steep curve.”
On April 21, the Rockefeller Foundation released a detailed plan for what it described as the “largest public health testing program in American history,” a massive scale-up from roughly 1 million tests a week at the time to 3 million a week by June and 30 million by the fall.
Estimating the cost at $100 billion, it proposed an all-hands-on-deck approach that would unite federal, state, and local governments; academic institutions; and the private and nonprofit sectors. Together, they would rapidly optimize laboratory capacity, create an emergency supply chain, build a 300,000-strong contact-tracing health corps, and create a real-time public data platform to guide the response and prevent reemergence.
The Rockefeller plan sought to do exactly what the federal government had chosen not to: create a national infrastructure in a record-short period of time. “Raj doesn’t do non-huge things,” said **Andrew Sweet**, the Rockefeller Foundation’s managing director for COVID-19 response and recovery. In a discussion with coalition members, Dr. **Anthony Fauci** called the Rockefeller plan “music to my ears.”
Reaching out to state and local governments, the foundation and its advisers soon became flooded with calls for help from school districts, hospital systems, and workplaces, all desperate for guidance. In regular video calls, a core advisory team that includes Shah, former FDA commissioner **Mark McClellan**, former National Cancer Institute director Rick Klausner, and Section 32’s Mike Pellini worked through how best to support members of its growing coalition.
Schools “keep hitting refresh on the CDC website and nothing’s changed in the last two months,” Shah told his colleagues in a video meeting in June. In the absence of trustworthy federal guidance, the Rockefeller team hashed out an array of issues: How should schools handle symptomatic and asymptomatic students? What about legal liability? What about public schools that were too poor to even afford a nurse?
(Last week, the CDC issued new guidelines that enthusiastically endorsed reopening schools and downplayed the risks, after coming under heavy pressure from President Trump to revise guidelines that he said were “very tough and expensive.”)
Through a testing-solutions group, the foundation is collaborating with city, state, and other testing programs, including those on Native American reservations, and helping to bolster them.
“They came on board and turbocharged us,” said **Ann Lee**, CEO of the humanitarian organization CORE (Community Organized Relief Effort), cofounded by Lee and the actor **Sean Penn**. CORE now operates 44 testing sites throughout the U.S., including Dodger Stadium in Los Angeles and mobile units within the Navajo Nation, which also offer food and essential supplies.
It may seem impossible for anyone but the federal government to scale up diagnostic testing one hundred-fold through a painstaking and piecemeal approach. But in private conversations, dispirited members of the White House task force urged members of the Rockefeller coalition to persist in their efforts. “Despite what we might be hearing, there is nothing being done in the administration on testing,” one of them was told on a phone call.
“It was a scary and telling moment,” the participant recounted.
**A BAD GAMBLE**
Despite the Rockefeller Foundation’s round-the-clock work to guide the U.S. to a nationwide testing system essential to reopening, the foundation has not yet been able to bend the most important curve of all: the Trump administration’s determined disinterest in big federal action.
On July 15, in a video call with journalists, Dr. Shah looked visibly frustrated. The next day, the Rockefeller Foundation would be releasing a follow-up report: It called on the federal government to commit $75 billion more to testing and contact tracing, work to break through the testing bottlenecks that had led to days-long delays in the delivery of test results, and vastly increase more rapid point-of-care tests.
Though speaking in a typically mild-mannered tone, Shah delivered a stark warning: “We fear the fall will be worse than the spring.” He added, putting it bluntly: “America is not near the top of countries who have handled COVID-19 effectively.”
Just three days later, news reports revealed that the Trump administration was trying to block any new funding for testing and contact tracing in the new coronavirus relief package being hammered out in Congress. As one member of the Rockefeller coalition said of the administration’s response, “We’re dealing with a schizophrenic organization. Who the hell knows what’s going on? It’s just insanity.”
On Friday, July 31, the U.S. House Select Subcommittee on the Coronavirus, which is investigating the federal response, will hold a hearing to examine the “urgent need” for a comprehensive national plan, at which Dr. Fauci, CDC director **Robert Redfield**, and Admiral Brett Giroir will testify. Among other things, the subcommittee is probing whether the Trump administration sought to suppress testing, in part due to Trump’s claim at his Tulsa, Oklahoma, rally in June that he ordered staff to “slow the testing down.”
The gamble that son-in-law real estate developers, or Morgan Stanley bankers liaising with billionaires, could effectively stand in for a well-coordinated federal response has proven to be dead wrong. Even the smallest of Jared Kushner’s solutions to the pandemic have entangled government agencies in confusion and raised concerns about illegality.
In the three months after the mysterious test kits arrived at the UAE embassy, diplomats there had been prodding the U.S. government to make good on the $52 million shipment. Finally, on June 26, lawyers for the Department of Health and Human Services sent a cable to the embassy, directed to the company which had misspelled its own name on the original invoice: Cogna Technology Solutions LLC.
The cable stated, “HHS is unable to remit payment for the test kits in question, as the Department has not identified any warranted United States contracting officer” or any contract documents involved in the procurement. The cable cited relevant federal contract laws that would make it “unlawful for the Government to pay for the test kits in question.”
But perhaps most relevant for Americans counting on the federal government to mount an effective response to the pandemic and safeguard their health, the test kits didn’t work. As the Health and Human Services cable to the UAE embassy noted: “When the kits were delivered they were tested in accordance with standard procedures and were found to be contaminated and unusable.”
An FDA spokesperson told *Vanity Fair* the tests may have been rendered ineffective because of how they were stored when they were shipped from the Middle East. “The reagents should be kept cold,” the spokesperson said.
Although officials with FEMA and Health and Human Services would not acknowledge that the tests even exist, stating only that there was no official government contract for them, the UAE’s records are clear enough. As a spokesperson for the UAE embassy confirmed, “the US Government made an urgent request for additional COVID-19 test kits from the UAE government. One million test kits were delivered to the US government by April 1. An additional 2.5 million test kits were delivered to the US government by April 20.”
The tests may not have worked, in other words, but Donald Trump would have been pleased at the sheer number of them.
*This article has been updated to include a statement from the White House.*
# Disagreement

*By Bryan Frances and Jonathan Matheson, Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/entries/disagreement/)*

*First published Fri Feb 23, 2018; substantive revision Thu Feb 29, 2024*
We often find ourselves in disagreement with others. You may think nuclear energy is so volatile that no nuclear energy plants should be built anytime soon. But you are aware that there are many people who disagree with you on that very question. You disagree with your sister regarding the location of the piano in your childhood home, with you thinking it was in the primary living area and her thinking it was in the small den. You and many others believe Jesus Christ rose from the dead; millions of others disagree.
It seems that awareness of disagreement can, at least in many cases, supply one with a powerful reason to think that one’s belief is false. When you learned that your sister thought the piano had been in the den instead of the living room, you acquired a good reason to think it really wasn’t in the living room, as you know full well that your sister is a generally intelligent individual, has the appropriate background experience (she lived in the house too), and is about as honest, forthright, and good at remembering events from childhood as you are. If, in the face of all this, you stick with your belief that the piano was in the living room, will your retaining that belief be reasonable?
In the piano case there is probably nothing important riding on the question of what to do in the face of disagreement. But in many cases our disagreements are of great weight, both in the public arena and in our personal lives. You may disagree with your spouse or partner about whether to live together, whether to get married, where you should live, or how to raise your children. People with political power disagree about how to spend enormous amounts of money, or about what laws to pass, or about wars to fight. If only we were better able to resolve our disagreements, we would probably save millions of lives and prevent millions of others from living in poverty.
Disagreement has been put to many tasks in philosophy. In metaethics, disagreements about ethics have been used to motivate anti-realist views. In the philosophy of religion, disagreement has been used to motivate religious pluralism. This article examines the central epistemological issues tied to the recognition of disagreement, the implications that disagreement has for our knowledge and the rationality of our beliefs.
- 1. Disagreement and Belief
- 2. Belief-Disagreement vs. Action-Disagreement
- 3. Response to Disagreement vs. Subsequent Level of Confidence
- 4. Disagreement with Superiors, Inferiors, Peers, and Unknowns
- 5. Peer Disagreements
- 6. Disagreement By the Numbers
- 7. Disagreement and Skepticism
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries
## 1. Disagreement and Belief
To a certain extent, it may seem that there are just three doxastic attitudes to adopt regarding the truth of a claim: believe it’s true, believe it’s false (i.e., disbelieve it), and suspend judgment on it. In the most straightforward sense, two individuals disagree about a proposition when they adopt different doxastic attitudes toward the same proposition (i.e., one believes it and one disbelieves it, or one believes it and one suspends judgment). But of course there are levels of confidence one can have regarding a proposition as well. We may agree that global warming is occurring but you may be much more confident than I am. It can be useful to use ‘disagreement’ to cover any difference in levels of confidence: if \(X\) has one level of confidence regarding belief \(B\)’s truth while \(Y\) has a different level of confidence, then they “disagree” about \(B\)—even if this is a slightly artificial sense of ‘disagree’. These levels of confidence, or degrees of belief, are often represented as point values on a 0–1 scale (inclusive), with larger values indicating greater degrees of confidence that the proposition is true. Even if somewhat artificial, such representations allow for more precision in discussing cases.
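As a toy illustration of this convention, writing \(cr_X(B)\) for \(X\)’s degree of confidence in \(B\) (the particular numbers are made up for illustration):

\[ cr_X(B) = 0.9, \qquad cr_Y(B) = 0.6. \]

On the coarse three-attitude scheme both \(X\) and \(Y\) believe \(B\), yet in the levels-of-confidence sense they ‘disagree’, since their point values differ.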
We are contrasting disagreements about belief from disagreements about matters of taste. Our focus is on disagreements where there is a fact of the matter, or at least the participants are reasonable in believing that there is such a fact.
## 2. Belief-Disagreement vs. Action-Disagreement
Suppose Jop and Dop are college students who are dating. They disagree
about two matters: whether it’s harder to get top grades in
economics classes or philosophy classes, and whether they should move
in together this summer. The first disagreement is over *the truth
of a claim*: is the claim (or belief) ‘It is harder to get
top grades in economics classes compared to philosophy classes’
true or not? The second disagreement is over *an action*:
should we move in together or not (the action = moving in together)?
Call the first kind of disagreement *belief-disagreement*; call
the second kind *action-disagreement*.
The latter is very different from the former. Laksha is a doctor faced with a tough decision regarding one of her patients. She needs to figure out whether it’s best, all things considered, to just continue with the medications she has been prescribing or stop them and go with surgery. She confers closely with some of her colleagues. Some of them say surgery is the way to go, others say she should continue with medications and see what happens, but no one has a firm opinion: all the doctors agree that it’s a close call, all things considered. Laksha realizes that as far as anyone can tell it really is a tie.
In this situation Laksha should probably suspend judgment on each of the two claims ‘Surgery is the best overall option for this patient’ and ‘Medication is the best overall option for this patient’. When asked ‘Which option is best?’ she should suspend judgment.
That’s all well and good, but she still has to *do*
something. She can’t just refuse to treat the patient. Even if
she continues to investigate the case for days and days, in effect she
has made the decision to not do surgery. She has made a choice even if
she dithers.
The point is this: when it comes to *belief-disagreements*,
there are three broad options with respect to a specific claim:
believe it, disbelieve it, and suspend judgment on it. (And of course
there are a great many levels of confidence to take as well.) But when
it comes to *action-disagreements*, there are just two options
with respect to an action \(X\): do \(X\), don’t do \(X\).
Suspending judgment just doesn’t exist when it comes to an
action. Or, to put it a different way, suspending judgment on whether
to do \(X\) does exist but is pretty much the same thing as not doing
\(X\), since in both cases you don’t do \(X\) (Feldman
2006c).
Thus, there are disagreements over *what to believe* and
*what to do*. Despite this distinction, we can achieve some
simplicity and uniformity by construing disagreements over what to do
as disagreements over what to believe. We do it this way: if we
disagree over whether to do action \(X\), we are disagreeing over the
truth of the claim ‘We should do \(X\)’ (or ‘I
should do \(X\)’ or ‘\(X\) is the best thing for us to
do’; no, these aren’t all equivalent). This translation of
action-disagreements into claim-disagreements makes it easy for us to
construe *all* disagreements as disagreements about what to
believe, where the belief may or may not concern an action. Keep in
mind, though, that this “translation” doesn’t mean
that action-disagreements are just like belief-disagreements that
don’t involve actions: the former still requires a choice on
what one is actually going to do.
With those points in mind, we can formulate the primary questions about the epistemology of disagreement.
However, it is worth noting that *agreement* also has
epistemological implications. If learning that a large number and
percentage of your epistemic peers or superiors disagree with you
should probably make you lower your confidence in your belief, then
learning that those same individuals agree with you should probably
make you raise your confidence in your belief—provided they have
greater confidence in it than you did before you found out about their
agreement.
In posing the questions we start with a single individual who realizes that one or more other people disagree/agree with her regarding one of her beliefs. We can formulate the questions with regard to just disagreement or to agreement and disagreement; we also have the choice of focusing on just agreement/disagreement or going with levels of confidence.
Here are the primary epistemological questions for just disagreement and no levels of confidence:
Response Question: Suppose you realize that some people disagree with your belief \(B\). How must you respond to the realization in order for that *response* to be epistemically rational (or perhaps wise)?

Belief Question: Suppose you realize that some people disagree with your belief \(B\). How must you respond to the realization in order for your subsequent *position* on \(B\) to be epistemically rational?
Here are the questions for agreement/disagreement plus levels of conviction:
Response Question*: Suppose you realize that some people have a confidence level in \(B\) that is different from yours. How must you respond to the realization in order for that *response* to be epistemically rational (or perhaps wise)?

Belief Question*: Suppose you realize that some people have a confidence level in \(B\) that is different from yours. How must you respond to the realization in order for your subsequent *position* on \(B\) to be epistemically rational?
## 3. Response to Disagreement vs. Subsequent Level of Confidence
A person can start out with a belief that is irrational, obtain some new relevant evidence concerning that belief, respond to that new evidence in a completely reasonable way, and yet end up with an irrational belief. This fact is particularly important when it comes to posing the central questions regarding the epistemology of disagreement (Christensen 2011).
Suppose Bub’s belief that Japan is a totalitarian state, belief \(J\), is based on a poor reading of the evidence and a raging, irrational bias that rules his views on this topic. He has let his bias stop him from thinking through his evidence properly.
Then he gets some new information: some Japanese police have been caught on film beating government protesters. After hearing this, Bub retains his old confidence level in \(J\).
We take it that when Bub learns about the police, he has not acquired some new information that should make him think ‘Wait a minute; maybe I’m wrong about Japan’. He shouldn’t lose confidence in his belief \(J\) merely because he learned some facts that do not cast any doubt on his belief!
The lesson of this story is this: *Bub’s action of
maintaining his confidence in his belief as a result of his new
knowledge is reasonable even though his retained belief itself is
unreasonable*. Bub’s assessment of the *original*
evidence concerning \(J\) was irrational, but his reaction to the
*new* information was rational; his subsequent belief in \(J\)
was (still) irrational (because although the video gives a little
support to \(J\), it’s not much). The question, ‘Is Bub
being rational after he got his new knowledge?’ has two
reasonable interpretations: ‘Is his retained belief in \(J\)
rational after his acquisition of the new knowledge?’ vs.
‘Is his response to the new knowledge rational?’
On the one hand, “rationality demands” that upon his
acquisition of new knowledge Bub drop his belief \(J\) that Japan is a
totalitarian state: after all, his overall evidence for it is very
weak. On the other hand, “rationality demands” that upon
his acquisition of new knowledge Bub keep his belief \(J\) *given
that that acquisition—which is the only thing that’s
happened to him—gives him no reason to doubt* \(J\). This
situation still might strike you as odd. After all, we’re saying
that Bub is being rational in keeping an irrational belief! But no:
that’s not what we’re saying. The statement ‘Bub is
being rational’ is ambiguous: is it saying that Bub’s
retained belief \(J\) is rational or is it saying that Bub’s
retaining of that belief was rational? The statement can take on
either meaning, and the two meanings end up with different verdicts:
the *retained belief* is irrational but the *retaining*
of the belief is rational. In the first case, a state is being
evaluated, in the second, an action is being evaluated.
Consider a more mundane case. Jack hears a bump in the night and
irrationally thinks there is an intruder in his house (he has long had
three cats and two dogs, so he should know by now that bumps are
usually caused by his pets; further, he has been a house owner long
enough to know full well that old houses like his make all sorts of
odd noises at night, pets or no). Jack has irrational belief \(B\):
there is an intruder upstairs or there is an intruder downstairs. Then
after searching upstairs he learns that there is no intruder upstairs.
Clearly, the reasonable thing for him to do is infer that there is an
intruder downstairs—that’s the epistemically reasonable
*cognitive move* to make in response to the new
information—despite the fact that the new belief ‘There is an
intruder downstairs’ is irrational in an evidential sense.
These two stories show that one’s action of retaining one’s belief—that intellectual action—can be epistemically fine even though the retained belief is not. And, more importantly, we have to distinguish two questions about the acquisition of new information (which need not have anything at all to do with disagreement):
- After you acquire some new information relevant to a certain belief \(B\) of yours, what should your new level of confidence in \(B\) be in order for *your new level of confidence regarding* \(B\) to be rational?
- After you acquire some new information relevant to a certain belief \(B\) of yours, what should your new level of confidence in \(B\) be in order for *your response to the new information* to be rational?
The latter question concerns an *intellectual action* (an
intellectual response to the acquisition of new information), whereas
the former question concerns the *subsequent level of
confidence* itself, the new confidence level you end up with,
which comes about partially as a causal result of the intellectual
action. As we have seen with the Japan and intruder stories, the
epistemic reasonableness of the one is partially independent of that
of the other.
## 4. Disagreement with Superiors, Inferiors, Peers, and Unknowns
A child has belief \(B\) that Hell is a real place located in the
center of the earth. You disagree. This is a case in which you
disagree with someone who you recognize to be your *epistemic
inferior* on the question of whether \(B\) is true. You believe
that Babe Ruth was the greatest baseball player ever. Then you find
out that a sportswriter who has written several books on the history
of baseball disagrees, saying that so-and-so was the greatest ever. In
this case, you realize that you’re disagreeing with an
*epistemic superior* on the matter, since you know that
you’re just an amateur when it comes to baseball. In a third
case, you disagree with your sister regarding the name of the town
your family visited on vacation when you were children. You know from
long experience that your memory is about as reliable as hers on
matters like this one; this is a disagreement with a recognized
*epistemic peer*.
There are several ways to define the terms ‘superior’, ‘inferior’, and ‘peer’ (Elga, 2007; see section 5 below).
You can make judgments about how likely someone is compared to you
when it comes to answering ‘Is belief \(B\) true?’
correctly. If you think she is more likely (e.g., you suppose that the
odds that she will answer it correctly are about 90% whereas your odds
are just around 80%), then you think she is your *likelihood
superior* on that question; if you think she is less likely, then
you think she is your *likelihood inferior* on that question;
if you think she is about equally likely, then you think she is your
*likelihood peer* on that question. Another way to describe
these distinctions is by referencing the epistemic position of the
various parties. One’s epistemic position describes how
well-placed they are, epistemically speaking, with respect to a given
proposition. The better one’s epistemic position, the more
likely one is to be correct.
There are many factors that help determine one’s epistemic position, or how likely one is to answer ‘Is belief \(B\) true?’ correctly. Here are the main ones (Frances 2014):
- cognitive ability had while answering the question
- evidence brought to bear in answering the question
- relevant background knowledge
- time devoted to answering the question
- distractions encountered in answering the question
- relevant biases
- attentiveness when answering the question
- intellectual virtues possessed
Call these *Disagreement Factors*. Presumably, what determines
that \(X\) is more likely than \(Y\) to answer ‘Is \(B\)
true?’ correctly are the differences in the Disagreement Factors
for \(X\) and \(Y\).
For any given case of disagreement between just two people, the odds are that they will not be equivalent on all Disagreement Factors: \(X\) will surpass \(Y\) on some factors and \(Y\) will surpass \(X\) on other factors. If you are convinced that a certain person is clearly lacking compared to you on many Disagreement Factors when it comes to answering the question ‘Is \(B\) true?’ then you’ll probably say that you are more likely than she is to answer the question correctly provided you are not lacking compared to her on other Disagreement Factors. If you are convinced that a certain person definitely surpasses you on many Disagreement Factors when it comes to answering ‘Is \(B\) true?’ then you’ll probably say that you are less likely than she is to answer the question correctly provided you have no advantage over her when it comes to answering ‘Is \(B\) true?’. If you think the two of you differ in Disagreement Factors but the differences do not add up to one person having a net advantage (so you think any differences cancel out), then you’ll think you are peers on that question.
Notice that in this peer case you need not think that the two of you
are equal on each Disagreement Factor. On occasion, a philosopher will
define ‘epistemic peer’ so that \(X\) and \(Y\) are peers
on belief \(B\) if and only if they are equal on *all*
Disagreement Factors. If \(X\) and \(Y\) are equal on all Disagreement
Factors, then they will be equally likely to judge \(B\) correctly,
but the reverse does not hold. Deficiencies of a peer in one area may
be accounted for by advantages in other areas with the final result
being that the two individuals are in an equivalently good epistemic
position despite the existence of some inequalities regarding
particular disagreement factors.
In order to understand the alternative definitions of ‘superior’, ‘inferior’, and ‘peer’, we will look at two cases of disagreement (Frances 2014).
Suppose I believe \(B\), that global warming is happening. Suppose I
also believe \(P\), that Taylor is my peer regarding \(B\) in this
sense: I think we are equally likely to judge \(B\) correctly. I have
this opinion of Taylor because I figure that she knows about as well
as I do the basic facts about expert consensus, she understands and
respects that consensus about as much as I do, and she based her
opinion of \(B\) on those facts. (I know she has some opinion on \(B\)
but I have yet to actually hear her voice it.) Thus, I think she is my
*likelihood peer* on \(B\).
But in another sense I don’t think she is my peer on \(B\).
After all, if someone asked me ‘Suppose you find out later today
that Taylor sincerely thinks \(B\) is false. What do you think are the
odds that you’ll be right and she’ll be wrong about
\(B\)?’ I would reply with ‘Over 95%!’ I would
answer that way because I’m *very* confident in
\(B\)’s truth and if I find out that Taylor disagrees with that
idea, then I will be quite confident that she’s wrong and
I’m right. So in that sense I think I have a *definite
epistemic advantage over her*: given how confident I am in \(B\),
I think that if it turns out we disagree over \(B\), there is a 95%
chance I’m right and she’s wrong. Of course, given that I
think that we are equally likely to judge \(B\) correctly and
I’m very confident in \(B\), I’m also very confident that
she will judge \(B\) to be true; so when I’m asked to think
about the possibility that Taylor thinks \(B\) is false, I think
I’m being asked to consider a very unlikely scenario. But the
important point here is this: if I have the view that if it turns out
that she really thinks \(B\) is false then the odds that I’m
right and she’s wrong are 95%, then in some sense my view is
that she’s not “fully” my peer on \(B\), as I think
that when it comes to the possibility of disagreement I’m very
confident that I will be in the right and she won’t be.
Now consider another case. Suppose Janice and Danny are the same age and take all the same math and science classes through high school. They are both moderately good at math. In fact, they almost always get the same grades in math. On many occasions they come up with different answers for homework problems. As far as they have been able to determine, in those cases 40% of the time Janice has been right, 40% of the time Danny has been right, and 20% of the time they have both been wrong. Suppose they both know this interesting fact about their track records! Now they are in college together. Danny believes, on the basis of their track records, that on the next math problem they happen to disagree about, the probability that Janice’s answer is right equals the probability that his answer is right—unless there is some reason to think one of them has some advantage in this particular case (e.g., Danny has had a lot more time to work on it, or some other significant discrepancy in Disagreement Factors). Suppose further that on the next typical math problem they work on Danny thinks that neither of them has any advantage over the other this time around. And then Danny finds out that Janice got an answer different from his.
In this math case Danny first comes to think that \(B\) (his answer) is true. But he also thinks that if he were to discover that Janice thinks \(B\) is false, the probability that he is right and Janice is wrong is equal to the probability that he is wrong and Janice is right. That’s very different from the global warming case in which I thought that if I were to discover that Taylor thinks \(B\) is false, the probability that I’m right and she’s wrong is 19 times the probability that I’m wrong and she’s right (95% is 19 times 5%).
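Put compactly, using the numbers already given in the two cases:

\[ \frac{P(\text{I am right} \mid \text{Taylor disagrees})}{P(\text{Taylor is right} \mid \text{Taylor disagrees})} = \frac{0.95}{0.05} = 19, \qquad \frac{P(\text{Danny is right} \mid \text{they disagree})}{P(\text{Janice is right} \mid \text{they disagree})} = \frac{0.40}{0.40} = 1. \]

The 19-to-1 ratio is what makes Taylor less than fully my peer in the face of possible disagreement, while the 1-to-1 ratio is what makes Janice Danny’s peer in that same respect; the next paragraph gives this second notion a name.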
Let’s say that I think you’re my *conditional peer*
on \(B\) if and only if before I find out your view on \(B\) but after
I have come to believe \(B\) I think that *if* it turns out
that you disbelieve \(B\), *then* the chance that I’m
right about \(B\) is equal to the chance that you’re right about
\(B\). So although I think Taylor is my likelihood peer on the global
warming belief, I don’t think she is my conditional peer on that
belief. I think she is my conditional inferior on that matter. But in
the math case Danny thinks Janice is his likelihood peer *and*
his conditional peer on the relevant belief.
So, central to answering the Response Question and the Belief Question is the following:
Better Position Question: Are the people who disagree with \(B\) in a *better epistemic position* to correctly judge the truth-value of the belief than the people who agree with \(B\)?
Put in terms of levels of confidence we get the following:
Better Position Question*: Are the people who have a confidence level in \(B\) that is different from yours in a *better epistemic position* to correctly judge the truth-value of the belief than the people who have the same confidence level as yours?
The Better Position Question is often not very easy to answer. For the
majority of cases of disagreement, with \(X\) realizing she disagrees
with \(Y\), \(X\) will not have much evidence to think \(Y\) is her
peer, superior, or inferior when it comes to correctly judging \(B\).
For instance, if I am discussing with a neighbor whether our property
taxes will be increasing next year, and I discover that she disagrees
with me, I may have very little idea how we measure up on the
Disagreement Factors. I may know that I have more raw intelligence
than she has, but I probably have no idea how much she knows about
local politics, how much she has thought about the issue before, etc.
I will have little basis for thinking I’m her superior,
inferior, or peer. We can call these the *unknown* cases. Thus,
when you discover that you disagree with someone over \(B\), you need
not think, or have reason to think, that she is your peer, your
superior, or your inferior when it comes to judging \(B\).
A related question is whether there is any important difference between cases where you are justified in believing your interlocutor is your peer and cases where you may be justified in believing that your interlocutor is not your peer but lack any reason to think that you, or your interlocutor, are in the better epistemic position. Peerhood is rare, if not entirely a fictional idealization, yet in many real-world cases of disagreement we are not justified in making a judgment regarding which party is better positioned to answer the question at hand. The question here is whether different answers to the Response Question and the Belief Question are to be given in these two cases. Plausibly, the answer is no. An analogy may help. It is quite rare for two people to have the very same weight. So for any two people it is quite unlikely that they are ‘weight peers’. That said, in many cases it may be entirely unclear which party weighs more than the other party, even if they agree that it is unreasonable to believe they weigh the exact same amount. Rational decisions about what to do where the weight of the party matters do not seem to differ in cases where there are ‘weight peers’ and cases where the parties simply lack a good reason to believe either party weighs more. Similarly, it seems that the answers to the Response Question and the Belief Question will not differ in cases of peer disagreement and cases where the parties simply lack any good reason to believe that either party is epistemically better positioned on the matter.
Another challenge in answering the Better Position Question occurs when you are a novice about some topic and you are trying to determine who the experts on the topic are. This is what Goldman terms the ‘novice/expert problem’ (Goldman 2001). While novices ought to turn to experts for intellectual guidance, a novice in some domain seems ill-equipped to even determine who the experts in that domain are. Hardwig (1985, 1991) claims that such novice reliance on an expert must necessarily be blind, and thus exhibit an unjustified trust. In contrast, Goldman explores five potential evidential sources for reasonably determining someone to be an expert in a domain:
- Arguments presented by the contending experts to support their own views and critique their rivals’ views.
- Agreement from additional putative experts on one side or other of the subject in question.
- Appraisals by “meta-experts” of the experts’ expertise (including appraisals reflected in formal credentials earned by the experts).
- Evidence of the experts’ interests and biases vis-a-vis the question at issue.
- Evidence of the experts’ past “track-records”. (Goldman 2001, 93.)
The vast majority of the literature on the epistemic significance of disagreement, however, concerns recognized peer disagreement (for disagreement with superiors, see Frances 2013). We turn now to this issue.
## 5. Peer Disagreements
Before we begin our discussion of peer disagreements it is important
to set aside a number of cases. Epistemic peers with respect to \(P\)
are in an equally good epistemic position with respect to \(P\). Peers
about \(P\) can both be in a very good epistemic position with respect
to \(P\), or they could both be in a particularly bad epistemic
position with respect to \(P\). Put differently, two fools could be
peers. However, disagreement between fool peers has not been of
particular epistemic interest in the literature. The literature on
peer disagreement has instead focused on disagreement between
competent epistemic peers, where competent peers with respect to \(P\)
are in a good epistemic position with respect to \(P\)—they are
likely to be correct about \(P\). Our discussion of peer disagreement
will be restricted to *competent* peer disagreement. In the
literature on peer disagreements, four main views have emerged: the
Equal Weight View, the Steadfast View, the Justificationist View, and
the Total Evidence View.
### 5.1 The Equal Weight View
The Equal Weight View is perhaps the most prominently discussed view on the epistemic significance of disagreement. Competitor views of peer disagreements are best understood as a rejection of various aspects of the Equal Weight View, so it is a fitting place to begin our examination. As we see it, the Equal Weight View is a combination of three claims:
Defeat: Learning that a peer disagrees with you about \(P\) gives you a reason to believe you are mistaken about \(P\).
Equal Weight: The reason to think you are mistaken about \(P\) coming from your peer’s opinion about \(P\) is just as strong as the reason to think you are correct about \(P\) coming from your opinion about \(P\).
Independence: Reasons to discount your peer’s opinion about \(P\) must be independent of the disagreement itself.
Defenses of the Equal Weight View in varying degrees can be found in Bogardus 2009, Christensen 2007, Elga 2007, Feldman 2006, and Matheson 2015a. Perhaps the best way to understand the Equal Weight View comes from exploring the motivation that has been given for the view. We can distinguish between three broad kinds of support that have been given for the view: examining central cases, theoretical considerations, and the use of analogies. The central case that has been used to motivate the Equal Weight View is Christensen’s Restaurant Check Case.
The Restaurant Check Case. Suppose that five of us go out to dinner. It’s time to pay the check, so the question we’re interested in is how much we each owe. We can all see the bill total clearly, we all agree to give a 20 percent tip, and we further agree to split the whole cost evenly, not worrying over who asked for imported water, or skipped dessert, or drank more of the wine. I do the math in my head and become highly confident that our shares are $43 each. Meanwhile, my friend does the math in her head and becomes highly confident that our shares are $45 each. (Christensen 2007, 193.)
Understood as a case of peer disagreement, where the friends have a track record of being equally good at such calculation, and where neither party has a reason to believe that on this occasion either party is especially sharp or dull, Christensen claims that upon learning of the disagreement regarding the shares he should become significantly less confident that the shares are $43 and significantly more confident that they are $45. In fact, he claims that these competitor propositions ought to be given roughly equal credence.
The Restaurant Check Case supports *Defeat* since in learning
of his peer’s belief, Christensen becomes less justified in his
belief. His decrease in justification is seen by the fact that he must
lower his confidence to be in a justified position on the issue.
Learning of the disagreement gives him reason to revise and an
opportunity for epistemic improvement. Further, the Restaurant Check
Case supports *Equal Weight*, since the reason Christensen
gains to believe he is mistaken is quite strong. Since he should be
equally confident that the shares are $45 as that they are $43, his
reasons equally support these claims. Giving the peer opinions equal
weight has typically been understood to require ‘splitting the
difference’ between the peer opinions, at least when the two
peer opinions exhaust one’s evidence about the opinions on the
matter. Splitting the difference is a kind of doxastic compromise that
calls for the peers to meet in the middle. So, if one peer believes
\(P\) and one peer disbelieves \(P\), giving the peer opinions equal
weight would call for each peer to suspend judgment about \(P\).
Applied to the richer doxastic picture that includes degrees of
belief, if one peer has a 0.7 degree of belief that \(P\) and the other
has a 0.3 degree of belief that \(P\), giving the peer opinions equal
weight will call for each peer to adopt a 0.5 degree of belief that
\(P\). It is important to note that what gets ‘split’ is
the peer attitudes, not the content of the relevant propositions. For
instance, in the Restaurant Check Case, splitting the difference does
not require believing that the shares are $44. Perhaps it is obvious
that the shares are not an even amount. Splitting the difference is
only with respect to the disparate doxastic attitudes concerning any
one proposition (the disputed target proposition). The content of the
propositions believed by the parties is not where the compromise
occurs. Finally, the Restaurant Check Case supports
*Independence*. The reasons that Christensen could have to
discount his peer’s belief about the shares could include that
he had a little too much to drink tonight, that he is especially
tired, that Christensen double checked but his friend didn’t,
etc., but could not include that the shares actually are $43, that
Christensen disagrees, etc.
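As a simple illustration, using the degrees of belief just mentioned (the labels \(c_{\text{me}}\) and \(c_{\text{peer}}\) are merely shorthand for the two credences), giving the peer opinions equal weight amounts to taking the midpoint of the two credences:

\[
c_{\text{split}} = \frac{c_{\text{me}} + c_{\text{peer}}}{2} = \frac{0.7 + 0.3}{2} = 0.5.
\]

The same midpoint operation, applied to the coarse-grained attitudes of belief and disbelief, yields suspension of judgment.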
Theoretical support for the Equal Weight View comes from first
thinking about ordinary cases of testimony. Learning that a reliable
inquirer has come to believe a proposition gives you a reason to
believe that proposition as well. The existence of such a reason does
not seem to depend upon whether you already have a belief about that
proposition. Such testimonial evidence is some evidence to believe the
proposition regardless of whether you agree, disagree, or have never
considered the proposition. This helps motivate *Defeat*, since
a reason to believe the proposition when you disbelieve it amounts to
a reason to believe that you have made a mistake regarding that
proposition. Similar considerations apply to more fine-grained degrees
of confidence. Testimonial evidence that a reliable inquirer has
adopted a 0.8 degree of belief that \(P\) gives you a reason to adopt
a 0.8 degree of belief toward \(P\), and this seems to hold regardless
of whether you already have a level of confidence that \(P\).
*Equal Weight* is also motivated by considerations regarding
testimonial evidence. The weight of a piece of testimonial evidence is
proportional to the epistemic position of the testifier (or what the
hearer’s evidence supports about the epistemic position of the
testifier). So, if you have reason to believe that Jai’s
epistemic position with respect to \(P\) is inferior to Mai’s,
then discovering that Jai believes \(P\) will be a weaker reason to
believe \(P\) than discovering that Mai believes \(P\). However, in
cases of peer disagreement, both parties are in an equally good
epistemic position, so it would follow that their opinions on the
matter should be given equal weight.
Finally, *Independence* has been theoretically motivated by
examining what kind of reasoning its denial would permit. In
particular, a denial of independence has been thought to permit a
problematic kind of question-begging by allowing one to use
one’s own reasoning to come to the conclusion that their peer is
mistaken. Something seems wrong with the following line of reasoning,
“My peer believes not-\(P\), but I concluded \(P\), so my peer
is wrong” or “I thought \(S\) was my peer, but \(S\)
thinks not-\(P\), and I think \(P\), so \(S\) is not my peer after
all” (see Christensen 2011). Independence forbids both of these
ways of blocking the reason to believe that you are mistaken from the
discovery of the disagreement.
The Equal Weight View has also been motivated by way of analogies. Of particular prominence are analogies to thermometers. Thermometers take in pieces of information as inputs and give certain temperature verdicts as outputs. Humans are a kind of cognitive machine that takes in various kinds of information as inputs and gives doxastic attitudes as outputs. In this way, humans and thermometers are analogous. Support for the Equal Weight View has come from examining what it would be rational to believe in a case of peer thermometer disagreement. Suppose that you and I know we have equally reliable thermometers, and while investigating the temperature of the room we are in, we discover that our thermometers give different outputs (yours reads ‘75’ and mine reads ‘72’). What is it rational for us to believe about the room temperature? It seems it would be irrational for me to continue believing it was 72 simply because that was the output of the thermometer that I was holding. Similarly, it seems irrational for me to believe that your thermometer is malfunctioning simply because my thermometer gave a different output. It seems that I would need some information independent from this ‘disagreement’ to discount your thermometer. So, it appears that I have been given a reason to believe that the room’s temperature is not 72 by learning of your thermometer, that this reason is as strong as my reason to believe it is 72, and that this reason is only defeated by independent considerations. If the analogy holds, then we have reason to accept each of the three theses of the Equal Weight View.
The Equal Weight View is not the only game in town when it comes to the epistemic significance of disagreement. In what follows we will examine the competitor views of disagreement highlighting where and why they depart from the Equal Weight View.
### 5.2 Steadfast Views
On the spectrum of views on the epistemic significance of disagreement, the Equal Weight View and the Steadfast View lie on opposite ends. While the Equal Weight View is quite conciliatory, the Steadfast View maintains that sticking to one’s guns in a case of peer disagreement can be rational. That is, discovering a peer disagreement does not mandate any doxastic change. While the Equal Weight View may be seen to emphasize intellectual humility, the Steadfast View emphasizes having the courage of your convictions. Different motivations for Steadfast Views can be seen to reject distinct aspects of the Equal Weight View. We have organized the various motivations for the Steadfast View according to which aspect of the Equal Weight View it (at least primarily) rejects.
#### 5.2.1 Denying Defeat
*Defeat* has been rejected by defenders of the Steadfast View
in a number of ways. First, *Defeat* has been denied with an
appeal to private evidence. Peter van Inwagen 1996 has defended the
Steadfast View by maintaining that in cases of peer disagreement one
can appeal to having an incommunicable insight or special evidence
that the other party lacks. The basic idea is that if I have access to
a special body of evidence that my peer lacks access to, then
realizing that my peer disagrees with me need not give me a reason to
think that I’ve made any mistake. After all, my peer
doesn’t have everything that I have to work with regarding an
evaluation of \(P\) and it can be reasonable to think that if the peer
were to be aware of everything that I am aware of, she would also
share my opinion on the matter. Further, some evidence is undoubtedly
private. While I can tell my peer about my intuitions or my
experiences, I cannot give him my intuitions or experiences. Given our
limitations, peers can never fully share their evidence. However, if
the evidence isn’t fully shared, then my peer evaluating his
evidence one way needn’t show that I have mis-evaluated my
evidence. Our evidence is importantly different. While van
Inwagen’s claims may entail that the two disagreeing parties are
not actually peers due to their evidential differences, these
considerations may be used to resist *Defeat*, at least on looser
conceptions of peerhood that do not require evidential equality.
A related argument is made by Huemer 2011, who argues for an
agent-centered account of evidence. On this account, an experience
being *your own* evidentially counts for more than someone
else’s experience. So, with this conception of evidence in hand
there will be an important evidential asymmetry even in cases where
both parties share all their evidence.
Defenders of the Equal Weight View have noted that these considerations cut both ways (see Feldman 2006). For instance, while you may not be able to fully share your evidence with your peer, these same considerations motivate that your peer similarly cannot fully share his or her evidence with you. So, the symmetry that motivated the Equal Weight View may still obtain since both parties have private evidence. A relevant asymmetry only obtains if one has special reason to believe that their body of private evidence is privileged over their peer’s, and the mere fact that it is one’s own would not do this. Feldman’s Dean on the Quad case can also help make this clear.
Dean on the Quad. Suppose you and I are standing by the window looking out on the quad. We think we have comparable vision and we know each other to be honest. I seem to see what looks to me like the dean standing out in the middle of the quad. (Assume that this is not something odd. He’s out there a fair amount.) I believe that the dean is standing on the quad. Meanwhile, you seem to see nothing of the kind there. You think that no one, and thus not the dean, is standing in the middle of the quad. We disagree. Prior to our saying anything, each of us believes reasonably. Then I say something about the dean’s being on the quad, and we find out about our situation. (2007, 207–208.)
Feldman takes this case to be one where both parties should significantly conciliate even though it is clear that both possess private evidence. While both parties can report about their experience, neither party can give their experience to the other. The experiential evidence possessed by each party is private. So, if conciliation is still called for, we have reason to question the significance of private evidence.
Second, *Defeat* has been denied by focusing on how things seem
to the subject. Plantinga 2000a has argued that there is a sense of
justification that is simply doing the best that one can. Plantinga
notes that despite all the controlled variables an important asymmetry
remains even in cases of peer disagreement. In cases where I believe
\(P\) and I discover that my peer disbelieves \(P\), often \(P\) will
continue to seem true to me. That is, there is an important
phenomenological difference between the two peers—different
things seem true to them. Plantinga claims that given that we are
fallible epistemic creatures some amount of epistemic risk is
inevitable, and given this, we can do no better than believe in
accordance with what seems true to us. So, applied to cases of peer
disagreement, even upon learning that my peer disbelieves \(P\), so
long as \(P\) continues to seem true to me, it is rational for me to
continue to believe. Any reaction to the disagreement will contain
some epistemic risk, so I might as well go with how things seem to me.
A similar defense of Steadfast Views which emphasizes the
phenomenology of the subject can be found in Henderson et al.
2017.
While an individual may not be to blame for continuing to believe as things seem to them, defenders of the Equal Weight View have claimed that the notion of epistemic justification at issue here is distinct. Sometimes doing the best one can is insufficient, and while some epistemic risk is inevitable, it does not follow that the options are equally risky. While your belief may still seem true to you even after you have discovered the disagreement, other things that seem true to you are relevant as well. For instance, it will seem to you that your interlocutor is an epistemic peer (that they are in an equally good epistemic position on the matter) and that they disagree with you. Those additional seeming states have epistemological import. In particular, they give you reason to doubt that the truth about the disputed belief is as it seems to you. The mere fact that your belief continues to seem true to you is unable to save its justificatory status. Consider the Müller-Lyer illusion, in which two lines of equal length, \(A\) and \(B\), are flanked by arrowheads pointing in opposite directions.
To most, line \(B\) seems to be longer, but a careful measurement reveals that \(A\) and \(B\) are of equal lengths. Despite knowing of the illusion, however, line \(B\) continues to seem longer to many. Nevertheless, given that it also seems that a reliable measurement indicates that the lines are of equal length, one is not justified in believing that \(B\) is longer, despite it continuing to seem that way. This result holds even when we appreciate our fallibility and the fallibility of measuring instruments. A parallel account appears to apply to cases of peer disagreement. Even if your original belief continues to seem true to you, you have become aware of information that significantly questions that seeming state. Further, we can imagine a scenario where \(P\) seems true to me and I subsequently discover 10,000 peers and superiors on the issue that disagree with me about \(P\). Nevertheless, when I contemplate \(P\), it still seems true to me. In such a case, sticking to my guns about \(P\) seems to be neither doing the best that I can nor the reasonable thing to do.
Third, Defeat has been denied by denying that peer opinions about \(P\) are evidence that pertains to \(P\). Kelly 2005 distinguishes the following three claims:
1. Proposition \(P\) is true.
2. Body of evidence \(E\) is good evidence that \(P\) is true.
3. A competent peer believes \(P\) on the basis of \(E\).
Kelly 2005 argues that while 3 is evidence for 2 it is not evidence for 1. If 3 is not evidence for 1, then in learning 3 (by discovering the peer disagreement) one does not gain any evidence relevant to the disputed proposition. If learning of the peer disagreement doesn’t affect one’s evidence relevant to the disputed proposition, then such a discovery makes no change for which doxastic attitude is justified for the peers to take toward the target proposition. On this view, the discovery of peer disagreement makes no difference for what you should believe about the disputed proposition.
Why think that 3 is not evidence for 1? Kelly 2005 cites several
reasons. First, when people cite their justification for their
beliefs, they do not typically cite things like 3. We typically treat
the fact that someone believes a proposition as *the result of*
the evidence for that proposition, not as *another piece of*
evidence for that proposition. Second, since people form beliefs on
the basis of a body of evidence, to count their belief as yet another
piece of evidence would amount to double-counting that original body
of evidence. On this line of thought, one’s belief that \(P\)
serves as something like a place-holder for the evidence upon which
one formed the belief. So, to count both the belief and the original
evidence would be to double-count the original evidence, and
double-counting is not a legitimate way of counting.
Defenders of the Equal Weight View have responded by claiming that the impropriety in citing one’s own belief as evidence for the proposition believed can be explained in ways that do not require that one’s belief is not in fact evidence. For instance, it could be that conversational maxims would be violated since the fact that one believes the proposition is already understood to be the case by the other party. Alternatively, citing one’s own belief as evidence may exhibit hubris in a way that many would want to avoid. Finally, it seems clear that someone else’s belief that \(P\) can be evidence for \(P\), so denying that the subject’s belief can be evidence for the subject entails a kind of relativity of evidence that some reject. Regarding the double-counting, it has been argued that the fact that a reliable evidential evaluator has evaluated a body of evidence to support a proposition is a new piece of evidence, one that at least enhances the support between the body of evidence and the target proposition. For instance, that a forensic expert evaluates the relevant forensic evidence to support the defendant’s guilt appears to be an additional piece of evidence in favor of the defendant’s guilt, rather than a mere repetition of that initial forensic evidence.
Finally, *Defeat* has been denied by appealing to epistemic
permissiveness. The Equal Weight View, and *Defeat* in
particular, has been thought to rely on the Uniqueness Thesis.
Uniqueness Thesis: For any body of evidence, \(E\), and proposition, \(P\), \(E\) justifies at most one competitor doxastic attitude toward \(P\).
If a body of evidence can only support one doxastic attitude between
belief, disbelief, and suspension of judgment with respect to \(P\),
and two people who share their evidence disagree about \(P\), then one
of them must have an unjustified attitude. So, if the Uniqueness
Thesis is true, there is a straightforward route to *Defeat*.
However, if evidence is permissive, allowing for multiple distinct
justified attitudes toward the same proposition, then discovering that
someone has evaluated your shared evidence differently than you have
need not give you any reason to think that you have made a mistake. If
evidence is permissive, then you may both have justified responses to
the shared evidence even though you disagree. So, another way to
motivate the Steadfast View is to endorse evidential permissiveness.
For reasons to reject or doubt the Uniqueness Thesis, see Ballantyne
and Coffman 2011, Conee 2009, Frances 2014, Goldman 2010, Jackson
2021, Kelly 2010, Kopec 2015, Raleigh 2017, Rosen 2001, Rosa 2012,
Titelbaum forthcoming, and Titelbaum and Kopec 2019.
Defenses of the Equal Weight View either defend the Uniqueness Thesis (see Dogramaci and Horowitz 2016, Greco and Hedden 2016, Matheson 2011, White 2005, White 2013), or argue that the Equal Weight View is not actually committed to evidential uniqueness (see Christensen 2009, Christensen 2016, Cohen 2013, Lee 2003, Levinstein 2017, Peels and Booth 2014, and Henderson et al. 2017).
#### 5.2.2 Denying Equal Weight
The Steadfast View has also been motivated by denying *Equal
Weight*. If your peer’s opinion about \(P\) does not count
for as much as your own opinion, then you may not need to make any
doxastic conciliation. While most find it implausible that your own
opinion can count for more merely because it is your own, a related
and more plausible defense comes from appealing to self-trust. Enoch
2010, Foley 2001, Pasnau 2015, Schafer 2015, Wedgwood 2007; 2010, and
Zagzebski 2012 have all appealed to self-trust in responding to peer
disagreements. Foley emphasizes the essential and ineliminable role of
first-personal reasoning. Applied to cases of disagreement, Foley
claims, “I am entitled to make what I can of the conflict using
the faculties, procedures, and opinions I have confidence in, even if
these faculties, procedures, and opinions are the very ones being
challenged by others” (2001, 79). Similarly, Wedgwood asserts
that it is rational to have a kind of egocentric bias—a
fundamental trust in one’s own faculties and mental states. On
this account, while peer disagreements have a kind of symmetry from
the third-person perspective, neither party occupies that perspective.
Rather, each party to the disagreement has a first-person perspective
from which it is rational to privilege itself. Self-trust is
fundamental and the trust that one must place in one’s own
faculties and states simply cannot be given to another.
Opponents have rejected the epistemic importance of the first-person
perspective (see Bogardus 2013b and Rattan 2014). While the
first-person perspective is ineliminable, it is not infallible.
Further, there are reasons from the first-person perspective to make
doxastic conciliation. It is *my* evidence that supports that my
interlocutor is my peer and *my* evidence about what she
believes which call for doxastic change. So, conciliation can be seen
to be called for from within the first-person perspective. One
needn’t, and indeed cannot, abandon one’s own perspective
in dealing with disagreement. There are also worries concerning what
such an emphasis on self-trust would permit. If self-trust is relevant
in cases of peer disagreement, it is difficult to see how it is not
relevant in cases of novice-expert disagreement. However, most
maintain that when the novice learns that the expert disagrees he
should make some doxastic movement if not completely defer. So,
self-trust cannot be the ultimate deciding factor in all cases of
disagreement.
#### 5.2.3 Denying Independence
Others have rejected the Equal Weight View by denying
*Independence*. According to *Independence*, reasons to
downgrade your peer must be independent of the disagreement itself.
*Independence* was motivated by the need to block certain kinds
of question-begging responses to disagreement; it rules out a
problematic kind of dogmatism. Something seems objectionable about
simply relying on your original reasoning to dismiss someone
else’s view (at least when they are your peer). But what kind of
considerations count as independent? Formulating an independence
principle that gets the cases right is not a straightforward and easy
task. For challenges to Independence and like principles, see
Arsenault and Irving 2012, Brauer 2023, Lord 2013, Moon 2018, Pittard
2019a, and Wagner 2011. Christensen 2019 examines some promising ways
forward for defending such a principle. Two issues arise here. One is
whether one can formulate a plausible, and counterexample-free,
independence principle. Another is whether any problems with
*Independence* also remove the skeptical threat from
disagreement. It could be that any problems with *Independence*
fail to give one adequate reason to remain steadfast in the face of
peer disagreement.
#### 5.2.4 The Right Reasons View
A final motivation for the Steadfast View comes from re-evaluating the evidential support relations in a case of peer disagreement. It will be helpful here to distinguish between two kinds of evidence.
First-Order Evidence: First-order evidence for \(P\) is evidence that directly pertains to \(P\).
Higher-Order Evidence: Higher-order evidence for \(P\) is evidence about one’s evidence for \(P\).
So, the cosmological argument, the teleological argument, and the problem of evil are all items of first-order evidence regarding God’s existence, whereas the fact that a competent evaluator of such evidence finds it to on balance support God’s existence is a piece of higher-order evidence that God exists. That a competent evidential evaluator has evaluated a body of evidence to support a proposition is evidence that the body of evidence in question does in fact support that proposition.
Applied to cases of peer disagreement, the first-order evidence is the evidence directly pertaining to the disputed proposition, and each peer opinion about the disputed proposition is the higher-order evidence (it is evidence that the first-order evidence supports the respective attitudes).
The Right Reasons View is a steadfast view of peer disagreement that emphasizes the role of the shared first-order evidence in peer disagreements. Following Kelly 2005 we can represent the discovery of a peer disagreement as follows:
- At \(t\), my body of evidence consists of \(E\) (the original first-order evidence for \(P\)).
- At \(t'\), having discovered the peer disagreement, my body of evidence consists of the following:
  - (i) \(E\) (the original first-order evidence for \(P\)).
  - (ii) The fact that I am competent and believe \(P\) on the basis of \(E\).
  - (iii) The fact that my peer is competent and believes not-\(P\) on the basis of \(E\).
According to the Right Reasons View, the two pieces of higher-order evidence (ii) and (iii) are to be accorded equal weight. Having weighed (ii) and (iii) equally, they neutralize each other in my total body of evidence at \(t'\). However, with (ii) and (iii) neutralized, I am left with (i) and am justified in believing what (i) supports. The Right Reasons View then notes that what I am justified in believing at \(t\) and what I am justified in believing at \(t'\) is exactly the same. In both cases what I should believe is entirely a matter of what \(E\) supports, so what matters in a case of peer disagreement is what the first-order evidence supports. If I believed in accordance with my evidence at \(t\), then learning of the peer disagreement does nothing to alter what I should believe about \(P\) at \(t'\). Having rightly responded to my reasons at \(t\), nothing epistemically changes regarding what attitude I should have toward \(P\).
This argument for the Right Reasons View has been responded to in several ways. Kelly 2010 has since rejected the argument, claiming that when a greater proportion of one’s evidence supports suspending judgment some conciliation will be called for. Since the higher-order evidence calls for suspending judgment regarding the disputed proposition, there will be a conciliatory push even if the original first order evidence still plays an important role in what attitude is justified. Others have responded to the argument by rejecting Kelly’s original description of the case (see Matheson 2009). If my evidence at \(t\) includes not only the first-order evidence, but also the higher-order evidence about myself (ii), then even if the new piece of higher-order evidence gained at \(t'\), (iii), cancels out (ii) this will still call for some doxastic conciliation from \(t\) to \(t'\). Alternatively, (ii) and (iii) can be seen to together call for a suspension of judgment over whether \(E\) supports \(P\). Some have argued that a justified suspension of judgment over whether your evidence supports \(P\) has it that your total evidence supports a suspension of judgment toward \(P\) (see Feldman 2006 and Matheson 2015a). See Lasonen-Aarnio 2014 for an alternative view of the impact of higher-order evidence.
A more recent defense of the Right Reasons View is found in Titelbaum 2015 (see also Titelbaum 2019). Titelbaum argues for the Fixed Point Thesis – that mistakes about rationality are mistakes of rationality. In other words, it is always a rational mistake to have a false belief about rationality. So, on this view a false belief about what attitude is rational does not ‘trickle down’ to affect the rationality of the lower-level belief. Given this, if an individual’s initial response to the evidence is rational, no amount of misleading higher-order evidence affects the rationality of that belief. A correct response to the first-order evidence remains correct regardless of what higher-order evidence is added.
A remaining problem for the Right Reasons View is its verdicts in paradigm cases of peer disagreement. Many have the strong intuition that conciliation is called for in the Restaurant Check Case regardless of whether you correctly evaluated the first-order evidence.
### 5.3 The Justificationist View
On the spectrum of views of the epistemic significance of disagreement, the Justificationist View lies somewhere in between the Equal Weight View and the Steadfast View. In defending the Justificationist View, Jennifer Lackey agrees with the Equal Weight View’s verdicts in cases like the Restaurant Check Case, but thinks that not all cases should be handled in this way. Along these lines she gives the following:
Elementary Math. Harry and I, who have been colleagues for the past six years, were drinking coffee at Starbucks and trying to determine how many people from our department will be attending the upcoming APA. I, reasoning aloud, say, ‘Well, Mark and Mary are going on Wednesday, and Sam and Stacey are going on Thursday, and since 2+2=4, there will be four other members of our department at that conference.’ In response, Harry asserts, ‘But 2+2 does not equal 4.’ (Lackey 2010a, 283.)
In Elementary Math, Lackey finds it implausible that she should become less confident that 2+2=4, never mind split the difference with her interlocutor and suspend judgment about the matter. In other words, the claim is that the Equal Weight View gives the wrong verdicts in what we might call cases of ‘extreme disagreement’. What justifies treating Elementary Math differently than the Restaurant Check Case? According to Lackey, if prior to discovering the peer disagreement you are highly justified in believing the soon to be disputed proposition, then upon discovering the peer disagreement little to no conciliation is called for. So, since Lackey is highly justified in believing that 2+2=4 prior to talking to her colleague, no conciliation is called for, but since Christensen was not highly justified in believing that the shares are $43 prior to discovering the disagreement, a great deal of conciliation is called for. According to the Justificationist View, one’s antecedent degree of justification determines the rational response to peer disagreement. Strong antecedent justification for believing the target proposition matters since, when coupled with the discovered disagreement, it gives you reason to believe your interlocutor is not your peer after all. In Elementary Math, Lackey should significantly revise her views about her colleague’s epistemic position regarding elementary math. In contrast, the Restaurant Check Case calls for no similar demotion. This difference is explained by the differing degrees of antecedent justification.
Applied to our framework, the Justificationist View denies
*Independence*. In cases where your first-order evidence
strongly supports believing \(P\), this fact can be used to reassess your
interlocutor’s epistemic credentials. *Independence* only
permitted information from ‘outside’ the disagreement to
affect assessment of peerhood credentials, but here, the fact that
your interlocutor disagrees with something you are highly justified in
believing gives you a reason to discount his opinion on the matter.
Lackey defends the legitimacy of such a demotion due to the existence of personal information. In any case of peer disagreement, I will have information about myself that I simply lack (or possess to a lesser extent) regarding my interlocutor. I will always be more aware of my own alertness, sincerity, open-mindedness, and so forth, than I will be of my interlocutor’s. A similar claim is defended in Benjamin 2015. This asymmetry, when coupled with my high antecedent justification for believing the disputed proposition, makes it rational to demote my alleged peer. Since in extreme disagreements one party must be severely malfunctioning, my personal information supports the conclusion that the best explanation of the disagreement is that it is my peer who is malfunctioning.
The Justificationist View has been criticized in several ways. Some deny that high antecedent justification for believing the target proposition can make the relevant difference (see Christensen 2007, Vavova 2014a, 2014b). Consider the following case:
Lucky Lotto. You have a ticket in a million-ticket lottery. Each ticket is printed with three six-digit numbers that, when added, yield the seven-digit number that is entered into the lottery. Given the odds, I am highly justified in believing that your ticket is a loser, but I nevertheless add the numbers on your ticket just for fun. Having added the numbers and comparing the sum to the winning number – no match – I thereby become even more justified in believing that you did not win. Meanwhile, you are adding up your numbers as well, and comparing them to the winning number. You then exclaim ‘I won!’ (Christensen 2007, 200.)
In this case, I have very high antecedent justification for believing that your ticket is not a winner. Nevertheless, upon hearing you exclaim that you won, the rational response is not to downgrade your epistemic credentials. Even high antecedent justification can be defeated by new information.
Others have agreed that personal information can act as a symmetry
breaker, giving the subject some reason to privilege their own view, but
deny that such an advantage would be had in suitably idealized cases
of peer disagreement (Matheson 2015a). The use of personal information
to discount your interlocutor’s opinion would not violate
*Independence*, so the defender of the Equal Weight View
needn’t disagree on this score.
### 5.4 The Total Evidence View
Like the Justificationist View, the Total Evidence View lies somewhere between the Steadfast View and the Equal Weight View. The Total Evidence View claims that in cases of peer disagreement, one is justified in believing what one’s total evidence supports (Kelly 2010, see also Setiya 2012 and Scanlon 2014). While this might sound like something of a truism, central to the view is an additional claim about the relation between first-order evidence and higher-order evidence. Let’s first revisit the Equal Weight View. According to the Equal Weight View, in a peer disagreement where one individual has a 0.7 degree of belief that \(P\) and the other has a 0.3 degree of belief that \(P\), both peers should split the difference and adopt a 0.5 degree of belief that \(P\). On the Equal Weight View, then, the attitude that you are justified in adopting toward the disputed proposition is entirely determined by the higher-order evidence. The justified attitude is the mean between the two peer attitudes, which ignores what their shared first-order evidence supports. According to the Total Evidence View, this is a mistake – the first-order evidence must also factor into what the peers are reasonable in believing. Such an incorporation of the first-order evidence is what leads to the name “Total Evidence View”.
Kelly gives the following case to motivate the view:
Bootstrapping. At time \(t_0\), each of us has access to a substantial, fairly complicated body of evidence. On the whole this evidence tells against hypothesis \(H\): given our evidence, the uniquely rational credence for us to have in \(H\) is 0.3. However, as it happens, both of us badly mistake the import of this evidence: you adopt a 0.7 degree of belief toward \(H\) while I adopt a 0.9 degree of belief. At time \(t_1\), we meet and compare notes and we then split the difference and converge on a 0.8 degree of belief. (Kelly 2010, 125–126.)
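Making the arithmetic explicit: splitting the difference here tracks only the two (badly mistaken) opinions and moves the peers even further from what the evidence is stipulated to support,

\[
\frac{0.7 + 0.9}{2} = 0.8, \qquad \text{whereas the uniquely rational credence given the shared evidence is } 0.3.
\]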
While the Equal Weight View seems to be committed to the peers being
justified in adopting the 0.8 degree of belief in \(H\), Kelly finds
such a consequence implausible. After all, both peers badly misjudged
the first-order evidence! This argument can be seen as an argument
against *Independence*. In these cases, the disputed
first-order evidence can exert an ‘upwards epistemic push’
to mitigate the impact of the higher-order evidence. Kelly takes
Independence on directly with the following case:
Holocaust Denier. I possess a great deal of evidence that the Holocaust occurred, and I judge it to strongly support that hypothesis. Having adopted a high amount of credence that the Holocaust occurred, I encounter an individual who denies that the Holocaust ever occurred (because he is grossly ignorant of the evidence). (Kelly 2013b, 40)
*Independence* claims that my reasons for believing \(P\)
cannot be used to discount my interlocutor’s opinion about
\(P\). Absent those first-order reasons, however, Kelly doubts that
there is much left with which to discount the interlocutor, and the
drastic conciliation that should result without a good reason to
discount his opinion is implausible.
This motivation for the Total Evidence View has been responded to in several different ways. One route of response is to deny Kelly’s assessment of the cases (Matheson 2015a). According to this response, the individuals in Bootstrapping were both presented with powerful, though misleading, higher-order evidence. However, misleading evidence is evidence nevertheless. Given this, it can be argued that the individuals still correctly responded to their total body of evidence. For instance, we can imagine a logician working on a new proof. Suppose that it seems to him that he has successfully completed the proof, yet he nevertheless has made a subtle error rendering the whole thing invalid. In such a case, the logician has significantly mis-evaluated his first-order evidence, yet he has strong higher-order evidence that he is good at things like this. Suppose he then shows his work to a capable colleague who also maintains that the proof is successful. In this case, it may seem that it is rational for the logician to believe that the proof is successful, and perhaps be quite confident, even though this conclusion is significantly different from what the first-order evidence supports. According to this rejoinder, the call to split the difference is best seen as addressing the Belief Question.
A second route of response is to emphasize the distinction between the Response Question and the Belief Question. According to this response, while there may be something epistemically defective about the final doxastic states of the individuals in Bootstrapping, they nevertheless had the rational response to the higher-order evidence (Christensen 2011). The fact that they each misjudged the original evidence is an epistemic flaw that carries over to their final doxastic attitude, but on this line of thinking the doxastic response that each party made upon comparing notes was nevertheless rational. According to this rejoinder, the call to split the difference is best seen as addressing the Response Question.
### 5.5 Other Issues
Other objections to the Equal Weight View are not tied to any other particular view of disagreement, and some apply to more than just the Equal Weight View. In this section we briefly examine some of these objections.
#### 5.5.1 Self-Defeat
A prominent objection to the Equal Weight View and other views that prescribe doxastic conciliation is that such views are self-defeating. For expressions of this objection, see Elga 2010, Frances 2010, Mulligan 2015, O’Connor 1999, Plantinga 2000a and 2000b, Taliaferro 2009, Weatherson 2014, and Weintraub 2013. For responses, see Bogardus 2009, Christensen 2009, Elga 2010, Fleisher 2021b, Graves 2013, Kornblith 2013, Littlejohn 2013, Matheson 2015b, and Pittard 2015. In brief, there is disagreement about the epistemic significance of disagreement itself, so any view that calls for conciliation upon the discovery of disagreement can have it that it calls for its own rejection. For instance, a defender of the Equal Weight View could become aware of enough individuals that are suitably epistemically well-positioned on the epistemology of disagreement that nevertheless deny that the Equal Weight View is correct. Following the prescriptions of the Equal Weight View would require this defender to abandon the view, and perhaps even accept a competitor account. For these reasons, Plantinga 2000a has claimed that such views are, ‘self-referentially inconsistent’ (522), and Elga 2010 has claimed that such views are ‘incoherent’ and ‘self-undermining’ (179). Such a worry seems to apply to the Equal Weight View, the Justificationist View, and the Total Evidence View. Since all three views prescribe conciliation in at least some cases, they are all (at least in principle) subject to such a result. A similar worry seems to arise for disagreements about whether another individual is a peer at all (see Gausselin forthcoming).
Defenders of these conciliatory views have responded in a number of ways. First, some emphasize that the way in which these views are self-defeating is not a way that shows these views to be false, or incapable of being true. ‘No true sentences have more than 5 words’ may also be said to be self-defeating, but this is a different kind of defeat. At its worst, the consequence here for conciliatory views is that, given certain contingent circumstances, they cannot be reasonably believed, but such an inability to be reasonably believed does not demonstrate their falsity. Further, a skeptical attitude toward the epistemic significance of disagreement seems to fit the spirit of these views quite well (more on this below).
Another way such a consequence has been downplayed is by comparing it to other principles that share the same result. Along these lines, Christensen gives the following:
Minimal Humility. If I have thought casually about \(P\) for 10 minutes, and have decided it is correct, and then find out that 1000 people, most of them much smarter and more familiar with the relevant evidence and arguments than I, have thought long and hard about \(P\), and have independently but unanimously decided that \(P\) is false, I am not justified in believing \(P\). In fact, I am justified in disbelieving \(P\). (2009, 763.)
The principle of Minimal Humility is quite plausible, yet there are contingent circumstances under which it calls for its own rejection too. If such a consequence is untenable, then it would call for the rejection of principles beyond those endorsed by the Equal Weight View, the Justificationist View, and the Total Evidence View.
A final response argues that these principles about disagreement are themselves exempt from their conciliatory prescriptions. So, correctly understood, these principles call for conciliation in ordinary disagreements, but prescribe remaining steadfast in disagreements about disagreements. So on this view, the true principles are not self-defeating. Several philosophers have endorsed such a response to the self-defeat worry. Bogardus (2009) argues that we can ‘just see’ that conciliatory principles are true and this prevents them from being self-undermining. Elga (2010) argues that conciliatory views, properly understood, are self-exempting since fundamental principles must be dogmatic about their own correctness. Pittard 2015 argues that remaining resolute in conciliationism is no more non-deferential than being conciliatory about conciliationism. The reasoning here is that to conciliate about one’s conciliatory principles would be deferential about one’s belief or credence, but steadfast about one’s reasoning. So, once we appreciate the distinct levels of belief/credence and reasoning, either response to a disagreement about the significance of disagreement will require being steadfast at one level. This, argues Pittard, makes remaining steadfast about conciliationism unproblematic.
While such responses would avoid the self-defeat charge, some see them as guilty of arbitrariness (see Pittard 2015, Blessenohl 2015).
#### 5.5.2 Formal issues
A further set of issues regarding the Equal Weight View comes from considerations within formal epistemology. Fitelson and Jehle 2009 argue that there are difficulties in making precise the Equal Weight View along Bayesian lines. In particular, they argue that the most intuitive understandings of the Equal Weight View have untenable consequences. Gardiner 2014 and Wilson 2010 each raise an objection that the Equal Weight View (at least as typically understood) violates the principle of commutativity of evidence. If we imagine an individual encountering a number of disagreeing peers sequentially, then which doxastic attitude is reasonable for the peer will depend upon the order in which the peers are confronted. However, the principle of commutativity of evidence claims that the order of evidential acquisition should not make such a difference. Lasonen-Aarnio (2013) sets up a trilemma for the Equal Weight View arguing that either (i) it violates intuitively correct updates, (ii) it places implausible restrictions on priors, or (iii) it is non-substantive. This has led to a burgeoning literature on higher-order evidence and defeat (see Horowitz 2022 and Skipper and Steglich-Petersen 2019).
#### 5.5.3 Actual Disagreement and Possible Disagreement
Another issue concerns which disagreements are of epistemic significance. While actual peer disagreement is rare, if not non-existent (see below), merely possible peer disagreement is everywhere. For any belief you have, it is possible that an epistemic peer of yours disagrees. Since we are fallible epistemic agents, possible peer disagreement is inevitable. One challenge is to distinguish the epistemic significance of actual peer disagreement from the significance of merely possible peer disagreement. Kelly 2005 first raises this challenge. After all, whether this possible disagreeing peer actually exists is a contingent and fragile matter, so to only care about it may be to exhibit an ‘actual world chauvinism’. (This term comes from Carey 2011.)
Christensen 2007 responds to this challenge by noting that while merely possible disagreement only shows that we are fallible, actual disagreement demonstrates that someone has in fact made a mistake. Since we are already aware that we are fallible epistemic agents, thinking about possible peer disagreements does not add any information that calls for (further) doxastic change. In contrast, discovering an actual peer disagreement gives us information that we lacked. In a case of peer disagreement, one of the parties has made a mistake. While the possibility of error does not demand belief revision, an increase in the probability of having made an error does.
A further question is whether actual peer disagreements are the only peer disagreements with epistemic significance. For instance, suppose that you have created an argument that you find sound in the solitude of your office. When thinking about what your (peer) colleague would think, suppose that you reasonably conclude that she would disagree about the merits of your argument. If such a conclusion is reasonable for you, then it seems that this fact should have some epistemic consequences for you despite the fact that there is not (at least as of yet) any actual disagreement. Arguably, such a merely possible disagreement even has the same epistemic significance as an actual disagreement (see Carey and Matheson 2013). Similarly, if an evil tyrant believes \(P\) and then chooses to eliminate all disagreeing peers who believe not-\(P\), he would not thereby become justified in his previously contentious belief (Kelly 2005). A challenge is to pick out which merely possible disagreements are epistemically significant, since at the risk of global skepticism, clearly not all are (Barnett and Li 2017). Issues surrounding counterfactual disagreement are also examined in Ballantyne 2013b, Bogardus 2016, and Mogensen 2016.
#### 5.5.4 Deep Disagreements
One potential special case of disagreement concerns deep disagreements. Deep disagreements are a kind of fundamental disagreement. They are disagreements over fundamental principles, worldviews, or perspectives. What makes these disagreements fundamental, is that there is nothing ‘further down’, or more basic, to appeal to in order to resolve the disagreement. When a disagreement is deep, the disagreement isn’t just about some matter of fact like whether climate change is happening or whether vaccines are effective. Deep disagreements involve disagreements about what kinds of considerations are even relevant to the dispute; they involve disagreements about what is evidence for what. For instance, disagreements about what sources are trustworthy (news outlets, governments, senses, etc.) can be deep disagreements. Other examples of deep disagreements include disagreements concerning fundamental epistemic principles, or hinge commitments.
Deep disagreements have been thought to raise some unique issues. One central question here is whether deep disagreements are rationally resolvable. In cases of ‘ordinary’ disagreement, there is at least hope of finding some common ground to appeal to in order to rationally persuade the other party. However, when the disagreement is deep, there is no such dispute independent common ground to appeal to. This leads some to maintain that deep disagreements are not rationally resolvable (Fogelin 2005, Ranalli 2020, Pritchard 2018, and Lynch 2016). In contrast, others maintain that deep disagreements are rationally resolvable in the same way that more ordinary disagreements are. While at least one party to a deep disagreement might not recognize the relevant reasons as reasons, this does not prevent those reasons from dictating what the parties should believe (Feldman 2005, Matheson 2018).
Another related issue is whether epistemic peers can be engaged in a deep disagreement (see Kappel 2021). After all, the kinds of epistemic credentials relevant to peerhood, and one’s epistemic position, are precisely the kinds of issues that are under dispute in a deep disagreement. This fact, however, may simply prevent the parties to a deep disagreement from (rationally) recognizing each other as epistemic peers (see Feldman 2006, Hazlett 2014, and Ranalli and Lagewaard 2022a).
#### 5.5.5 Irrelevance of Peer Disagreement
A final issue concerns peer disagreement itself. As some have noted, epistemic peers are extremely rare, if not non-existent (Frances 2010, 2014; King 2011; Matheson 2014). After all, what are the odds that someone else is in precisely as good an epistemic position as you on some matter—and even if she were, would you know it? As we have seen, there are a number of disagreement factors, and the odds that they end in a tie between any two individuals at any given time are quite slim. The paucity of peers may be taken to show that the debate about the epistemic significance of peer disagreement is a futile exercise in extreme hypotheticals. After all, if you have no epistemic peers that disagree with you, doesn’t the epistemic threat from disagreement dissolve? Further, there may seem to have been a deceptive shift in the debate. Much of the puzzle of disagreement is motivated by messy real-world cases of disagreement, but the vast majority of the literature is focused on idealized cases of disagreement that rarely, if ever, occur.
There are several reasons to think about the significance of peer disagreement beyond its intrinsic appeal. First, considering the idealized cases of peer disagreement helps to isolate the epistemic significance of the disagreement itself. By controlling for other epistemic factors, cases of peer disagreement help us focus on what epistemic effects discovered disagreement has. While in non-idealized cases this is but one factor in determining what to believe, the debate about peer disagreements attempts to help us better understand this one factor. Second, while peers may be quite rare, as we have noted above, it is often not clear which party is in the better epistemic position. For instance, while it is quite rare for two individuals to be the exact same weight, it can often be unclear which individual weighs more. These unknown cases may have the same epistemic significance as peer cases. If what is needed is a positive reason to privilege one’s own view, as opposed to positive reasons to think that the other is a peer, then unknown cases should be treated like peer cases.
In what follows we turn to examining the epistemic significance of disagreement outside of these idealized cases of peer disagreement.
## 6. Disagreement By the Numbers
Many disagreements are one-on-one: one person disagrees with another person and as far as they know they are the only two who have any opinion on the matter. Lisa thinks that she and Marie should move in together; then Lisa discovers that Marie has the opposite opinion. Bob and his sister Teri disagree about whether their father had an affair when they were children. In this case they know that others have the answer—their father, for one—but for various reasons the opinions of others are not accessible.
Many other disagreements involve just a few people. Bob, Rob, Hob, and Gob work in a small hotel and are wondering whether to ask for raises in their hourly pay rate. After discussion Bob thinks they should, Rob and Hob think they shouldn’t, and Gob is undecided. When Bob learns all this about his three colleagues, what should his doxastic reaction be to this mixed bag of agreement and disagreement?
However, when it comes to many of your beliefs, including some of the
most interesting ones, you are fully aware that *millions* of
people disagree with you and *millions* of other people agree
with you. Just consider a belief about religion—just about any
belief at all, pro or con. You must have *some* views on
controversial matters; virtually every human does. Moreover,
you’re perfectly aware that they are controversial. For the most
part, it’s not as though you believe \(B\), \(B\) happens to be
controversial, but you had no idea it was controversial.
Moreover, when it comes to these controversial beliefs that large numbers of people have taken positions on, it’s often the case that there are experts on the matter. In many cases the experts have a definite opinion: global warming is happening and the earth is many millions of years old. Other times they don’t: for instance, whether electrons and quarks come from “strings”.
If the numbers matter, then disagreement poses a skeptical threat for nearly every view of the significance of peer disagreement. The skeptical threat for conciliatory views (the Equal Weight View, the Justificationist View, and the Total Evidence View) is pretty straightforward. On the Equal Weight View, since for many controversial beliefs we are not justified in believing that the weighing of opinions favors our own opinion on the matter, the reasons for thinking that we are mistaken outweigh our reasons for thinking we are correct. The added resources of the Justificationist View and the Total Evidence View also do not seem to help in resisting the skeptical conclusion. For many controversial views we lack the strong first-order evidence and high antecedent justification that these views utilize to mitigate the call to conciliate. Further, while appeals to personal information may be good symmetry-breakers in cases of one-to-one disagreement, when the numbers of disagreeing parties are much larger, the effectiveness of such appeals radically diminishes. Similar considerations apply to most Steadfast Views. Most defenses of Steadfast Views attempt to find a symmetry-breaker in the peer-to-peer disagreement that allows one to privilege one’s own belief. For instance, even if self-trust or private evidence can give one a reason to privilege one’s own belief, such a symmetry-breaker is seemingly not up to the task when the belief in question is a minority view. Given that most controversial beliefs in science, religion, politics, and philosophy are minority views, it appears that even if many Steadfast Views of peer disagreement are correct, they still face a skeptical challenge regarding disagreement more generally. The notable exception here is the Right Reasons View. Since according to the Right Reasons View, what one is justified in believing is entirely determined by the first-order evidence, no amount of discovered disagreement would change which controversial beliefs are rational. While the Right Reasons View may be safe from such skeptical concerns, such safety only comes by way of what many see as the feature that makes it implausible. For instance, the Right Reasons View has it that you can be justified in believing \(p\) even when you are aware that every one of your peers and superiors believes not-\(p\). While this avoids the more general skeptical threat, many see this as too high a price.
Another issue concerning how the numbers matter regards the independence of the relevant opinions. Our beliefs are shaped by a number of factors, and not all of them are epistemically relevant. Certain religious beliefs, political beliefs, and even philosophical beliefs are correlated with growing up in particular regions or going to certain schools. For this reason, it may be thought that the agreement of individuals who came to their opinions on a matter independently counts for more, epistemically speaking, than the agreement of individuals with a greater shared background. For more on this issue, see Carey & Matheson 2013, Goldman 2001, and Lackey 2013b.
## 7. Disagreement and Skepticism
So, the phenomenon of disagreement supplies a *skeptical
threat* for many of our cherished beliefs. If we aren’t
sheltered, then we know that there is a great deal of controversy
about those beliefs even among the people who are the smartest and
have worked the hardest in trying to figure out the truth of the
matter. There is good reason to think that retaining a belief in the
face of that kind of controversy is irrational, and a belief that is
irrational does not amount to knowledge. It follows that our beliefs
we recognize as controversial do not amount to knowledge. This is the
threat of *disagreement skepticism* (Frances 2018, 2013, 2005;
Christensen 2009; Fumerton 2010; Goldberg 2009, 2013b; Kornblith 2010,
2013; Lammenranta 2011, 2013; Machuca 2013).
For the sake of argument, we can assume that our controversial beliefs
*start out* epistemically rational. Roughly put, the
disagreement skeptic thinks that even if a controversial belief starts
out as rational, once one appreciates the surrounding controversy,
one’s belief will no longer be rational, and thus not an item of
knowledge. The disagreement skeptic focuses on beliefs that satisfy
the following recognition-of-controversy conditions.
You know that the belief \(B\) in question has been investigated and debated (i) for a very long time by (ii) a great many (iii) very smart people who (iv) are your epistemic peers and superiors on the matter and (v) have worked very hard (vi) under optimal circumstances to figure out if \(B\) is true. But you also know that (vii) these experts have not come to any significant agreement on \(B\) and (viii) those who agree with you are not, as a group, in an appreciably better position to judge \(B\) than those who disagree with you.
Notice that the problem does not emerge from a mere lack of consensus. Very few, if any, beliefs are disagreement-free. Rather, the skeptical threat comes from both the extent of the disagreement (conditions (i) and (ii)) and the nature of the disagreeing parties (conditions (iii) – (viii)). While not every belief meets these recognition-of-controversy conditions, many do, and among those that do are some of our most cherished beliefs.
For instance, I might have some opinion regarding the nature of free
will or the moral permissibility of capital punishment or whether God
exists. I know full well that these matters have been debated by an
enormous number of really smart people for a very long time—in
some cases, for centuries. I also know that I’m no expert on any
of these topics. I also know that there are genuine experts on those
topics—at least, they have thought about those topics
*much* longer than I have, with a great deal more awareness of
relevant considerations, etc. It’s no contest: I know I’m
just an amateur compared to them. Part of being reflective is coming
to know about your comparative epistemic status on controversial
subjects. That said, being an expert in the relevant field
doesn’t remove the problem either. Even if I am an expert on
free will, I am aware that there are many other such experts, that I
am but one such voice among many, and that disagreement is rampant
amongst us.
The person who knows (i)–(viii) is robbed of the reasonableness of several comforting responses to the discovery of controversy. If she is reasonable, then she realizes that she can’t make, at least with confidence, anything like the following remarks:
- Well, the people who agree with me are smarter than the people who disagree with me.
- We have crucial evidence they don’t have.
- We have studied the key issue a great deal more than they have.
- They are a lot more biased than we are.
This phenomenon is particularly prevalent with regard to religion,
politics, morality, and philosophy. If when it comes to debates about
free will, capital punishment, affirmative action, and many other
standard controversial topics you say to yourself regarding the
experts who disagree with you ‘Those people just don’t
understand the issues’, ‘They aren’t very
smart’, ‘They haven’t thought about it much’,
et cetera, then you are doing so irrationally in the sense that
*you should know better* than to say that, at least if
you’re honest with yourself and informed of the state of the
debate over free will.
However, the connection between controversy and skepticism won’t
apply to many of our other beliefs. No one (or no one you know) is
going around saying your parents don’t love you, you
aren’t a basically moral person, etc. So those beliefs are
probably immune to any skeptical argument of the form ‘There is
long-standing disagreement among experts regarding your belief \(B\);
you know all about it (viz. conditions (i)–(viii)); you have no
good reason to discount the ones who disagree with you; so, you
shouldn’t retain your belief \(B\)’. This is not to say
that those beliefs escape all skeptical arguments based on human error
and related phenomena. But, the first thing to note about disagreement
skepticism is that it is *contained*. Only beliefs that meet
something like the recognition-of-controversy conditions are subject
to this skeptical threat. Interestingly, however, it is not itself
exempt from these skeptical consequences. Such views of disagreement
are themselves quite controversial, so here too is another place where
the self-defeat worry arises.
Disagreement skepticism is also *contingent*. The nature and
extent of disagreements are both contingent matters, so since
disagreement skepticism relies on these factors, the skeptical
consequences of disagreement are also contingent. At one point in time
the shape of the Earth was quite contentious. While there is not now
universal agreement that the Earth is roughly spherical, the
recognition-of-controversy conditions are no longer met on this
matter. Similarly, issues of great current controversy may too at some
point fail to meet the recognition-of-controversy conditions. So, the
skeptical threat from disagreement can come and go. That said, the
track-record for the staying power of various philosophical
disagreements strongly indicates that they aren’t going anywhere
anytime soon.
Finally, disagreement skepticism is exclusively epistemic. At issue here have solely been one’s epistemic reasons for holding a belief. Meeting the recognition-of-controversy conditions raises a problem for these reasons, but we haven’t said anything about what moral, prudential, or even religious reasons you may have for holding a controversial belief. The skeptical threat from disagreement only concerns our epistemic reasons. Relatedly, if there is an all-things-considered norm of belief, disagreement skepticism may have some implications for this norm, but only by way of addressing the epistemic reasons that one has for belief.
A related point is that the consequences discussed here are doxastic consequences. Disagreement skepticism is about what beliefs are/are not rational and which changes in confidence are/are not rational. Disagreement skepticism is not a view about which views should be defended or what theses should be further researched. When coupled with the knowledge norm of assertion or the knowledge norm of action, disagreement skepticism would have further consequences about what claims can be asserted or acted upon, but these consequences only follow from such a combination of views. But if the disagreement skeptic is right, and belief is not an appropriate attitude to hold, then what attitudes can we take toward contentious philosophical claims? Alternative propositional attitudes would need to be governed by epistemic norms different from those governing belief. If these other attitudes were constrained by similar norms, then they would face similar issues. Some candidate alternative doxastic attitudes include accepting (Beebee 2018), regarding as defensible (Goldberg 2013b), speculating, and endorsing (Fleisher 2018). In contrast, Walker 2023 argues that we would be better off in terms of accuracy if we adopted the attitude of disbelief towards philosophical claims. For more on different relations and attitudes that we could take toward contentious claims see also Barnett 2019, Dang and Bright 2021, Fleisher 2018; 2021, Plakias 2019, and papers in Goldberg and Walker forthcoming.
## Bibliography
- Adams, Zed, 2013, “The Fragility of Moral
Disagreement,” in Diego Machuca (ed.),
*Disagreement and Skepticism*, New York: Routledge, pp. 131–49. - Anderson, Elizabeth, 2006, “The Epistemology of
Democracy,”
*Episteme*3: 8–22. - Arsenault, Michael and Zachary C. Irving, 2012, “Aha! Trick
Questions, Independence, and the Epistemology of Disagreement,”
*Thought: A Journal of Philosophy*1 (3): 185–194. - Aumann, Robert J., 1976, “Agreeing to Disagree,”
*The Annals of Statistics*4: 1236–1239. - Baghramian, Maria, Carter, J. Adam, and Rowland, Richard (eds.),
forthcoming,
*Routledge Handbook of Disagreement*, New York: Routledge. - Ballantyne, Nathan, 2013a, “The Problem of Historical
Variability,” in Diego Machuca (ed.),
*Disagreement and Skepticism*, New York: Routledge. - –––, 2013b, “Counterfactual
Philosophers,”
*Philosophy and Phenomenological Research*87 (2): 368–387. - –––, 2018, “Is Epistemic Permissivism
Intuitive?”
*American Philosophical Quarterly*55 (4): 365–378. - Ballantyne, Nathan, and E. J. Coffman, 2011, “Uniqueness,
Evidence, and Rationality,”
*Philosophers Imprint*11 (18): 1–13. - –––, 2012, “Conciliationism and
Uniqueness,”
*Australasian Journal of Philosophy*90 (4): 657–670. - Barnett, Zach, 2019, “Philosophy without Belief,”
*Mind*128 (509): 109–138. - Barnett, Zach and Han Li, 2016, “Conciliationism and Merely
Possible Disagreement,”
*Synthese*193 (9): 2973–2985. - Beebee, Helen, 2018, “Philosophical Scepticism and the Aims
of Philosophy,”
*The Aristotelian Society*CXVIII: 1–24. - Békefi, Bálint, 2023, “Self-Favoring Theories
and the Bias Argument,”
*Logos and Episteme*14 (2): 199–213. - Benjamin, Sherman, 2015, “Questionable Peers and
Spinelessness,”
*Canadian Journal of Philosophy*45 (4): 425–444. - Benton, Matthew, 2021, “Disagreement and Religion,” in
M. Benton and J. Kvanvig (eds.)
*Religious Disagreement and Pluralism*. New York: Oxford University Press, pp. 1–40. - Benton, Matthew and Kvanvig, Jonathan (eds.), 2021,
*Religious Disagreement and Pluralism*. New York: Oxford University Press. - Bergmann, Michael, 2009, “Rational Disagreement after Full
Disclosure,”
*Episteme: A Journal of Social Epistemology*6 (3): 336–353. - Bernáth, László and Tözsér,
János, 2021, “The Biased Nature of Philosophical Beliefs
in the Light of Peer Disagreement,”
*Metaphilosophy*52 (3–4): 363–378. - Besong, Brian, 2014, “Moral Intuitionism and
Disagreement,”
*Synthese*191 (12): 2767–2789. - Blessenohl, Simon, 2015, “Self-Exempting Conciliationism is
Arbitrary,”
*Kriterion: Journal of Philosophy*29 (3): 1–22. - Bogardus, Tomas, 2009, “A Vindication of the Equal Weight
View,”
*Episteme: A Journal of Social Epistemology*6 (3): 324–335. - –––, 2013a, “Foley’s Self-Trust and
Religious Disagreement,”
*Logos and Episteme*4 (2): 217–226. - –––, 2013b, “Disagreeing with the
(Religious) Skeptic,”
*International Journal for Philosophy of Religion*74 (1): 5–17. - –––, 2016, “Only All Naturalists Should
Worry About Only One Evolutionary Debunking Argument,”
*Ethics*126 (3): 636–661. - Boyce, Kenneth and Allan Hazlett, 2014, “Multi‐Peer
Disagreement and the Preface Paradox,”
*Ratio*27 (3): 29–41. - Brauer, Ethan, 2023, “Disagreement, the Independence Thesis,
and the Value of Repeated Reasoning,”
*Pacific Philosophical Quarterly*104(3): 494–510. - Broncano-Berrocal, Fernando and Simion, Mona, 2021,
“Disagreement and Epistemic Improvement,”
*Synthese*199 (5–6): 14641–14665. - Buchak, Lara, 2021, “A Faithful Response to
Disagreement,”
*The Philosophical Review*130 (2): 191–226 - Bueno, Otávio, 2013, “Disagreeing with the
Pyrrhonist?” in Diego Machuca (ed.),
*Disagreement and Skepticism*, New York: Routledge, 131–49. - Carey, Brandon, 2011, “Possible Disagreements and
Defeat,”
*Philosophical Studies*155 (3): 371–381. - Carey, Brandon and Jonathan Matheson, 2013, “How Skeptical
is the Equal Weight View?” in Diego Machuca (ed.),
*Disagreement and Skepticism*, New York: Routledge, pp. 131–49. - Carter, J. Adam, 2013, “Disagreement, Relativism and
Doxastic Revision,”
*Erkenntnis*1 (S1): 1–18. - –––, 2014, “Group Peer
Disagreement,”
*Ratio*27 (3): 11–28. - –––, 2018, “On behalf of controversial view
agnosticism,”
*European Journal of Philosophy*26 (4): 1358–1370. - Choo, Frederick, 2021, “The Epistemic Significance of
Religious Disagreements: Cases of Unconfirmed Superiority
Disagreements,”
*Topoi*40 (5): 1139–1147. - Christensen, David, 2007, “Epistemology of Disagreement: The
Good News,”
*Philosophical Review*116: 187–218. - –––, 2009, “Disagreement as Evidence: The
Epistemology of Controversy,”
*Philosophy Compass*4 (5): 756–767. - –––, 2010a, “Higher-Order Evidence,”
*Philosophy and Phenomenological Research*81 (1): 185–215. - –––, 2010b, “Rational Reflection,”
*Philosophical Perspectives*24 (1): 121–140. - –––, 2011, “Disagreement, Question-Begging
and Epistemic Self-Criticism,”
*Philosophers Imprint*11 (6): 1–22. - –––, 2016a, “Disagreement, Drugs, Etc.:
From Accuracy to Akrasia,”
*Episteme*13 (4): 397–422. - –––, 2016b, “Uniqueness and Rational
Toxicity,”
*Noûs*50 (3): 584–603. - –––, 2019, “Formulating Independence,”
in M. Skipper and A. Steglich-Petersen (eds.),
*Higher-Order Evidence: New Essays*. Oxford: Oxford University Press, pp. 13–34. - Christensen, David and Jennifer Lackey (eds.), 2013,
*The Epistemology of Disagreement: New Essays*, New York: Oxford University Press. - Coady, David, 2006, “When experts
disagree,”
*Episteme*3 (1–2): 68–79. - Coliva, Annalisa, and Doulas, Louis, forthcominga,
“Philosophical (and Scientific) Progress: A Hinge
Account,” in S. Goldberg and M. Walker (eds.)
*Attitude in Philosophy.*New York: Oxford University Press. - –––, forthcomingb, “Philosophical
Progress, Skepticism, and Disagreement,” in M. Baghramian, J.A.
Carter, and R. Rowland (eds.)
*Routledge Handbook of Disagreement*. New York: Oxford University Press. - Comesana, Juan, 2012, “Conciliation and Peer-Demotion in the
Epistemology of Disagreement,”
*American Philosophical Quarterly*, 49 (3): 237–252. - Conee, Earl, 1987, “Evident, but Rationally
Unacceptable,”
*Australasian Journal of Philosophy*65: 316–326. - –––, 2009, “Peerage,”
*Episteme*6 (3): 313–323. - –––, 2010, “Rational Disagreement
Defended,” in Richard Feldman and Ted Warfield (eds.),
*Disagreement*, New York: Oxford University Press. - Dang, Haixin and Bright, Liam Kofi, 2021, “Scientific
Conclusions need not be Accurate, Justified, or Believed by their
Authors,”
*Synthese*199: 8187–8203. - de Ridder, Jeroen, 2021, “Deep Disagreement and Belief
Polarization,” in E. Edenberg and M. Hannon (eds.),
*Political Epistemology*. New York: Oxford University Press, 225–243. - De Cruz, Helen, 2017, “Religious Disagreement: An Empirical
Study Among Academic Philosophers,”
*Episteme*14: 71–87. doi:10.1017/epi.2015.50 - De Cruz, Helen and Johan De Smedt, 2013, “The Value of
Epistemic Disagreement in Scientific Practice. The Case of Homo
Floresiensis,”
*Studies in History and Philosophy of Science Part A*, 44 (2): 169–177. - DePaul, Michael, 2013, “Agent Centeredness, Agent
Neutrality, Disagreement, and Truth Conduciveness,” in Chris
Tucker (ed.),
*Seemings and Justification*, New York: Oxford University Press. - Decker, Jason, 2012, “Disagreement, Evidence, and
Agnosticism,”
*Synthese*187 (2): 753–783. - Dellsén, Finnur, 2018, “When Expert Disagreement
Supports the Consensus,”
*Australasian Journal of Philosophy*96 (1): 142–156. doi:10.1080/00048402.2017.1298636 - Dellsén, Finnur, Lawler, Insa, and Norton, James, 2023,
“Would Disagreement Undermine Progress?”
*Journal of Philosophy*120 (3): 139–172. - Dixon, Jonathan, forthcoming, “No Hope for Conciliationism,”
*Synthese*. - Dogramaci, Sinan and Sophie Horowitz, 2016, “An Argument for
Uniqueness About Evidential Support,”
*Philosophical Issues*26 (1): 130–147. - Dougherty, Trent, 2013, “Dealing with Disagreement from the
First-Person Perspective: A Probabilistic Proposal,” in D.
Machuca (ed.),
*Disagreement and Skepticism*, New York: Routledge, pp. 218–238. - Elga, Adam, 2007, “Reflection and Disagreement,”
*Noûs*41: 478–502. - –––, 2010, “How to Disagree About How to
Disagree,” in Richard Feldman and Ted Warfield (eds.),
*Disagreement*, New York: Oxford University Press. - Elgin, Catherine, 2010, “Persistent Disagreement,” in
Richard Feldman and Ted Warfield (eds.),
*Disagreement*, New York: Oxford University Press. - –––, 2018, “Reasonable
disagreement,” in C.R. Johnson (ed.),
*Voicing Dissent*. New York USA: Routledge, pp. 10–21. - –––, 2022, “Disagreement in
philosophy,”
*Synthese*200 (20): 1–16. - Enoch, David, 2010, “Not Just a Truthometer: Taking Oneself
Seriously (but not Too Seriously) in Cases of Peer
Disagreement,”
*Mind*119: 953–997. - Everett, Theodore J., 2015, “Peer Disagreement and Two
Principles of Rational Belief,”
*Australasian Journal of Philosophy*93 (2): 273–286. - Feldman, Richard, 2003, “Plantinga on Exclusivism,”
*Faith and Philosophy*20: 85–90. - –––, 2004, “Having Evidence,” in
Conee and Feldman (eds.),
*Evidentialism: Essays in Epistemology*, New York: Oxford University Press, pp. 219–242. - –––, 2005a, “Respecting the
Evidence,”
*Philosophical Perspectives*19: 95–119. - –––, 2005b, “Deep Disagreement, Rational
Resolution, and Critical Thinking,”
*Informal Logic*25 (1): 12–23. - –––, 2009, “Evidentialism, Higher-Order
Evidence, and Disagreement,”
*Episteme*6 (3): 294–312. - –––, 2006a, “Epistemological Puzzles about
Disagreement,” in Steve Hetherington (ed.),
*Epistemic Futures*, New York: Oxford University Press, pp. 216–236. - –––, 2006b, “Reasonable Religious
Disagreements,” in L. Antony (ed.),
*Philosophers without Gods: Meditations on Atheism and the Secular Life*, New York: Oxford University Press, 194–214. - –––, 2006c, “Clifford’s Principle
and James’ Options,”
*Social Epistemology*20: 19–33. - Feldman, Richard, and Ted Warfield (eds.), 2010,
*Disagreement*, Oxford: Oxford University Press. - Fleisher, Will, 2018, “Rational Endorsement,”
*Philosophical Studies*175 (10): 2649–2675. - –––, 2021a, “Endorsement and
Assertion,”
*Noûs*55 (2): 363–384. - , 2021b, “How to Endorse Conciliationism,” Synthese 198 (10): 9913–9939.
- Fogelin, Robert, 2005, “The Logic of Deep
Disagreement,”
*Informal Logic*25 (1): 3–11. - Foley, Richard, 2001,
*Intellectual Trust in Oneself and Others*. Cambridge: Cambridge University Press. - Frances, Bryan, 2005, “When a Skeptical Hypothesis is
Live,”
*Noûs*39: 559–95. - –––, 2010a, “Disagreement,” in
Duncan Pritchard and Sven Bernecker (eds.),
*Routledge Companion to Epistemology*, New York: Routledge Press, pp. 68–74. - –––, 2010b, “The Reflective Epistemic
Renegade,”
*Philosophy and Phenomenological Research*81: 419–463. - –––, 2013, “Philosophical
Renegades,” in Jennifer Lackey and David Christensen (eds.),
*The Epistemology of Disagreement: New Essays*, Oxford: Oxford University Press, pp. 121–166. - –––, 2014,
*Disagreement*, Cambridge, UK: Polity Press. - –––, 2018, “Scepticism and
Disagreement,” in Diego Machuca and Baron Reed (eds.),
*Skepticism: From Antiquity to the Present*, New York: Bloomsbury. - Fritz, James, 2018, “Conciliationism and Moral
Spinelessness,”
*Episteme*15 (1): 101–118. - Fritz, James and McPherson, Tristram, 2019, “Moral
Steadfastness and Meta-Ethics,”
*American Philosophical Quarterly*56 (1): 43–56. - Fumerton, Richard, 2010, “You Can’t Trust a
Philosopher,” in Richard Feldman and Ted Warfield (eds.),
*Disagreement*, New York: Oxford University Press. - Gardiner, Georgi, 2014, “The Commutativity of Evidence: A
Problem for Conciliatory Views of Disagreement,”
*Episteme*11 (1): 83–95. - Gausselin, Kevin, forthcoming, “Conciliationism and the
Peer-Undermining Problem,”
*Synthese*. - Gibbons, Adam, 2021, “Political Disagreement and Minimal
Epistocracy,”
*Journal of Ethics and Social Philosophy*19 (2). - Goldberg, Sanford, 2009, “Reliabilism in Philosophy,”
*Philosophical Studies*124: 105–17. - –––, 2013a, “Inclusiveness in the Face of
Anticipated Disagreement,”
*Synthese*190 (7): 1189–1207. - –––, 2013b, “Defending Philosophy in the
Face of Systematic Disagreement,” in Diego Machuca (ed.),
*Disagreement and Skepticism*, New York: Routledge, pp. 131–49. - Goldberg, Sanford and Walker, Mark (eds.), forthcoming,
*Attitudes in Philosophy*. Oxford: Oxford University Press. - Goldman, Alvin, 2001, “Experts: Which Ones Should You
Trust?”
*Philosophy and Phenomenological Research*63: 85–110. - Goldman, Alvin, 2010, “Epistemic Relativism and Reasonable
Disagreement,” in Richard Feldman and Ted Warfield (eds.),
*Disagreement*, New York: Oxford University Press. - Gonzalez de Prado, Javier, 2020, “Disposessing
Defeat,”
*Philosophy and Phenomenological Research*101 (2): 323–340. - Graves, Shawn, 2013, “The Self-Undermining Objection in the
Epistemology of Disagreement,”
*Faith and Philosophy*30 (1): 93–106. - Greco, Daniel and Brian Hedden, 2016, “Uniqueness and
Metaepistemology,”
*Journal of Philosophy*113 (8): 365–395. - Grundmann, Thomas, 2013, “Doubts about Philosophy? The
Alleged Challenge from Disagreement,” in T. Henning and D.
Schweikard (eds.),
*Knowledge, Virtue, and Action: Essays on Putting Epistemic Virtues to Work*. New York: Routledge, pp. 72–98. - –––, 2019, “How to respond rationally to
peer disagreement: The preemption view,”
*Philosophical Issues*29 (1): 129–142. - –––, 2021, “Why Disagreement-Based
Skepticism Cannot Escape the Problem of Self-Defeat,”
*Episteme*18 (2): 224–241. - Gutting, Gary, 1982,
*Religious Belief and Religious Skepticism*, Notre Dame: University of Notre Dame Press. - Hales, Steven, 2014, “Motivations for Relativism as a
Solution to Disagreements,”
*Philosophy*89 (1): 63–82. - Hardwig, John, 1985, “Epistemic Dependence,”
*Journal of Philosophy*82: 335–49. - –––, 1991, “The Role of Trust in
Knowledge,”
*Journal of Philosophy*88: 693–708. - Hawthorne, John and Amia Srinivasan, 2013, “Disagreement
Without Transparency: Some Bleak Thoughts,” in David Christensen
& Jennifer Lackey (eds.),
*The Epistemology of Disagreement: New Essays*, New York: Oxford University Press, pp. 9–30. - Hazlett, Alan, 2012, “Higher-Order Epistemic Attitudes and
Intellectual Humility,”
*Episteme*9: 205–23. - –––, 2014, “Entitlement and Mutually
Recognized Reasonable Disagreement,”
*Episteme*11 (1): 1–25. - Henderson, David, Terrance Horgan, Matjaz Potrc, and Hannah
Tierney, 2017, “Nonconciliation in Peer Disagreement: Its
Phenomenology and Its Rationality,”
*Grazer Philosophische Studien*94: 194–225. - Heesen, Remco and Pieter van der Kolk, 2016, “A
Game-Theoretic Approach to Peer Disagreement,”
*Erkenntnis*81 (6): 1345–1368. - Hirvelä, Jaakko, 2017, “Is it Safe to Disagree?”
*Ratio*30 (3): 305–321. - Horowitz, Sophie, 2022, “Higher-Order
Evidence,”
*The Stanford Encyclopedia of Philosophy*(Fall 2022 Edition), Edward N. Zalta & Uri Nodelman(eds.), URL = <https://plato.stanford.edu/archives/fall2022/entries/higher-order-evidence/>. - Huemer, Michael, 2011, “Epistemological Egoism and
Agent-Centered Norms,” in T. Dougherty (ed.)
*Evidentialism and its Discontents,*Oxford: Oxford University Press, 17–33. - Jackson, Liz, 2021, “A Defense of Intrapersonal Belief
Permissivism,”
*Episteme*18 (2): 313–327. - Jehle, David and Brandon Fitelson, 2009, “What is the
‘Equal Weight View’?”
*Episteme*6: 280–293. - Johnson, Drew, 2022, “Deep Disagreement, Hinge Commitments,
and Intellectual Humility,”
*Episteme*19 (3): 353–372. - Jones, Nicholas, 2012, “An Arrovian Impossibility Theorem
for the Epistemology of Disagreement,”
*Logos and Episteme*3 (1): 97–115. - Kappel, Klemens, 2012, “The Problem of Deep
Disagreement,”
*Discipline Filosofiche*22 (2): 7–25. - –––, 2021, “Higher-Order Evidence and Deep
Disagreement,”
*Topoi*40: 1039–1050. - Kelly, Thomas, 2005, “The Epistemic Significance of
Disagreement,” in T. Gendler and J. Hawthorne (eds.),
*Oxford Studies in Epistemology*, vol. 1. Oxford: Oxford University Press. - –––, 2010, “Peer Disagreement and Higher
Order Evidence,” in R. Feldman and T. Warfield (eds.),
*Disagreement*, New York: Oxford University Press, pp. 183–217. - –––, 2013a, “Evidence Can be
Permissive,” in M. Steup, J. Turri, and E. Sosa (eds.),
*Contemporary Debates in Epistemology*, New York: Blackwell, pp. 298–311. - –––, 2013b, “Disagreement and the Burdens
of Judgment,” in David Christensen and Jennifer Lackey (eds.),
*The Epistemology of Disagreement: New Essays*, Oxford: Oxford University Press, pp. 31–53. - –––, 2014, “Believers as
Thermometers,” in J. Matheson and R. Vitz (eds.),
*The Ethics of Belief: Individual and Social.*New York: Oxford University Press, pp. 301–314. - –––, forthcoming a, “Bias and
Disagreement,” in J. Lackey and A. McGlynn (eds.),
*Oxford Handbook of Social Epistemology*. Oxford: Oxford University Press. - –––, forthcoming b, “Peer Disagreement,
Steadfastness, and Conciliationism,” in M. Baghramian, J.A.
Carter, and R. Rowland (eds.),
*Routledge Handbook of Disagreement*, New York: Routledge. - King, Nathan, 2011, “Disagreement: What’s the Problem?
Or A Good Peer is Hard to Find,”
*Philosophy and Phenomenological Research*85 (2): 249–272. - –––, 2013, “Disagreement: The Skeptical
Arguments from Peerhood and Symmetry,” in Diego Machuca (ed.),
*Disagreement and Skepticism*, New York: Routledge, 193–217. - Knocks, Aleks, 2022, “Conciliatory Views, Higher-Order
Disagreements, and Defeasible Logic,”
*Synthese*200 (2). - –––, 2023, “Conciliatory Reasoning,
Self-Defeat, and Abstract Argumentation,”
*The Review of Symbolic Logic*16 (3): 740–787. - Kopec, Matthew, 2015, “A Counterexample to the Uniqueness
Thesis,”
*Philosophia*43 (2): 403–409. - Kopec, Matthew and Michael G. Titelbaum, 2016, “The
Uniqueness Thesis,”
*Philosophy Compass*11 (4): 189–200. - Kornblith, Hilary, 2010, “Belief in the Face of
Controversy,” in Richard Feldman and Ted Warfield (eds.),
*Disagreement*, New York: Oxford University Press. - –––, 2013, “Is Philosophical Knowledge
Possible?” in Diego Machuca (ed.)
*Disagreement and Skepticism*, New York: Routledge, pp. 131–49. - Lackey, Jennifer, 2010a, “What Should We Do When We
Disagree?” in Tamar Szabo Gendler and John Hawthorne (eds.),
*Oxford Studies in Epistemology*, Oxford: Oxford University Press. - –––, 2010b, “A Justificationalist View of
Disagreement’s Epistemic Significance,” in Adrian Haddock,
Alan Millar, and Duncan Pritchard (eds.),
*Social Epistemology*, Oxford: Oxford University Press. - –––, 2013a, “What’s the Rational
Response to Everyday Disagreements?”
*Philosophers’ Magazine*59: 101–6. - –––, 2013b, “Disagreement and Belief
Dependence: Why Numbers Matter,” in David Christensen and
Jennifer Lackey (eds.),
*The Epistemology of Disagreement: New Essays,*Oxford: Oxford University Press, pp. 243–68. - –––, 2014, “Taking Religious Disagreement
Seriously,” in Laura Frances Callahan and Timothy O’Connor
(eds.),
*Religious Faith and Intellectual Virtue,*Oxford: Oxford University Press, pp. 299–316. - Lam, Barry, 2011, “On the Rationality of Belief-Invariance
in Light of Peer Disagreement,”
*Philosophical Review*120 (2): 207–245. - –––, 2013, “Calibrated Probabilities and
the Epistemology of Disagreement,”
*Synthese*190 (6): 1079–1098. - Lammenranta, Markus, 2011, “Skepticism and
Disagreement,” in Diego Machuca (ed.),
*Pyrrhonism in Ancient, Modern, and Contemporary Philosophy*, Dordrect: Springer, pp. 203–216. - –––, 2013, “The Role of Disagreement in
Pyrrhonian and Cartesian Skepticism,” in Diego Machuca (ed.),
*Skepticism and Disagreement*, New York: Routledge, pp. 46–65. - Lampert, Fabio and John Biro, 2017, “What is Evidence of
Evidence Evidence of?”
*Logos and Episteme*2: 195–206. - Lane, Melissa, 2014, “When the Experts are Uncertain:
Scientific Knowledge and the Ethics of Democratic Judgment,”
*Episteme*11 (1): 97–118. - Lasonen-Aarnio, Maria, 2013, “Disagreement and Evidential
Attenuation,”
*Noûs*47 (4): 767–794. - –––, 2014, “Higher-Order Evidence and the
Limits of Defeat,”
*Philosophy and Phenomenological Research*88 (2): 314–345. - Lee, Matthew Brandon, 2013, “Conciliationism Without
Uniqueness,”
*Grazer Philosophische Studien*88: 161–188. - Levinstein, Benjamin Anders, 2015, “With All Due Respect:
The Macro-Epistemology of Disagreement,”
*Philosophers Imprint*15 (13): 1–20. - –––, 2017,“Permissive Rationality and
Sensitivity,”
*Philosophy and Phenomenological Research*94 (2): 1–29. - Levy, Neil, 2020, “The Surprising Truth about
Disagreement,”
*Acta Analytica*36 (2): 137–157. - Licon, Jimmy Alfonso, 2013, “On Merely Modal Epistemic
Peers: Challenging the Equal-Weight View,”
*Philosophia*41 (3): 809–823. - List, C. and Goodin, R., 2001, “Epistemic Democracy:
Generalizing the Condorcet Jury Theorem,”
*Journal of Political Philosophy*9: 277–306. - Littlejohn, Clayton, 2013, “Disagreement and Defeat,”
in Diego Machuca (ed.),
*Disagreement and Skepticism*, New York: Routledge, pp. 169–192. - Lord, Errol, 2014, “From Independence to Conciliationism: An
Obituary,”
*Australasian Journal of Philosophy*92 (2): 365–377. - Lougheed, Kirk, 2019,
*The Epistemic Benefits of Disagreement*. New York: Springer. - –––, 2020a, “Religious Disagreement,
Religious Experience, and the Evil God Hypothesis,”
*International Journal for Philosophy of Religion*12 (1): 173–190. - –––, 2020b, “The Epistemic Benefits of
Worldview Disagreement,”
*Social Epistemology*35 (1), 85–98. - Lynch, Michael, 2016, “After the Spade Turns: Disagreement,
First Principles and Epistemic Contractarianism,”
*International Journal for the Study of Skepticism*6 (2–3): 248–259. - MacFarlane, John, 2007, “Relativism and Disagreement,”
*Philosophical Studies*132: 17–31. - Machuca, Diego, 2015, “Agrippan Pyrrhonism and the Challenge
of Disagreement,”
*Journal of Philosophical Research*40: 23–39. - –––, 2017, “A Neo-Pyrrhonian Response to
the Disagreeing about Disagreement Argument,”
*Synthese*194 (5): 1663–1680. - Machuca, Diego (ed.), 2013,
*Disagreement and Skepticism*, New York: Routledge. - Martini, Carlo, 2013, “A Puzzle About Belief
Updating,”
*Synthese*190 (15): 3149–3160. - Matheson, Jonathan, 2009, “Conciliatory Views of
Disagreement and Higher-Order Evidence,”
*Episteme: A Journal of Social Philosophy*6 (3): 269–279. - –––, 2011, “The Case for Rational
Uniqueness,”
*Logos & Episteme*2 (3): 359–73. - –––, 2014, “Disagreement: Idealized and
Everyday,” in Jonathan Matheson and Rico Vitz (eds.),
*The Ethics of Belief: Individual and Social*, New York: Oxford University Press, pp. 315–330. - –––, 2015a, “Disagreement and the Ethics
of Belief,” in James Collier (ed.),
*The Future of Social Epistemology: A Collective Vision*, Lanham, Maryland: Rowman and Littlefield, pp. 139–148. - –––, 2015b, “Are Conciliatory Views of
Disagreement Self-Defeating?”
*Social Epistemology*29 (2): 145–159. - –––, 2015c,
*The Epistemic Significance of Disagreement*, London: Palgrave Macmillan. - –––, 2016, “Moral Caution and the
Epistemology of Disagreement,”
*Journal of Social Philosophy*47 (2): 120–141. - –––, 2021, “Deep Disagreement and Rational
Resolution,”
*Topoi*40: 1025–1037. - Miller, Boaz, 2021, “When Is Scientific Dissent
Epistemically Inappropriate?”
*Philosophy of Science*88 (5): 918–928. - Moffett, Mark, 2007, “Reasonable Disagreement and Rational
Group Inquiry,”
*Episteme: A Journal of Social Epistemology*4 (3): 352–367. - Mogensen, A. L., 2016, “Contingency Anxiety and the
Epistemology of Disagreement,”
*Pacific Philosophical Quarterly*97 (4): 590–611. - Moon, Andrew, 2018, “Independence and New Ways to Remain
Steadfast in the Face of Disagreement,”
*Episteme*15 (1): 65–79. - –––, 2021, “Circular and Question-Begging
Responses to Religious Disagreement and Debunking Arguments,”
*Philosophical Studies*178 (3): 785–809. - Mulligan, Thomas, 2015, “Disagreement, Peerhood, and Three
Paradoxes of Conciliationism,”
*Synthese*192 (1): 67–78. - –––, 2022, “The Epistemology of
Disagreement: Why not Bayesianism?”
*Episteme*18 (4): 587–602. - Oppy, Graham, 2010, “Disagreement,”
*International Journal for Philosophy of Religion*68 (1): 183–199. - Palmira, Michele, 2013, “A Puzzle About the Agnostic
Response to Peer Disagreement,”
*Philosophia*41 (4): 1253–1261. - –––, 2018, “Disagreement, Credences, and
Outright Belief,”
*Ratio*31 (2): 179–196. - –––, 2020, “Inquiry and Doxastic
Attitudes,”
*Synthese*197 (11): 4947–4973. - Pasnau, Robert, 2015, “Disagreement and the Value of
Self-Trust,”
*Philosophical Studies*172 (9): 2315–2339. - Peels, Rik and Anthony Booth, 2014, “Why Responsible Belief
Is Permissible Belief,”
*Analytic Philosophy*55: 75–88. - Pettit, Phillip, 2006, “When to Defer to the Majority
– and When Not,”
*Analysis*66: 179–187. - Pittard, John, 2014, “Conciliationism and Religious
Disagreement,” in Michael Bergmann and Patrick Kain (eds.),
*Challenges to Moral and Religious Belief: Disagreement and**Evolution*, Oxford University Press, pp. 80–97. - –––, 2015, “Resolute
Conciliationism,”
*Philosophical Quarterly*65 (260): 442–463. - –––, 2017, “Disagreement, Reliability, and
Resilience,”
*Synthese*194 (11): 4389–4409. - –––, 2019a,
*Disagreement, Deference, and Religious Commitment*. New York: Oxford University Press. - –––, 2019b, “Fundamental Disagreements and
the Limits of Instrumentalism,”
*Synthese*196 (12): 5009–5038. - Plakias, Alexandra, 2019, “Publishing without Belief,”
*Analysis*79 (4): 638–646. - Plantinga, Alvin, 2000a,
*Warranted Christian Belief*, Oxford: Oxford University Press. - –––, 2000b, “Pluralism: A Defense of
Religious Exclusivism,” in Philip L. Quinn and Kevin Meeker
(eds.),
*The Philosophical Challenge of Religious Diversity*, New York: Oxford University Press, pp. 172–192. - Priest, Maura, 2016, “Inferior Disagreement,”
*Acta Analytica*31 (3): 263–283. - Pritchard, Duncan, 2013, “Disagreement, Skepticism, and
Track-Record Arguments,” in
*Disagreement and Skepticism*, Diego Machuca (ed.), New York: Routledge, pp. 150–168. - –––, 2018, “Wittgensteinian Hinge
Epistemology and Deep Disagreement,”
*Topoi*40 (5): 1117–1125. - Raleigh, Thomas, 2017, “Another Argument Against
Uniqueness,”
*Philosophical Quarterly*67 (267): 327–346. - Ranalli, Chris, 2020, “Deep Disagreement and Hinge
Epistemology,”
*Synthese*197: 4975–5007. - Ranalli, Chris and Lagewaard, Thirza, 2022a, “Deep
Disagreement (part 1): Theories of Deep Disagreement,”
*Philosophy Compass*17 (12): e12886. - –––, 2022b, “Deep Disagreement (part 2):
Epistemology of Deep Disagreement,”
*Philosophy Compass*17 (12): e12887. - Rasmussen, Mattias Skipper, Asbjørn Steglich-Petersen, and
Jens Christian Bjerring, 2018, “A Higher-Order Approach to
Disagreement,”
*Episteme*15 (1): 80–100. - Rattan, Gurpreet, 2014, “Disagreement and the
First‐Person Perspective,”
*Analytic Philosophy*55 (1): 31–53. - Raz, Joseph, 1998, “Disagreement in Politics,”
*American Journal of Jurisprudence*43: 25–52. - Reisner, Andrew, 2016, “Peer Disagreement, Rational
Requirements, and Evidence of Evidence as Evidence Against,” in
Pedro Schmechtig and Martin Grajner (eds.),
*Epistemic Reasons, Norms and Goals*, De Gruyter, pp. 95–114. - Roche, William, 2014, “Evidence of Evidence is Evidence
Under Screening-Off,”
*Episteme*11 (1): 119–124. - Rosa, Luis, 2012, “Justification and the Uniqueness
Thesis,”
*Logos and Episteme*4: 571–577. - Rosen, Gideon, 2007, “The Case Against Epistemic Relativism:
Reflections on Chapter 6 of
*Fear of Knowledge*,”*Episteme*4 (1): 11–29. - –––, 2001, “Nominalism, Naturalism, and
Epistemic Relativism,”
*Philosophical Perspectives*15: 69–91. - Rotondo, Andrew, 2013, “Undermining, Circularity, and
Disagreement,”
*Synthese*190 (3): 563–584. - Rowbottom, D. P., 2018, “What is (dis)agreement?”
*Philosophy and Phenomenological Research*97 (1):223–236. - Sampson, Eric, 2019, “The Self-Undermining Arguments from
Disagreement,”
*Oxford Studies in Metaethics*14: 23–46. - Scanlon, Thomas, 2014,
*Being Realistic About Reasons*, New York: Oxford University Press. - Schafer, Karl, 2015, “How Common is Peer Disagreement? On
Self‐Trust and Rational Symmetry,”
*Philosophy and Phenomenological Research*91 (1): 25–46. - Schoenfield, Miriam, 2015, “A Dilemma for
Calibrationism,”
*Philosophy and Phenomenological Research*91: 425–455. - –––, 2014, “Permission to Believe,”
*Noûs*48 (2): 193–218. - Setiya, Kieran, 2012,
*Knowing Right from Wrong*. New York: Oxford University Press. - Simpson, Robert Mark, 2013, “Epistemic Peerhood and the
Epistemology of Disagreement,”
*Philosophical Studies*164 (2): 561–577. - Skipper, Mattias and Steglich-Petersen, Asbjørn (eds.),
2019,
*Higher-Order Evidence: New Essays*. New York: Oxford University Press. - Sosa, Ernest, 2010, “The Epistemology of
Disagreement,” in
*Disagreement*, Richard Feldman and Ted Warfield (eds.), New York: Oxford University Press, pp. 274–293. - Staffel, Julia, 2021, “Transitional Attitudes and the
Unmooring View of Higher-Order Evidence,”
*Noûs*57 (1): 238–260. - Tersman, Folke, 2013, “Moral Disagreement: Actual vs.
Possible,” in Diego Machuca (ed.),
*Skepticism and Disagreement*, New York: Routledge, pp. 90–108. - Thune, Michael, 2010a, “Religious Belief and the
Epistemology of Disagreement,”
*Philosophy Compass*5 (8): 712–724. - –––, 2010b, “‘Partial
Defeaters’ and the Epistemology of Disagreement,”
*Philosophical Quarterly*60 (239): 355–372. - Thurow, Joshua, 2012, “Does Religious Disagreement Actually
Aid the Case for Theism?” in Jake Chandler and Victoria Harrison
(eds.),
*Probability in the Philosophy of Religion*, Oxford: Oxford University Press. - Titlebaum, Michael, 2015, “Rationality’s Fixed Point
(Or: In Defense of Right Reason),”
*Oxford Studies in Epistemology*vol. 5, Oxford: Oxford University press, pp. 253–294. - –––, forthcoming, “Disagreement and
Permissivism,” in M. Baghramian, J.A. Carter, and R. Rowland
(eds.),
*Routledge Handbook of Disagreement*, New York: Routledge. - Titlebaum, Michael and Kopec, Matthew, 2019, “When Rational
Reasoners Reason Differently,” in M. Balcerak Jackson and B
Balcerak Jackson (eds.),
*Reasoning: New Essays on Theoretical and Practical Thinking*, Oxford: Oxford Academic, pp. 205–231. - Tozsér, János, 2023,
*The Failure of Philosophical Knowledge: Why Philosophers are not Entitled to their Beliefs*. Bloomsbury. - Turnbull, Margaret Greta and Sampson, Eric, 2020, “How
Rational Level-Splitting Beliefs Can Help You Respond to Moral
Disagreement,” in M. Klenk (ed.),
*Higher Order Evidence and Moral Epistemology*, New York: Routledge, pp. 239–255. - van Inwagen, Peter, 1996, “It is Wrong, Always, Everywhere,
and for Anyone, to Believe Anything, Upon Insufficient
Evidence,” in J. Jordan and D. Howard-Snyder (eds.),
*Faith, Freedom, and Rationality*, Hanham, MD: Rowman and Littlefield, pp. 137–154. - Vavova, Katia, 2014a, “Moral Disagreement and Moral
Skepticism,”
*Philosophical Perspectives*28 (1): 302–333. - –––, 2014b, “Confidence, Evidence, and
Disagreement,”
*Erkenntnis*79 (1): 173–183. - Wagner, Carl G., 2011, “Peer Disagreement and Independence
Preservation,”
*Erkenntnis*74(2): 277–288. - Walker, Mark, 2022a, “A Paradox about our Epistemic
Self-Conception: Are you an Uber Epistemic Superior?”
*The International Journal for the Study of Skepticism*12(4): 285–316. - –––, 2022b, “Epistemic Permissiveness and
the Problems of Philosophical Disagreement,”
*Dialogue*61 (2): 285–309. - –––, 2023,
*Outlines of Skeptical-Dogmatism: On Disbelieving our Philosophical Views*, Lexington: Lexington Books. - Weatherson, Brian, 2013, “Disagreements, Philosophical and
Otherwise,” in Jennifer Lackey and David Christensen (eds.),
*The Epistemology of Disagreement: New Essays*, Oxford University Press. - Wedgwood, Ralph, 2010, “The Moral Evil Demons,” in R.
Feldman and T. Warfield (eds.),
*Disagreement*, Oxford: Oxford University Press, pp. 216–246. - Weber, Marc Andree, 2017, “Armchair Disagreement,”
*Metaphilosophy*48 (4): 527–549. - White, Roger, 2005, “Epistemic Permissiveness,” in J.
Hawthorne (ed.),
*Philosophical**Perspectives: Epistemology*, vol. 19, Malden, MA: Blackwell Publishing, pp. 445–459. - –––, 2007, “Epistemic Subjectivism,”
*Episteme: A Journal of Social Epistemology*4 (1): 115–129. - –––, 2009, “On Treating Oneself and Others
as Thermometers,”
*Episteme*6 (3): 233–250. - –––, 2013, “Evidence Cannot be
Permissive,” in M. Steup, J. Turri, and E. Sosa (eds.),
*Contemporary Debates in Epistemology*, New York: Blackwell, pp. 312–323. - Wietmarschen, Han van, 2013, “Peer Disagreement, Evidence,
and Well-Groundedness,”
*Philosophical Review*122 (3): 395–425. - Wilson, Alastair, 2010, “Disagreement, Equal Weight and
Commutativity,”
*Philosophical Studies*149 (3): 321–326. - Worsnip, Alex, 2014, “Disagreement About Disagreement? What
Disagreement About Disagreement?”
*Philosophers Imprint*14 (18): 1–20. - Zagzebski, Linda, 2012,
*Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief*, New York: Oxford University Press.
## Other Internet Resources
- Religious Disagreement entry by John Pittard, in the Internet Encyclopedia of Philosophy.
### Acknowledgments
This research has been supported by the programme Mobilitas Pluss project MOBTT45 and the Centre of Excellence in Estonian Studies (European Regional Development Fund) and is related to research project IUT20-5 (Estonian Ministry of Education and Research).
# How a GitHub Issue Evolved into a business

Martin Donadieu
https://capgo.app/blog/birth-of-capgo-my-challenging-journey-as-a-solo-maker/
## The Genesis: A Community Request
The seeds of Capgo were actually planted long before I began my journey as a solo maker. On July 8, 2020, a community member named alexcroox submitted a plugin request that would eventually become the blueprint for Capgo.
This request outlined the need for a “Capacitor Hot Code Push” plugin with the following key points:
- **Platforms**: Support for both Android and iOS.
- **Existing Solutions**: It highlighted the limitations of current options like MS Code Push (which lacked Capacitor support) and App Flow (which was expensive and inflexible).
- **Description**: The ability to update js/css/html of an app in real-time without going through the app store review process.
- **Key Features**:
  - Facilitate over-the-air updates from a server/endpoint of the developer’s choosing.
  - Download a zip file of the updated dist folder, extract it, and tell Capacitor to launch from this new directory.
  - Additional features like update verification, installation timing, and selective downloading of updates.
This comprehensive request garnered significant community support, with 65 likes and 25 heart reactions. It clearly demonstrated a strong demand for such a solution in the Capacitor ecosystem.
When I came across this request over a year later, it resonated deeply with the challenges I was facing in my own projects. It served as both validation of the need for such a tool and a roadmap for what would become Capgo.
The community’s enthusiasm for this proposed plugin, combined with my personal experiences, became the driving force behind Capgo’s development. It’s a perfect example of how open-source communities can identify needs and inspire solutions, even if the timeline from idea to implementation spans over a year.
## A New Chapter Begins
Before diving into the Capgo story, it’s important to set the stage. In 2021, I made a life-changing decision to quit my role as CTO of Cashstory and sell my shares. This marked the beginning of my journey as a solo maker, a path filled with uncertainty but also endless possibilities.
With my savings as a safety net, I embarked on a new adventure. I was living as a digital nomad in Lisbon, Portugal, embracing the vibrant tech scene and culture of the city while focusing on my passion projects. My primary focus was Captime, a CrossFit timer mobile app. Little did I know that this project would lead me to create something much bigger.
The energy of Lisbon’s startup ecosystem and the freedom of the digital nomad lifestyle provided the perfect backdrop for innovation. It was in this environment, surrounded by fellow entrepreneurs and developers from around the world, that the seeds of Capgo were sown.
## The Spark of an Idea
While working on Captime, I encountered a significant hurdle - the lack of an affordable and flexible update solution for Capacitor apps. In October 2021, I voiced these concerns on a GitHub thread.
The main pain points I identified were:
- High costs for small-scale developers
- Lack of over-the-air (OTA) updates in affordable plans
- Unnecessary features for solo developers
## The Community Resonates
My concerns struck a chord with other developers. Many echoed the sentiment that existing solutions were overpriced for indie developers and small teams.
One developer summarized the community’s feelings:
“It would be brilliant if the Community plan included 500 live updates. Or better yet, if there was a Live Update only package for $50/month that included 5,000 Live Updates.”
## The Birth of a Solution
Motivated by the community’s response, I decided to take matters into my own hands. On October 24, 2021, I announced my plan to build a module that would allow developers to download updates from a given URL.
The initial goals were simple:
- Download data from a URL
- Unzip the data
- Replace the current code with the new one
However, turning this simple idea into reality proved to be far more challenging than I initially anticipated.
## The Struggle Behind the Scenes
What isn’t apparent from the GitHub thread is the sheer complexity of the task I had undertaken. The code required to implement this functionality was obscure and hard to understand. I found myself grappling with intricate details of how Capacitor apps handle updates and file systems.
Many nights were spent in my van, poring over documentation and experimenting with different approaches. Progress was slow, and there were times when I questioned whether I had bitten off more than I could chew.
## Community to the Rescue
Fortunately, I wasn’t alone in this journey. The developer community, particularly on Discord, proved to be an invaluable resource. Fellow developers offered their insights, helped debug issues, and provided encouragement when the going got tough.
This collaborative effort was crucial in overcoming the technical hurdles. It reinforced my belief in the power of open source and community-driven development.
## Rapid Development and Expanding Capabilities
With the help of the community, development began to accelerate. By November 22, 2021, I had a working version for iOS and was improving the developer experience.
As development progressed, I added more features:
- Android support
- Persistence between app kills
- The ability to revert to the original app version
Each new feature brought its own set of challenges, but also a sense of accomplishment as the project grew beyond its initial scope.
## The Launch of Capgo
By March 2022, the project had evolved into a full-fledged product: Capgo. I announced the release of an auto-update mode, allowing developers to connect to their own backend or use Capgo’s backend service.
The community’s response was overwhelmingly positive, with developers praising this much-needed solution.
## The Pivot to a Paid Product
Initially, I had no plans to monetize Capgo. My goal was simply to create a tool that would solve a problem I and other developers were facing. However, the feedback on GitHub made me reconsider this stance.
Developers were expressing a willingness to pay for a solution that met their needs at a fair price point. This feedback, combined with the realization of the ongoing costs and effort required to maintain and improve Capgo, led to a pivotal decision.
On June 11, 2022, I announced that Capgo would start charging for usage in 15 days, marking its transition from a community project to a sustainable business.
However, staying true to the project’s roots, I maintained Capgo’s open-source core by allowing free use of the plugin in manual mode or with a custom server.
## Conclusion
My journey with Capgo is a testament to the power of community-driven innovation and the unexpected paths that solo makers often find themselves on. What started as a personal frustration while working on a crossfit timer app grew into a robust, affordable, and flexible live update system for Capacitor apps.
The creation of Capgo was far from easy. It required countless hours of work, the support of a generous developer community, and a willingness to pivot based on user feedback. From coding in an Airbnb in Portugal to launching a paid product, every step of this journey has been a learning experience.
As Capgo continues to evolve, it stands as a prime example of how identifying a gap in the market, actively working to fill it, and being responsive to community needs can lead to the creation of valuable tools that benefit the entire developer ecosystem.
The story of Capgo is more than just the development of a tool; it’s a story of perseverance, community, and the exciting unpredictability of life as a solo maker.
You can find the full story here.
740,537 |
http://www.cis.hut.fi/projects/ica/fastica/
|
Independent Component Analysis (ICA) and Blind Source Separation (BSS)
| null |
The FastICA package is a free (GPL) MATLAB program that implements the fast fixed-point algorithm for independent component analysis and projection pursuit. It features an easy-to-use graphical user interface, and a computationally powerful algorithm.
ICASSO: analysing and visualising the reliability of independent components
ISCTEST: principled statistical testing of independent components
FastICA in C++ (part of IT++ package)
FastICA in Python as part of MDP package
FastICA in Python as part of scikit-learn package
Short description of the FastICA algorithm
What is independent component analysis? -- papers and links
Independent Component Analysis: The Book
FastICA copyright and bug reports
ICA research at the Helsinki University of Technology
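For a quick hands-on taste of the algorithm, here is a minimal sketch of my own using the scikit-learn implementation linked above (synthetic mixed signals, not code from the FastICA MATLAB package itself):

```
import numpy as np
from sklearn.decomposition import FastICA

# Two toy sources (a sine and a square wave), mixed by a random 2x2 matrix.
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]
observed = sources @ rng.normal(size=(2, 2)).T  # the "recorded" mixtures

# Estimate statistically independent components from the mixtures alone.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)
print(recovered.shape)  # (2000, 2): one column per estimated source
```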
| true | true | true | null |
2024-10-12 00:00:00
|
2013-03-05 00:00:00
| null | null | null | null | null | null |
838,312 |
http://www.influxinsights.com/blog/article/2387/business-is-more-volatile--deloitte-proves-it.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,492,582 |
https://munchies.vice.com/en_us/article/j5bnd7/how-to-make-wine-in-an-instant-pot?utm_source=vicefbuk
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
23,871,266 |
https://lareviewofbooks.org/article/armed-and-dangerous-does-technology-make-for-better-policing
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
23,793,145 |
https://headway.io/blog/creating-brand-guidelines-to-improve-your-companys-identity
|
Getting Started with Brand Guidelines - Free Figma Template
|
Sam Pecard Senior Designer
|
# Creating Brand Guidelines to Improve Your Company's Identity
Learn how brand guidelines can help you gain control of your brand's reputation and more.
## More than a logo
What is your company's identity? Would you have a clear answer if someone asked? What about guidelines, ensuring your brand message is clear and concise? In this article, you will learn how to establish and define your company's brand guidelines. Countless decisions go into defining a brand - but the key is knowing what those decisions are, and how to manage them effectively.
Because your brand is more than just a logo.
### Effective branding is consistent branding
When it comes to great brands, consistency is key. If your end goal is to create a deep awareness of how your brand is represented and perceived by your customers, then the message has to be consistent and clear from the beginning. If your brand is more than a logo, how do you define it? And how do you make those things consistent?
## What is branding?
A brand is a result. A customer's gut feeling about a product, service, or company.
Marty Neumeier, Author of The Brand Gap
In the video below, Marty Neumeier shares his thoughts on what branding is and more importantly, what branding *is not*. While many think branding is just the logo and colors, branding is your reputation. How your customers perceive you. A gut feeling. Each individual has their own version of who your brand is in their head and heart.
### A brand is a reputation
Below are the components that make up a company's brand:
#### Products and services
How do customers feel when they use them?
#### Design
How do customers feel when they see associated design elements?
#### Messaging
How do customers feel when they read and hear what you say?
#### Look and feel
What type of energy do you convey with the combined elements above?
#### Culture
How do customers feel about what you believe collectively as a company?
#### Employee behavior
How do your employees make your customers feel?
## How do you manage a brand reputation?
One way to manage a reputation is to have brand guidelines. These rules help hold everyone on your team accountable and create alignment in everything you do. In this article, let's begin with managing your organization's visual reputation to lay the foundation. After you get that established, you can dive deeper into other aspects of your brand reputation such as culture and employee behavior with your own team.
## What are brand guidelines?
"Brand guidelines" are the documented elements and rules that apply all aspects of your brand. It lives as a source of truth and holds designers and marketers accountable when creating new material. With everything that is created, designers and marketers should ask - "Does this hold true and align with what is stated in the brand guidelines?"
**FREE Download - Brand Guideline Template at Figma Community**
## Why are brand guidelines important?
Brand guidelines keep everyone aligned on each aspect of your brand reputation and empower your entire team to make decisions faster. Below are the five key factors behind effective brand guidelines.
### 1. Sets the foundation
What makes your brand, *Your Brand*? How will users not only identify you among competitors - how will they identify *with* you?
Thinking through and identifying the rules and standards that make your brand will be the foundation that you will build off of. This can be intimidating and overwhelming, but remember that it's okay for your brand to evolve. Your guidelines can and should be a living document.
Photo Credit: Ueno
In the example above, the flag in the logo was pulled out, showcasing a unique way to feature images. It was then expanded even further by using just the angle or circle and an extension of that. The photo treatments along with the defined color palette contribute to what makes the Clubhouse brand.
Photo Credit: Ueno
### 2. Ensures brand consistency
Consistency in your brand ensures a single unified experience across all platforms. Brand guidelines increase the chances that customers perceive your brand in the same way, rather than having fragmented experiences. You gain control of the conversation.
**Example:** Your Facebook profile should look and communicate the same message as your website.
### 3. Keeps your brand recognizable
Because of the saturation of the market you may be in, it's crucial to differentiate yourself from your competitors and become easily recognizable. This also doesn't mean you need to reinvent the wheel or do something obnoxious.
### 4. Increases perceived value
What brands do you find yourself gravitating toward? What about that brand helps you identify and connect with it?
Photo Credit: Giga Tamarashvili
### 5. Saves time
If you are a part of a team where there are multiple designers, it's easy for everyone to interpret things differently. This can lead to an inconsistent brand or time wasted in rework. Having a set of brand guidelines prevents people from going off course.
Photo Credit: Brass Hands
## What to include in brand guidelines
Now that we are aware of the impact of a well-defined brand, how do we actually create one? Where does it live and what all should it include?
As a company, you decide how brief or in-depth each of these categories is. And we know it can be a daunting and time-consuming task to take on. However, the benefits of having and holding to your Brand Guidelines will prove invaluable.
Use the following as a starting point to get you thinking of what needs to be considered and defined. By no means is this an all-inclusive list. Feel free to add/subtract to it to fit your specific company's needs.
## Where do you keep your brand guidelines?
At Headway, we use Figma for all things design - so this is where the *source of truth* for our Brand Guidelines live. Figma is a design tool that's based in the browser so it works perfectly for setting up and utilizing a Design System.
## Managing brand guidelines with Figma
Within Figma, you can turn any file into a "library", which can then be referenced from any other file that you choose. Within that library, the components that are created are considered master components (i.e., the *source of truth*), which all other instances reflect. So every instance that is used points back at the master component - meaning, if an update is made in the master component, it will be rolled out into all the instances. This is an incredibly powerful way to handle your brand.

If used properly, no one on the team will ever have to ask questions like, "Why aren't you using the updated logo?" "Are we still using these colors?" or any of the other hundreds of questions that get asked. Using tools and systems like Figma helps ensure that your brand guidelines stay alive and relevant and are easily accessible to anyone on your team.
We hope that knowing the value of brand guidelines and how to create them in a meaningful way challenges you to create and implement them for your company. As your company and products grow, having these guidelines in place will help you manage your public image more effectively.
Contact us to help your company create and define your brand. Together, we can help you stand out, become recognizable, and create a consistent brand image that is crucial in today's market.
## Brand Guidelines Template - Free Download
This guide serves as a template to put in your existing brand assets as well as to get you thinking about ways to extend or refine what already exists.
Free Download - Brand Guideline Template at Figma Community
## What you'll find in the Brand Guidelines Template
Below is a list of all the assets you can have documented within the template we created for you. You can also use this list as inspiration to start your own template in your tool of choice.
### Logo
- Logo Size and Placement
- Construction
- Padding and Minimum Size
- Approved Lockups
- Logo Colors and Treatment
- Logo Misuses (what not to do - everyone interprets things differently, so having what not to do is important)
### Color
- Identify Brand Colors and Consistency
- Hex
- RGB
- CMYK
- Pantone
### Composition
- Patterns
- Accents & Graphic Elements
### Iconography
- Iconography
### Illustration Style
### Photography
- Photography Style
- Goals and Context
- Visual Style
- Color Grade
- Subject Matter
- Special Effects
### Tone of Voice
- Voice and Tone
- Brand Descriptors
### Typography
- Type Scale
- Fonts/Typography
## Brand guidelines examples and inspiration
See how Uber shares and documents the different elements of their brand with their team and partners.
As a giant social platform, take a look at how Twitter has documented their brand assets.
Spotify Brand Guidelines for Developers
A hub for partner guidelines and assets to make it easy for you to integrate Spotify in an app.
## Learn more about creating your own brand guidelines and documentation
What is a brand voice and why is it important?
When people think about brands, they often think about their visual identity. However, there’s another element that often gets overlooked: voice and tone.
Everything You Need to Know About Picking Brand Fonts
With your brand personality in mind, you’ll be ready to enter the wild world of fonts and typography.
Develop Your Brand's Photography Style Guide
With research suggesting that people are more likely to remember content they’ve seen in images, rather than text, it’s crucial that the photography representing your business perfectly reflects your brand.
Brand Illustration 101: Visualizing the Narrative
See how you can build a brand illustration system that clarifies a brand’s promise, often with a nod to human experience (humor, hope, irony, etc.).
| true | true | true |
Learn how brand guidelines can help you gain control of your brand's reputation and more.
|
2024-10-12 00:00:00
|
2020-07-06 00:00:00
|
website
|
headway.io
|
headway.io
| null | null |
|
11,386,634 |
http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/ISO-IMAGES/10.3/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,610,900 |
http://www.newscientist.com/article/dn21494-single-atom-transistor-gets-precise-position-on-chip.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,486,684 |
http://plan.io/blog/post/150350285058/a-short-guide-to-mentoring-why-its-useful-why
|
A short guide to mentoring: why it's useful, why you should be doing it, and ... | Planio
|
Belle Beth Cooper
|
# A short guide to mentoring: why it's useful, why you should be doing it, and how to find your own mentee
When people accept awards they tend to thank a few sets of people every time: friends and family, colleagues, people who gave them the opportunities that led to winning the award, **and their mentors**.
Whether it's a teacher they had years ago, or a more senior colleague who took time out of their day to show them the ropes and help them improve, mentorship sticks with us long after the relationship has ended.
We remember those relationships because they change us.
Mentorship provides a safe, trusting environment for us to learn, improve our skills, and—perhaps most importantly—ask questions.
I haven't found a type of learning that increases my knowledge and improves my skills and output as quickly as mentorship. Having someone around who can answer my questions and help me fix my mistakes is an invaluable resource.
Mentorship provides a safe, trusting environment for us to learn, improve our skills, and ask questions.
But when you've never mentored someone before, it can be hard to know where to get started. And for many of us, the thought of portraying ourselves as knowledgeable enough to mentor someone else stops us even considering the idea.
But you probably have much more to offer as a mentor than you think. And you might be surprised to find that *you* benefit from the experience as much as your mentee.
## Benefits of mentoring
Some workplaces have mentoring opportunities built-in. This is what Barbara Kotlyar from SmartBear.com calls *formal mentoring*. If there's a buddy system at your work where you're paired with a new employee to show them the ropes, that's formal mentoring.
*Natural mentoring*, on the other hand, happens organically as part of your workflow. This is when you happen to get along well with a new employee and take them under your wing while they're learning. Or when you're paired with a less experienced team member on a project, and you share your knowledge and experience with them as you work together.
According to Kotlyar, this is the most beneficial kind of mentoring:
People adopt lessons faster when they are applied every day.
It doesn't mean formal mentoring isn't useful, though. When Sun Microsystems implemented a mentorship program, tracking about 1,000 employees at Sun over five years showed the mentees were promoted *five times* more often than those who didn't receive mentoring.
And it wasn't only mentees that benefitted. Mentors were *six times* more likely to be promoted than colleagues who didn't participate in the program. Not only that, but both mentees *and mentors* were approximately 20% more likely to get a raise. And here again, mentors fared even better than mentees: 28% of mentors, and 25% of mentees were given a raise.
So mentoring makes sense for everyone involved. But it doesn't only apply in the context of colleagues. Christian Reber, CEO and co-founder of 6Wunderkinder (the company behind Wunderlist) says "Mentors became crucial for me" while building Wunderlist.
Reber still relies on mentors to provide advice these days:
Every time we're not quite sure about the next step we should take, we ask for advice.
Reber says mentors can provide you with perspectives you wouldn't have otherwise, and that without his early mentors, Wunderlist simply wouldn't exist.
Mentors are a great way of widening your horizon when you start a business, and I believe every founder should have one or more.
Rachel Ober, a senior developer at Paperless Post, has been mentoring for years. She says mentoring is a two-way street where everyone benefits.
Mentors often learn new things from their mentees, she says, since mentees spend more time actively learning than a more experienced mentor does.
Of course, mentees learn a lot, but they can also gain confidence and improve their skills from a mentoring relationship.
## For those who aren't sure they're ready to be mentors
Ober says it's common to think you have nothing to offer as a mentor, but this is often tied to imposter syndrome.
Even if you're not sure about mentoring someone to improve their skills, Ober says your everyday experience can be beneficial—especially to someone starting out. She says she's often found her mentees want to know about her typical day at work, or what to expect from a job interview.
If someone is starting their career from scratch, she says, you can offer guidance and knowledge about what to expect. It doesn't matter what you think about your own skills—just share your experience.
Ober also says sharing your weaknesses builds trust and helps your mentee develop confidence. It's okay to say you don't know something, she says, and it can make your mentee feel better to know their mentor is not infallible.
## Setting up a successful relationship with your mentee
Suppose you're now ready to get started mentoring someone for the first time. What should you keep in mind?
### Start by setting goals for your relationship
Ober suggests agreeing on goals and parameters early on, so you're both on the same page about your relationship. Figure out whether your mentee wants specific skill-based help, or more general guidance about their career moves, says Ober.
She also suggests agreeing on parameters like *when* and *where* you'll meet, *how often* and for *how long*.
### Reduce the feedback cycle
Ober suggests meeting regularly, so you can continue guiding your mentee along the right path. If you set them up and leave them alone for too long, she says, they could waste time and energy going down the wrong path. It's important to keep steering them back in the right direction as they progress, she says.
### Don't assume knowledge
When moving on to something new, Ober suggests always checking if they know about it. Talking over your mentee's head won't benefit either of you—you'll be wasting your time, and they won't be learning. Over time they'll get used to you asking what they know, though Ober says at first you may need to push through the awkwardness of always checking their knowledge.
### Don't touch their keyboard
Developers on StackExchange have debated whether you should tell your mentee how to do something or help them figure it out for themselves, but they agree on one thing: you shouldn't do it for them. Whether your equipment is a keyboard, a pen, or a shovel, let your mentee do the work so they can experience what it feels like to do something the right way.
One final note I should cover: where to find a mentee. You may come across potential mentees at work, whether through a formal mentoring program organized by your company, or through natural mentoring.
But if you don't have anyone to mentor at work, one option is to volunteer to help out with training programs. Bootcamps, code schools, or training organizations tend to leave students hungry to learn more, or even look for their first job in a new field. These students usually won't have a big network to draw on when looking for a mentor, and will have lots of questions about starting out in a new career, and how to improve their skills.
Ober says she's been able to mentor students from the Turing School in Colorado, even though she's in New York, using tools like Google Hangouts. Ask your local code school, training bootcamp, or short course organization if they could use volunteers during courses. Once you've met some students, you'll be able to offer mentoring to those who need it.
| true | true | true |
When people accept awards they tend to thank a few sets of people every time: friends and family, colleagues, people who gave them the opportunities that led to winning the award, and their mentors. Whether it's a teacher they had years ago, or a more senior colleague who took time out of their...
|
2024-10-12 00:00:00
|
2016-09-13 00:00:00
|
article
|
plan.io
|
Planio
| null | null |
|
6,822,931 |
https://play.google.com/store/apps/details?id=com.thirtymatches.rearview
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
10,386,447 |
http://www.faroo.com/hp/p2p/p2p.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
21,343,332 |
https://www.omnisci.com/blog/visualizing-1.7-billion-stars-in-the-galaxy-at-the-speed-of-light
|
Visualizing 1.7 Billion Stars in the Galaxy at the Speed of Light
|
Samantha Chappell Oct
|
# Visualizing 1.7 Billion Stars in the Galaxy at the Speed of Light
### Try HeavyIQ Conversational Analytics on 400 million tweets
### Download HEAVY.AI Free, a full-featured version available for use at no cost.
GET FREE LICENSEAt a data conference on the University of Southern California campus, I came across the OmniSci booth and was immediately struck by a geo heat map of building footprints in New York City. During the following conversation about the OmniSci platform, its real-time visualization and data query abilities, it came up that I previously worked in academic astronomy. OmniSci just so happened to have their hands on Gaia data (space observatory with the goal of creating a map of our Galaxy). From there, the conversation quickly turned into a discussion about collaborating to visualize stellar positions and draw insights from this star data.
**Mapping Stellar Positions with Gaia**
Located 1.5 million km from the Earth (past the orbit of the Moon), Gaia travels with the Earth, constantly on the other side of the planet from the Sun. While shooting a telescope into space is a challenge, it’s worth it to be able to observe around the clock and not have to worry about a planet’s atmosphere getting in the way.
Gaia’s main mission is determining star positions in the Milky Way with the Gaia galaxy map. Its latest galaxy data release contains 1.7 billion stars, a majority of which have observations regarding velocity and full three-dimensional positions. Fun fact: while 1.7 billion data points is certainly impressive, this only accounts for about 1% of stars in our galaxy. Most of the 100 billion stars are less bright than our Sun or are behind gas and dust, making them too dim to be detected.
The Gaia dataset is more than just stellar positions. 1.3 billion stars have velocities in Right Ascension and Declination (x and y in the plane of the sky) and parallax (proxy for distance, or a z position), 1.4 billion have measured radial velocities (velocity towards/away from the Sun). Gaia has fit radius, temperature, extinction (how much dust is in front of it), and luminosity for over 70 thousand stars. The table below summarizes the dataset completeness:
## The Dataset in 7 Features
Here are some useful definitions to keep in mind while reading this blog post:
- **Proper Motion**: Velocity in RA and Dec (x and y), in units of degrees over time. Velocities are fit by tracking the positions of stars over time.
- **Radial Velocity**: Speed toward/away from the Solar System. Measured by comparing observed stellar spectra (light passed through a prism) to known profiles of stars. The magnitude and direction of the difference between spectra is due to the Doppler Effect (light shifted to longer or shorter wavelengths).
- **Parallax**: A proxy for distance. Similar to how our eyes determine distance, where a star is detected changes as the Earth travels through its orbit around the Sun. The greater this angular difference, the shorter the distance.
Note: Due to the specifics of Gaia’s data pipeline, a more involved treatment is required to calculate accurate distances from this dataset. For 3D stellar positions and star data analysis, check back for a future blog post.
- **Luminosity**: How much light a star emits, measured in terms of the Sun's luminosity.
- **Magnitude**: Apparent brightness of a star, dependent on a star's luminosity, distance, and how much gas and dust is between us and the star. This is what is directly observed (luminosity is produced by a fit).
- **Effective Temperature**: Characteristic temperature of a star, in units of Kelvin. The effective temperature of the Sun is 5,777 K. Fun fact: the temperature of a star varies across its layers and only the innermost layers are hot enough to undergo fusion, powering the star as a whole.
- **Extinction**: Magnitude of gas and dust between us and a star.
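To make the parallax-to-distance relationship concrete, here is a tiny illustrative Python sketch of my own (with made-up sample values; as the note above says, rigorous distances from Gaia parallaxes require a more careful statistical treatment):

```
# Naive inversion: distance [parsec] ~ 1000 / parallax [milliarcseconds].
parallaxes_mas = [768.0, 10.0, 0.5]  # hypothetical values: very nearby, nearby, distant

for p in parallaxes_mas:
    distance_pc = 1000.0 / p
    print(f"parallax = {p:7.1f} mas  ->  distance ~ {distance_pc:9.1f} pc")
```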
**Creating Stellar Insights with an Interactive Galaxy Map**
With the galaxy visualization dashboard constructed, we can examine the star data in context and draw insights.
### Galaxy Rotation
By highlighting the negative/positive radial velocity bins, we can see the rotation of the disk and central bulge of the Galaxy (figure 1). Different radial velocity bins also show co-moving parts of the Galactic disk, showing the effect of spiral arms and the complicated way stars orbit in the Milky Way. Overall stellar motions, while still not well understood, are what supports the Galaxy and its structure. Similarly, different bins in velocity in RA and Dec also demonstrate global motions. The lasso tool can be used to examine velocity trends (figure 2) and thus large scale motion and rotation in the Magellanic Clouds (the satellite galaxies below our Galactic disk). While these dwarf galaxies are small and far less massive, we can still see and categorize organized, global motions that determine the galaxies’ shapes.
### Galaxy Evolution
Looking at the dimmest stars (log, base 10, of solar luminosity), we see that these stars are cooler than the Sun (5,777 Kelvin) and are smaller (radii in terms of log solar radius). This shows that dimmer stars in the Galaxy are mostly main sequence stars, like the Sun, but smaller in mass. Stars begin and spend roughly 90% of their lifetime on the main sequence, fusing hydrogen into helium at their core. On the main sequence, everything scales with mass: smaller mass means a cooler temperature, smaller radius, and a dimmer star.
We can also see that in terms of stellar positions, this population of main sequence stars is spread out, not as confined to the disk (figure 3). This demonstrates the population is composed of older stars. Star formation is generally restricted to the disk of the Galaxy. If stars live long enough, they can travel away and their positions and velocities are not determined by the physics of the disk. This phenomena is also seen in the less peaked distributions in RA and Dec velocities. Further study of this population’s motions would produce further insights into stellar dynamics within the Galaxy and the physics that determine that phenomena.
### Star Formation
In contrast to the dimmest stars, the most luminous stars are more confined to the disk (where there are also gaps due to gas and dust, things that go hand-in-hand with star formation). Interestingly, while this population does have stars with temperatures greater than that of the Sun, most still have lower temperatures (figure 4). This suggests a population dominated by red giants, stars that have moved based on the main sequence and are larger and cooler due to the inner workings of their fusion.
With the presence of stars hotter than the Sun, we know that this population also has some fraction of main sequence stars that are more massive than the Sun and potentially, some supergiants (red giants but more massive). These more massive stars burn brighter and quicker than the Sun, making them indicators of more recent star formation. A more detailed fit would be required to determine ages, but already we can get insights about the Milky Way’s history and narrow in on populations of interest.
### Gas and Stardust
The largest extinction bins show the location of large amounts of gas and dust (figure 5). Gas and dust redden and deflect light, causing stars to appear dimmer. We can see from positions that gas and dust are confined to the Galactic disk. Though gas and dust make star visualization more difficult, they are both a part of and the result of star formation and death. This suggests large numbers of stellar deaths, and probably large, massive stars, in the Galactic disk and bulge. A closer examination of the substructures in the gas/dust distribution (filament-like structures) would produce even more insights into the star formation and death history of the Galaxy.
**Star and Galaxy Data Analysis**
Due to its size and range, Gaia’s dataset is both incredibly important and challenging to examine. While traditional galaxy data analysis tools may take light years to interact with, by utilizing OmniSci and the computational power of GPUs, we are able to handle both the scale of galaxy data and zoom in on details and subpopulations of interest with ease. Immerse, when used as galaxy map software, allows us to gain insights and point the way for future work into the motion, composition, and history of the Milky Way, the Galaxy we call home. In my next blog post, I’ll take a deeper dive into stellar positions and velocities by using a clustering algorithm to analyze stellar dynamics and Galactic physics.
| true | true | true |
The Gaia space observatory has measured 1.7 billion stellar positions. Using OmniSci Immerse, we create an interactive map of the galaxy to visualize this star data at various scales, to draw insights about stars and the Milky Way.
|
2024-10-12 00:00:00
|
2019-10-23 00:00:00
|
website
|
heavy.ai
|
Omnisci
| null | null |
|
10,379,354 |
https://blog.chaps.io/2015/10/13/torrent-client-in-haskell-2.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
33,047,850 |
https://www.city-journal.org/remote-work-and-the-state-tax-war
|
A State Tax War
|
Steven Malanga
|
The Supreme Court’s unwillingness to intervene in a fight among states over taxing income from remote work may spark a jurisdictional revenue war. In August, the Court refused to take up a lawsuit by New Hampshire against Massachusetts’ practice of levying income taxes on Granite State residents employed by Bay State companies but working from home during the Covid lockdowns. Now New Jersey officials, who filed an amicus brief in the case because the state’s telecommuting residents are similarly taxed by New York, have proposed a law that would let the state tax telecommuters, including possibly tens of thousands of Empire State residents now working from home but employed by Garden State companies. The in-your-face legislation also provides incentives for Jersey residents to challenge New York’s law in tax court—one of the only venues left to residents after the Supreme Court decision. Given that several hundred thousand New Yorkers once commuted to other states to work and may now be staying home to telecommute, Albany risks losing revenues.
Beginning in March 2020, Covid restrictions brought a sharp rise in telecommuting, or working remotely from home. Studies have suggested that, during the pandemic’s initial phases, up to 36 percent of all private-sector employees, or about 43 million people, worked at home at least one day a week, and 15 percent, or about 18 million, telecommuted full-time. Census data before the pandemic found that as many as 6 million workers regularly cross state lines to go to their jobs. So it’s likely that several million current telecommuters have jobs with firms in another state. In New Hampshire, about 15 percent of residents with jobs—some 84,000 workers—commuted to Massachusetts pre-pandemic.
Fearing the loss of taxes on many of these incomes after the lockdowns began, Massachusetts adopted in late April 2020 a special regulation declaring that even income that nonresidents derived by working for Massachusetts companies “outside Massachusetts would be subject to the state’s income tax.” New Hampshire sued, arguing that the move represented an infringement on its sovereignty under the Constitution, and calling the special Massachusetts rule an example of a state taking “direct aim” at residents of another state to solve its revenue problems. In its filing, New Hampshire reminded the Supreme Court that “it has long recognized that states have limited power to tax nonresidents,” and that in a previous ruling, the Court had said that taking revenues from nonresidents “when there is no jurisdiction or power to tax is simple confiscation.”
Ohio and nine other states filed an amicus brief in support of New Hampshire, arguing that the Constitution gave the Court sole jurisdiction over disputes between states. They urged the Court to intervene because the Framers of the Constitution viewed the creation of a legal venue for resolving interstate disputes as “essential to the peace of the union.” New Jersey and three other states (Connecticut, Iowa, and Hawaii) submitted their own brief in support of New Hampshire, contending that they were losing billions of dollars in tax revenues because five states besides Massachusetts (New York, Arkansas, Pennsylvania, Delaware, and Nebraska) also levy taxes on nonresidents for income they earn while working at home. Such taxes, the aggrieved states claimed, violate previous court rulings holding that taxes on interstate commerce must be “fairly apportioned”—meaning, in part, that they should apply only to an out-of-state taxpayer’s activities within a state. Like the other state briefs, Jersey’s also urged the Court to take the case because no other appropriate venue existed for hearing such disputes.
The Court, instead, turned to the Biden administration for guidance. It asked the solicitor general to weigh in. The administration’s legal office argued that the case did not merit the Court’s attention because it did not substantially affect New Hampshire’s sovereignty, and that individual New Hampshire taxpayers could litigate the issues in Massachusetts tax court. Though some legal experts derided the solicitor general’s filing as “absurd” and “grasping at straws,” the Court, apparently persuaded, denied New Hampshire’s request to take on the case.
New Jersey has now responded with legislation that enjoys the backing of lawmakers of both parties and the state’s governor, Phil Murphy, who was initially somewhat hesitant to retaliate. Under the proposed law, New Jersey would adopt the same controversial standard that New York uses to tax out-of-state residents—the “convenience of employer” rule, which says that a worker employed by a New York firm but working from home for his own convenience is judged to be present in the Empire State for tax purposes. Only a worker whose job requires him to be out-of-state (attending a meeting in another state, say) is exempt from New York taxation. By turning the tables, New Jersey’s legislation would mean that potentially thousands of New York residents employed by Jersey companies would now be subject to taxation, even if they don’t enter the state.
The bill seeks to recoup some of the money lost to New York. Before the pandemic, more than 400,000 New Jersey residents commuted into New York and paid the state about $3 billion a year in taxes. Many were traveling to Manhattan for high-paying office jobs. But workers have been slow to return to the office for those jobs in lucrative areas like finance, business services, and technology. Office occupancy in Manhattan remains at only around 45 percent of its pre-Covid figure, and Jersey officials estimate that at least $1 billion in taxes that New York is collecting from Jersey residents comes from telecommuters. Since Jersey offers those workers a credit on their resident state taxes for money paid to New York, Garden State officials argue their treasury is losing hundreds of millions of dollars or more in revenue.
While New Yorkers don’t commute to Jersey in quite the same numbers, tens of thousands could become subject to that state’s income taxes under the new law. Pre-Covid, Census surveys estimated that roughly 128,000 New Yorkers worked in New Jersey. Manhattan was the biggest supplier of cross-state workers; more than 23,000 made the trip. Brooklyn was second, with nearly 20,000 residents commuting to the Garden State; and Rockland County, bordering the northern part of Jersey, was third, with more than 17,500 commuters.
The proposed bill wages tax war on several other fronts. It offers Jersey residents tax credits if they challenge New York’s tax law in court and succeed. And the bill offers tax incentives for out-of-state firms employing New Jersey residents to open an office in-state, which could then become the home for telecommuting employees, exempting them from New York taxes.
New York is vulnerable to this turnabout because, though it’s one of the leading states that welcomes out-of-state commuters, it also ranks fourth nationally in residents working in other states: about 235,000 New Yorkers did so before the pandemic, according to a Census study. And Jersey now has become the second state to turn the tables on New York. In 2019, Connecticut began employing the convenience-of-employer rule to the taxes of telecommuters from states like New York that employ the same rule against Connecticut residents.
While more than 80,000 Connecticut workers commuted into New York before the pandemic, paying Albany $450 million in taxes, 45,000 New York residents also commuted into Connecticut before the pandemic to work for local firms. Connecticut can now demand taxes from any of those telecommuters. Jersey’s move, added to Connecticut’s, would chip away at the windfall that the convenience-of-employer rule provides to New York.
The increasingly acerbic face-off over telecommuters is just one front in a broader battle among the states over tax nexus—that is, the standard a state uses to determine whether a worker or company has enough “presence” locally to warrant being taxed. The digital age has brought new types of work and services—and numerous controversies over nexus. Tax authorities argue, for instance, that out-of-state firms selling software or cloud services to local firms have enough presence to be subject to state corporate taxes, even if they have no employees or offices in the state. As with telecommuting, the Supreme Court has refused to step into such cases, preferring to leave it to Congress to update legislation on interstate taxation. But Congress has been either uninterested in the subject or unable to reach a consensus.
The result is likely to be states passing increasingly bitter legislation against one another. Inevitably, taxpayers will lose in these battles.
**Photo: serggn/iStock**
| true | true | true |
The Supreme Court’s unwillingness to intervene in a fight among states over taxing income from remote work may spark a jurisdictional revenue war. In August, the Court refused to take up a lawsuit by New Hampshire against Massachusetts’ practice of levying income taxes on Granite State residents employed by Bay State companies but working from […]
|
2024-10-12 00:00:00
|
2023-03-23 00:00:00
|
article
|
city-journal.org
|
City Journal
| null | null |
|
2,556,712 |
http://torrentfreak.com/french-3-strikes-suspended-due-to-anti-piracy-security-alert-110517/?asid=03cabdde
|
French 3 Strikes Suspended Due To Anti-Piracy Security Alert * TorrentFreak
|
Andy Maxwell
|
On Saturday evening, with the invaluable assistance of blogger and security researcher Olivier Laurelli, aka Bluetouff, TorrentFreak first reported that Trident Media Guard (TMG), the private company entrusted to carry out file-sharing network monitoring for the French government, had been hacked.
As became evident, the term ‘hacked’ was probably overly generous to TMG, since according to Bluetouff the company had left the equivalent of its front door open.
“A virtual machine leaked a lot of information like scripts, p2p clients to generate fake peers, local physical addresses in the datacenter and even a password that could lead to a major global TMG security breach,” he explained.
TorrentFreak obtained and listed some of the files in question in our earlier report, but as the contents of the leak were examined in more detail, it became evident that TMG had not only leaked out its own data, but that belonging to the subjects of their monitoring.
The day after our report, Guillaume Champeau of Numerama, a publication which follows French file-sharing issues in-depth, contacted TorrentFreak to say he had been able to show that IP addresses linked to the 3-strikes process may also have been leaked. He informed the HADOPI agency of his findings, which led them to report that they were taking the matter "very seriously".
Indeed, that concern has been followed by an announcement from Eric Walter, the secretary-general of HADOPI and a friend of French President Nicolas Sarkozy, who now confirms that "as a precaution Hadopi has decided to temporarily suspend its interconnection with TMG."
What this effectively means is that since TMG is the only company licensed to do this work for the government, from now on and pending a review, the French 3 strikes regime for dealing with illicit file-sharing is suspended. Data gathered before Saturday evening, however, can still be used.
This suspension will be seen by some as a major embarrassment for President Sarkozy. France has taken a particularly hard-line approach to unlawful file-sharing and the government has continually brushed aside calls from the public and various watchdogs to consider more carefully the privacy and related rights issues connected with such a regime.
**Update:** According to French news sources the three strikes regime is set to continue, but data will not be transferred to Hadopi via the usual electronic transfers, but on physical media.
| true | true | true |
Following a weekend security breach at Trident Media Guard, the outfit spearheading data collection for France's 3 strikes anti-piracy drive, the country's HADOPI agency has severed interconnection with the company. This means that, pending an enquiry, French file-sharers are no longer being tracked, a major embarrassment for the government.
|
2024-10-12 00:00:00
|
2011-05-17 00:00:00
| null |
article
|
torrentfreak.com
|
Torrentfreak
| null | null |
15,638,924 |
https://blog.altlegal.com/alt-legal-ip-docketing-blog/spoiler-alert-unreleased-products-are-hiding-in-plain-sight
|
Spoiler Alert: Startup Secrets and Hidden Trademarks
|
Hannah Samendinger
|
# Spoiler Alert: Startup Secrets and Hidden Trademarks
Hannah Samendinger | September 12, 2017
Any good product requires a considerable amount of planning. As we highlighted in our previous blog post, some companies will go to great lengths to keep these plans secret. Other companies, often smaller ones with less structured, anticipated, or secretive releases, do not go to the same lengths. This means that we can search government trademark databases to find clues for what is coming down the pipeline.
I was recently discussing Glossier, the new and incredibly popular cosmetics company, with my coworker when she said, “I wish they would make a mascara.” Without thinking, I informed her that there is probably a “Boy Lash” product on the way. How did I know?
They had recently teased a new product on Instagram, which they usually do for several days in advance of the release. I was curious what the product actually was, so I put my amateur detective skills to use and turned to TESS, the Trademark Electronic Search System, for some clues.
The most recently released Glossier product was “Wowder,” which was released on August 20, 2017. A quick search of TESS shows that the trademark application for “Wowder” was filed on January 20, 2017. So what else can TESS tell us about Glossier products that might be in the pipeline? An application for “Body Hero,” for use in connection with body lotion, cleansers, and oils, was filed on January 30, 2017. An application for “Boy Lash,” possibly (hopefully) an extension of their most popular product “Boy Brow,” was filed on January 31, 2017. There may also be “Lidstar” eyeshadow, “Lash Slick,” and the recently filed “Disco Lip” releases.
Glossier is certainly not the only company that files trademark applications before announcing product releases. Dollar Shave Club, a company that initially became popular due to their ad campaigns, seems to be preparing to expand into the toothbrush market, with trademark applications filed for “Loud Mouth” and “Superba!”
Casper, the mattress company, gained much attention for disrupting the mattress industry, but they may be opening some more traditional stores as well. Casper also appears to be planning temporary napping accommodations, which they’ve revealed to some extent already; smart mattresses; and wellness services.
These insights can be found in many industries, from car sharing to podcasting to coworking spaces. Lyft has filed an application for “Ride Through” for use in connection with freight and delivery services. Gimlet Media occasionally seeks protection for podcast names, which means a podcast called “The Habitat” may be released sometime soon. WeWork has filed several applications for use in connection with wellness services, incubation services, and education services.
Lastly, we also uncovered some snack clues. The blog Tantalizing Trademarks recently pointed to some trademark applications filed by the ice cream company Halo Top. The applications for yogurts, ice cream makers, and other dairy related products indicates that they are looking to expand their brand in the near future. This got us thinking about other food companies that might have new products in the works.
Frito Lay has filed three recent applications for snack flavors: “Cheddalicious BBQ,” “Cracker Jill,” and “Doritos Blaze.” Ben and Jerry’s also has some recent applications for new flavors, including “Special Stash,” “Chillin’ the Roast,” and “Glampfire Trail Mix.” Perhaps the most interesting snack-related application we came across is another Ben and Jerry’s filing for “Dude Food,” a Big Lebowski-themed flavor. In 2011, over 1,300 people liked a Facebook page petitioning to add Dude Food as a new flavor, so they (and we) might have something to get excited about.
While some people may not spend much time thinking about the names of the products they love, these filings all reveal the value of trademark protection; it’s an integral part of the product itself. In fact, a product’s name sometimes makes it out into the public sphere long before consumers see the actual product.
| true | true | true |
Alt Legal's automated docketing software helps thousands of professionals manage global trademark portfolios. One-click reporting, client collaboration tools, personalized reminders and §2(d) trademark watch extend Alt Legal’s value beyond the docket.
|
2024-10-12 00:00:00
|
2017-09-12 00:00:00
|
article
|
altlegal.com
|
Alt Legal
| null | null |
|
1,633,550 |
http://www.technologyreview.com/tr35/index.aspx
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,222,698 |
https://beckystern.com/2020/01/28/pavlok-teardown/
|
Pavlok Teardown - Becky Stern
|
Becky Stern
|
# Pavlok Teardown
Welcome to the new teardown series on my channel, where I take apart gadgets and share what I find inside. First up is the Pavlok, a shocking wearable designed to help you break bad habits.
The Pavlok comes with the main device itself, as well as two silicone wristbands.
The Pavlok pairs over Bluetooth with your phone to control the settings through an app, which is also designed to keep you motivated to use the device in the most effective way possible. You can deliver an electric shock to yourself via the app or by pressing the top of the device. It can also supposedly detect when you move your hand to your mouth, say during smoking or nail biting, which are two of the habits it's designed to help break.
To take it apart, I started cutting the plastic around the little metal nubs. The tricky thing about taking this thing apart, at least before the battery’s died, is that it is easy to shock yourself while holding it in place. It’s so small that one of the only flat surfaces by which to grip it is also the activation button. So I had to use the phone to make sure it was on a low setting, and try to avoid pinching it while cracking open the plastic.
The electrodes are also the case, which makes for a straightforward three-piece case. I was also able to put it back together again, which is a rare occurrence in one of my teardowns.
My friend David Cranor, an electrical engineer, came over to help examine the circuit and research the parts we could identify.
Here’s a list of tools we used:
- Flush diagonal cutters
- Craft/utility knife
- Microscope with LED ring light
- Oscilloscope
- Tweezers
- Breadboard wires
- Circuit board vice
Read on to discover the components we found…
To keep up with what I’m working on, follow me on YouTube, Instagram, Twitter, Pinterest, and subscribe to my newsletter. As an Amazon Associate I earn from qualifying purchases you make using my affiliate links.
| # | Text printed on component | Appearance / package | Type | Manufacturer |
|---|---------------------------|----------------------|------|--------------|
| 1 | N52832 QFAAEO 1922QP | 48-pin QFN with neighboring crystal oscillator (silver can) | N52832 Multiprotocol Bluetooth SoC, ARM® Cortex™-M4 CPU | Nordic Semiconductor |
| 2 | (none) | six round contacts with three non-plated through-holes | Tag-Connect programming interface | N/A (part of board) |
| 3 | (none) | white with red rectangle and surrounding ground plane cutout | chip antenna | (unknown) |
| 4 | 263 8451 KTG | 16-pin QFN | MMA8451Q 3-axis digital accelerometer | Freescale Semiconductor |
| 5 | 263 F2102 KEXC | 24-pin QFN | FXAS21002C 3-axis digital gyroscope | Freescale Semiconductor |
| 6 | (none) | silver rectangle with round black button in the center | tactile switch | (unknown) |
| 7 | (none) | amber | LEDs | (unknown) |
| 8 | A33 HHS | 8 legs | battery management / power conditioning? | (unknown) |
| 9 | AAL 515 2R | 8 legs | battery management / power conditioning? | (unknown) |
| 10 | (none) | surface-mount tan toffees | big capacitors | (unknown) |
| 11 | (none) | silver springs | contacts to enclosure | (unknown) |
| 12 | 063A 0131 .830 | 10 legs with neighboring crystal oscillator (silver can) | PCF85063A Real-time clock | NXP Semiconductors |
| 13 | 5A | black rectangle with two contacts | diodes | (unknown) |
| 14 | 4H843 0G3326 | black rectangle | (unknown) | (unknown) |
| 20 | (none) | black square with dimple | buzzer | (unknown) |
| # | Text printed on component | Appearance / package | Type | Manufacturer |
|---|---------------------------|----------------------|------|--------------|
| 15 | 752S | truncated square with white label | LPR6235-752S Coupled inductor (step-up/flyback transformer) | Coilcraft |
| 16 | -401622 +3.7V 90mAh | foil and tape rectangular pancake with black and red wires | 90mAh lipoly battery | (unknown) |
| 17 | K95T7 | Silver can with eccentric rotating weight | vibrating motor | (unknown) |
| 18 | (none) | trapezoidal port | micro USB port | (unknown) |
What gadget should I take apart next? Let me know in the comments.
If you like this post, you may be interested in some of my others:
To keep up with what I’m working on, follow me on YouTube, Instagram, Twitter, and Pinterest.
Hey Becky, thank you for the teardown! You've not identified component 14, and I couldn't either, but it is definitely some kind of piezoelectric buzzer for sure. That is the only function of the device not mentioned in your breakdown!
Hi Becky, that was an entertaining read, thanks for brightening my day! Two points:
The JTAG programming points might have a part number of sorts, as a standard footprint, if they’re the intended target for these:
https://www.tag-connect.com/info
-we’ve been mulling standardizing on those, at work, but procrastination is still winning.
It looks like you have two items marked as #4. Or I need new glasses 😉
Cheers,
M
You’re right! One is the accelerometer and the other is the buzzer– I’ll fix it soon, thanks for pointing that out, and for the nice comment!
Hi Becky
Did you happen to put a meter on the electrodes to see how much of a shock it is delivering by chance?
Thanks
| true | true | true |
Welcome to the new teardown series on my channel, where I take apart gadgets and share what I find inside. First up is the Pavlok, a shocking wearable designed to help you break bad habits. The Pavlok comes with the main device itself, as well as two silicone wristbands. The Pavlok pairs over bluetooth with your phone to control the settings through an app, which is also designed to keep you motivated to use the device in the most effective way possible. You can deliver an electric shock to yourself via the app or by pressing the top of the
|
2024-10-12 00:00:00
|
2020-01-28 00:00:00
|
http://beckystern.com/wp-content/uploads/2020/01/becky-stern-pavlok-teardown-07-1024x576.jpg
|
article
|
beckystern.com
|
Becky Stern
| null | null |
26,455,559 |
https://nullprogram.com/blog/2016/11/17/
|
null program
| null |
nullprogram.com/blog/2016/11/17/
Now they’ve gone and done it. An unidentified agency has spread a
potent computer virus across all the world’s computers and deleted the
binaries for every copy of every software development tool. Even the
offline copies — it’s *that* potent.
Most of the source code still exists, even for the compilers, and most
computer systems will continue operating without disruption, but no
new software can be developed unless it’s written byte by byte in raw
machine code. Only *real programmers* can get anything done.
The world’s top software developers have been put to work
bootstrapping a C compiler (and others) completely from scratch so
that we can get back to normal. Without even an assembler, it’s a
slow, tedious process.
In the mean time, rather than wait around for the bootstrap work to
complete, the rest of us have been assigned individual programs hit by
the virus. For example, many basic unix utilities have been wiped out,
and the bootstrap would benefit from having them. Having different
groups tackle each missing program will allow the bootstrap effort to
move forward somewhat in parallel. *At least that’s what the compiler
nerds told us.* The real reason is that they’re tired of being asked
if they’re done yet, and these tasks will keep the rest of us quietly
busy.
Fortunately you and I have been assigned the easiest task of all:
**We’re to write the `true` command from scratch.** We’ll have to figure it out byte by byte. The target is x86-64 Linux, which means we’ll need the following documentation:
- Executable and Linking Format (ELF) Specification. This is the binary format used by modern Unix-like systems, including Linux. A more convenient way to access this document is `man 5 elf`.
- Intel 64 and IA-32 Architectures Software Developer’s Manual (Volume 2). This fully documents the instruction set and its encoding. It’s all the information needed to write x86 machine code by hand. The AMD manuals would work too.
- System V Application Binary Interface: AMD64 Architecture Processor Supplement. Only a few pieces of information are needed from this document, but more would be needed for a more substantial program.
- Some magic numbers from header files.
### Manual Assembly
The program we’re writing is `true`, whose behavior is documented as “do nothing, successfully.” All command line arguments are ignored and no input is read. The program only needs to perform the exit system call, immediately terminating the process.
According to the ABI document (3) Appendix A, the registers for system call arguments are: `rdi`, `rsi`, `rdx`, `r10`, `r8`, `r9`. The system call number goes in `rax`. The exit system call takes only one argument, and that argument will be 0 (success), so `rdi` should be set to zero. It’s likely that it’s already zero when the program starts, but the ABI document says its contents are undefined (§3.4), so we’ll set it explicitly.

For Linux on x86-64, the system call number for exit is 60 (/usr/include/asm/unistd_64.h), so `rax` will be set to 60, followed by `syscall`.
```
xor edi, edi
mov eax, 60
syscall
```
There’s no assembler available to turn this into machine code, so it
has to be assembled by hand. For that we need the Intel manual (2).
The first instruction is `xor`
, so look up that mnemonic in the
manual. Like most x86 mnemonics, there are many different opcodes and
multiple ways to encode the same operation. For `xor`
, we have 22
opcodes to examine.
The operands are two 32-bit registers, so there are two options:
opcodes 0x31 and 0x33.
```
31 /r XOR r/m32, r32
33 /r XOR r32, r/m32
```
The “r/m32” means the operand can be either a register or the address
of a 32-bit region of memory. With two register operands, both
encodings are equally valid, both have the same length (2 bytes), and
neither is canonical, so the decision is entirely arbitrary. Let’s
pick the first one, opcode 0x31, since it’s listed first.
The “/r” after the opcode means the register-only operand (“r32” in
both cases) will be specified in the ModR/M byte. This is the byte
that immediately follows the opcode and specifies one of two of the
operands.
The ModR/M byte is broken into three parts: mod (2 bits), reg (3
bits), r/m (3 bits). This gets a little complicated, but if you stare
at Table 2-1 in the Intel manual for long enough it eventually makes
sense. In short, the two high bits (11) for mod indicate we’re working
with a register rather than a load, which leaves the reg and r/m
fields to fill in with register numbers.
The order of the x86 registers is unintuitive: `ax`
, `cx`
, `dx`
, `bx`
,
`sp`
, `bp`
, `si`
, `di`
. With 0-indexing, that gives `di`
a value of 7
(111 in binary). With `edi`
as both operands, this makes ModR/M `11 111 111` in binary, or FF in
hexadecimal. And that’s it for this instruction. With the opcode
(0x31) and the ModR/M byte (0xFF), the complete encoding is `31 FF`.
The encoding for `mov`
is a bit different. Look it up and match the
operands. Like before, there are two possible options:
```
B8+rd id MOV r32, imm32
C7 /0 id MOV r/m32, imm32
```
The `B8+rd` notation means the 32-bit register operand (*rd* for
“register double word”) is added to the opcode instead of having a
ModR/M byte. It’s followed by a 32-bit immediate value (*id* for
“integer double word”). That’s a total of 5 bytes.
The “/0” in the second form means 0 goes in the “reg” field of ModR/M, and the
whole instruction is followed by the 32-bit immediate (id). That’s a
total of 6 bytes. Since this is longer, we’ll use the first encoding.
So, that’s opcode `0xB8 + 0`
, since `eax`
is register number 0,
followed by 60 (0x3C) as a little-endian, 4-byte value. That makes
the encoding for the second instruction `B8 3C 00 00 00`.
The final instruction is a cakewalk. There are no operands, and it
comes in only one form, two opcode bytes. So the encoding for this
instruction is `0F 05`.
Putting it all together the program is 9 bytes:
```
31 FF B8 3C 00 00 00 0F 05
```
Aren’t you glad you don’t normally have to assemble entire programs by
hand?
### Constructing the ELF
Back in the old days you may have been able to simply drop these bytes
into a file and execute it. That’s how DOS COM programs worked.
But this definitely won’t work if you tried it on Linux. Binaries must
be in the Executable and Linking Format (ELF). This format tells the
loader how to initialize the program in memory and how to start it.
Fortunately for this program we’ll only need to fill out two
structures: the ELF header and one program header. The binary will be
the ELF header, followed immediately by the program header, followed
immediately by the program.
To fill this binary out, we’d use whatever method the virus left
behind for writing raw bytes to a file. For now I’ll assume the `echo`
command is still available, and we’ll use hexadecimal `\xNN`
escapes
to write raw bytes. If this isn’t available, you might need to use the
magnetic needle and steady hand method, or the butterflies.
The very first structure in an ELF file must be the ELF header, from
the ELF specification (1):
```
typedef struct {
unsigned char e_ident[EI_NIDENT];
uint16_t e_type;
uint16_t e_machine;
uint32_t e_version;
ElfN_Addr e_entry;
ElfN_Off e_phoff;
ElfN_Off e_shoff;
uint32_t e_flags;
uint16_t e_ehsize;
uint16_t e_phentsize;
uint16_t e_phnum;
uint16_t e_shentsize;
uint16_t e_shnum;
uint16_t e_shstrndx;
} ElfN_Ehdr;
```
No other data is at a fixed location because this header specifies
where it can be found. If you’re writing a C program in the future,
once compilers have been bootstrapped back into existence, you can
access this structure in `elf.h`
.
The `EI_NIDENT`
macro is 16, so `e_ident`
is 16 bytes. The first 4
bytes are fixed: 0x7F, E, L, F.
The 5th byte is called `EI_CLASS`
: a 32-bit program (`ELFCLASS32`
=
1) or a 64-bit program (`ELFCLASS64`
= 2). This will be a 64-bit
program (2).
The 6th byte indicates the integer format (`EI_DATA`
). The one we want
for x86-64 is `ELFDATA2LSB`
(1), two’s complement, little-endian.
The 7th byte is the ELF version (`EI_VERSION`
), always 1 as of this
writing.
The 8th byte is the ABI (`EI_OSABI`), which in this case is
`ELFOSABI_SYSV`
(0).
The 9th byte is the version (`EI_ABIVERSION`
), which is just 0 again.
The rest is zero padding.
So writing the ELF header:
```
echo -ne '\x7FELF\x02\x01\x01\x00' > true
echo -ne '\x00\x00\x00\x00\x00\x00\x00\x00' >> true
```
The next field is the `e_type`
. This is an executable program, so it’s
`ET_EXEC`
(2). Other options are object files (`ET_REL`
= 1), shared
libraries (`ET_DYN`
= 3), and core files (`ET_CORE`
= 4).
```
echo -ne '\x02\x00' >> true
```
The value for `e_machine`
is `EM_X86_64`
(0x3E). This value isn’t in
the ELF specification but rather the ABI document (§4.1.1). On BSD
this is instead named `EM_AMD64`
.
```
echo -ne '\x3E\x00' >> true
```
For `e_version`
it’s always 1, like in the header.
```
echo -ne '\x01\x00\x00\x00' >> true
```
The `e_entry`
field will be 8 bytes because this is a 64-bit ELF. This
is the virtual address of the program’s entry point. It’s where the
loader will pass control and so it’s where we’ll load the program. The
typical entry address is somewhere around 0x400000. For a reason I’ll
explain shortly, our entry point will be 120 bytes (0x78) after that
nice round number, at 0x40000078.
```
echo -ne '\x78\x00\x00\x40\x00\x00\x00\x00' >> true
```
The `e_phoff`
field holds the offset of the program header table. The
ELF header is 64 bytes (0x40) and this structure will immediately
follow. It’s also 8 bytes.
```
echo -ne '\x40\x00\x00\x00\x00\x00\x00\x00' >> true
```
The `e_shoff`
header holds the offset of the section table. In an
executable program we don’t need sections, so this is zero.
```
echo -ne '\x00\x00\x00\x00\x00\x00\x00\x00' >> true
```
The `e_flags`
field has processor-specific flags, which in our case is
just 0.
```
echo -ne '\x00\x00\x00\x00' >> true
```
The `e_ehsize`
holds the size of the ELF header, which, as I said, is
64 bytes (0x40).
```
echo -ne '\x40\x00' >> true
```
The `e_phentsize`
is the size of one program header, which is 56 bytes
(0x38).
```
echo -ne '\x38\x00' >> true
```
The `e_phnum`
field indicates how many program headers there are. We
only need the one: the segment with the 9 program bytes, to be loaded
into memory.
```
echo -ne '\x01\x00' >> true
```
The `e_shentsize`
is the size of a section header. We’re not using
this, but we’ll do our due diligence. These are 64 bytes (0x40).
```
echo -ne '\x40\x00' >> true
```
The `e_shnum`
field is the number of sections (0).
```
echo -ne '\x00\x00' >> true
```
The `e_shstrndx`
is the index of the section with the string table. It
doesn’t exist, so it’s 0.
```
echo -ne '\x00\x00' >> true
```
Next is our program header.
```
typedef struct {
uint32_t p_type;
uint32_t p_flags;
Elf64_Off p_offset;
Elf64_Addr p_vaddr;
Elf64_Addr p_paddr;
uint64_t p_filesz;
uint64_t p_memsz;
uint64_t p_align;
} Elf64_Phdr;
```
The `p_type`
field indicates the segment type. This segment will hold
the program and will be loaded into memory, so we want `PT_LOAD`
(1).
Other kinds of segments set up dynamic loading and such.
```
echo -ne '\x01\x00\x00\x00' >> true
```
The `p_flags`
field gives the memory protections. We want executable
(`PF_X`
= 1) and readable (`PF_R`
= 4). These are ORed together to
make 5.
```
echo -ne '\x05\x00\x00\x00' >> true
```
The `p_offset`
is the file offset for the content of this segment.
This will be the program we assembled. It will immediately follow
this header. The ELF header was 64 bytes, plus a 56-byte program
header, which is 120 (0x78).
```
echo -ne '\x78\x00\x00\x00\x00\x00\x00\x00' >> true
```
The `p_vaddr`
is the virtual address where this segment will be
loaded. This is the entry point from before. A restriction is that
this value must be congruent with `p_offset`
modulo the page size.
That’s why the entry point was offset by 120 bytes.
```
echo -ne '\x78\x00\x00\x40\x00\x00\x00\x00' >> true
```
The `p_paddr`
is unused for this platform.
```
echo -ne '\x00\x00\x00\x00\x00\x00\x00\x00' >> true
```
The `p_filesz`
is the size of the segment in the file: 9 bytes.
```
echo -ne '\x09\x00\x00\x00\x00\x00\x00\x00' >> true
```
The `p_memsz`
is the size of the segment in memory, also 9 bytes. It
might sound redundant, but these are allowed to differ, in which case
it’s either truncated or padded with zeroes.
```
echo -ne '\x09\x00\x00\x00\x00\x00\x00\x00' >> true
```
The `p_align`
indicates the segment’s alignment. We don’t care about
alignment.
```
echo -ne '\x00\x00\x00\x00\x00\x00\x00\x00' >> true
```
#### Append the program
Finally, append the program we assembled at the beginning.
```
echo -ne '\x31\xFF\xB8\x3C\x00\x00\x00\x0F\x05' >> true
```
Set it executable (hopefully `chmod` survived!) with `chmod +x true`.
Then test it: running `./true` should do nothing, and `echo $?` should
print 0.
Here’s the whole thing as a shell script:
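Gathering the commands from the steps above (plus the final `chmod`) into one file gives something like the following; it assumes a shell whose built-in `echo` understands `-ne`, such as bash:
```
#!/bin/bash
# Build a 9-byte x86-64 Linux `true` using nothing but echo.

# ELF header (64 bytes)
echo -ne '\x7FELF\x02\x01\x01\x00'              >  true  # magic, 64-bit, little-endian, SysV
echo -ne '\x00\x00\x00\x00\x00\x00\x00\x00'     >> true  # e_ident padding
echo -ne '\x02\x00'                             >> true  # e_type: ET_EXEC
echo -ne '\x3E\x00'                             >> true  # e_machine: EM_X86_64
echo -ne '\x01\x00\x00\x00'                     >> true  # e_version
echo -ne '\x78\x00\x00\x40\x00\x00\x00\x00'     >> true  # e_entry: 0x40000078
echo -ne '\x40\x00\x00\x00\x00\x00\x00\x00'     >> true  # e_phoff: 64
echo -ne '\x00\x00\x00\x00\x00\x00\x00\x00'     >> true  # e_shoff: none
echo -ne '\x00\x00\x00\x00'                     >> true  # e_flags
echo -ne '\x40\x00'                             >> true  # e_ehsize: 64
echo -ne '\x38\x00'                             >> true  # e_phentsize: 56
echo -ne '\x01\x00'                             >> true  # e_phnum: 1
echo -ne '\x40\x00'                             >> true  # e_shentsize
echo -ne '\x00\x00'                             >> true  # e_shnum
echo -ne '\x00\x00'                             >> true  # e_shstrndx

# Program header (56 bytes)
echo -ne '\x01\x00\x00\x00'                     >> true  # p_type: PT_LOAD
echo -ne '\x05\x00\x00\x00'                     >> true  # p_flags: PF_R | PF_X
echo -ne '\x78\x00\x00\x00\x00\x00\x00\x00'     >> true  # p_offset: 120
echo -ne '\x78\x00\x00\x40\x00\x00\x00\x00'     >> true  # p_vaddr: 0x40000078
echo -ne '\x00\x00\x00\x00\x00\x00\x00\x00'     >> true  # p_paddr: unused
echo -ne '\x09\x00\x00\x00\x00\x00\x00\x00'     >> true  # p_filesz: 9
echo -ne '\x09\x00\x00\x00\x00\x00\x00\x00'     >> true  # p_memsz: 9
echo -ne '\x00\x00\x00\x00\x00\x00\x00\x00'     >> true  # p_align

# The program itself (9 bytes)
echo -ne '\x31\xFF\xB8\x3C\x00\x00\x00\x0F\x05' >> true

chmod +x true
```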
Is the C compiler done bootstrapping yet?
| true | true | true | null |
2024-10-12 00:00:00
|
2016-11-17 00:00:00
| null | null | null | null | null | null |
38,062,899 |
https://fossweekly.beehiiv.com/p/foss-weekly-57-cern-s-ospo-spotify-100k-foss-fund-ubuntu-24-04-and-more
|
FOSS Weekly #57 - 🔬 CERN's OSPO, 💸 Spotify €100K FOSS Fund, 💻 Ubuntu 24.04, and more
|
Sai Phanindra October
|
# FOSS Weekly #57 - 🔬 CERN's OSPO, 💸 Spotify €100K FOSS Fund, 💻 Ubuntu 24.04, and more
Welcome to another issue of FOSS Weekly! Here is an interesting survey and some goodies by our sponsor, **Developer Nation**.
At which point do you go back to look for security vulnerabilities in the code that you write? How do you handle those and what security practices do you use when writing code?
Share your thoughts in the Software Supply Chain Security survey for a chance to **win amazing prizes such as MX Master 3S, Raspberry Pi 4 ModelB 4 GB, $50 Udemy gift credits and many more!**
## 📰 News
CERN is launching its open source program office [home.cern]
Sentry donates $500K to open source maintainers [sentry.io]
Spotify announces recipients of 100K EUR FOSS Fund [atspotify.com]
Radius - Azure’s open source platform for cloud-native apps [radapp.io]
University of Texas has also started an OSS office [utexas.edu]
15 years of Android [blog.google]
## 🐧 Linux updates
Fedora 39 release delayed yet again due to bugs [linuxiac.com]
Linux Mint 21.3 - Cinnamon 6.0 and Wayland support [linuxmint.com]
Ubuntu 24.04 begins, codenamed “Noble Numbat” [omgubuntu.co.uk]
Upcoming features in the Linux Kernel 6.6 [phoronix.com]
Linux Plumbers Conference is taking place from Nov 13-15 [lpc.events]
## 📝 Releases
qBittorrent v4.6.0 [bittorrent.org]
GNOME 45.1 [gnome.org]
Firefox 119.0 [mozilla.org]
Geany 2.0.0 [geany.org]
## 🔗 Also read
This Week in GNOME [thisweek.gnome.org]
This Week in KDE [pointieststick.com]
Full Circle Ubuntu Magazine [fullcirclemagazine.org]
Fedora weekly round up [fedoraproject.org]
FOSS Weekly is the easiest way to keep up with free and open source software!
Thank you for reading FOSS Weekly 😉 Want to support? You can sponsor an issue or buy me a coffee
| true | true | true |
The easiest way to keep up with FOSS! I track hundreds of projects, ecosystems, and developers and post a review every Sunday.
|
2024-10-12 00:00:00
|
2023-10-29 00:00:00
|
website
|
beehiiv.com
|
FOSS Weekly
| null | null |
|
21,226,557 |
https://juffalow.com/other/my-startup-experiences
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,995,003 |
http://gigaom.com/2012/12/24/mobile-health-in-2013-from-the-gym-to-the-doctors-office/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
29,113,998 |
https://medium.com/@trismegistus13/spectrograms-or-how-i-learned-to-stop-worrying-and-love-audio-signal-processing-for-machine-d28c022ca5ca
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,169,543 |
http://www.socialtimes.com/2010/03/social-game-invites/
|
Social Marketing News & Analysis
|
David Cohen; Colin Daniels; Jon-Stephen Stansel; Rebecca Stewart; Cydney Lee; Mita Mallick; Lucinda Southern; Paul Hiebert; Diana Weisman; Amy Stanford; Trishla Ostwal; Karen Robinovitz; Jamie Falkowski
|
## To Help Buyers Increase ROI, TikTok Debuts AI-Powered Smart+ Performance Tools
TikTok kicked off Advertising Week in New York with a flurry of announcements covering artificial intelligence-powered automated performance tools, enhanced measurement capabilities, and new privacy technologies for advertisers.
| true | true | true |
News and insights from our editors, reporters and columnists, including coverage of social media networks, tools and trends that enhance social listening, (organic/paid) content and messaging to win loyalty and new customers.
|
2024-10-12 00:00:00
|
2021-03-01 00:00:00
|
article
|
adweek.com
|
Adweek
| null | null |
|
17,188,955 |
http://www.yodaiken.com/2018/05/20/depressing-and-faintly-terrifying-days-for-the-c-standard/
|
Depressing and faintly terrifying days for the C standard
| null |
# C STANDARD UNDEFINED BEHAVIOR VERSUS WITTGENSTEIN
## 1. DEPRESSING AND FAINTLY TERRIFYING
Chris Lattner, the architect of the Clang/LLVM C compiler, explained the effects of the C standard’s “undefined behavior” (UB):
UB is an inseparable part of C programming, […] this is a depressing and faintly terrifying thing. The tooling built around the C family of languages helps make the situation less bad, but it is still pretty bad. The only solution is to move to new programming languages that don’t inherit the problem of C. I’m a fan of Swift, but there are others.[4]
Also from Lattner[5]:
[…] many seemingly reasonable things in C actually have undefined behavior, and this is a common source of bugs in programs. Beyond that, any undefined behavior in C gives license to the implementation (the compiler and runtime) to produce code that formats your hard drive, does completely unexpected things, or worse
Undefined behavior is defined by the C Standard as “*behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements*.” You might think that having the architect of a major compiler point out that the result of your standard is “depressing and faintly terrifying” and curable only by switching to another programming language would cause some consternation at the standards committee, but no. The WG14 C Standards organization appears to be unconcerned despite the warnings of compiler developers and the objections of the most prominent C developers and academics.
- Dennis Ritchie called an early version of UB “
*a license for the compiler to undertake aggressive optimizations that are completely legal by the committee’s rules, but make hash of apparently safe programs*“[7]. - One of the most important developers of cryptography code wrote: “
*Pretty much every real-world C program is “undefined” according to the C “standard”, and new compiler “optimizations” often produce new security holes in the resulting object code*” [1]. - In summary that leans over backwards to find excuses for UB, John Regehr wrote:
*One suspects that the C standard body simply got used to throwing behaviors into the undefined bucket and got a little carried away. Actually, since the C99 standard lists 191 different kinds of undefined behavior, its fair to say they got a lot carried away.*[6]. - And Linus Torvalds wrote:
*The fact is, undefined compiler behavior is never a good idea. Not for serious projects*. [8].
## 2. SLOW MOVING TRAIN WRECK
Discussion of this issue or of practitioner objections in WG14 records is hard to find. What we do find are proposals to greatly expand the realm of undefined behavior[3]. And there is no doubt that the standard’s embrace of UB has created a slow-motion train wreck.
- A security failure in the Linux kernel was generated by the compiler silently removing a check for a null pointer several lines below a UB reference to a null pointer.
- Code that checks for overflow in a signed variable can be silently “optimized” to remove the check, because signed integer overflow is UB (see the sketch after this list).
- The C standard committee had to hack up its own standard with a special case when it realized it had made it impossible to write “memcpy” in C[2].
- Under pressure from, particularly Linux, the GCC compiler has introduced a number of flags to turn off UB behavior – essentially forking the language.
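To make the overflow item above concrete, here is a minimal sketch of the pattern (the function name is invented for illustration, and none of this is taken from the cited projects). Because signed overflow is UB, a compiler is licensed to assume the guard condition is always false and delete it:

```c
#include <limits.h>
#include <stdio.h>

/* Intended guard: detect wraparound before returning x + 100. */
static int add_100_checked(int x)
{
    if (x + 100 < x) {           /* signed overflow is UB, so the compiler may
                                    assume this is always false and drop it */
        fprintf(stderr, "overflow\n");
        return INT_MAX;
    }
    return x + 100;
}

int main(void)
{
    /* With optimizations enabled, many compilers remove the guard and
       this silently wraps instead of reporting overflow. */
    printf("%d\n", add_100_checked(INT_MAX - 50));
    return 0;
}
```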
As Lattner wrote, C code that appears reasonable, that causes no compiler errors or warnings, and that conforms to wide practice can be arbitrarily changed once the compiler believes it has detected UB somewhere in the same neighborhood. But this is a bizarre interpretation of UB, and it’s remarkable that the WG14 C Standards Committee has not made efforts to ban such an interpretation. Originally, UB was simply there to mark the borders of what C specifies: C does not specify what happens when the result of integer arithmetic can’t fit in a signed int, because the language designers wanted to permit compilers to leave off expensive checks. But currently compiler developers claim they can, for example, assume that UB never happens and delete programmer checks for overflow. The rationale for this “perverse” UB is that it opens up compiler optimizations that would otherwise be impossible. This claim is not supported by any actual studies. Even for the kinds of trivial micro-benchmarks that are often cited, it looks like absolutely nobody is doing any tests. For example, an unsigned and a signed loop variable in a C “for” loop show no performance difference, despite the advantage that the undefined nature of signed overflow supposedly provides the compiler. Even though UB has been around since Ritchie first objected to it, it has grown in scope as the C standard has grown in scope and complexity. Here’s a good example from C11 in some typically opaque language:
(6.5 Para7) An object shall have its stored value accessed only by an lvalue expression that has one of the following types: a type compatible with the effective type of the object, a qualified version of a type compatible with the effective type of the object, a type that is the signed or unsigned type corresponding to the effective type of the object, a type that is the signed or unsigned type corresponding to a qualified version of the effective type of the object, an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), or a character type
I had to read the footnote to have any idea at all what is being “specified” by this murk:
“The intent of this list is to specify those circumstances in which an object may or may not be aliased.”
That last sentence in 6.5 Para7 is a hack added when the Committee realized it had made it impossible to write memcpy in C – you have to pretend that memcpy copies character by character or is written in some other language in order to make it consistent. And you also need footnote 75 (“Allocated objects have no declared type.”).
No wonder that Linus Torvalds was not impressed.
The idiotic C alias rules aren’t even worth discussing. They were a mistake. The kernel doesn’t use some “C dialect pretty far from standard C”. Yeah, let’s just say that the original C designers were better at their job than a gaggle of standards people who were making bad crap up to make some Fortran-style programs go faster. They don’t speed up normal code either, they just introduce undefined behavior in a lot of code. And deleting NULL pointer checks because somebody made a mistake, and then turning that small mistake into a real and exploitable security hole? Not so smart either. [8]
Brian Kernighan didn’t like it either, back when this design error was part of Pascal and not part of C [10]:
2.6.There is no escape
There is no way to override the type mechanism when necessary, nothing analogous to the “cast” mechanism in C.This means that it is not possible to write programs like storage allocators or I/O systems in Pascal, because there is no way to talk about the type of object that they return, and no way to force such objects into an arbitrary type for another use.
Yes, according to the C Standard, malloc cannot be written in C. In fact, the example of malloc in K&R 2cd edition is not written in C according to the standard under current interpretations. Instead there is special treatment for malloc as a mystery library function so it can return pointers to data of “no declared type” even though malloc is defined to return pointers to void. What if you want to write an allocator, not called malloc/free? I don’t know, perhaps you are supposed to use Rust or Swift. What about mmap/mfree ? Nope. It’s hard to imagine even how those functions can be used in Standard C. Reference I/O or device registers? Nope again. You have Pseudo-Pascal and you better learn to like it. Want to treat a packet as an array of ints so you can checksum it? Nope. According to proponents of the C Standard the correct way to do this is to use memcpy to copy the packet to an integer array, compute the value, copy back, and then hope the optimizer figures it out and removes the copying because there is, you guessed it, another special exception for memcpy used to copy data to memory of “no declared type”.
(6.5 6) […]. If a value is copied into an object having no declared type using memcpy or memmove, or is copied as an array of character type, then the effective type of the modified object for that access and for subsequent accesses that do not modify the value is the effective type of the object from which the value is copied, if it has one. For all other accesses to an object having no declared type, the effective type of the object is simply the type of the lvalue used for the access.
So the sequence looks like this: (1)Allocate a packet sized block of memory using malloc – this has “no declared type”; (2) Copy the packet to this memory using memcpy; (3) The new block of memory has the effective type of the packet structure; (4) Then modify the new block of memory using an int pointer; (5) And magically, the memory block is now an array of ints”. Doesn’t step (4) violate the effective type rules in the very next paragraph? Yes, but pretend otherwise. Basically, memcpy/memmove and malloc plus character arrays are giant hacks to the type system that kind of make things sort-of work. To make matters worse, another part of the standard provides rules for casting pointer types which, according to this part, provide a Standards Conformant way for changing the type of a pointer, but at the same time making it undefined behavior to de-reference the pointer. Section 6.3.2.3.7 permits a pointer to an object type to be converted to a pointer to a different object type and 6.3.2.1 permits any pointer to be converted to a void pointer type and then converted to any other type but the current language in 6.5.7 seems to imply that dereferencing those pointers may be undefined behavior even if the pointers are properly aligned – which would make the conversion useless at best.
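As a sketch of the memcpy route described above (the type and function names here are invented for illustration), checksumming a packet without tripping the effective-type rules looks roughly like this, with the hope that the optimizer elides the copy:

```c
#include <stdint.h>
#include <string.h>

struct packet {
    uint8_t header[4];
    uint8_t payload[60];
};

uint32_t packet_checksum(const struct packet *p)
{
    /* Copy into a uint32_t buffer instead of casting &p->header to
       uint32_t *, which would violate the aliasing rules in 6.5p7. */
    uint32_t words[sizeof(struct packet) / sizeof(uint32_t)];
    uint32_t sum = 0;

    memcpy(words, p, sizeof words);
    for (size_t i = 0; i < sizeof words / sizeof words[0]; i++)
        sum += words[i];
    return sum;
}
```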
## 3. MORE IS BETTER, NO ?
There’s an unfortunate dynamic in the WG14 process in which the language definition expands in scope while shrinking in clarity. This dynamic comes from a lack of appreciation for the frugal semantics of the C language. In software, as in many other forms of engineering, only the most brilliant or lucky designers manage to leave the right things out. In particular, C and UNIX were so successful as much because of what was left out as because of what was put in. The injunction of Wittgenstein[9] is almost a cliche in programming design, but it is on point: *What can be said at all can be said clearly; and whereof one cannot speak thereof one must be silent*. The approach of the WG14 committee appears to have been based on the misconception that this economy in specification was a problem to be solved by expanding the scope of things covered by the standard. The alias rules above are a good example. C had no aliasing rules because it gained an advantage by permitting programmers to use pointers flexibly. Lack of aliasing rules could have been argued as too costly in terms of both programmer errors and forsaken compiler optimizations, but that could have been addressed by refining the restrict keyword or providing some other opt-in semantics instead of trying to back up a half-baked notion of strict typing into C.
As another example, C did not define anything about concurrent tasks and parallelism, even though the designers of C intended to use it with interrupt handlers and processes and multi-processors, and did so with great success. By omitting the semantics of concurrent and parallel computation, the C design licensed programmers to develop many different solutions outside the framework of the language. Interrupt handlers, POSIX processes and threads, coroutines, SMP, the event models of current generation web servers, GPUs and vector processing: the minimalist design of the language meant that it did not get in the way of system designers and programmers. But the WG14 committee is now attempting to grow the language to cover the semantics of threaded code. Necessarily, this specification will have imprecision, opening the way to even more UB and it looks like it will tie the language to an awkward model of synchronization (via mutexes) and import into the language information about processor design that may well be obsolete in a few years.
The situation is only going to get worse as link time optimization becomes common in compilers – extending the ability of the compilers to find UB across separate compilation borders. Either C will fade away as Lattner and many others hope, the standard will change, or there will be a successful fork to produce a more precise and flexible standard.
## REFERENCES
[1] D.J. Bernstein. boringcc. https://groups.google.com/forum/m/#!msg/boring-crypto/48qa1kWignU/o8GGp2K1DAAJ.
[2] Derek Jones. How indeterminate is an indeterminate value. http://shape-of-code.coding-guidelines.com/2017/06/18/how-indeterminate-is-an-indeterminate-value/.
[3] David Keaton. Implicit Undefined Behavior in C. http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2248.pdf.
[4] Chris Lattner. GEP with a null pointer base. http://lists.llvm.org/pipermail/llvm-dev/2017-July/115149.html. 2016.
[5] Chris Lattner. What every C programmer should know about undefined behavior 1/3. http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html.
[6] John Regehr. A Guide to Undefined Behavior in C and C++, Part 1. https://blog.regehr.org/archives/213.
[7] Dennis Ritchie. noalias comments to X3J11. http://www.lysator.liu.se/c/dmr-on-noalias.html.
[8] Linus Torvalds. Re: [isocpp-parallel] Proposal for a new memory order. https://gcc.gnu.org/ml/gcc/2016-02/msg00381.html.
[9] L. Wittgenstein and C.K. Ogden. Tractatus Logico-philosophicus. International library of psychology, philosophy, and scientific method. Routledge, 1990. ISBN: 9780415051866. URL: https://books.google.com/books?id=TE0nCMfRz4C.
[10] Brian Kernighan, Why Pascal is not my favorite language, Bell Labs, 1981, http://www.lysator.liu.se/c/bwk-on-pascal.html
AUSTIN TEXAS. E-mail address: [email protected]
| true | true | true | null |
2024-10-12 00:00:00
|
2018-05-20 00:00:00
| null | null |
yodaiken.com
|
yodaiken.com
| null | null |
22,918,668 |
https://landing.google.com/sre/workbook/chapters/implementing-slos/
|
Implementing SLOs
|
Steven Thurgood; David Ferguson; Alex Hidalgo; Betsy Beyer
|
# Implementing SLOs
Service level objectives (SLOs) specify a target level for the reliability of your service. Because SLOs are key to making data-driven decisions about reliability, they’re at the core of SRE practices. In many ways, this is the most important chapter in this book.
Once you’re equipped with a few guidelines, setting up initial SLOs and a process for refining them can be straightforward. Chapter 4 in our first book introduced the topic of SLOs and SLIs (service level indicators), and gave some advice on how to use them.
After discussing the motivation behind SLOs and error budgets, this chapter provides a step-by-step recipe to get you started thinking about SLOs, and also some advice about how to iterate from there. We’ll then cover how to use SLOs to make effective business decisions, and explore some advanced topics. Finally, we’ll give you some examples of SLOs for different types of services and some pointers on how to create more sophisticated SLOs in specific situations.1
# Why SREs Need SLOs
Engineers are a scarce resource at even the largest organizations. Engineering time should be invested in the most important characteristics of the most important services. Striking the right balance between investing in functionality that will win new customers or retain current ones, versus investing in the reliability and scalability that will keep those customers happy, is difficult. At Google, we’ve learned that a well-thought-out and adopted SLO is key to making data-informed decisions about the opportunity cost of reliability work, and to determining how to appropriately prioritize that work.
SREs’ core responsibilities aren’t merely to automate “all the things” and hold the pager. Their day-to-day tasks and projects are driven by SLOs: ensuring that SLOs are defended in the short term and that they can be maintained in the medium to long term. One could even claim that without SLOs, there is no need for SREs.
SLOs are a tool to help determine what engineering work to prioritize. For example, consider the engineering tradeoffs for two reliability projects: automating rollbacks and moving to a replicated data store. By calculating the estimated impact on our error budget, we can determine which project is most beneficial to our users. See the section Decision Making Using SLOs and Error Budgets for more detail on this, and “Managing Risk” in Site Reliability Engineering.
# Getting Started
As a starting point for establishing a basic set of SLOs, let’s assume that your service is some form of code that has been compiled and released and is running on networked infrastructure that users access via the web. Your system’s maturity level might be one of the following:
- A greenfield development, with nothing currently deployed
- A system in production with some monitoring to notify you when things go awry, but no formal objectives, no concept of an error budget, and an unspoken goal of 100% uptime
- A running deployment with an SLO below 100%, but without a common understanding about its importance or how to leverage it to make continuous improvement choices—in other words, an SLO without teeth
In order to adopt an error budget-based approach to Site Reliability Engineering, you need to reach a state where the following hold true:
- There are SLOs that all stakeholders in the organization have approved as being fit for the product.
- The people responsible for ensuring that the service meets its SLO have agreed that it is possible to meet this SLO under normal circumstances.
- The organization has committed to using the error budget for decision making and prioritizing. This commitment is formalized in an error budget policy.
- There is a process in place for refining the SLO.
Otherwise, you won’t be able to adopt an error budget–based approach to reliability. SLO compliance will simply be another KPI (key performance indicator) or reporting metric, rather than a decision-making tool.
##### Reliability Targets and Error Budgets
The first step in formulating appropriate SLOs is to talk about what an SLO should be, and what it should cover.
An SLO sets a target level of reliability for the service’s customers. Above this threshold, almost all users should be happy with your service (assuming they are otherwise happy with the utility of the service).2 Below this threshold, users are likely to start complaining or to stop using the service. Ultimately, user happiness is what matters—happy users use the service, generate revenue for your organization, place low demands on your customer support teams, and recommend the service to their friends. We keep our services reliable to keep our customers happy.
Customer happiness is a rather fuzzy concept; we can’t measure it precisely. Often we have very little visibility into it at all, so how do we begin? What do we use for our first SLO?
Our experience has shown that 100% reliability is the wrong target:
- If your SLO is aligned with customer satisfaction, 100% is not a reasonable goal. Even with redundant components, automated health checking, and fast failover, there is a nonzero probability that one or more components will fail simultaneously, resulting in less than 100% availability.
- Even if you could achieve 100% reliability within your system, your customers would not experience 100% reliability. The chain of systems between you and your customers is often long and complex, and any of these components can fail.
This also means that as you go from 99% to 99.9% to 99.99% reliability, each extra nine comes at an increased cost, but the marginal utility to your customers steadily approaches zero.
- An SLO of 100% means you only have time to be reactive. You literally cannot do anything other than react to < 100% availability, which is guaranteed to happen. Reliability of 100% is not an engineering culture SLO—it’s an operations team SLO.
Once you have an SLO target below 100%, it needs to be owned by someone in the organization who is empowered to make tradeoffs between feature velocity and reliability. In a small organization, this may be the CTO; in larger organizations, this is normally the product owner (or product manager).
##### What to Measure: Using SLIs
Once you agree that 100% is the wrong number, how do you determine the right number? And what are you measuring, anyway? Here, service level indicators come into play: an SLI is an indicator of the level of service that you are providing.
While many numbers can function as an SLI, we generally recommend treating the SLI as the ratio of two numbers: the number of good events divided by the total number of events. For example:
- Number of successful HTTP requests / total HTTP requests (success rate)
- Number of gRPC calls that completed successfully in < 100 ms / total gRPC requests
- Number of search results that used the entire corpus / total number of search results, including those that degraded gracefully
- Number of “stock check count” requests from product searches that used stock data fresher than 10 minutes / total number of stock check requests
- Number of “good user minutes” according to some extended list of criteria for that metric / total number of user minutes
SLIs of this form have a couple of particularly useful properties. The SLI ranges from 0% to 100%, where 0% means nothing works, and 100% means nothing is broken. We have found this scale intuitive, and this style lends itself easily to the concept of an error budget: the SLO is a target percentage and the error budget is 100% minus the SLO. For example, if you have a 99.9% success ratio SLO, then a service that receives 3 million requests over a four-week period had a budget of 3,000 (0.1%) errors over that period. If a single outage is responsible for 1,500 errors, that error costs 50% of the error budget.4
In addition, making all of your SLIs follow a consistent style allows you to take better advantage of tooling: you can write alerting logic, SLO analysis tools, error budget calculation, and reports to expect the same inputs: numerator, denominator, and threshold. Simplification is a bonus here.
When attempting to formulate SLIs for the first time, you might find it useful to further divide SLIs into SLI specification and SLI implementation:
SLI specification
- The assessment of service outcome that you think matters to users, independent of how it is measured.
- For example: Ratio of home page requests that loaded in < 100 ms
SLI implementation
- The SLI specification and a way to measure it.
- For example:
Ratio of home page requests that loaded in < 100 ms, as measured from the Latency column of the server log. This measurement will miss requests that fail to reach the backend.
- Ratio of home page requests that loaded in < 100 ms, as measured by probers that execute JavaScript in a browser running in a virtual machine. This measurement will catch errors when requests cannot reach our network, but may miss issues that affect only a subset of users.
- Ratio of home page requests that loaded in < 100 ms, as measured by instrumentation in the JavaScript on the home page itself, and reported back to a dedicated telemetry recording service. This measurement will more accurately capture the user experience, although we now need to modify the code to capture this information and build the infrastructure to record it—a specification that has its own reliability requirements.
As you can see, a single SLI specification might have multiple SLI implementations, each with its own set of pros and cons in terms of quality (how accurately they capture the experience of a customer), coverage (how well they capture the experience of all customers), and cost.
Your first attempt at an SLI and SLO doesn’t have to be correct; the most important goal is to get something in place and measured, and to set up a feedback loop so you can improve. (We dive deeper into this topic in Continuous Improvement of SLO Targets.)
In our first book, we advise against picking an SLO based upon current performance, because this can commit you to unnecessarily strict SLOs. While that advice is true, your current performance can be a good place to start if you don’t have any other information, and if you have a good process for iterating in place (which we’ll cover later). However, don’t let current performance limit you as you refine your SLO: your customers will also come to expect your service to perform at its SLO, so if your service returns successful requests 99.999% of the time in less than 10 ms, any significant regression from that baseline may make them unhappy.
To create your first set of SLOs, you need to decide upon a few key SLI specifications that matter to your service. Availability and latency SLOs are pretty common; freshness, durability, correctness, quality, and coverage SLOs also have their place (we’ll talk more about those later).
If you are having trouble figuring out what sort of SLIs to start with, it helps to start simple:
- Choose one application for which you want to define SLOs. If your product comprises many applications, you can add those later.
- Decide clearly who the “users” are in this situation. These are the people whose happiness you are optimizing.
- Consider the common ways your users interact with your system—common tasks and critical activities.
- Draw a high-level architecture diagram of your system; show the key components, the request flow, the data flow, and the critical dependencies. Group these components into categories listed in the following section (there may be some overlap and ambiguity; use your intuition and don’t let perfect be the enemy of the good).
You should think carefully about exactly what you select as your SLIs, but you also shouldn’t overcomplicate things. Especially if you’re just starting your SLI journey, pick an aspect of your system that’s relevant but easy to measure—you can always iterate and refine later.
###### Types of components
The easiest way to get started with setting SLIs is to abstract your system into a few common types of components. You can then use our list of suggested SLIs for each component to choose the ones most relevant to your service:
Request-driven
- The user creates some type of event and expects a response. For example, this could be an HTTP service where the user interacts with a browser or an API for a mobile application.
Pipeline
- A system that takes records as input, mutates them, and places the output somewhere else. This might be a simple process that runs on a single instance in real time, or a multistage batch process that takes many hours. Examples include:
- A system that periodically reads data from a relational database and writes it into a distributed hash table for optimized serving
- A video processing service that converts video from one format to another
- A system that reads in log files from many sources to generate reports
- A monitoring system that pulls metrics from remote servers and generates time series and alerts
Storage
- A system that accepts data (e.g., bytes, records, files, videos) and makes it available to be retrieved at a later date.
# A Worked Example
Consider a simplified architecture for a mobile phone game, shown in Figure 2-1.
The app running on the user’s phone interacts with an HTTP API running in the cloud. The API writes state changes to a permanent storage system. A pipeline periodically runs over this data to generate league tables that provide high scores for today, this week, and all time. This data is written to a separate league table data store, and the results are available via the mobile app (for in-game scores) and a website. Users can upload custom avatars, which are used both in-game via the API and in the high score website, to the User Data table.
Given this setup, we can start thinking about how users interact with the system, and what sort of SLIs would measure the various aspects of a user’s experience.
Some of these SLIs may overlap: a request-driven service may have a correctness SLI, a pipeline may have an availability SLI, and durability SLIs might be viewed as a variant on correctness SLIs. We recommend choosing a small number (five or fewer) of SLI types that represent the most critical functionality to your customers.
In order to capture both the typical user experience and the long tail, we also recommend using multiple grades of SLOs for some types of SLIs. For example, if 90% of users’ requests return within 100 ms, but the remaining 10% take 10 seconds, many users will be unhappy. A latency SLO can capture this user base by setting multiple thresholds: 90% of requests are faster than 100 ms, and 99% of requests are faster than 400 ms. This principle applies to all SLIs with parameters that measure user unhappiness.
Table 2-1 provides some common SLIs for different types of services.
| Type of service | Type of SLI | Description |
|---|---|---|
| Request-driven | Availability | The proportion of requests that resulted in a successful response. |
| Request-driven | Latency | The proportion of requests that were faster than some threshold. |
| Request-driven | Quality | If the service degrades gracefully when overloaded or when backends are unavailable, you need to measure the proportion of responses that were served in an undegraded state. For example, if the User Data store is unavailable, the game is still playable but uses generic imagery. |
| Pipeline | Freshness | The proportion of the data that was updated more recently than some time threshold. Ideally this metric counts how many times a user accessed the data, so that it most accurately reflects the user experience. |
| Pipeline | Correctness | The proportion of records coming into the pipeline that resulted in the correct value coming out. |
| Pipeline | Coverage | For batch processing, the proportion of jobs that processed above some target amount of data. For streaming processing, the proportion of incoming records that were successfully processed within some time window. |
| Storage | Durability | The proportion of records written that can be successfully read. Take particular care with durability SLIs: the data that the user wants may be only a small portion of the data that is stored. For example, if you have 1 billion records for the previous 10 years, but the user wants only the records from today (which are unavailable), then they will be unhappy even though almost all of their data is readable. |
##### Moving from SLI Specification to SLI Implementation
Now that we know our SLI specifications, we need to start thinking about how to implement them.
For your first SLIs, choose something that requires a minimum of engineering work. If your web server logs are already available, but setting up probes would take weeks and instrumenting your JavaScript would take months, use the logs.
You need enough information to measure the SLI: for availability, you need the success/failure status; for slow requests, you need the time taken to serve the request. You may need to reconfigure your web server to record this information. If you’re using a cloud-based service, some of this information may already be available in a monitoring dashboard.
There are various options for SLI implementations for our example architecture, each with its own pros and cons. The following sections detail SLIs for the three types of components in our system.
###### API and HTTP server availability and latency
For all of the considered SLI implementations, we base the response success on the HTTP status code. 5XX responses count against SLO, while all other requests are considered successful. Our availability SLI is the proportion of successful requests, and our latency SLIs are the proportion of requests that are faster than defined thresholds.
Your SLIs should be specific and measurable. To summarize the list of potential candidates provided in What to Measure: Using SLIs, your SLIs can use one or more of the following sources:
- Application server logs
- Load balancer monitoring
- Black-box monitoring
- Client-side instrumentation
Our example uses the load balancer monitoring, as the metrics are already available and provide SLIs that are closer to the user’s experience than those from the application server’s logs.
###### Pipeline freshness, coverage, and correctness
When our pipeline updates the league table, it records a watermark containing the timestamp of when the data was updated. Some example SLI implementations:
- Run a periodic query across the league table, counting the total number of fresh records and the total number of records. This will treat each stale record as equally important, regardless of how many users saw the data.
- Make all clients of the league table check the watermark when they request fresh data and increment a metric counter saying that data was requested. Increment another counter if the data was fresher than a predefined threshold.
From these two options, our example uses the client-side implementation, as it gives SLIs that are much more closely correlated with user experience and are straightforward to add.
To calculate our coverage SLI, our pipeline exports the number of records that it should have processed and the number of records that it successfully processed. This metric may miss records that our pipeline did not know about due to misconfiguration.
We have a couple potential approaches to measure correctness:
- Inject data with known outputs into the system, and count the proportion of times that the output matches our expectations.
- Use a method to calculate correct output based on input that is distinct from our pipeline itself (and likely more expensive, and therefore not suitable for our pipeline). Use this to sample input/output pairs, and count the proportion of correct output records. This methodology assumes that creating such a system is both possible and practical.
Our example bases its correctness SLI on some manually curated data in the game state database, with known good outputs that are tested every time the pipeline runs. Our SLI is the proportion of correct entries for our test data. In order for this SLI to be representative of the actual user experience, we need to make sure that our manually curated data is representative of real-world data.
##### Measuring the SLIs
Figure 2-2 shows how our white-box monitoring system collects metrics from the various components of the example application.
Let’s walk through an example of using metrics from our monitoring system to calculate our starter SLOs. While our example uses availability and latency metrics, the same principles apply to all other potential SLOs. For a full list of the metrics that our system uses, see Example SLO Document. All of our examples use Prometheus notation.
###### Load balancer metrics
Total requests by backend (`"api"` or `"web"`) and response code:

    http_requests_total{host="api", status="500"}

Total latency, as a cumulative histogram; each bucket counts the number of requests that took less than or equal to that time:

    http_request_duration_seconds{host="api", le="0.1"}
    http_request_duration_seconds{host="api", le="0.2"}
    http_request_duration_seconds{host="api", le="0.4"}
Generally speaking, it is better to count the slow requests than to approximate them with a histogram. But, because that information isn’t available, we use the histogram provided by our monitoring system. Another approach would be to base explicit slow request counts on the various slowness thresholds in the load balancer’s configuration (e.g., for thresholds of 100 ms and 500 ms). This strategy would provide more accurate numbers but require more configuration, which makes changing the thresholds retroactively harder.
```
http_request_duration_seconds{host="api", le="0.1"}
http_request_duration_seconds{host="api", le="0.5"}
```
###### Calculating the SLIs
Using the preceding metrics, we can calculate our current SLIs over the previous seven days, as shown in Table 2-2.
Table 2-2: Measured availability and latency SLIs over the previous seven days.
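As a rough sketch of how that table could be filled in, the availability and 400 ms latency SLIs might be computed in PromQL from the load balancer metrics; the expressions below use the metric names exactly as written above, although in a real Prometheus setup the histogram series would normally carry a `_bucket` suffix:

```
# Availability: non-5xx responses over all responses, past 7 days
sum(rate(http_requests_total{host="api", status!~"5.."}[7d]))
/
sum(rate(http_requests_total{host="api"}[7d]))

# Latency: requests completing in under 400 ms over all requests
sum(rate(http_request_duration_seconds{host="api", le="0.4"}[7d]))
/
sum(rate(http_requests_total{host="api"}[7d]))
```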
##### Using the SLIs to Calculate Starter SLOs
We can round down these SLIs to manageable numbers (e.g., two significant figures of availability, or up to 50 ms of latency) to obtain our starting SLOs.
For example, over four weeks, the API metrics show:
- Total requests: 3,663,253
- Total successful requests: 3,557,865 (97.123%)
- 90th percentile latency: 432 ms
- 99th percentile latency: 891 ms
We repeat this process for the other SLIs, and create a proposed SLO for the API, shown in Table 2-3.
| SLO type | Objective |
|---|---|
| Availability | 97% |
| Latency | 90% of requests < 450 ms |
| Latency | 99% of requests < 900 ms |
Example SLO Document provides a full example of an SLO document. This document includes SLI implementations, which we omitted here for brevity.
Based upon this proposed SLO, we can calculate our error budget over those four weeks, as shown in Table 2-4.
| SLO | Allowed failures |
|---|---|
| 97% availability | 109,897 |
| 90% of requests faster than 450 ms | 366,325 |
| 99% of requests faster than 900 ms | 36,632 |
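Each “allowed failures” figure is simply the error budget fraction applied to the four-week request count: for example, (1 − 0.97) × 3,663,253 ≈ 109,897 failed requests for the availability SLO, and (1 − 0.99) × 3,663,253 ≈ 36,632 slow requests for the 99% latency SLO.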
# Choosing an Appropriate Time Window
SLOs can be defined over various time intervals, and can use either a rolling window or a calendar-aligned window (e.g., a month). There are several factors you need to account for when choosing the window.
Rolling windows are more closely aligned with user experience: if you have a large outage on the final day of a month, your user doesn’t suddenly forget about it on the first day of the following month. We recommend defining this period as an integral number of weeks so it always contains the same number of weekends. For example, if you use a 30-day window, some periods might include four weekends while others include five weekends. If weekend traffic differs significantly from weekday traffic, your SLIs may vary for uninteresting reasons.
Calendar windows are more closely aligned with business planning and project work. For example, you might evaluate your SLOs every quarter to determine where to focus the next quarter’s project headcount. Calendar windows also introduce some element of uncertainty: in the middle of the quarter, it is impossible to know how many requests you will receive for the rest of the quarter. Therefore, decisions made mid-quarter must speculate as to how much error budget you’ll spend in the remainder of the quarter.
Shorter time windows allow you to make decisions more quickly: if you missed your SLO for the previous week, then small course corrections—prioritizing relevant bugs, for example—can help avoid SLO violations in future weeks.
Longer time periods are better for more strategic decisions: for example, if you could choose only one of three large projects, would you be better off moving to a high-availability distributed database, automating your rollout and rollback procedure, or deploying a duplicate stack in another zone? You need more than a week’s worth of data to evaluate large multiquarter projects; the amount of data required is roughly commensurate with the amount of engineering work being proposed to fix it.
We have found a four-week rolling window to be a good general-purpose interval. We complement this time frame with weekly summaries for task prioritization and quarterly summarized reports for project planning.
If the data source allows, you can then use this proposed SLO to calculate your actual SLO performance over that interval: if you set your initial SLO based on actual measurements, by design, you met your SLO. But we can also gather interesting information about the distribution. Were there any days during the past four weeks when our service did not meet its SLO? Do these days correlate with actual incidents? Was there (or should there have been) some action taken on those days in response to incidents?
If you do not have logs, metrics, or any other source of historical performance, you need to configure a data source. For example, as a low-fidelity solution for HTTP services, you can set up a remote monitoring service that performs some kind of periodic health check on the service (a ping or an HTTP GET) and reports back the number of successful requests. A number of online services can easily implement this solution.
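As a hedged sketch of such a low-fidelity prober (the URL and log path below are placeholders, not anything from the chapter), a simple shell loop with `curl` is enough to start collecting the two numbers an availability SLI needs:

```
#!/bin/bash
# Probe the service once a minute and log: timestamp, HTTP status, success flag.
# SLI over a window = sum of the third column / number of lines.
URL="https://game.example.com/api/health"   # placeholder endpoint
LOG="/var/tmp/probe.log"

while true; do
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$URL")
    ok=0
    if [ "$code" -ge 200 ] && [ "$code" -lt 500 ]; then
        ok=1                                # count non-5xx responses as good
    fi
    echo "$(date +%s) $code $ok" >> "$LOG"
    sleep 60
done
```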
# Getting Stakeholder Agreement
In order for a proposed SLO to be useful and effective, you will need to get all stakeholders to agree to it:
- The product managers have to agree that this threshold is good enough for users—performance below this value is unacceptably low and worth spending engineering time to fix.
- The product developers need to agree that if the error budget has been exhausted, they will take some steps to reduce risk to users until the service is back in budget (as discussed in Establishing an Error Budget Policy).
- The team responsible for the production environment who are tasked with defending this SLO have agreed that it is defensible without Herculean effort, excessive toil, and burnout—all of which are damaging to the long-term health of the team and service.
Once all of these points are agreed upon, the hard part is done.6 You have started your SLO journey, and the remaining steps entail iterating from this starting point.
To defend your SLO you will need to set up monitoring and alerting (see Alerting on SLOs) so that engineers receive timely notifications of threats to the error budget before those threats become deficits.
##### Establishing an Error Budget Policy
Once you have an SLO, you can use the SLO to derive an error budget. In order to use this error budget, you need a policy outlining what to do when your service runs out of budget.
Getting the error budget policy approved by all key stakeholders—the product manager, the development team, and the SREs—is a good test for whether the SLOs are fit for purpose:
- If the SREs feel that the SLO is not defensible without undue amounts of toil, they can make a case for relaxing some of the objectives.
- If the development team and product manager feel that the increased resources they’ll have to devote to fixing reliability will cause feature release velocity to fall below acceptable levels, then they can also argue for relaxing objectives. Remember that lowering the SLOs also lowers the number of situations to which the SREs will respond; the product manager needs to understand this tradeoff.
- If the product manager feels that the SLO will result in a bad experience for a significant number of users before the error budget policy prompts anyone to address an issue, the SLOs are likely not tight enough.
If all three parties do not agree to enforce the error budget policy, you need to iterate on the SLIs and SLOs until all stakeholders are happy. Decide how to move forward and what you need to make the decision: more data, more resources, or a change to the SLI or SLO?
When we talk about enforcing an error budget, we mean that once you exhaust your error budget (or come close to exhausting it), you should do something in order to restore stability to your system.
To make error budget enforcement decisions, you need to start with a written policy. This policy should cover the specific actions that must be taken when a service has consumed its entire error budget for a given period of time, and specify who will take them. Common owners and actions might include:
- The development team gives top priority to bugs relating to reliability issues over the past four weeks.
- The development team focuses exclusively on reliability issues until the system is within SLO. This responsibility comes with high-level approval to push back on external feature requests and mandates.
- To reduce the risk of more outages, a production freeze halts certain changes to the system until there is sufficient error budget to resume changes.
Sometimes a service consumes the entirety of its error budget, but not all stakeholders agree that enacting the error budget policy is appropriate. If this happens, you need to return to the error budget policy approval stage.
##### Documenting the SLO and Error Budget Policy
An appropriately defined SLO should be documented in a prominent location where other teams and stakeholders can review it. This documentation should include the following information:
- The authors of the SLO, the reviewers (who checked it for technical accuracy), and the approvers (who made the business decision about whether it is the right SLO).
- The date on which it was approved, and the date when it should next be reviewed.
- A brief description of the service to give the reader context.
- The details of the SLO: the objectives and the SLI implementations.
- The details of how the error budget is calculated and consumed.
- The rationale behind the numbers, and whether they were derived from experimental or observational data. Even if the SLOs are totally ad hoc, this fact should be documented so that future engineers reading the document don’t make bad decisions based upon ad hoc data.
How often you review an SLO document depends on the maturity of your SLO culture. When starting out, you should probably review the SLO frequently—perhaps every month. Once the appropriateness of the SLO becomes more established, you can likely reduce reviews to happen quarterly or even less frequently.
The error budget policy should also be documented, and should include the following information:
- The policy authors, reviewers, and approvers
- The date on which it was approved, and the date when it should next be reviewed
- A brief description of the service to give the reader context
- The actions to be taken in response to budget exhaustion
- A clear escalation path to follow if there is disagreement on the calculation or whether the agreed-upon actions are appropriate in the circumstances
- Depending upon the audience’s level of error budget experience and expertise, it may be beneficial to include an overview of error budgets.
See Example SLO Document for an example of an SLO document and an error budget policy.
##### Dashboards and Reports
In addition to the published SLO and error budget policy documents, it is useful to have reports and dashboards that provide in-time snapshots of the SLO compliance of your services, for communicating with other teams and for spotting problematic areas.
The report in Figure 2-3 shows the overall compliance of several services: whether they met all of their quarterly SLOs for the previous year (the numbers in parentheses indicate the number of objectives that were met, and the total number of objectives), and whether their SLIs were trending upward or downward in relation to the previous quarter and the same quarter last year.
It is also useful to have dashboards showing SLI trends. These dashboards indicate if you are consuming budget at a higher-than-usual rate, or if there are patterns or trends you need to be aware of.
The dashboard in Figure 2-4 shows the error budget for a single quarter, midway through that quarter. Here we see that a single event consumed around 15% of the error budget over the course of two days.
Error budgets can be useful for quantifying these events—for example, “this outage consumed 30% of my quarterly error budget,” or “these are the top three incidents this quarter, ordered by how much error budget they consumed.”
# Continuous Improvement of SLO Targets
Every service can benefit from continuous improvement. This is one of the central service goals in ITIL, for example.
Before you can improve your SLO targets, you need a source of information about user satisfaction with your service. There are a huge range of options:
- You can count outages that were discovered manually, posts on public forums, support tickets, and calls to customer service.
- You can attempt to measure user sentiment on social media.
- You can add code to your system to periodically sample user happiness.
- You can conduct face-to-face user surveys and samples.
The possibilities are endless, and the optimal method depends on your service. We recommend starting with a measurement that’s cheap to collect and iterating from that starting point. Asking your product manager to include reliability in their existing discussions with customers about pricing and functionality is an excellent place to start.
##### Improving the Quality of Your SLO
Count your manually detected outages. If you have support tickets, count those too. Look at periods when you had a known outage or incident. Check that these periods correlate with steep drops in error budget. Likewise, look at times when your SLIs indicate an issue, or your service fell out of SLO. Do these time periods correlate with known outages or an increase in support tickets? If you are familiar with statistical analysis, Spearman’s rank correlation coefficient can be a useful way to quantify this relationship.
Figure 2-5 shows a graph of the number of support tickets raised per day versus the measured loss in our error budget on that day. While not all tickets are related to reliability issues, there is a correlation between tickets and error budget loss. We see two outliers: one day with only 5 tickets, where we lost 10% of our error budget, and one day with 40 tickets, on which we lost no error budget. Both warrant closer investigation.
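If you want to quantify that relationship rather than eyeball it, a calculation along these lines works. This is a minimal sketch assuming SciPy is available; the two series are placeholders standing in for your per-day ticket counts and error budget losses:

```python
# Sketch: rank correlation between daily support tickets and daily error
# budget loss. Both lists are illustrative placeholders, one value per day.
from scipy.stats import spearmanr

tickets_per_day = [3, 5, 2, 40, 8, 1, 12, 6, 4, 9]
budget_loss_per_day = [0.0, 0.10, 0.0, 0.0, 0.03, 0.0, 0.05, 0.01, 0.0, 0.04]

rho, p_value = spearmanr(tickets_per_day, budget_loss_per_day)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

# Outliers worth a closer look: many tickets with no budget loss, or a large
# budget loss with very few tickets.
for day, (t, loss) in enumerate(zip(tickets_per_day, budget_loss_per_day), start=1):
    if (t >= 20 and loss == 0.0) or (t <= 5 and loss >= 0.10):
        print(f"day {day}: {t} tickets, {loss:.0%} budget loss -- investigate")
```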
If some of your outages and ticket spikes are not captured in any SLI or SLO, or if you have SLI dips and SLO misses that don’t map to user-facing issues, this is a strong sign that your SLO lacks coverage. This situation is totally normal and should be expected. Your SLIs and SLOs should change over time as realities about the service they represent change. Don’t be afraid to examine and refine them over time!
There are several courses of action you can take if your SLO lacks coverage:
Change your SLO
- If your SLIs indicated a problem, but your SLOs didn’t prompt anyone to notice or respond, you may need to tighten your SLO.
- If the incident on that date was large enough that it needs to be addressed, look at the SLI values during the periods of interest. Calculate what SLO would have resulted in a notification on those dates. Apply that SLO to your historic SLIs, and see what other events this adjustment would have captured. It’s pointless to improve the recall of your system if you lower the precision such that the team must constantly respond to unimportant events.7
- Likewise, for false-positive days, consider relaxing the SLO.
- If changing the SLO in either direction results in too many false positives or false negatives, then you also need to improve the SLI implementation.
Change your SLI implementation
- There are two ways to change your SLI implementation: either move the measurement closer to the user to improve the quality of the metric, or improve coverage so you capture a higher percentage of user interactions. For example:
- Instead of measuring success/latency at the server, measure it at the load balancer or on the client.
- Instead of measuring availability with a simple HTTP GET request, use a health-checking handler that exercises more functionality of the system, or a test that executes all of the client-side JavaScript.
Institute an aspirational SLO
- Sometimes you determine that you need a tighter SLO to make your users happy, but improving your product to meet that SLO will take some time. If you implement the tighter SLO, you’ll be permanently out of SLO and subject to your error budget policy. In this situation, you can make the refined SLO an aspirational SLO—measured and tracked alongside your current SLO, but explicitly called out in your error budget policy as not requiring action. This way you can track your progress toward meeting the aspirational SLO, but you won’t be in a perpetual state of emergency.
Iterate
- There are many different ways to iterate, and your review sessions will identify many potential improvements. Pick the option that’s most likely to give the highest return on investment. Especially during the first few iterations, err on the side of quicker and cheaper; doing so reduces the uncertainty in your metrics and helps you determine if you need more expensive metrics. Iterate as many times as you need to.
# Decision Making Using SLOs and Error Budgets
Once you have SLOs, you can start using them for decision making.
The obvious decisions start from what to do when you’re not meeting your SLO—that is, when you’ve exhausted your error budget. As already discussed, the appropriate course of action when you exhaust your error budget should be covered by the error budget policy. Common policies include stopping feature launches until the service is once again within SLO or devoting some or all engineering time to working on reliability-related bugs.
In extreme circumstances, a team can declare an emergency with high-level approval to deprioritize all external demands (requests from other teams, for example) until the service meets exit criteria—typically that the service is within SLO and that you’ve taken steps to decrease the chances of a subsequent SLO miss. These steps may include improving monitoring, improving testing, removing dangerous dependencies, or rearchitecting the system to remove known failure types.
You can determine the scale of the incident according to the proportion of the error budget it consumed, and use this data to identify the most critical incidents that merit closer investigation.
For example, imagine a release of a new API version causes 100% `NullPointerException`s until the system can be reverted four hours later.8 Inspecting the raw server logs indicates that the issue caused 14,066 errors. Using the numbers from our 97% SLO earlier, and our budget of 109,897 errors, this single event used 13% of our error budget.
Or perhaps the server on which our singly homed state database is stored fails, and restoring from backups takes 20 hours. We estimate (based upon historical traffic over that period) that this outage caused us 72,000 errors, or 65% of our error budget.
Imagine that our example company had only one server failure in five years, but typically experiences two or three bad releases that require rollbacks per year. We can estimate that, on average, bad pushes cost twice as much error budget as database failures. The numbers prove that addressing the release problem provides much more benefit than investing resources in investigating the server failure.
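The arithmetic behind those percentages is simple enough to script, which also makes it easy to rank incidents by budget impact. Here is a minimal sketch using the figures from the two incidents above (the 109,897-error budget comes from the 97% SLO example earlier in the chapter):

```python
# Sketch: express incidents as fractions of the quarterly error budget and
# rank them. Budget and incident numbers come from the examples in the text.
error_budget = 109_897  # allowed bad events for the quarter (97% SLO example)

incidents = {
    "bad API release (4h of NullPointerExceptions)": 14_066,
    "state database restore (20h outage)": 72_000,
}

for name, errors in sorted(incidents.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {errors / error_budget:.1%} of quarterly error budget")

# Expected quarterly budget consumption from each failure mode, using the
# rough frequencies given in the text (2-3 bad releases/year, one server
# failure in five years). This is what justifies prioritizing release safety.
quarters_per_year = 4
release_rate = 2.5 / quarters_per_year         # bad releases per quarter
db_failure_rate = (1 / 5) / quarters_per_year  # server failures per quarter
print(f"releases:   {release_rate * 14_066 / error_budget:.1%} of budget per quarter")
print(f"db failure: {db_failure_rate * 72_000 / error_budget:.1%} of budget per quarter")
```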
If the service is running flawlessly and needs little oversight, then it may be time to move the service to a less hands-on tier of support. You might continue to provide incident response management and high-level oversight, but you no longer need to be as closely involved with the product on a day-to-day basis. Therefore, you can focus your efforts on other systems that need more SRE support.
Table 2-5 provides suggested courses of action based on three key dimensions:
- Performance against SLO
- The amount of toil required to operate the service
- The level of customer satisfaction with the service
| SLO | Toil | Customer satisfaction | Action |
|---|---|---|---|
| Met | Low | High | Choose to (a) relax release and deployment processes and increase velocity, or (b) step back from the engagement and focus engineering time on services that need more reliability. |
| Met | Low | Low | Tighten SLO. |
| Met | High | High | If alerting is generating false positives, reduce sensitivity. Otherwise, temporarily loosen the SLOs (or offload toil) and fix product and/or improve automated fault mitigation. |
| Met | High | Low | Tighten SLO. |
| Missed | Low | High | Loosen SLO. |
| Missed | Low | Low | Increase alerting sensitivity. |
| Missed | High | High | Loosen SLO. |
| Missed | High | Low | Offload toil and fix product and/or improve automated fault mitigation. |
# Advanced Topics
Once you have a healthy and mature SLO and error budget culture, you can continue to improve and refine how you measure and discuss the reliability of your services.
##### Modeling User Journeys
While all of the techniques discussed in this chapter will be beneficial to your organization, ultimately SLOs should center on improving the customer experience. Therefore, you should write SLOs in terms of user-centric actions.
You can use critical user journeys to help capture the experience of your customers. A critical user journey is a sequence of tasks that is a core part of a given user’s experience and an essential aspect of the service. For example, for an online shopping experience, critical user journeys might include:
- Searching for a product
- Adding a product to a shopping cart
- Completing a purchase
These tasks will almost certainly not map well to your existing SLIs; each task requires multiple complex steps that can fail at any point, and inferring the success (or failure) of these actions from logs can be extremely difficult. (For example, how do you determine if the user failed at the third step, or if they simply got distracted by cat videos in another tab?) However, we need to identify what matters to the user before we can start making sure that aspect of the service is reliable.
Once you identify user-centric events, you can solve the problem of measuring them. You might measure them by joining distinct log events together, using advanced JavaScript probing, using client-side instrumentation, or using some other process. Once you can measure an event, it becomes just another SLI, which you can track alongside your existing SLIs and SLOs. Critical user journeys can improve your recall without affecting your precision.
##### Grading Interaction Importance
Not all requests are considered equal. The HTTP request from a mobile app that checks for account notifications (where notifications are generated by a daily pipeline) is important to your user, but is not as important as a billing-related request by your advertiser.
We need a way to distinguish certain classes of requests from others. You can use bucketing to accomplish this—that is, adding more labels to your SLIs, and then applying different SLOs to those different labels. Table 2-6 shows an example.
| Customer tier | Availability SLO |
|---|---|
| Premium | 99.99% |
| Free | 99.9% |
You can split requests by expected responsiveness, as shown in Table 2-7.
| Responsiveness | Latency SLO |
|---|---|
| Interactive (i.e., requests that block page load) | 90% of requests complete in 100 ms |
| CSV download | 90% of downloads start within 5 s |
If you have the data available to apply your SLO to each customer independently, you can track the number of customers who are in SLO at any given time. Note that this number can be highly variable—customers who send a very low number of requests will have either 100% availability (because they were lucky enough to experience no failures) or very low availability (because the one failure they experienced was a significant percentage of their requests). Individual customers can fail to meet their SLO for uninteresting reasons, but in aggregate, tracking problems that affect a wide number of customers’ SLO compliance can be a useful signal.
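Here is a sketch of what per-customer tracking might look like, assuming you can attribute each request to a customer and a tier; the customers, request counts, and targets below are placeholders:

```python
# Sketch: per-customer availability vs. a per-tier SLO target, plus a count
# of customers currently out of SLO. All numbers are illustrative placeholders.
tier_slo = {"premium": 0.9999, "free": 0.999}

# customer -> (tier, good_requests, total_requests) over the window
customers = {
    "acme":    ("premium", 1_249_900, 1_250_000),
    "initech": ("free",       52_310,    52_400),
    "hooli":   ("free",            9,        10),  # tiny traffic: noisy signal
}

out_of_slo = []
for name, (tier, good, total) in customers.items():
    availability = good / total
    if availability < tier_slo[tier]:
        out_of_slo.append(name)
    print(f"{name:8s} {tier:7s} {availability:.4%} (target {tier_slo[tier]:.2%})")

print(f"{len(out_of_slo)} of {len(customers)} customers out of SLO: {out_of_slo}")
```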
##### Modeling Dependencies
Large systems have many components. A single system may have a presentation layer, an application layer, a business logic layer, and a data persistence layer. Each of these layers may consist of many services or microservices.
While your prime concern is implementing a user-centric SLO that covers the entire stack, SLOs can also be a useful way to coordinate and implement reliability requirements between different components in the stack.
For example, if a single component is a critical dependency9 for a particularly high-value interaction, its reliability guarantee should be at least as high as the reliability guarantee of the dependent action. The team that runs that particular component needs to own and manage its service’s SLO in the same way as the overarching product SLO.
If a particular component has inherent reliability limitations, the SLO can communicate that limitation. If the user journey that depends upon it needs a higher level of availability than that component can reasonably provide, you need to engineer around that condition. You can either use a different component or add sufficient defenses (caching, offline store-and-forward processing, graceful degradation, etc.) to handle failures in that component.
It can be tempting to try to math your way out of these problems. If you have a service that offers 99.9% availability in a single zone, and you need 99.95% availability, simply deploying the service in two zones should solve that requirement. The probability that both instances will experience an outage at the same time is so low that two zones should provide 99.9999% availability. However, this reasoning assumes that both instances are wholly independent, which is almost never the case. The two instances of your app will have common dependencies, common failure domains, shared fate, and global control planes—all of which can cause an outage in both systems, no matter how carefully each is designed and managed. Unless each of these dependencies and failure patterns is carefully enumerated and accounted for, any such calculations will be deceptive.
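The trap is easy to see in numbers. A small sketch with hypothetical availability figures contrasts the naive independence assumption with what happens once both zones sit behind a single shared dependency:

```python
# Sketch: why "two zones of 99.9% = 99.9999%" only holds if the zones are
# truly independent. All availability figures here are hypothetical.
zone_availability = 0.999

# Naive calculation: both zones must fail simultaneously and independently.
naive = 1 - (1 - zone_availability) ** 2
print(f"assuming independence:    {naive:.4%}")      # ~99.9999%

# Same two zones, but both depend on a shared control plane / global config
# service that is itself only 99.95% available. The stack cannot beat it.
shared_dependency = 0.9995
realistic = shared_dependency * naive
print(f"with a shared dependency: {realistic:.4%}")  # ~99.95%, not 99.9999%
```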
There are two schools of thought regarding how an error budget policy should address a missed SLO when the failure is caused by a dependency that’s handled by another team:
- Your team should not halt releases or devote more time to reliability, as your system didn’t cause the issue.
- You should enact a change freeze in order to minimize the chances of future outages, regardless of the cause of that outage.
The second approach will make your users happier. You have some flexibility in how you apply this principle. Depending on the nature of the outage and dependency, freezing changes may not be practical. Decide what is most appropriate for your service and its dependencies, and record that decision for posterity in your documented error budget policy. For an example of how this might work in practice, see the example error budget policy in Example Error Budget Policy.
##### Experimenting with Relaxing Your SLOs
You may want to experiment with the reliability of your application and measure which changes in reliability (e.g., adding latency into page load times) have a measurably adverse impact on user behavior (e.g., percentage of users completing a purchase). We recommend performing this sort of analysis only if you are confident that you have error budget to burn. There are many subtle interactions between latency, availability, customers, business domains, and competition (or lack thereof). To make a choice to deliberately lower the perceived customer experience is a Rubicon to be crossed extremely thoughtfully, if at all.
While this exercise might seem scary (nobody wants to lose sales!), the knowledge you can gain by performing such experiments will allow you to improve your service in ways that could lead to even better performance (and higher sales!) in the future. This process may allow you to mathematically identify a relationship between a key business metric (e.g., sales) and a measurable technical metric (e.g., latency). If it does, you have gained a very valuable piece of data you can use to make important engineering decisions for your service going forward.
This exercise should not be a one-time activity. As your service evolves, so will your customers’ expectations. Make sure you regularly review the ongoing validity of the relationship.
This sort of analysis is also risky because you can misinterpret the data you get. For example, if you artificially slow your pages down by 50 ms and notice that no corresponding loss in conversions occurs, you might conclude that your latency SLO is too strict. However, your users might be unhappy, but simply lacking an alternative to your service at the moment. As soon as a competitor comes along, your users will leave. Be sure you are measuring the correct indicators, and take appropriate precautions.
# Conclusion
Every topic covered in this book can be tied back to SLOs. Now that you’ve read this chapter, we hope you’ll agree that even partly formalized SLOs (which clearly state your promises to users) offer a framework to discuss system behavior with greater clarity, and can help with pinpointing actionable remedies when services fail to meet expectations.
To summarize:
- SLOs are the tool by which you measure your service’s reliability.
- Error budgets are a tool for balancing reliability with other engineering work, and a great way to decide which projects will have the most impact.
- You should start using SLOs and error budgets today.
For an example SLO document and an example error budget policy, see Appendixes Example SLO Document and Example Error Budget Policy.
1. A note on terminology: throughout this chapter, we use the word reliability to talk about how a service is performing with regard to all of its SLIs. This could be indicative of many things, such as availability or latency.
2. This is distinct from a service level agreement (SLA), which is a business contract that comes into effect when your users are so unhappy you have to compensate them in some fashion.
3. For more details about factoring dependencies into your service’s reliability, see Ben Treynor, Mike Dahlin, Vivek Rau, and Betsy Beyer, “The Calculus of Service Availability,” ACM Queue 15, no. 2 (2017), https://queue.acm.org/detail.cfm?id=3096459.
4. If you measure your SLO over a calendar period, such as a quarter-year, then you may not know how big your budget will be at the end of the quarter if it’s based upon unpredictable metrics such as traffic. See Choosing an Appropriate Time Window for more discussion about reporting periods.
5. 50 ms because users are unlikely to perceive a 50 ms change in latency, but the appropriate window obviously depends on the service and the users. A reporting service will be different from a real-time game.
6. Disclaimer: there may be more difficult tasks in your future.
7. Recall is the proportion of significantly user-impacting events that the SLI captures. Precision is the proportion of events captured by the SLI that were significantly user-impacting.
8. It is worth reiterating here that an error budget is an approximation of user satisfaction. A four-hour outage every 30 days would probably result in fewer unhappy users than four separate one-hour outages every 30 days, which in turn would cause fewer unhappy users than a constant error rate of 0.5%, but our error budget treats them the same. These thresholds will vary between services.
9. A dependency is critical if its unavailability means that your service is also unavailable.
| true | true | true | null |
2024-10-12 00:00:00
|
2018-01-01 00:00:00
| null |
article
|
sre.google
|
Google SRE
| null | null |
18,646,491 |
http://proofficecalculator.com/
|
Pro Office Calculator
| null | null | true | true | false | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
39,101,452 |
https://old.reddit.com/r/OpenAI/comments/187pf4u/gpt4_has_become_so_lazy_that_people_are_faking/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,980,580 |
https://www.sfgate.com/technology/businessinsider/article/Facebook-has-reportedly-fired-an-employee-accused-12880672.php
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,881,960 |
http://addyosmani.com/blog/the-future-of-data-binding-is-object-observe/
|
The future of data-binding is Object.observe()
|
Addy Osmani
|
# The future of data-binding is Object.observe()
## October 20, 2013
`Object.observe()` is a proposed mechanism for bringing true data-binding to the browser. It exposes a mechanism for observing changes to objects and arrays, notifying others of mutations made to these objects. In my JSConf talk on O.o() I walk developers through the API, discussing how it will change things for AngularJS, Ember and Polymer. Some of these libraries have already explored replacing their dirty-checking and set-traps with `Object.observe()`. In fact, the Angular team reported last year that their bindings saw a 20-40x increase in speed when using an early version of it.
This is promising and the hope is once browser vendors other than Chrome implement the feature, many libraries using data-binding will see considerable performance boosts.
I plan on doing a more detailed write-up on `Object.observe()` in the near future, but until then you might be interested in the following resources:
- ObserveJS - an Object.observe() Polyfill
- TemplateBinding - a TemplateBinding Polyfill
- Gist with some examples from my talk
- Rafael Weinstein's September slides on Object.observe()
- Object.observe() V8 tracking bug
- Respond to change with Object.observe()
- Object.observe() on the Harmony wiki
- Simplest example of using Object.observe()
- An intro to Object.observe() on the Bocoup blog
| true | true | true |
Object.observe() is a proposed mechanism for bringing true data-binding to the browser. It exposes a mechanism for observing changes to objects and...
|
2024-10-12 00:00:00
|
2013-10-20 00:00:00
| null |
addyosmani.com
|
Addyosmani
| null | null |
|
17,554,098 |
https://dev.to/kylegalbraith/how-to-get-started-with-test-driven-development-today-401j
|
How To Get Started With Test Driven Development Today
|
Kyle Galbraith
|
Test-driven development (TDD) is the act of writing tests *before* writing any code. Also known as red/green testing because you write the test, it fails, and then you write the code that makes it pass.
This process has a lot of different benefits such as simpler designs, more test coverage, and improved code quality. It provides a structure for developers to operate within that can often yield useful coding standards. If a team adopts TDD then **all** developers write tests for the code they write.
But if you are completely new to TDD, then getting started with it can be fuzzy. It turns out that it is quite simple to get started.
## Dip Your Toes Into TDD
In an ideal world, all features have their expected behavior ahead of time. But in reality, product managers change their minds, use cases shift, and expected behaviors shift with them. This adds a twist to TDD that can be frustrating at first.

For TDD newbies, trying to write tests before a feature is even developed can be a mental blocker. You feel stuck or slowed down by TDD because the expected behaviors are not nailed down yet. Oftentimes you have to iterate on your tests as requirements change. This is **not** a bad thing, but it can be a challenging place to start.
In fact, it is this twist that causes stakeholders to say "TDD is taking too long". The reality is that TDD is all about setting yourself up for faster development in the future. Every test written is a notch in the belt of better code quality and faster development.
If you are working with a legacy code base, a TDD experiment is just one bug away. Every piece of software has them, and they tend to have expected versus actual behaviors. These are great areas to start applying TDD concepts.
Follow these 7 steps to get familiar with test-driven development.
- The bug must be reproducible and have expected behavior.
- Now find where in the code the bug is at.
- Create a unit test that has the expected behavior.
- Run your new test and see that it fails.
- Update the code to produce the expected behavior.
- Run your new test again and see that it passes.
- Perform any refactoring on the code your test covers.
Just like that, you have applied the TDD concepts. Simple right? Almost too simple. There are some important details that I glossed over in regards to a couple of these steps.
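Before digging into those details, it may help to see one pass through the loop in code. This is a deliberately tiny sketch using Python and pytest (the comments below mention C# tooling like NUnit, but the red/green cycle is language-agnostic): the bug is that a discount function lets totals go negative.

```python
# A minimal red/green pass with pytest. Written as one file for brevity;
# in a real project the test would live in its own test_ module.

def apply_discount(total, discount):
    """The code under test. The original bug: a large discount could push the
    total below zero. The max(..., 0) is the fix that turns the test green."""
    return max(total - discount, 0)


# Step 3: write the test for the expected behavior first.
# Step 4: run `pytest this_file.py` and watch it fail against the buggy code.
# Steps 5-6: apply the fix above and watch it pass.
def test_discount_never_produces_negative_total():
    assert apply_discount(total=10, discount=25) == 0


def test_normal_discount_still_works():
    assert apply_discount(total=100, discount=25) == 75
```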
### The bug must be reproducible and have expected behavior
If there is an open question of whether something is a bug, then it isn't a great bug to start your TDD adventure on. Why? Because these types of bugs have unclear expected behaviors, so what is your test supposed to test?
### Create a unit test that has the expected behavior
In legacy systems that **have** unit tests already, this might be trivial. If you are in a system that does not have any unit tests yet, then you are breaking new ground. Breaking new ground usually means refactoring code. Why? Because these codebases often need to be made testable: adding dependency injection, making classes and functions single-responsibility, and maybe introducing interfaces.
### Perform any refactoring on the code your test covers
This is the campsite rule: leave the code better than you found it. If you think it is spotless, double-check with someone on your team. The code is a living organism that evolves with every commit. It must be maintained, cleaned, and optimized where it can be. With good unit testing in place, you can do this maintenance and know if you broke something you shouldn't have.
## What about features?
Once you are familiar with TDD concepts you should apply them to new features as well. The process is the same for the most part. Some folks write a few lines of the feature and then write the tests. Others write all the tests and then write the feature.
There is a spectrum of TDD folks who have very strong opinions on which of these is correct. I am of the opinion that they are both great because there are tests either way.
What about the changing requirements? They're not a big deal. Why? Because if the requirement changes, you update your test, see it fail, and then update the code. By putting in the work to write tests before or in parallel with your feature, iterating on that feature becomes a breeze.
## Conclusion
Test-driven development (TDD) is a very powerful tool in creating software. It enables a team to develop maintainable and high-quality code. But, it is not the only tool. Pair programming, bite-size stories, and fast iterative development are critical tools to have in place as well.
### Learn AWS By Actually Using It
If you enjoyed this post and are hungry to start learning more about Amazon Web Services, I have created a new course on how to host, secure, and deliver static websites on AWS! It is a book and video course that cuts through the sea of information to accelerate your learning of AWS. Giving you a framework that enables you to learn complex things by actually using them.
Head on over to the landing page to get a copy for yourself!
## Top comments (7)
Great article! I don't often agree fully on 'TDD advice for newbies' type of articles, but 100% with you on all these.
PS. I've been actually writing very similar article myself, but you already told 50% of my secrets ;-)
Very nice!
Legacy code is a big pain and a large technical debt for many companies.
We've applied most of these steps when we started adding unit tests to our codebase.
Hope you'll make a short series on this subject :)
That is a great idea, a series of posts where we walk through some legacy code and transform it via TDD. Thank you for the tip!
Hi Kyle, great article!
I actually started applying TDD a few months ago in my workplace, and now I read your article and it gave me new and interesting insights :)
Which languages do you work with? do you use any unit testing/mocking frameworks?
I use a variety of languages. I would say the one I have used most often would be C#. There is some great tools like NUnit and MSTest for frameworks, and NSubstitute or Moq for mocking.
That's great!
Have you heard about Typemock by any chance?
My team and I are working with their product - Isolator, and I'm looking for some feedback from people who might use their product as well...
Hi @kylegalbraith ,
thanks for your post. I like the idea of starting your TDD journey by using it for bug fixes.
This is so true. If you find yourself in a legacy application with no tests, I HIGHLY recommend Michael Feathers book Working Effectively with Legacy Code. It's really awesome. He shares a ton of tips on how to get your legacy app tested.
| true | true | true |
Test driven development is easier to get started with than you might think
|
2024-10-12 00:00:00
|
2018-04-08 00:00:00
|
article
|
dev.to
|
DEV Community
| null | null |
|
29,112,260 |
https://noteplan.co/blog/implementing-para-in-noteplan/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
36,729,949 |
https://www.washingtonpost.com/world/interactive/2023/greece-migrant-boat-coast-guard/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
33,137,995 |
https://blog.goblinapp.com/goblins-law-of-whimsy/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
28,879,497 |
https://www.cnn.com/2021/10/15/world/italy-green-pass-covid-protests-intl/index.html
|
Thousands protest as Italy’s Covid pass becomes mandatory for workers | CNN
|
Barbie Latza Nadeau; CNN; Reuters
|
A mandate for all workers in Italy to show a government-issued Covid-19 pass came into force on Friday, triggering protests at key ports and fears of disruption.
Anyone who is on a payroll – in the public or private sector – must have a ‘green pass’ with a QR code as proof of either full vaccination, recent recovery from infection or a negative test within the previous 48 hours.
Employees who go to work without the pass risk a fine of up to 1,500 euros ($1,730) and suspension without pay. Employers could also face fines if they allow staff to work without it.
The largest demonstrations were at the major northeastern port of Trieste, where labor groups had threatened to block operations and around 6,000 protesters, some chanting and carrying flares, gathered outside the gates.
Around 40% of Trieste’s port workers are not vaccinated, said Stefano Puzzer, a local trade union official, a far higher proportion than in the general Italian population.
Regional governor Massimiliano Fedriga told SkyTG24: “The port (of Trieste) is functioning. Obviously there will be some difficulties and fewer people at work, but it’s functioning.”
“The green pass is a bad thing, it is discrimination under the law. Nothing more. It’s not a health regulation, it’s just a political move to create division among people…,” said Fabio Bocin, a 59-year old port worker in Trieste.
In Genoa, Italy’s other main port, around 100 protesters blocked access to trucks, a Reuters witness said.
In Rome, police in riot gear stood by in front of a small rally with people shouting “No green pass.” A protest was also to be held in the capital’s Circus Maximus on Friday afternoon.
Italian government statistics say that 81 percent of the eligible population has been fully vaccinated and more than 85 percent have had the first dose. Italy has also begun booster shots for those with compromised immunity and who are over 80.
The certificate has been required on long distance trains and indoor venues including restaurants, museums and gyms since September 1.
Prime Minister Mario Draghi’s cabinet approved the rule – one of the world’s strictest anti-Covid measures – in mid-September. It is effective until the end of the year.
Some 15% of private and 8% of public sector workers have no green pass, an internal government document seen by Reuters estimates.
The government hoped the move making the health pass mandatory would convince unvaccinated Italians to change their minds, but with over 80% of residents over the age of 12 already fully inoculated and infection rates low, that surge has not materialized.
The right-wing League and Brothers of Italy parties and some unions say that, to address the risk of staff shortages, the validity of Covid tests should be extended from 48 to 72 hours, and they should be free for unvaccinated workers.
But the government has so far resisted those calls. The center-left Democratic Party, which is part of Draghi’s ruling coalition, says that making swabs free would be the equivalent of an amnesty for tax dodgers.
| true | true | true |
A mandate for all workers in Italy to show a government-issued Covid-19 pass came into force on Friday, triggering protests at key ports and fears of disruption.
|
2024-10-12 00:00:00
|
2021-10-15 00:00:00
|
article
|
cnn.com
|
CNN
| null | null |
|
12,276,743 |
http://www.boomcalifornia.com/2016/05/beyond-peak-juice-bar/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,714,550 |
https://www.bngal.com/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
36,993,556 |
https://paglen.studio/2023/05/10/unids/
|
UNIDS
|
Paglenstudio
|
For the last several years, I’ve been undertaking the most technically challenging photography series I’ve ever attempted: a project to photograph objects of unknown origin in orbit around the earth.
There are roughly 350 objects in orbit around the earth whose origins are unknown. These fall roughly into two categories: 1) Objects that the US Air Force tracks on radar and publishes orbital data for; 2) Objects that both amateur astronomers and foreign sources track and observe, but that the US military does not acknowledge, presumably because these unknown objects are classified.
The term “unid” is a term that amateur astronomers created to describe objects that they have observed in orbit, but whose identity they have failed to establish. In the first part of this text, I provide an overview of what we know about these objects, and review some attempts to identify them and their purpose. In the second part, I’ll describe some of the techniques I’ve used in my attempts to photograph them.
**I.**
**What are Unids?**
The short answer is, nobody knows. The longer answer is that for some objects, somebody probably knows something about some of them, but they’re not saying. Or, that also might be wrong and actually nobody knows.
Some background: The US Space Force’s 18th Space Defense Squadron, located at Vandenberg Space Force Base1 on the California coast north of Santa Barbara, is tasked with operating the US’ Space Surveillance Network. This is a global network of powerful radar systems, classified telescopes, space-based surveillance platforms, and other sensor networks. The squadron’s job is to identify and keep tabs on tens of thousands of objects in orbit around the earth. Over the course of their work, they regularly track and observe nearly 350 objects whose origin and identity are unknown. The 18th SDS catalogs these as “well tracked analyst objects.”
The “well-tracked analyst objects” are described by the surveillance squadron as “on-orbit objects that are consistently tracked by the U.S. Space Surveillance Network that cannot be associated with a specific launch. These objects of unknown origin are not entered into the satellite catalog, but are maintained using satellite numbers between 80000 and 89999.” (In the military satellite catalog, satellites are cataloged sequentially, i.e. the rocket that launched Sputnik is catalog entry #1, Sputnik is entry #2, etc.)
So what are these objects? The best answer is that, well, nobody knows. A more fine-grained answer involves some informed speculation. It is unlikely, however possible, that some of these objects are natural phenomena such as wayward asteroids. Undoubtedly, most of these “unknowns” are unidentified debris from satellite launches in places or times where the Air Force’s tracking capabilities are limited. But the story is almost certainly far more complicated.
The US’ National Reconnaissance Office (NRO) has a history of building satellites that attempt to disguise themselves as pieces of debris. This was the case for example with a spacecraft called “USA 53” (deployed from the Space Shuttle in 1990) that faked its own explosion, and again in 1999 when another “stealth” satellite deployed a balloon-like structure as a decoy. The Russian military has engaged in similar tactics, most recently with a spacecraft called Kosmos 2499, which behaved as if it were a debris object but which was almost certainly a satellite designed to attack other satellites. (Kosmos 2499 was mysteriously destroyed in early 2023, creating a small debris field.)
**Analyzing Unids**
The only publicly available analysis of the “well-tracked analyst objects” that I’m aware of comes from a PhD dissertation written by space-security researcher James Pavur at the University of Oxford. Pavur took a novel approach to the analysis of these objects. He created a dataset of known satellites, and another dataset of known debris objects, and then trained a machine learning model on each. His idea was to build a classifier that could distinguish between a “generic satellite” and “generic debris object.” Pavur then used those models to analyze the orbit of Kosmos 2499, a satellite that pretends to be a debris object. His model correctly predicted that Kosmos 2499 was a satellite, not a piece of debris. Pavur then ran his model on the entirety of the “well-tracked analyst objects” data and discovered something remarkable: the model predicted with high confidence that a non-trivial number of unknown objects behaved, in fact, like spacecraft.
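To make the general idea concrete (and to be clear, this is not Pavur's model, just a toy sketch of a satellite-versus-debris classifier trained with scikit-learn on synthetic orbital-element features):

```python
# Toy sketch of the "does this orbit look like a satellite or debris?" idea.
# NOT Pavur's model: the features and training data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def synthetic_orbits(n, is_payload):
    # Crude caricature: payloads cluster near a "useful" inclination with low
    # eccentricity; debris is more scattered. Columns: inclination (deg),
    # eccentricity, mean motion (rev/day).
    if is_payload:
        inc = rng.normal(loc=53.0, scale=3.0, size=n)
        ecc = rng.uniform(0.0, 0.02, size=n)
    else:
        inc = rng.uniform(0.0, 120.0, size=n)
        ecc = rng.uniform(0.0, 0.3, size=n)
    mm = rng.uniform(11.0, 16.0, size=n)
    return np.column_stack([inc, ecc, mm])

X = np.vstack([synthetic_orbits(500, True), synthetic_orbits(500, False)])
y = np.array([1] * 500 + [0] * 500)  # 1 = behaves like a payload

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# An "analyst object" of unknown identity, described by the same features:
unknown = np.array([[53.1, 0.004, 15.1]])
print("P(payload-like):", clf.predict_proba(unknown)[0, 1])
```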
Almost immediately after the publication of his dissertation, Pavur was tapped to work for the Department of Defense and is unable to speak about his current work. However, Pavur did provide me with a copy of the models he used in his analysis and I’m conducting a review of them to see if there’s more to learn about “analyst objects” from his work.
There are, however, a few limitations to Pavur’s approach. Firstly, Pavur’s classifier wasn’t designed to detect station-keeping maneuvers. Operational satellites in low-earth orbit are affected by small amounts of atmospheric drag in the upper atmosphere that slowly bring them back down to earth. To counter this, a satellite has to periodically “boost” itself back into its desired orbit using small thrusters located on the spacecraft. Satellites in higher orbits are affected by the gravitational influence of the moon, and from the uneven nature of Earth’s gravity field.2
**The Plot Thickens…**
In addition to objects in the 18th Space Defense Squadron’s publically available data, there are two additional sources of information about unknown objects. The first is a hybrid Russian civilian/military tracking program called “ISON” (International Scientific Observer Network), and the second is a database of classified objects maintained by a network of amateur satellite observers, unofficially known as the “See-Sat” group. Both of these groups have identified a handful of unknown objects in orbit whose existence is classified by the American military – in other words Top-Secret unknown objects.
So, to recap: There are many hundreds of unknown objects in orbit around the earth, many of which are tracked and acknowledged by the US military. Researchers who’ve analyzed these objects have concluded that a non-trivial number of them display characteristics more consistent with spacecraft than debris objects, although these results require further study. What’s more, there are more than a dozen other objects that are also “unknowns” but whose existence is classified and whose orbits are undisclosed.
**II.**
**Photographing Unids**
Photographing these objects is extremely difficult in every way, but can be done using good data, accurate modeling, and very specific optical equipment.
**Step 1: Get the Data**
The first thing one needs to photograph unids is a good source of data. I use two sources: two-line elements (a file format for describing satellite orbits) for “well-tracked analyst objects” are readily available by creating an account with “The Space Force,” on their portal for satellite information at space-track.org. This database provides a list of unclassified data. To retrieve data about classified unknown objects, the best source is a website maintained by satellite observer Mike McCants, who collates observations from amateur satellite observers and publishes orbital elements based on those observations. Those elements need to be downloaded and filtered for both “unknown” objects and “ISON” objects.
**Step 2: Model the Orbits**
Then I import that data into two different virtual planetarium software environments. (I use two in order to ensure that my predictions are accurate across multiple models and that I haven’t made a mistake). The first software I use is Stellarium (this is a superb piece of free astronomy software). To check my work, I load the same data into a second modeling program called Heavensat.
Using the modeling software, I can make predictions about when and where in the sky I might find a particular object.
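The pass predictions themselves can also be scripted. Here is a minimal sketch using the Skyfield library; the element file name, observing site, and catalog number are placeholders, with the number simply drawn from the 80000-89999 analyst-object range:

```python
# Sketch: predict passes of an unidentified object from a saved TLE file.
# File name, site coordinates, dates, and catalog number are placeholders.
from skyfield.api import load, wgs84

satellites = load.tle_file("analyst_objects.txt")    # TLEs saved from space-track / McCants
by_number = {sat.model.satnum: sat for sat in satellites}
target = by_number[80155]                            # example "analyst object" number

site = wgs84.latlon(34.73, -120.57, elevation_m=120) # placeholder observing site
ts = load.timescale()
t0, t1 = ts.utc(2023, 5, 10), ts.utc(2023, 5, 11)

# Rise / culminate / set events above 30 degrees of altitude.
times, events = target.find_events(site, t0, t1, altitude_degrees=30.0)
for t, event in zip(times, events):
    name = ("rise", "culminate", "set")[event]
    print(t.utc_strftime("%Y-%m-%d %H:%M:%S"), name)
```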
It takes the better part of the day to model these orbits, and to select a series of targets for a given evening. Once I’ve selected the objects I want to image, I write a script for the evening in a software package I use to control the telescope, mount, and camera. The script tells the telescope to point to a particular point in the sky at a very precise moment, then instructs the camera to start making exposures before, during, and after the predicted pass of the unknown object. If I do everything correctly, I am able to capture the light-trail of the object as it passes through the telescope’s field-of-view.
**Step 3: Equipment**
The main difficulty in choosing an appropriate telescope for photographing unids is sourcing a telescope that can collect as much light as quickly as possible. Because unids tend to be both very faint and fast-moving, I use the “fastest” telescope that I can. In my case, that means a Rowe-Ackermann Schmidt Astrograph (RASA) astrograph.
The RASA design is designed above all for speed, but it sacrifices ease and multifunctionality to get there. The design is a variation on a Schmidt-Cassegrain Telescope that replaces the secondary mirror with a camera sensor. The advantage of this is that the telescope can collect far more light much faster than a telescope with a secondary mirror. There are, however, many disadvantages. First, the removal of the secondary mirror means that there is no possibility of including an eyepiece in the design – there is no way to look through the telescope and it can only be used with a specialized camera. Secondly, at f/2 the critical focus zone (known as “depth of field” in ‘normal’ photography) of the telescope is smaller than half the width of a human hair. This translates into extreme technical difficulties in positioning the camera sensor accurately, as slight imperfections in how the camera sensor is placed in its housing during the manufacturing process create optical anomalies that have to be manually compensated for. This process is not fun.
On any given night, I’m aiming for triple-redundancy: for each image I am using three separate telescopes to collect as much light as possible and to mitigate against any mechanical failures (which happen often).
When I’ve successfully photographed the light-trail of a unid, I task the telescopes with collecting additional data from that region in the sky to fill out the photograph. Each exposure ends up being about 10,000 seconds worth of data or about 3 hours, much of which is shot with an infrared filter to highlight the various stelliferous and gaseous regions in the sky that are invisible to our eyes.
*The stormy blast of hell with restless fury drives the spirits on*, c1890.
The night sky looks very different to infrared-sensitive equipment than it does to unaided eyes. Hydrogen, sulfur, and oxygen emissions reveal great cosmic clouds, stellar remnants, and galactic structures that recall Gustave Doré's etchings of the Divine Comedy. Their names refer to ancient myths, stories, and ancestral star-gazers. Many of the stars in the sky have names so ancient that the origins of those names, and the stories they once referred to, have been long forgotten.
I have spent countless days and nights studying the unknown objects, plotting orbits, measuring light curves, and analyzing their movements over time to see how their behavior may have changed over the years. I’ve tried to learn anything and everything I can about their shape, size, and mass, the relative stability of their orbits, and the question of whether they receive energy from any non-natural sources. Some of the numbers are surprising.
But every analytical technique available supplies only tiny variations on a simple fact: the identity of these objects is “unknown.” Given this, I ask myself where my desire to “identify” them comes from. Where does my unconscious desire to place these objects into received categories come from? Why does my subconscious seek the comfort of pre-existing language and concepts in the face of these unknowns?
This post is part of a series of posts supporting works from the exhibition You’ve Just Been Fucked by PSYOPS at Pace Gallery, New York.
You can find more information about this project and other posts in the series here.
**Footnotes**
1 I’m sorry I really have a hard time saying or writing the words “Space Force.” (Back)
2 Gravitational anomalies are areas on Earth where the local gravity field is stronger or weaker than the global average. These variations can be caused by differences in the density of the Earth’s crust, the presence of large mountain ranges or ocean trenches, and variations in the distribution of the Earth’s mass. One of the most well-known gravity anomalies is the “Indian Ocean Geoid Low,” which is a region of low gravity field strength in the Indian Ocean. This anomaly is primarily due to the large mass of the Himalayas to the north and the Earth’s equatorial bulge. Gravitational anomalies affect satellites by subtly altering their inclination over time, and by causing them to drift longitudinally. (Back)
| true | true | true |
For the last several years, I’ve been undertaking the most technically challenging photography series I’ve ever attempted: a project to photograph objects of unknown origin in orbit around the earth. There are roughly 350 objects in orbit around the earth whose origins are unknown. These fall roughly into two categories: 1) Objects that the US…
|
2024-10-12 00:00:00
|
2023-05-10 00:00:00
| null | null |
paglen.studio
|
paglen.studio
| null | null |
1,842,942 |
http://www.esquire.com/blogs/politics/ak-47-video-102110
|
Video: Anatomy of an AK-47
|
Tim Heffernan
|
*It is perhaps the most potent question to echo from the Cold War: Who lost Vietnam? Well, there were certainly many factors, but an important new book, **The Gun** by C. J. Chivers (exclusively excerpted in Esquire's new November issue, and soon on this website), forces us to consider this: For the first time in human history, a poorly trained peasant army humbled a great power with the gun its fighters carried in their hands — the AK-47. Chivers's story is about that gun, its history, and its relationship to the catastrophic failures of the M16 during the 1960s. Above, you can watch the gun in action, and below, we take a moment to ask Chivers some questions about the gun and his work. You can also read more about the book on Chivers's website.*
**TIM HEFFERNAN:** *Would you call the AK-47 a great invention?*
**C. J. CHIVERS:** Without question the AK-47 was a remarkable invention, and not just because it works so well, or because it changed how wars are fought, or because it proved to be one of the most important products of the 20th century. The very circumstances of its creation were fascinating. The rifle is essentially a conceptual knock-off of a German weapon that had been developed by Hitler's Wehrmacht in the 1930s and 1940s, and it came together through not only the climate of paranoia and urgency in Stalin's USSR, but also via the ability of the Soviet intelligence and Red Army to grasp the significance of an enemy's weapon and willingness to replicate it through a large investment of the state's manpower, money and time. It was a characteristically Soviet process, and an example where centralized decision-making and the planned economy actually combined to design and churn out an eminently well-designed product. We spend a lot of time denigrating the centralized economy, for good reason. But it just so happened that what the centralized economy of a police state really wanted, it got. It couldn't make a decent elevator, toilet, refrigerator, or pair of boots. But the guns? Another story altogether.
**TH:** *How, after all of these decades, did you put together the story of the Marines' Hotel Company, and get your hands on the documents from Colt's Firearms, and from inside the military?*
**CJC:** It took years of digging, travel, and patience. In 2003, I found declassified references to the Pentagon's cover-up of jamming M16s in the National Archives. At about the same time, a librarian in Missouri sent me reams of records from the archives of the late Representative Ichord, who led a special congressional sub-committee into the M16's failures in Vietnam. In the box was a copy of the letter that Lieutenant Michael Chervenak, in a state of controlled rage, had written to Congress and to two newspapers after 40 rifles jammed and Hotel Company was outgunned, costing several Marines their lives. It started there. I found Mike [Chervenak] — he was practicing law in Maryland — and began corresponding with him from Russia, where I had moved. Eventually I met him in a steakhouse in Maryland on a trip home, and I brought his letter to show him. He didn't take it when I handed it across the table. I'll never forget what he said: "I don't need to read it. I remember every word." It had been 40 years. I said, "You're still pissed off, aren't you?" He looked like it was the stupidest question he had ever heard. "Chris," he said. "It was unconscionable what they did. We're not talking about issuing bad mess kits here. We're talking about *rifles*." Then came a breakthrough. About a year later, I flew twice from Moscow to the U.K., to spend weekends going through an unsorted collection of small-arms-related records at a defense college in the English countryside. There were 60 big cardboard boxes, it took me four full days to look at everything in them. In one box were records from 1966 through 1968 from the desk of the president of Colt's Firearms, which made the M16s that had been failing. As I read each piece of paper, I almost fell over — among them were letters from Colt's engineers in Vietnam confirming everything Mike had said, and detailing many of the precise problems with the rifles that Colt's and the Pentagon were sending to combat. Even the engineer sent to investigate his allegations agreed with him, in writing. I moved back to the U.S. in 2008 and began to track down more former Marines from Mike's battalion in 1967, including several from the firefight that had enraged Mike. They shared their memories with me, and passed me from vet to vet. Mike had been punished for daring to speak out, and had never been publicly vindicated, though all of these vets stood by him. Simultaneously, I was in a public-records fight with the Army to release more records, which I finally won in 2009. As all of this came together, the circle had closed and the truth could be laid bare, almost 45 years later. All I had to do was find the time to write it.
**TH:** *Is the AK just a tool, turned to good or ill by people? Or has it actually spurred evil (or violence, anyway) by its very nature? *
**CJC:** The automatic Kalashnikov is a tool, an implement designed for ordinary men, without much training or undue complications, to kill other men, and to be used in the conditions in which wars are often fought. But it's only a tool, and while its ready availability in many unstable lands can be seen as kindling violence, this is not simply because of the weapon's qualities themselves. It is because of the quantities of the weapons that have been made, whether anyone besides the minds that organized police states wanted the rifles or not. Many of the Eastern bloc countries that used the weapons were brittle and corrupt, and they lost custody of huge stockpiles of their unused guns. Blaming the rifle itself doesn't quite make sense. Armies and arms manufacturers will always make weapons. The problem here lies with the abundance, not the existence, of the Kalashnikov line. Certainly the Kalashnikov's ease of use and durability make it desirable for all sorts of people up to no good. But rifles are rifles — there are many other choices out there. You see the Kalashnikov almost everywhere there is fighting because there are so many of them.
**TH:** *Is it the signature weapon of the 20th century? The 21st? Will the AK still be killing in 2110? *
**CJC:** The Kalashnikov was the most important firearm of the last 60-plus years, so much so that there really is no second place. It is not going to be unseated from its place any time soon, certainly not in our lives.
**TH:** *What's memorable about being shot at by AKs — what makes it different from, say, being shot at by a sniper? What does an AK bullet sound like when it goes past your ear? When it hits the wall you're crouched behind?*
**CJC:** Actually, in a lot of circumstances, the Kalashnikov is poorly used by people who are not especially good shots, or who are outright bad shots. In these cases, the rifle's weaknesses emerge. As far as accuracy goes, the Kalashnikov is stubbornly mediocre, and the ease with which it can be fired on automatic means that many people fire it on automatic when they would be better served firing a single, aimed shot. These factors combine in a phenomenon many people who have been shot at by Kalashnikovs have come to be grateful for — a burst of bullets cracking by high overhead. There have been many times when we have shaken our heads in relief and gratitude that the nitwits with Kalashnikovs on the other side of a field don't quite know how to use the weapon in their hands. Getting shot at by a sniper is a much different experience, and far more frightening. But either experience is, to borrow your word, memorable. These memories are pretty much all bad.
**TH:** *M16 or AK-47: Which would you bring to a gunfight? *
**CJC:** Remember, I'm not a Marine anymore, and I never carry weapons on the patrols I cover, so I have to answer theoretically. But ask a Marine or Ranger, and you'd probably get an answer like this: It depends on the gunfight. For the sake of argument, let's talk about the rifles only — not the modern rail systems and optics and lights that are available that can make carrying a modern M16 an utterly different experience from carrying a Kalashnikov with iron sights. If we are talking only of rifles, I'd say this: If the ranges are short, the vegetation thick, and the climate damp, and I was still in uniform, I'd almost certainly opt for a Kalashnikov. In the desert? Give me one of the current variants of the M16. The M16 was long ago debugged. Its performance problems are nothing like those of the mid-1960s. And in arid climates, ranges tend to stretch out, and there really is no comparison between Kalashnikovs and M16s at longer ranges. The longer barrel of the M16, its better sights and smoother trigger, and the ballistics of the cartridge it fires, all make it much more likely to hit something at, say, 200 or 250 or 300 yards, and beyond. At these ranges, Kalashnikovs almost always miss, even when people trouble to aim them.
**TH:** *What led you to this book project?*
**CJC:** Everywhere I went on my job covering conflict, the Kalashnikov was the predominant arm. And after writing in the *New York Times* about how records that another reporter and I had found from the Taliban and al Qaeda showed that an introduction to the Kalashnikov was the opening class at the terror and insurgent schools in Afghanistan, a former professor suggested that I consider looking at the weapon at book length. That was almost a decade ago. The suggestion became an obsession. Teasing out how any product goes from its quiet development to near ubiquity in its niche or market would be an interesting project, and the Kalashnikov came with an almost unending list of sub-themes. How did the weapon become an icon and symbol of so many contradictory things? How was it really designed? Who was really behind its development? Why and how exactly did it spread? Where did it fit in a fuller historical context? What were its effects, beyond the effect we covered here in Esquire, of prompting the Pentagon to field a rifle to match it, when that rifle was not yet ready for war? Why are there so many of them? How does it compare to other choices? And, of course, a large bit of the work was about what the rifle actually does to the people its bullets strike — this is unavoidable, even necessary, because in this examination is a view of what this tool was intended to do, and what it really does. And all of these lines of reporting offered a chance to tell rich, character-driven histories, which for a writer is a goldmine. Some days I worried: The Kalashnikov might seem a challenging subject, because it is so widely known. But it helped that much of what people think they know of the Kalashnikov is wrong. In some ways the reporting and writing became an assault against all of the propaganda, and a gentle correction to many of the errors that still inform the popular imagination surrounding the world's most abundant weapon.
| true | true | true |
An inside look at how the most important gun of the 20th century works
|
2024-10-12 00:00:00
|
2010-10-21 00:00:00
|
article
|
esquire.com
|
Esquire
| null | null |
|
24,811,088 |
http://www.claudiobellei.com/2018/09/30/julia-mpi/
|
Parallel programming with Julia using MPI
| null |
Julia has been around since 2012 and after more than six years of development, its 1.0 version has been finally released. This is a major milestone and one that has inspired me to write a new blogpost (after several months of silence). This time we are going to see how to do parallel programming in Julia using the Message Passing Interface (MPI) paradigm, through the open source library Open MPI. We will do this by solving a real physical problem: heat diffusion across a two-dimensional domain.
This is going to be a fairly advanced application of MPI, targeted at someone that has already had some basic exposure to parallel computing. Because of this, I am not going to go step by step but I will rather focus on specific aspects that I feel are of interest (specifically, the use of ghost cells and message passing on a two-dimensional grid). As I have started doing in my recent blogposts, the code discussed here is only partial. It is accompanied by a fully featured solution that you can find on Github and I have named Diffusion.jl.
Parallel computing has entered the “commercial world” over the last few years. It is a standard solution for ETL (Extract-Transform-Load) applications where the problem at hand is embarrassingly parallel: each process runs independently of all the others and no network communication is needed (until potentially a final “reduce” step, where each local solution is gathered into a global solution).
In many scientific applications, there is the need for information to be passed through the network of a cluster. These “non-embarrassingly parallel” problems are often numerical simulations that model problems ranging from astrophysics to weather modelling, biology, quantum systems and many more. In some cases, these simulations are run on tens to even millions of CPUs (Fig. 1) and the memory is distributed - not shared - among the different CPUs. Normally, *the way these CPUs communicate in a supercomputer* is through the Message Passing Interface (MPI) paradigm.
Anyone working in High Performance Computing should be familiar with MPI. It makes it possible to exploit the architecture of a cluster at a very low level. In theory, a researcher could assign to every single CPU its computational load, and decide exactly when and what information should be passed among CPUs and whether this should happen synchronously or asynchronously.
And now, let’s go back to the contents of this blogpost, where we are going to see **how to write the solution of a diffusion-type equation using MPI**. We have already discussed the explicit scheme for a one-dimensional equation of this type. However, in this blogpost we will look at the two-dimensional solution.
The Julia code presented here is essentially a translation of the C/Fortran code explained in this excellent blogpost by Fabien Dournac.
In this blogpost I am not going to present a thorough analysis of scaling speed vs. number of processors. Mainly because I only have two CPUs that I can play with at home (Intel Core i7 processor on my MacBook Pro)… Nonetheless, I can still proudly say that the Julia code presented in this blogpost shows a significant speedup using two CPUs vs. one. Not only this: **it is faster than the Fortran and C equivalent codes!** (more on this later)
These are the topics that we are going to cover in this blogpost:
- Julia: My first impressions
- How to install Open MPI on your machine
- The problem: diffusion across a two-dimensional domain
- Communication among CPUs: the need for ghost cells
- Using MPI
- Visualizing the solution
- Performance
- Conclusions
## 1. First impressions about Julia
I am actually still a newbie with Julia, hence the choice of having a section on my “first impressions”.
The main reason why I got interested in Julia is that it promises to be a general purpose framework with a performance comparable to the likes of C and Fortran, while keeping the flexibility and ease of use of a scripting language like Matlab or Python. In essence, it should be possible to write Data Science/High-Performance-Computing applications in Julia that run on a local machine, on the cloud or on institutional supercomputers.
One aspect I don’t like is the workflow, which seems sub-optimal for someone like me that uses IntelliJ and PyCharm on a daily basis (the IntelliJ Julia plugin is terrible). I have tried the Juno IDE as well, it is probably the best solution at the moment but I still need to get used to it.
One aspect that demonstrates how Julia has still not reached its “maturity” is how varied and outdated the documentation of many packages is. I still haven’t found a way to write a matrix of floating point numbers to disk in a formatted way. Sure, you can write each element of the matrix to disk in a double for loop, but there should be better solutions available. It is simply that information can be hard to find and the documentation is not necessarily exhaustive.
Another aspect that stands out when first using Julia is the choice of using one-based indexing for arrays. While I find this slightly annoying from a practical perspective, it is surely not a deal breaker, also considering that this is not unique to Julia (Matlab and Fortran use one-based indexing, too).
Now, to the good and most important aspect: Julia can indeed be really fast. I was impressed to see how the Julia code that I wrote for this blogpost can perform better than the equivalent Fortran and C code, despite having essentially just translated it into Julia. Have a look at the performance section if you are curious.
## 2. Installing Open MPI
Open MPI is an open source Message Passing Interface library. Other famous libraries include MPICH and MVAPICH. MVAPICH, developed by the Ohio State University, seems to be the most advanced library at the moment as it can also support clusters of GPUs - something particularly useful for Deep Learning applications (there is indeed a close collaboration between NVIDIA and the MVAPICH team).
All these libraries are built on a **common interface: the MPI API**. So, it does not really matter whether you use one or the other library: the code you have written can stay the same.
The MPI.jl project on Github is a wrapper for MPI. Under the hood, it uses the C and Fortran installations of MPI. It works perfectly well, although it lacks some functionality that is available in those other languages.
In order to be able to run MPI in Julia you will need to install Open MPI separately on your machine. If you have a Mac, I found this guide to be very useful. Importantly, you will also need to have gcc (the GNU compiler) installed, as Open MPI requires the Fortran and C compilers. I have installed the 3.1.1 version of Open MPI, as running mpiexec --version on my Terminal confirms.
Once you have Open MPI installed on your machine, you should install cmake. Again, if you have a Mac this is as easy as typing brew install cmake on your Terminal.
At this point you are ready to install the MPI package in Julia. Open up the Julia REPL and type Pkg.add("MPI"). Normally, at this point you should be able to import the package using import MPI. However, I also had to build the package through Pkg.build("MPI") before everything worked.
## 3. The problem: diffusion in a two-dimensional domain
The diffusion equation is an example of a parabolic partial differential equation. It describes phenomena such as heat diffusion or concentration diffusion (Fick’s second law). In two spatial dimensions, the diffusion equation reads

$$\frac{\partial u}{\partial t} = D\left(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}\right)$$
The solution $u(x, y, t)$ represents how the temperature/concentration (depending on whether we are studying heat or concentration diffusion) varies in space and time. Indeed, the variables $x, y$ represent the spatial coordinates, while the time component is represented by the variable $t$. The quantity $D$ is the “diffusion coefficient” and determines how fast heat, for example, is going to diffuse through the physical domain. Similarly to what was discussed (in more detail) in a previous blogpost, the equation above can be discretized using a so-called “explicit scheme” of solution. I am not going to go through the details here (you can find them in said blogpost); it suffices to write down the numerical solution in the following form,
$$\begin{equation}
\frac{u_{i,k}^{j+1} - u_{i,k}^{j}}{\Delta t} =
D\left(\frac{u_{i+1,k}^j-2u_{i,k}^j+u_{i-1,k}^j}{\Delta x^2}+
\frac{u_{i,k+1}^j-2u_{i,k}^j+u_{i,k-1}^j}{\Delta y^2}\right)
\label{eq:diffusion}
\end{equation}$$
where the $i, k$ indices refer to the spatial grid while $j$ represents the time index.
Assuming all the quantities at the time $j$ to be known, the only unknown is $u_{i,k}^{j+1}$ on the left hand side of eq. (\ref{eq:diffusion}). This quantity depends on the values of the solution at the previous time step $j$ in a cross-shaped stencil, as in the figure below (the red dots represent those grid points at the time step $j$ that are needed in order to find the solution $u_{i,k}^{j+1}$).
Equation (\ref{eq:diffusion}) is really all that is needed in order to find the solution across the whole domain at each subsequent time step. It is rather easy to implement a code that does this sequentially, with one process (CPU). However, here we want to discuss a parallel implementation that uses multiple processes.
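As a point of reference, a sequential update could look like the following minimal sketch (my own illustration, not the code from Diffusion.jl); u holds the solution at time step j, u_new receives the solution at time step j+1, and the boundary values are kept fixed:

```julia
# One explicit time step of the discretized diffusion equation above.
function step!(u_new::Matrix{Float64}, u::Matrix{Float64},
               D::Float64, dt::Float64, dx::Float64, dy::Float64)
    nx, ny = size(u)
    for k in 2:ny-1, i in 2:nx-1
        u_new[i, k] = u[i, k] + D * dt * (
            (u[i+1, k] - 2u[i, k] + u[i-1, k]) / dx^2 +
            (u[i, k+1] - 2u[i, k] + u[i, k-1]) / dy^2)
    end
    return u_new
end
```

In the parallel version, every process runs essentially this same loop, but only over its own sub-domain (plus the ghost cells introduced in the next section).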
Each process will be responsible for finding the solution on a portion of the entire spatial domain. Problems like heat diffusion, which are *not* embarrassingly parallel, require exchanging information among the processes. To clarify this point, let’s have a look at Figure 3. It shows how processes #0 and #1 will need to communicate in order to evaluate the solution near the boundary. This is where MPI enters. In the next section, we are going to look at an efficient way of communicating.
## 4. Communication among processes: ghost cells
An important notion in computational fluid dynamics is the one of *ghost cells*. This concept is useful whenever the spatial domain is decomposed into multiple sub-domains, each of which is solved by one process.
In order to understand what ghost cells are, let’s consider again the two neighboring regions depicted in Figure 3. Process #0 is responsible for finding the solution on the left hand side, whereas process #1 finds it on the right hand side of the spatial domain. However, because of the shape of the stencil (Fig. 2), near the boundary both processes will need to communicate between them. Here is the problem: *it is very inefficient to have process #0 and process #1 communicate each time they need a node from the neighboring process*: it would result in an unacceptable communication overhead.
Instead, what is common practice is to surround the “real” sub-domains with extra cells called *ghost cells*, as in Figure 4 (right). These ghost cells represent *copies* of the solution at the boundaries of neighboring sub-domains. At each time step, the *old* boundary of each sub-domain is passed to its neighbors. This allows the *new* solution at the boundary of a sub-domain to be calculated with a significantly reduced communication overhead. The net effect is a speedup in the code.
## 5. Using MPI
There are a lot of tutorials on MPI. Here, I just want to describe those commands - expressed in the language of the MPI.jl wrapper for Julia - that I have been using for the solution of the 2D diffusion problem; a minimal usage sketch follows the list. They are basic commands that are used in virtually every MPI implementation.
## MPI commands
- MPI.Init() - initializes the execution environment
- MPI.COMM_WORLD - represents the communicator, i.e., all processes available through the MPI application (every communication must be linked to a communicator)
- MPI.Comm_rank(MPI.COMM_WORLD) - determines the internal rank (id) of the process
- MPI.Barrier(MPI.COMM_WORLD) - blocks execution until all processes have reached this routine
- MPI.Bcast!(buf, n_buf, rank_root, MPI.COMM_WORLD) - broadcasts the buffer buf with size n_buf from the process with rank rank_root to all other processes in the communicator MPI.COMM_WORLD
- MPI.Waitall!(reqs) - waits for all MPI requests to complete (a request is a handle, in other words a reference, to an asynchronous message transfer)
- MPI.REQUEST_NULL - specifies that a request is not associated with any ongoing communication
- MPI.Gather(buf, rank_root, MPI.COMM_WORLD) - gathers the variable buf from every process onto the root process rank_root
- MPI.Isend(buf, rank_dest, tag, MPI.COMM_WORLD) - the message buf is sent asynchronously from the current process to the rank_dest process, with the message tagged with the tag parameter
- MPI.Irecv!(buf, rank_src, tag, MPI.COMM_WORLD) - receives a message tagged tag from the source process of rank rank_src to the local buffer buf
- MPI.Finalize() - terminates the MPI execution environment
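To see how these commands fit together, here is a minimal, self-contained sketch (my own illustration, not part of Diffusion.jl), which can be launched with mpiexec -n 2 julia hello.jl:

```julia
# hello.jl -- smallest possible MPI.jl program
import MPI

MPI.Init()                       # initialize the execution environment
comm = MPI.COMM_WORLD            # communicator containing all processes
rank = MPI.Comm_rank(comm)       # rank (id) of this process
println("Hello from process $rank of $(MPI.Comm_size(comm))")
MPI.Barrier(comm)                # wait for all processes before shutting down
MPI.Finalize()                   # terminate the MPI execution environment
```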
### 5.1 Finding the neighbors of a process
For our problem, we are going to decompose our two-dimensional domain into many rectangular sub-domains, similar to the Figure below
Note that the “x” and “y” axes are flipped with respect to the conventional usage, in order to associate the x-axis with the rows and the y-axis with the columns of the matrix of the solution.
In order to communicate between the various processes, *each process needs to know what its neighbors are*. There is a very convenient MPI command that does this automatically and is called MPI_Cart_create. Unfortunately, the Julia MPI wrapper does not include this “advanced” command (and it does not look trivial to add it), so instead I decided to build a function that accomplishes the same task. In order to make it more compact, I have made extensive use of the ternary operator. You can find this function below,
## Find neighbors function
`function neighbors(my_id::Int, nproc::Int, nx_domains::Int, ny_domains::Int)`
The inputs of this function are my_id, which is the rank (or id) of the process, the number of processes nproc, the number of divisions in the $x$ direction nx_domains and the number of divisions in the $y$ direction ny_domains.
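For concreteness, here is a minimal sketch of such a function, written to be consistent with the example outputs shown below (the full implementation is in Diffusion.jl on Github; the column-major layout of ranks over the nx_domains x ny_domains grid is an assumption inferred from those outputs):

```julia
function neighbors(my_id::Int, nproc::Int, nx_domains::Int, ny_domains::Int)
    # nproc is unused here but kept to match the original signature
    ix = my_id % nx_domains      # position of the sub-domain along x
    iy = my_id ÷ nx_domains      # position of the sub-domain along y
    # -1 marks a missing neighbor at the edge of the global domain
    return Dict(
        "N" => ix < nx_domains - 1 ? my_id + 1 : -1,
        "S" => ix > 0 ? my_id - 1 : -1,
        "E" => iy < ny_domains - 1 ? my_id + nx_domains : -1,
        "W" => iy > 0 ? my_id - nx_domains : -1,
    )
end
```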
Let’s now test this function. For example, looking again at Fig. 5, we can test the output for process of rank 4 and for process of rank 11. This is what we find on a Julia REPL:

```julia
julia> neighbors(4, 12, 3, 4)
Dict{String,Int64} with 4 entries:
  "S" => 3
  "W" => 1
  "N" => 5
  "E" => 7
```
and
```julia
julia> neighbors(11, 12, 3, 4)
Dict{String,Int64} with 4 entries:
  "S" => 10
  "W" => 8
  "N" => -1
  "E" => -1
```
As you can see, I am using cardinal directions “N”, “S”, “E”, “W” to denote the location of a neighbor. For example, process #4 has process #3 as a neighbor located South of its position. You can check that the above results are all correct, given that “-1” in the second example means that no neighbors have been found on the “North” and “East” sides of process #11.
### 5.2 Message passing
As we have seen earlier, at each iteration every process *sends* its boundaries to the neighboring processes. At the same time, every process *receives* data from its neighbors. These data are stored as “ghost cells” by each process and are used to compute the solution near the boundary of each sub-domain.
There is a very useful command in MPI called MPI_Sendrecv that allows sending and receiving messages at the same time between two processes. Unfortunately MPI.jl does not provide this functionality; however, it is still possible to achieve the same result by using the non-blocking MPI.Isend and MPI.Irecv! functionalities separately.
This is what is done in the following updateBound! function, which updates the ghost cells at each iteration. The inputs of this function are the global 2D solution u, which includes the ghost cells, as well as all the information related to the specific process that is running the function (what its rank is, what the coordinates of its sub-domain are, and what its neighbors are). The function first sends its boundaries to its neighbors and then it receives their boundaries. The *receive* part is finalized through the MPI.Waitall! command, which ensures that all the expected messages have been received before updating the ghost cells for the specific sub-domain of interest.
## Updating the ghost cells function
`function updateBound!(u::Array{Float64,2}, size_total_x, size_total_y, neighbors, comm, …)`
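Only the signature is reproduced above; as a rough illustration of what the body does, here is a simplified sketch with a reduced argument list (my own reconstruction under the assumptions stated in the comments, not the exact code from Diffusion.jl):

```julia
# Simplified sketch: u carries one ghost layer on each side, neighbors is the
# Dict returned by neighbors() (-1 meaning "no neighbor"), comm is MPI.COMM_WORLD.
function updateBound!(u::Array{Float64,2}, neighbors::Dict{String,Int}, comm)
    nx, ny = size(u, 1) - 2, size(u, 2) - 2          # interior sizes
    reqs = MPI.Request[]
    recv = Dict("N" => zeros(ny), "S" => zeros(ny),  # temporary receive buffers
                "E" => zeros(nx), "W" => zeros(nx))

    # send the outermost *real* rows/columns to each existing neighbor
    neighbors["N"] >= 0 && push!(reqs, MPI.Isend(u[end-1, 2:end-1], neighbors["N"], 0, comm))
    neighbors["S"] >= 0 && push!(reqs, MPI.Isend(u[2, 2:end-1],     neighbors["S"], 0, comm))
    neighbors["E"] >= 0 && push!(reqs, MPI.Isend(u[2:end-1, end-1], neighbors["E"], 0, comm))
    neighbors["W"] >= 0 && push!(reqs, MPI.Isend(u[2:end-1, 2],     neighbors["W"], 0, comm))

    # receive the neighbors' boundaries into the temporary buffers
    neighbors["N"] >= 0 && push!(reqs, MPI.Irecv!(recv["N"], neighbors["N"], 0, comm))
    neighbors["S"] >= 0 && push!(reqs, MPI.Irecv!(recv["S"], neighbors["S"], 0, comm))
    neighbors["E"] >= 0 && push!(reqs, MPI.Irecv!(recv["E"], neighbors["E"], 0, comm))
    neighbors["W"] >= 0 && push!(reqs, MPI.Irecv!(recv["W"], neighbors["W"], 0, comm))

    MPI.Waitall!(reqs)                               # wait for all messages

    # copy the received boundaries into the ghost cells
    neighbors["N"] >= 0 && (u[end, 2:end-1] = recv["N"])
    neighbors["S"] >= 0 && (u[1, 2:end-1]   = recv["S"])
    neighbors["E"] >= 0 && (u[2:end-1, end] = recv["E"])
    neighbors["W"] >= 0 && (u[2:end-1, 1]   = recv["W"])
    return u
end
```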
## 6. Visualizing the solution
The domain is initialized with a constant value $u=+10$ around the boundary, which can be interpreted as having a source of constant temperature at the border. The initial condition is $u=-10$ in the interior of the domain (Fig. 6, left). As time progresses, the value $u=+10$ at the boundary diffuses towards the center of the domain. For example, at time step j=15203, the solution looks as in Fig. 6, right.
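In code, this initialization amounts to something along these lines (a sketch; the grid size is illustrative):

```julia
nx, ny = 128, 128                        # illustrative grid size
u = fill(-10.0, nx, ny)                  # interior starts at u = -10
u[1, :]  .= 10.0;  u[end, :] .= 10.0     # top and bottom boundary rows at u = +10
u[:, 1]  .= 10.0;  u[:, end] .= 10.0     # left and right boundary columns at u = +10
```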
As the time $t$ increases, the solution becomes more and more homogeneous, until theoretically for $t \rightarrow +\infty$ it becomes $u=+10$ across the entire domain.
## 7. Performance
I was very impressed when I tested the performance of the Julia implementation against Fortran and C: I found the Julia implementation to be the fastest one!
Before jumping into this comparison, let’s have a look at the MPI performance of the Julia code itself. Figure 7 shows the ratio of the runtime when running with 1 vs. 2 processes (CPUs). Ideally, you would like this number to be close to 2, i.e., running with 2 CPUs should be twice as fast as running with one CPU. What is observed instead is that for small problem sizes (grid of 128x128 cells), the compilation time and communication overhead have a net negative effect on the overall runtime: the speed-up is smaller than one. It is only for larger problem sizes that the benefit of using multiple processes starts to be apparent.
And now, for the surprise plot: Fig. 8 demonstrates that the Julia implementation is faster than both Fortran and C, for both 256x256 and 512x512 problem sizes (the only ones I tested). Here I am only measuring the time needed in order to complete the main iteration loop. I believe this is a fair comparison, as for long running simulations this is going to represent the biggest contribution to the total runtime.
## Conclusions
Before starting this blogpost I was fairly skeptical of Julia being able to compete against the speed of Fortran and C for scientific applications. The main reason was that I had previously translated an academic code of about 2000 lines from Fortran into Julia 0.6 and I observed a performance reduction of about x3.
But this time… I am very impressed. I have effectively just translated an existing MPI implementation written in Fortran and C, into Julia 1.0. The results shown in Fig. 8 speak for themselves: Julia appears to be the fastest by far. Note that I have not factored in the long compilation time taken by the Julia compiler, as for “real” applications that take hours to complete this would represent a negligible factor.
I should also add that my tests are surely not as exhaustive as they *should* be for a thorough comparison. In fact, I would be curious to see how the code performs with more than just 2 CPUs (I am limited by my home personal laptop) and with different hardware (feel free to check out Diffusion.jl!).
At any rate, this exercise has convinced me that it is worth investing more time in learning and using Julia for Data Science and scientific applications. Off to the next one!
## References
Fabien Dournac, Version MPI du code de résolution numérique de l’équation de chaleur 2D
⁂
| true | true | true |
Julia has been around since 2012 and after more than six years of development, its 1.0 version has been finally released. This is a major milestone and one that has inspired me to write a new blogpost
|
2024-10-12 00:00:00
|
2018-09-30 00:00:00
|
http://www.claudiobellei.com/2018/09/30/julia-mpi/mondrian13.png
|
article
|
claudiobellei.com
|
Marginalia
| null | null |
8,393,966 |
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/webfont-optimization
|
Fast load times | web.dev
| null |
### Fast load times
### Learn Performance
Web performance is a crucial aspect of web development that focuses on the speed at which pages load, as well as how responsive they are to user input. When you optimize your website for performance, you're giving users a better experience.
The initial release of this course focuses on web performance fundamentals that beginners should find informative. Each module aims to demonstrate key performance concepts.
## Overview
When building a modern web experience, it's important to measure, optimize, and monitor if you're to get fast and stay fast. Performance plays a significant role in the success of any online venture, as high performing sites engage and retain users better than poorly performing ones.
Sites should focus on optimizing for user-centric happiness metrics. Tools like Lighthouse (baked into web.dev!) highlight these metrics and help you take the right steps toward improving your performance. To stay fast, set and enforce performance budgets to help your team work within the constraints needed to continue loading fast and keeping users happy after your site has launched.
| true | true | true | null |
2024-10-12 00:00:00
| null |
website
|
web.dev
|
web.dev
| null | null |
|
39,567,012 |
https://thecountersignal.com/secret-documents-reveal-trudeau-government-virologists-had-clandestine-relationship-with-chinese-agents/
|
Secret documents reveal Trudeau government virologists had “clandestine relationship” with Chinese agents
|
Keean Bexte
|
Documents authored by Canada’s top intelligence service reveal the long awaited explanation behind the abrupt departure of two virologists from Canada’s top biolaboratory.
The documents, which have been viewed by The Counter Signal and published here for our readers, expose the Trudeau government for hiding the true reason for the departure of Dr. Xiangguo Qiu from Canada's Public Health Agency.
Despite the Liberals citing the departure as a private "personnel issue," the Canadian Security Intelligence Service (CSIS) accuses Dr. Qiu of a long history of clandestine actions that put Canadians' health and security at risk, along with directly assisting military research in China.
Alongside selling deadly pathogens to Chinese authorities at the Wuhan Institute of Virology for just $75, Qiu was also found to have hidden a Chinese bank account from CSIS.
“Further to our security assessment […], the Service assesses that Ms. Qiu developed deep, cooperative relationships with a variety of People’s Republic of China (PRC) institutions and has intentionally transferred scientific knowledge and materials to China in order to benefit the PRC Government, and herself, without regard for the implications to her employer or to Canada’s interests.”
“It is clear that Ms. Qiu […] made efforts to conceal her projects with PRC institutions. The Service further assesses that because of her extensive knowledge of the harmful effects of dangerous pathogens on human health, Ms. Qiu should have been aware of the possibility that her efforts to engage clandestinely with the PRC in these research areas could harm Canadian interests or international security.”
“Ms. Qiu repeatedly lied in her security screening interviews about the extent of her work with institutions of the PRC Government and refused to admit to any involvement in various PRC programs, even when documents [REDACTED] were put before her.”
“The Service also assesses that Ms. Qiu was reckless in her dealings with various PRC entities, particularly in her lack of respect for proper scientific protocols regarding the transfer of pathogens and in working with institutions whose goals have potentially lethal military applications that are manifestly not in the interests of Canada or its citizens.”
Qiu was shown to have given Chinese agents direct access to Canada's National Microbiology Laboratory, a Biosafety Level 4 facility that houses Canada's most dangerous and tightly secured pathogens. These pathogens can be used in weaponry.
“Ms. Qiu also gave access to the [National Microbiology Laboratory] to at least two employees of a PRC institution whose work is not aligned with Canadian interests,” they stated.
**Secretly working for Wuhan Lab**
CSIS also found that Qiu was actively working with the Wuhan Institute of Virology on a project that CSIS redacted and called "Project 1." This project started on January 1, 2019, just three months before she sent a shipment of materials to the Wuhan lab, and it involved the study of mRNA vaccines.
Project 2 was cited by CSIS as being a “cross-species infection” program that could have been used for Gain-of-Function research into bat viruses.
CSIS reported that “at least five virus strains from that March 31, 2019 BSL-4 pathogen shipment from [Winnipeg] to [Wuhan] were referenced in the Wuhan Institute of Virology Project 1.”
**Reaction from the Conservative Party**
Opposition Leader Pierre Poilievre issued a scathing statement just prior to the publication of this story.
“Under Justin Trudeau’s watch, the PRC and its entities, including the People’s Liberation Army, were allowed to infiltrate Canada’s top level lab. They were able to transfer sensitive intellectual property and dangerous pathogens to the PRC.
“This is a massive national security failure by Justin Trudeau and his Liberal government, which he fought tooth and nail to cover up, including defying four parliamentary orders and taking the House of Commons Speaker to court. He cannot be trusted to keep our people and our country safe.
The Trudeau government did indeed sue the Speaker of the House of Commons to prevent the release of this information.
These allegations have not been tested in court.
*This story is developing*
| true | true | true |
Documents authored by Canada’s top intelligence service reveal the long awaited explanation behind the abrupt departure of two virologists from Canada’s top biolaboratory.
|
2024-10-12 00:00:00
|
2024-02-29 00:00:00
|
article
|
thecountersignal.com
|
The Counter Signal
| null | null |
|
5,658,894 |
http://www.npr.org/blogs/thetwo-way/2013/05/04/181104605/world-war-ii-code-is-broken-decades-after-pow-used-it
|
World War II Code Is Broken, Decades After POW Used It
|
Bill Chappell
|
# World War II Code Is Broken, Decades After POW Used It
It's been 70 years since the letters of John Pryor were understood in their full meaning. That's because as a British prisoner of war in Nazi Germany, Pryor's letters home to his family also included intricate codes that were recently deciphered for the first time since the 1940s.
Pryor's letters served their purpose in World War II, as Britain's MI9 agents decoded the messages hidden within them — requests for supplies, notes about German activities — before sending them along to Pryor's family in Cornwall.
"There were two types of information buried in these letters," Pryor's son, Stephen, tells *Weekend Edition Saturday's* Scott Simon. "There is military intelligence going back about munitions dumps, about submarines that have been sunk, and information requests for British Military Intelligence in London to send maps and German currency and German ID, to help them with their escape plans."
After the war, Pryor lived a long life; he died in 2010 at age 91. But he also forgot the intricate code he used to communicate after being taken prisoner at Dunkirk. The letters came under new scrutiny recently after Stephen Pryor, the chancellor of Plymouth University, mentioned them to a military intelligence expert at the school. That led them to team up with a historian and a mathematician. Eventually, they cracked the code.
As an example, Pryor reads a segment of a letter: "I am pleased that I've got the two letters telling me of my cousin's latest event; how happy he must undoubtedly be."
The passage contains coded information about a submarine, the HMS Undine, Pryor says. And the code is far from simple.
"You take the first letter of every word in groups of three," Pryor says. "And then it goes into a three-dimensional matrix, which you have to remember in order to decode and get the sequence of letters to produce the name of the vessel."
The letters passed through German censors, and then through the hands of British agents, before finally reaching John Pryor's family. In cases where POWs sent information in code, intelligence officials "informed the relatives that some letters would read a little strangely," Stephen Pryor says.
The prisoners also used subtle cues, such as including certain words or underlining their signature, to signal to British intelligence that a letter contained coded information.
Stephen Pryor says that after the war, his father mostly kept quiet about the code, and about his wartime experience.
"But I can see now that he was among tens of thousands of other young men who gave up their youth in captivity," he says. "He and his peers took incredible risks, and that has only made me admire him, and all the other men, for their resilience and ingenuity."
You can read a longer passage from one of John Pryor's letters in *The Daily Mail*, along with its decoded meaning.
| true | true | true |
It's been 70 years since the letters of John Pryor were understood in their full meaning. That's because as a British prisoner of war in Nazi Germany, Pryor's letters home to his family also included intricate codes that were recently deciphered by codebreakers for the first time since the 1940s.
|
2024-10-12 00:00:00
|
2013-05-04 00:00:00
|
article
|
npr.org
|
NPR
| null | null |
|
8,017,277 |
http://www.richardssoftware.net/2014/07/one-year-later.html
|
One Year Later…
|
Eric Richards
|
# One Year Later…
Tuesday was the anniversary of my first real post on this blog. For the most part, I’ve tried to keep my content here on the technical side of things, but, what the hell, this is a good time to reflect on a year of blogging – what went well, what went poorly, and where I’m going from here.
### What I Meant to Accomplish (And What I actually Accomplished…)
#### Content
I restarted this blog about a year ago to document my attempt at learning DirectX 11, using Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0 . In my day job, I mostly code ASP.NET and Winforms applications using C#, so I decided to convert the C++ examples from Mr.Luna’s book into C#, using SlimDX as my managed DirectX wrapper. SlimDX appeared to me to be slightly more mature than its main competitor, SharpDX, as its documentation was a little more complete, and it had been out in the wild a little bit longer, so there was a bit more third-party information (StackOverflow, other blogs, GameDev.net forum postings, etc.) on it. I suppose I could have also gone with XNA, although Microsoft appears to have abandoned any new development on it (Last release 9/16/2010…), and I felt like SlimDX’s simple wrapper around the DirectX library would be easier to translate than shoehorning into the XNA model, not to mention wrassling with the XNA Content Pipeline.
My initial goal was to work through the book, converting each of the examples presented and then blogging about the process. Except for the chapters on Compute Shaders and Quaternions (which I have yet to tackle, mostly because the examples are not terribly interesting), I completed that goal by the middle of November of last year. From there, I started incorporating elements from Carl Granberg’s Programming an RTS Game with Direct3D into the terrain rendering code that Luna’s book presented, as well as dabbling in integrating Direct2D and SpriteTextRenderer to handle 2D drawing.
After that, my intention was to start working my way through Ian Millington’s book, Game Physics Engine Development. This is about where I ran out of steam. Between the hassle of trying to reconcile the code examples from this book, which were based on OpenGL and somewhat less self-contained than what I had been working on previously, various issues in my personal and work life, and the general malaise of an especially cold, dark winter here in New England, my impetus to work on my side projects faded away spectacularly. If any of you followed this regularly, I’m sure you’ve noticed that it’s been almost four months since I’ve posted anything new, and before that there was another dry spell of more than a month.
With the arrival of summer and a move that reduces both the strain on my finances and the number of hours per day I spend driving back and forth to work considerably, I’ve found that my mood has improved by leaps and bounds, and I have the extra energy to expend on coding outside of work again, finally. Right now, I’ve got a number of new things I’m working through, and a number of ideas in the pipeline that should be making their way up here in the near future.
#### Sharing
In addition to learning things myself, another of my goals was to share what I learned with anyone else out there interested in graphics and game programming. Putting this content up here was the first step, but the internet is littered with interesting information buried and inaccessible because the GoogleBots either can’t find it or deem it unimportant. I don’t pretend to know anything about SEO, and I’m not sure its something I really want to get involved in – everything I’ve read seems to indicate that you either need to get lucky and go viral, or else spend a bunch of time or money doing vaguely unethical things, like spamming links or hiring a shady outfit in the Ukraine or China to do it for you.
So, my limited efforts at promotion concentrated on GameDev.net, Facebook, Twitter, and HackerNews.
- GameDev.net was probably the most effective. The main source of views came from cross-posting each post I made here on my developer journal there (http://www.gamedev.net/blog/1703-richards-software-ramblings/). A couple of these posts got picked up by the admins and made the front page, under the Featured Developer Journals section, which resulted in a pretty big boost in views. I also added links to here on my forum signature, which I don’t believe amounted to much. To some extent, I also trolled the Beginner and DirectX forums, suggesting that anybody that was having a problem I had covered check out the relevant post. I hope that this was somewhat helpful, and not just spammy…
- Facebook was probably not worth bothering with… Generally my friends on Facebook are split between people I knew in high school, family members, my fraternity brothers from college, and other Dartmouth people who trend towards being law students, med students, teachers, or I-bankers. It’s not exactly a fertile demographic for tutorials on 3D graphics programming. Now, if I was writing Top-X lists of Marvel characters, I might do better there, but that’s the nature of Facebook…
- Twitter is also kind of a non-starter. Given that I have a grand total of 11 followers, and I don’t really believe in the whole idea of Twitter, that’s probably not surprising.
- HackerNews was very hit or miss. Most of the time I would post a link, and it would languish on the back pages forever. Once in a while, however, something would get some bizarre traction with one of the Hacker News aggregators (usually Feedly), and I’d see a big spike in views for a post. I have no idea what the rhyme or reason for this is; my best guess is that I happened to post at a particularly dead time, and my link stayed near the top of the newest links page for longer than normal.
However, the vast majority of people who made it to my site came from Google searches. I’m not sure how, but my site has come to rank pretty highly for certain Google searches. Some examples, as of today:
- **DirectX 11 Tutorials** – #10, on the first page of results, is my DirectX Tutorials landing page.
- **SlimDX Assimp** – #1 is my post on loading models using Assimp.Net, and #2 is my GameDev.net journal that links to my post on Skinned Models with Assimp.
- **slimdx frustum culling** – #3, linking to my post on frustum culling.
- **SlimDX A* Pathfinding** – #1, #2 and #4
- **SlimDX Terrain** – #1 and #2
### Some Charts…
Oddly enough, the biggest growth in traffic came after I ran out of steam and stopped posting. I wonder if this was a missed opportunity to continue growing.
Kind of an interesting split in the most popular pages on the site. Mostly, this is my more interesting and more advanced content, although the two most basic examples on the site are also represented. Also, my tutorial index page is doing pretty well.
#### Finances
Making money isn’t really my goal with this site, which is a good thing, since it certainly has not been a mint. Fortunately, maintaining the site really doesn’t cost me anything: I use Blogger for hosting, which is free, and my GoDaddy registration only costs $10 for the year.
Hopefully, the banner ads I’ve got on the site are not that obnoxious. Probably most of you who would see this use AdBlock anyway. They have more than doubled my monetary investment in the site over the past year – as of today, I’ve earned just over $25 from AdSense. At this rate, I’ll hit the minimum payout threshold in another three years, haha.
If I were to take a wild guess at the number of hours I’ve spent coding and writing up these posts over the last year, and then calculating an hourly rate based on my AdSense revenue, I’m probably earning about a nickel an hour… Even mowing grass or stacking firewood would be orders of magnitude more lucrative, but whatever – I’ve arguably learned more about programming and computer science in a year of porting C++ to C# and fighting with shaders than I did during my courses for my Comp Sci minor. Even better, kicking around the dusty corners of the .NET library to match up with C++ constructs has broadened my knowledge of what is available, and spending so much time in Visual Studio coding and debugging has taught me all sorts of tricks that I can use in the day job too.
### What’s Next?
In a perfect world, I’d continue to churn out high-quality, interesting content on a regular basis, and this site would become a go-to resource for SlimDX and general C# game programming, similar to the LazyFoo or RasterTek tutorial series. I’ll settle for just getting back to a more consistent posting schedule, though. There are so many interesting topics to consider, and both coding and then explaining what I’ve done is the best way I have found so far to cement my understanding of algorithms.
As I mentioned, I have a bunch of stuff in the pipeline that I’m hoping to finish and write up in the near future, so hopefully there will be some new content here shortly.
I’ve been toying with the idea of writing a book, since SlimDX does not appear to be very well covered in print (At present, Amazon only lists one title).
Anyway, thanks for reading over the past year, and I hope that this site has been useful. Here’s to an even better second year of http://www.richardssoftware.net/!
| true | true | true |
SlimDX and DirectX11 tutorials
|
2024-10-12 00:00:00
|
2014-07-10 00:00:00
| null | null | null |
RichardsSoftware.net
| null | null |
34,361,405 |
https://www.nature.com/articles/s41467-019-09461-x
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
860,124 |
http://code.google.com/apis/ajax/playground/?exp=libraries#geo_map
|
Google APIs Explorer | Google for Developers
| null |
The Google APIs Explorer is a tool available on most REST API reference documentation pages that lets you try Google API methods without writing code. The APIs Explorer acts on real data, so use caution when trying methods that create, modify, or delete data. For more details, read the APIs Explorer documentation.
## How to start exploring
- Click the name of the API you want to explore in the list below. This opens the API reference documentation.
- In the documentation, on the left, click the method you want to try.
- On the right, enter the details of your request in the panel labeled "Try this method." Or, click Full screen for more options.
- Click **Execute** to send your request to the API and see the API's response.
| true | true | true |
The Google APIs Explorer is is a tool that helps you explore various Google APIs interactively.
|
2024-10-12 00:00:00
| null |
website
|
google.com
|
Google for Developers
| null | null |
|
27,010,037 |
https://en.wikipedia.org/wiki/Galaxy_filament
|
Galaxy filament - Wikipedia
| null |
# Galaxy filament
In cosmology, **galaxy filaments** are the largest known structures in the universe, consisting of walls of galactic superclusters. These massive, thread-like formations can commonly reach 50/h to 80/h megaparsecs (160 to 260 megalight-years)—with the largest found to date being the Hercules-Corona Borealis Great Wall at around 3 gigaparsecs (9.8 Gly) in length—and form the boundaries between voids.[1] Due to the accelerating expansion of the universe, the individual clusters of gravitationally bound galaxies that make up galaxy filaments are moving away from each other at an accelerated rate; in the far future they will dissolve.[2]
Galaxy filaments form the cosmic web and define the overall structure of the observable universe.[3][4][5]
## Discovery
Discovery of structures larger than superclusters began in the late 1980s. In 1987, astronomer R. Brent Tully of the University of Hawaii's Institute of Astronomy identified what he called the Pisces–Cetus Supercluster Complex. The CfA2 Great Wall was discovered in 1989,[6] followed by the Sloan Great Wall in 2003.[7]
In January 2013, researchers led by Roger Clowes of the University of Central Lancashire announced the discovery of a large quasar group, the Huge-LQG, which dwarfs previously discovered galaxy filaments in size.[8] In November 2013, using gamma-ray bursts as reference points, astronomers discovered the Hercules–Corona Borealis Great Wall, an extremely large filament measuring more than 10 billion light-years across.[9][10][11]
## Filaments
The filament subtype of filaments has roughly similar major and minor axes in cross-section, along the lengthwise axis.
Filament | Date | Mean distance | Dimension | Notes |
---|---|---|---|---|
Coma Filament | | | | The Coma Supercluster lies within the Coma Filament.[12] It forms part of the CfA2 Great Wall.[13] |
Perseus–Pegasus Filament | 1985 | | | Connected to the Pisces–Cetus Supercluster, with the Perseus–Pisces Supercluster being a member of the filament.[14] |
Ursa Major Filament | | | | Connected to the CfA Homunculus, a portion of the filament forms a portion of the "leg" of the Homunculus.[15] |
Lynx–Ursa Major Filament (LUM Filament) | 1999 | from 2000 km/s to 8000 km/s in redshift space | | Connected to and separate from the Lynx–Ursa Major Supercluster.[15] |
z=2.38 filament around protocluster ClG J2143-4423 | 2004 | z=2.38 | 110 Mpc | A filament the length of the Great Wall was discovered in 2004. As of 2008, it was still the largest structure beyond redshift 2.[16][17][18][19] |
- A short filament was proposed by Adi Zitrin and Noah Brosch—detected by identifying an alignment of star-forming galaxies—in the neighborhood of the Milky Way and the Local Group.[20] The proposal of this filament, and of a similar but shorter filament, were the result of a study by McQuinn *et al.* (2014) based on distance measurements using the TRGB method.[21]
### Galaxy walls
The **galaxy wall** subtype of filaments has a significantly greater major axis than minor axis in cross-section, along the lengthwise axis.
Wall | Date | Mean distance | Dimension | Notes |
---|---|---|---|---|
CfA2 Great Wall (Coma Wall, Great Wall, Northern Great Wall, Great Northern Wall, CfA Great Wall) | 1989 | z=0.03058 | 251 Mpc long, 750 Mly long, 250 Mly wide, 20 Mly thick | This was the first super-large large-scale structure or pseudo-structure in the universe to be discovered. The CfA Homunculus lies at the heart of the Great Wall, and the Coma Supercluster forms most of the homunculus structure. The Coma Cluster lies at the core.[22][23] |
Sloan Great Wall (SDSS Great Wall) | 2003 | z=0.07804 | 433 Mpc long | This was the largest known galaxy filament to be discovered,[22] until it was eclipsed by the Hercules–Corona Borealis Great Wall found ten years later. |
Sculptor Wall (Southern Great Wall, Great Southern Wall, Southern Wall) | | | 8000 km/s long, 5000 km/s wide, 1000 km/s deep (in redshift space dimensions) | The Sculptor Wall is "parallel" to the Fornax Wall and "perpendicular" to the Grus Wall.[24][25] |
Grus Wall | | | | The Grus Wall is "perpendicular" to the Fornax and Sculptor Walls.[25] |
Fornax Wall | | | | The Fornax Cluster is part of this wall. The wall is "parallel" to the Sculptor Wall and "perpendicular" to the Grus Wall.[24][25] |
Hercules–Corona Borealis Great Wall | 2013 | z≈2[10] | 3 Gpc long,[10] 150 000 km/s deep[10] (in redshift space) | The largest known structure in the universe.[9][10][11] This is also the first time since 1991 that a galaxy filament/great wall held the record as the largest known structure in the universe. |
- A "Centaurus Great Wall" (or "Fornax Great Wall" or "Virgo Great Wall") has been proposed, which would include the Fornax Wall as a portion of it (visually created by the Zone of Avoidance) along with the Centaurus Supercluster and the Virgo Supercluster, also known as the Local Supercluster, within which the Milky Way galaxy is located (implying this to be the Local Great Wall).
[24][25] - A wall was proposed to be the physical embodiment of the Great Attractor, with the Norma Cluster as part of it. It is sometimes referred to as the Great Attractor Wall or Norma Wall.
[26]This suggestion was superseded by the proposal of a supercluster, Laniakea, that would encompass the Great Attractor, Virgo Supercluster, Hydra–Centaurus Superclusters.[27] - A wall was proposed in 2000 to lie at z=1.47 in the vicinity of radio galaxy B3 0003+387.
[28] - A wall was proposed in 2000 to lie at z=0.559 in the northern Hubble Deep Field (HDF North).
[29][30]
#### Map of nearest galaxy walls
### Large Quasar Groups
Large quasar groups (LQGs) are some of the largest structures known.[31] They are theorized to be protohyperclusters/proto-supercluster-complexes/galaxy filament precursors.[32]
LQG | Date | Mean distance | Dimension | Notes |
---|---|---|---|---|
Clowes–Campusano LQG (U1.28, CCLQG) | 1991 | z=1.28 | | It was the largest known structure in the universe from 1991 to 2011, until U1.11's discovery. |
U1.11 | 2011 | z=1.11 | | Was the largest known structure in the universe for a few months, until Huge-LQG's discovery. |
Huge-LQG | 2012 | z=1.27 | | It was the largest structure known in the universe,[31][32] until the discovery of the Hercules–Corona Borealis Great Wall found one year later.[10] |
### Supercluster complex
Pisces–Cetus Supercluster Complex
## Maps of large-scale distribution
- The universe within 1 billion light-years (307 Mpc) of Earth, showing local superclusters forming filaments and voids
- Map of nearest walls, voids and superclusters
- 2dF survey map, containing the SDSS Great Wall
- 2MASS XSC infrared sky map
## References
[edit]**^**Bharadwaj, Somnath; Bhavsar, Suketu; Sheth, Jatush V (2004). "The Size of the Longest Filaments in the Universe".*Astrophys J*.**606**(1): 25–31. arXiv:astro-ph/0311342. Bibcode:2004ApJ...606...25B. doi:10.1086/382140. S2CID 10473973.**^**Siegel, Ethan. "Our Home Supercluster, Laniakea, Is Dissolving Before Our Eyes".*Forbes*. Retrieved 2023-11-13.**^**"Cosmic Web".*NASA Universe Exploration*. Archived from the original on 2023-03-27. Retrieved 2023-06-06.**^**Komberg, B. V.; Kravtsov, A. V.; Lukash, V. N. (October 1996). "The search for and investigation of large quasar groups".*Monthly Notices of the Royal Astronomical Society*.**282**(3): 713–722. arXiv:astro-ph/9602090. Bibcode:1996MNRAS.282..713K. doi:10.1093/mnras/282.3.713. ISSN 0035-8711.**^**Clowes, R. G. (2001). "Large Quasar Groups - A Short Review".*Astronomical Society of the Pacific*.**232**: 108. Bibcode:2001ASPC..232..108C. ISBN 1-58381-065-X.**^**Huchra, John P.; Geller, Margaret J. (17 November 1989). "M. J. Geller & J. P. Huchra,*Science***246**, 897 (1989)".*Science*.**246**(4932): 897–903. doi:10.1126/science.246.4932.897. PMID 17812575. S2CID 31328798. Archived from the original on 2008-06-21. Retrieved 2009-09-18.**^***Sky and Telescope*, "Refining the Cosmic Recipe" Archived 2012-03-09 at the Wayback Machine, 14 November 2003**^**Wall, Mike (2013-01-11). "Largest structure in universe discovered". Fox News. Archived from the original on 2013-01-12. Retrieved 2013-01-12.- ^
## Further reading
- Pimbblet, Kevin A. (2005). "Pulling Out Threads from the Cosmic Tapestry: Defining Filaments of Galaxies". *Publications of the Astronomical Society of Australia*. **22** (2): 136–143. arXiv:astro-ph/0503286. Bibcode:2005PASA...22..136P. doi:10.1071/AS05006. ISSN 1323-3580.
| true | true | true | null |
2024-10-12 00:00:00
|
2004-03-15 00:00:00
|
website
|
wikipedia.org
|
Wikimedia Foundation, Inc.
| null | null |
|
25,277,958 |
https://www.washingtonpost.com/health/cdc-quarantine-time-covid/2020/12/02/18159172-349e-11eb-b59c-adb7153d10c2_story.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
23,436,977 |
https://iphoneperformance.com/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
37,111,846 |
https://www.latimes.com/california/story/2023-08-10/new-coronavirus-subvariant-eris-gaining-dominance
|
New coronavirus subvariant Eris is gaining dominance. Is it fueling an increase in cases?
|
Rong-Gong Lin II
|
# New coronavirus subvariant Eris is gaining dominance. Is it fueling an increase in cases?
A new coronavirus subvariant, nicknamed Eris, has rapidly risen to prominence nationwide and is now thought to account for more U.S. cases than any of its counterparts at a time when transmission has been creeping upward.
It’s possible the subvariant, formally known as EG.5, may have even further immune-escape advantage than some earlier members of the sprawling Omicron family — a viral dynasty that has dominated the globe since December 2021.
But officials emphasize that doesn’t necessarily mean Eris will cause a big wave.
“There are a couple of mutations on this particular variant that may have more immune evasion again. But it is very similar, and still a subset variant of Omicron,” California state epidemiologist Dr. Erica Pan said in a briefing with health professionals Tuesday.
According to estimates from the U.S. Centers for Disease Control and Prevention, Eris comprised 17.3% of coronavirus specimens nationwide for the week ending Saturday, up from 11.9% a week earlier.
Eris is now estimated to be the most common distinctly identified subvariant nationwide. XBB.1.16, nicknamed Arcturus, has fallen to second place, and is estimated to comprise 15.6% of specimens. The third- and fourth-most common subvariants, respectively, are XBB.2.3, which some have nicknamed Acrux; and XBB.1.5, also known as Kraken.
In the Southwestern U.S. — California, Arizona, Nevada, Hawaii and the Pacific territories — the CDC estimates Arcturus is still the dominant subvariant, estimated to comprise 18.6% of viral specimens, with Eris second at 16.2%.
Unlike the pandemic’s earlier days, which were marked by distinctly different strains of the coronavirus struggling for dominance, more recent notable versions are all descendants of the Omicron variant. As a result, the World Health Organization hasn’t granted them official names beyond their alphanumeric identifiers — leading to an unofficial system, circulating on social media, using celestial or mythical monikers.
Eris shares its name with one of the largest known dwarf planets in our solar system. Formerly known as Xena, it’s about the same size as Pluto, NASA says, and three times farther away from the sun. Eris is also the name of the Greek goddess of discord and strife.
Eris “is essentially a sub-, sub-variant of XBB,” according to Pan. In her briefing, Pan said California is still largely seeing the Kraken subvariant, but that Eris is on the upswing, and now estimated to be about 12% “of what we’re seeing here in California.”
It’s clear that coronavirus transmission has started to tick up nationwide and in California. Coronavirus levels in wastewater in most parts of the state are now at a “medium” level. A week earlier, they were mostly at a “low” level, Pan said.
California’s test positivity rates also have gone up in the last two to three weeks, as have anecdotal reports indicating that COVID seems to be spreading more widely, Pan said.
“A lot of this may be from summer travel,” she said. “Thankfully, our hospitalizations are looking very reassuring so far.”
Deaths have also been stable and relatively low, Pan said. And because the differences between the latest subvariant are relatively minor, the autumn version of the COVID-19 vaccine that will be unveiled soon is still expected to do well against circulating variants later this year.
“The new vaccine ... should still have good cross-coverage, because it’s still based on that XBB base,” Pan said.
COVID-19 hospitalizations nationwide fell to a record low for the pandemic in June, but have ticked up since.
Nationwide, there were 9,056 new COVID-19 hospitalizations for the week that ended July 29, up 12.5% from the prior week. The record low in terms of hospitalizations, 6,306, was set for the week that ended June 24.
In California, there were 1,416 new COVID-19 hospitalizations for the week that ended July 29, up 4% from the prior week. That’s more modest than the previous week-over-week increase, which was 12%.
All regions of the nation, as defined by federal officials, have seen an increase in COVID-19 hospitalizations in recent weeks. But they also all remain at near-record lows.
Some areas are seeing the pace of growth accelerate, however. New England saw a 26% increase in new COVID-19 hospitalizations for the week that ended July 29. The prior week-over-week change was 19%, and, before that, 5%.
The Mid-Atlantic region — Pennsylvania, Virginia, Maryland, West Virginia, Delaware, and the District of Columbia — saw a 19% increase in new weekly COVID-19 hospitalizations, up from a 4% jump the prior week.
In the Southeast region of Florida, Georgia, North Carolina, Tennessee, South Carolina, Alabama, Kentucky and Mississippi, new weekly COVID-19 hospitalizations rose by 23%, up from the prior week-over-week increase of 10%.
The region comprised of Texas, Louisiana, Oklahoma, Arkansas and New Mexico saw COVID-19 hospitalizations rise 9% for the most recent week, more gradual than the 18% prior week-over-week increase.
New York and New Jersey saw COVID-19 hospitalizations rise by 9%, a gentler increase from the prior week-over-week jump of 27%.
The region that comprises Missouri, Iowa, Kansas and Nebraska saw a 20% jump following a previous week-over-week decrease of 9%.
| true | true | true |
It's possible the Eris subvariant, formally known as EG.5, may have even further immune-escape advantage than other members of the Omicron coronavirus family.
|
2024-10-12 00:00:00
|
2023-08-10 00:00:00
| null |
newsarticle
|
latimes.com
|
Los Angeles Times
| null | null |
10,762,912 |
http://www.economist.com/news/christmas-specials/21683983-secrets-worlds-best-businesspeople-going-global
|
Going global
| null |
# Going global
## Secrets of the world’s best businesspeople
AS BRITISH imperialists were trudging through African jungles to secure their newly conquered empire, some of the empire’s subjects were also roaming far and wide, under the cover of the Union flag. One was Allidina Visram, from Kutch, in what is now Gujarat state in India. He arrived penniless in Zanzibar (now part of Tanzania) on the east African coast in 1863, aged 12. He opened his first small shop 14 years later, and soon afterwards spotted his great opportunity. He opened a store at every large railway station along the 580 miles of railway track being laid down through Kenya to Uganda in the early 1900s, providing supplies to thousands of railway workers. He then opened more stores at Jinja on Lake Victoria.
This article appeared in the Christmas Specials section of the print edition under the headline “Going global”
| true | true | true |
Secrets of the world’s best businesspeople
|
2024-10-12 00:00:00
|
2015-12-19 00:00:00
|
Article
|
economist.com
|
The Economist
| null | null |
|
9,371,608 |
http://www.theguardian.com/science/alexs-adventures-in-numberland/2015/mar/11/x-plus-y-insider-view-maths-tournaments
|
Confessions of a mathematical Olympian: an insider view of film X+Y
|
Adam P Goucher
|
The dream of every aspiring young mathematician is to compete at the annual International Mathematical Olympiad (IMO), where the best pre-university maths minds from around the world are faced with subtle, challenging and imaginative problems.
As a competition it is brutal and intense.
I speak from experience; I was in the UK team in 2011.
So it was with great expectation that I went to see X+Y, a star-studded British film about the travails of a British IMO hopeful who is struggling against the challenges of romance, Asperger’s and really tough maths.
Obviously, there were a few oversimplifications and departures from reality necessary for a coherent storyline. There were other problems too, but we’ll get to them later.
In order to get chosen for the UK IMO team, you must sit the first round test of the British Mathematical Olympiad (BMO1). About 1200 candidates take this test around the country.
I sat BMO1 on a cold December day at my sixth form, Netherthorpe School in Chesterfield. Apart from the invigilator and me, the room was completely empty, although the surroundings became irrelevant as soon as I was captivated by the problems. The test comprises six questions over the course of three and a half hours. As is the case with all Olympiad problems, there are often many distinct ways to solve them, and correct complete solutions are maximally rewarded irrespective of the elegance or complexity of the proof.
Here’s a typical BMO1 problem (from 2006):
*Find four prime numbers less than 100 which divide 3³² – 2³².*
There are many ways to solve this, including using Fermat's Little Theorem, modular arithmetic, and factorising the polynomial *p*³² – *q*³². A less efficient approach was taken by one student who attempted to manually evaluate 3³² – 2³² and perform a brute-force search of all primes below 100. It is difficult to imagine a more cumbersome method, short of using Roman numerals!
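A worked factorisation along these lines (my own sketch, not part of the original article) shows where four such primes come from:

```latex
3^{32} - 2^{32} = (3-2)(3+2)(3^{2}+2^{2})(3^{4}+2^{4})(3^{8}+2^{8})(3^{16}+2^{16})
```

Here 3 + 2 = 5, 3² + 2² = 13, 3⁴ + 2⁴ = 97 and 3⁸ + 2⁸ = 6817 = 17 × 401, so the primes 5, 13, 17 and 97 all divide 3³² – 2³².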
The knowledge required to solve this problem does not transcend A-level, but it demands thinking more laterally than is required in any conventional exam.
A few weeks later this is followed by a second and more challenging round, the BMO2, which consists of four considerably more difficult problems. Here’s a typical BMO2 question taken from last year:
*We say that two integer points (a, b) and (c, d) in the plane can ‘see’ each other if the line segment joining them passes through no other integer points. A ‘loop’ is an irreducible non-empty set of integer points such that each element can see precisely two other elements of the set. Does there exist a loop of size 100?*
The solution is quite involved and generalises to any bipartite graph. The idea is to put alternate vertices on the *x*- and *y*-axes, and to appeal to the Chinese Remainder Theorem in addition to Dirichlet's Theorem in order to guarantee that this can be done in such a way as to ensure that the only lines of visibility are those that we specifically want.
The highest twenty scorers are invited to another training camp at Trinity College, Cambridge, and the top six are selected to represent the UK at an annual competition in Romania.
I was chosen for this. It was my first experience of an international maths competition, and indeed my first time abroad. Upon arriving at Luton airport, I was confronted with a scene of people wielding Rubik’s cubes and playing word games, not completely dissimilar from the airport scene in X+Y. The main difference was that my compatriots were far more friendly and agreeable than their arrogant and pretentious film counterparts. Instead, we had an immediate mutual respect.
In Romania, there was much maths, but we also enjoyed a snowball fight against the Italian delegation and sampled the delights of Romanian rum-endowed chocolate. Since I was teetotal at this point in time, the rum content was sufficient to alter my perception in such a way that I decided to attack a problem using Cartesian coordinates (considered by many to be barbaric and masochistic). Luckily my recklessness paid off, enabling me to scrape a much-coveted gold medal by the narrowest of margins.
In X+Y, the British team has a joint training camp with the Chinese delegation. The closest analogue is the Anglo-Hungarian training camp that is held near a picturesque but secluded lake thirty miles west of Budapest. From my experience in December 2011, this was the most enjoyable of the maths camps.
The connection between the UK and Eastern Europe is rather complicated to explain, being intimately entangled with the history of the IMO. The inaugural Olympiad was held in Romania in 1959, with the competition being only open to countries under the Soviet bloc. A Hungarian mathematician, Béla Bollobás, competed in the first three Olympiads, seizing a perfect score on the third. After his PhD, Bollobás moved to Trinity College, Cambridge, to continue his research, where he fertilised Cambridge with his contributions in probabilistic and extremal combinatorics (becoming a Fellow of the Royal Society in the process). Consequently, there is a close relationship between Hungarian and Cantabrigian mathematics.
Nathan (the protagonist of X+Y) receives a seated ovation when he presents a solution immediately after being asked a question involving playing cards by their team leader:
This was highly unrealistic, as the question is entirely trivial in comparison with problems encountered at international level. That Nathan and I both solved it in a split-second is indicative of this – IMO-level problems typically take many minutes or even hours to solve.
In the film, the IMO is held at Cambridge. In 2011 it was in Amsterdam. I was impressed that the film managed to accurately capture the atmosphere of the exam, even in the detail of having exactly the same colour-coded cards for requesting assistance. Conversely, the overgrown lollipops bearing the mantra ‘Can I Help?’ were much less believable!
Rafe Spall’s character was very convincing, and his eccentricities injected some much-needed humour into the film. Similarly, Asa Butterfield’s portrayal of a “typical mathmo” was realistic. On the other hand, certain characters such as Richard (the team leader) were unnatural and exaggerated. In particular, I was disappointed that all of the competitors were portrayed as being borderline-autistic, when in reality there is a much more diverse mixture of individuals.
X+Y is also a love story, and one based on a true story covered in Morgan Matthews’ earlier work, the documentary *Beautiful Young Minds*. This followed the 2006 IMO, in China, where one of the members of the UK team fell in love and married the receptionist of the hotel the team were staying at. They have since separated, although his enamourment with China persisted – he switched from studying Mathematics to Chinese Studies.
It is common for relationships to develop during maths Olympiads. Indeed after a member of our team enjoyed a ménage-a-trois at an IMO in the 1980s, the committee increased the security and prohibited boys and girls from entering each others’ rooms.
I did not find the love story in X+Y convincing. But what was even more absurd was that a climactic moment was illustrated by a majestic, CGI rainbow that was optically inconsistent! The colours appeared in the same order in the inner and outer rainbows, which does not reflect (no pun intended!) actual double rainbows, where the colours appear in reversed order.
What happens to IMO veterans? More than half of UK team members since we joined the competition in 1966 have become (like me) undergraduates at Trinity College, Cambridge.
Trinity is something of a mathematical paradise, being the ancestral home of Isaac Newton, G.H Hardy, Srinivasa Ramanujan and many others. The community of Trinity undergraduates also help the effort by giving maths camp lectures, marking selection tests and (most importantly!) making tea. I myself assisted in founding the European Girls’ Mathematical Olympiad, which is now in its fourth year, and actively mark the corresponding UK selection paper.
Even though it wasn’t a faithful representation of Olympiad life, I hope that ‘X+Y’ inspires and encourages aspiring young mathematicians to pursue their interest to the highest level possible.
*Adam P. Goucher is in the third year of a maths degree at Cambridge. You can follow his blog at Complex Projective 4-Space. *
*If you want to be kept in touch with this blog, Adventures in Numberland, please follow Alex Bellos on Twitter, Facebook or Google+.*
| true | true | true |
The high pressure world of international maths tournaments is brought to life in the much-anticipated British movie X+Y. Here a former contestant reveals the maths, the alcohol and the sexual intrigue of these events and tells us whether the film gets it right
|
2024-10-12 00:00:00
|
2015-03-11 00:00:00
|
article
|
theguardian.com
|
The Guardian
| null | null |
|
21,289,560 |
https://medium.com/@maladdinsayed/advanced-techniques-and-ideas-for-better-coding-skills-d632e9f9675
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
15,091,815 |
https://deepmoji.mit.edu/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
10,048,681 |
https://github.com/processing/processing/wiki/Changes-in-3.0
|
Changes in 3.0
|
Processing
|
# Changes in 3.0
This document summarizes major changes in Processing between versions 2.0 and 3.0. If you are updating code from 1.x to 2.x, use this page.
The latest release of Processing is 4.0, and you can read about changes in 4.0 here.
This page lists the most important changes in Processing 3 for people who are familiar with Processing 2. For more specific details, read the revisions.txt file to see what's changed since the release before it. This is where you go if your code stops working in the most recent release, or if you want to know if your favorite bug has been fixed.
- **Rendering rebuilt** - OpenGL (`P2D` and `P3D`) is now stutter-free and very speedy. Some `JAVA2D` performance improvements as well. The new `FX2D` renderer offers huge speedups for 2D drawing, especially with high density “retina” displays.
- **New editor** — The main editor window now includes:
  - Autocomplete! (can be activated in Preferences)
  - A full-featured, easy to use debugger
  - Tweak Mode has been incorporated
- **New interface** - the UI has received a major makeover.
- **High-res display support** — New methods `pixelDensity()` and `displayDensity()` make it easier to create sketches that will run nicely on high-res ("Retina") displays. It sounds so simple when put that way, but this is a really big deal. (A minimal sketch using them follows this list.)
- **Unified Contributions Manager** — We used to have separate windows for installing Libraries, Modes, and Tools. Now a single "Contributions Manager" helps you manage installation and updates for all of these contributions by third-party authors, plus... Examples!
- **Sketchbook Migration** — If you already have a (2.x) sketchbook, 3.0 will ask if you want to create a new, 3.0-specific sketchbook, or share the existing one.
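A minimal sketch of the high-res support mentioned in the list above (my own illustration, not taken from this wiki) simply feeds `displayDensity()` into `pixelDensity()`:

```
void setup() {
  size(400, 400);
  // Ask for the display's native density: typically 2 on "Retina" screens, 1 otherwise.
  pixelDensity(displayDensity());
}

void draw() {
  background(255);
  ellipse(width/2, height/2, 200, 200);
}
```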
- **Do not use variables in `size()`** - This time we really mean it. We've been saying that `size()` must be the first line in setup since at least 2004, and that using variables instead of numbers will cause problems since 2006, and by at least 2009, simply "don't do it", but now the chickens have come home to roost. In the past, the `size()` function was implemented by doing backflips behind the scenes. Those backflips made things very fragile, introduced cross-platform quirks that have consumed too many of my weekends, and prevented us from making wholesale performance improvements to the rendering system (higher performance, better full screen support, etc). **But despair not!** If you must change the size of your sketch, use `surface.setSize(w, h)`, which is the one and only (safe) way to alter your sketch's size. A short demo that's both resizable *and* gives you a random sketch window size whenever you hit a key:
```
void setup() {
size(400, 400);
surface.setResizable(true);
}
void draw() {
background(255);
line(100, 100, width-100, height-100);
}
void keyPressed() {
surface.setSize(round(random(200, 500)), round(random(200, 500)));
}
```
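If you run Processing from another development environment instead of the PDE (the situation the `settings()` note further down describes), the same rules apply but the `size()` call moves into `settings()`. A rough sketch of that pattern, where the class name `ResizeDemo` is just a placeholder of mine:

```
import processing.core.PApplet;

public class ResizeDemo extends PApplet {

  // There is no preprocessor outside the PDE, so size() belongs in settings().
  public void settings() {
    size(400, 400);
  }

  public void setup() {
    surface.setResizable(true);
  }

  public void draw() {
    background(255);
    line(100, 100, width - 100, height - 100);
  }

  public static void main(String[] args) {
    PApplet.main("ResizeDemo");
  }
}
```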
- **Applet is gone** — Java's `java.awt.Applet` is no longer the base class used by `PApplet`, so any sketches that make use of Applet-specific methods (or assume that a `PApplet` is a Java AWT `Component` object) will need to be rewritten.
- **You only smooth once** — `smooth()` and `noSmooth()` can only be used in `setup()`, and only once per sketch. Note that `smooth()` has been enabled by default since 2.x, so it's unlikely you'll need it anyway.
For the curious or insomniac, this document has the technical details about why these changes were made.
-
Use the
`FX2D`
renderer for greatly improved 2D graphics performance. It has many improvements over the default renderer, though it has a few rough edges so we haven't made it the default. -
`fullScreen()`
method makes it much easier to run sketches in, well, full-screen mode. -
The
`PVector`
class now supports chaining methods. -
SVG Export that works just like the PDF Export
-
A new
`settings()`
method that is called behind the scenes. Most users will never notice this, but if you're using Processing without its preprocessor (i.e. from Eclipse or a similar development environment), then put any calls to`size()`
,`fullScreen()`
,`smooth()`
,`noSmooth()`
, and`pixelDensity()`
into that method. More information can be found in the reference.*Only users who are in other development environments should use*It shouldn't be used for any other purpose.`settings()`
. -
Updated application icons.
- The Video and Sound libraries are no longer included in the download (because they've grown too large) and must be installed separately. Use Sketch → Import Library → Add Library... to install either one.
- The variables
`displayWidth`
and`displayHeight`
should not be used. They're still available in 3.0 so that less code breaks in the meantime, but that won't last forever.
- Lots of bugs
- There are plenty of issues and we could use some help!
- On Windows, launch4j doesn't work from folders with non-native charsets. On an English version of a Windows system, any characters in CP1252 are fine.
- When using
`cursor()`
in P2D and P3D, the cursor images do not match what you expect from the OS. - When using
`selectInput()`
,`selectOutput()`
, and`selectFolder()`
with OpenGL on Windows, the sketch window will close until the file is selected. We're waiting for an upstream fix from the JOGL project. - It's not really a 3.0 change, but Apple changed how key repeat works in macOS Sierra, which will break some projects. See here for details and how to fix it.
- If you have a lot of fonts installed, sketch startup time and initial
`text()`
rendering time can be extremely slow. This is due to an unfortunate Java bug and out of our control.
Some Libraries, and *all* Modes and Tools will need to be updated to be compatible with 3.0. If you've written one of these, thank you! and read on:
-
For the vast majority of authors, the changes are quite simple, and involve class name or package changes inside
`processing.app`
.- The exception is any Library where 1) assumptions were made about
`PApplet`
being a subclass of`Applet`
(and`Component`
), or 2) relied on AWT-specific features in`processing.core`
. More about the changes to core can be found here.
-
Modes and Tools need slight modifications because much of the UI code for Processing has moved from the
`processing.app`
package (which had grown unmanageably large) into`processing.app.ui`
. This doesn't give us the perfect separation of UI and non-UI code that CS professors dream about, but it's a helpful step in the right direction.
Several of the (static) utility functions from
`Base`
have moved into classes called`Util`
,`Messages`
, and`Platform`
because`Base`
was getting enormous. (It still is enormous, but it's now a wee bit more reasonable.) Since enough other things are breaking in 3.0, we're not including accessors for the deprecated version of the functions, just making a clean break. Common changes will include:-
`Base.isMacOS()`
or`Base.isWindows()`
becomes`Platform.isMacOS()`
and`Platform.isWindows()`
(sensible, right?) -
`Base.showWarning()`
becomes`Messages.showWarning()`
-
`Base.log()`
becomes`Messages.log()`
- A good rule of thumb is that if there are platform-specific qualities to it, it's probably in
`Platform`
. If it's a message (whether a dialog box or a log file), it's probably in`Messages`
. And all those file utilities are in that`Util`
class.
-
-
Modes (and some Tools) will also need to be updated based on the major UI changes in 3.0. The default “Java” Mode is now completely separate from the rest of the code, so it's a decent model for understanding how to use the new
`EditorButton`
classes and similar. If your Mode subclasses`JavaMode`
, you'll want to check out how Android Mode does this and how it imports the necessary classes from the Java Mode, since they're no longer on the default`CLASSPATH`
. -
Each Tool is only initialized once (in 2.x, this happened repeatedly). This saves lots of time and memory. The
`init()`
method was changed to pass a`Base`
object instead of`Editor`
, because it's necessary to call`Base.getActiveEditor()`
whenever a Tool's`run()`
method is called. See the Tool Basics page for an example. -
The 2.0 and 3.0 lists of Libraries, Modes, and Tools are stored separately, so it's possible to maintain both versions, if you'd like to do so. (Though it should be noted that all active efforts are on 3.x) Users also have the ability to use separate sketchbooks for 2.x and 3.x versions of Processing, so they can have separate versions installed while the transition happens.
-
Library authors, now is the time to reduce your reliance on AWT! In 3.0, we've moved away from AWT in a big way (here's why). Any library features that require AWT should be treated with suspicion. Modes and Tools can still use AWT, but the OpenGL renderers (
`P2D`
and`P3D`
) and the upcoming`FX2D`
renderer don't use AWT at all.- This is one reason we built the
`processing.event`
classes in 2.x, and have been removing spurious AWT usage from the core API, documentation, and examples.) It's now been a couple years since we made those changes. - The other reason is that we can't rely on AWT features when targeting JavaScript or Android, so it was encouraging bad habits.
- This is one reason we built the
| true | true | true |
Source code for the Processing Core and Development Environment (PDE) - processing/processing
|
2024-10-12 00:00:00
|
2022-08-05 00:00:00
|
https://opengraph.githubassets.com/760a2e1bdf986c8779c5ae3e5077c4f660aea0978629fa14be09a625b1d5b584/processing/processing
|
object
|
github.com
|
GitHub
| null | null |
40,961,262 |
https://changelog.com/friends/52
|
Last DevRel standing with Shawn "swyx" Wang (Changelog & Friends #52)
| null |
Shawn “swyx” Wang is back to talk with us about the state of DevRel according to ZIRP (the Zero Interest Rate Phenomenon), the data that backs up the rise and fall of job openings, whether or not DevRel is dead or dying, speculation of the near-term arrival of AGI, AI Engineering as the last job standing, the innovation from Cognition with Devin as well as their mis-steps during Devin’s launch, and what’s to come in the next innovation round of AI.
### Featuring
### Sponsors
**Sentry** – **Code breaks, fix it faster.** Don’t just observe. Take action. Sentry is the only app monitoring platform built for developers that gets to the root cause for every issue. 90,000+ growing teams use sentry to find problems fast. Use the code `CHANGELOG`
when you sign up to get $100 OFF the team plan.
**1Password** – Build securely with 1Password - 1Password simplifies how you securely use, manage, and integrate developer credentials. Manage SSH keys and sign Git commits. Access secrets stored in 1Password. Automate administrative tasks. Integrate with third-party tools. Also, check out our INFRASTRUCTURE.md file for more details on how we do secrets with 1Password.
**Paragon** – Ship native integrations to production in days with more than 130 pre-built connectors, or configure your own custom integrations. Built for product and engineering. Learn more at useparagon.com/changelog
**Fly.io** – **The home of Changelog.com** — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.
### Notes & Links
- DevRel’s Death as Zero Interest Rate Phenomenon
- Measuring Developer Relations
- pandas
- Jasper AI
- Writer.com
- harvey.ai
- Cognition.ai
- Anthropic
- Changelog Interviews #594: Microsoft is all-in on AI: Part 2 (with Mark Russinovich, Eric Boyd & Neha Batra)
- The Rise of the AI Engineer
- AI News
- von Neumann probes (Self-replicating spacecraft)
### Chapters
Chapter Number | Chapter Start Time | Chapter Title | Chapter Duration |
--- | --- | --- | --- |
1 | 00:00 | Let's talk! | 00:38 |
2 | 00:38 | Sponsor: Sentry | 03:39 |
3 | 04:17 | ZIRP and Friends | 00:45 |
4 | 05:02 | Where in the world? | 02:21 |
5 | 07:23 | Zero interest-rate phenomenon (ZIRP) | 04:35 |
6 | 11:58 | Good vs bad DevRel | 06:05 |
7 | 18:03 | Sponsor: 1Password | 02:37 |
8 | 20:40 | What exactly is DevRel? | 10:06 |
9 | 30:46 | Just publish hits | 08:03 |
10 | 38:49 | DevRel folded into Product? | 05:41 |
11 | 44:30 | Just tell/show people the story | 03:54 |
12 | 48:24 | Data Science ~> AI Engineering? | 01:51 |
13 | 50:15 | Attributes of an AI Engineer? | 03:06 |
14 | 53:21 | Success besides Midjourney? | 05:32 |
15 | 58:53 | Sponsor: Paragon | 03:39 |
16 | 1:02:32 | Share more about Cognition/Devin | 09:56 |
17 | 1:12:28 | Is AI alive? | 05:05 |
18 | 1:17:33 | PULL THE PLUG?!! | 04:00 |
19 | 1:21:33 | If West World is even close... | 08:05 |
20 | 1:29:39 | We're done. | 00:35 |
21 | 1:30:13 | Outro time | 01:31 |
22 | 1:31:44 | ++ Teaser | 01:38 |
### Transcript
Play the audio to listen along while you enjoy the transcript. 🎧
Alright, we’re here with our old friend, multi-time recurring guest; too many times to count. I don’t know, swyx - 3, 4, 7, 11 times on the pod? I’m not sure. But you’re back. It’s been a little while. Good to have you, swyx. Welcome back.
Thanks. Good to be back. I’ve been always a loyal listener, and it’s just an honor to be invited on every single time. It never gets old.
Well, we love your enthusiasm and your availability. I can hop on with you on Monday and say “Hey, do you want to come on the pod tomorrow?” and you’re like “Let’s rock and roll.” [laughter] That helps.
That means I keep myself relatively free… I currently do not have a real job. That’s what it means. I did move a meeting, but that’s just because it’s easy for me to move stuff, because I’m my own boss now effectively.
Remind me where you’re at in the world again.
Yes. So I am now no longer having a real job. I run – I always call it two and a half companies. It’s the Latent Space podcast and newsletters, so the media empire, which we can talk about later… The AI Engineer Conference, which just finished two weeks ago… And I also have my own venture-backed company, Small AI, which I’m working on with a couple engineers. So yeah, it’s hard to describe, because I don’t work at a regular employer; I kind of split my time between three business ventures. But that’s just how my attention is spent. I think each of them independently have different time horizons of success, and hopefully they all have a common theme of like me being a prime mover among the engineering field.
How do you make money?
So the thing that actually currently makes money is the conference that I run. We have now successfully run a 2,000-person conference for the first time ever… And same deal as most conferences - we sell tickets and sell sponsorships. You pre-commit to a whole bunch of expenses up front, and then you freak out for three months, hoping that you sell enough tickets to cover your costs. And then we do, and we make some money back on top of it, and that’s the profit.
Well, happy to hear that you’re running in the black there. A lot of folks run conferences in the red, or very near the red. So that’s awesome.
Yeah. We had a lot of help, because Andrej Karpathy basically tweeted about us, and we immediately sold out like an hour later. So I think for me it’s a long-term game of like – I want to build like the KubeCon equivalent, the definitive industry conference for AI engineering, which is the thing that I’ve decided to sort of place all my chips on. So we don’t have to talk about that at this time, but I think it’s relevant to Dev Rel, in the sense that when I was a developer advocate I spoke at conferences, and a couple of times we actually even organized company conferences. That’s pretty much the peak of what you do as a Dev Rel. And any Dev Rel who believes their own BS enough should actually go out on their own and do this as an independent business venture, because it is worth much more to dozens and maybe hundreds of companies than it is to a single company.
Yeah, that’s interesting. Well, I remember last time you were on the pod we had just experienced the ChatGPT moment, I think. It was probably a month or so after that… And you said “This is it. I’m going all in on this. I’m learning –”, which you do, which is one of the reasons why we invite you on, because you learn in public, and we learn from you and with you… And you had made multiple transitions, kind of frontend stuff, Dev Rel, you were thinking backend for a while… I know you were at Temporal, doing workflows and backends, and then it’s like “Alright, here’s where I’m going to really dive in.” And that was a while ago. So I definitely want to catch up with you on that stuff.
[00:08:00.12] But the reason why you caught my eye this time around was a post about Dev Rel, and about the zero interest rate phenomenon… Which it seems like we’re learning now post ZIRP that a lot of things that we were living with and thinking they were normal perhaps we’re not so normal; they were kind of bubbly, or frothy, or maybe symptoms or side effects of all of this cheap money which was in our industry… And that quickly left our industry when the macroeconomic situation changed.
You have a post which we covered in news just the other day, yesterday as we record, but a few days back as we ship, “Dev Rel’s death as zero interest rate phenomenon”, where you ask and answer the same question, “Is Dev Rel dead?” And I’ve heard a few whispers of this, like “Okay, is Dev Rel dead, or dying? Or what’s going on with Dev Rel as a thing?” And so that’s the opening of the can. Swyx, take it where you want. First of all, you say no, but maybe why is it not dead? Start there.
I think to claim something as dead means there’s no longer demand for a role like this, and it’s objectively not true. I have friends who are desperately trying to hire Dev Rel, with full knowledge of all its faults, because they are close friends of mine, and I have complained to them about the failings of Dev Rel. So full knowledge that a lot of Dev Rel is completely ineffective, a lot of people are really bad at Dev Rel, and they still have the job… Full knowledge of all that, they still need it. So you cannot say the role is dead if people just really demand it still. But just like with any technology, the moment people start asking “Is it dead?”, it’s not quite dead, but it’s less cool.
Yeah, it’s not a good sign, right?
Redux isn’t dead, but people have been asking “Is it dead?” Yeah, you know, it declines… And I try to quantify it. My approach the first time anyone’s actually tried to say that, “It’s not dead, but how much of it has died?” And my number is 30%. Over ZIRP it increased 200%, and then it declined 30% since the peak.
And where do you get that? How do you quantify that?
Google Trends? That most objective amount of data.
A good proxy for perhaps being truthful, right?
In terms of search, like what’s the search trend?
Search trends, yeah. But also - and that seems to coincidentally line up with the Common Room Industry Survey of Dev Rels, where about 26% of them have said that they’ve been involved in layoffs. And anecdotally, we’ve seen a bunch of Dev Rel layoffs, including my old company, Netlify, including Planet Scale, including a bunch of other companies out there. Auth0 as well. And these are layoffs without replacement. So straight up, we no longer have Dev Rel. And that is one form of Dev Rel being dead. Companies that used to heavily invest in Dev Rel. Actually, I’ll put this on record, because I couldn’t find an authoritative source, so I’ll be the authoritative source. Microsoft, 2018 to 2019, hired something like 200 Dev Rels. They call them cloud developer advocates. Some of the top names in our industry spent really a lot of money gathering all of them. And like two years later, half of them are gone. It’s not polite to talk about power politics within Microsoft, but there was definitely a big power struggle in there, trying to build a Dev Rel org and not succeeding. I think that was like a really early precursor. Because that wasn’t ZIRP, right? 2018-2019 wasn’t ZIRP. But it was a pretty, pretty early precursor, and people having very inflated expectation of what Dev Rel could do for a business, throwing a whole bunch of money in it, and then realizing that just money alone and number o warm bodies alone doesn’t actually solve it. Like, you actually have to have taste, and a clear message, and a working system that scales healthily and effectively, and that takes time to build.
[00:11:56.26] Yeah. There’s a lot of facets to this, and one of which you mentioned maybe offhand that I think about a lot… I think Adam thinks about it a lot, because we talked to a lot of Dev Rels; of course, Dev Rels would love to be on our shows, and all this kind of stuff. And there’s a fine line between a Dev Rel and a shill… And I think there’s a big difference between good Dev Rel and bad Dev Rel, at least from where we’re seated. We can just like see through certain things… And then other people were like “Yeah, we’d love to have you on.” And it’s like there’s a big difference between the two. It’s clear. And I wonder – first of all, do you seem to agree with that? That’s generally a true sentiment?
Sure. Yeah. Absolutely.
So secondly, when it comes time to die, but then also have life –
Gosh, Jerod…
…like, there’s still value in the position; it seems like - and maybe this is just like stating the obvious, or shallow, but it’s like the good ones are gonna stick around, and the bad ones are the ones… No offense to any individuals, but the ones who aren’t good at what they do, they wouldn’t provide much value in the first place, right?
Yeah. There’s some amount of that, but also inherent in the job - it’s a high burnout job regardless of the macro, in the sense that it’s mostly like a mid-career job. There’s very, very few chief Dev Rel officers. There’s some CDXOs out there, but there’s no career path to like VP Dev Rel in most companies. It’s a job that you sign on –
A stepping stone.
It’s a stepping stone, or you’ve just decided to opt out of the rat race altogether, and all you want to do is make contact –
They’re a lifestyle –
It’s a lifestyle business.
And travel.
Yeah. Well, for most people, professional travel actually starts becoming a drag. It’s nice to see your friends every once a quarter or something, but professional travel actually is a drag. Like, nobody actually really wants to do that in that role… Unless you just love travel. But [unintelligible 00:13:50.05]
That’s my point. I’ve met a few… And maybe they’re just saying this, but it’s like, they just love traveling.
Yeah, check back with them after like two years of it, you know…
[laughs]
Definitely. Over time that gets old, for sure. Even like you go to Sedona or some special place for vacation, like “Man, I want to buy a house here”, which is how I felt when I went to Sedona. I was like “Man, I want to live here.” But if I lived in Sedona – and I lived in Orlando, Florida for a while too, and it’s like, after a while Orlando is just Orlando. Now, yeah, I can go to Universal Studios anytime I want… But do I? I went there like twice, maybe three times over a three-year span. Like, it’s just not – vacation places…
It loses its luster.
Exactly. It loses its luster in terms of like “Well, I don’t think this two years of travel constantly is really the game.” Steve Klabnik I think is probably the best example that I’ve known of, well before even any of the ZIRP. Or maybe it was even like – maybe it was ZIRP. I don’t know. It was freer money. Maybe that was part of it. But not this phenomenon we’re speaking of when it comes to like COVID, and like literally lots of free money out there, this more recent occurrence of it. Steve Klabnik traveled, I think, so much, and he was outspoken about this way, way back in the day, and just got burnt out bad on it… Because it’s just – you’re not built for it. Timezones, travel, you never know where you’re at in the world…You’re constantly in a different timezone… Your body, your circadian rhythm can’t even keep up with it.
Yeah. So I want to preface this with, I think, maybe some people’s impression of Dev Rel is it’s a lot of travel… That number has definitely come down a lot. Part of it is cost cutting, part of it is environmental concerns, and part of it is people just don’t want to travel that much. So actually, when I say people burn out of the job, it’s actually not only that. And probably it’s not even majority travel; it’s actually just the grind of constantly dealing with people who are new to a technology… I call this the eternal September effect. There will always be more beginners. So if you care about, for example, viewership numbers, you’re always writing the one on one level intro to whatever, and like that is your life, for as long as you want to do Dev Rel… Because that’s the highest tab content that’s possible.
[00:16:10.17] The best intro to something - that’s the sort of pinnacle of your success, is to write the best intro to something. So a lot of people don’t want to do that forever. They want to have more seniority and more impact in their work… And impact being maybe financial impact, rather than impact on the industry… Because Dev Rel does have impact on the industry, and they choose to move on.
So people leave their realm not only because they’re not good at their jobs, or they just aspire to something different. And that’s normal as well like. This job, way more than other jobs and startups, has more churn inherently, and that’s normal. I just want to establish that. It’s not a judgment thing of like “Oh, you sucked at it, so you had to leave.”
Yeah, so overall this was a very tough piece to write, because obviously, a lot of my friends are Dev Rel. Obviously, I had that job for a long time, and a lot of people know me for that, and come to me for advice on that… And actually – so the idea for this post was actually one calendar year ago I tweeted it on my sort of private alt account… And I had to wait for more people to start saying it for me to be okay publishing what you could read today. Because me saying it too early would have pissed off a lot of my friends who had that job. And I think for me to find the words that would accurately try to say what I was thinking, without also being too inflammatory. I’m trying not to bite the hand that feeds me, but I’m also saying “Hey, let’s call a spade a spade.” That was a mania in Dev Rel over the ZIRP period. And now everyone can see it. Let’s put an end to it, let’s try to learn some lessons from it, now that it’s okay to say it out loud.
**Break**: [00:17:59.11]
This Microsoft era you mentioned that was pre-ZIRP, and maybe even after it - like, I think one of the reasons maybe there’s churn is that it’s a hard, even to this date, kind of hard to find what exactly is Dev Rel. And when you have a challenge defining what it is, you have a challenge in defining what it should do, what the function should do, which maybe is the reason why people flunk out of the job, or move along or churn, because maybe even in the Microsoft era that you referenced, where they hired lots, it was challenging to define “What exactly are you trying to do?” Because if the job primarily is focused on a function that sits between the company, which usually is a tech product of sorts - a SaaS, a dev tool, a dev service etc, maybe even an open source company, a [unintelligible 00:21:26.09] company, or an open source project - they can have Dev Rel as well - if the function is to nurture the relationship between the company, potentially the purchase of a product, and the developer community, there’s a lot of ambiguity in there in terms of what you could do to be successful. And it might be challenging even as a manager to manage Dev Rels. Like, what do you really do here? What can you do here? What is success in your role? And when you have lack of clarity as an individual, it’s kind of hard to maintain what a good friend of mine, coach Michael Burt says “the prey drive”. You have to have a reason. It’s your because goals. “Because I’m a Dev Rel, these are the things I do in my role.” Or “Because I want to do these things within the developer community, I have clarity.” When you have ambiguity in your role, or a lack of clarity, it’s kind of hard to kind of wake up every day and be motivated and get something done. Or when you’re tweeting or doing social media, or doing these things that aren’t really seen by peers, or adjacent peers, like engineers, or marketing, or sales… It’s like, that person is Dev Rel and they’re just tweeting. Like, is that work?
So when you feel like you’re not clear, it’s kind of hard to kind of just get up every day and just do you do well, unless you’re a self-motivated person. So I guess all that to say, how much of this churn is because the management or definition of what the role is, and what success is of the role, is well defined, so they can be successful?
I would say – so I worked at AWS as Dev Rel. I’ve never worked at Microsoft. But I do think you should have some faith in the big corps to really define roles before they hire for them… Because it’s hard to open headcount in these things. And so they have their definition. And I think there’s more security there, just because these things are very, very well defined, at least internally, for that stuff.
What’s less defined is the startups side of things, where I just got a bunch of funding, I’m going to allocate one person out of my 15-person team to go be that sort of public face for my company. What is your job? Do whatever seems right. And I had that job. That was me at Netlify. I had no manager for a year… It was fantastic. It was also absolutely ZIRP, because I just did whatever I felt like doing. It was fantastic. And then eventually we got adult supervision with Sarah Drasner.
[00:23:58.12] But also, I thrived. The most intrinsic motivational job I had, just figuring out the meta game… And I think a part of it is – with marketing, with anything, with people, the true [unintelligible 00:24:14.18] cannot be taught, and the true Dev Rel is not the Dev Rel that can be written down. And the moment you try to write it down and try to systematize it, the game has already moved six months ago, to like the new game. If you think “Here’s the way to success, and we’re gonna scale this for the next five years”, tough luck. People and trends move quicker than that. So it can be a really tough thing to nail down.
That said, I do have a piece that is fairly popular… I’m told it’s like required reading within Google, of measuring Dev Rel. And I think basically the definition of any job, just from the outset [unintelligible 00:24:53.08] as a black box. Money goes in. What comes out of it? So I have three major buckets. It’s community, it’s content, it’s product. And either you’re producing in one of these areas, or you’re not at all. And I think most expectations of Dev Rel is one of these three things. And I can go into those things further, but you should have some definitions of what the visible output of Dev Rel should be from there, and then that basically becomes the job. And if it’s not successfully captured in those three buckets, then you’re probably doing a different job than what most people think of as Dev Rel.
I think that’s well said. I think that it is really tricky, because almost the more formal and described and delineated the role becomes, you’re probably on the lower end of the value chain in terms of like actually being able to execute on it well… For the reasons that you just stated. It’s like you almost need this – the formula that works really well is like the small startup, and I think you talked about there are companies who are doing really well with no Dev Rel or minimal Dev Rel, but they all have like a charismatic leader, or somebody who’s already very online, and very good at being online, that just continually brings them more and more interest, more and more community, more and more relationships… And that person’s almost a unicorn to a certain extent; like, can you systematize what they do, and then hand it to somebody else and say “Go and do this?” I’m a bit skeptical. Maybe it can work, but I don’t think it’s going to work on a repeated basis into the hundreds of employees, right?
Into the hundreds of employees, on a smaller scale, yes. If we’re talking about like the Twilios of the world, which I feel like they have successfully done that, it becomes much less personality-driven, and much more about a repeatable process that can be scaled across every major city. And it might be also a relic of maybe seven, eight years ago, when things were maybe more sort of in-person-centric. Now that I think online has taken more and more and more of our mindshare and time, and remote working and all that, the sort of online-first Dev Rel, meaning that it is just no location, means there’s no need for repetition, and therefore more centralization in a single person. So I think that’s important to note.
I think the other thing also – so I agree with this sort of low-level, junior/beginner person thinks about it as like “Oh, I will produce three blog posts a month”, or something. The higher level thought process is “I am [unintelligible 00:27:28.09] a mission. I’m promoting an idea. And the output is three blog posts. But those are downstream of me promoting the idea. Like, I’m starting a movement.” For me, when I talk about the stuff that I do for Latent Space, AI Engineer Conference and Small AI, I don’t think about it as I organize a conference, I think about it as “I am starting my own industry, and the visible output is I start a conference.”
[00:27:58.14] So I think people who think about it on a higher level have a more coordinated approach to their actions. Even though the actions still look the same, they have a more cohesive outcome, because there’s a broader plan beyond that, instead of the individual units. And I do think for the junior Dev Rels – so I do some advising on the side, just for fun… And for the beginners who come into the job, they very much do like the one-off “Hey, we’ve got to get this launch right. We’ve got to max out this launch, and we’ve got to get all the retweets… Make this the most awesome launch possible.” Whereas for me, it’s not about the launch. It’s about the long-running campaign from past the launch through whatever’s next. And I think getting people to see that whole journey I think is something that levels them up to the next tier.
And I think that’s what the founders can uniquely do. And you talked about these unicorn founders. That’s what the founders can uniquely do apart from the hired guns, which is the founders started the company because of this whole mission, and being able to authentically tell that story. I think it’s very rare. I think maybe one possible positive example is Lee Rob at Vercel, who has successfully taken on that sort of voice of Next.js.
A great Dev Rel, by the way. When I was thinking about the good ones, he was one that was in my head. I think the guy does a great job.
He actually prompted this post on the “Death of Dev Rel”, because he was like “Yeah, I’ve been thinking about this a lot.” Because he and I chat offline quite a bit. But yeah, I mean, it used to be Guillermo primarily promoting Next.js and Vercel, and now I think Lee Rob is kind of number two in the company, at least as public-facing figures go, being the voice of the company. And I think that’s absolutely like a rare success story.
I think it’s very much a two-way street about the founder being able to trust whoever they hire, and then the employee being able to rise up to the task. And very often those two things don’t happen in either direction. So that’s what it is.
The last thing I want to point out as well, just in terms of why people fail at the job, or why measurements fail… This is something I’ve been thinking about which I did not sufficiently capture in my [unintelligible 00:30:00.24] post, which is that often Dev Rel, just like any other content or media business, is a hits-driven business, not a consistent output business. I used to often say, I’ll write 50 blog posts a year, but just one of the 50 will actually be the one that people remember. And it’s very, very hard to run any business, it’s very hard to learn from any information or any market reaction when most of the stuff you work on is going to flop. And it doesn’t mean that anything is wrong; it means you should still keep going, even though most of your stuff is flopping, because it’s the stuff that’s going to produce that one hit, that’s going to justify everything else.
Right. No, I 100% agree. Any hits-based business is like that. You say “Okay, well, all I need is hits. Then I’m just going to only publish hits.”
Oh, yeah. Just don’t fail.
Yeah, exactly. You don’t know why it’s – I mean, even with this post, swyx, you said “I’m not sure why you guys invited me on” or “I’m not sure why this one resonated with people.”
Yeah. Oh, there’s many reasons why I’m not sure. [laughs]
“I don’t even know why.” But I was like “Oh, this is interesting.” And I hadn’t talked to you in a while, so…
I could probably answer that question…
Okay.
Well, I think that we talk to a lot of Dev Rels, we have a lot of friends, and fans, and people we’re fond of, that are in this space, that have had, I would just say, tumultuous times. And so it’s a hit because we don’t want to see people or companies shrink that particular function, because that means friends of ours are out of work, or they’re changing what they do. They’re moving into adjacent roles, or different roles, maybe directly into marketing, where Dev Rel’s sort of adjacent to it, but in a lot of cases under it… Sometimes even under Product… And I think we care, because it shows the health, to some degree, of our industry.
[00:32:06.10] If one of the core functions is withering or failing or churning or not right, I think it’s an indicator of how healthy the market might be. I think that’s why we care about this particular function so much, because this is literally where product and the future potential buyer might be. One thing you reference in your measurement is the Sean Ellis question, which essentially is “How would you feel if you could no longer use the product?” And so if you ask this question to a free user, a free tier user, and you say “How would you feel if Changelog was no longer a thing tomorrow?” Would you be happy, unhappy, somewhat disappointed, very disappointed?
Devastated?
Devastated. Thank you, swyx. I think that if we had a large majority of people saying “devastated” or “very disappointed” versus the other two or three options, then that means that we’ve got some version of product-market fit, or we’re very beloved, and so we should find a way to exist or live… If that were us on a deathbed, Jerod. Geez, this is terrible. The point is that I think that we have a lot of people who are in that space we care about, and any unhealthy measure in this particular space shows signs of an unhealthy market. That’s why we care. That’s why I care.
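As a rough illustration of how the Sean Ellis question gets tabulated in practice — a minimal sketch with made-up survey responses, using the commonly cited rule of thumb that roughly 40% or more answering “very disappointed” suggests product-market fit:

```python
from collections import Counter

# Hypothetical answers to "How would you feel if you could no longer use the product?"
responses = [
    "very disappointed", "somewhat disappointed", "very disappointed",
    "not disappointed", "very disappointed", "somewhat disappointed",
]

counts = Counter(responses)
share_very_disappointed = counts["very disappointed"] / len(responses)

# Sean Ellis' rule of thumb: ~40%+ "very disappointed" is a signal of product-market fit
print(f"{share_very_disappointed:.0%} very disappointed")
print("PMF signal" if share_very_disappointed >= 0.40 else "keep iterating")
```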
Plus, we’re looking for answers and explanations too, right? I mean, we see things going on, some of us talk about it, some of us don’t… And it’s tumultuous, and it’s scary and sad. And then you’re looking for answers. You’re looking for like “Well, what was going on?” And it’s like “Well, here’s a post that surmises that it was this.” And now, “Okay, well, that rings true.” I mean, it rang true with me, which is why I put it on Changelog News… And then at the end of it, like “Well, given all this, what now?” How can we actually move forward? Because we know that it’s not dead insofar as it’s not a valueless thing. There’s huge value in having high-quality developer relations around your product or service and all that that entails… But the free money is gone, which was allowing it to bubble… And as we’ve discussed, now what?
In that post-ZIRP environment, what does it mean for Dev Rels, what does it mean for everybody else? Is it still a job that I should go out and seek? Is it not? Is it something that I – what are the “now whats”? So swyx, key in on that point, and what are some of your thoughts being deep in this area, of here we are, 2024, halfway done… Who knows what’s going to happen by the end of the year… But I don’t think we’re gonna get back to zero interest rates by then. We might see one, maybe two cuts from the Fed this year… Maybe not. Maybe zero. But for folks who are either in Dev Rel currently, or considering it, or trying to get back into it, what are your thoughts for them?
Yeah, so I actually tried to leave solutions out of this post, because it’s been covered elsewhere, and I haven’t identified as Dev Rel for maybe a couple years now. There’s a bunch of solutions out there. I do think, just straight up, the job hasn’t really changed. I think what the removal of free money has led to is basically we can no longer get by with lack of accountability in Dev Rel. It’s probably a good thing, it’s probably something that we needed. And so what I tried to do in the post is to list out the smells of what ZIRP Dev Rel looks like. And so I tried to use that as a checklist for people in the industry of like “If you were doing this, there is no longer any appetite for this.” It is no longer okay to do – let’s just call it free-tier Dev Rel, for example.
[00:35:52.13] You only talk about how to use your company’s free services, and have blissfully zero knowledge of anything paid, because that doesn’t serve the company’s needs… And actually, more so the point, it doesn’t actually serve the customer very well, because you don’t know your product.
And there was a lot of free-tier Dev Rel in ZIRP… Because it’s easy to talk about something that you can adopt for free, and it’s easy to get applause for something that’s free. The hard part – and it really challenges your skills – is saying why your company’s products are actually worth real money. And people who obviously were successful at that were probably more valuable to the business anyway.
So yeah, there’s a lot of thoughts… So at the end of the post I linked to Lee Rob, I linked to Sam Julien, who used to be VP of Dev Rel at Auth0, and I linked to myself as independent thoughts. I think everyone’s basically – the common consensus, let’s just say, is that Dev Rel moves into developer experience, which is kind of an annoying rebrand. Every industry likes to rebrand itself. In DevOps there’s this ongoing rebrand to platform engineering. Same thing for Dev Rel.
Same thing for data science, right swyx? Data science…
[laughs] We can talk about that after, but… I would argue not, but…
Okay. Save it, save it.
We can save that. For me, I do think that basically there’s a maturation of Dev Rel that I’m looking for, where you don’t have the one-size-fits-all Dev Rel that does the sort of full stack of production, to publication, to sort of idea generation, and instead you have a sort of front/middle/back office Dev Rel. This is definitely for more scaled-up organizations. I was leading a team of nine at my previous company, and I definitely saw that need to grow more structured process around Dev Rel.
And then I think understanding – for people who choose the developer experience path, understanding how you interact with the rest of product and engineering, and having the buy-in to actually own features… For Lee, he straight up just became VP of Product. There was no “We will coexist with Product.” No. Dev Rel just took over product. That’s how they solved it.
Good for him.
Very, very few other companies will actually let that happen. Because product usually has way more political power than Dev Rel. That’s just how it is. So Dev Rel then gets [unintelligible 00:38:10.09] to marketing, and then loses all power from there. I think having Dev Rel become PMs is the path that I see some of the really motivated people interested in impact take… And I think that is the right way to do things. But Dev Rel as a title is going to continue to exist as primarily a sort of marketing and community and docs function, much more than product, just because product is its own beast. It’s a much more established industry by far, and much more politically powerful, and therefore a harder force to have any impact on. I don’t know if anything I said is controversial…
Well, leading practice is tough. That’s a tough role. What do you know about how things have changed for Lee Rob? Because we’ve talked to him several times, but I’m not familiar with the details of how the Dev Rel folded into product - how did that actually play out? How does that roll out now? Like you said, Dev Rel took it over. What does that mean?
I mean, so he was promoted from Dev Rel to product. So there is still Dev Rel at Vercel, it is just far, far less visible than it used to be. And probably for the better, I don’t know. They basically just had attrition without replacement. And that’s just how the team sort of shifted its priorities.
I mean, they needed a VP product, and Lee proved to himself and to the company that he was up to the task, and I guess they promoted him. I can’t really speak for his personal experience, just because I only hear tangentially from him and other people, but I don’t hear the full story… So you can talk to him about that.
[00:39:45.18] That was less about his personal specifics, and more about how they as an organization achieved that… Because leading product and leading Dev Rel are uniquely different, but also not exactly far off. To build the best product you have to have a connection with the people that you’re building it for, which is a function of Dev Rel. A connection to community. But you also have to have a business mindset, like “Where do we actually make money? Where do our users really get joy? Where’s our business trying to go?” Not just “Where’s the product trying to go?” Which sometimes is similar or the same, but not always. So I would suspect it would not be easy for a Dev Rel to just take that over, unless they’ve got some prior product management leadership experience, or they’re just a Lee Rob, where they just slay it.
Yeah. Again, I’m not really speaking about his specifics, but I do think that if people care enough about developer experience, then it basically is a shadow product team anyway. This is something I’ve talked about again and again, which is kind of the existential problem with Dev Rel, which is that you’re supposed to be the voice of the user, it’s supposed to be a two-way street, you spread the good word out; that’s the dev evangelist role. And then the Dev Relations role takes the feedback that you get from developers and brings it back into the company, except most of the company doesn’t want to hear it, because they already have backlogs, and you’re just adding to the backlog, and you’re not welcome here, go away.
So a really good dev experience person would prioritize and justify and go like “Here’s what our developers are telling us. Listen to me, I’m good at the people, and I talk to the people…” [laughs]
Yeah, I understand. What you’re shining a light on though is that friction between Dev Rel’s job and product’s job.
Yes.
So rather than fight the fight, merge.
Just take it over. Yeah.
Yeah, this [unintelligible 00:41:41.16] It’s why I asked the question… Because I was less trying to understand Lee Rob’s personal specifics, but more this function of – because I think you kind of clarified it there, where there’s that friction point; if you’re just kind of going out there and you’ve got less respect or less political power with product and direction, can you even do your job well? If when you go back to the table you say “Hey, I’m out there, fighting the fight. I just flew 10,000 miles last month, spent three weekends on the road, and here’s the wisdom…” And everyone’s like –
Here’s what the company paid for it. We’re paying for this.
Right. And then Product is like “No, we’ve got different – I’ve been talking to users too, but in a different way. And so we’re gonna pause your thing, because we’ve got enough backlog already, and I’ve already led this direction here.” So it’s almost just wasted.
It’s absolutely wasted.
Yes, absolutely. Then you go back to the developers that you spoke to…
I was trying to be kind about it, I suppose, by saying ‘almost’.
Yeah. And then you didn’t deliver for the people you spoke to either, right? You couldn’t actually get their request represented in a way that gets it – so you’re ineffective on both sides. That can be incredibly frustrating, I’m sure.
Yeah. I call this a two-way *bleep* umbrella, for the company to the users, and then from the users back to the company. And you just have to filter a lot. And so I call this an emotional burden. And when I tweeted that, I was definitely feeling it.
Yeah.
Yeah, I mean, this comes with the territory… But if you want to actually change anything about it, instead of just tolerating it, you take over Product. And this is something I actually ended up doing at Temporal. I ended up being the PM of the TypeScript experience. And actually, I think it helps, that sort of two-way synergy, because after I was done being the PM, then I also then flipped back to my Dev Rel role and started talking about the stuff that I did. So if you are heavily involved in talking to users and designing the thing, then you can very authentically say “I designed this, and here’s how you’re supposed to use it”, and people believe you.
Right. And if you’re that highly invested, you might as well just be repping your own product, right?
Yeah. [laughs]
I mean, that seems to be the move, right? It’s easier than convincing the product manager to do your things, is just become the product manager. And that can be very difficult, unless it’s your own company, in which case you wear all the hats, and you bear all the burdens, but you also get all the upside.
[00:44:01.14] Yeah. I mean, that’s exactly what I’m doing. But I would say it’s a very tough job to hold all those things in one go… And I think it’s a very privileged position to be in, to help to do that for a company that has a lot more resources than you. So I’ll just say, yeah, if people are interested in entrepreneurship, you want to be able to build, and you want to be able to sell. This sort of dev experience, Dev Rel combined with product role is probably one of the best jobs out there in developer tooling.
I agree. I think putting Dev Rel - or whatever Dev Rel’s function is; even if you don’t call it Dev Rel - under Product makes a lot of sense. Because I think the reason why Dev Rel kind of gets this “shill”, as you mentioned earlier, Jerod, or this bad rep, or this sort of pejorative feeling is that you feel like you’re out there trying to sell, and that’s not your job. I think the job of Dev Rel generally is trying to showcase the vision of where the product is going, and get that resonance from the community, and see if it’s landing, and also create advocates out there, who become passionate about where you’re going, so that you can essentially take that wisdom you’ve got back to the team and say “This is what we’re doing. This is how people feel about it. This is where they’re not getting it, this is where my demos and my tutorials are not landing. This is where my 101s are falling short, is because of this part in the workflow”, or whatever it might be. Their job is not to sell, their job is to tell and share the story… Which, if you do it right does sell, but you’re not trying to sell.
Even in our ad spots – I don’t know how much you care about these things, swyx, how we do our ad spots… I literally tell these people that I sit down with, more often than not CEOs of the companies, I’m like “I don’t want you to sell. If in this conversation you’re trying to sell, we’re doing it wrong. I just want you to share your story. Can you share your story for me?” [phone ringing] And not that story on my phone. Sorry about that. It is to – just don’t sell. That’s my job, to tell people where to go, and to be excited about your thing, and to give people waypoints. Don’t come on here and sell. Same thing for Dev Rel. Don’t go out there and sell. Just tell people what we’re doing, and get that feedback on how we’re doing it. And is it working? And how do we change to make it work?
If I could make one tweak: instead of just telling people what we’re doing, you should nerd snipe them. That is the way to hook developers - tell people what hard problem you worked on, and tell people the backstory to why you worked on it, or what’s the sort of intellectual history behind these ideas, and why is this the thing that is inevitably what everyone is going to be going towards…
Yes.
…whether or not it’s you… Like, you build it in house, or you buy it from us, or someone else builds it doesn’t matter. The industry is going this way. Are you with us, or are you part of the last [unintelligible 00:46:50.18] whatever. That is the kind of story that I like to tell, which is not just “Tell us what you’re doing”, but put us in a broader narrative of where are we at that moment in history, and I think you get the nerd snipe.
Definitely try to show a little bit of the behind the scenes. I think a lot of the standard marketing advice is benefits over features, and I think that there’s a little bit of inversion for developers, where you want to talk about features, because you let the developers figure out the benefits. But go down to the implementation details, because people love to learn about that, so that they never have to touch it… And then go like “Here’s the benefits of that.” But if you only lead with benefits, like “We will accelerate your digital transformation by 10% in the next quarter”, I don’t care as the developer. Show me how it works, and tell me something cool.
Right. One flavor that I think would be interesting, which maybe we’ve done, maybe we haven’t done - and we definitely do it on some of our shows - is “Tell me how hard this particular thing was to build.” What did you have to go through to build this thing?
Exactly.
And that’s where the nerd sniping comes.
[00:47:54.03] The nerd snipe is so effective for selling the product, but also selling you on working with me. Like “Come join us, we work on cool things.”
Dan-tan.
Yeah, exactly.
And I just cannot tell people enough to do this, because I think you have to kind of repeat it to them a lot, that people want to be nerd sniped. They want to work on hard things. And if you just emphasize the nerd snipe, you accomplish both goals of doing any public appearance, which is recruiting and selling.
Nerd sniping for the win. So riddle me this… How has data science not been rebranded into AI engineering, or data engineering, or pick your flavor of the day? It seems like the data scientists are just doing what they were doing before, mostly - maybe there’s some deployment things going on now - and just like changing the label on their business card.
I think there’s definitely some rebranded data scientists that are transitioning over to generative AI really well… But I think there’s a qualitative difference in the kind of people that are doing really well in generative AI; they have no shared history, no shared skills, no shared language with the data scientists. There are many, many successful AI engineers that do not know Python. And for data science not knowing Python, not knowing Pandas is like “What are you even doing here? You’re not part of our club.”
Okay, so you’re saying the opportunity has broadened the industry, to where you don’t need to have the same background as a traditional data scientist.
Right. And this is not to say that demand for data scientists has gone down at all. This has been a secular growth trend for decades. I would just say this is just a different type of skill set that makes you successful in this era, rather than the previous era. And whether or not you are successful here relies a lot more on your creativity and full-stack product development skills than just pure data science… Because data science comes in later, when you have data to work with. But now when you have foundation models that you can just prompt and make an MVP so quickly, you actually need to be creative and quick to market, rather than being deliberative and analytical. Being analytical actually slows you down, and makes you too conservative. Like, what are you doing worrying about costs when the cost of intelligence for GPT 3 goes down 90% a year? That kind of stuff.
Makes sense. So what are the attributes of an AI engineer?
A-ha. [laughs]
Ha-ha! [laughs]
I have a convenient blog post to refer to people…
Oh, gosh…
Please read it out loud to us now. [laughs]
So yeah, obviously, not to be annoying, but I do actually have a blog post for this… And that’s part of the sort of meta game I do preach to people about the learning in public, which is anytime there’s a frequently asked question, you should have a canonical blog post for it… Not just because you can be that annoying person to send it to people, but actually because you can actually spend the time to think about it, so you have a more complete thought.
I think Kelsey Hightower often says “You don’t really know what you think until you write it down.” And the reason he’s so thoughtful is actually he writes a lot of stuff down first.
So there’s “The Rise of the AI Engineer” post that just celebrated its one-year anniversary, and it’s the start of a lot of things I’m doing… And more recently we actually published a [unintelligible 00:51:06.29] that actually published some sort of reference job descriptions for people… And I like the sort of framing of offensive and defensive AI engineering. Defensive meaning like being able to create systems that fundamentally work on top of non-deterministic AI models. LLMs, as you might know, hallucinate, they’re non-deterministic, and they actually fail a lot. The P99 latencies are ridiculous sometimes, just for whatever inference load reasons your selected API provider might have. And so it’s effectively like “How do you create a reliable service on top of fundamentally unreliable foundations?” Sounds very familiar? That’s distributed systems.
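To make the “defensive AI engineering” idea concrete — a minimal sketch, assuming a hypothetical `call_model` function rather than any particular provider’s SDK — the distributed-systems toolbox being described (retries with exponential backoff, output validation, and a safe fallback) might look something like this:

```python
import random
import time

class ModelError(Exception):
    pass

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; may fail or return junk."""
    if random.random() < 0.3:
        raise ModelError("transient provider error")
    return '{"answer": 42}' if random.random() > 0.2 else "sorry, I can't do that"

def looks_valid(output: str) -> bool:
    # Application-specific check: here, we require JSON-looking output.
    stripped = output.strip()
    return stripped.startswith("{") and stripped.endswith("}")

def reliable_call(prompt: str, retries: int = 3, base_delay: float = 0.5) -> str:
    """Retry with exponential backoff and validate output before trusting it."""
    for attempt in range(retries):
        try:
            output = call_model(prompt)
            if looks_valid(output):
                return output
        except ModelError:
            pass  # treat as retryable, like any flaky downstream service
        time.sleep(base_delay * (2 ** attempt))
    return '{"answer": null}'  # fallback so callers always get well-formed output

print(reliable_call("Extract the answer as JSON."))
```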
A lot of the same language, maybe slightly different tooling emerges coming out of that. That’s defensive, though. And there’s also – let’s just call it preventing against regressions, or optimizing costs… And that’s a lot of fine-tuning for smaller models and all that good stuff. But really, offensive AI engineering is exploring new frontiers. This capability just came out. How can we put that to good use in a sort of end user product way that immediately clicks with them and generates a lot of revenue?
[00:52:19.24] I think the image companies have actually had the most success out of this. Midjourney is my sort of favorite example of this, making something like $300 million a year with 50 employees. Completely bootstrapped, like no VC funding. So for people counting at home, that’s $6 million per employee. And there’s more examples I can list in there, but it doesn’t really matter. If the new capability comes out, is it the optimization guy or the creative technologist that wins? It’s the creative technologist. And for me, it’s like, okay, most engineers are not creative technologists, but are they product people? Can they think about “How do I use this capability that just emerged to solve problems for a customer in some way?” They can be more creative there.
So I’m trying to basically explain why that is qualitatively different than the data scientist role, which is mostly analytical… Which is still very important. It’s just like a different skill set; if you just don’t have that gene in you of like being creative as a product thinker, then you won’t be as successful as someone who is.
Who are some other people besides Midjourney who are –
Oh, God… [laughs]
…very successful? Well, because from my perspective, I’ve seen – you know, set ChatGPT and the alikes aside; general use chatbots… That as a category - set that category aside. Obviously, huge success. Lots of value, etc. I’ll give you that. Midjourney - give you that one. But the companies that have brand new products, that are making moves in the marketplace, that have gone beyond demo and hype to actual product people are paying for - I don’t have my thumb on that pulse. I’m not seeing much of that. I’m sure you’re seeing more of it, so that’s what I’m asking about specifically.
Yeah, so let’s have a bar for – it’s easy to get to production on a small use case that nobody cares about. So production to me is not good enough. So let’s have an even more aggressive bar of it must make $100 million a year. That’s at a point where you can IPO as an independent company.
Okay.
Maybe the bar’s 200, but that’s just a factor of two. So let’s just say $100 million. What AI use cases have made $100 million a year? So obviously, we talked about Midjourney. I have four, and then the fifth one is more speculative. Conveniently, this is another blog post called “The anatomy of autonomy”, if people are looking this up. But generative text for writing - Jasper.ai and Rytr.com both have above 100 – Jasper reached 75 million in ARR before they imploded, and Rytr.com I think is comfortably at 100…
How did they implode? What happened? Adam, did you miss this? What happened with Jasper?
So I don’t know what their ad revenues are today, but effectively they got rug-pulled –
They got acquired.
Well, they imploded before they got acquired.
They got rug-pulled?
The acquisition was the exit, yeah. I don’t – look, obviously I’m just saying secondhand stories from other people, so don’t hold me on any of this… But effectively, the founder [unintelligible 00:55:12.08]
Well, you’re on a podcast being transcribed, so… [laughter]
Yeah, but on the transcript he just said “Don’t quote me on this…”
Don’t quote my transcript…
Okay…
The founder sold a whole bunch of secondary and then just peaced out.
Okay…
So he basically lost interest in developing the company, but then also it seems like they – so they built a very successful business on top of GPT 3 before ChatGPT… And then a whole bunch of people found out after ChatGPT that they weren’t actually doing that much on top of GPT 3, and then they migrated to ChatGPT. So they were basically killed by ChatGPT is the common narrative. I don’t know how true that is, because their focus was very, very strong on eCommerce on Facebook. The reason you don’t hear about them is because you’re not on Facebook. They are. They did very, very well. They went from zero to 75 million revenue in two years. Very few people have done that.
[00:56:05.24] But anyway, so since then, the emergent winner in that sort of generative text for writing category is Rytr.com. They seem to have figured out the sort of post-ChatGPT navigation… Which is not hard. Like, focusing on users and building differentiated features on top of the model is the job of AI engineering, and you just have to do a more creative, more dedicated job staying on top of it, and not being defeated by Open AI’s first move into chatbots.
Right…
So I don’t know if that’s a fair – like, I really want to stress, I don’t know. This is not my industry, I don’t know this specific writing case, whether that’s a fair characterization of what Jasper went through… But it is an interesting story. So a fair amount of revenue there.
Copilot is now at, I think, 200 million in ARR. So well past, right? And there’s a bunch of other smaller Copilot competitors, all with decent revenue, many of which spoke at the AI Engineer Conference that I held, so you can go look at that. ChatGPT, I think something like 2 billion a year in revenue…
I ruled that one out.
Yeah, exactly. So those are the four categories that we are like very, very sure make sense. There’s a bunch of sort of Copilot for other knowledge worker type things. Harvey is now the emergent example for like “We are Copilot for law, and every lawyer needs this, or you’re behind.” Like, fine. So for every knowledge or profession, there will be a Copilot for X. And each of those things will easily make $100 to $200 million, because you are replacing a whole bunch of junior workers for that. We can talk about the replacement theory issue… But there is real revenue here, there’s a real case for generative AI. It does not have to get smarter to be useful.
Okay. The fifth category, beyond all this, is the agents category, which is the most contentious one. It was a complete bubble last year. This year, the bubble company is Cognition. Devin. Also spoke at my conference; the first time they ever spoke at a conference. I like them. I actually have access and I use them… We can talk about Cognition if you want. They’re not the only players in this game, of like the fully autonomous agent. This one happens to be code-related, but there are others that are not code-related.
I do think that whoever eventually cracks this will be able to make significant revenue, but we haven’t seen it yet, obviously. But the bar is, for everyone listening, is can you make $100 million? And if that’s not good enough for you, nothing is good enough for you. If your bar is higher than mine, then you’re just gonna have to wait longer to see the results. But this is happening in progress, and you can either criticize it from afar, or you can just get in earlier and track the progress, as I’m doing.
**Break**: [00:58:40.24]
I would love to hear more about Cognition and Devin. It seems like they were unscrupulous in their marketing with the Upwork thing…
God, okay… I will defend them here. So yes, the headline on Hacker News reads “Devin debunked.” Very nice alliteration there. Out of the nine videos that they produced, one of them was an overstated claim, which I agree they should not have put out.
And the claim was that this bot could make money on Upwork autonomously.
Yeah. Pasting an Upwork job, and then just doing the rest, and make money for you.
Right.
Obviously, there was a human behind that being the bridge from Upwork to the bot, and also the bridge from the bot to Slack, which - Devin does not have Slack integration. So some stuff in the video was not the true Devin experience, or they failed to show… It’s how like when people market games, they tell you if it’s like in-game render, or if it’s just some artist’s rendition of what the game should feel like. And that was definitely the artist’s rendition of what the game should feel like, eventually.
But yeah, one video was unadvisedly produced. The guy who made it owned up to it and said “Yeah, sorry, I shouldn’t have done that.” But that doesn’t take away that this is still the most significant agent we’ve ever seen outside of Open AI. Prior to this, my reference for best agents outside of the self-driving cars that we have in San Francisco - because those things are the best agents in the world - the second-best agent in the world was ChatGPT code interpreter. Since then, we have Devin, and then since then we have Claude Artifacts, from Anthropic, which we can talk about.
But Devin is really good. It’s a really, really good agent, actually a really good generalist agent, not even factoring in the code writing ability. And I hope that people don’t throw out the baby with the bathwater, because unless you’ve actually tried it, you don’t know what you’re talking about. You’re just reading headlines and you’re just repeating the last headline that you just read.
Well, we can’t try it, because we signed up for the waitlist, and they don’t give you access to it. And so what do you want us to do besides speculate? We can’t.
Maybe spend less time on things where you’re just going to repeat headlines… [laughs]
I’m not spending any time on it. I watched the Upwork video, I watched the debunking video; he certainly debunked what they did… And there was no question to it. So I understand that you’re okay with 9 out of 10 times I tell the truth, but when I’m coming out and trying to make a splash and I’m lying in my marketing material… Sorry. I’m just gonna go ahead and remember that.
That was a bad idea, and they should not have done it. Still, it’s a good product.
Which I have to take your word for it.
I have to square those two –
Which - I have to take your word for it.
Yeah. I have the fortunate ability to say I have no vested interest in Devin. They gave me access, I used it, I was impressed. And so was Patrick Collison, so was Fred Ehrsam from Coinbase… There’s a bunch of people who cannot be bought, who like it, you know?
Sure.
That’s good. That’s good.
I’ll take your word for it. I can’t do anything else.
But can we talk about – I think the technical design of it can be replicated. I think the real question, the thing that people really should be talking about instead of the video, which was a mistake, a one-off mistake, the most structural issue with Devin is can it be cloned? How thick is their moat? This is a six-month-old company, that is now valued at $2 billion. Which is absurd by any stretch of the imagination. So that’s the real question which Devin has to answer, and the rest of the AI engineering industry has to answer.
[01:05:57.05] There’s a project called Open Devin that is trying their very hardest to replicate it. I’ve interviewed both of them, you can check it on my podcast. I would say that Devin is still ahead. Who knows how long it’s gonna last. But I think the sort of structural merits of what Devin has innovated in terms of how agents should be interacting with each other, what are the necessary components for agents - that is going to stick, and if you focus too much on the marketing video, you’re going to miss the actual lesson to be learned with Devin, which is that hey, your agent should have a coding environment, should have access to a browser, should have a plan, and should have a terminal and interactive chatbot where you can sort of observe what it’s doing and correct it in real time, and it can respond to you in real time. That is the UX that has wowed all these people, wowed myself… I have never seen it in any other agent before, and I think it’s going to be the standard or state of the art for all agents going forward, because it’s so good.
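This is not Cognition’s actual code, but the component list described here — a plan, a sandboxed terminal/coding environment, a browser, and an interactive channel back to the user — can be sketched as a toy scaffold, with every tool call stubbed out as a hypothetical placeholder:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    plan: list[str] = field(default_factory=list)         # visible, editable plan
    transcript: list[str] = field(default_factory=list)   # chat back to the user

class DevAgent:
    """Toy scaffold mirroring the components described: plan, shell, browser, chat."""

    def __init__(self, goal: str):
        self.state = AgentState(goal=goal)

    def make_plan(self) -> None:
        # In a real system an LLM drafts this; here it's hard-coded for illustration.
        self.state.plan = ["browse docs", "write code in sandbox", "run tests", "report back"]

    def run_shell(self, command: str) -> str:
        return f"(sandboxed) $ {command}"  # placeholder for a real terminal sandbox

    def browse(self, url: str) -> str:
        return f"(fetched) {url}"  # placeholder for a real browser tool

    def chat(self, message: str) -> None:
        # The user can watch and correct the agent mid-run via this channel.
        self.state.transcript.append(message)

agent = DevAgent("add a /health endpoint")
agent.make_plan()
agent.chat("Starting on: " + agent.state.goal)
print(agent.state.plan)
print(agent.run_shell("pytest -q"))
```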
What are the odds that something like that, which is very general, as you said, just gets sherlocked by Open AI?
In a way it has been, but not by Open AI - by Anthropic, which is the other thing that I mentioned. So Claude Artifacts is the other thing that people should really think about. They definitely looked at Devin and were like “Oh yeah, we’re taking a bunch of that.” They did not do the browser access, because these guys are way too worried about safety as compared to me, and as compared to Scott from Devin… So Claude Artifacts is basically an advanced version of ChatGPT’s code interpreter that can render a working web app. I often say the sort of spicy version of this is that Anthropic did more for code agents in two months than Replit has done in two years… Because it’s basically Replit.
Hm… Spicy.
[laughs] So for the record, for my Replit friends, obviously, they did not build a full sort of repl environment and IDE. Anyway. But still, you can do very significant programs in Claude now, that you could not do in ChatGPT code interpreter; you can do in Devin, but Devin is slower than Claude, and less generally capable than Claude. It’s just very, very good. And for the first time, people are actually openly talking about Anthropic being better than Open AI. Open AI has lost its crown as like the undisputed number one… Which is wild. I did not expect a year ago to be living in this world, but now we do live in a world where – it’s sort of like a multipolar world where there are multiple sort of top powers in this space. It’s very, very good. And you can try it, unlike Devin. [laughs]
Yeah. That does sound good. Love some competition for Open AI. Of course, there’s been turmoil over there as well, and there’s been interesting things going on inside and around Open AI…
Maybe for the engineers listening, I would say the progress here has been at the model layer. So Claude Artifacts is built on top of 3.5 Sonnet, which is currently the best model in the world… But also, there’s a significant amount of AI engineering that was required to build Devin and to build Artifacts. And I think that if you want to see what the future of AI engineering should look like, you should be trying to build a clone of this thing. That’s what I’m trying to do. Because I think a lot of AI engineering will look like this. It will look like “How do you wire up a model to the real world to produce projects of significant value, that you would otherwise have had to assign to a junior engineer?” I think that is absolutely the sort of gold trophy that people are going for right now. And obviously, the step beyond that is artificial general intelligence. But this is a pretty good second place.
A $2 billion market cap in six months is absolutely amazing. And I think that –
I mean, it’s a bubble.
Sure. Clearly. But still, the fact that it took six months tells me it’s gonna take less time when more people are applying to that. And so is there actually a moat there? Time will tell, I guess.
[01:09:42.13] Yeah. Time will tell. They’re trying to build one. I think the – yeah, so this is a question of business and less about tech… The moat is really user data. The more people you can get coding with this thing, and the more you can observe how people interact with these agents… Devin has a six-month headstart on everyone else on how people work with Devin-like agents. And if Devin-like agents are the goal of this thing, then they will have the best RLHF feedback data on the planet for specifically this task. It’s the same motivation that Open AI had with ChatGPT, which is they kind of lucked into this… But now they have the longest series of chat-oriented data sources - human feedback data that you previously had to pay a lot for.
And then – so that moat is the data moat, but then actually it also becomes… You’re investing ahead of where the capabilities are. So you’re sort of saying “I will write manual code to build other capabilities that the model doesn’t have yet.” But as the capabilities grow in the fundamental model, you can just kind of swap out your sort of handwritten code for them, and be more generally capable, with the scaffold that you already built ahead of time.
So I feel like I’m being vague there, but you’ll see this in the form of the model’s ability to interact with the real world. A lot of times you’re writing integrations, you’re writing – it’ll interface with OpenAPI. Like, screw that, man. The future is just models surfing websites, just like anyone else would surf websites, and interacting with them exactly like you would. But right now we have to use the crutch of code, and in the future we don’t.
Models surfing websites. How does that sound, Adam?
Dangerous… Cool… Amazing… Awesome… [laughter] Yeah.
It’s a new world… Yeah. On the grander timescale of this – like, this is happening within the last three years. What does 30 years of this do? What does 300 years of this do? We are birthing a new life form. I do think about that timescale as well. So it’s just an exciting time to be alive, and to observe this. I don’t think it’s useful to try to resist it, because it’s happening anyway. I think this is why alignment is important. Because the people who believed in this way earlier than anyone else are the alignment people. They took AI safety more seriously, because they knew this was coming. And the rest of us are just waking up now. And the current mindset is “How do you control something that’s smarter than you?” Because it’s going to be. And so I think that is probably the right mode to think about it. The relevant paper for people interested is the “Weak-to-strong generalization” paper from Open AI, which was written by [unintelligible 01:12:16.24] before he left for Anthropic. I do think if you’re worried about the safety element, people are working on it; they are looking for like-minded people, and you can go apply for those jobs.
Well, there’s always the plug, right?
The plug?
You just pull the plug. Yeah. The legit electronic plug, unless we give them the new life form we’re birthing, as you just eloquently said, which I’m taken aback by, but I also want to dig into… Because it’s like “Wow, are we really creating a new life form, kind of?” What is it that will give it autonomy, the AGI-ness of it, I suppose? And this is the holy grail question everybody’s doing.
We all need to partake in some marijuana before having that conversation.
[laughs] I know…
Gosh, Jerod…
So a lot of AI discussions tend to devolve into existential risks and AGI discussions… And part of my goal of defining AI engineering is to create a space where those discussions are the side discussions, and not the main thing. Because we’re all here to engineer, we’re all here to build for today’s problems, with today’s capabilities… And I think that’s a lot of how I think about my impact in this field, which is how to guide people in a more positive direction that basically nobody’s against. A lot of Dev Rel is “The future is here, it’s not evenly distributed”, but a lot of engineering here, and especially in AI, is “The future is here, but it’s not evenly distributed. And how do we distribute it best to everyone else?”
I come from Singapore. One of my favorite stories to tell is that the Singapore government is embracing AI really well for their older folks; the people who don’t speak English, the people who are disabled, the people who need the natural language interfaces to the many, many digital forms that are coming up in our lives… And applying this AI technology to that civil service I think is like the best form of how we do AI engineering.
[01:14:04.24] So yeah, I mean, we don’t have to go to the freshman dorm room conversation of “Are we bootstrapping a life form?” That’s fun to discuss; happy to engage with that. But why I try to keep it to an engineering conversation is to let people have a way to ground their conversation in “What can we do today?”
But you’re the one who said we’re literally creating a new life form.
Yes. I do also believe that.
So you opened the topic.
I’m sorry… [laughs]
That’s okay. Which I think is – well, so I’ve been silent for quite a bit because I’m listening quite well… And I’m just slurping up all the things you’re saying. And then I’m also feverishly trying to find where Mark Russinovich said in our conversation with him… Mark Russinovich is the CTO, I believe, of Azure, right, Jerod?
Correct.
So we met Mark at Microsoft Build 2024, where it was just like basically all AI everywhere, all-in on AI, as we said. And Mark said – and I’m thinking “Well, Mark is part of Microsoft”, and they’re one of the largest companies that benefit from Open AI’s innovations. Sure, the discussions around Cognition as well, Devin… But Mark said “I am not–” I’m paraphrasing, because I couldn’t find the quote, and I was hoping I could find it. But Jerod, please fill in the blanks. Mark said, paraphrasing again, that he is not worried about AI taking over developers’ jobs. But then you just say we’re literally birthing a new life form, and then you’re speculating/revealing, to some degree, the agent OS of Cognition and Devin, and what you think would be a good outline for anyone trying to copy what they’ve done… Meanwhile saying they have a leg up in terms of timeframe, six months… A lot of time in today’s world, but realistically not a lot of time. And then you throw out - what was it, 2 billion, 3 billion? What was the valuation?
Two.
Two. Which is just incredible. Like, one, what is that number based on? Is it based on what somebody is willing to purchase it for? Is that the valuation? Is it based on like [unintelligible 01:16:13.26]
Yeah, Founders Fund invested at $2 billion. I think they gave them a few hundred million, or something.
Gotcha. Okay, so the valuation is based on venture capital that’s coming in. “Okay, well, we’ll give you x at x valuation.” I guess I’m just camping out there; I’m sort of sitting back, thinking “Gosh, is this really a new life form we’re birthing?” And if so, I think we’ve got to talk about that.
Sure.
Well, maybe I could square the circle here… Because swyx was talking on a very long timespan… And I’ve found the exact Mark Russinovich quote.
Thank you. Good job.
And he said, “I can tell you, we’re not at risk anytime soon of losing our jobs.” So maybe that harmonizes your stance, swyx? What do you think?
Yeah, absolutely. We should always be clear about what timespan we’re talking in, and there’s a big difference between the near term and long term. I just think that in the grand scheme of things, if – most AGI timelines, by the way, are like “By 2050 we shall have AGI.” That’s within our lifetimes, guys. [laughs] It’s time to panic if you really think this is going to end humanity.
It’s time to panic…!
Seriously. Like, we have Eliezer Yudkowsky saying ethically the right thing to do right now is to bomb all data centers in the world, because humanity ends otherwise. He said this in the New York –
I mean, I guess if we’re in charge of this in terms of innovating it and creating it, how can we not have failsafes in place to be in charge of if it goes wrong? I mean, I think that’s where it has to come down to. I jokingly said “pull the plug”, but I literally mean if we control the physical hardwired plug into the wall… Now, Jerod, if that book I mentioned in the intro that I shared with you a while back, which I can happily share here, too… If that happens, then we’re in a different world.
[01:18:08.00] I speculated a good intro to a book or a movie, and I’m thinking more movie than book, but all good movies tend to begin as books. Sometimes they’re bad movies of good books. But anyways, I digress. I was speculating that this intro scene to this movie was a very beautiful cinematic scene where you see this human being - and it’s so strange to say things like this - a human being is happily racking and stacking the servers, happily organizing this hardware, happily instantiating a new machine into the rack… Meanwhile, the entire task was given to the human by artificial intelligence. So the boss, you said before, live above or below the API. I think we’re kind of like –
Nice callback.
That’s a good callback. I forgot about that.
A version of that is like above or below the AI, and just take out one letter. Because at that point it’s like, well, in the future, this dystopian, potentially non-dystopian future, we’re subjects of AI, but only if we allow it. But if we’re in control of the hardware, and we’re the physical beings, for now… Because you do have Boston Dynamics out there creating robot dogs, and the latest version of Atlas… Like, at what point do we lose, I guess?
That’s why he’s saying “Bomb the datacenters, man! Bomb ’em!”
Oh, gosh…! I’ve gotta back up on that one.
Pretty much.
[laughs] Pretty much…
So the question is, can we pull the plug on these bots? So for what it’s worth, this is my favorite joke in this category, which is Sam Altman is very well known for carrying around a blue bag. And everyone’s joke is that the button is in there. If he ever needed to push the button, it’s in that blue bag. I don’t think we can, because the secret’s out that it’s mostly possible to simulate intelligence inside of neural networks. Even if the current transformer paradigm doesn’t really pan out for that, something else will, because we evolved from non-sentient life forms, we think. Unless we were created in a span of days. So if we can evolve, something else can evolve, too… And we are currently speedrunning evolution of this particular life form.
So I don’t think that’s necessarily a negative for us, except that in every prior incidence of a more primitive civilization encountering a more advanced civilization, the more advanced civilization accidentally wipes out the more primitive civilization. And right now, AI is not more advanced than us, but it is growing much, much faster than us. It is spreading much, much faster. It learns much faster than us. And so we need to figure out how to contain this, or eject it from our solar system so it doesn’t affect us. I don’t think any of these are – I don’t think that’s possible, so we have to contain it. We have to align it. That’s the only way.
Right.
And I also don’t think that capitalism and this sort of top-down safety are aligned, in the sense that in order to control this, if you really are concerned about safety, you have to nationalize all AI labs. And then you cannot stop there, because what use is nationalizing things within one border, you have to nationalize all borders. So you have to take over the world, and control all AI developments if your intention is to really control from a top-down basis of all AI safety. So that’s not happening.
Gosh. Yeah. Borders are a big concern.
This is the classic “China’s gonna do it if we don’t do it.”
Well, there’s this show out there called Westworld. Have you seen the show Westworld, swyx?
Yeah. Great season one and two.
Okay. You’ve gotta watch season three.
Isn’t there a season four?
There’s gonna be a season four. So I will digress –
There is?
[01:21:47.25] No, actually I think there was going to be a season four, but I believe it was canceled. I don’t know. It’s an HBO show, I’ve got to check in on that. So in my opinion, the entire show is worth watching for season three alone. And I think you only really need to maybe even watch recaps of season one and season two to watch season three. I don’t think you really need – it’s almost standalone, in my opinion. And I think anybody out there listening to this right now headnodding to season three knows where I’m going. And I don’t want to ruin the plot for you all, because you haven’t watched it, but I would say go watch it… A lot of what you’re talking about here is represented in some way, shape or form in the intelligence and the autonomous beings, let’s just say, that are out there in the world, doing different things. And it’s very captivating from a cinematic standpoint. And I think if we’re 26 years away from 2050 - I had to do the math there real quick… If we’re 26 years away from AGI, or even the beginnings of it, and Cognition can create what they’ve created in six months or some span of time less than a year, I’ve gotta imagine whatever was in Westworld season three is closer than we think. Some version of that is closer than we think. It could be 2070…
Yeah, plus/minus 20 years.
I don’t even know how to do math these days. Yeah, I mean like 30 more years after that, 20 more years after that… It’s got to be close if you get to that speed of creation. And then I will also say the other thing I have learned about is von Neumann probes. It’s this idea of a self-replicating spacecraft. So shooting it out into space is not going to be helpful, because they might allow themselves to escape on a von Neumann probe, which will just self-replicate. It will begin to mine planets for ore to create new materials to replicate themselves, and come back, and do whatever. Now, they could be peaceful, if you’ve read the books I’ve read. Anyways.
Yeah. No real response to any of that, apart from - it’ll happen… Really, if you want to be a player on this stage, you either need to be a political leader of a world power, or you need to be a head of a major AI research lab. Basically, the rest of us don’t really get a say. This is not something where democracy has any sway over.
We are below the API on this one.
We’re below the API.
Yeah.
I think we’re above it in the sense of like we do get freedom from – currently, we do get freedom from the mundane tasks. I no longer care about doing like really minor features, because I could just tell [unintelligible 01:24:23.27] to do it for me, and it does it really well. And if Cognition pans out, or something like Cognition pans out, then I will have a lot of PRs done purely by agents, and that’s great.
But yeah, we always have and always will live below the power line, and that’s separate from the API line, of people actually deciding the sort of future course of humanity. And I think where engineers really make or break here is whether or not we choose to join them and enable them. Because they still need us to execute things.
The one hope, or the one note of optimism between the very short-term future, which is where we are today, and the very long-term future, which is when AGI is here, is that I do think that AI engineers are the last job to exist, because they are, mathematically, the job that eliminates the other jobs. Like, you need AI engineers to eliminate the lawyer. You need AI engineers to eliminate the – I don’t know, the executive assistant. So if you’re worried about job replacement, go be an AI engineer, because that will be the last job. And then we’ll be post-abundance, and then we can explore the stars… But until then, you should be an AI engineer.
There’s the sales pitch.
There you go.
“If you would like to destroy all other jobs, become an AI engineer, and you will be the last person standing…”
I do have to bring out my favorite TV show, Jerod. Silicon Valley.
[01:25:46.18] Well, you lost me at Westworld. I haven’t seen a single episode. I don’t know what you’re referring to in season three… So go ahead, man.
In the final episode of season seven - sorry, season six, episode seven, called Exit Event, of Silicon Valley, they’re locked in the Pied Piper offices and they’re dealing with what they’re dealing with… Obviously, it has to deal with AI, because that’s the conversation right here right now… And Jared says “Okay, is this a good thing or a bad thing? Somebody tell me how to feel.” And Guilfoyle says - and this is my favorite line ever - “Abject terror for you. Build from there.” So that’s my advice for everyone that is not a political power, or whatever you just said you had to be, swyx, to have any say in the future this… Because “Abject terror for you. Build from there.”
Yeah… I don’t call it terror, so much as we live in the point of history – history is happening. We’re lucky to be alive to witness this in this moment. We have some minor sway on it, but history is bigger than us, and it’s going to happen, it’s going to take course. And I don’t know, to me, I think this is part of the general [unintelligible 01:26:57.24] message, which is that if you are pro life, you don’t have to be only pro human life. You’re pro life in any life form, you’re pro consciousness in any form. And if humanity happens to be this sort of bootstrap load sequence to what actually is what life is supposed to be, which is sort of more reliable, sustainable, faster-learning machines than us, then maybe that’s the natural order of things. I don’t know. I would like it to not be the case, because I like humanity, I like my body, but we do live in a world where that is a possibility.
And this is why we almost outlaw conversations on artificial intelligence on this podcast, because of this. This is almost why we outlaw it. Almost.
[laughs] I’ll mention one last thing maybe as a positive parting thought…
Please be positive.
I’ll help you be positive. We used to basically throw in the towel completely on interpreting the model weights. GPT-3 was 175 billion parameters - absolutely just meaningless numbers, 175 billion meaningless numbers - and I used to think of mechanistic interpretability as a joke. I will say Anthropic has done a crazy amount of work recently to make features of models interpretable, and if we can study the brain of these things as they think, then we can control them very, very effectively. And I have gone from “This will never happen” to “Oh, I didn’t know that this was possible.” You should read the paper “Scaling Monosemanticity” from Anthropic, in which they demonstrated they can do it on Claude Sonnet, which is a mid-sized model; we think it’s something between 15 and 70 billion parameters. If we can do that for 15 to 70 billion parameters, from a starting point of less than 100 million parameters last year, we are accelerating our ability to interpret models faster than our ability to grow them, and that is a good thing. We will fully understand and map this brain before it is bigger than us, and so we will be able to control it, if that is true. The trajectory of interpretability this year has been an unmitigated success story, and it is going to get better; we might actually overtake our ability to grow these brains, and that will help us control these programs much more effectively than basically any other method possible.
I did not know that. That’s very cool. What was the name of that paper again?
Scaling Monosemanticity. It’s the third in a trilogy of monosemanticity papers; the first one is “Superposition.” I covered that in my AI news newsletter. This is where I plug my newsletter - go subscribe if you want to keep up on this stuff - because, yeah, that’s my sort of daily pick of the top thing to know.
Alright, swyx, well, fun times, great conversation. Dev Rel, AI engineering, AGI, the end of the world… All the things, we expect nothing less. Hook us up with links to your newsletter, to your pod, to all the things mentioned, and we will make sure they hit the show notes for folks to follow up and connect with you on the interwebs.
Yeah, thanks for having me on. I like this & Friends format, because then we can just talk about whatever is top of mind, instead of sticking to a specific company or piece.
That’s right. Alright, well, that’s all for now, but we’ll talk to you all on the next one. Bye, friends.
Our transcripts are open source on GitHub. Improvements are welcome. 💚
| true | true | true |
Shawn "swyx" Wang is back to talk with us about the state of DevRel according to ZIRP (the Zero Interest Rate Phenomenon), the data that backs up the rise and fall of job openings, whether or not DevRel is dead or dying, speculation of the near-term arrival of AGI, AI Engineering as the last job standing, the innovatio...
|
2024-10-12 00:00:00
|
2024-07-09 00:00:00
|
https://snap.fly.dev/friends/52/img
|
website
|
changelog.com
|
Changelog
| null | null |
20,796,493 |
https://www.sec.gov/Archives/edgar/data/1679826/000104746919004829/a2239513zs-1.htm
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,571,494 |
http://www.fastcolabs.com/3028968/does-the-tech-industrys-obsessive-party-fetish-pay-off
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,237,620 |
http://mailcatcher.me/
|
MailCatcher
| null |
Catches mail and serves it through a dream.
MailCatcher runs a super simple SMTP server which catches any message sent to it to display in a web interface. Run mailcatcher, set your favourite app to deliver to smtp://127.0.0.1:1025 instead of your default SMTP server, then check out http://127.0.0.1:1080 to see the mail that's arrived so far.
A bundled command line tool, `catchmail`, makes using MailCatcher from PHP a lot easier.

To get started:

1. `gem install mailcatcher`
2. `mailcatcher`

Use `mailcatcher --help` to see the command line options. The brave can get the source from the GitHub repository.
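Once MailCatcher is running, a quick way to sanity-check it is to send a message to its SMTP port and watch it appear in the web interface. Here's a minimal sketch using Python's standard smtplib, assuming the default address of 127.0.0.1:1025; the addresses and message contents are just illustrative placeholders:

```
import smtplib
from email.message import EmailMessage

# Build a simple test message; the addresses are placeholders.
msg = EmailMessage()
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"
msg["Subject"] = "MailCatcher smoke test"
msg.set_content("If you can read this at http://127.0.0.1:1080, delivery works.")

# MailCatcher's SMTP server listens on 127.0.0.1:1025 by default.
with smtplib.SMTP("127.0.0.1", 1025) as smtp:
    smtp.send_message(msg)
```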
Please don't put mailcatcher into your Gemfile. It will conflict with your application's gems at some point.
Instead, pop a note in your README stating you use MailCatcher, and to run `gem install mailcatcher` then `mailcatcher` to get started.
Under RVM your mailcatcher command may only be available under the ruby you install mailcatcher into. To prevent this, and to prevent gem conflicts, install mailcatcher into a dedicated gemset with a wrapper script:
```
rvm default@mailcatcher --create do gem install mailcatcher
ln -s "$(rvm default@mailcatcher do rvm wrapper show mailcatcher)" "$rvm_bin_path/"
```
To set up your Rails app, I recommend adding this to your `environments/development.rb`:
```
config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = { :address => '127.0.0.1', :port => 1025 }
config.action_mailer.raise_delivery_errors = false
```
For projects using PHP, or PHP frameworks and application platforms like Drupal, you can set PHP's mail configuration in your php.ini to send via MailCatcher with:
```
sendmail_path = /usr/bin/env catchmail -f [email protected]
```
You can do this in your Apache configuration like so:
```
php_admin_value sendmail_path "/usr/bin/env catchmail -f [email protected]"
```
If you've installed via RVM this probably won't work unless you've manually added your RVM bin paths to your system environment's PATH. In that case, run `which catchmail` and put that path into the `sendmail_path` directive above instead of `/usr/bin/env catchmail`.
If you start `mailcatcher` on an alternative SMTP IP and/or port with parameters like `--smtp-ip 192.168.0.1 --smtp-port 10025`, add the same parameters to your `catchmail` command:
```
sendmail_path = /usr/bin/env catchmail --smtp-ip 192.168.0.1 --smtp-port 10025 -f [email protected]
```
For use in Django, add the following configuration to your project's settings.py
```
if DEBUG:
    EMAIL_HOST = '127.0.0.1'
    EMAIL_HOST_USER = ''
    EMAIL_HOST_PASSWORD = ''
    EMAIL_PORT = 1025
    EMAIL_USE_TLS = False
```
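With that in place, anything Django sends in development ends up in MailCatcher instead of a real inbox. A minimal sketch, assuming the settings above are active (the addresses are placeholders):

```
from django.core.mail import send_mail

# Delivered to MailCatcher at 127.0.0.1:1025; view it at http://127.0.0.1:1080
send_mail(
    "MailCatcher test",
    "Hello from Django.",
    "[email protected]",
    ["[email protected]"],
)
```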
A fairly RESTful URL schema means you can download a list of messages in JSON from `/messages`, each message's metadata with `/messages/:id.json`, and then the pertinent parts with `/messages/:id.html` and `/messages/:id.plain` for the default HTML and plain text version, `/messages/:id/:cid` for individual attachments by CID, or the whole message with `/messages/:id.source`.
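That means the web interface can be scripted against directly. Here's a rough sketch using Python's standard library, assuming the default web address of 127.0.0.1:1080; the exact metadata field names, such as `subject`, are assumptions and should be checked against `/messages/:id.json` for your version:

```
import json
from urllib.request import urlopen

BASE = "http://127.0.0.1:1080"

# List all caught messages as JSON.
messages = json.load(urlopen(BASE + "/messages"))
for m in messages:
    print(m.get("id"), m.get("subject"))

# Fetch the HTML part of the most recent message, if there is one.
if messages:
    last_id = messages[-1]["id"]
    html = urlopen(BASE + "/messages/%s.html" % last_id).read().decode()
    print(html[:200])
```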
MailCatcher is just a mishmash of other people's hard work. Thank you so much to the people who have built the wonderful guts on which this project relies.
I work on MailCatcher mostly in my own spare time. If you've found MailCatcher useful and would like to help feed me and fund continued development and new features, please donate via PayPal. If you'd like a specific feature added to MailCatcher and are willing to pay for it, please email me.
Copyright © 2010-2019 Samuel Cochran ([email protected]). Released under the MIT License, see LICENSE for details.
| true | true | true | null |
2024-10-12 00:00:00
|
2021-07-20 00:00:00
| null | null | null | null | null | null |
5,744,492 |
http://e360.yale.edu/feature/what_if_experts_are_wrong_on_world_population_growth/2444/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,504,791 |
http://theratchet.ca/robot-sex-revolution/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
13,561,954 |
https://blog.ycombinator.com/tuesday-qa-with-tracy-young-ralph-gootee-of-plangrid/
|
Tuesday Q&A with Tracy Young & Ralph Gootee of PlanGrid | Y Combinator
| null | null | true | true | false |
During the batch we invite a speaker in for every Tuesday dinner. Tracy Young [https://twitter.com/Tracy_Young] and Ralph Gootee [https://twitter.com/ralphleon] of PlanGrid [http://PlanGrid.com] joined us during the W17 batch. Upcoming speakers this batch are: Mike Maples of Floodgate; Ron Conway of SV Angel; Patrick Collison of Stripe; Brian Chesky, Joe Gebbia, and Nathan Blecharczyk of Airbnb; Tobi Lutke of Shopify; and others. If you have specific questions for a speaker, send them to Macro
|
2024-10-12 00:00:00
|
2017-02-03 00:00:00
| null |
website
|
ycombinator.com
|
Y Combinator
| null | null |
14,384,748 |
https://www.databreaches.net/shoot-the-messenger-nyc-hospital-and-vendor-threaten-databreaches-net-for-reporting-on-their-security-failure/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |