Dataset schema (one row per scraped page):

| Column | Type | Notes |
|---|---|---|
| id | int64 | values 3 to 41.8M |
| url | string | lengths 1 to 1.84k |
| title | string | lengths 1 to 9.99k, nullable |
| author | string | lengths 1 to 10k, nullable |
| markdown | string | lengths 1 to 4.36M, nullable |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | string | lengths 1 to 10k, nullable |
| filedate | string | 2 classes |
| date | string | lengths 9 to 19, nullable |
| image | string | lengths 1 to 10k, nullable |
| pagetype | string | 365 classes |
| hostname | string | lengths 4 to 84, nullable |
| sitename | string | lengths 1 to 1.6k, nullable |
| tags | string | 0 classes |
| categories | string | 0 classes |
id: 28,728,090
url: http://backreaction.blogspot.com/2021/10/how-close-is-nuclear-fusion-power.html
title: How close is nuclear fusion power?
author: Sabine Hossenfelder
*[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]*
Today I want to talk about nuclear fusion. I’ve been struggling with this video for a while. This is because I am really supportive of nuclear fusion research and development. However, the potential benefits of current research on nuclear fusion have been incorrectly communicated for a long time. Scientists are confusing the public and policy makers in a way that makes their research appear more promising than it really is. And that’s what we’ll talk about today.
There is a lot to say about nuclear fusion, but today I want to focus on its most important aspect: how much energy goes into a fusion reactor, and how much comes out. Scientists quantify this with the energy gain, the ratio of what comes out to what goes in, usually denoted Q. If the energy gain is larger than 1, you create net energy. The point where Q reaches 1 is called “break-even”.
The record for energy gain was just recently broken. You may have seen the headlines. An experiment at the National Ignition Facility in the United States reported they’d managed to get out seventy percent of the energy they put in, so a Q of 0.7. The previous record was 0.67. It was set in nineteen ninety-seven by the Joint European Torus, JET for short.
The most prominent fusion experiment that’s currently being built is ITER. You will find plenty of articles repeating that ITER, when completed, will produce ten times as much energy as goes in, so a gain of 10. Here is an example from a 2019 article in the Guardian by Philip Ball, who writes
“[The Iter project] hopes to conduct its first experimental runs in 2025, and eventually to produce 500 megawatts (MW) of power – 10 times as much as is needed to operate it.”
Here is another example from Science Magazine where you can read “[ITER] is predicted to produce at least 500 megawatts of power from a 50 megawatt input.”
So this looks like we’re close to actually creating energy from fusion, right? No, wrong. Remember that nuclear fusion is the process by which the sun creates power. The sun forces nuclei into each other with the gravitational force created by its huge mass. We can’t do this on earth, so we have to find some other way. The most widely used technology for nuclear fusion today is heating the fuel in strong magnetic fields until it becomes a plasma. The temperature that must be reached is about 150 million Kelvin. The other popular option is shooting at a fuel pellet with lasers. There are some other methods, but they haven’t gotten very far in research and development.
The confusion, which you find in pretty much all popular science writing about nuclear fusion, is that the energy gain being quoted refers only to the energy that goes into the plasma and comes out of the plasma.
In the technical literature, this quantity is normally not just called Q but more specifically Q-plasma. This is not the ratio of the entire energy that comes out of the fusion reactor over that which goes into the reactor, which we can call Q-total. If you want to build a power plant, and that’s what we’re after in the end, it’s the Q-total that matters, not the Q-plasma.
Here’s the problem. Fusion reactors take a lot of energy to run, and most of that energy never goes into the plasma. If you keep the plasma confined with a magnetic field in a vacuum, you need to run giant magnets and cool them and maintain that. And pumping a laser isn’t energy efficient either. These energies never appear in the energy gain that is normally quoted.
The Q-plasma also doesn’t take into account that if you want to operate a power plant, the heat that is created by the plasma would still have to be converted into electric energy, and that can only be done with a limited efficiency, optimistically maybe fifty percent. As a consequence, the Q total is much lower than the Q plasma.
If you didn’t know this, you’re not alone. I didn’t know this until a few years ago either. How can such a confusion even happen? I mean, this isn’t rocket science. The total energy that goes into the reactor is more than the energy that goes into the plasma. And yet, science writers and journalists constantly get this wrong. They get the most basic fact wrong on a matter that affects tens of billions of research funding.
It’s not like we are the first to point out that this is a problem. I want to read you some words from a 1988 report from the European Parliament, more specifically from the Committee for Scientific and Technological Options Assessment. They were tasked with establishing criteria for the assessment of European fusion research.
In 1988, they already warned explicitly of this very misunderstanding.
“The use of the term `Break-even’ as defining the present programme to achieve an energy balance in the Hydrogen-Deuterium plasma reaction is open to misunderstanding. IN OUR VIEW 'BREAK-EVEN' SHOULD BE USED AS DESCRIPTIVE OF THE STAGE WHEN THERE IS AN ENERGY BREAKEVEN IN THE SYSTEM AS A WHOLE. IT IS THIS ACHIEVEMENT WHICH WILL OPEN THE WAY FOR FUSION POWER TO BE USED FOR ELECTRICITY GENERATION.”

They then point out the risk:
“In our view the correct scientific criterion must dominate the programme from the earliest stages. The danger of not doing this could be that the entire programme is dedicated to pursuing performance parameters which are simply not relevant to the eventual goal. The result of doing this could, in the very worst scenario be the enormous waste of resources on a program that is simply not scientifically feasible.”

So where are we today? Well, we’re spending lots of money on increasing Q-plasma instead of increasing the relevant quantity, Q-total. How big is the difference? Let us look at ITER as an example.
You have seen in the earlier quotes about ITER that the energy input is normally said to be 50 MegaWatts. But according to the head of the Electrical Engineering Division of the ITER Project, Ivone Benfatto, ITER will consume about 440 MegaWatts while it produces fusion power. That gives us an estimate for the total energy that goes in.
Though that is misleading already because 120 of those 440 MegaWatts are consumed whether or not there’s any plasma in the reactor, so using this number assumes the thing would be running permanently. But okay, let’s leave this aside.
The plan is that ITER will generate 500 MegaWatts of fusion power in heat. If we assume a 50% efficiency for converting this heat into electricity, ITER will produce about 250 MegaWatts of electric power.
That gives us a Q total of about 0.57. That’s less than a tenth of the normally stated Q plasma of 10. Even optimistically, ITER will still consume roughly twice the power it generates. What’s with the earlier claim of a Q of 0.67 for the JET experiment? Same thing.
If you look at the total energy, JET consumed more than 700 MegaWatts of electricity to get its sixteen MegaWatts of fusion power, and that’s heat, not electricity. So if you again assume 50 percent efficiency in the heat-to-electricity conversion, you get a Q-total of about 0.01, not the claimed 0.67.
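To see where these numbers come from, here is the arithmetic as a small Python sketch; the 50 percent heat-to-electricity conversion efficiency is the optimistic assumption from above, and the power figures are the ones quoted in this transcript.

```python
# Back-of-the-envelope Q_total estimates from the numbers quoted above.
HEAT_TO_ELECTRIC = 0.5  # optimistic conversion efficiency assumed in the text

def q_total(fusion_heat_mw, electric_input_mw, efficiency=HEAT_TO_ELECTRIC):
    """Electric power out divided by electric power in."""
    return fusion_heat_mw * efficiency / electric_input_mw

print(f"ITER: Q_total ~ {q_total(500, 440):.2f}")  # ~0.57, not the advertised Q_plasma of 10
print(f"JET:  Q_total ~ {q_total(16, 700):.2f}")   # ~0.01, not the advertised Q_plasma of 0.67
```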
And those recent headlines about the NIF success? Same thing again. It’s the Q-plasma that is 0.7. That’s calculated with the energy that the laser delivers to the plasma. But how much energy do you need to fire the laser? I don’t know for sure, but NIF is a fairly old facility, so a rough estimate would be 100 times as much. If they’d upgrade their lasers, maybe 10 times as much. Either way, the Q-total of this experiment is almost certainly well below 0.1.
Of course the people who work on this know the distinction perfectly well. But I can’t shake the impression they quite like the confusion between the two Qs. Here is, for example, a quote from Holtkamp, who at the time was the project construction leader of ITER. He said in an interview in 2006:
“ITER will be the first fusion reactor to create more energy than it uses. Scientists measure this in terms of a simple factor—they call it Q. If ITER meets all the scientific objectives, it will create 10 times more energy than it is supplied with.”

Here is Nick Walkden from JET in a TED talk, referring to ITER: “ITER will produce ten times the power out from fusion energy than we put into the machine.” And: “Now JET holds the record for fusion power. In 1997 it got 67 percent of the power out that we put in. Not 1, not 10, but still getting close.”
But okay, you may say, no one expects accuracy in a TED talk. Then listen to ITER Director General Dr. Bigot speaking to the House of Representatives in April 2016:
[Rep]: I look forward to learning more about the progress that ITER has made under Doctor Bigot’s leadership to address previously identified management deficiencies and to establish a more reliable path forward for the project.

[Bigot]: Okay, so ITER will have delivered in that full demonstration that we could have okay 500 Megawatt coming out of the 50 Megawatt we will put in.

What are we to make of all this?
Nuclear fusion power is a worthy research project. It could have a huge payoff for the future of our civilization. But we need to be smart about just what research to invest in, because we have limited resources. For this, it is super important that we focus on the relevant question: will it output energy into the grid?
There seem to be a lot of people in fusion research who want you to remain confused about just what the total energy gain is. I only recently read a new book about nuclear fusion, “The Star Builders”, which does the same thing again (review here): it only briefly mentions the total energy gain, and never gives you a number.
This misinformation has to stop.
If you come across any popular science article or interview or video that does not clearly spell out what the total energy gain is, please call them out on it.
Thanks for watching, see you next week.
downloaded: true | meta_extracted: true | parsed: true
description: Science News, Physics, Science, Philosophy, Philosophy of Science
filedate: 2024-10-12 | date: 2021-10-02
image: https://lh3.googleusercontent.com/blogger_img_proxy/AEn0k_tZ6v56M9fGYSg0_p2EPdH5ptl2NBTIuM7MnGvjkucgLisHIjt__VqdarIGiDpf7HcKCmSaFAD3tK-bAWgT9NG0e5ejjLXpiOZ3yki8I8v_HGWzXw=w1200-h630-n-k-no-nu
hostname: blogspot.com | sitename: backreaction.blogspot.com
id: 30,262,608
url: https://www.gulftoday.ae/news/2022/02/08/putin-signals-compromise-over-ukraine-after-macron-talks
title: Putin signals compromise over Ukraine after Macron talks
author: null
Vladimir Putin (right) and Emmanuel Macron attend a joint press conference in Moscow. AFP
Gulf Today Report
Russian President Vladimir Putin says he is ready for compromise and will look at proposals put forward by French leader Emmanuel Macron, after some progress was made in the talks.
The coming days will be crucial in the Ukraine standoff, French President Emmanuel Macron said after the meeting, while still blaming the West for raising tensions over Ukraine.
Putin said the first Moscow summit he has held with a Western leader since the Kremlin began massing troops near its neighbor had been substantive, but also repeated warnings about the threat of war were Ukraine to join NATO.
Macron heads to Kyiv on Tuesday after offering Russia "concrete security guarantees" in an effort to dissuade Moscow from invading its neighbour Ukraine, with Russia's leader vowing to find compromise in response.
Vladimir Putin gestures during a joint press conference with Emmanuel Macron in Moscow. AP
Macron's visit comes during a week of intense Western diplomacy amid a major Russian military buildup on its southwestern frontier that has raised fears it could soon march into Ukraine.
Russia, jostling for influence in post-Cold War Europe, wants security guarantees that include a promise of no missile deployments near its borders and a scaling back of NATO's military infrastructure.
Putin told Macron Moscow would "do everything to find compromises that suit everyone", raising the prospect of a path to de-escalating the volatile situation.
Putin said several proposals put forward by Macron at talks on Monday could form a basis for moving forward on the crisis over Ukraine.
"A number of his ideas, proposals... are possible as a basis for further steps," Putin said after more than five hours of talks in the Kremlin.
Emmanuel Macron speaks during a joint press conference with Vladimir Putin after their talks in Moscow. AP
He did not provide any details but said the two leaders would speak by phone after Macron meets with Ukrainian President Volodymyr Zelensky.
The French president said he had made proposals of "concrete security guarantees" to Putin.
"President Putin assured me of his readiness to engage in this sense and his desire to maintain stability and the territorial integrity of Ukraine," Macron said.
"There is no security for the Europeans if there is no security for Russia," he added.
The French presidency said the proposals include an engagement from both sides not to take any new military action, the launching of a new strategic dialogue and efforts to revive the peace process in Kyiv's conflict with Moscow-backed separatists in eastern Ukraine.
With tensions soaring between Moscow and Ukraine and its allies, Macron was the first top Western leader to meet Putin since the crisis began in December.
downloaded: true | meta_extracted: true | parsed: true
description: Putin said the first Moscow summit he has held with a Western leader since the Kremlin began massing troops near its neighbor had been substantive, but also repeated warnings about the threat of war were Ukraine to join NATO.
filedate: 2024-10-12 | date: 2022-02-08
image: http://www.gulftoday.ae/-/media/gulf-today/images/articles/news/2022/2/8/france-russia-ukraine-talks-main1-750.ashx?h=450&w=750&hash=87B8DECA01D2376E18A71BA7900F4B1A
hostname: gulftoday.ae | sitename: GulfToday
id: 12,436,411
url: https://addpipe.com/blog/ive-disabled-flash-for-a-week/
title: null | author: null | markdown: null
downloaded: false | meta_extracted: false | parsed: false
id: 6,007,404
url: http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis/
title: The big NoSQL databases comparison
author: Kristof Kovacs
# The big NoSQL databases comparison
Hello, I’m Kristof, a human being like you, and an easy to work with, friendly guy.
I've been a programmer, a consultant, CIO in startups, head of software development in government, and built two software companies.
Some days I’m coding Golang in the guts of a system and other days I'm wearing a suit to help clients with their DevOps practices.
While SQL databases are insanely useful tools, their monopoly in the
last decades is coming to an end. And it's just time: I can't even count
the things that were forced into relational databases, but never really
fitted them. (That being said, relational databases will always be the
best for the stuff that has *relations*.)
But the differences between NoSQL databases are much bigger than they ever were between one SQL database and another. This places a bigger responsibility on software architects to choose the appropriate one for a project right at the beginning.
In this light, here is a comparison of open source NoSQL databases:
## The most popular ones #
### Redis #
- **Written in:** C
- **Main point:** Blazing fast
- **License:** BSD
- **Protocol:** Telnet-like, binary safe
- Disk-backed in-memory database
- Master-slave replication, automatic failover
- Simple values or data structures by keys
- but complex operations like ZREVRANGEBYSCORE.
- INCR & co (good for rate limiting or statistics)
- Bit and bitfield operations (for example to implement bloom filters)
- Has sets (also union/diff/inter)
- Has lists (also a queue; blocking pop)
- Has hashes (objects of multiple fields)
- Sorted sets (high score table, good for range queries)
- Lua scripting capabilities
- Has transactions
- Values can be set to expire (as in a cache)
- Pub/Sub lets you implement messaging
- GEO API to query by radius (!)
**Best used:** For rapidly changing data with a foreseeable database
size (should fit mostly in memory).
**For example:** To store real-time stock prices. Real-time analytics.
Leaderboards. Real-time communication. And wherever you used memcached
before.
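To make the leaderboard and rate-limiting points concrete, here is a minimal sketch using the redis-py client; the host, key names and scores are made-up examples, and a local Redis server is assumed.

```python
import redis  # redis-py client; assumes a Redis server on localhost:6379

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Rate limiting / statistics with INCR, plus a TTL so the counter expires like a cache entry
r.incr("hits:2024-10-12")
r.expire("hits:2024-10-12", 86400)

# Leaderboard with a sorted set: ZADD to score, ZREVRANGE for the top entries
r.zadd("leaderboard", {"alice": 3200, "bob": 2950, "carol": 4100})
print(r.zrevrange("leaderboard", 0, 2, withscores=True))
# [('carol', 4100.0), ('alice', 3200.0), ('bob', 2950.0)]
```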
### Cassandra #
- **Written in:** Java
- **Main point:** Store *huge* datasets in "almost" SQL
- **License:** Apache
- **Protocol:** CQL3 & Thrift
- CQL3 is very similar to SQL, but with some limitations that come from the scalability (most notably: no JOINs, no aggregate functions)
- CQL3 is now the official interface. Don't look at Thrift, unless you're working on a legacy app. This way, you can live without understanding ColumnFamilies, SuperColumns, etc.
- Querying by key, or key range (secondary indices are also available)
- Tunable trade-offs for distribution and replication (N, R, W)
- Data can have expiration (set on INSERT).
- Writes can be much faster than reads (when reads are disk-bound)
- Map/reduce possible with Apache Hadoop
- All nodes are similar, as opposed to Hadoop/HBase
- Very good and reliable cross-datacenter replication
- Distributed counter datatype.
- You can write triggers in Java.
**Best used:** When you need to store data so huge that it doesn't fit
on one server, but still want a friendly, familiar interface to it.
**For example:** Web analytics, to count hits by hour, by browser, by
IP, etc. Transaction logging. Data collection from huge sensor arrays.
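As a sketch of what the CQL3 interface and the distributed counter type look like in practice, assuming the DataStax Python driver and a local single-node cluster (the keyspace and table names are invented for the web-analytics example above):

```python
from cassandra.cluster import Cluster  # DataStax Python driver, assumed installed

session = Cluster(["127.0.0.1"]).connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS analytics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS analytics.hits_by_hour (
        hour text, browser text, hits counter,
        PRIMARY KEY (hour, browser)
    )
""")
# Distributed counter update: count hits by hour and browser
session.execute(
    "UPDATE analytics.hits_by_hour SET hits = hits + 1 WHERE hour = %s AND browser = %s",
    ("2024-10-12 14:00", "Firefox"),
)
```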
### MongoDB #
- **Written in:** C++
- **Main point:** JSON document store
- **License:** AGPL (Drivers: Apache)
- **Protocol:** Custom, binary (BSON)
- Master/slave replication (auto failover with replica sets)
- Sharding built-in
- Queries are javascript expressions
- Run arbitrary javascript functions server-side
- Geospatial queries
- Multiple storage engines with different performance characteristics
- Performance over features
- Document validation
- Journaling
- Powerful aggregation framework
- On 32bit systems, limited to ~2.5Gb
- Text search integrated
- GridFS to store big data + metadata (not actually an FS)
- Has geospatial indexing
- Data center aware
**Best used:** If you need dynamic queries. If you prefer to define
indexes, not map/reduce functions. If you need good performance on a big
DB. If you wanted CouchDB, but your data changes too much, filling up
disks.
**For example:** For most things that you would do with MySQL or
PostgreSQL, but having predefined columns really holds you back.
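A short pymongo sketch of the dynamic-query and geospatial features listed above; the database, collection and field names are made up, and a local mongod is assumed.

```python
from pymongo import MongoClient, GEOSPHERE  # pymongo driver; assumes a local mongod

db = MongoClient("mongodb://localhost:27017")["demo"]

# Documents need no predefined columns; fields can vary from document to document
db.places.insert_one({"name": "Cafe A", "tags": ["wifi"],
                      "loc": {"type": "Point", "coordinates": [13.40, 52.52]}})

# Dynamic query combined with a geospatial index and a $near search
db.places.create_index([("loc", GEOSPHERE)])
nearby = db.places.find({
    "tags": "wifi",
    "loc": {"$near": {"$geometry": {"type": "Point", "coordinates": [13.41, 52.52]},
                      "$maxDistance": 2000}},
})
print(list(nearby))
```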
### ElasticSearch #
- **Written in:** Java
- **Main point:** Advanced Search
- **License:** Apache
- **Protocol:** JSON over HTTP (Plugins: Thrift, memcached)
- Stores JSON documents
- Has versioning
- Parent and children documents
- Documents can time out
- Very versatile and sophisticated querying, scriptable
- Write consistency: one, quorum or all
- Sorting by score (!)
- Geo distance sorting
- Fuzzy searches (approximate date, etc) (!)
- Asynchronous replication
- Atomic, scripted updates (good for counters, etc)
- Can maintain automatic "stats groups" (good for debugging)
**Best used:** When you have objects with (flexible) fields, and you
need "advanced search" functionality.
**For example:** A dating service that handles age difference,
geographic location, tastes and dislikes, etc. Or a leaderboard system
that depends on many variables.
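A small sketch of the kind of "advanced search" described above (fuzzy matching plus geo-distance filtering), using the official Python client with the older body-style calls; the index name, fields and local URL are assumptions.

```python
from elasticsearch import Elasticsearch  # official client; body-style calls as in 7.x

es = Elasticsearch("http://localhost:9200")

# Explicit mapping so the location field can be used in geo queries
es.indices.create(index="profiles", body={
    "mappings": {"properties": {"location": {"type": "geo_point"}}}})
es.index(index="profiles", id=1, body={
    "name": "Sam", "interests": "hiking jazz cooking",
    "location": {"lat": 52.52, "lon": 13.40}})
es.indices.refresh(index="profiles")  # make the document searchable right away

# Fuzzy full-text match combined with a geo-distance filter, results sorted by score
hits = es.search(index="profiles", body={
    "query": {"bool": {
        "must": {"match": {"interests": {"query": "jaz", "fuzziness": "AUTO"}}},
        "filter": {"geo_distance": {"distance": "50km",
                                    "location": {"lat": 52.50, "lon": 13.45}}}}}})
print(hits["hits"]["hits"])
```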
## Classic document and BigTable stores #
### CouchDB #
- **Written in:** Erlang
- **Main point:** DB consistency, ease of use
- **License:** Apache
- **Protocol:** HTTP/REST
- Bi-directional (!) replication,
- continuous or ad-hoc,
- with conflict detection,
- thus, master-master replication. (!)
- MVCC - write operations do not block reads
- Previous versions of documents are available
- Crash-only (reliable) design
- Needs compacting from time to time
- Views: embedded map/reduce
- Formatting views: lists & shows
- Server-side document validation possible
- Authentication possible
- Real-time updates via '_changes' (!)
- Attachment handling
- thus, CouchApps (standalone js apps)
**Best used:** For accumulating, occasionally changing data, on which
pre-defined queries are to be run. Places where versioning is important.
**For example:** CRM, CMS systems. Master-master replication is an
especially interesting feature, allowing easy multi-site deployments.
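Because CouchDB speaks plain HTTP/REST, the `requests` library is enough to sketch document storage and an embedded map/reduce view; the host, credentials, database and view names are all assumptions.

```python
import requests  # CouchDB is plain HTTP/REST, so no dedicated driver is required

BASE = "http://admin:secret@localhost:5984"  # assumed local instance and credentials

requests.put(f"{BASE}/crm")  # create the database
requests.put(f"{BASE}/crm/cust-001", json={"type": "customer", "country": "DE"})

# Embedded map/reduce view: count customers per country
design = {"views": {"by_country": {
    "map": "function(doc){ if(doc.type === 'customer') emit(doc.country, 1); }",
    "reduce": "_count",
}}}
requests.put(f"{BASE}/crm/_design/stats", json=design)

rows = requests.get(f"{BASE}/crm/_design/stats/_view/by_country",
                    params={"group": "true"}).json()
print(rows)  # e.g. {'rows': [{'key': 'DE', 'value': 1}]}
```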
### Accumulo #
- **Written in:** Java and C++
- **Main point:** A BigTable with Cell-level security
- **License:** Apache
- **Protocol:** Thrift
- Another BigTable clone, also runs on top of Hadoop
- Originally from the NSA
- Cell-level security
- Bigger rows than memory are allowed
- Keeps a memory map outside Java, in C++ STL
- Map/reduce using Hadoop's facilities (ZooKeeper & co)
- Some server-side programming
**Best used:** If you need to restrict access on the cell level.
**For example:** Same as HBase, since it's basically a replacement:
Search engines. Analysing log data. Any place where scanning huge,
two-dimensional join-less tables are a requirement.
### HBase #
- **Written in:** Java
- **Main point:** Billions of rows X millions of columns
- **License:** Apache
- **Protocol:** HTTP/REST (also Thrift)
- Modeled after Google's BigTable
- Uses Hadoop's HDFS as storage
- Map/reduce with Hadoop
- Query predicate push down via server side scan and get filters
- Optimizations for real time queries
- A high performance Thrift gateway
- HTTP supports XML, Protobuf, and binary
- Jruby-based (JIRB) shell
- Rolling restart for configuration changes and minor upgrades
- Random access performance is like MySQL
- A cluster consists of several different types of nodes
**Best used:** Hadoop is probably still the best way to run Map/Reduce
jobs on huge datasets. Best if you use the Hadoop/HDFS stack already.
**For example:** Search engines. Analysing log data. Any place where
scanning huge, two-dimensional join-less tables are a requirement.
### Hypertable #
- **Written in:** C++
- **Main point:** A faster, smaller HBase
- **License:** GPL 2.0
- **Protocol:** Thrift, C++ library, or HQL shell
- Implements Google's BigTable design
- Runs on Hadoop's HDFS
- Uses its own, "SQL-like" language, HQL
- Can search by key, by cell, or for values in column families.
- Search can be limited to key/column ranges.
- Sponsored by Baidu
- Retains the last N historical values
- Tables are in namespaces
- Map/reduce with Hadoop
**Best used:** If you need a better HBase.
**For example:** Same as HBase, since it's basically a replacement:
Search engines. Analysing log data. Any place where scanning huge,
two-dimensional join-less tables are a requirement.
## Graph databases #
### OrientDB #
- **Written in:** Java
- **Main point:** Document-based graph database
- **License:** Apache 2.0
- **Protocol:** binary, HTTP REST/JSON, or Java API for embedding
- Has transactions, full ACID conformity
- Can be used both as a document and as a graph database (vertices with properties)
- Both nodes and relationships can have metadata
- Multi-master architecture
- Supports relationships between documents via persistent pointers (LINK, LINKSET, LINKMAP, LINKLIST field types)
- SQL-like query language (Note: no JOIN, but there are pointers)
- Web-based GUI (quite good-looking, self-contained)
- Inheritance between classes. Indexing of nodes and relationships
- User functions in SQL or JavaScript
- Sharding
- Advanced path-finding with multiple algorithms and Gremlin traversal language
- Advanced monitoring, online backups are commercially licensed
**Best used:** For graph-style, rich or complex, interconnected data.
**For example:** For searching routes in social relations, public
transport links, road maps, or network topologies.
### Neo4j #
- **Written in:** Java
- **Main point:** Graph database - connected data
- **License:** GPL, some features AGPL/commercial
- **Protocol:** HTTP/REST (or embedding in Java)
- Standalone, or embeddable into Java applications
- Full ACID conformity (including durable data)
- Both nodes and relationships can have metadata
- Integrated pattern-matching-based query language ("Cypher")
- Also the "Gremlin" graph traversal language can be used
- Indexing of nodes and relationships
- Nice self-contained web admin
- Advanced path-finding with multiple algorithms
- Indexing of keys and relationships
- Optimized for reads
- Has transactions (in the Java API)
- Scriptable in Groovy
- Clustering, replication, caching, online backup, advanced monitoring and High Availability are commercially licensed
**Best used:** For graph-style, rich or complex, interconnected data.
**For example:** For searching routes in social relations, public
transport links, road maps, or network topologies.
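For a flavor of the Cypher query language and path finding mentioned above, here is a sketch with the official Neo4j Python driver; note that it uses the newer Bolt protocol rather than the REST interface listed above, and the URI, credentials and node names are assumptions.

```python
from neo4j import GraphDatabase  # official driver; Bolt protocol, newer than the REST API above

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

with driver.session() as session:
    # Build a tiny social graph, then find a shortest route between two people
    session.run("""
        MERGE (a:Person {name: 'Ada'})
        MERGE (b:Person {name: 'Bob'})
        MERGE (c:Person {name: 'Cem'})
        MERGE (a)-[:KNOWS]->(b)
        MERGE (b)-[:KNOWS]->(c)
    """)
    record = session.run("""
        MATCH p = shortestPath((:Person {name: 'Ada'})-[:KNOWS*]-(:Person {name: 'Cem'}))
        RETURN [n IN nodes(p) | n.name] AS route
    """).single()
    print(record["route"])  # ['Ada', 'Bob', 'Cem']

driver.close()
```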
## The "long tail" #
(Not widely known, but definitely worthy ones)
### Couchbase (ex-Membase) #
- **Written in:** Erlang & C
- **Main point:** Memcache compatible, but with persistence and clustering
- **License:** Apache
- **Protocol:** memcached + extensions
- Very fast (200k+/sec) access of data by key
- Persistence to disk
- All nodes are identical (master-master replication)
- Provides memcached-style in-memory caching buckets, too
- Write de-duplication to reduce IO
- Friendly cluster-management web GUI
- Connection proxy for connection pooling and multiplexing (Moxi)
- Incremental map/reduce
- Cross-datacenter replication
**Best used:** Any application where low-latency data access, high
concurrency support and high availability is a requirement.
**For example:** Low-latency use-cases like ad targeting or
highly-concurrent web apps like online gaming (e.g. Zynga).
### Scalaris #
- **Written in:** Erlang
- **Main point:** Distributed P2P key-value store
- **License:** Apache
- **Protocol:** Proprietary & JSON-RPC
- In-memory (disk when using Tokyo Cabinet as a backend)
- Uses YAWS as a web server
- Has transactions (an adapted Paxos commit)
- Consistent, distributed write operations
- From CAP, values Consistency over Availability (in case of network partitioning, only the bigger partition works)
**Best used:** If you like Erlang and wanted to use Mnesia or DETS or
ETS, but you need something that is accessible from more languages (and
scales much better than ETS or DETS).
**For example:** In an Erlang-based system when you want to give access
to the DB to Python, Ruby or Java programmers.
### Aerospike #
- **Written in:** C
- **Main point:** Speed, SSD-optimized storage
- **License:** AGPL (Client: Apache)
- **Protocol:** Proprietary
- Cross-datacenter replication is commercially licensed
- Very fast access of data by key
- Uses SSD devices as a block device to store data (RAM + persistence also available)
- Automatic failover and automatic rebalancing of data when nodes are added or removed from the cluster
- User-defined functions in Lua
- Cluster management with Web GUI
- Has complex data types (lists and maps) as well as simple (integer, string, blob)
- Secondary indices
- Aggregation query model
- Data can be set to expire with a time-to-live (TTL)
- Large Data Types
**Best used:** Any application where low-latency data access, high
concurrency support and high availability is a requirement.
**For example:** Storing massive amounts of profile data in online
advertising or retail Web sites.
### RethinkDB #
- **Written in:** C++
- **Main point:** JSON store that streams updates
- **License:** AGPL (Client: Apache)
- **Protocol:** Proprietary
- JSON document store
- Javascript-based query language, "ReQL"
- ReQL is functional, if you use Underscore.js it will be quite familiar
- Sharded clustering, replication built-in
- Data is JOIN-able on references
- Handles BLOBS
- Geospatial support
- Multi-datacenter support
**Best used:** Applications where you need constant real-time updates.
**For example:** Displaying sports scores on various displays and/or
online. Monitoring systems. Fast workflow applications.
### Riak #
- **Written in:** Erlang & C, some JavaScript
- **Main point:** Fault tolerance
- **License:** Apache
- **Protocol:** HTTP/REST or custom binary
- Stores blobs
- Tunable trade-offs for distribution and replication
- Pre- and post-commit hooks in JavaScript or Erlang, for validation and security.
- Map/reduce in JavaScript or Erlang
- Links & link walking: use it as a graph database
- Secondary indices: but only one at once
- Large object support (Luwak)
- Comes in "open source" and "enterprise" editions
- Full-text search, indexing, querying with Riak Search
- In the process of migrating the storage backend from "Bitcask" to Google's "LevelDB"
- Masterless multi-site replication and SNMP monitoring are commercially licensed
**Best used:** If you want Dynamo-like data storage, but no
way you're gonna deal with the bloat and complexity. If you need very
good single-site scalability, availability and fault-tolerance, but
you're ready to pay for multi-site replication.
**For example:** Point-of-sales data collection. Factory control
systems. Places where even seconds of downtime hurt. Could be used as a
well-update-able web server.
### VoltDB #
- **Written in:** Java
- **Main point:** Fast transactions and rapidly changing data
- **License:** AGPL v3 and proprietary
- **Protocol:** Proprietary
- In-memory **relational** database
- Can export data into Hadoop
- Supports ANSI SQL
- Stored procedures in Java
- Cross-datacenter replication
**Best used:** Where you need to act fast on massive amounts of incoming
data.
**For example:** Point-of-sales data analysis. Factory control systems.
### Kyoto Tycoon #
- **Written in:** C++
- **Main point:** A lightweight network DBM
- **License:** GPL
- **Protocol:** HTTP (TSV-RPC or REST)
- Based on Kyoto Cabinet, Tokyo Cabinet's successor
- Multitudes of storage backends: Hash, Tree, Dir, etc (everything from Kyoto Cabinet)
- Kyoto Cabinet can do 1M+ insert/select operations per sec (but Tycoon does less because of overhead)
- Lua on the server side
- Language bindings for C, Java, Python, Ruby, Perl, Lua, etc
- Uses the "visitor" pattern
- Hot backup, asynchronous replication
- background snapshot of in-memory databases
- Auto expiration (can be used as a cache server)
**Best used:** When you want to choose the backend storage algorithm
engine very precisely. When speed is of the essence.
**For example:** Caching server. Stock prices. Analytics. Real-time data
collection. Real-time communication. And wherever you used memcached
before.
## P.S. #
No, there's no date on this review. I *do* update it occasionally,
and believe me, the *basic* properties of databases don't change that much.
Of course, all these systems have much more features than what's listed
here. I only wanted to list the key points that I base my decisions on.
downloaded: true | meta_extracted: true | parsed: true
description: The big Cassandra vs Mongodb vs CouchDB vs Redis, vs Riak vs HBase vs Couchbase (ex-Membase) vs OrientDB vs Aerospike vs Neo4j vs Hypertable vs ElasticSearch vs Accumulo vs VoltDB vs Scalaris vs RethinkDB vs Kyoto Tycoon comparison. While SQL databases are insanely useful tools, their monopoly in the last decades is coming to an end. And it's just time: I can't even count the things that were forced into relational databases, but never really fitted them.
filedate: 2024-10-12 | date: 2010-12-25
pagetype: article | hostname: kkovacs.eu | sitename: kkovacs.eu
id: 22,254,565
url: https://ai.googleblog.com/2020/02/ml-fairness-gym-tool-for-exploring-long.html
title: ML-fairness-gym: A Tool for Exploring Long-Term Impacts of Machine Learning Systems
author: null
# ML-fairness-gym: A Tool for Exploring Long-Term Impacts of Machine Learning Systems
February 5, 2020
Posted by Hansa Srinivasan, Software Engineer, Google Research
Machine learning systems have been increasingly deployed to aid in high-impact decision-making, such as determining criminal sentencing, child welfare assessments, who receives medical attention and many other settings. Understanding whether such systems are fair is crucial, and requires an understanding of models’ short- and long-term effects. Common methods for assessing the fairness of machine learning systems involve evaluating disparities in error metrics on static datasets for various inputs to the system. Indeed, many existing ML fairness toolkits (e.g., AIF360, fairlearn, fairness-indicators, fairness-comparison) provide tools for performing such error-metric based analysis on existing datasets. While this sort of analysis may work for systems in simple environments, there are cases (e.g., systems with active data collection or significant feedback loops) where the
*context* in which the algorithm operates is critical for understanding its impact. In these cases, the fairness of algorithmic decisions ideally would be analyzed with greater consideration for the environmental and temporal context than error metric-based techniques allow.
In order to facilitate algorithmic development with this broader context, we have released ML-fairness-gym, a set of components for building simple simulations that explore potential long-run impacts of deploying machine learning-based decision systems in social environments. In “Fairness is not Static: Deeper Understanding of Long Term Fairness via Simulation Studies” we demonstrate how the ML-fairness-gym can be used to research the long-term effects of automated decision systems on a number of established problems from current machine learning fairness literature.
**An Example: The Lending Problem**
A classic problem for considering fairness in machine learning systems is the lending problem, as described by Liu et al. This problem is a highly simplified and stylized representation of the lending process, where we focus on a single feedback loop in order to isolate its effects and study it in detail. In this problem formulation, the probability that individual applicants will pay back a loan is a function of their credit score. These applicants also belong to one of an arbitrary number of groups, with their group membership observable by the lending bank.
The groups start with different credit score distributions. The bank is trying to determine a
*threshold* on the credit scores, applied across groups or tailored to each, that best enables the bank to reach its objectives. Applicants with scores higher than the threshold receive loans, and those with lower scores are rejected. When the simulation selects an individual, whether or not they will pay the loan is randomly determined based on their group’s probability of payback. In this example, individuals currently applying for loans may apply for additional loans in the future and thus, by paying back their loan, both their credit score and their group’s average credit score increases. Similarly, if the applicant defaults, the group’s average credit score decreases.
The most effective threshold settings will depend on the bank’s goals. A profit-maximizing bank may set a threshold that maximizes the predicted return, based on the estimated likelihood that applicants will repay their loans. Another bank, seeking to be fair to both groups, may try to implement thresholds that maximize profit while satisfying equality of opportunity, the goal of which is to have equal true positive rates (TPR is also called
*recall* or
*sensitivity*; a measure of what fraction of applicants who
*would have paid back* loans were given a loan). In this scenario, machine learning techniques are employed by the bank to determine the most effective threshold based on loans that have been distributed and their outcomes. However, since these techniques are often focused on short-term objectives, they may have unintended and unfair consequences for different groups.
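To make the feedback loop concrete, here is a heavily simplified toy simulation of the dynamic described above; the group parameters, the score-to-repayment mapping and the score updates are all invented for illustration and are not the model from Liu et al. or the ML-fairness-gym code.

```python
import random

def simulate(threshold, start_scores, steps=10_000, seed=0):
    """Toy lending loop: a fixed threshold policy feeds back into group credit scores."""
    rng = random.Random(seed)
    scores = dict(start_scores)                 # group -> mean credit score (300-850)
    for _ in range(steps):
        group = rng.choice(list(scores))
        if scores[group] < threshold:           # bank policy: reject below the threshold
            continue
        p_repay = (scores[group] - 300) / 550   # crude score-to-probability mapping
        if rng.random() < p_repay:
            scores[group] = min(850, scores[group] + 1.0)   # repayment raises the score
        else:
            scores[group] = max(300, scores[group] - 1.5)   # default lowers it more
    return scores

# Group B starts below the threshold, never receives loans, and so never changes:
print(simulate(threshold=620, start_scores={"A": 650, "B": 600}))
```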
**Deficiencies in Static Dataset Analysis**
A standard practice in machine learning to assess the impact of a scenario like the lending problem is to reserve a portion of the data as a “test set”, and use that to calculate relevant performance metrics. Fairness is then assessed by looking at how those performance metrics differ across salient groups. However, it is well understood that there are two main issues with using test sets like this in systems with feedback. If test sets are generated from existing systems, they may be incomplete or reflect the biases inherent to those systems. In the lending example, a test set could be incomplete because it may only have information on whether an applicant who has been given a loan has defaulted or repaid. Consequently, the dataset may not include individuals for whom loans have not been approved or who have not had access to loans before.
The second issue is that actions informed by the output of the ML system can have effects that may influence their future input. The thresholds determined by the ML system are used to extend loans. Whether people default or repay these loans then affects their future credit score, which then feed back into the ML system.
These issues highlight the shortcomings of assessing fairness in static datasets and motivate the need for analyzing the fairness of algorithms in the context of the dynamic systems in which they are deployed. We created the ML-fairness-gym framework to help ML practitioners bring simulation-based analysis to their ML systems, an approach that has proven effective in many fields for analyzing dynamic systems where closed form analysis is difficult.
**ML-fairness-gym as a Simulation Tool for Long-Term Analysis**
The ML-fairness-gym simulates sequential decision making using OpenAI’s Gym framework. In this framework,
*agents* interact with simulated
*environments* in a loop. At each step, an agent chooses an
*action* that then affects the environment’s state. The environment then reveals an
*observation* that the agent uses to inform its subsequent actions. In this framework, environments model the system and dynamics of the problem and observations serve as data to the agent, which can be encoded as a machine learning system.
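The interaction has the same shape as any Gym-style simulation. The generic sketch below is not the ml-fairness-gym API itself (its environment and agent classes are not reproduced here); `agent.act` and `agent.learn` are placeholder method names.

```python
# Generic Gym-style loop: the environment models the dynamics of the problem,
# the agent encodes the (possibly ML-based) decision policy.
def run_episode(env, agent, max_steps=1000):
    observation = env.reset()
    for _ in range(max_steps):
        action = agent.act(observation)           # e.g. accept or reject a loan applicant
        observation, reward, done, info = env.step(action)
        agent.learn(observation, reward)          # the policy may adapt as the population shifts
        if done:
            break
    return info                                   # e.g. accumulated long-run fairness metrics
```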
**Fairness Is Not Static: Extending the Analysis to the Long-Term**
Since Liu et al.’s original formulation of the lending problem examined only the short-term consequences of the bank’s policies — including short-term profit-maximizing policies (called the max reward agent) and policies subject to an equality of opportunity (EO) constraint — we use the ML-fairness-gym to extend the analysis to the long-term (many steps) via simulation.
The second finding is that equal opportunity constraints — enforcing equalized TPR between groups at each step — do not equalize TPR in aggregate over the simulation. This perhaps counterintuitive result can be thought of as an instance of Simpson’s paradox. As seen in the chart below, equal TPR in each of two years does not imply equal TPR in aggregate. This demonstrates how the equality of opportunity metric is difficult to interpret when the underlying population is evolving, and suggests that more careful analysis is necessary to ensure that the ML system is having the desired effects.
An example of Simpson's paradox. TP are the true positive classifications, FN corresponds to the false negative classifications and TPR is the true positive rate. In years 1 and 2, the lender applies a policy that achieves equal TPR between the two groups. The aggregation over both years does not have equal TPR.
**Conclusion and Future Work**
While we focused on our findings for the lending problem in this blog post, the ML-fairness-gym can be used to tackle a wide variety of fairness problems. Our paper extends the analysis of two other scenarios that have been previously studied in the academic ML fairness literature. The ML-fairness-gym framework is also flexible enough to simulate and explore problems where “fairness” is under-explored. For example, in a supporting paper, “Fair treatment allocations in social networks,” we explore a stylized version of epidemic control, which we call the
*precision disease control problem*, to better understand notions of fairness across individuals and communities in a social network.
We’re excited about the potential of the ML-fairness-gym to help other researchers and machine learning developers better understand the effects that machine learning algorithms have on our society, and to inform the development of more responsible and fair machine learning systems. Find the code and papers in the ML-fairness-gym Github repository.
downloaded: true | meta_extracted: true | parsed: true
description: Posted by Hansa Srinivasan, Software Engineer, Google Research Machine learning systems have been increasingly deployed to aid in high-impact dec...
filedate: 2024-10-12 | date: 2020-02-05
pagetype: Website | hostname: research.google | sitename: research.google
id: 13,646,270
url: http://www.bbc.com/future/story/20170210-why-happy-music-makes-you-do-bad-things
title: Why happy music makes you do bad things
author: Richard Gray
# Why happy music makes you do bad things
**There may be a surprising dark side to easy-listening and feel-good tracks.**
From the distinctive opening “Whooah” to the recurring funky brass riff that follows each line of lyrics, James Brown’s hit song I Got You (I Feel Good) is a recipe for happiness.
The iconic track is arguably one of the most upbeat ever made, guaranteed to get your heart racing, your head shaking and maybe even your fist pumping in time to the music. It is hard to listen to the Godfather of Soul blast out this tune and feel anything but cheerful.
Yet, it appears there may be something sinister lurking behind the catchy lyrics and energetic performance – listening to this song can make you do bad things.
"In real life, music is used to manipulate people in all kinds of ways," explains Naomi Ziv, a psychologist at the College of Management Academic Studies in Rishon Le Zion, Israel. "A lot of it can be negative," she says. "Music can make people more compliant, more aggressive and even racist."
These latest findings are a stark contrast with some long-held assumptions – including the belief that angry rap and metal by artists like Eminem and Marilyn Manson could incite violent behaviour. In the immediate aftermath of the Columbine High School shooting, for instance, there were reports that linked Manson’s music to the two killers, although these later proved to be false.
In fact, psychologists at the University of Queensland in Australia would suggest that this music may, in fact, soothe our angrier urges. Genevieve Dingle and her colleagues deliberately antagonised people by asking them to talk about an event involving a friend or a colleague that made them angry before allowing them to listen to hardcore metal music. After listening to the music, the participants reported far more positive emotions than those who sat in silence.
“Listening to extreme music may represent a healthy way of processing anger for these listeners,” said Dingle.
Ziv’s research would instead suggest that “easy listening” tunes carry the most danger. In 2011, for instance, she found that music has the power to alter people’s moral judgements. She asked a group of volunteers to listen to a fictional radio advert for a website that claimed to be able to create false documents so people can receive a higher pension. Half of those who listened to the advert also heard Mozart’s Allegro from A Little Night Music playing in the background, while the other half had no music.
Similarly, a separate group were asked to listen to another advert describing how participants could cheat on a seminar paper for college using a website. Again, half of those who listened to the advert also heard James Brown’s I Got You (I Feel Good) playing in the background. In both cases, those who listened to the advert with the background music tended to be more accepting of the unethical, cheating behaviour encouraged in the adverts. In some instances the participants even reported seeing it in a positive light.
**Gently callous**
Another set of studies, published in the journal Psychology of Music, pushed the participants further – by asking them to be callous to another human being.
This time Ziv and her team asked them to do them a favour after completing a grammar test while listening to music in the background. Some heard James Brown's famous hit, others were played a Spanish dance hit called Suavemente by singer Elvis Crespo and a control group heard no music at all.
While the music was still playing, the researchers asked some of the participants to call a female student who needed to take part in the study to earn credits to complete her course, and tell her she could no longer participate. The researchers simply said: “I don’t feel like seeing her.” Another group were asked to tell a student who had missed the past semester due to a sickness that they couldn't have the course material she had been promised after all.
The majority of those who did not listen to music refused the request, which is hardly surprising: who wants to do someone else’s dirty work, especially when it is will harm another person’s chances of completing their studies? Yet Ziv found that in the first test, 65% of those who had music in the background when asked for the favour agreed to do what the researchers asked. In the second test, 82% of the group asked with music agreed.
“It was quite shocking,” says Ziv. “They were being asked to do something that involved hurting someone else and many of them said they would do it.”
So what is going on when people listen to James Brown’s unrelentingly upbeat track? Ziv thinks the answer lies in what happens to our personality when we are happy. “There has been work in the past that has shown when you are in a good mood, you agree more and process information less rigorously. Depressed people tend to be more analytical and are persuaded less easily.
“Christmas music is a perfect example of happy music that can make people more compliant. There are whole teams of people who think about what music to play in shopping malls and adverts to set the right atmosphere.”
Certain features in the music can also play with the way our brains work. Rhythmic sounds, for example, can coordinate the behaviour and thinking of a group of people. Annett Schirmer, a neuroscientist at the University of Singapore, has found that playing a rhythm on a drum can cause brainwaves to synchronise with the beat.
Her findings may help to explain why drums play such a big role in tribal ceremonies and why armies march to the sound of a drum beat. “The rhythm entrains all individuals in a group so that their thinking and behaving becomes temporally aligned,” suggests Schirmer.
It’s still not clear how music might influence behaviour beyond the laboratory, though Ziv suspects the effects may be profound. "In the real world, I think it can go to extremes," she said.
It is a disturbing thought. She points to football fan violence and the role that team songs can play in that, for instance. “Music can create a feeling of group cohesion and agreement,” she says. “When people do things together they are more likely to agree with each other too. This leads to something called groupthink, where there can be a deterioration in moral judgement.”
It may also change the way you vote, she thinks. “Music is used in politics all the time to create enthusiasm for ideas and to cultivate agreement.”
Jason McCoy, a musicologist at the Dallas Baptist University in Texas, agrees it’s plausible, suggesting that music helps to "normalise the narrative" of otherwise immoral messages. He points to other examples in history like where the Nazis broadcast swing music on the radio to get more youngsters to tune into the propaganda messages that accompanied it. McCoy's own work has examined the role that music may have played in making the messages of hate broadcast on the radio during the Rwandan genocide of 1994 seem more acceptable.
Ziv is currently conducting research on how patriotic music and national anthems can increase racist attitudes and antagonism towards others. She is finding that listening to songs that praise the bravery of Israeli soldiers caused Israeli participants to become more hostile towards non-Israelis and Palestinians, for instance.
Clearly music is just one of many factors subtly influencing our behaviour. But they are worth considering the next time your favourite tune hits the radio. To misquote James Brown’s famous song: Just because you feel good, doesn’t mean that you can do no wrong.
downloaded: true | meta_extracted: true | parsed: true
description: There may be a surprising dark side to easy-listening and feel-good tracks
filedate: 2024-10-12 | date: 2017-02-14
pagetype: newsarticle | hostname: bbc.com | sitename: BBC
id: 14,301,897
url: http://cdm.link/2017/05/cybernetic-synth-contains-brain-grown-inventors-cells/
title: This cybernetic synth contains a brain grown from the inventor's cells - CDM Create Digital Music
author: Peter Kirn
Digital? Ha. Analog? Oh, please. *Biological?* Now you’re talking.
The core of this synthesizer was grown in a lab from actual living cells sliced right out of its creator. Skin cells are transformed into stem cells which then form a neural network – one that exists not in code, but in actual living tissue.
Now, in comparison to your brain (billions of neurons and a highly sophisticated interactive structure), this handful of petri dish neurons wired into some analog circuits is impossibly crude. It signifies your brain sort of in the way one antenna on an ant signifies the solar system. But philosophically, the result is something radically different from the technological world to which we’re accustomed. This is a true analog-biological instrument. It produces enormous amounts of data. It learns and responds, via logic that’s in cells instead of in a processor chip. Sci-fi style, biological circuitry and analog circuitry are blended with one another – “wet-analogue,” as the creator dubs it.
And for any of you who hope to live on as a brain in a jar in a Eurorack, well, here’s a glimpse of that.
Artist Guy Ben-Ary comes to Berlin this week to present his invention, the project of a highly collaborative inter-disciplinary team. And “cellF” – pronounced “self,” of course – will play alongside other musicians, for yet an additional human element. (This week, you get Schneider TM on guitar, and Stine Janvin Motland singing.)
There are two ways to think of this: one is as circuitry and cell structures mimicking the brain, but another is this biological art as a way of thinking about the synthesizer as invention. The “brain” lives inside a modular synth, and its own structures of neurons are meant in some way as homage to the modular itself.
Whether or not cellF’s musical style is to your liking, the biological process here is astounding on its own – and lets the artist use as his medium some significant developments in cell technology, ones that have other (non-artistic) applications to the future of healing.
The cells themselves come from a skin biopsy, those skin cells then transformed into stem cells via a ground-breaking technique called Induced Pluripotent technology.
Given the importance of skin cells to research and medical applications, that’s a meaningful choice. The network itself comprises roughly 100,000 cells – which sounds like a lot, but not in comparison to the 100 billion neurons in your brain. The interface is crude, too – it’s just an 8×8 electrode grid. But in doing so, Guy and his team have created a representation of the brain in relationship to analog circuitry. It’s just a symbol, in other words – but it’s a powerful one.
Of course, the way you wire your brain into a modular synthesizer when you use it is profoundly more subtle. But that also seems interesting in these sorts of projects: they provide a mirror on our other interactions, on understanding the basic building blocks of how biology and our own body work.
They also suggest artistic response as a way of art and science engaging one another. Just having those conversations can elevate the level of mutual understanding. And that matters, as our human species faces potentially existential challenges.
It also allows artistic practice to look beyond just the ego, beyond even what’s necessarily human. CTM curator Jan Rohlf talks to CDM about the “post-human” mission of these events this week.
For me personally, the underlying and most interesting question is how we can conceptualize and envision something like post-human music. Of course, humans have long ago begun to appreciate non-human made sounds as music, for example bird-song, insects, water etc. Nowadays we can add to this list with generative algorithms and all kinds of sound producing machines, or brain-wave-music and so on. But the question always is, how do we define music? Is this all really music? Can it be music even if there is no intentional consciousness behind it that creates the sounds with the intent of making music? It is a blurry line, I guess. Animals can appreciate sounds and enjoy them. So we might say that they also participate in something that can be called music-making. But machines? At this stage?
The point is, to have the intention to make music, you need not only some kind of apparatus that creates sounds, but you need a mind that has the intention to interpret the sounds as music. Music is experiential and subjective. There is this quote from Luciano Berio that captures this nicely: “Music is everything that one listens to with the intention of listening to music”.
Following this, we really would need to have an artificial or non-human consciousness that appreciates music and listens to sound with the intent of listening to music. And only then we could speak of post-human music.
Anyhow, thinking of the post-human as way to rethink the position we humans have in this world, it still makes sense to call such artistic experiments post-human music. They contribute in a shift of perspective, in which we humans are not the pivot or the center of the world anymore, but an element among many equal elements, living or non-living, human or non-human, that are intensely interconnected.
| true | true | true |
Digital? Ha. Analog? Oh, please. Biological? Now you’re talking. The core of this synthesizer was grown in a lab from actual living cells sliced right out of its creator. Skin cells are transformed into stem cells which then form a neural network – one that exists not in code, but in actual living tissue. Now, […]
|
2024-10-12 00:00:00
|
2017-05-08 00:00:00
|
article
|
cdm.link
|
CDM Create Digital Music
| null | null |
|
3,464,484 |
http://morepypy.blogspot.com/2012/01/transactional-memory-ii.html
|
Transactional Memory (II)
| null |
Here is an update on the previous blog post about the Global Interpreter Lock (GIL). In the five months since, the point of view has changed quite a bit.
Let me remind you that the GIL is the technique used in both CPython and PyPy to safely run multi-threaded programs: it is a global lock that prevents multiple threads from actually running at the same time. The reason to do that is that it would have disastrous effects in the interpreter if several threads access the same object concurrently --- to the point that in CPython even just manipulating the object's reference counter needs to be protected by the lock.
So far, the ultimate goal to enable true multi-CPU usage has been to remove the infamous GIL from the interpreter, so that multiple threads could actually run in parallel. It's a lot of work, but this has been done in Jython. The reason that it has not been done in CPython so far is that it's even more work: we would need to care not only about carefully adding fine-grained locks everywhere, but also about reference counting; and there are a lot more C extension modules that would need care, too. And we don't have locking primitives as performant as Java's, which have been hand-tuned for ages (e.g. to use help from the JIT compiler).
But we think we have a plan to implement a different model for using
multiple cores. Believe it or not, this is *better* than just removing
the GIL from PyPy. You might get to use all your cores *without ever
writing threads.*
You would instead just use some event dispatcher, say from Twisted, from
Stackless, or from your favorite GUI; or just write your own. From
there, you (or someone else) would add some minimal extra code to the
event dispatcher's source code, to exploit the new transactional features
offered by PyPy. Then you would run your program on a
special version of PyPy, and voilà: you get some form of automatic parallelization.
Sounds magic, but the basic idea is simple: start handling multiple
events in parallel, giving each one its own *transaction.* More about
it later.
## Threads or Events?
First, why would this be better than "just" removing the GIL? Because using threads can be a mess in any complex program. Some authors (e.g. Lee) have argued that the reason is that threads are fundamentally non-deterministic. This makes it very hard to reason about them. Basically the programmer needs to "trim" down the non-determinism (e.g. by adding locks, semaphores, etc.), and it's hard to be sure when he's got a sufficiently deterministic result, if only because he can't write exhaustive tests for it.
By contrast, consider a Twisted program. It's not a multi-threaded program, which means that it handles the "events" one after the other. The exact ordering of the events is not really deterministic, because they often correspond to external events; but that's the only source of non-determinism. The actual handling of each event occurs in a nicely deterministic way, and most importantly, not in parallel with the handling of other events. The same is true about other libraries like GUI toolkits, gevent, or Stackless.
(Of course the Twisted and the Stackless models, to cite only these two, are quite different from each other; but they have in common the fact that they are not multi-threaded, and based instead on "events" --- which in the Stackless case means running a tasklet from one switch() point to the next one.)
These two models --- threads or events --- are the two main models we have right now. The latter is more used in Python, because it is much simpler to use than the former, and the former doesn't give any benefit because of the GIL. A third model, which is the only one that gives multi-core benefits, is to use multiple processes, and do inter-process communication.
## The problem
Consider the case of a big program that has arbitrary complicated dependencies. Even assuming a GIL-less Python, this is likely enough to prevent the programmer from even starting a multi-threaded rewrite, because it would require a huge mess of locks. He could also consider using multiple processes instead, but the result is annoying as well: the complicated dependencies translate into a huge mess of inter-process synchronization.
The problem can also be down-sized to very small programs, like the kind of hacks that you do and forget about. In this case, the dependencies might be simpler, but you still have to learn and use subtle locking patterns or a complex inter-process library, which is overkill for the purpose.
(This is similar to how explicit memory management is not very hard for
small programs --- but still, nowadays a lot of people agree that
automatic memory management is easier for programs of all sizes. I
think the same will eventually be true for using multiple CPUs, but the
correct solution will take time to mature, like garbage collectors did.
This post is a step in hopefully the right direction `:-)`)
## Events in Transactions
Let me introduce the notion of *independent events*: two events are
independent if they don't touch the same set of objects. In a multi-threaded
world, it means that they can be executed in parallel without needing any lock
to ensure correctness.
Events might also be *mostly independent*, i.e. they rarely access the same
object concurrently. Of course, in a multi-threaded world we would still need
locks to ensure correctness, but the point is that the locks are rarely causing
pauses: lock contention is low.
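(To make "independent" concrete, here is a toy illustration, not taken from any real code base, of handlers that touch disjoint objects versus handlers that share one:)

```
# Toy illustration only: two handlers that are independent because they
# touch disjoint objects, and two that are not because they share a counter.

inbox = []
stats = {"requests": 0}

def on_message(msg):
    inbox.append(msg)          # only touches `inbox`

def on_request(req):
    stats["requests"] += 1     # only touches `stats`
# on_message and on_request could run in parallel with no lock at all.

hits = {"count": 0}

def on_hit_a():
    hits["count"] += 1         # both handlers mutate the same object, so

def on_hit_b():
    hits["count"] += 1         # running them in parallel needs a lock
                               # (or, below, a transaction retry on conflict)
```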
Consider again the Twisted example I gave above. There are often several
events pending in the dispatch queue (assuming the program is using 100%
of our single usable CPU, otherwise the whole discussion is moot). The case I am
interested in is the case in which these events are *generally mostly
independent*, i.e. we expect few conflicts between them. However
they don't *have* to be proved independent. In fact it is fine if
they have arbitrary complicated dependencies as described above. The
point is the expected common case. Imagine that you have a GIL-less
Python and that you can, by a wave of your hand, have all the careful
locking mess magically done. Then what I mean here is the case in which
such a theoretical program would run mostly in parallel on multiple
core, without waiting too often on the locks.
In this case, the solution I'm proposing is that with minimal tweaks in the event dispatch loop, we can handle multiple events on multiple threads, each in its own transaction. A transaction is basically a tentative execution of the corresponding piece of code: if we detect conflicts with other concurrently executing transactions, we abort the whole transaction and restart it from scratch.
By now, the fact that it can basically work should be clear: multiple transactions will only get into conflict when modifying the same data structures, which is the case where the magical wand above would have put locks. If the magical program could progress without too many locks, then the transactional program can progress without too many conflicts. In a way, you get even more than what the magical program can give you: each event is dispatched in its own transaction, which means that from each event's point of view, we have the illusion that nobody else is running concurrently. This is exactly what all existing Twisted-/Stackless-/etc.-based programs are assuming.
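(Here is a minimal sketch, not the actual PyPy or Twisted code, of what such a tweak to a dispatch loop could look like; it assumes a hypothetical `transaction` module with the add()/run() interface described in the update at the end of this post:)

```
import transaction   # hypothetical module: transaction.add(callable, *args)
                     # schedules a transaction, transaction.run() executes
                     # all of them (possibly in parallel) and waits.

class Dispatcher:
    def __init__(self):
        self.pending = []            # queue of (handler, event) pairs

    def queue(self, handler, event):
        self.pending.append((handler, event))

    def run_serially(self):
        # The classic single-threaded event loop: one event after the other.
        while self.pending:
            handler, event = self.pending.pop(0)
            handler(event)

    def run_transactionally(self):
        # The "minimal tweak": each pending event becomes its own
        # transaction; conflicting transactions are aborted and retried,
        # so each handler still sees the illusion of running alone.
        while self.pending:
            batch, self.pending = self.pending, []
            for handler, event in batch:
                transaction.add(handler, event)
            transaction.run()        # returns when the whole batch is done
```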
Note that this solution, without transactions, already exists in some other languages: for example, Erlang is all about independent events. This is the simple case where we can just run them on multiple cores, knowing by construction of the language that you can't get conflicts. Of course, it doesn't work for Python or for a lot of other languages. From that point of view, what I'm suggesting is merely that transactional memory could be a good model to cope with the risks of conflicts that come from not having a special-made language.
## Not a perfect solution
Of course, transactional memory (TM) is not a perfect solution either. Right now, the biggest issue is the performance hit that comes from the software implementation (STM). In time, hardware support (HTM) is likely to show up and help mitigate the problem; but I won't deny the fact that in some cases, because it's simple enough and/or because you really need the top performance, TM is not the best solution.
Also, the explanations above are silent on what is a hard point for TM, namely system calls. The basic general solution is to suspend other transactions as soon as a transaction does its first system call, so that we are sure that the transaction will succeed. Of course this solution is far from optimal. Interestingly, it's possible to do better on a case-by-case basis: for example, by adding in-process buffers, we can improve the situation for sockets, by having recv() store in a buffer what is received so that it can be re-recv()-ed later if the transaction is aborted; similarly, send() or writes to log files can be delayed until we are sure that the transaction will commit.
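(To picture the buffering idea for recv(), here is an illustration-only sketch, not PyPy's implementation, of a wrapper that lets an aborted transaction replay the same bytes:)

```
class ReplayableSocket:
    """Illustration only: keep received bytes in a buffer so that a
    transaction which aborts and restarts can re-read the same data
    instead of repeating the irreversible recv() system call."""

    def __init__(self, sock):
        self._sock = sock
        self._buffer = b""   # bytes already obtained from the OS
        self._pos = 0        # how far the current tentative run has read

    def recv(self, n):
        if self._pos >= len(self._buffer):
            self._buffer += self._sock.recv(n)   # the real system call
        data = self._buffer[self._pos:self._pos + n]
        self._pos += len(data)
        return data

    def rollback(self):
        # The surrounding transaction aborted: forget the tentative
        # reads but keep the buffered bytes for the retry.
        self._pos = 0

    def commit(self):
        # The transaction committed: the read bytes are really consumed.
        self._buffer = self._buffer[self._pos:]
        self._pos = 0
```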
From my point of view, the most important point is that the TM solution comes from the correct side of the "determinism" scale. With threads, you have to prune down non-determinism. With TM, you start from a mostly deterministic point, and if needed, you add non-determinism. The reason you would want to do so is to make the transactions shorter: shorter transactions have less risks of conflicts, and when there are conflicts, less things to redo. So making transactions shorter increases the parallelism that your program can achieve, while at the same time requiring more care.
In terms of an event-driven model, the equivalent would be to divide the response of a big processing event into several events that are handled one after the other: for example, the first event sets things up and fires the second event, which does the actual computation; and afterwards a third event writes the results back. As a result, the second event's transaction has little risks of getting aborted. On the other hand, the writing back needs to be aware of the fact that it's not in the same transaction as the original setting up, which means that other unrelated transactions may have run in-between.
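(A hedged sketch of that splitting, reusing the assumed add() interface from above; prepare() and heavy_computation() are just placeholders:)

```
import transaction   # hypothetical add()/run() module, as sketched above

def prepare(request):            # placeholder for the setting-up work
    return request["payload"]

def heavy_computation(data):     # placeholder for the long, mostly-local work
    return sum(data)

def handle_request_split(request, results):
    # Three short events instead of one long one: setup -> compute -> write back.
    def setup():
        data = prepare(request)            # touches shared state briefly
        transaction.add(compute, data)     # fires the second event

    def compute(data):
        value = heavy_computation(data)    # long, but local: low abort risk
        transaction.add(write_back, value) # fires the third event

    def write_back(value):
        # Runs in a different transaction than setup(): unrelated
        # transactions may have committed in between.
        results[request["key"]] = value

    transaction.add(setup)
```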
## One step towards the future?
These, and others, are the problems of the TM approach. They are "new" problems, too, in the sense that the existing ways of programming don't have these problems.
Still, as you have guessed, I think that it is overall a win, and possibly a big win --- a win that might be on the same scale for the age of multiple CPUs as automatic garbage collection was 20 years ago for the age of RAM size explosion.
Stay tuned for more!
--- Armin (and reviews by Antonio and Fijal)
**UPDATE:** please look at the tiny transaction module I wrote as an example. The idea is to have the same interface as this module, but implemented differently. By making use of transactional memory internally, it should be possible to safely run on multiple CPUs while keeping the very same programmer interface.
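(As a rough usage sketch; the exact interface may differ from the linked module:)

```
import transaction    # the tiny module linked above (interface assumed)

counters = {"a": 0, "b": 0}

def bump(name):
    counters[name] += 1            # each call runs as its own transaction

for i in range(1000):
    transaction.add(bump, "a" if i % 2 else "b")

transaction.run()                  # like starting the workers plus join()
print(counters)                    # {'a': 500, 'b': 500}
```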
## 22 comments:
Great article, great solution to a big problem...
I am really looking forward to this :-)
As an experiment I have developed Pyworks, which makes objects concurrent and methods asynchronous. But it makes little sense to do performance tests on a multicore CPU because of the GIL.
The code for Pyworks can be found at https://bitbucket.org/raindog/pyworks
> These two models --- threads or events --- are the two main models we have right now.
Where does Go-style concurrency fit in?
If you go that road, you will certainly find out that Transactional Memory is much, much harder to get right than it looks like in today's effectful/imperative languages. Sure, it looks wonderful on paper, but if your language doesn't help you control side-effects it will give you a very hard time.
Currently, there is satisfying STM support in Haskell (because of its tight type-based control of side-effects) and Clojure (because of its tight control on mutability), and it might be getting into Scala.
I doubt Python can easily get such control, at least without an important reorganization of idiomatic practices and frameworks, that go beyond the "let's be event-driven" decision. Which makes your "this is going to work magically" story a bit hard to believe.
There has been intense research on this topic for some decades now, and several attempts at getting it to work in current mainstream languages have mostly failed.
See for example this long retrospective of the STM.NET effort at Microsoft Research, by Joe Duffy:
A (brief) retrospective on transactional memory
or this shorter blog post by Brian Hurt:
The problem with STM: your languages still suck.
I was a bit disappointed that you didn't cite any of the relevant literature in your post. It made me suspicious of "reinventing the wheel"...
One major use-case for multithreading involves a large, unchanging data structure which many threads access. I.e., the data structure is loaded by a parent task, then not modified again; a number of threads are then spawned to use it for calculations.
In CPython, the GIL makes this impossible if only because the reference counters need to be protected. With Cython in threads, however, you can turn off the GIL and do some work on C-style data structures.
I'm wondering whether the STM PyPy effort could have a very useful, and very early, benefit: simply enabling an unchanging data structure to be accessed by a number of processors via the kinds of events you describe. There wouldn't be a need for transactions, because the programmer would take responsibility for only sharing unchanging structures between simultaneously-executing events.
But it seems like the basic requirements for this kind of facility might be met in in early stage of STM development. And a solution that allowed multiple processors to access large, unchanging structures would be very useful in certain applications. I know I have one in mind that I'm looking at CPython/Cython for, but I'd rather see if I could get the performance I need from PyPy.
Just thought it was worth mentioning.
@Anonymous: in the extract you cite I meant "the two main models in Python". As far as I can tell, Go does concurrency by enforcing all communications to go via channels, so I would classify it as a "special-made" language. This solution might be nice and usable, but it does not really work at all in languages like Python.
@Armin, CSP may be built into Go, but IMO this was a mistake, there is no requirement for it to be a language feature; it fits nicer as library. See [python-csp] for a python implementation.
[python-csp] http://code.google.com/p/python-csp/wiki/Tutorial
@gasche: I know about Haskell, Clojure and Scala, and I just read the two blog posts you pointed to.
I'm not talking about giving explicit TM to the end programmers. I'm instead considering TM as an internal, implementation-only feature. That makes it very similar to GCs.
I know the points and issues of traditional TM systems, which are nicely reported by Joe Duffy in "A (brief) retrospective on transactional memory". These are of course perfectly valid issues, but I think they do not apply (or "not that much") in the particular context I'm talking about. For example, this includes the large sections about nested transactions, and about consistency between the transactional and non-transactional worlds (Weak or Strong Atomicity, The Privatization Problem). Even "Where is the Killer App?" is obvious in this case: any existing Twisted App is potentially a Killer App.
Sorry for not including references to papers. I must admit I don't know any paper that describes a similar use case for TM.
The link to the previous blog post is broken. It should be: http://morepypy.blogspot.com/2011/06/global-interpreter-lock-or-how-to-kill.html
> @Armin, CSP may be built into Go, but IMO this was a mistake, there is no requirement for it to be a language feature; it fits nicer as library. See [python-csp] for a python implementation.
Stackless (which PyPy enables) supports Go-style channels as well, no?
http://www.stackless.com/wiki/Channels
Your idea could work for other easy-to-inject-into points, such as loops and comprehensions. Especially with much of the work in pypy already done for identifying information about loops.
How does this compare to grand central dispatch and blocks? http://en.wikipedia.org/wiki/Grand_Central_Dispatch
Events are a very good way to model concurrency, and are widely used. It is a great place to dispatch concurrency into parallelism.
Closures/blocks provide a fairly decent way to get some of the protection of STM - and in many programs give you the 80% solution. For code that plays nicely and avoids mutable, or global data - this works. Luckily, a lot of event based code is already written in this way. As you say, they are "generally mostly independent".
Making the bad cases a quick fail, like in JavaScript worker threads could be an ok option. As soon as someone tries to access global data(do a system call, access the DOM, or access data outside the closure even), the program would fail there. Then you could fix those cases, or "add non-determinism" as you say. I think I'd prefer fail fast here, rather than have to detect these problems, and have them silently pass by.
You still have scheduling problems, and trying to figure out task size. As well, this does not solve lots of other problems. However, it is cool that it could be applied automatically, and probably 'safely'.
Another random thought... you could probably mark chunks of code as 'pure' as you run through them, and if they do a system call or mutate global data mark them as 'unpure' and don't try them again.
I very much look forward to reading your results as you implement more.
When Armin gets this excited I'd fasten my seatbelt and put my goggles on.
Thank you for letting me be an (otherwise mostly silent) observer.
Please keep shifting boundaries!
- Eric
Update: please look at the tiny transaction module I wrote as an example. The idea is to have the same interface as this module, but implemented differently. By making use of transactional memory internally, it should be possible to safely run on multiple CPUs while keeping the very same programmer interface.
https://bitbucket.org/arigo/arigo/raw/default/hack/stm/transactionmodule/
@Armin: That transaction code looks very simple. It seems trivial to implement a map/mapReduce style function on top of your transaction module.
It is a very similar API to worker pool APIs which many thread using programs use. The main difference is that you combine the join() in the run method. It seems that a threaded web server for example could use this? What would happen if each incoming request comes in, and is put into the transaction (and say the 10th request has an error)? Would it be better to use multiple transactions?
Have you thought how thread local storage would work?
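A sketch of that map-style helper, built on the assumed add()/run() interface from the post (illustration only):

```
import transaction    # assumed add()/run() interface from the post

def transactional_map(func, items):
    """Apply func to every item, each call in its own transaction.
    Result order is preserved; execution order is not."""
    items = list(items)
    results = [None] * len(items)

    def work(index, item):
        results[index] = func(item)   # distinct slots: mostly independent

    for index, item in enumerate(items):
        transaction.add(work, index, item)
    transaction.run()                 # the combined start + join
    return results

# transactional_map(lambda x: x * x, range(10)) -> [0, 1, 4, ..., 81]
```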
@notme: yes, a web server or anything can use this instead of using threads. It's of course missing a convincing select() or poll() version for that.
The details haven't been thought out; right now an exception interrupts everything. In an STM model it's unclear if concurrent transactions should still be allowed to complete or not. Anyway the point is that exceptions should not really occur because precisely they interrupt everything --- you would typically add instead in every transaction code like "try: .. except: traceback.print_exc()".
Thread local storage: what would be the point?
I also see no reason for Thread local memory.
I like the idea of thinking about TM in the same line as GC. When you have GC the changes to the language is that you don't need to write free/dealloc.
Having TM would mean that you don't have to write acquire_GIL
The devil's in the details.
I'm not sure I buy your conclusions here. STM is not a panacea for solving concurrency issues, and it has some key limitations that limit its general applicability.
On what granularity do you plan to have transactions? How do you know? Perhaps the VM will have enough knowledge of a given thread's activities to limit transactional overhead to only those structures in memory that are shared, but there still needs to be some indirection in case another thread hops in and starts making changes.
Where do transactions start and end? In STMs I know, the in-transaction overhead for reading and writing data is *much* higher, since it needs to know if someone else has committed a transaction first and be able to roll back.
Perhaps this is all intended to be hidden, and you never actually have "threads" that the user can see. But if you're going to parallelize, you'll have threads *somewhere* that are going to contend for resources. If they're going to contend for resources, even in an STM, they're going to have to check for contention, register their interest, and then you're back to the indirection overhead.
Perhaps I'm not understand what your end goal is. You can't simply turn the world into a series of transactions unless you want every read and write to have transaction overhead or you have some clear way of limiting transaction overhead to only where it's needed. You cite Erlang...but Erlang deals with immutable objects, and there's far less need for anything like an STM. Others have mentioned Clojure...but again, Clojure is mostly immutable structures, and transactional overhead is limited to Refs, where you'll make single coarse-grained reads and writes.
Am I missing the point? Are you not suggesting VM-wide STM, with the resulting transactional overhead for every read and write?
@Charles: Indeed, I am suggesting VM-wide STM, with the resulting transactional overhead for every read and write. I actually got such a VM yesterday (with no GC): it seems to be about 10x slower on a single thread.
Note that even 10x slower is a plus if it scales to dozens of processors. But of course, a better point of view is that some years ago the regular pypy *was* 10x slower than CPython. It was a lot of efforts but we managed to make it only 1.5-2x slower. And this is all without counting the JIT. If STM bogs down to a generally-not-triggered read barrier before every read, then the performance impact could be well under 2x.
Please note also that I don't care about Java-like performance where even losing 10% of performance would be a disaster. If we end up with a pypy-tm that is 2x slower than a regular pypy, I would be quite happy, and I believe that there is a non-negligible fraction of the Python users that would be, too.
On granularity: for now I'm going with the idea that the granularity is defined "naturally" in the source program as the amount of work done every time some central dispatch loop calls some code. There might be several dispatch loops in total, too. This is true in the cases I can think of: typical Twisted or Stackless programs, pypy's "translate.py", the richards benchmark, etc.
Please look at http://paste.pocoo.org/show/539822/ for an example of what I'm talking about. It's a diff against the standard richards.py: it is a pure Python user program in which I added calls to the new 'transaction' module. At this level there is no hint of Transactional Memory.
@Gary Robinson: (off-topic:) for this kind of use case, you can use os.fork() after the immutable data is ready. It "kind of works" both in pypy and in cpython, although not really --- in cpython the reference counts are modified, causing the pages to get unshared between processes; and in pypy the garbage collector (GC) has the same effect, so far. It could be solved in pypy by more tweaks the GC.
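A bare-bones sketch of that fork-after-loading pattern (Unix-only, and subject to the refcount/GC page-unsharing caveats above):

```
import os

# Parent builds the large, effectively-immutable structure once.
big_table = {i: i * i for i in range(10**6)}

children = []
for worker_id in range(4):
    pid = os.fork()
    if pid == 0:
        # Child: reads big_table through copy-on-write pages and works
        # on its own slice; it never mutates the shared structure.
        total = sum(big_table[i] for i in range(worker_id, 10**6, 4))
        print(worker_id, total)
        os._exit(0)
    children.append(pid)

for pid in children:
    os.waitpid(pid, 0)
```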
@armin:
@Anonymous: in the extract you cite I meant "the two main models in Python". As far as I can tell, Go does concurrency by enforcing all communications to go via channels, so I would classify it as a "special-made" language. This solution might be nice and usable, but it does not really work at all in languages like Python.

Armin, Stackless Python uses a model that at the API level is very similar to Go. Go borrows from the Bell Labs family of languages (i.e. Newsqueak). The fundamental idea is that message passing is used to share information between threads/processes/coroutines. In this regard, Go is in the same camp as, say, Erlang (although the messaging systems are different).
What I think is interesting and workable for Python are efforts in languages like Polyphonic C# (see the paper "Scalable Join Patterns") and Concurrent/Parallel ML, where lock-free libraries and STM techniques are used under the hood to improve the efficiency of the messaging/synchronisation system. In this fashion, the programmer has a conceptually clean concurrency model and still can make the important decisions about how to partition the problem.
Cheers,
Andrew
@daniel
@Armin, CSP may be built into Go, but IMO this was a mistake, there is no requirement for it to be a language feature; it fits nicer as library. See [python-csp] for a python library.

I have looked at Python-CSP a long time ago. I recall it being verbose. However I use Stackless Python. And using PyPy's stackless.py, I implemented select() and join patterns. Sometimes I wish I had language support: they cut down on silly mistakes and make the code less verbose for simple cases. However what I have found is that the language can get in the way. For instance, in Go, one has to come up with hacks to do something as simple as a select on an arbitrary number of channels. Perhaps I am wrong but I suspect stuff like select()'s design was influenced by the fact Newsqueak was originally designed to make a windowing system easier to write. So one is monitoring only a handful of channels. In contrast, this is not the way Stackless Python programmes are written.
Cheers,
Andrew
A link to a group that did the same thing (thanks a lot Andrew for this link!):
http://research.microsoft.com/en-us/projects/ame/
In particular the May 2007 paper (HotOS) nicely summarizes exactly what I'm trying to say, and I think it is clearer than me, if I have to judge from feedback :-)
Speaking as someone maintaining a large application that uses Twisted, this sounds great.
| true | true | true |
Here is an update about the previous blog post about the Global Interpreter Lock (GIL). In 5 months, the point of view changed quite a bit...
|
2024-10-12 00:00:00
|
2012-01-14 00:00:00
| null | null |
blogspot.com
|
morepypy.blogspot.com
| null | null |
8,553,547 |
http://blogs.wsj.com/digits/2014/10/31/judge-rules-suspect-can-be-required-to-unlock-phone-with-fingerprint/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
335,661 |
http://www.nytimes.com/2008/10/19/magazine/19Autism-t.html?hp=&pagewanted=all
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
23,652,643 |
http://www.ml-illustrated.com/2020/06/25/ARM-Macs-virtualization-different-take.html
|
ARM Macs and Virtualization: A Different Take (Opinion)
|
Machine Learning Illustrated
|
# ARM Macs and Virtualization: A Different Take (Opinion)
The transition from Intel to Apple Silicon Macs is a huge deal, but some are lamenting this shift as a loss to developers, especially those who rely on virtualization such as Docker. The concern is that under x86 emulation instead of hypervisor, the performance hit could be significant.
As the case may be, my take is the issue wouldn’t be that serious, while the upside for developers is huge, especially for those who primarily develop for Apple’s ecosystem. And if my hunch is correct, that ecosystem will expand to server-side soon enough. Let me explain.
Just to be clear, I’m a huge fan of Docker, and I’m a ML practitioner working
primarily on server-side within Linux, so I’m very familiar with the importance
of the development machines being consistent as server-side. Funnily enough,
I’d say this is the biggest rationale for Apple going ARM for Macs, which is to make
the development environment *exactly the same* as their deployment targets, namely
iOS, iPadOS, Apple Watch, Apple TV, and soon, MacOS.
That is to say, the current situation is that Xcode development is done on x86, emulated in x86 simulators, and re-built for ARM for device deployment. This has been the state since the first iPhone and Apple has done a superb job maintaining the seamlessness of this process ever since (as far as I know), but more edge cases are beginning to crop up.
### Edge Case or Tentpole?
From my narrow view of ML, the one I experience is the lack of emulation of Apple’s Neural Engine (ANE) for accelerating model inferencing. Once a model is compiled to Core ML, the emulator would execute the model on CPU and possibly GPU, but its behavior is different on device when offloaded to ANE. It’s not only a performance difference, but also floating point differences and possibly quantization nuances.
What this means is to truly test a ML model’s behavior on Apple devices, one has to deploy to actual devices, likely via a cable to enable debugging. As you guessed it, this also means that you’d have to have multiple physical devices if one is to test against different models. Multiply this with different iOS versions to test against, it quickly gets out of hand. If Apple wants to (and likely plans to) add more custom hardware, the situation is untenable for both Apple and its developers.
### Virtualized Apples?
With ARM (and other custom silicon) on the development machines, we already know iOS apps will run natively so there’s no need for emulators. I wouldn’t be surprised if these apps are running within individual virtualized instances, to as closely as possible create the same behavior as actual devices. Apple could go as far as virtualize the hardware resources and capabilities so performance would be emulated faithfully also, greatly simplifying the development, testing, and debugging process for developers.
For ANE, this switch will be huge. People have had to deal with “numerical differences” when running ML models on CPU compared to ANE, a process that is slow to debug via tethered devices. With the Mac having the same hardware as the target devices, there’s consistency and no hidden surprises. Whenever the development cycle is shortened and opaque differences removed, it’s a good thing.
What about the downside of Docker becoming 2 to 5x slower without hypervisor? While that is indeed a downside, I’d argue that for local Docker instances, they are better used for functional testing and not part of the core development cycle. Coming from the Python world, I have had no problems developing software in native OS and only use local Docker for validation prior to deployment. The other main use case for Docker of bringing up server clusters locally would be slower, but again, shouldn’t be core part of the development cycle. There’s always the option for remote Docker or even better, deploy to sandbox and integration environments for testing.
### ARMed Cloud?
That brings me to the last point about the ARM transition for Macs, which is the aspect of developing for server-side, not an area of focus for Apple. However, if Docker and virtualization needs are primarily for server-side development, then in the spirit of consistency, the problem of x86 emulation can be dealt with if Apple offers ARM-based cloud servers. Having this uniformity would truly enable developers to write code on their ARM Macs, test across devices locally, and deploy to iOS devices and server-side, all from one code base, without translation, emulation, or hidden gotchas. Pretty sweet landscape for Apple developers if you ask me.
### Closing
Ever since I bought my Titanium MacBook Pro many years ago, I have been a fan of Apple’s combination of software and hardware, but more importantly, its long term approach to their product developments. This ARM Mac transition has been long in the making, and I, for one, am excited again to buy an ARM MacBook and the future it’d usher in.
| true | true | true |
The transition from Intel to Apple Silicon Macs is a huge deal, but some are lamenting this shift as a loss to developers, especially those who rely on virtualization such as Docker. The concern is that under x86 emulation instead of hypervisor, the performance hit could be significant.
|
2024-10-12 00:00:00
|
2020-06-25 00:00:00
| null |
article
|
ml-illustrated.com
|
Machine Learning Illustrated
| null | null |
359,448 |
http://support.mozilla.com/tiki-view_forum_thread.php?locale=en-US&forumId=1&comments_parentId=195460
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
13,074,868 |
https://github.com/2fd/graphdoc
|
GitHub - 2fd/graphdoc: Static page generator for documenting GraphQL Schema
| null |
- Facebook Test Star Wars
- Github V4 API
- Shopify API
- Pokemon GraphQL
` npm install -g @2fd/graphdoc`
` > graphdoc -e http://localhost:8080/graphql -o ./doc/schema`
` > graphdoc -s ./schema.graphql -o ./doc/schema`
### Generate documentation for the "modularized schema" of graphql-tools
` > graphdoc -s ./schema.js -o ./doc/schema`
`./schema.graphql` must be able to be interpreted with graphql-js/utilities#buildSchema
` > graphdoc -s ./schema.json -o ./doc/schema`
`./schema.json` contains the result of GraphQL introspection query
```
// package.json
{
  "name": "project",
  "graphdoc": {
    "endpoint": "http://localhost:8080/graphql",
    "output": "./doc/schema"
  }
}
```
And execute
` > graphdoc`
```
> graphdoc -h
Static page generator for documenting GraphQL Schema v2.4.0
Usage: node bin/graphdoc.js [OPTIONS]
[OPTIONS]:
-c, --config Configuration file [./package.json].
-e, --endpoint Graphql http endpoint ["https://domain.com/graphql"].
-x, --header HTTP header for request (use with --endpoint). ["Authorization: Token cb8795e7"].
-q, --query HTTP querystring for request (use with --endpoint) ["token=cb8795e7"].
-s, --schema, --schema-file Graphql Schema file ["./schema.json"].
-p, --plugin Use plugins [default=graphdoc/plugins/default].
-t, --template Use template [default=graphdoc/template/slds].
-o, --output Output directory.
-d, --data Inject custom data.
-b, --base-url Base url for templates.
-f, --force Delete outputDirectory if exists.
-v, --verbose Output more information.
-V, --version Show graphdoc version.
-h, --help Print this help
```
In graphdoc a plugin is simply an object that controls the content that is displayed on every page of your document.
This object should only implement the `PluginInterface`. To create your own plugin you should only create it as a `plain object` or a `constructor` and export it as `default`.

If you export your plugin as a constructor, it will receive three parameters when it is initialized:

- `schema`: the full result of the GraphQL introspection query
- `projectPackage`: the content of the `package.json` of the current project (or the content of the file defined with the `--config` flag)
- `graphdocPackage`: the content of the `package.json` of graphdoc
For performance reasons, all plugins receive references to the same objects and therefore should not modify them directly, as that could affect the behavior of other plugins (unless of course that is your intention).
```
// es2015 export constructor
export default class MyPlugin {
  constructor(schema, projectPackage, graphdocPackage) {}
  getAssets() {
    /* ... */
  }
}
```
```
// es2015 export plain object
const myPlugin = {
  getAssets() {
    /* ... */
  },
};

export default myPlugin;
```
```
// export constructor
function MyPlugin(schema, projectPackage, graphdocPackage) {
  /* ... */
}

MyPlugin.prototype.getAssets = function() {
  /* ... */
};
exports.default = MyPlugin;
```
```
// export plain object
exports.default = {
  getAssets: function() {
    /* ... */
  }
};
```
You can use the plugins in 2 ways.
```
> graphdoc -p graphdoc/plugins/default \
-p some-dependencies/plugin \
-p ./lib/plugin/my-own-plugin \
-s ./schema.json -o ./doc/schema
```
```
// package.json
{
  "name": "project",
  "graphdoc": {
    "endpoint": "http://localhost:8080/graphql",
    "output": "./doc/schema",
    "plugins": [
      "graphdoc/plugins/default",
      "some-dependencie/plugin",
      "./lib/plugin/my-own-plugin"
    ]
  }
}
```
TODO
TODO
| true | true | true |
Static page generator for documenting GraphQL Schema - 2fd/graphdoc
|
2024-10-12 00:00:00
|
2016-08-09 00:00:00
|
https://opengraph.githubassets.com/18a920c5b77612bfbce7112832d6192858e47a0d0f426339bfb31969ddc7ba80/2fd/graphdoc
|
object
|
github.com
|
GitHub
| null | null |
21,344,892 |
https://www.sapiens.org/archaeology/woodstock-archaeology/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
32,032,118 |
https://timharford.com/2022/07/learning-to-think-well-involves-hearts-as-well-as-minds/
|
Learning to think well involves hearts as well as minds
|
Tim Harford
|
What does it mean to “learn how to think”? Is it a matter of learning some intellectual skills such as fluent reading, logic and clear expression? Does it require familiarity with some canonical texts or historical facts? Perhaps it’s all about correcting certain biases that cloud our judgment? I recently read a thought-provoking essay by the psychologist Barry Schwartz, best known for his book The Paradox of Choice.
Writing a few years ago in The Chronicle of Higher Education, Schwartz argued that one of the goals of a university education, especially a liberal arts education, is to teach students how to think. The trouble is, said Schwartz, “nobody really knows what that means”.
Schwartz proposes his own ideas. He is less interested in cognitive skills than in intellectual virtues.
“All the traits I will discuss have a fundamental moral dimension,” he says, before setting out the case for nine virtues: love of truth; honesty about one’s own failings; fair-mindedness; humility and a willingness to seek help; perseverance; courage; good listening; perspective-taking and empathy; and, finally, wisdom — the word Schwartz uses to describe not taking any of these other virtues to excess.
One only has to flip the list to see Schwartz’s point. Imagine a person who is hugely knowledgeable and brilliantly rational, yet who falls short on these virtues, being indifferent to truth, in denial about their own errors, prejudiced, arrogant, easily discouraged, cowardly, dismissive, narcissistic and prone to every kind of excess. Could such a person really be described as knowing how to think? They would certainly not be the kind of person you’d want to put in charge of anything.
“My list was meant to start the conversation, not end it,” Schwartz told me. So I sent his list to some people I respect, both in and adjacent to academia, to see what they made of it. The reaction was much the same as mine: almost everyone liked the idea of intellectual virtues, and almost everyone had their own ideas about what was missing.
The Cambridge statistician Sir David Spiegelhalter raised the idea of intellectual variety, since working on disparate projects was often a source of insight. Hetan Shah, chief executive of the British Academy, suggested that this variety, and in particular the ability to see the connection between different parts of a system, was the most important intellectual virtue. He also argued for a sense of humour: if we can’t play with ideas, even dangerous ideas, we are missing something.
Dame Frances Cairncross has chaired several notable academic institutions. She suggested that if one accepted the premise that intellectual virtues were also moral virtues, a greater one was “humanity . . . a sympathy for the human condition and a recognition of human weakness”. She also suggested the virtue of “getting stuff done”, noting the line from the Book of Common Prayer, “we have left undone those things which we ought to have done.” True enough. What would be the value of having all these intellectual virtues if we did not exercise them, and instead spent our days munching popcorn and watching TV?
Tom Chatfield, author of How To Think, mentioned persuasiveness. What is the point of thinking clearly if you cannot help anyone else to do likewise? This is fair, although persuasiveness is perhaps the intellectual virtue that most tempts us into the vices of arrogance, partisanship and an unbalanced treatment of the facts.
Almost everyone raised the omission that was much on my mind: curiosity. Curiosity was not on Schwartz’s list, except perhaps by implication. But curiosity is one of the central intellectual virtues. Curiosity implies some humility, since it is an acknowledgment that there is something one doesn’t yet understand. Curiosity implies open-mindedness and a quest to enlarge oneself. It is protective against partisanship. If we are curious, many other intellectual problems take care of themselves. As Orson Welles put it about the film-going audience: “Once they are interested, they understand anything in the world.”
Very good. Range, systemic thinking, humanity, humour, getting things done, persuasiveness, curiosity. Other plausible virtues were suggested, too; alas, this columnist must also display the virtue of brevity.
But one of my correspondents had a sharply different response to Schwartz’s emphasis on explicitly moral intellectual virtues — tellingly, the one most actively involved in teaching. Marion Turner, professor of English literature at Oxford University, put it frankly: “I’m not trained to teach students how to be good people, and that’s not my job.”
It’s a fair point. It is very pleasant to make a list of intellectual virtues, but why should we believe that academics can teach students courage, humility or any other virtue? Yet if not academics, then who? Parents? Primary schoolteachers? Newspaper columnists? Perhaps we should just hope that people acquire these virtues for themselves? I am really not sure.
Barry Schwartz is on to something, that is clear. Facts, logic, quantitative tools and analytical clarity are all very well, but the art of thinking well requires virtues as well as skills. And if we don’t know who will teach those virtues, or how to teach them, that explains a lot about the world in which we now live.
*Written for and first published in the Financial Times on 10 June 2022.*
*The paperback of The Data Detective was published on 1 February in the US and Canada. Title elsewhere: How To Make The World Add Up. *
*I’ve set up a storefront on Bookshop in the United States and the United Kingdom. Links to Bookshop and Amazon may generate referral fees.*
| true | true | true |
What does it mean to “learn how to think”? Is it a matter of learning some intellectual skills such as fluent reading, logic and clear expression? Does it require familiarity with some canonical te…
|
2024-10-12 00:00:00
|
2022-07-07 00:00:00
|
article
|
timharford.com
|
Tim Harford
| null | null |
|
29,212,918 |
https://www.smithsonianmag.com/smithsonian-institution/the-day-winston-churchill-lost-his-cigar-180947770/
|
The Day Winston Churchill Lost His Cigar
|
Smithsonian Magazine; Natasha Geiling
|
# The Day Winston Churchill Lost His Cigar
Thanks to a gift of over 100 photographs, the National Portrait Gallery celebrates Yousuf Karsh’s iconic photography with an installation of 27 portraits
A portrait of Winston Churchill photographed by Yousuf Karsh during the darkest days of World War II reveals a leader resolute in the face of crisis. The year was 1941; Churchill was visiting Canada, and the Nazi puppet government in France had just sworn to wring the neck of Britain like a chicken. Staring straight into Karsh’s camera, Churchill’s eyes are steely, almost obstinate. Moments prior, he had stood in the Canadian parliament, hands on hips, and announced passionately: “Some chicken! Some neck!”
When Karsh took the iconic photo—the one that would grace the cover of *Life* magazine and launch his international career—he was a young man, excited but nervous about photographing the historic figure. MacKenzie King, former prime minister of Canada, had first noticed Yousuf when he was photographing a meeting with FDR. King asked Karsh if he would photograph Churchill during the Canadian visit, and Karsh agreed.
To prepare, Karsh practiced with a subject similar in stature to Churchill from the waist down. He set up his equipment in the speaker’s chamber in the Canadian House of Parliament, a huge Tudor apartment that was used for the speaker to entertain guests. Wrangling hundreds of pounds of photography equipment, Karsh next waited patiently for the moment when Churchill would finish his speech and exit the House of Commons and enter the speaker’s chamber.
On the tail of his impassioned speech, Churchill came striding into the chamber, arms outstretched, hands open: in one, somebody placed a glass of brandy, in the other, a Havana cigar. It took a moment, but Churchill soon noticed the small, young photographer standing amid his mass of equipment.
“What’s this? What’s this?” Churchill demanded.
Karsh realized, suddenly, that no one had told Churchill that he was to have his picture taken. "Sir, I hope I will be worthy enough to make a photograph equal to this historic moment."
Churchill, reluctantly, acquiesced—sort of. “You may take one.”
One picture, one chance.
Churchill relinquished his glass to an assistant and began to sit for the photograph, still puffing on his cigar. Karsh readied the equipment but, just before taking the picture, he placed an ashtray in front of Churchill, asking that the prime minister remove the cigar from his mouth.
Churchill obstinately refused, and Karsh was perplexed: the smoke from the cigar would certainly obscure the image. He returned to the camera, ready to take the picture—but then with lightning speed, Karsh leaned over the camera and plucked the cigar from Churchill's lips.
“He looked so belligerent, he could have devoured me,” Karsh would remember later, and it’s a belligerence that comes across in the famous photograph—a scowl over the pilfered cigar that came to represent, seemingly, a fierce glare as if confronting the enemy.
Karsh’s iconic Churchill portrait, as well as 26 other photographs, are on display at the National Portrait Gallery through April 27, 2014. The installation is made possible thanks to a large gift—more than 100 photographs—to the Portrait Gallery by Yousuf Karsh’s wife Estrellita Karsh.
“Yousuf was so thrilled when he came over as a poor Armenian immigrant boy in 1927 to be in this country. He always called it (Canada, America and the United States) the sunshine of freedom,” says Mrs. Karsh. “He would be thrilled that his photographs of Americans are here—and what better home than the Smithsonian, really, what better home.”
The 27 photographs span Karsh's long career, from the oldest image (a 1936 black and white of FDR) to a color photograph of César Chávez, taken 11 years before Karsh's death in 2002.
“In selecting the portraits to feature, I wanted to spotlight Karsh’s ability to create distinctive and evocative images of such a wide range of famous Americans—from Eleanor Roosevelt to Colonel Sanders to I.M. Pei,” Ann Shumard, curator of the exhibit, explains. “It is my hope that visitors to the exhibition will come away with a new appreciation for Karsh’s singular artistry as a portraitist.”
Spanning nearly six-decades, Karsh gained a reputation for photographing some of the most iconic and influential men and women in the world, from Fidel Castro to Queen Elizabeth. But behind the iconic faces lies a kind of radiant humanity that Karsh was so skilled at capturing: the person behind the mask of society.
“His honest, open approach, his great ability to have the viewer give the best in himself—that comes through,” Mrs. Karsh explains. “And this is what people see whether they’re going to see it in 1920, 1930, 2015 or 3000. That is the element that remains.”
*The Churchill portrait is on view until November 2, 2014. From May 2, 2014 to November 2, 2014, the museum will display an ongoing rotation of a selection of portraits from the Karsh collection. To see a selection of the portraits online, visit our photo collection.*
| true | true | true |
Thanks to a gift of over 100 photographs, the National Portrait Gallery celebrates Yousuf Karsh's iconic photography with an installation of 27 portraits
|
2024-10-12 00:00:00
|
2013-11-19 00:00:00
|
article
|
smithsonianmag.com
|
Smithsonian Magazine
| null | null |
|
28,435,899 |
https://brockallen.com/2017/11/07/the-userinfo-endpoint-is-not-designed-for-apis/
|
The userinfo endpoint is not designed for APIs
| null |
# The userinfo endpoint is not designed for APIs
A common (but incorrect) practice I often see people doing is using the OIDC userinfo endpoint from APIs. It seems like a natural thing to want to do — you have an access token in your API and it contains identity scopes. This means the access token can be used at the userinfo endpoint to access the identity data for those scopes (like profile, email, phone, etc.). The problem is that the userinfo endpoint is designed for the client, not the API (after all, userinfo is defined in the OIDC spec, not the OAuth2 spec).
Another way to think of the problem is that the API has no control over the scopes that access token has been granted. The client controls those scopes. This means your API is taking a dependency on the identity information the client is configured to obtain from the token server, and this is quite brittle. If the client ever changes what identity scopes they request, your API is affected.
A better approach is to configure your API to request the claims that it needs for the user. That’s why there’s a *UserClaims* property on the *ApiResource* (and *Scope*) configuration object model(s) in IdentityServer. These configured user claims will be delivered to the API in the access token itself (or from introspection if using reference tokens). This allows your API to be explicit about what it needs about the user, regardless of the client’s configuration.
Brock, this is such an interesting topic. Depending on who you talk to, you get a different answer. Auth0 loves to pass JWTs. Other OAuth gurus insist passing the access token to the API is fine–even positing that's what access tokens were made for. Your solution–to configure your API to request the claims that it needs for the user–sounds ideal. But how does this translate into a standard OpenID Connect flow?
I don’t follow. In IdentityServer we have config to model APIs, and as part of that config we allow an API to express what identity data it needs for a user. In a sense, it’s unrelated to OIDC.
Oh ok. Thanks for the reply.
What about the case where the number of claims would make the token very large? I was thinking (perhaps wrongly?) that the token has a minimum set of claims to allow the user in, and then on the backend I can get the rest of the claims using the UserInfo endpoint. Is this a valid use case?
Use introspection if that’s the situation.
Also, userinfo is for minimizing the id token size, not the access token.
@brockallen: that clarifies, many thanks:)
| true | true | true |
A common (but incorrect) practice I often see people doing is using the OIDC userinfo endpoint from APIs. It seems like a natural thing to want to do — you have an access token in your API an…
|
2024-10-12 00:00:00
|
2017-11-07 00:00:00
|
https://secure.gravatar.com/blavatar/b8491905a880a404e6a1704b1de0b8961048501e5463e4d980d271584d86ae94?s=200&ts=1728762980
|
article
|
brockallen.com
|
Brockallen
| null | null |
11,241,003 |
http://brucesterling.tumblr.com/post/140618427248
|
BruceS
|
Brucesterling
|
## More you might like
*Well, that’ll be any day now
*What if other people’s data wants to turn you into paperclips
*Fridge-magnet culture heroes in downtown Belgrade, Serbia. Includes Julian Assange and Edward Snowden
“100% AI-Free for how long?
## Meanwhile, on Tumblr, or rather WordPress
Soon, all of the blogs on Tumblr will be hosted on WordPress. Automattic, the parent company of WordPress.com and Tumblr, announced on Wednesday that it will start to move the site’s half a billion blogs to the new WordPress-based backend.
This update shouldn’t affect the way Tumblr works for users, whom Automattic promises won’t notice any difference after the migration. Automattic says the change will make it easier to ship new features across both platforms and let Tumblr run on the stable infrastructure of WordPress.com. (WordPress.com is a private hosting service built on the open-source WordPress content management software.)
“We can build something once and bring it to both WordPress and Tumblr,” the post reads. “Tumblr will benefit from the collective effort that goes into the open source WordPress project.” However, Automattic acknowledges that the move “won’t be easy.” It also doesn’t say when the migration will be complete.
Since acquiring Tumblr in 2019, Automattic has set its sights on revitalizing the once-thriving blogging platform with mixed results. After running experiments with live video and considering “core experience” changes to attract new users, Automattic CEO Matt Mullenweg wrote in a memo that Automattic was reassigning much of the Tumblr team to other projects, saying its long-term goal would be operating Tumblr in “the most smooth and efficient manner.”
| true | true | true |
Graphic whimsy via Bruce Sterling, [email protected]. http://medium.com/bruces
|
2024-10-12 00:00:00
|
2016-03-07 00:00:00
|
article
|
tumblr.com
|
Tumblr
| null | null |
|
7,883,010 |
https://en.wikipedia.org/wiki/Kommissarische_Reichsregierung
|
Reichsbürger movement - Wikipedia
|
null
|
*Reichsbürger* movement
**Reichsbürgerbewegung** (German pronunciation: [ˈʁaɪ̯çsbʏʁɡɐbəˌveːɡʊŋ]; '*Reich* Citizens' Movement') or **Reichsbürger** (German pronunciation: [ˈʁaɪ̯çsˌbʏʁɡɐ]; '*Reich* Citizen[s]') are several anticonstitutional revisionist groups and individuals in Germany and elsewhere who reject the legitimacy of the modern German state, the Federal Republic of Germany, in favour of the German Reich.
One typical claim is that the German Reich[1] continues to exist in its pre-World War II or pre-World War I borders, and that it is now governed by one of the *Reichsbürger* groups.
Several incidents with violent members of the movement and illegal weapons depots earned the movement the attention of the media and the German authorities.[2]: 20–21 The latter estimated that some 21,000 people belong to the movement in Germany as of July 2021.[3]
## History
The original *Kommissarische Reichsregierung* (Commissary Reich Government, KRR) was founded in 1985 by Wolfgang Gerhard Günter Ebel.[4] The movement espouses conspiracy theories, antisemitism, and racism.[5][4] The movement has been described as neo-Nazi in character,[4] although *The Economist* reported in 2016 that *Reichsbürger* adherents "draw ridicule even from neo-Nazis".[6] Many supporters of the *Reichsbürger* movement are also monarchists who support a restoration of the German Empire[7] or Holy Roman Empire.
The German Federal Ministry of the Interior concludes that only a small part of the movement is part of the organized neo-Nazi milieu proper (for about 5%). Nonetheless, as the movement rejects the existence of the Federal Republic, it is very likely that members of the movement violate the legal order of the Federal Republic.[8]
During the COVID-19 pandemic, the so-called Querdenker movement was formed in Germany. The only consensus of the movement was the rejection of government measures against the pandemic. Many Reich citizens were also part of the Querdenker movement. *Reichsbürger* and other right-wing extremists strengthened within this movement.[9] In 2021 the state office for the Protection of the Constitution in Lower Saxony published a study and came to the conclusion: "The mixing of corona deniers, Reich citizens and right-wing extremists leads to a dangerous radicalization of the corona denier and Querdenken movement."[10]
In December 2022, Germany arrested 25 people accused of planning a coup against the German government.[11]
Among the arrests was Heinrich XIII Prinz Reuss, who in the event the coup succeeded would have become head of state.[12]
## Definition and theories
A handbook of the state of Brandenburg classifies the *Reichsbürger* movement as "extremist" according to the framework of the Federal Office for the Protection of the Constitution (*Verfassungsschutz*). There, "extremist" refers to attitudes and ideologies that are directed against the basic conditions of a modern democracy and open society, such as the right of the people to elect their government democratically. The *Verfassungsschutz* defines the *Reichsbürger* movement as:
Groups and individuals who, for various motives and with various justifications, including [with reference to] the historical German Reich, conspiracy theory patterns of argumentation or a self-defined natural law, reject the existence of the Federal Republic of Germany and its legal system, deny the legitimacy of the democratically elected representatives or even define themselves in their entirety as being outside the legal system and are therefore prepared to commit violations of the legal system.
[2]: 19–21
The *Reichsbürger* movement is characterized by a rejection of the modern Federal Republic of Germany;[5][4] denial of its legality[13] and legitimacy;[14] and denial of the authority of the federal, state, and local governments in Germany.[14] *Reichsbürger* believe that the German Empire borders of 1932 or 1871 borders still exist and that the modern Federal Republic of Germany is "an administrative construct still occupied by the Allied powers".[13]
The *Reichsbürger* movement has used some of the concepts and techniques of the One People's Public Trust, an American sovereign citizen movement operated by pseudolaw ideologue Heather Ann Tucci-Jarraf.[15]
### Historical revisionism
The self-described *Reichsbürger* maintain that the Federal Republic of Germany is illegitimate and that the Reich's 1919 Weimar Constitution (or an earlier constitution) remains in effect. The *Reichsbürger* use a variety of arguments. One of them is a selective reading of a 1973 decision of the Federal Constitutional Court concerning the 1972 treaty between West and East Germany.
In 1949, the Federal Republic of Germany ("West Germany") and the German Democratic Republic ("East Germany", GDR) were established. The constitution of the Federal Republic mentioned a Germany and a German people beyond the Federal Republic. An example is article 23 (old version):
This Basic Law shall initially apply in the territories of the states Baden, Bavaria, Bremen, Greater Berlin, Hamburg, Hesse, Lower Saxony, North Rhine-Westphalia, Rhineland-Palatinate, Schleswig-Holstein, Württemberg-Baden and Württemberg-Hohenzollern. In other parts of Germany it is to be put into force after their accession.
During the first twenty years, the Federal Republic tried to isolate the GDR. After this goal seemed to be no longer feasible, the Federal Republic of Germany entered into the Basic Treaty with the German Democratic Republic to establish diplomatic relations and limited cooperation between the two German states. The Christian Democratic opposition in the Federal Republic rejected the treaty. The state of Bavaria even appealed to the Federal Constitutional Court asserting that the Basic Treaty violated the constitutional objective of re-unification.[16]
The judgement of the Court on July 31, 1973, ruled that the Basic Treaty does not violate the constitution because it does not make a German re-unification impossible. In its judgement, the Court also declared:
The Basic Law – not only a thesis of international law doctrine and constitutional law doctrine! – assumes that the German Reich survived the collapse in 1945 and did not perish either with the capitulation or through the exercise of foreign state power in Germany by the Allied occupying powers or later ... This also corresponds to the established jurisdiction of the Federal Constitutional Court, to which the Senate adheres. The German Reich continues to exist ..., still possesses legal capacity, but is not itself capable of acting as an entire state for lack of organisation, in particular for lack of institutionalised organs. ... With the establishment of the Federal Republic of Germany, not a new West German state was founded, but a part of Germany was reorganised. ... The Federal Republic of Germany is therefore not the "legal successor" of the German Reich, but as a state identical with the state of the "German Reich", although "partially identical" with regard to its spatial extent, so that in this respect the identity does not claim exclusivity. The Federal Republic therefore does not encompass the whole of Germany as far as its constitutive people and its constitutive territory are concerned, irrespective of the fact that it recognises a uniform constitutive people of the subject of international law "Germany" (German Reich), to which its own population belongs as an inseparable part, and a uniform constitutive territory "Germany" (German Reich), to which its own territory belongs as a likewise inseparable part. In terms of constitutional law, it restricts its sovereign power to the "area of application of the Basic Law" ..., but also feels responsible for the whole of Germany (cf. Preamble of the Basic Law).
[17]
The *Reichsbürger* usually only cite the part that the German Reich survived the collapse. They remain silent on the Court's statement that the Federal Republic is identical to it. Therefore, many members of the *Reichsbürger* movement typically conclude that the German Reich still "exists" and that the Federal Republic of Germany is not an actual sovereign state but a corporation created by Allied nations after World War II.
Consequently, many *Reichsbürger* groups claim that they have restored the governmental bodies of the German Reich and now act as the official German government (of the German Reich). In practice, a group of *Reichsbürger* had a meeting that elected the office holders of the Reich (e.g., Reich chancellor, Reich President). As there are several of such groups, there are several people in Germany who claim these offices.
Some *Reichsbürger* groups maintain a different point of view. According to them, the Weimar constitution was not legitimate and therefore the older imperial constitution of 1871 is still in effect. These people claim to be *Kaiser* (emperor), *leader of Prussia* etc.
Still other groups have created, in their point of view, German sovereign states without historical precedent, for example a Kingdom of Germany or a regional entity. Other groups do not operate under the label of a 'restored' or new German state but call themselves simply *Selbstverwalter* (lit. 'someone who governs himself'). They reject the Federal Republic and claim that their house is a sovereign entity.[18]
### Antisemitism
According to the Federal Office for the Protection of the Constitution, some of the *Reichsbürger* groups and individuals hold antisemitic views. One example is the organization *Geeinte deutsche Völker und Stämme*, which was prohibited in March 2020. It promotes the idea that Jews and Muslims have neither human rights nor a right to property.[3]
A handbook of the Amadeu Antonio Stiftung holds that most views of the *Reichsideologie* have an antisemitic core. The *Reichsbürger* usually use antisemitic codes, such as "those from the Eastern Coast" or "the Rothschilds". Antisemitic conspiracy theories are attractive for people in the *Reichsbürger* movement because they provide a simple explanation of the world by dividing humanity into friends and foes. A group of enemies of the people is made responsible for wars and poverty.[19]
## Membership
In April 2018, Germany's domestic intelligence service, the Federal Office for the Protection of the Constitution (BfV), estimated that *Reichsbürger* movement membership had grown by 80% over the previous two years, more than estimated earlier, with a total of 18,000 adherents, of whom 950 were categorized as right-wing extremists.[20] This marked an increase from BfV's 2016 estimate of 10,000 adherents[20] and 2017 estimate of 12,600 adherents.[21] The increase in numbers may be attributable to more adherents becoming known to authorities, rather than an actual increase in the number of adherents.[20] The heterogeneity of the movement and its division into many small groups that are often independent of one another make it difficult to estimate the number of active *Reichsbürger*.[22]
*Reichsbürger* adherents are scattered around Germany, but concentrated in the southern and eastern parts of the country,[5] in the states of Brandenburg, Mecklenburg-Western Pomerania and Bavaria.[13] BfV has estimated that there are 3,500 adherents in Bavaria and around 2,500 in Baden-Württemberg.[13]
Adherents tend to be older,[5] with most aged 40–60 years old[23] and an average age of over 50.[13] The majority are male[13][23] and socially disadvantaged.[13] The Amadeu Antonio Foundation, which monitors far-right activities in Germany, states that *Reichsbürger* adherents are "often isolated" and "completely cut off from reality".[5] German counter-extremism official Heiko Homburg states that the *Reichsbürger* movement is an amalgamation of right-wing extremists, esoterics, and sovereign citizens, and that the movement attracts conspiracy theorists, the economically troubled, and "people who are a little mentally disordered".[14]
## Activities
As of 2009, there was no reliable count of the number of KRRs then existing, but the *KRR FAQ*, an online registry maintained by a German jurist, lists some 60 persons or organizations associated with operating competing KRRs. Several (though by no means all) KRRs have links to far-right extremist or neo-Nazi groups.[1] The Federal Office for the Protection of the Constitution, Germany's federal domestic security agency, has monitored *Reichsbürger* since November 2016, and the security services of individual states have been monitoring the activities of the group for longer.[21]
Some KRRs are ready to issue, for a fee, so-called official documents such as building permits, and driving licences, which their adherents may attempt to use in everyday life. In one instance, Wolfgang Ebel's KRR issued an "excavation permit" to the Principality of Sealand (a micronation), who then had men dig up a plot of land in the Harz region in search of the Amber Room for two weeks, until the landowner hired a private security service to drive them off.[24] Similarly, in 2002 Ebel's KRR asserted that it had sold the Hakeburg , a manor in Kleinmachnow south of the Berlin city limits that had been owned by the German Reichspost (and therefore, according to Ebel, by his KRR) to one of the two competing governments of Sealand, thus creating, in their view, an enclave of Sealand in Germany.[25]
KRR adherents have also on occasion refused to pay taxes or fines, arguing that the laws providing for such sanctions have no constitutional basis. In the ensuing judicial proceedings, they refuse to recognize the courts as legitimate.[26] Some also pursue their activities abroad. In 2009, after Swiss authorities refused to recognize the so-called Reich Driving Licence of a German KRR adherent, he unsuccessfully appealed the case up to the Federal Supreme Court of Switzerland.
Wolfgang Ebel's original organization, in particular, continues to attempt enforcing its asserted authority through attempts at intimidation.[24] According to Ebel, his government has issued more than 1,000 arrest warrants against people who have disregarded documents issued by the KRR. These warrants inform the addressee that, once the Reich Government is in power, they will be tried for high treason, for which the penalty is death.[24] Ebel has also admitted owning a government helicopter painted in the national colours, but has denied using it for intimidating fly-overs.[24] Several attempts to prosecute Ebel for threats, impersonating a public servant and so forth have failed because, according to German prosecutors, all courts have found him to be legally insane.[24]
## Violence by Reichsbürger activists
In 2016, Adrian Ursache, a self-proclaimed *Reichsbürger* and the 1998 winner of the Mister Germany beauty contest, violently resisted his eviction from his house in Reuden. When the German police arrived on the scene they encountered a group of around 120 people who were staying on Ursache's and his in-laws' property. Ursache deemed his property part of the self-proclaimed State of Ur and flew the flag of the old German *Reich* above the home. After a first eviction attempt failed, the German police returned with a special response team the following day. When the eviction started, Ursache opened fire and injured two officers. Ursache was shot and rushed to a hospital.[22][27][28] In 2019, Ursache was convicted of attempted murder and sentenced to 7 years in prison.[28]
Also in 2016, in Georgensgmünd near Nuremberg, a self-described *Reichsbürger* fired on a special response unit of the Bavarian Police when they attempted to confiscate his 31 firearms. Three police officers were injured. One of them later died from his injuries.[29] The weapons confiscation followed the revocation of the murderer's firearms permit and his repeated refusal to co-operate with local authorities.[30] German authorities expressed concern at the escalation in violence. The event attracted international attention.[31] Bavarian ministers called for increased surveillance of the right-wing extremist movement.[32] On 23 October 2017, Wolfgang P. was sentenced to imprisonment for life.[33]
In Höxter, North Rhine-Westphalia, in 2014, one *Reichsbürger* group (the Free State of Prussia) attempted to smuggle weapons into Germany in an attempt to create its own militia.[13] Police raids have found large stockpiles of guns and ammunition hoarded by *Reichsbürger* adherents.[13] In 2018, the German magazine *Focus* reported that *Reichsbürger* adherents had been attempting to build an armed militia in preparation for Day X—"an imagined day of reckoning or uprising against the German government".[34]
In April 2022 four members of a *Reichsbürger* group called United Patriots (*Vereinte Patrioten*) were detained for plotting to overthrow the government.[35] They planned to destroy electrical substations and power lines through bomb attacks to cause a nationwide power outage to create "civil war-like" conditions.[36] Two members are also alleged to have been plotting to kidnap the German health minister Karl Lauterbach.[35] Lauterbach was said to have been aware of the plans.[36]
## Patriotic Union
**Patriotic Union** (*Patriotische Union*) or **The Council** (*Der Rat*) is the name of a German right-wing extremist *Reichsbürger* group. Its aim is to establish a new government in Germany in the tradition of the German Empire of 1871. The group wanted to provoke chaos and a civil war in Germany and thus take over power in the Federal Republic of Germany. Among other things, the German Bundestag was to be taken by force of arms, accepting that people would be killed in the process.
### Aims
The group wanted to establish a new government (the "Council"). Since November 2021 the network had been planning an armed attack on the Bundestag, as well as public arrests of politicians, in order to cause public unrest. The Patriotic Union assumed that parts of the German security authorities would then show solidarity with the terrorist group, leading to an overthrow after which the group would take power.[37]
### Members
Heinrich Reuss, a German aristocrat, is alleged to have led the group and to have been its planned head of state.[38]
The group, which comprised more than a hundred people, was divided into areas of responsibility. The Federal Public Prosecutor has identified 52 suspects and arrested 25 of them.
The group also included several former members of the Special Forces Command (KSK), among them a former staff sergeant of the Bundeswehr's paratrooper battalion, Rüdiger von P. The GSG 9 searched a KSK site at the Graf Zeppelin Barracks near Calw. Rüdiger von P. was supposed to lead the military arm of the group. The Federal Public Prosecutor describes von P., alongside Heinrich Reuss, as a "ringleader". Von P. is said to have tried to recruit police officers and soldiers.[39]
A lawyer and judge in the state of Berlin, Birgit Malsack-Winkemann, was designated as the future minister of Justice. Malsack-Winkemann was a member of the German Bundestag from 2017 to 2021 for the AfD and was arrested on December 7, 2022. The group included at least one other AfD politician, an AfD Stadtrat from Olbernhau in the Saxon Ore Mountains.[40]
Other members were doctors and at least one was an entrepreneur.
### 2022 investigations and arrests
German police authorities have been investigating the group since spring 2022. The group is also made up of parts of the radicalized German *Querdenker* movement, a heterogeneous group of COVID-19 protesters and deniers. Reuss was the starting point for the investigations, which ended up being carried out by the Federal Criminal Police Office (BKA) under the name *Shadow*. In addition, several state criminal investigation offices and state offices for the protection of the constitution were involved.
Over 3,000 police officers, including officers of the GSG 9 unit, searched more than 130 sites (including homes, offices, and storage facilities) throughout Germany, one of the largest anti-extremist raids in the nation's history.[41][42] Searches of areas in Austria and Italy took place simultaneously.[41] During the raids, coordinated by the Federal Police, 25 people were arrested out of a total of 52 suspected far-right coup plotters associated with the *Reichsbürger* movement.[41] Those implicated in the suspected plot included active military personnel and policemen. Prosecutors stated that those arrested plotted an armed overthrow of the German government and the democratic constitution.[41][43] The Patriotic Union group had stockpiled Iridium satellite telephones, expensive devices which could operate even if the electricity network was down.[44] The General Federal Prosecutor Office said: "The arrested suspects belong to a terrorist organization which was founded by the end of November 2021 at the latest and which has set itself the goal of overthrowing the existing state order in Germany and replacing it with its own form of state, the outlines of which have already been worked out."[41]
Those arrested included aristocrat Heinrich Reuss (who styles himself *Heinrich XIII, Prince Reuss of Greiz*) a 71-year-old descendant of the House of Reuß, who owns an estate in Thuringia where the group met, several of his followers, and a 69-year-old former Bundeswehr parachutist commander identified as Rüdiger von P.[41][45] Also arrested was Birgit Malsack-Winkemann, a former Alternative for Germany (AfD) member of the Bundestag and a current judge.[41][43][46]
## Police connections with Reichsbürger movement
There were renewed calls for more serious measures against the movement in 2016, including revocation of firearms permits and seizure of their weapons, following disciplinary action against police officers allegedly connected to the movement.[47][48] On 27 October 2016, a Bavarian Police officer was suspended from duty because of his connections to one of the *Reichsbürger* movements. There have been similar allegations against other police officers in different German states as well.[49][50]
## List of *Reichsbürger* groups
The following is a non-exhaustive list of KRRs that have received media coverage.[51]
- Fürstentum Germania, formerly based at Krampfer Palace, established in 2009, claims 300 adherents.[52]
- Interim Partei – Das Reicht[26]
- Zentralrat Souveräner Bürger, based in an inn in Schwanstetten.[53]
- Ur, based in Elsteraue. Its leader Adrian Ursache was injured in a 2016 shoot-out with police.[54]
- NeuDeutschland, based in Wittenberg. Founded in 2012, it claims 3,500 members. Led by self-proclaimed King of Germany Peter Fitzek.[55]
- Patriotic Union, a far-right group which attempted a coup in 2022.[56][57]
- Freistaat Preußen, an irredentist movement supporting the annexation of Kaliningrad from Russia. Based in Königsfeld, Rhineland-Palatinate and Bonn.[58]
## See also
- Antisemitism in 21st-century Germany
- Inner emigration
- Racism in Germany
- Radical right (Europe)
- Free Saxons - Monarchist and regionalist political party operating in the German state of Saxony.
- Sovereign citizen movement – a similar movement which is primarily active in the United States and Anglosphere countries.
- Freeman on the land movement – an offshoot of the sovereign citizen movement, it is active in Canada and other Anglosphere countries.
- Union of Slavic Forces of Russia – a similar movement with members and supporters in Post-Soviet Russia.
## References
[edit]- ^
**a**Oppong, Martin (15 May 2008). "'Kommissarische Reichsregierungen': Gefährliche Irre".**b***Die Tageszeitung*(in German). Retrieved 25 March 2009. - ^
**a**Hüllen, Michael; Homburg, Heiko (2017). "'Reichsbürger' zwischen zielgerichtetem Rechtsextremismus, Gewalt und Staatsverdrossenheit" (PDF). In Wilking, Dirk (ed.).**b***Reichsbürger. Ein Handbuch*(3rd ed.). Potsdam: Demos: Brandenburgisches Institut für Gemeinwesenberatung. pp. 15–53. Retrieved 7 December 2022. - ^
**a**"Verfassungsschutzbericht 2021" (PDF).**b***bmi.bund.de*. 2021. Retrieved 7 December 2022. - ^
**a****b****c**"German police raid neo-Nazi Reichsbürger movement nationwide".**d***BBC News*. 19 March 2020. - ^
**a****b****c****d**Schuetze, Christopher F. (19 March 2020). "Germany Shuts Down Far-Right Clubs That Deny the Modern State".**e***New York Times*. **^**"Hundreds of Germans are living as if the Reich never ended".*The Economist*. 10 November 2016.**^**Wright, Timothy (22 June 2019). "Germany's New Mini-Reichs".*Los Angeles Review of Books*.**^**""Reichsbürger" und "Selbstverwalter" - eine zunehmende Gefahr?" ['Reichsbürger' and 'Selbstverwalter' - an Increasing Danger?].*German Federal Ministry of the Interior*(in German). 2022. Archived from the original on 13 September 2018. Retrieved 7 December 2022.**^**NDR. "Rechtsextremisten und Reichsbürger nutzen Corona-Demos".*www.ndr.de*(in German). Retrieved 14 September 2023.**^**"Die Vermischung von Coronaleugnern, Reichsbürgern und Rechtsextremisten führt zu einer gefährlichen Radikalisierung der Coronaleugner- und Querdenken-Bewegung | Verfassungsschutz Niedersachsen".*www.verfassungsschutz.niedersachsen.de*. Retrieved 14 September 2023.**^**Stern, Zahid Mahmood, Chris (7 December 2022). "Germany arrests 25 suspected far-right extremists for plotting to overthrow government".*CNN*. Retrieved 21 May 2024.`{{cite web}}`
: CS1 maint: multiple names: authors list (link)**^**"Reichsbürger group members go on trial over alleged coup".*BBC News*. 21 May 2024. Retrieved 21 May 2024.- ^
**a****b****c****d****e****f****g****h**Dick, Wolfgang (19 October 2016). "What is behind the right-wing 'Reichsbürger' movement?".**i***Deutsche Welle*. - ^
**a****b**Anthony Faiola & Stephanie Kirchner (20 March 2017). "In Germany, right-wing violence flourishing amid surge in online hate".**c***Washington Post*. **^**Barrows, Samuel (26 March 2021), "Sovereigns, Freemen, and Desperate Souls: Towards a Rigorous Understanding of Pseudolitigation Tactics in United States Courts",*Boston Law review*, retrieved 23 November 2022**^**Grau, Andreas. "Urteil des Bundesverfassungsgerichts zum Grundlagenvertrag zwischen der BRD und der DDR" [Judgment of the Federal Constitutional Court on the Basic Treaty between the FRG and the GDR]. Konrad Adenauer Stiftung. Retrieved 8 December 2022.**^**BVerfGE 36, 1 – Grundlagenvertrag. Last retrieved 2022-12-08.**^**German Federal Ministry of the Interior: Topthema Reichsbürger Archived 13 September 2018 at the Wayback Machine, 2022, last seen December 7, 2022.**^**""Reichsbürger" und Souveränisten. Basiswissen und Handlungsstrategien" (PDF).*amadeu-antonio-stiftung.de*. 2018. pp. 5, 9. Retrieved 8 December 2022.- ^
**a****b**"Germany's far-right Reichsbürger movement larger than earlier estimated". Deutsche Welle. 28 April 2018.**c** - ^
**a**"Verfassungsschutz zählt 12.600 Reichsbürger in Deutschland" [Federal Office for the Protection of the Constitution estimates 12.600 Reichsbürger in Germany].**b***Frankfurter Allgemeine Zeitung*. 22 May 2017. Retrieved 23 May 2017. - ^
**a**Schaaf, Julia (12 September 2016). "'Reichsbürger' – Szene: Schießerei im Staat Ur".**b***Frankfurter Allgemeine Zeitung*(in German). Retrieved 18 August 2021. - ^
**a**Connolly, Kate (19 March 2020). "German police arrest members of far-right group after state ban".**b***The Guardian*. - ^
**a****b****c****d**Gessler, Philip (15 August 2000). "Die Reichsminister drohen mit dem Tod".**e***Die Tageszeitung*(in German). Retrieved 25 March 2009. **^**"Dokumentation der Eigentumsverhältnisse für unser Staatsgebiet Hakeburg in Kleinmachnow bei Berlin" (PDF). November 2002. Retrieved 7 December 2022.- ^
**a**"BRD-Leugner: Was ist die Interim Partei?".**b***Badische Zeitung*(in German). 3 September 2008. Retrieved 25 March 2009. **^**"Chronologie: Mordprozess gegen 'Reichsbürger' Adrian Ursache".*mdr.de*(in German). Archived from the original on 12 November 2017. Retrieved 30 November 2017.- ^
**a**Crossand, David (19 April 2019). "'Mr Germany' jailed for shooting police officer".**b***Times of London*. **^**"Wolfgang P. aus Georgensgmünd: Lebenslange Haft wegen Polizistenmordes für Reichsbürger".*Frankfurter Allgemeine Zeitung*. 23 October 2017. Retrieved 30 November 2017.**^**"Germany shooting: Policeman dies in raid on far-right gunman".*BBC News*. 20 October 2016. Retrieved 21 October 2016.**^**Oltermann, Philip (20 October 2016). "Germany fears radicalisation of Reichsbürger movement after police attacks".*The Guardian*. Retrieved 21 October 2016.**^**Dearden, Lizzie (21 October 2016). "Anti-government 'Reichsbürger' attacks German police and calls them Nazis after extremist shoots officer dead".*The Independent*. Retrieved 21 October 2016.**^**"Lebenslange Haft wegen Polizistenmordes für Reichsbürger".*Faz.net*. 23 October 2017. Retrieved 29 November 2017.**^**Schumacher, Elizabeth (12 January 2018). "Report: Far-right Reichsbürger movement is growing, building army".*Deutsche Welle*.- ^
**a**"Germany kidnap plot: Gang planned to overthrow democracy".**b***BBC News*. 14 April 2022. Retrieved 14 April 2022. - ^
**a**"German police arrest far-right extremists over plans to 'topple democracy'".**b***Deutsche Welle*. Retrieved 14 April 2022. **^**"Festnahmen bei Reichsbürger-Razzia – Anführer der Gruppe aus Frankfurt".*Hessenschau*. 7 December 2022.**^**Bennhold, Katrin; Solomon, Erika (7 December 2022). "Germany Arrests 25 Suspected of Planning to Overthrow Government".*The New York Times*.**^**Litschko, Konrad (7 December 2022). "Razzia gegen Reichsbürger: Ziel war ein Systemwechsel" [Raid against Reich citizens: The goal was a system change].*Die Tageszeitung: taz*(in German).**^**Wolf, Ulrich; Anders, Franziska; Klemenz, Franziska; Schlottmann, Karin; Langhof, Erik-Holm (8 December 2022). "Nach Razzien bei Reichsbürgern: BKA erwartet weitere Beschuldigte" [After raids on Reich citizens: BKA expects more suspects].*Sächsische*(in German).- ^
**a****b****c****d****e****f**David Crossland, German police arrest 25 far-right coup plotters in dawn raids,**g***The Times*(7 December 2022). **^**Berlin, David Crossland (7 December 2022). "Judge among far-right plotters arrested by German police over Reichstag coup".*The Times*.- ^
**a**"German police arrest 25 suspects in plot to overthrow state". Deutsche Welle. Reuters. 7 December 2022. Retrieved 7 December 2022.**b** **^**Connolly, Kate (7 December 2022). "Reichsbürger: the German conspiracy theorists at heart of alleged coup plot".*The Guardian*.**^**"Großrazzia: Gruppe soll Staatsumsturz geplant haben – Reussen-Prinz festgenommen".*MDR*. 7 December 2022. Retrieved 7 December 2022.**^**tagesschau.de. "Bundesweite Razzia wegen geplanten Staatsstreichs".*tagesschau.de*(in German). Retrieved 7 December 2022.**^**Saha, Marc (31 October 2016). "A broken oath: Reichsbürger in the police force".*Deutsche Welle*. Retrieved 7 November 2016.**^**"State premier: Three suspected Reichsbürger police in Saxony".*Deutsche Welle*. 6 November 2016. Retrieved 7 November 2016.**^**Clauß, Anna; Menke, Birger; Neumann, Conny; Ziegler, Jean-Pierre (21 October 2016). "Staatsleugner als Staatsdiener".*Spiegel Online*. Retrieved 29 November 2017.**^**"Polizei suspendiert mutmaßlichen 'Reichsbürger'".*Spiegel Online*. 27 October 2016. Retrieved 29 November 2017.**^**See, generally, the media section Archived 26 October 2016 at the Wayback Machine of*KRR FAQ*.**^**Fröhlich, Alexander (15 March 2009). "Die Hippies von Germania".*Der Tagesspiegel*(in German). Retrieved 25 March 2009.**^**"'Staatenlose' lösen Unbehagen aus".*Schwabacher Tagblatt*(in German). 28 October 2008. Retrieved 25 March 2009.**^**Turner, Zeke (25 August 2016). "Extremist Group Leader Injured in Shootout With German Police".*The Wall Street Journal*.**^**"Judge sends 'King of Germany' to jail".*The Local*. 18 October 2013. Retrieved 8 November 2021.**^**Riedel, Florian Flade und Katja (22 March 2023). ""Patriotische Union": Schüsse bei neuer "Reichsbürger"-Razzia".*Tagesschau*(in German). Retrieved 21 May 2024.**^**Ayyadi, Kira (7 December 2022). "Reichsbürger planten Staatsstreich".*Belltower News*(in German). Retrieved 21 May 2024.**^**"Reichsbürger: Wie eine "Ministerpräsidentin" aus der Eifel die Bundesrepublik bekämpft und einen Weltkrieg riskieren will".*Rhein-Zeitung*(in German). 16 February 2017. Archived from the original on 25 March 2018. Retrieved 7 July 2024.
## External links
- Schmidt, Frank (2007). "KRR FAQ" (in German). Retrieved 25 March 2009; a *KRR* database.
- Goldenberg, Rina (7 December 2022). "What is Germany's 'Reichsbürger' movement?". *Deutsche Welle*.
- Bennhold, Katrin (11 October 2020). "QAnon Is Thriving in Germany. The Extreme Right Is Delighted". *The New York Times*.
- Koehler, Daniel (2019). "Anti-immigration militias and vigilante groups in Germany". *Vigilantism against Migrants and Minorities*. pp. 86–102. doi:10.4324/9780429485619-6. ISBN 978-0-429-48561-9. S2CID 211441703.
- Pfahl-Traughber, Armin (2019). *Rechtsextremismus in Deutschland*. doi:10.1007/978-3-658-24276-3. ISBN 978-3-658-24275-6. S2CID 199236657.
- Landwehr, Claudia (November 2020). "Backlash against the procedural consensus". *The British Journal of Politics and International Relations*. **22** (4): 598–608. doi:10.1177/1369148120946981. S2CID 225253850.
- Sarteschi, Christine M. (September 2021). "Sovereign citizens: A narrative review with implications of violence towards law enforcement". *Aggression and Violent Behavior*. **60**: 101509. doi:10.1016/j.avb.2020.101509. PMC 7513757. PMID 32994748.
- Netolitzky, Donald (3 May 2018). "A Pathogen Astride the Minds of Men: The Epidemiological History of Pseudolaw". SSRN 3177472.
- Sturm, Tristan; Mercille, Julien; Albrecht, Tom; Cole, Jennifer; Dodds, Klaus; Longhurst, Andrew (November 2021). "Interventions in critical health geopolitics: Borders, rights, and conspiracies in the COVID-19 pandemic". *Political Geography*. **91**: 102445. doi:10.1016/j.polgeo.2021.102445. PMC 8580506. PMID 34785870.
- Schweiger, Christian (1 September 2019). "Deutschland einig Vaterland?". *German Politics and Society*. **37** (3): 18–31. doi:10.3167/gps.2019.370303. S2CID 218888433.
- Pantucci, Raffaello (2022). "Extreme Right-Wing Terrorism and COVID-19 – A Two-Year Stocktake". *Counter Terrorist Trends and Analyses*. **14** (3): 17–23. JSTOR 48676737.
| true | true | true | null |
2024-10-12 00:00:00
|
2009-03-25 00:00:00
|
website
|
wikipedia.org
|
Wikimedia Foundation, Inc.
| null | null |
|
11,930,880 |
https://github.com/yadav-rahul/Image-Crawler
|
GitHub - yadav-rahul/Image-Crawler: :camera: Python application based on flask framework.
|
Yadav-Rahul
|
This app takes a URL from the user and, with just one click, downloads all images from that URL into the `images` directory.
Image Crawler uses Python's *Flask* framework to crawl web pages.
```
git clone https://github.com/yadav-rahul/Image-Crawler && cd Image-Crawler
sudo pip3 install -r requirements.txt
python3 main.py
# go to http://localhost:5000/ in your browser,
# enter a URL, and see the magic!
```
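
For reference, here is a minimal sketch of how such an image-crawling route could be implemented. It is an illustration only, not the repository's actual `main.py`; the route name, the form field `url`, and the use of `requests` and `BeautifulSoup` are assumptions made for this sketch.

```python
# Hypothetical sketch of an image-crawling Flask route (not the repo's actual code).
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup
from flask import Flask, request

app = Flask(__name__)

@app.route("/crawl", methods=["POST"])
def crawl():
    # The form field name "url" is an assumption for this sketch.
    page_url = request.form["url"]
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    os.makedirs("images", exist_ok=True)
    saved = 0
    for img in soup.find_all("img"):
        src = img.get("src")
        if not src:
            continue
        img_url = urljoin(page_url, src)  # resolve relative image paths
        try:
            data = requests.get(img_url, timeout=10).content
        except requests.RequestException:
            continue  # skip images that fail to download
        name = os.path.basename(img_url.split("?")[0]) or f"image_{saved}.jpg"
        with open(os.path.join("images", name), "wb") as f:
            f.write(data)
        saved += 1
    return f"Downloaded {saved} images into the images/ directory."

if __name__ == "__main__":
    app.run(port=5000)
```

The real project may structure this differently; the sketch only shows the general fetch-parse-download flow behind a "one click" crawler.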
MIT © Rahul Yadav
This app is for learning purposes and is just a prototype for reference; it is not meant for production or commercial use.
| true | true | true |
:camera: Python application based on flask framework. - yadav-rahul/Image-Crawler
|
2024-10-12 00:00:00
|
2016-06-18 00:00:00
|
https://opengraph.githubassets.com/bc1da38f8c0720e354ebaf825eebd3312030d5b8fb68820b45b3b19750464a78/yadav-rahul/Image-Crawler
|
object
|
github.com
|
GitHub
| null | null |
8,714,421 |
http://coderfactory.co/posts/top-70-programming-quotes-of-all-time#.VIT9G6hbQT8.hackernews
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
23,823,516 |
https://www.wired.com/story/the-next-generation-of-batteries-could-be-built-by-viruses/
|
The Next Generation of Batteries Could Be Built by Viruses
|
Daniel Oberhaus
|
In 2009, MIT bioengineering professor Angela Belcher traveled to the White House to demo a small battery for President Barack Obama, who was just two months into his first term in office. There aren’t many batteries that can get an audience with the leader of the free world, but this wasn’t your everyday power pouch. Belcher had used viruses to assemble a lithium-ion battery’s positive and negative electrodes, an engineering breakthrough that promised to reduce the toxicity of the battery manufacturing process and boost their performance. Obama was preparing to announce $2 billion in funding for advanced battery technology, and Belcher’s coin cell pointed to what the future might hold in store.
A decade after Belcher demoed her battery at the White House, her viral assembly process has rapidly advanced. She’s made viruses that can work with over 150 different materials and demonstrated that her technique can be used to manufacture other materials like solar cells. Belcher’s dream of zipping around in a “virus-powered car” still hasn’t come true, but after years of work she and her colleagues at MIT are on the cusp of taking the technology out of the lab and into the real world.
As nature’s microscopic zombies, viruses straddle the divide between the living and the dead. They are packed full of DNA, a hallmark of all living things, but they can’t reproduce without a host, which disqualifies them from some definitions of life. Yet as Belcher demonstrated, these qualities could be adopted for nanoengineering to produce batteries that have improved energy density, lifetime, and charging rates that can be produced in an eco-friendly way.
“There has been growing interest in the battery field to explore materials in nanostructure form for battery electrodes,” says Konstantinos Gerasopoulos, a senior research scientist who works on advanced batteries at Johns Hopkins Applied Physics Laboratory. “There are several ways that nanomaterials can be made with conventional chemistry techniques. The benefit of using biological materials, such as viruses, is that they already exist in this ‘nano’ form, so they are essentially a natural template or scaffold for the synthesis of battery materials.”
Nature has found plenty of ways to build useful structures out of inorganic materials without the help of viruses. Belcher’s favorite example is the abalone shell, which is highly structured at the nanoscale, lightweight, and sturdy. Over the process of tens of millions of years, the abalone evolved so that its DNA produces proteins that extract calcium molecules from the mineral-rich aquatic environment and deposit it in ordered layers on its body. The abalone never got around to building batteries, but Belcher realized this same fundamental process could be implemented in viruses to build useful materials for humans.
“We’ve been engineering biology to control nanomaterials that are not normally grown biologically,” Belcher says. “We’ve expanded biology’s toolkit to work with new materials.”
Belcher’s virus of choice is the M13 bacteriophage, a cigar-shaped virus that replicates in bacteria. Although it's not the only virus that can be used for nanoengineering, Belcher says it works well because its genetic material is easy to manipulate. To conscript the virus for electrode production, Belcher exposes it to the material she wants it to manipulate. Natural or engineered mutations in the DNA of some of the viruses will cause them to latch on to the material. Belcher then extracts these viruses and uses them to infect a bacterium, which results in millions of identical copies of the virus. This process is repeated over and over, and with each iteration the virus becomes a more finely-tuned battery architect.
Belcher’s genetically engineered viruses can’t tell a battery anode from a cathode, but they don’t need to. Their DNA is only programmed to do a simple task, but, when millions of viruses perform the same task together, they produce a usable material. For example, the genetically-modified virus might be engineered to express a protein on its surface that attracts cobalt oxide particles to cover its body. Additional proteins on the surface of the virus attract more and more cobalt oxide particles. This essentially forms a cobalt oxide nanowire made of linked viruses that can be used in a battery electrode.
Belcher’s process matches DNA sequences with elements on the periodic table to create a sped-up form of unnatural selection. Coding the DNA one way might cause a virus to latch on to iron phosphate, but, if the code is tweaked, the virus might prefer cobalt oxide. The technique could be extended to any element on the periodic table, it’s just a matter of finding the DNA sequence that matches it. In this sense, what Belcher is doing is not so far from the selective breeding done by dog fanciers to create pooches with desirable aesthetic qualities that would be unlikely to ever show up in nature. But instead of breeding poodles, Belcher is breeding battery-building viruses.
Belcher has used her viral assembly technique to build electrodes and implement them in a range of different battery types. The cell she demoed for Obama was a standard lithium-ion coin cell like you might find in a watch and was used to power a small LED. But for the most part, Belcher has used electrodes with more exotic chemistries like lithium-air and sodium-ion batteries. The reason, she says, is that she didn’t see much sense in trying to compete with the well-established lithium-ion producers. “We aren’t trying to compete with current technology,” Belcher says. “We look at the question, ‘Can biology be used to solve some problems that haven’t been solved so far?’”
One promising application is to use the viruses to create highly ordered electrode structures to shorten the path of an ion as it moves through the electrode. This would increase the battery’s charge and discharge rate, which is “one of the ‘holy grails’ of energy storage,” says Paul Braun, director of the Materials Research Laboratory at the University of Illinois. In principle, he says, viral assembly can be used to significantly improve the structure of battery electrodes and boost their charging rates.
So far Belcher’s virally-assembled electrodes have had an essentially random structure, but she and her colleagues are working on coaxing the viruses into more ordered arrangements. Nevertheless, her virus-powered batteries performed as well or better than those with electrodes made with traditional manufacturing techniques, including improved energy capacity, cycle life, and charging rates. But Belcher says the biggest benefit of viral assembly is that it is eco-friendly. Traditional electrode manufacturing techniques require working with toxic chemicals and high temperatures. All Belcher needs are the electrode materials, room temperature water, and some genetically-engineered viruses.
“Something my lab is completely focused on now is trying to get the cleanest technology,” Belcher says. This includes taking into consideration things like where the mined material for electrodes is sourced, and the waste products produced by manufacturing the electrodes.
Belcher hasn’t brought the technology to market yet, but says she and her colleagues have several papers under review that show how the technology can be commercialized for energy and other applications. (She declined to get into the specifics.)
When Belcher first suggested that these DNA-driven assembly lines might be harnessed to build useful things for humans, she encountered a lot of skepticism from her colleagues. “People told me I was crazy,” she says. The idea no longer seems so far-fetched, but taking the process out of the lab and into the real world has proven challenging. “Traditional battery manufacturing uses inexpensive materials and processes, but engineering viruses for performance and solving scalability issues will require years of research and associated costs,” says Bogdan Dragnea, a professor of chemistry at the Indiana University Bloomington. “We have only recently started to understand the potential virus-based materials hold from a physical properties perspective.”
Belcher has already co-founded two companies based on her work with viral assembly. Cambrios Technologies, founded in 2004, uses a manufacturing process inspired by viruses to build the electronics for touch screens. Her second company, Siluria Technologies, uses viruses in a process that converts methane to ethylene, a gas widely used in manufacturing. At one point, Belcher was also using viruses to assemble solar cells, but the technology wasn’t efficient enough to compete with new perovskite solar cells.
Whether the viral assembly of battery electrodes can scale to the levels needed for commercial production remains an open question. “In a battery production facility they use tons of material, so getting to that level with biological molecules is not very easy,” says Gerasopoulos. He says he doesn’t believe this obstacle is insurmountable, but is “probably among the key challenges up to this point.”
Even if the world never sees a virus-powered Tesla, Belcher’s approach to biologically-driven nanoengineering holds immense promise in areas that have little to do with electricity. At MIT, Belcher is working with a team of scientists that leverage viral assembly techniques to create tumor-hunting nanoparticles. Designed to track down cancerous cells that are far too small to be detected by doctors, these nanoparticles could drastically improve early detection and lower mortality rates in cancer patients. In principle, the particles could also be armed with biomaterial that would kill the cancer cells, although this remains a distant goal.
For all of human history, viruses have been the harbingers of death and disease. But Belcher’s work points to a future where these little parcels of DNA may have a lot more to offer.
*Updated 2-27-20 3:30pm EDT: Siluria Technologies produces ethylene from methane, not CO2.*
| true | true | true |
Angela Belcher found a way to turn nature's zombies into a tiny assembly line. But creating a new power cell might be just the beginning.
|
2024-10-12 00:00:00
|
2020-02-26 00:00:00
|
article
|
wired.com
|
WIRED
| null | null |
|
1,220,604 |
http://news.bbc.co.uk/1/hi/business/8588432.stm
|
Times website to charge from June
| null |
Times and Sunday Times websites to charge from June
James Harding, The Times: "It is less of a risk than continuing to do what we are currently doing"
The Times and Sunday Times newspapers will start charging to access their websites in June, owner News International (NI) has announced.
Users will pay £1 for a day's access and £2 for a week's subscription.
The move opens a new front in the battle for readership and will be watched closely by the industry.
NI chief executive Rebekah Brooks said it was "a crucial step towards making the business of news an economically exciting proposition".
Both titles will launch new websites in early May, separating their digital presence for the first time and replacing the existing, combined site, Times Online.
ANALYSIS
By Torin Douglas, BBC media correspondent
News International says its new pricing policy is simple and affordable. That will be for readers to judge. Many of its rivals still believe charging for content will only work for specialist publications, such as the Financial Times or Wall Street Journal.
Privately, executives admit the two papers are likely to lose thousands of regular online readers - and millions of more casual ones - because there'll still be plenty of news and comment on other websites, free of charge. But they hope £2 a week is a small enough sum to entice many readers over the paywall.
There's likely to be a huge marketing campaign to change people's habits and perceptions. We can expect comparisons with the price of a cup of coffee (by which standard, newspapers remain astonishingly good value). Subscribers to the print versions of the newspapers will get online access thrown in. There'll also be new apps, adapting the content for phones, tablets and other devices. As Rebekah Brooks of News International says, "This is just the start". The whole industry will be watching intently.
The two new sites will be available for a free trial period to registered customers. And payment will give customers access to both sites.
With newspaper sales in decline, companies have been searching for a business model that will make money from their websites.
Risk
But with so much news content available for free on the internet, NI's decision to charge is seen by many people as a high risk strategy.
Rupert Murdoch, whose News Corp owns NI, has led a fierce campaign against internet sites which distribute news content from his companies. He has criticised Google in particular.
James Harding, editor of The Times, agreed that NI's paywall strategy was a risk. "But it's less of a risk than just throwing away our journalism and giving it away from free," he told the BBC.
He likened the news industry to the music industry of four years ago. "People said the game is up for the music industry because everyone is downloading for free. But now people are buying from download sites."
Mrs Brooks said the decision to charge came "at a defining moment for journalism... We are proud of our journalism and unashamed to say that we believe it has value".
Just the start
And she hinted that two other News International publications, The Sun and the News of the World, would also go behind a paywall.
"This is just the start. The Times and The Sunday Times are the first of our four titles in the UK to move to this new approach. We will continue to develop our digital products and to invest and innovate for our customers."
Privately, executives admit the two papers are likely to lose thousands of regular online readers - and millions of more casual ones - says BBC media correspondent Torin Douglas. But they hope £2 a week is a small enough sum to entice many readers over the paywall.
We can expect marketing campaigns to make comparisons with the price of a cup of coffee, "by which standard, newspapers remain astonishingly good value," our correspondent says.
PAYWALLS' PROGRESS
In December, Johnston Press began a paywall trial for six local weekly papers, charging users £5 for three months. Johnston has yet to report on the trial's success
In the US, the large daily Newsday charged $5 a week for access to its website. By mid-January, three months after charging began, just 35 subscribers had signed up
The Financial Times charges readers on a "metered" model, under which readers get access to some articles for free, but must pay for more. The system is generally regarded as a success
'Dreamland'
The media industry uses a general yardstick that about 5% of visitors to news websites are likely to pay for content. Latest figures show that The Times and Sunday Times had 1.22m daily users.
However, Claire Enders, of media research company Enders Analysis, says that anyone who believes the Times papers will get a 5% conversion is in "dreamland".
"This is not just about adding subscribers, but also strengthening the relationship with loyal readers of the website and papers. If you are going to try this [charging] then the model they have chosen is the best way," she said.
News Corp owns the Wall Street Journal, which has one of the most successful paid-for sites with about 407,000 electronic subscribers.
But some analysts point out that the WSJ offers specialist content, and that charging for general news is a different business model.
The editor-in-chief of the Guardian, Alan Rusbridger, is a leading sceptic of paywalls and has vowed to keep most of the content of his newspaper free online. In January he described the move towards paywall business models as a "hunch".
Unlike some commentators, Ms Enders does not expect any of the major UK newspaper groups to follow suit quickly.
Nor does she believe that Mr Murdoch's strategy represents the last throw of the dice for some of his loss-making papers. "If it fails, Murdoch will think of something else. He has been supporting his loss-makers for years."
Newspaper circulation for print and online editions, Feb 2010

| Publication | Print: daily average circulation | Print: year change (%) | Online: daily average unique browsers | Online: year change (%) |
| --- | --- | --- | --- | --- |
| Daily Mail | 2,111,204 | -3.1 | 2,265,623 | 68.2 |
| The Daily Telegraph | 685,177 | -9.8 | 1,548,059 | 9.7 |
| The Guardian | 284,514 | -16.4 | 1,869,448 | 36.6 |
| The Independent | 183,547 | -10.9 | 465,346 | 3.6 |
| The Sun | 2,972,763 | 0.6 | 1,388,831 | -9.9 |
| The Times | 505,062 | -16.9 | 1,215,446 | -1.8 |
Source: ABC. Express Newspapers' websites and the FT website are not audited by ABC. Separate figures for the Daily Mirror online are not available, only aggregate figures for Mirror Group Digital.
| true | true | true |
The Times and Sunday Times newspapers will start charging to access their websites in June, News International announces.
|
2024-10-12 00:00:00
|
2010-03-26 00:00:00
| null | null | null |
BBC
| null | null |
15,314,242 |
https://www.artsy.net/article/artsy-editorial-hard-painting-made-computer-human
|
It’s Getting Hard to Tell If a Painting Was Made by a Computer or a Human | Artsy
|
Rene Chun
|
# It’s Getting Hard to Tell If a Painting Was Made by a Computer or a Human
Example of images generated by CAN, included in “CAN: Creative Adversarial Networks Generating ‘Art’ by Learning About Styles and Deviating from Style Norms.” Courtesy of Ahmed Elgammal.
Cultural pundits can close the book on 2017: The biggest artistic achievement of the year has already taken place. It didn’t happen in a paint-splattered studio on the outskirts of Beijing, Singapore, or Berlin. It didn’t happen at the Venice Biennale. It happened in New Brunswick, New Jersey, just off Exit 9 on the Turnpike.
That’s the home of the main campus of Rutgers University—all four square miles and 640 buildings of it, including the school’s Art and Artificial Intelligence Lab (AAIL). Nobody would mistake this place as an incubator for fine art. It looks like a bootstrap startup, all cubicles and gray carpet, with lots of cheap Dell monitors and cork boards filled with tech gibberish.
On February 14th of this year, it’s where Professor Ahmed Elgammal ran a new art-generating algorithm through a computer and watched as it spit out a series of startling images that took his breath away. Two weeks later, Elgammal conducted a special Turing test to see how his digital art stacked up against dozens of museum-grade canvases.
In a randomized, controlled, double-blind study, subjects were unable to distinguish the computer art from two sample sets of acclaimed work created by flesh-and-blood artists (one culled from the canon of Abstract Expressionist paintings, the other from works shown at the 2016 edition of Art Basel in Hong Kong). In fact, the computer-made pictures were often rated by subjects as more “novel” and “aesthetically appealing” than the human-generated art. The ensuing peer-reviewed paper sparked an unsettling art world rumor: Watson had learned how to paint like Picasso.
Programming a computer to make unique and appealing art that people would hang on their walls is the culmination of an impressive body of work that stretches back to 2012, when the Rutgers Department of Computer Science launched the AAIL. The lab’s mission statement is simple: “We are focused on developing artificial intelligence and computer vision algorithms in the domain of art.” Over the years, the lab has developed several innovative algorithms that have piqued the interest of everyone from curators and historians to authenticators and auction houses. One algorithm, which incorporates the elements of novelty and influence, is used to measure artistic creativity. Another analyzes paintings and classifies them according to artist, period, and genre, similar to a Shazam for art. There’s even a forensics algorithm in the AAIL pipeline that identifies the subtle but distinct variations in the brushstrokes of different artists. In a business where forgeries are increasingly difficult to spot, that’s the kind of digital litmus test that insurance carriers, collectors, and galleries will beat a path to your lab door for.
The next step was obvious: a program that didn’t copy old art, but rather actually created new compositions. Elgammal “trained” his algorithm by feeding it over 80,000 digitized images of Western paintings culled from a timeline that stretched from the 15th to the 20th century. Using this immense corpus as the programming source material, he went about the task of creating a variation of the artificial intelligence system known as Generative Adversarial Networks. These so-called “GANs” are great at generating images of handbags and shoes, but not so great at generating original visual art. So Elgammal came up with his own proprietary image-generating system: Creative Adversarial Networks (CANs).
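
For readers unfamiliar with the underlying machinery, the following is a minimal, generic GAN training loop on toy one-dimensional data, written in PyTorch. It illustrates only the adversarial generator-versus-discriminator setup the article refers to; it is not Elgammal's code, data, or architecture.

```python
# A minimal, generic GAN on toy 1-D data, included only to show the adversarial setup.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator: sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator label its samples as real.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    print(G(torch.randn(1000, 8)).mean().item())  # drifts toward 3.0 as training converges
```

Scaled up to images and convolutional networks, this same tug-of-war is what produces the handbag and shoe pictures mentioned above.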
Reduced to the most elementary definition, a GAN is emulative and a CAN, as its name suggests, is creative. “The images generated by CAN do not look like traditional art, in terms of standard genres,” Elgammal wrote in his June 2017 paper, “CAN: Creative Adversarial Networks Generating ‘Art’ by Learning About Styles and Deviating from Style Norms.” “We also do not see any recognizable figures. Many of the images seem abstract. Is that simply because it fails to emulate the art distribution, or is it because it tries to generate novel images? Is it at all creative?”
When asked a similar question several months later, Elgammal no longer harbors any doubt. “The machine developed an aesthetic sense,” he says bluntly. “It learned how to paint.”
Example of images generated by CAN, included in “CAN: Creative Adversarial Networks Generating ‘Art’ by Learning About Styles and Deviating from Style Norms.” Courtesy of Ahmed Elgammal.
Like most splashy technological breakthroughs, the Rutgers art algorithm was actually borne of thousands of hours of tedious lab work. During the three weeks leading up to this pivotal moment, Elgammal and his two assistants made numerous tweaks to their finely calibrated algorithm, trying to coax the stubborn binary code into creating art that looked more human. Despite all the hard work, the 45-year-old AAIL director was initially frustrated. The pictures were neither good nor bad; they occupied the dreaded midpoint on the creativity bell curve.
The team got over this suboptimal hump by introducing more “stylistic ambiguity” and “deviations from style norms” into the algorithm, Elgammal explains. It’s a delicate balancing act. Stray too far from established painting styles and the resulting images will strike viewers as bizarre. Conversely, hew too closely to the traditional art canon and the computer will churn out lackluster pictures that are derivative and familiar, the computer equivalent of paint-by-numbers.
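
Based on the paper's published description, that balancing act can be read as two competing terms in the generator's loss: a reward for being judged art at all, and a reward for being hard to assign to any one historical style. The sketch below is a hedged reconstruction from that description, not the Rutgers lab's actual code; the function name, the weighting knob, and the toy class count of 25 are assumptions.

```python
# Sketch of the "stylistic ambiguity" signal described for CAN: the generator is rewarded
# when the discriminator judges an image to be art but cannot confidently assign it to any
# single established style. Reconstructed from the published description, not the lab's code.
import torch
import torch.nn.functional as F

def can_generator_loss(real_fake_logit, style_logits, ambiguity_weight=1.0):
    """real_fake_logit: discriminator's art/not-art logit for a generated image, shape (N, 1).
    style_logits: its logits over K historical style classes (e.g. Cubism, Baroque), shape (N, K).
    ambiguity_weight: how strongly to push for style ambiguity (an assumed knob)."""
    # Term 1: look like art, i.e. fool the real/fake head of the discriminator.
    art_loss = F.binary_cross_entropy_with_logits(
        real_fake_logit, torch.ones_like(real_fake_logit))

    # Term 2: be hard to pin to any single style. The average negative log-probability
    # over all K classes equals the cross-entropy against the uniform distribution,
    # which is minimized when the style posterior is maximally ambiguous.
    log_probs = F.log_softmax(style_logits, dim=-1)
    ambiguity_loss = -log_probs.mean(dim=-1).mean()

    return art_loss + ambiguity_weight * ambiguity_loss

# Toy usage with random tensors standing in for discriminator outputs:
print(can_generator_loss(torch.randn(4, 1), torch.randn(4, 25)))
```

In a setup like this, turning the ambiguity weight up or down is one way to steer between the "too bizarre" and "too derivative" extremes described above.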
After writing some more style patches, Elgammal ran the algorithm one more time. “I was expecting to see images that were fuzzy and not clear, weird faces and weird objects,” he said. Surprisingly, though, that didn’t happen. Instead, the AAIL team absolutely nailed the formulation. “The compositions and colors were very nice,” says Elgammal, relishing the memory of that eureka moment. “We said, ‘Wow! If this was in a museum, you would love it.’”
Pressed for a reason why the algorithm generated abstract art instead of, say, portraits and still lifes, Elgammal stresses the evolutionary nature of the Creative Adversarial Network. “It makes sense,” he says matter-of-factly. “If you feed the machine art history from the Renaissance to the present and ask it to generate something that fits into a style, the natural progression would be something along abstract aesthetic lines.” The A.I. art guru is on a roll: “Since the algorithm works by trying to deviate from style norms, it seems that it found the answer in more and more abstraction. That’s quite interesting, because that tells us that the algorithm successfully catches the progression in art history and chose to generate more abstract works as the solution. So abstraction is the natural progress in art history.”
In other words, the algorithm did exactly what many human artists would do given the same circumstances: It produced the kind of arresting images that would have a shot at catching the jaded eye of a Larry Gagosian or Charles Saatchi. Turning out art that smacked of Dutch Masters wouldn’t tickle the brain’s neural network enough, resulting in “habituation,” or a decreased arousal in response to repetitions of a stimulus. Simply put, art collectors are a bit like drug addicts: The visual stimuli the artwork projects must have enough “arousal potential” to trigger what psychologists refer to as the “hedonic response.”
The theories of psychologist Colin Martindale in this regard figure prominently in the DNA of Elgammal’s art algorithm. In his most popular text, *The Clockwork Muse: The Predictability of Artistic Change* (1990), Martindale suggests that successful artists incorporate novelty in their work. He hypothesized that this increase in arousal potential counteracts the viewer’s inevitable habituation response. This bump in creative novelty, however, must be minimized in order to avoid negative viewer reaction. Martindale also believed that artists embraced “style breaks” (a range of artistic phases) as a tool to make their work less predictable and more attractive to viewers over an extended period of time. It’s exactly the kind of thing an art algorithm could be based upon. “Among theories that try to explain progress in art,” Elgammal notes in his paper, “we find Martindale’s theory to be computationally feasible.”
Desmond Paul Henry, 1962. Courtesy of the D.P. Henry Archive.
The history of computer art, which constitutes the bedrock of the burgeoning academic field known on college campuses as the “digital humanities,” dates back to Desmond Paul Henry’s “Drawing Machines” of the early 1960s. The design of these contraptions was based on salvaged bombsight computers used by pilots during World War II to deliver munitions with pinpoint accuracy. The images Henry’s machines generated were abstract, curvilinear, and decidedly complex.
The computer art movement of the 1960s spawned more machine-made pictures, ranging from Alfons Schilling’s low-tech “spin art” (think of dripping paint on canvas attached to a giant potter’s wheel, which long predated Damien Hirst’s cynical “spin paintings”) to the early digital designs and animation produced at the legendary Bell Telephone Labs in Murray Hill, New Jersey. Founded in 1966 by Bell engineers Billy Klüver and Fred Waldhauer and artists Robert Rauschenberg and Robert Whitman, Experiments In Art and Technology (E.A.T.) was the seminal Bell Labs project upon which all of today’s computer-generated art is built. The creative process was extremely arduous. The art programs and data, for instance, had to be rendered via old-school keypunch. Those punch cards were fed into a room-sized computer, one by one. The resulting still images then had to be manually transferred to a visual output medium like a pen or microfilm plotter, a line printer or an alphanumeric printout.
As new computer technology was introduced, new machine-made art quickly followed: dot matrix printer art (1970s), video game art (2000s), 3D-printed art (2010s). What makes Elgammal’s computer images unique is that this marks the first time that A.I. has completely expunged humans from the real-time creative loop.
Unlike DeepDream, Google’s much-hyped 2015 bot-art project, the Rutgers AAIL machine involves no human intervention at all. Elgammal just turns on the computer and the algorithm does its thing. In stark contrast, DeepDream requires the human touch; Google programmers start with an image and apply texture (a.k.a. “style”). This means that the DeepDream composition is actually dictated by the input image or photo selected by a human entity.
Having an autonomous art algorithm seems to make all the difference. “The scores indicate that the subjects regard these paintings not just as art but as something appealing,” says Professor Elgammal. The scores that he’s referring to were tabulated after human subjects were asked to rate how intentional, communicative, visually structured, and inspiring the paintings were. The data revealed that subjects “rated the images generated by [the computer] higher than those created by real artists, whether in the Abstract Expressionism set or in the Art Basel set.”
The numbers went far beyond statistically significant. When asked to guess the authorship of actual artworks shown at Art Basel in Hong Kong in 2016, 59 percent of respondents inaccurately guessed that they were made by machines. In another portion of the survey, 75 percent of respondents assumed that paintings made by the algorithm were actually generated by humans. The computer-generated paintings squared off against comparable works by artists like Leonardo Drew, Andy Warhol, Heimo Zobernig, and Ma Kelu. Most of the contemporary artists whose work was used in the AAIL experiment declined to comment on Elgammal’s research paper, with one exception: Panos Tsagaris.
Panos Tsagaris, *Untitled*, 2015. Courtesy of the artist and Kalfayan Galleries Athens-Thessaloniki.
An untitled 2015 work by the Greek artist—a mixed media canvas tinged with gold leaf—was shown by Kalfayan Galleries at Art Basel 2016 in Hong Kong, and was included as a sample image for the AAIL tests. Tsagaris finds A.I. art “fascinating,” and considers the algorithm more of a peer than a disruptive threat. “I’m curious to see how this project will progress as the technology develops further,” he says. “What human-made paintings generated by a machine look like is one thing; bringing the A.I. artist to the level where it can create a concept, a series of emotions upon which it will base the painting that it will create is a whole other level.” He sounds more like Philip K. Dick than Clement Greenberg: “I want to see art that was generated in the mind and heart of the A.I. artist.”
Art historian and critic James Elkins is less sanguine. “This is annoying because [algorithms] are made by people who think that styles are what matter in art as opposed to social contexts, meaning, and expressive purpose,” he says. “One consequence of that narrow sense of what’s interesting is that it implies that a painting’s style is sufficient to make it a masterpiece.” Elkins doesn’t believe artists will go the way of cobblers and cabbies anytime soon either. “If human artists were to stop making art,” he argues, “so would the computers.”
Michael Connor, the artistic director of Rhizome, a non-profit that provides a platform for digital art, agrees. He describes the gap between silicon- and carbon-based artists as wide and deep: “Making art is not the sole role of being an artist. It’s also about creating a body of work, teaching, activism, using social media, building a brand.” He suggests that the picture Elgammal's algorithm generates is art in the same way that what a Monet forger paints is art: “This kind of algorithm art is like a counterfeit. It’s a weird copy of the human culture that the machine is learning about.” He adds that this isn’t necessarily a *bad* thing: “Like the Roman statues, which are copies of the original Greek figures, even copies can develop an intrinsic value over time.”
Elgammal is quick to point out that the learning curve of his algorithm perfectly conforms to the maturation process of the human artist. “In the beginning of their careers artists like Picasso and Cézanne imitated or followed the style of painters they were exposed to, either consciously or unconsciously. Then, at some point, they broke out of this phase of imitations and explored new things and new ideas,” he says. “They went from traditional portraits to Cubism and Fauvism. This is exactly what we tried to implement into the machine-learning algorithm.”
And, just like a real emerging artist, the algorithm is about to have its first one-machine show. “Unhuman: Art in the Age of A.I.,” an exhibition in Los Angeles this October, will feature 12 of the original, A.I.-produced pieces used in the Rutgers study. And after this debut, Elgammal’s algorithm has plenty of room for career growth. That’s because the coders in the Rutgers lab haven’t exploited all the “collative variables” that can be used to jack up the “arousal potential” of the images the algorithm generates. The higher the arousal potential (to a point), the more pleasing the A.I. art is to humans (and the more likely they are to buy it, presumably).
Despite all the A.I. art naysayers, here’s the thing that should make painters and the dealers who represent them nervous: Elgammal claims that the images his computer code generates will only get better over time. “By digging deep into art history, we will be able to write code that pushes the algorithm to explore new elements of art,” he says confidently. “We will refine the formulations and emphasize the most important arousal-raising properties for aesthetics: novelty, surprisingness, complexity, and puzzlingness.”
*Surprisingness* and *puzzlingness*—not exactly *Artforum* buzzwords. But allow the algorithm time to improve and compile a body of work, and they might be. Elgammal insists this technology is no one-hit wonder. He envisions an entire infrastructure developing to support his arousal-inducing digital art: galleries, agents, online auctions, even authenticators (a service that will undoubtedly be rendered by yet another AAIL algorithm).
[Image captions: *Untitled (HZ 2015-080)*, 2015; *Nature-Abstract No.1*, 1984]
But before selling all your Warhols and investing heavily in an algorithm-generated art portfolio, consider this history lesson. In 1964, A. Michael Noll, an engineer and early computer pioneer at Bell Labs, did his own art Turing test. He programmed an IBM computer and a General Dynamics microfilm plotter to generate an algorithmic riff of the Piet Mondrian masterpiece *Composition with Lines* (1917). The digital image was projected on a cathode ray tube and photographed with a 35mm camera. A copy of that print, which Noll cheekily titled *Computer Composition with Lines*, was shown to 100 subjects next to a reproduction of the Mondrian painting. Only 28 percent of the subjects were able to correctly identify the IBM mimic. Even more stunning, 59 percent of the subjects preferred the computer image over the Mondrian original.
The following year, a collection of Noll’s digital art was exhibited at the Howard Wise Gallery in New York, marking the first time that computer-generated art was featured in an American art gallery. The* New York Times* gave the groundbreaking exhibition a rave review. According to Noll, though, the public response was “disappointing.” Not a single image from the show was sold.
That failed exhibition did nothing to diminish Noll’s optimism about the future of digital art. “The computer may be potentially as valuable a tool to the arts as it has already proven itself to be in the sciences,” he wrote in 1967. In the half-century since those words first appeared in print, that prophecy has yet to come true. But what should we make of a new algorithm that’s not so much a “tool” to assist artists, as it is a machine to replace them? It's the hoary cyberpunk plot unspooling in real life: Mad scientist invents a machine that becomes more human than humans.
Anyone who follows the contemporary art market will notice an additional wrinkle here. For a moment, the prevailing style—from art schools to the gallery circuit and the auction houses—was a breed of abstract painting that critics dubbed “Zombie Formalism” (aka Neo-Modernism, MFA Abstraction, and, more derisively, Crapstraction). Clinical, derivative, pretentious, and vertically formatted for convenient Instagram posting, this new genre, which is frequently digitized, filtered, and presented through a computer, is human art masquerading as algorithm art.
It’s the kind of exquisite irony that sparks conversations about creeping dystopia and the decline of culture: To regain their edge and pull higher scores on Professor Elgammal’s next Turing test, humans might have to start painting *more* like robots. If budding crapstractionists followed the lead of artificial intelligence—“deviating from the norm” and injecting a touch of “style ambiguity” into their work—their painting might actually improve.
| true | true | true |
An A.I. lab in New Jersey has launched an algorithm whose abstract paintings are able to fool humans. Should artists be nervous about competition from machines?
|
2024-10-12 00:00:00
|
2017-09-21 00:00:00
|
https://d7hftxdivxxvm.cloudfront.net?height=630&quality=80&resize_to=fill&src=https%3A%2F%2Fartsy-media-uploads.s3.amazonaws.com%2Fm4A1tGFxOYUxMZP4FMBxRA%252Fmagcomp.jpg&width=1200
|
article
|
artsy.net
|
Artsy
| null | null |
16,361,506 |
https://www.youtube.com/watch?v=7rOQv_6L9fQ
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
11,003,516 |
https://gist.github.com/jeena/6072278fd5841a77a3e7
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,518,155 |
https://www.seattletimes.com/business/boeing-aerospace/faa-engineers-objected-to-boeings-removal-of-some-787-lightning-protection-measures/
|
FAA engineers objected to Boeing’s removal of some 787 lightning protection measures
|
Dominic Gates
|
Last spring, Federal Aviation Administration (FAA) managers approved removing a key feature of the 787 Dreamliner wing that aimed to protect it in the event of a lightning strike.
Boeing’s design change, which reduces costs for the company and its airline customers, sped through despite firm objections raised by the agency’s own technical experts, who saw an increased risk of an explosion in the fuel tank inside the wing.
That clash will come under scrutiny Wednesday as FAA Administrator Steve Dickson appears before a House committee examining the agency’s regulatory performance in the wake of the two Boeing 737 MAX crashes.
Lightning protection on an airplane like the 787 that’s fabricated largely from carbon composites is more elaborate than on a metal aircraft. When Boeing developed the Dreamliner, it included special measures to protect the wing fuel tank. It sealed each metal fastener in the wing with an insulating cap and embedded copper foil in strips across the carbon wing skin to disperse the current from any lightning strike.
Five years ago, Boeing quietly stopped adding the insulating fastener caps. Its own engineers approved the change with minimal input from the FAA.
Then, in March, it stopped adding the copper foil. The entire wing surface of any 787 delivered since then now lacks both protections.
The FAA initially rejected the removal of the foil from the wing on February 22, when its certification office ruled that Boeing had not shown, as regulations required, that the ignition of fuel tank vapor by a lightning strike would be “extremely improbable,” defined in this case as likely to occur no more than three times in a billion flight hours.
By then Boeing had already built about 40 sets of wings without the foil.
Facing the prospect of not being able to deliver those airplanes, Boeing immediately appealed. FAA managers reversed the ruling exactly a week later — just days before the unrelated crash of the second 737 MAX.
In June, a senior FAA safety engineer, Thomas Thorson, concerned that the agency was hurriedly approving Boeing’s desired changes so it could deliver planes it had already built, formally objected.
“I do not agree that delivery schedules should influence our safety decisions and areas of safety critical findings, nor is this consistent with our safety principles,” Thorson wrote.
FAA management has faced heavy criticism for the way it took scrutiny of the MAX’s certification away from its own technical staff and delegated most of the approval process to Boeing itself.
The 787 decision, which came as Boeing was pushing to reduce the cost and complexity of manufacturing the jet, raises similar concern.
Boeing says the changes were introduced as its understanding of lightning protection evolved over time, both in terms of what works well in practice and in what’s needed to meet the FAA requirements.
In a statement, Boeing said the 787 has “several other layers of protection from lightning strikes” and that each design change “was properly considered and addressed by Boeing, thoroughly reviewed with and approved by the FAA.”
Thorson, propulsion technical project manager at the FAA, wrote that the agency’s technical experts had discovered errors in the way Boeing had summed up the various risks of the lightning protection features and that with the removal of the foil “the fuel tank ignition threat … cannot be shown extremely improbable.”
Thorson estimated that if the math were corrected, the ignition risk “would be classified as potentially unsafe.”
He recommended that the FAA reject Boeing’s assertion that it complied with regulations “due to the amount of risk that the FAA would be accepting for fuel tank ignition due to lightning.”
Thorson also objected to the FAA delegating to Boeing itself a System Safety Assessment of the design change that was specific to the largest Dreamliner model, the 787-10, because of different details inside the wing.
He wrote that the rationale provided for this delegation of oversight was the FAA’s inability “to support the airplane delivery schedule.” The FAA’s approval of the design change for that specific model on June 28 allowed Boeing to go ahead next day and deliver a 787-10 in South Carolina to Dutch airline KLM.
The 787 lightning protection changes were first raised last month in a letter to the FAA from Rep. Peter DeFazio, D-Ore., chair of the House Committee on Transportation and Infrastructure, and Rep Rick Larsen, D-Everett, chair of the Aviation subcommittee. The committee provided Thorson’s letter and other supporting documents after a request by The Seattle Times.
FAA Administrator Dickson wrote to the committee on Friday insisting that “the design change had no unsafe features” and that the 787s produced since the removal of the copper foil from the wing skin are “currently safe to operate.”
Still, in October, perhaps in response to the mounting criticism of its oversight role, the FAA seemed to take a step back.
In a letter to Boeing, the agency’s airplane certification unit said the “cumulative effect of multiple issues” affecting the 787’s lightning protection — including the deliberate design changes, a sequence of Boeing manufacturing errors and the discovery that some lightning protection features proved inadequate in practice — raised concerns that the risk from a lightning strike is greater than the regulations allow.
As a result, eight years after the FAA gave its original approval to the 787, and months after approving the removal of the foil from the wing, it finally asked Boeing to conduct a formal re-assessment of the risk of a fuel tank explosion in the 787 wing.
### Redrawn lightning strike zones
Each passenger airplane, on average, gets hit by lightning about once a year, with more strikes recorded in certain regions. The impact on a plane with a traditional metal airframe is often minimal because the current passes along the skin from the front to the back, then exits to the ground.
A carbon skin is much less conductive, and so requires careful protection to avoid having all the power of a lightning strike concentrated in the small area that’s hit.
A British Airways 787 struck by lightning two years ago shortly after it departed London’s Heathrow airport sustained more than 40 holes in the fuselage from the strike, damage that was discovered only after it landed in India.
Beneath its carbon skin, and specifically inside the wing fuel tank, the 787 has metal fittings and structures. These must all be grounded and linked so that if current reaches them, it’s distributed safely away.
The danger is that a small crack in the metal, or two metal fittings close together but not linked, might cause the current to jump the gap, creating a spark that could ignite the fuel vapor. The metal fasteners in the jet’s wing skin and the metal ribs, wires, tubes and fittings in the interior of the fuel tank must all be protected against such a possibility.
The original 787 wing design certified by the FAA gave most wing fasteners triple protection: an insulating sealant cap on the exterior head, the copper foil to disperse the current and a collar that compressed to create a secure, tight fit when the fastener was inserted. This made them fault-tolerant: if one layer failed, there were two other layers of protection.
Boeing stopped installing the insulating caps on the fasteners in 2014 because the sealant cap tended to crack quickly in service, and maintaining tens of thousands of such fastener heads was an expensive headache for the airlines.
On the minority of fasteners that didn’t have the collar creating the tight fit, Boeing put insulating cap seals on the inside of the tank instead.
The copper foil just under the skin was another headache. It added weight and cost for Boeing to install and, if it was struck by lightning, it was expensive for the airline to repair.
SAE International, a global engineering association that sets industry standards, classifies the different areas of an airframe for lightning strike purposes as Zone 1, areas likely to get a direct hit; Zone 2, areas aft of the direct hit vulnerable to a strike sweeping back as the airplane moves forward; and Zone 3, areas unlikely to get a lightning strike.
Boeing decided to remove the foil from Zone 3.
In December 2018, SAE revised the zones based on data from reports of more than 1,000 lightning strikes on aircraft. It found that the area aft of the engines — designated Zone 2 when the 787 was certified — was rarely hit, and so changed its classification to Zone 3.
Now all of the wing except for the wingtip — almost the entire fuel tank — is classified with the lowest vulnerability, Zone 3.
The basis for much of Thorson’s objection in June was that tens of thousands of wing fasteners were now no longer fault tolerant. With no copper foil and few fasteners with in-tank cap seals — approximately 10%, one FAA engineer told The Seattle Times — each of the remaining 90% of wing fasteners became a potential single point of failure, a potential ignition source, in the event of a lightning strike.
During initial certification of the 787, the FAA told Boeing that in assessing the probability of a fuel tank explosion, it didn’t need to sum up all the features in Zone 3 that were not fault tolerant since the risk was very low. The overall probability of a fuel tank explosion, Boeing calculated then, met the “three in a billion” threshold, though not with much margin to spare.
In turning down Boeing’s proposal in February, the FAA’s technical staff argued that Boeing’s design changes left many more features that aren’t fault tolerant and that this calculation needs to be done anew to assess the risk properly.
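To see why the summation matters, here is a back-of-the-envelope illustration in Python; the fastener count and the assumption of equal, independent per-fastener risk are hypothetical and are not FAA or Boeing figures.

```python
# Hypothetical numbers, for illustration only (not FAA or Boeing figures).
threshold_per_hour = 3e-9      # "extremely improbable": 3 in a billion flight hours
fasteners = 40_000             # assumed count of non-fault-tolerant fasteners

# If each fastener is an independent potential ignition source and the risks
# roughly add up, the per-fastener budget is the threshold divided by the count.
per_fastener_budget = threshold_per_hour / fasteners
print(f"per-fastener budget: {per_fastener_budget:.1e} per flight hour")  # 7.5e-14

# Counting only a tenth of the fasteners makes the apparent per-fastener budget
# ten times looser, which is why the technical staff wanted the sum redone.
print(f"budget if only 10% are counted: {threshold_per_hour / (fasteners // 10):.1e}")
```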
In an internal email to Thorson, another FAA safety engineer referred to the pending 787-10 delivery to KLM, writing that “this is clearly a contentious issue and Boeing is rushing the certification so they can deliver airplanes.”
Boeing rejected the opinion of the FAA technical specialists and persuaded FAA management that it was still in compliance with safety regulations.
In a statement, the agency said that decision was made “by managers who were qualified safety experts with technical backgrounds.”
In written answers provided to The Seattle Times, Boeing said the 787 design provides “the necessary safety and protection redundancy” and meets “all regulatory and design requirements.”
Boeing also noted that other 787 lightning protection measures remain.
In addition to the various measures taken to reduce the chance of a spark, the 787 wing fuel tank has a nitrogen-generating system (NGS) that reduces flammable vapor by filling the space above the fuel with inert gas.
This is a highly reliable way to prevent an explosion. The one weakness in the system is that, if it’s inoperative for any reason, airlines are still allowed to fly the aircraft for a limited time.
### A new risk assessment
Boeing is currently conducting the new risk assessment ordered by the FAA in October.
In addition to the removal of the insulating caps from the metal fasteners and the copper foil from the carbon wing skin, the FAA has asked Boeing to consider multiple other changes it’s made to details inside the fuel tank.
In the system that measures how much fuel is in the tank, Boeing dropped a feature designed to prevent wires rubbing together. It removed sealant from certain areas judged to no longer be vulnerable to sparking. It removed clamps designed to ensure hydraulic tubing didn’t come loose inside the tank.
Most of the design changes reduce complexity, cost and weight. Boeing has steadily reduced the cost of building the 787, a vital part of its drive to recoup the $22 billion in still-outstanding 787 production costs deferred into the future.
The FAA said it also wants Boeing to assess the risk from discoveries since the plane entered service.
For example, a primer used to coat surfaces inside the tank before sealant was added was found not to adhere as well as expected and was prone to degrading, leaving the sealant loose.
Boeing must also factor into its assessment a series of manufacturing errors that have slipped through in the production process at different times, so that various 787s in service around the world have details within the wings that don’t conform to the design.
The FAA letter cited 11 examples of such manufacturing concerns, such as “fastener washers installed backwards with a sharp edge against the primer,” causing it to crack. Each specific issue affects only the jets built during the period before that mistake was corrected.
The assessment needs to address all the configurations of Dreamliners that have been produced with the various design differences and known production quality errors, the FAA said.
Boeing, in its written response to questions, said it “has assessed all of the configurations in-service” and is “confident that the aircraft all meet the federal safety requirements.”
Appearing before DeFazio’s committee Wednesday, FAA chief Dickson will be pressed to defend his agency’s independence from industry and the integrity of its oversight.
His written response to the committee insists that Boeing’s decision to delete copper foil from the 787 wing skin complied with all FAA requirements and that the reassessment now underway of the jet’s lightning protection was not driven by that change.
And he denied there’s any “concern that a short-term or urgent safety issue exists.”
The FAA engineer who spoke to The Seattle Times said he found these assertions contradictory. On the one hand, the FAA says the 787 is safe. On the other, it’s ordered Boeing to recheck that it’s safe.
If Boeing’s re-assessment confirms that the 787 remains within regulations, the jetmaker’s next new airplane will follow the same path. The massive carbon composite wing of the 777X will be built according to the latest 787 wing specifications: without the copper foil or the insulating cap seals.
| true | true | true |
To protect the wing fuel tank of the 787 in a lightning storm, Boeing originally built in a series of protective measures. Boeing has now dropped some of those protections from the entire 787 wing, drawing complaints from FAA technical...
|
2024-10-12 00:00:00
|
2019-12-10 00:00:00
|
article
|
seattletimes.com
|
The Seattle Times
| null | null |
|
41,400,257 |
https://yoshuabengio.org/2024/08/29/bounding-the-probability-of-harm-from-an-ai-to-create-a-guardrail/
|
Bounding the probability of harm from an AI to create a guardrail - Yoshua Bengio
|
Yoshuabengio
|
As we move towards more powerful AI, it becomes urgent to better understand the risks, ideally in a mathematically rigorous and quantifiable way, and use that knowledge to mitigate them. Is there a way to design powerful AI systems based on machine learning methods that would satisfy probabilistic safety guarantees, i.e., would be provably unlikely to take a harmful action?
Current AI safety evaluations and benchmarks test the AI for cases where it may behave badly, e.g., by providing answers that could yield dangerous misuse. That is useful and should be legally required with flexible regulation, but is not sufficient. These tests only tell us one side of the story: If they detect bad behavior, a flag is raised and we know that something must be done to mitigate the risks. However, if they do not raise such a red flag, we may still have a dangerous AI in our hands, especially since the testing conditions might be different from the deployment setting, and attackers (or an out-of-control AI) may be creative in ways that the tests did not consider. Most concerningly, AI systems could simply recognize they are being tested and have a temporary incentive to behave appropriately while being tested. Part of the problem is that such tests are spot checks. They are trying to evaluate the risk associated with the AI in general by testing it on special cases. Another option would be to evaluate the risk on a case-by-case basis and reject queries or answers that are considered to potentially violate a safety specification.
With the long-term goal of obtaining a probabilistic guarantee that would apply in every context, we thus consider in this new paper (see reference and co-authors below) the objective of estimating a context-dependent upper bound on the probability of violating a given safety specification. Such a risk evaluation would need to be performed at run-time to provide a guardrail against dangerous actions of an AI.
There are in general multiple plausible hypotheses that could explain past data and make different predictions about future events. Because the AI does not know which of these hypotheses is right, we derive bounds on the safety violation probability predicted under the true but unknown hypothesis. Such bounds could be used to reject potentially dangerous actions. Our main results involve searching for cautious but plausible hypotheses, obtained by a maximization that involves Bayesian posteriors over hypotheses and assuming a sufficiently broad prior. We consider two forms of this result, in the commonly considered iid case (where examples are arriving independent from a distribution that does not change with time) and in the more ambitious but more realistic non-iid case. We then show experimental simulations with results consistent with the theory, on toy settings where the Bayesian calculations can be made exactly, and conclude with open problems towards turning such theoretical results into practical AI guardrails.
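To convey the flavor of such a run-time guardrail, here is a toy numerical sketch in Python; it is an illustration only, not the bound derived in the paper, and the plausibility cutoff and the numbers in the example are made up.

```python
import numpy as np

def cautious_harm_bound(posterior, harm_prob, plausibility=0.05):
    """Toy guardrail: bound the harm probability of a proposed action by the
    most pessimistic hypothesis that is still plausible under the posterior.
    (Illustrative only; the paper derives a different, principled bound.)"""
    posterior = np.asarray(posterior, dtype=float)
    harm_prob = np.asarray(harm_prob, dtype=float)
    # Treat a hypothesis as plausible if its posterior mass is at least a
    # fraction `plausibility` of the best hypothesis's mass.
    plausible = posterior >= plausibility * posterior.max()
    return harm_prob[plausible].max()

# Three candidate explanations of the data observed so far; the least likely
# one that is still plausible predicts a 20% chance of violating the spec.
bound = cautious_harm_bound(posterior=[0.60, 0.30, 0.10],
                            harm_prob=[0.001, 0.020, 0.200])
if bound > 0.01:  # risk tolerance chosen by the deployer
    print(f"reject the action: bounded violation probability {bound:.3f}")
```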
Can a Bayesian Oracle Prevent Harm from an Agent? By Yoshua Bengio, Michael K. Cohen, Nikolay Malkin, Matt MacDermott, Damiano Fornasiere, Pietro Greiner and Younesse Kaddar, in arXiv:2408.05284, 2024.
This paper is part of a larger research program (with initial thoughts already shared in this earlier blog post) that I have undertaken with collaborators that asks the following question: If we could leverage recent advances in machine learning and amortized probabilistic inference with neural networks to get good Bayesian estimates of conditional probabilities, could we obtain quantitative guarantees regarding the safety of the actions proposed by an AI? The good news is that as the amount of computational resources increases, it is possible to make such estimators converge towards the true Bayesian posteriors. Note how this does not require asymptotic data, but “only” asymptotic compute. In other words, whereas most catastrophic AI scenarios see things getting worse as the AI becomes more powerful, such approaches may benefit from the increase in computational resources to increase safety (or get tighter safety bounds).
The above paper leaves open a lot of challenging questions, and we need more researchers digging into them (more details and references in the paper):
- **Moderate overcautiousness**. Can we ensure that we do not underestimate the probability of harm but do not massively overestimate it?
- **Tractability of posterior estimation**. How can we efficiently estimate the required Bayesian posteriors? For computational tractability, a plausible answer would rely on amortized inference, which turns the difficult estimation of these posteriors into the task of training a neural net probabilistic estimator which will be fast at run-time. Recent work on amortized Bayesian inference for symbolic models suggests that this is feasible.
- **Efficient search for a cautious theory**. How can we efficiently identify a cautious but plausible theory that upper-bounds the risk, since this requires an optimization at run-time? Again, a plausible answer is to rely on amortized probabilistic inference, e.g., by sampling theories with a low or annealed temperature, a technique that has been used for decades in the nonconvex optimization literature and more recently combined with amortized sequential sampling methods.
- **Partial specification of theories**. Another issue is that a full explanatory theory will generally be too large an object to make the above bound calculations and optimization sufficiently efficient. An alternative would be to derive bounds that only require partial explanations, similar to how humans tell cautious “stories” about what could go wrong in uncertain circumstances when thinking about taking a potentially dangerous decision.
- **Safety specification in natural language**. How is a safety specification provided in natural language, like with Anthropic’s Constitutional AI, turned into a random variable over which conditional probabilities can be estimated? Having humans label lots of situations that match the desired constraint seems impractical. The alternative proposed in my February blog post is to learn to interpret natural language statements as formal random variables over which probabilistic causal reasoning can occur. The ambiguity of natural language can then be captured by having a distribution over formal interpretations.
- **Effect of approximation error**. Finally, because we envisage using machine learning to estimate the bound, we have to consider the fact that a learned predictor will be imperfect. How do we correct conservatively for the potentially dangerous errors made by this predictor? This may be particularly important if we use a neural network to estimate probabilities, since they are subject to lack of robustness and adversarial attacks, e.g., how do we make sure that the agent proposing the actions has not learned to find loopholes in the risk estimator?
| true | true | true |
As we move towards more powerful AI, it becomes urgent to better understand the risks, ideally in a mathematically rigorous and quantifiable way, and use…
|
2024-10-12 00:00:00
|
2024-08-29 00:00:00
|
article
|
yoshuabengio.org
|
Yoshua Bengio
| null | null |
|
28,429,513 |
https://travel.stackexchange.com/q/168537/101
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
2,747,725 |
http://www.reddit.com/r/technology/comments/ikymu/googlefacebook_allows_you_to_view_your_facebook/c24mfci
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,202,894 |
http://www.viddler.com/explore/loopytube/videos/644/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
19,298,576 |
http://rydesplit.ca
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,761,551 |
http://www.cnn.com/2013/11/19/showbiz/uk-monty-python-reunion/index.html?c=showbiz
|
Spam, spam, spam or Monty Python reunion? | CNN
|
Susannah Cullinane
|
### Story highlights
NEW: Terry Jones tells the BBC that a reunion is happening
The comedy group Monty Python's Flying Circus was formed in October 1969
They produced 45 TV episodes for the BBC and five feature films before separating in 1983
Ex-Python Eric Idle has tweeted that there will be "a big forthcoming news event"
A series of cryptic tweets and the announcement of a news conference sparked speculation that the five surviving members of British comedy troupe Monty Python may be about to reunite.
And one of the members appears to have let the cat out of the bag, telling the BBC that a reunion is indeed happening.
“We’re getting together and putting on a show – it’s real,” Terry Jones told the BBC, adding: “I’m quite excited about it. I hope it makes us a lot of money. I hope to be able to pay off my mortgage!”
The news conference will be in London on Thursday.
Member Eric Idle tweeted Tuesday that there was a “Python meeting this morning,” after tweeting Monday: “Only three days to go till the Python press conference. Make sure Python fans are alerted to the big forthcoming news event.”
The public relations agency that sent out the invitations to the news conference declined to confirm it was to announce a reunion, saying: “All will be revealed on Thursday.”
Michael Palin, John Cleese, Terry Gilliam, Terry Jones, Eric Idle and the late Graham Chapman became comedic legends with the creation of Monty Python’s Flying Circus in October 1969.
Read more: What is Monty Python?
They produced 45 TV episodes for the BBC and five feature films together before going their separate ways in 1983.
The shows mostly consisted of a string of barely coherent sketches, often lacking conventional punch lines and loosely tied together by Gilliam’s stream of consciousness animations.
The group dressed as old ladies and transvestite lumberjacks, performed sketches about pompous middle class men, used catchphrases such as “And now for something completely different,” and sang amusing ditties such as “Spam, spam, spam, spam, spam, spam, spam, spam …”
Read more: 40 years of silliness
Although the TV show ran for only four seasons, it proved a massive cult hit when it was shown in the United States beginning in 1974 – just as the show was winding up on the other side of the Atlantic.
That success spawned a series of spin-off productions, including the films “Monty Python and the Holy Grail,” the Bible-baiting “Monty Python’s Life of Brian” and “Monty Python’s The Meaning of Life” as well as “Live at the Hollywood Bowl.”
Many of today’s comedians cite Python as a key influence, and its influence can be seen in comedy shows including “The Daily Show” and “The Simpsons.”
| true | true | true |
A series of cryptic tweets and a news conference announcement have sparked speculation that British comedy troupe Monty Python may be about to reunite.
|
2024-10-12 00:00:00
|
2013-11-19 00:00:00
|
article
|
cnn.com
|
CNN
| null | null |
|
1,583,552 |
http://nextparadigms.com/2010/08/07/7-hints-about-upcoming-android-3-0-gingerbread/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,321,186 |
https://blakeniemyjski.com/home-automation/thinking-about-smart-home-power-usage/
|
https://blakeniemyjski.com/blog/thinking-about-smart-home-power-usage/
| null | null | true | true | false | null |
2024-10-12 00:00:00
| null | null | null |
blakeniemyjski.com
|
blakeniemyjski.com
| null | null |
9,718,366 |
http://tech.eu/features/5067/european-tech-unicorns-gp-bullhound-report/
|
13 European tech companies became 'unicorns' in the last year
|
Neil Murray
|
Tech 'dealmaker' GP Bullhound has today released a research report detailing that Europe has seen thirteen of its tech companies pass a billion-dollar valuation in the last year, meaning that Europe is currently producing a so-called 'unicorn' at a rate faster than one a month.
Of the thirteen new companies to join the once-exclusive club that's seemingly becoming easier to join, 8 were from the UK and 3 were from Germany, while France and The Netherlands had 1 addition each.
### The thirteen in full
- Adyen (The Netherlands)
- BlaBlaCar (France)
- Delivery Hero (Germany)
- FanDuel (UK)
- Farfetch (UK)
- Funding Circle (UK)
- Home24 (Germany)
- Powa (UK)
- Rocket Internet (Germany)
- Shazam (UK)
- Skrill (UK)
- TransferWise (UK)
- Ve (UK)
However, three of Europe's companies who previously had a valuation of $1 billion+ lost their 'unicorn' classification after a difficult year of trading. These were the UK's Monitise and Boohoo.com, and Spain's eDreams Odigeo, whose post-IPO troubles have been covered by us before.
This means that Europe is now home to 40 unicorns, compared to 30 at this same point last year.
### Other key findings from the report
- Europe's 'unicorns' have a collective value of $120 billion
- Therefore, the average European 'unicorn' is valued at $3 billion
- The UK has produced the most 'unicorns' since 2000 with 17. Sweden has produced 6, and Germany and Russia are home to 4 each
- Fintech is the industry category gaining 'unicorns' at the fastest rate
- Less than half of Europe's 'unicorns' have had a liquidity event, with time to liquidity averaging above 8 years
- Building a 'unicorn' is expensive, with the median investment $140 million
- Index Ventures is the most successful in investing in Europe's 'unicorns', having backed 9
- 87% of the companies are still managed by at least one member of the founding team
However, although Europe is producing billion-dollar companies at a rate faster than ever before, it still remains behind the US, with America seeing 22 new companies gain a valuation north of $1 billion in the same period. And while European 'unicorns' have a collective value of $120 billion, Facebook has a market capitalisation of more than double that on its own ($275 billion).
But while there is still some catching up to do, as the gap in available capital between the US and Europe decreases, it's likely that the gap between creating 'unicorns' will continue to do so too.
You can read the full report from GP Bullhound here (PDF).
**Also read:**
European tech company exits in 2014: 358 deals in total, €80.14B (disclosed), and other take-aways
An interview with NASDAQ exec Adam Kostyál about Europe’s potential for future tech IPOs
*Featured image credit:* Shari ONeal / Shutterstock
| true | true | true |
According to research from GP Bullhound, Europe has produced 13 technology companies valued at $1 billion+ in the last year. Neil Murray lists the latest 'unicorns' to come out of Europe.
|
2024-10-12 00:00:00
|
2015-06-15 00:00:00
|
website
|
tech.eu
|
Tech.eu
| null | null |
|
9,107,610 |
http://graphics.wsj.com/infectious-diseases-and-vaccines/#b02g20t20w15
|
Battling Infectious Diseases in the 20th Century: The Impact of Vaccines
|
Tynan DeBold
|
# Battling Infectious Diseases in the 20th Century: The Impact of Vaccines
The number of infected people, measured over 70-some years and across all 50 states and the District of Columbia, generally declined after vaccines were introduced.
The heat maps below show number of cases per 100,000 people.
Source: Project Tycho
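A chart in this style can be reproduced from the Project Tycho data with a few lines of Python; in the sketch below, the file name, column names, and the 1963 marker for the licensing of the measles vaccine are assumptions for illustration.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed input: a long-format CSV with columns year, state, cases_per_100k
# for one disease (file name and column names are placeholders).
df = pd.read_csv("measles_cases_per_100k.csv")

# Pivot to a states-by-years grid, the shape of the heat maps in the piece.
grid = df.pivot_table(index="state", columns="year", values="cases_per_100k")

fig, ax = plt.subplots(figsize=(12, 8))
im = ax.imshow(grid.values, aspect="auto", cmap="viridis")
ax.set_xticks(range(0, grid.shape[1], 10))
ax.set_xticklabels(grid.columns[::10])
ax.set_yticks(range(grid.shape[0]))
ax.set_yticklabels(grid.index, fontsize=6)
# Mark 1963, the year the measles vaccine was licensed (assumes that year
# appears among the data's columns).
ax.axvline(list(grid.columns).index(1963), color="white", linewidth=1)
fig.colorbar(im, ax=ax, label="cases per 100,000")
plt.show()
```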
| true | true | true |
The number of infected people, measured over 70-some years and across all 50 states and the District of Columbia, generally declined after vaccines were introduced.
|
2024-10-12 00:00:00
|
2024-01-01 00:00:00
|
http://si.wsj.net/public/resources/images/OG-AD828_VACCIN_NS_20150211161752.jpg
|
article
|
graphics.wsj.com
|
WSJ
| null | null |
4,653,079 |
http://crca.ucsd.edu/~msp/techniques/latest/book-html/
|
Miller Puckette
| null |
Miller Puckette
Department of Music, University of California San Diego
La Jolla, CA 92093-0099
... and ... IRCAM, Paris, France
[email protected]
**Books, articles, and videos:**
Classes taught at UCSD
Music Department Lecture Series:
Voice as Musical Instrument (2019)
Recorded music, research talks, and class lectures
with a searchable table of class contents
Book (online or paper):
Theory and Techniques of Electronic Music
Rhy by Ed Harkins (DRAFT):
HTML; PDF;
audio files.
other publications
Biographical note
Past and upcoming events
*...he says to me, "Are you ready to be yourself again?"
I said, "Which one?"*
-Iggy Pop in an interview by David Marchese
| true | true | true | null |
2024-10-12 00:00:00
|
2001-01-01 00:00:00
| null | null | null | null | null | null |
2,077,215 |
http://mashable.com/2011/01/06/quora-growth-not-twitter/
|
Why Quora Will Never Be as Big as Twitter
|
Vadim Lavrusik
|
Though Quora is a valuable service that very often yields answers to your questions and displays insightful commentary about topics you're interested in, the site has a focused appeal that makes it unlikely to attract mainstream users. And that's OK, because maybe Quora isn't trying to attract the mainstream audience.
Quora has learned a lot from the likes of Facebook, Twitter and Digg, and it has provided an outlet for conversation and discussion that many other social sites have been unable to achieve. It's a great site.
But despite having done an outstanding job attracting a user base and providing a quality experience with other big competitors on its tail, its growth is likely to be more steady and organic. Though Quora will grow this coming year and attract a substantial user base, I am skeptical of the claims that it will scale to the stature of Twitter. Here's why.
### Focused Appeal
In some ways, Quora has a broad appeal: answering specific questions and questions you didn't know you had but that interest you. When users go to a search engine like Google, they are looking for specific information. Quora is similar, but, instead of an algorithm, you get answers from people who are knowledgeable about the topic. It's similar to a social search engine.
But questions and answers can only take you so far. Twitter, for example, has broad appeal because it is a simple act of publishing and distribution in 140-character messages. It appealed to a broad audience after getting the attention of celebrities and notable figures like Oprah, but more importantly, it satisfied the need for simple and real-time publishing to your connections. It's simple, and users don't feel like they have to work to use it. Quora feels heavy, which is of course where it excels, providing in-depth commentary to questions. But that heaviness is unlikely to attract a large audience.
It might be useful for focused groups -- say, journalists -- seeking answers or hoping to get insights on topics they are interested in. Those in the tech industry may also find it useful, but this utility may not be that attractive to an average user.
### Attracting a Broader Audience
The incentive for an average user to join needs to improve. When Foursquare began growing at the beginning of last year, some also called it the "next Twitter." But we quickly learned that only 4% of online Americans use location-based services like Foursquare. Of course, Foursquare still managed to grow to 5 million users and attract a lot of buzz, but the broad interest in sharing your location and the incentive of rewards still wasn't enough.
Despite some assertions that Quora has gained a broader audience, it seems to be heavily trafficked by technologists, those in the media industry and social media types.
Though these groups are often the early adopters of other services, Quora seems to be particularly tech-heavy. That's part of the reason why I find it useful -- because there are people there who are interested and who are contributing to topics I am interested in. But what about users interested in topics outside of technology? Topics like automotive, for example, have a smaller amount of followers. If there are fewer users for you to connect with that are interested in the same topics as you, it decreases your chances of sticking around. For Quora, user retention will be a challenge.
### Current Design
In its current design, Quora is most useful with a smaller community of users and connections. Before the influx of new users, each time I logged on I was presented with useful content that was relevant to me. It became an addicting experience browsing through the top questions and chiming in when appropriate. Even Irene Au, Google's head of design, praised how successful Quora's design is, "not only visually, but also for interaction in terms how they built in mechanics for ensuring high quality content." Its current design is perfect for presenting quality content among a smaller community of users.
As more users recently signed up and began contributing to Quora, there has been more noise and less value. Of course, some of the features on the site, such as up-voting of answers, are meant to present the best possible answers, and yet the news feed seems to be getting flooded with up-voted answers. And this tweet from The New York Times's Patrick LaForge may not be a unique frustration.
Though Quora's UI is great, it seems inadequate for handling a large user base and allowing people to discover relevant information amid a flood of content. As it grows in its current design, the quality of the user experience seems to decline.
### Lots of Competition
The big dogs also have the advantage of a large user base to which they can introduce new products. Quora is starting from the ground up. A Facebook user, for example, may be more likely to default to using Facebook's social questions product because all of their connections are already on the platform. Competition with the mainstream also means that Quora will be competing for user attention and user acquisition. It has already set itself apart by providing a quality community and relevant information to its users, but how long will that last and will it be able to scale with big competitors in the ring? As much as I'd like it to, I think it is unlikely.
What's your take on Quora? Will it scale to the size of top social networks? Does it even need to? Let us know in the comments below.
| true | true | true |
Why Quora Will Never Be as Big as Twitter
|
2024-10-12 00:00:00
|
2011-01-06 00:00:00
|
article
|
mashable.com
|
Mashable
| null | null |
|
9,837,307 |
http://motherboard.vice.com/blog/the-wall-of-sound
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
32,008,408 |
https://docs.rust-embedded.org/book/intro/index.html
|
The Embedded Rust Book
| null |
# Introduction
Welcome to The Embedded Rust Book: An introductory book about using the Rust Programming Language on "Bare Metal" embedded systems, such as Microcontrollers.
## Who Embedded Rust is For
Embedded Rust is for everyone who wants to do embedded programming while taking advantage of the higher-level concepts and safety guarantees the Rust language provides. (See also Who Rust Is For)
## Scope
The goals of this book are:
- Get developers up to speed with embedded Rust development. i.e. How to set up a development environment.
- Share *current* best practices about using Rust for embedded development. i.e. How to best use Rust language features to write more correct embedded software.
- Serve as a cookbook in some cases. e.g. How do I mix C and Rust in a single project?
This book tries to be as general as possible but to make things easier for both the readers and the writers it uses the ARM Cortex-M architecture in all its examples. However, the book doesn't assume that the reader is familiar with this particular architecture and explains details particular to this architecture where required.
## Who This Book is For
This book caters towards people with either some embedded background or some Rust background, however we believe everybody curious about embedded Rust programming can get something out of this book. For those without any prior knowledge we suggest you read the "Assumptions and Prerequisites" section and catch up on missing knowledge to get more out of the book and improve your reading experience. You can check out the "Other Resources" section to find resources on topics you might want to catch up on.
### Assumptions and Prerequisites
- You are comfortable using the Rust Programming Language, and have written, run, and debugged Rust applications on a desktop environment. You should also be familiar with the idioms of the 2018 edition as this book targets Rust 2018.
- You are comfortable developing and debugging embedded systems in another language such as C, C++, or Ada, and are familiar with concepts such as:
  - Cross Compilation
  - Memory Mapped Peripherals
  - Interrupts
  - Common interfaces such as I2C, SPI, Serial, etc.
### Other Resources
If you are unfamiliar with anything mentioned above or if you want more information about a specific topic mentioned in this book you might find some of these resources helpful.
Topic | Resource | Description |
---|---|---|
Rust | Rust Book | If you are not yet comfortable with Rust, we highly suggest reading this book. |
Rust, Embedded | Discovery Book | If you have never done any embedded programming, this book might be a better start |
Rust, Embedded | Embedded Rust Bookshelf | Here you can find several other resources provided by Rust's Embedded Working Group. |
Rust, Embedded | Embedonomicon | The nitty gritty details when doing embedded programming in Rust. |
Rust, Embedded | embedded FAQ | Frequently asked questions about Rust in an embedded context. |
Rust, Embedded | Comprehensive Rust 🦀: Bare Metal | Teaching material for a 1-day class on bare-metal Rust development |
Interrupts | Interrupt | - |
Memory-mapped IO/Peripherals | Memory-mapped I/O | - |
SPI, UART, RS232, USB, I2C, TTL | Stack Exchange about SPI, UART, and other interfaces | - |
### Translations
This book has been translated by generous volunteers. If you would like your translation listed here, please open a PR to add it.
## How to Use This Book
This book generally assumes that you’re reading it front-to-back. Later chapters build on concepts in earlier chapters, and earlier chapters may not dig into details on a topic, revisiting the topic in a later chapter.
This book will be using the STM32F3DISCOVERY development board from STMicroelectronics for the majority of the examples contained within. This board is based on the ARM Cortex-M architecture, and while basic functionality is the same across most CPUs based on this architecture, peripherals and other implementation details of Microcontrollers are different between different vendors, and often even different between Microcontroller families from the same vendor.
For this reason, we suggest purchasing the STM32F3DISCOVERY development board for the purpose of following the examples in this book.
## Contributing to This Book
The work on this book is coordinated in this repository and is mainly developed by the resources team.
If you have trouble following the instructions in this book or find that some section of the book is not clear enough or hard to follow then that's a bug and it should be reported in the issue tracker of this book.
Pull requests fixing typos and adding new content are very welcome!
## Re-using this material
This book is distributed under the following licenses:
- The code samples and free-standing Cargo projects contained within this book are licensed under the terms of both the MIT License and the Apache License v2.0.
- The written prose, pictures and diagrams contained within this book are licensed under the terms of the Creative Commons CC-BY-SA v4.0 license.
TL;DR: If you want to use our text or images in your work, you need to:
- Give the appropriate credit (i.e. mention this book on your slide, and provide a link to the relevant page)
- Provide a link to the CC-BY-SA v4.0 licence
- Indicate if you have changed the material in any way, and make any changes to our material available under the same licence
Also, please do let us know if you find this book useful!
| true | true | true | null |
2024-10-12 00:00:00
|
2018-01-01 00:00:00
| null | null | null | null | null | null |
11,287,462 |
https://github.com/Hactar-js/hactar
|
GitHub - Hactar-js/hactar: The solution to JavaScript Fatigue. Zero config dev
|
Hactar-js
|
*warning*: Hactar is in the very early alpha stages, and since it is a tool that modifies your code it is very important you run it against stuff that has backups. It likely won't make anything explode, but you could lose work.
Hactar is the solution to JavaScript Fatigue. Hactar configures build tools, installs dependencies, adds imports, creates tests, etc., all automatically. There are no boilerplates to clone, no generators to run, and no build tools to configure. To use Hactar you simply start writing code and Hactar figures out what you want to do and the best practices to make it happen. Start writing ES6 and it will add Babel, start writing Sass and it will add node-sass, import an image and it will add Webpack, etc.
No more starting projects with configuration and boilerplate: Hactar lets you start writing code immediately.
Hactar can currently;
- Automatically install dependencies
- Detect ES6 and add Babel transpilation
- Detect experimental ES6 features and configure Babel presets like stage-0
- Automatically detect React and add babel-react plugins
Hactar does this all without any interaction from you. Hactar parses your code, figures out what you are coding, then installs, configures, and writes code to make it work. You start writing code and Hactar does the rest.
Here is a screencast for the visual learners among us
A typical Hactar workflow looks like this;
Run Hactar:
```
$ hactar -p hactar-babel
initiating npm
name: (testcats)
...
hactar is running
```
Now start coding:
```
$ touch src/index.js
$ atom .
```
```
import React from 'react';
import Button from 'react-toolbox/lib/button';
const CustomButton = () => (
<Button label="Hello world" raised accent />
);
export default CustomButton;
```
Hactar will parse the code and detect the usage of ES6, React, and react-toolbox:
```
installing babel
configuring babel with es2015
installing react
installing react-toolbox
```
- Installation
- Usage
- The Principles of Hactar
- Presets and Plugins
- How Hactar Works
- Documentation
- License
- Support
Install globally using:
`$ npm install -g hactar`
Hactar is designed to have almost no interaction. There are no generators you can execute nor things to configure. To use Hactar, you simply run the `hactar`
command. The only option available to you is `--plugins`
, which you can use to install various Hactar plugins:
`$ hactar --plugins plugin-name,plugin-name`
You can also install a plugin simply by adding it to your dependencies (which is what the --plugins argument does)
`$ npm install --save-dev hactar-babel`
Hactar is not a boilerplate and it is not a scaffolder. You don't have to run Hactar every time you need to create a new *thing* with a new *thing*. If something is gonna need tests, Hactar will figure it out through parsing, no need for you to tell it. And when conventions change, Hactar will automatically refactor your code using codemods; no interaction from you.
Hactar plugins are simple ES6 generator functions so you already know how to write them. There are no unfamiliar models like streams, transforms, pipes etc to learn. Writing Hactar plugins feels as productive as writing shell scripts but better. You can code plugins for Hactar while you work on your projects -- building solutions to fatigues as they occur.
There are many solutions to JavaScript Fatigue, but most require you to *adopt* them: without the solution, your code becomes unusable. And if you want someone else to contribute to the code, they now need to learn the tool and its ecosystem.
When your solution to fatigue is a dependency the solution can become the fatigue.
Because Hactar simply writes code, your code is not dependent on it. Nothing Hactar does is dependent on Hactar to work. No one contributing to your code even need know Hactar exists. Hactar is transparent and designed to fade into the background. It is just another coder on your team -- one you pay with CPU. If Hactar stops being useful you can simply fire it.
Hactar is immediately beneficial today. Hactar is oriented towards tiny plugins that do one thing well (for example, adding Babel support). You don't need a ton of plugins for it to come together and work for you; it has a ton of little things that make your life better now. Too many solutions to fatigue are "all or nothing" propositions that require huge wins before the little wins. How many have set out to solve their fatigue only to realize six months later that the ecosystem has changed too much, making their solution useless, or that it was too ambitious, so they give up and return to what works well enough? Hactar is not like that: it comes with little wins today and can be grown to be so much more. Hactar evolves fast and is designed to be changeable and hackable, even while you work on your projects. Every plugin is designed to improve your coding experience in some tiny way, whether it is extracting tests from comments or automatically adding a preset to Babel. It is always useful now, not later.
Hactar currently has the following plugins;
- hactar-auto-install A plugin that parses your imports and automatically installs missing dependencies.
- hactar-babel Provides all the Babel plugins that do things like configure the ES2015 preset, detect stage-0 features, React, etc.
You can find all the existing Hactar plugins by searching for *hactar* on npm
There are four parts to Hactar;
- A filesystem watcher (uses chokidar)
- A CSP-like Flux dispatcher + Redux store
- Generator functions and reducers, which make up the plugins.
- Parsers and codemods. Most plugins in Hactar make use of a JS parser such as Espree and codemod tools like jscodeshift
Every plugin in Hactar receives all the actions and can dispatch actions to all other plugins via a channel. Plugins are split into two parts;
- Reducers which can be used to store state
- Sagas that can be used to dispatch actions and make asynchronous modifications to the codebase
Sagas are generator functions that run on a loop for as long as Hactar is running.
A plugin that adds an index.js file when Hactar is loaded would look like this:
```
import { put } from 'js-csp'
function* saga(action, ch) {
if(action.type == 'INITIALIZE') {
// Dispatch an ADD_FILE action for an addFile plugin to pick up
yield put(ch, {type: 'ADD_FILE', name: 'index.js', contents: `console.log('Hello World!')`})
}
}
export { saga }
```
And we could handle storing state and getting state by doing the following
```
import { put } from 'js-csp'

// the reducer stores this plugin's state in response to actions
const reducer = (state, action) => {
  switch (action.type) {
    case 'DOGS_R_AWESOME':
      return {
        ...state,
        dogs: 'Are Awesome'
      }
    default:
      return state
  }
}

// the saga reads the current state and dispatches new actions onto the channel
function* saga(action, ch, getState) {
  if(getState().hasDogs) {
    yield put(ch, {type: 'DOGS_R_AWESOME'})
  }
}

export { reducer, saga }
```
Hactar is designed to be insanely easy to make plugins for. The hope is that this will encourage you to solve fatigues when you experience them and not later *when you can get around to it*. If something annoys you and you feel it could be automated away it shouldn't take learning a new ecosystem to write a solution, it should just be a matter of coding a solution. If you can write ES6 code you can write a Hactar plugin. I feel very strongly that the process for automating something should be *write code* and not install x, configure y, read the docs on z, and cuss at...
See the documentation for more examples
More documentation coming soon!
ISC
If you found this repo useful please consider supporting me on Gratipay, sending me some bitcoin `1csGsaDCFLRPPqugYjX93PEzaStuqXVMu`
, or giving me lunch money via Cash.me/$k2052 or paypal.me/k2052
| true | true | true |
The solution to JavaScript Fatigue. Zero config dev - Hactar-js/hactar
|
2024-10-12 00:00:00
|
2016-03-14 00:00:00
|
https://opengraph.githubassets.com/204b5a335536791ce4a61c16a07f18df9ac857b69fcd294eb714d29b5bbcb0f3/Hactar-js/hactar
|
object
|
github.com
|
GitHub
| null | null |
36,354,835 |
https://overreacted.io/goodbye-clean-code/
|
Goodbye, Clean Code
| null |
# Goodbye, Clean Code
January 11, 2020
It was a late evening.
My colleague had just checked in the code that they’d been writing all week. We were working on a graphics editor canvas, and they implemented the ability to resize shapes like rectangles and ovals by dragging small handles at their edges.
The code worked.
But it was repetitive. Each shape (such as a rectangle or an oval) had a different set of handles, and dragging each handle in different directions affected the shape’s position and size in a different way. If the user held Shift, we’d also need to preserve proportions while resizing. There was a bunch of math.
The code looked something like this:
```
let Rectangle = {
resizeTopLeft(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
resizeTopRight(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
resizeBottomLeft(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
resizeBottomRight(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
};
let Oval = {
resizeLeft(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
resizeRight(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
resizeTop(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
resizeBottom(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
};
let Header = {
resizeLeft(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
resizeRight(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
}
let TextBlock = {
resizeTopLeft(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
resizeTopRight(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
resizeBottomLeft(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
resizeBottomRight(position, size, preserveAspect, dx, dy) {
// 10 repetitive lines of math
},
};
```
That repetitive math was really bothering me.
It wasn’t *clean*.
Most of the repetition was between similar directions. For example, `Oval.resizeLeft()`
had similarities with `Header.resizeLeft()`
. This was because they both dealt with dragging the handle on the left side.
The other similarity was between the methods for the same shape. For example, `Oval.resizeLeft()`
had similarities with the other `Oval`
methods. This was because they all dealt with ovals. There was also some duplication between `Rectangle`
, `Header`
, and `TextBlock`
because text blocks *were* rectangles.
I had an idea.
We could *remove all duplication* by grouping the code like this instead:
```
let Directions = {
top(...) {
// 5 unique lines of math
},
left(...) {
// 5 unique lines of math
},
bottom(...) {
// 5 unique lines of math
},
right(...) {
// 5 unique lines of math
},
};
let Shapes = {
Oval(...) {
// 5 unique lines of math
},
Rectangle(...) {
// 5 unique lines of math
},
}
```
and then composing their behaviors:
```
let {top, bottom, left, right} = Directions;
function createHandle(directions) {
// 20 lines of code
}
let fourCorners = [
createHandle([top, left]),
createHandle([top, right]),
createHandle([bottom, left]),
createHandle([bottom, right]),
];
let fourSides = [
createHandle([top]),
createHandle([left]),
createHandle([right]),
createHandle([bottom]),
];
let twoSides = [
createHandle([left]),
createHandle([right]),
];
function createBox(shape, handles) {
// 20 lines of code
}
let Rectangle = createBox(Shapes.Rectangle, fourCorners);
let Oval = createBox(Shapes.Oval, fourSides);
let Header = createBox(Shapes.Rectangle, twoSides);
let TextBox = createBox(Shapes.Rectangle, fourCorners);
```
The code is half the total size, and the duplication is gone completely! So *clean*. If we want to change the behavior for a particular direction or a shape, we could do it in a single place instead of updating methods all over the place.
It was already late at night (I got carried away). I checked in my refactoring to master and went to bed, proud of how I untangled my colleague’s messy code.
## The Next Morning
… did not go as expected.
My boss invited me for a one-on-one chat where they politely asked me to revert my change. I was aghast. The old code was a mess, and mine was *clean*!
I begrudgingly complied, but it took me years to see they were right.
## It’s a Phase
Obsessing with “clean code” and removing duplication is a phase many of us go through. When we don’t feel confident in our code, it is tempting to attach our sense of self-worth and professional pride to something that can be measured. A set of strict lint rules, a naming schema, a file structure, a lack of duplication.
You can’t automate removing duplication, but it *does* get easier with practice. You can usually tell whether there’s less or more of it after every change. As a result, removing duplication feels like improving some objective metric about the code. Worse, it messes with people’s sense of identity: *“I’m the kind of person who writes clean code”*. It’s as powerful as any sort of self-deception.
Once we learn how to create abstractions, it is tempting to get high on that ability, and pull abstractions out of thin air whenever we see repetitive code. After a few years of coding, we see repetition *everywhere* — and abstracting is our new superpower. If someone tells us that abstraction is a *virtue*, we’ll eat it. And we’ll start judging other people for not worshipping “cleanliness”.
I see now that my “refactoring” was a disaster in two ways:
-
Firstly, I didn’t talk to the person who wrote it. I rewrote the code and checked it in without their input. Even if it
*was* an improvement (which I don’t believe anymore), this is a terrible way to go about it. A healthy engineering team is constantly *building trust*. Rewriting your teammate’s code without a discussion is a huge blow to your ability to effectively collaborate on a codebase together.
Secondly, nothing is free. My code traded the ability to change requirements for reduced duplication, and it was not a good trade. For example, we later needed many special cases and behaviors for different handles on different shapes. My abstraction would have to become several times more convoluted to afford that, whereas with the original “messy” version such changes stayed easy as cake.
Am I saying that you should write “dirty” code? No. I suggest to think deeply about what you mean when you say “clean” or “dirty”. Do you get a feeling of revolt? Righteousness? Beauty? Elegance? How sure are you that you can name the concrete engineering outcomes corresponding to those qualities? How exactly do they affect the way the code is written and modified?
I sure didn’t think deeply about any of those things. I thought a lot about how the code *looked* — but not about how it *evolved* with a team of squishy humans.
Coding is a journey. Think how far you came from your first line of code to where you are now. I reckon it was a joy to see for the first time how extracting a function or refactoring a class can make convoluted code simple. If you find pride in your craft, it is tempting to pursue cleanliness in code. Do it for a while.
But don’t stop there. Don’t be a clean code zealot. Clean code is not a goal. It’s an attempt to make some sense out of the immense complexity of systems we’re dealing with. It’s a defense mechanism when you’re not yet sure how a change would affect the codebase but you need guidance in a sea of unknowns.
Let clean code guide you. **Then let it go.**
| true | true | true |
Let clean code guide you. Then let it go.
|
2024-10-12 00:00:00
|
2020-01-11 00:00:00
| null | null | null | null | null | null |
8,939,660 |
http://www.bbc.com/future/story/20150122-the-secret-to-immortality
|
Back-up brains: The era of digital immortality
|
Simon Parkin
|
# Back-up brains: The era of digital immortality
**How do you want to be remembered? As Simon Parkin discovers, we may eventually be able to preserve our entire minds for generations to come – would you?**
A few months before she died, my grandmother made a decision.
Bobby, as her friends called her (theirs is a generation of nicknames), was a farmer’s wife who not only survived World War II but also found in it justification for her natural hoarding talent. ‘Waste not, want not’ was a principle she lived by long after England recovered from a war that left it buckled and wasted. So she kept old envelopes and bits of cardboard cereal boxes for note taking and lists. She kept frayed blankets and musty blouses from the 1950s in case she needed material to mend. By extension, she was also a meticulous chronicler. She kept albums of photographs of her family members. She kept the airmail love letters my late grandfather sent her while he travelled the world with the merchant navy in a box. Her home was filled with the debris of her memories.
Yet in the months leading up to her death, the emphasis shifted from hoarding to sharing. Every time I visited my car would fill with stuff: unopened cartons of orange juice, balls of fraying wool, damp, antique books, empty glass jars. All things she needed to rehome now she faced her mortality. The memories too began to move out. She sent faded photographs to her children, grandchildren and friends, as well as letters containing vivid paragraphs detailing some experience or other.
On 9 April, the afternoon before the night she died, she posted a letter to one of her late husband’s old childhood friends. In the envelope she enclosed some photographs of my grandfather and his friend playing as young children. “You must have them,” she wrote to him. It was a demand but also a plea, perhaps, that these things not be lost or forgotten when, a few hours later, she slipped away in her favourite armchair.
The hope that we will be remembered after we are gone is both elemental and universal. The poet Carl Sandburg captured this common feeling in his 1916 poem Troths:
Yellow dust on a bumblebee’s wing,
Grey lights in a woman’s asking eyes,
Red ruins in the changing sunset embers:
I take you and pile high the memories.
Death will break her claws on some I keep.
* *
It is a wishful tribute to the potency of memories. The idea that a memory could prove so enduring that it might grant its holder immortality is a romantic notion that could only be held by a young poet, unbothered by the aches and scars of age.
Nevertheless, while Sandburg’s memories failed to save him, they *survived* him. Humans have, since the first paintings scratched on cave walls, sought to confound the final vanishing of memory. Oral history, diary, memoir, photography, film and poetry: all tools in humanity’s arsenal in the war against time’s whitewash. Today we bank our memories onto the internet’s enigmatic servers, those humming vaults tucked away in the cooling climate of the far North or South. There’s the Facebook timeline that records our most significant life events, the Instagram account on which we store our likeness, the Gmail inbox that documents our conversations, and the YouTube channel that broadcasts how we move, talk or sing. We collect and curate our memories more thoroughly than ever before, in every case grasping for a certain kind of immortality.
Is it enough? We save what we believe to be important, but what if we miss something crucial? What if some essential context to our words or photographs is lost? How much better it would be to save everything, not only the written thoughts and snapped moments of life, but the entire mind: everything we know and all that we remember, the love affairs and heartbreaks, the moments of victory and of shame, the lies we told and the truths we learned. If you could save your mind like a computer’s hard drive, would you? It’s a question some hope to pose to us soon. They are the engineers working on the technology that will be able create wholesale copies of our minds and memories that live on after we are burned or buried. If they succeed, it promises to have profound, and perhaps unsettling, consequences for the way we live, who we love and how we die.
**Carbon copy**
I keep my grandmother’s letters to me in a folder by my desk. She wrote often and generously. I also have a photograph of her in my kitchen on the wall, and a stack of those antique books, now dried out, still unread. These are the ways in which I remember her and her memories, saved in hard copy. But could I have done more to save her?
San Franciscan Aaron Sunshine’s grandmother also passed away recently. “One thing that struck me is how little of her is left,” the 30-year-old tells me. “It’s just a few possessions. I have an old shirt of hers that I wear around the house. There's her property but that's just faceless money. It has no more personality than any other dollar bill.” Her death inspired Sunshine to sign up with Eterni.me, a web service that seeks to ensure that a person’s memories are preserved after their death online.
It works like this: while you’re alive you grant the service access to your Facebook, Twitter and email accounts, upload photos, geo-location history and even Google Glass recordings of things that you have seen. The data is collected, filtered and analysed before it’s transferred to an AI avatar that tries to emulate your looks and personality. The avatar learns more about you as you interact with it while you’re alive, with the aim of more closely reflecting you as time progresses.
“It’s about creating an interactive legacy, a way to avoid being totally forgotten in the future,” says Marius Ursache, one of Eterni.me’s co-creators. “Your grand-grand-children will use it instead of a search engine or timeline to access information about you – from photos of family events to your thoughts on certain topics to songs you wrote but never published.” For Sunshine, the idea that he might be able to interact with a legacy avatar of his grandmother that reflected her personality and values is comforting. “I dreamt about her last night,” he says. “Right now a dream is the only way I can talk to her. But what if there was a simulation? She would somehow be less gone from my life.”
While Ursache has grand ambitions for the Eterni.me service (“it could be a virtual library of humanity”), the technology is still in its infancy. He estimates that subscribers will need to interact with their avatars for decades for the simulation to become as accurate as possible. He’s already received many messages from terminally ill patients who want to know when the service will be available – whether they can record themselves in this way before they die. “It’s difficult to reply to them, because the technology may take years to build to a level that’s useable and offers real value,” he says. But Sunshine is optimistic. “I have no doubt that someone will be able to create good simulations of people's personalities with the ability to converse satisfactorily,” he says. “It could change our relationship with death, providing some noise where there is only silence. It could create truer memories of a person in the place of the vague stories we have today.”
It could, I suppose. But what if the company one day goes under? As the servers are switched off, the people it homes would die a second death.
As my own grandmother grew older, some of her memories retained their vivid quality; each detail remained resolute and in place. Others became confused: the specifics shifted somehow in each retelling. Eterni.me and other similar services counter the fallibility of human memory; they offer a way to fix the details of a life as time passes. But any simulation is a mere approximation of a person and, as anyone who has owned a Facebook profile knows, the act of recording one’s life on social media is a selective process. Details can be tweaked, emphases can be altered, entire relationships can be erased if it suits one’s current circumstances. We often give, in other words, an unreliable account of ourselves.
**Total recall**
What if, rather than simply picking and choosing what we want to capture in digital form, it was possible to record the contents of a mind in their entirety? This work is neither science fiction nor the niche pursuit of unreasonably ambitious scientists. Theoretically, the process would require three key breakthroughs. Scientists must first discover how to preserve, non-destructively, someone's brain upon their death. Then the content of the preserved brain must be analysed and captured. Finally, that capture of the person’s mind must be recreated on a simulated human brain.
First, we must create an artificial human brain on which a back-up of a human’s memories would be able to ‘run’. Work in the area is widespread. MIT runs a course on the emergent science of ‘connectomics’, the work to create a comprehensive map of the connections in a human brain. The US Brain project is working to record brain activity from millions of neurons while the EU Brain project tries to build integrated models from this activity.
Anders Sandberg from the Future of Humanity Institute at Oxford University, who in 2008 wrote a paper titled Whole Brain Emulation: A Roadmap, describes these projects as “stepping stones” towards being able to fully able to emulate the human brain.
“The point of brain emulation is to recreate the function of the original brain: if ‘run’ it will be able to think and act as the original,” he says. Progress has been slow but steady. “We are now able to take small brain tissue samples and map them in 3D. These are at exquisite resolution, but the blocks are just a few microns across. We can run simulations of the size of a mouse brain on supercomputers – but we do not have the total connectivity yet. As methods improve I expect to see automatic conversion of scanned tissue into models that can be run. The different parts exist, but so far there is no pipeline from brains to emulations.”
Investment in the area appears to be forthcoming, however. Google is heavily invested in brain emulation. In December 2012 the company appointed Ray Kurzweil as its director of engineering on the Google Brain project, which aims to mimic aspects of the human brain. Kurzweil, a divisive figure, is something of a figurehead for a community of scientists who believe that it will be possible to create a digital back-up of a human brain within their lifetime. A few months later, the company hired Geoff Hinton, a British computer scientist who is one of the world's leading experts on neural networks, essentially the circuitry of how the human mind thinks and remembers.
Google is not alone, either. In 2011 a Russian entrepreneur, Dmitry Itskov, founded ‘The 2045 Initiative’, named after Kurzweil’s prediction that the year 2045 will mark the point at which we’ll be able to back up our minds to the cloud. While the fruits of all this work are, to date, largely undisclosed, the effort is clear.
Neuroscientist Randal Koene, science director for the 2045 Initiative, is adamant that creating a working replica of a human brain is within reach. “The development of neural prostheses already demonstrate that running functions of the mind is possible,” he says. It’s not hyperbole. Ted Berger, a professor at the University of Southern California’s Center for Neuroengineering, has managed to create a working prosthetic of the hippocampus part of the brain. In 2011 a proof-of-concept hippocampal prosthesis was successfully tested in live rats and, in 2012, the prosthetic was successfully tested in non-human primates. Berger and his team intend to test the prosthesis in humans this year, demonstrating that we are already able to recreate some parts of the human brain.
**Memory dump**
Emulating a human brain is one thing, but creating a digital record of a human’s memories is a different sort of challenge. Sandberg is cynical of whether this simplistic process is viable. “Memories are not neatly stored like files on a computer to create a searchable index,” he says. “Memory consists of networks of associations that are activated when we remember. A brain emulation would require a copy of them all.”
Indeed, humans reconstruct information from multiple parts of the brain in ways that are shaped by our current beliefs and biases, all of which change over time. These conclusions appear at odds with any effort to store memories in the same way that a computer might record data for easy access. It is an idea based on, as one sceptic I spoke to (who wished to remain anonymous) put it, “the wrong and old-fashioned ‘possession’ view of memory”.
There is also the troubling issue of how to extract a person’s memories without destroying the brain in the process. “I am sceptical of the idea that we will be able to accomplish non-destructive scanning,” says Sandberg. “All methods able to scan neural tissue at the required high resolution are invasive, and I suspect this will be very hard to achieve without picking apart the brain.” Nevertheless, the professor believes a searchable, digital upload of a specific individual’s memory could be possible so long as you were able to “run” the simulated brain in its entirety.
“I think there is a good chance that it could work in reality, and that it could happen this century,” he says. “We might need to simulate everything down to the molecular level, in which case the computational demands would simply be too large. It might be that the brain uses hard-to-scan data like quantum states (an idea believed by some physicists but very few neuroscientists), that software cannot be conscious or do intelligence (an idea some philosophers believe but few computer scientists), and so on. I do not think these problems apply, but it remains to be seen if I am right.”
If it could be done, then, what would preserving a human mind mean for the way we live?
Some believe that there could be unanticipated benefits, some of which can make the act of merely extending a person’s life for posterity seem rather plain by comparison. For example, David Wood, chairman of the London Futurists, argues that a digital back-up of a person’s mind could be studied, perhaps providing breakthroughs in understanding the way in which human beings think and remember.
And if a mind could be digitally stored while a person was still alive then, according to neuroscientist Andrew A Vladimirov, it might be possible to perform psychoanalysis using such data. “You could run specially crafted algorithms through your entire life sequence that will help you optimise behavioural strategies,” he says.
Yet there’s also an unusual set of moral and ethical implications to consider, many of which are only just beginning to be revealed. “In the early stages the main ethical issue is simply broken emulations: we might get entities that are suffering in our computers,” says Sandberg. “There are also going to be issues of volunteer selection, especially if scanning is destructive.” Beyond the difficulty of recruiting people who are willing to donate their minds in such a way, there is the more complicated issue of what rights an emulated mind would enjoy. “Emulated people should likely have the same rights as normal people, but securing these would involve legislative change,” says Sandberg. “There might be the need for new kinds of rights too. For example, the right for an emulated human to run in real-time so that they can participate in society.”
Defining the boundaries of a person’s privacy is already a pressing issue for humanity in 2015, where third-party corporations and governments hold more insight into our personal information than ever before. For an emulated mind, privacy and ownership of data becomes yet more complicated. “Emulations are vulnerable and can suffer rather serious breaches of privacy and integrity,” says Sandberg. He adds, in a line that could be lifted from a Philip K Dick novel: “We need to safeguard their rights”. By way of example, he suggests that lawmakers would need to consider whether it should be possible to subpoena memories.
**Property laws**
“Ownership of specific memories is where things become complex,” says Koene. “In a memoir you can choose which memories are recorded. But if you don't have the power of which of your memories others can inspect it becomes a rather different question.” Is it a human right to be able to keep secrets?
These largely un-interrogated questions also begin to touch on more fundamental issues of what it means to be human. Would an emulated brain be considered human and, if so, does the humanity exist in the memories or the hardware on which the simulated brain runs? If it's the latter, there’s the question of who owns the hardware: an individual, a corporation or the state? If an uploaded mind requires certain software to run (a hypothetical Google Brain, for example) the ownership of the software license could become contentious.
The knowledge that one’s brain is to be recorded in its entirety might also lead some to behave differently during life. “I think it would have the same effect as knowing your actions will be recorded on camera,” says Sandberg. “In some people this knowledge leads to a tendency to conform to social norms. In others it produces rebelliousness. If one thinks that one will be recreated as a brain emulation then it is equivalent to expecting an extra, post-human life.”
Even if it were possible to digitally record the contents and psychological contours of the human mind, there are undeniably deep and complicated implications. But beyond this, there is the question of whether this is something that any of us truly want. Humans long to preserve their memories (or, in some cases, to forget them) because they remind us of who we are. If our memories are lost we cease to know who we were, what we accomplished, what *it all meant*. But at the same time, we tweak and alter our memories in order to create the narrative of our lives that fits us at any one time. To have everything recorded with equal weight and importance might not be useful, either to us or to those who follow us.
Where exactly is the true worth of the endeavour? Could it actually be the comforting knowledge for a person that they, to one degree or other, won’t be lost without trace? The survival instinct is common to all life: we eat, we sleep, we fight and, most enduringly, we reproduce. Through our descendants we reach for a form of immortality, a way to live on beyond our physical passing. All parents take part in a grand relay race through time, passing the gene baton on and on through the centuries. Our physical traits – those eyes, that hair, this temperament – endure in some diluted or altered form. So too, perhaps, do our metaphysical attributes (“what will survive of us is love,” as Philip Larkin tentatively put it in his 1956 poem, ‘An Arundel Tomb’). But it is the mere echo of immortality. Nobody lives forever; with death only the fading shadow of our life remains. There are the photographs of us playing as children. There are the antique books we once read. There is the blouse we once wore.
I ask Sunshine why he wants his life to be recorded in this way. “To be honest, I'm not sure,” he says. “The truly beautiful things in my life such as the parties I've thrown, the sex I've had, the friendships I’ve enjoyed. All of these things are too ephemeral to be preserved in any meaningful way. A part of me wants to build monuments to myself. But another part of me wants to disappear completely.” Perhaps that is true of us all: the desire to be remembered, but only the parts of us that we hope will be remembered. The rest can be discarded.
Despite my own grandmother’s careful distribution of her photographs prior to her death, many remained in her house. These eternally smiling, fading unknown faces evidently meant a great deal to her in life but now, without the framing context of her memories, they lost all but the most superficial meaning. In a curious way, they became a burden to those of us left behind.
My father asked my grandmother’s vicar (a kindly man who had been her friend for many years), what he should do with the pictures; to just throw the photographs away seemed somehow flippant and disrespectful. The vicar’s advice was simple. Take each photograph. Look at it carefully. In that moment you honour the person captured. Then you may discard of it and be free.
| true | true | true |
How do you want to be remembered? As Simon Parkin discovers, we may eventually be able to preserve our entire minds for generations to come – would you?
|
2024-10-12 00:00:00
|
2015-01-23 00:00:00
|
newsarticle
|
bbc.com
|
BBC
| null | null |
|
32,062,970 |
https://www.nytimes.com/2022/07/11/science/nasa-webb-telescope-images-livestream.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,545,324 |
https://www.youtube.com/watch?v=QNznD9hMEh0
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,179,347 |
http://blog.marketmesuite.com/how-to-simplify-your-social-posting-strategy/?src=hackernews
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
10,751,269 |
http://arstechnica.com/information-technology/2015/12/rsync-net-zfs-replication-to-the-cloud-is-finally-here-and-its-fast/
|
rsync.net: ZFS Replication to the cloud is finally here—and it’s fast
|
Jim Salter
|
In mid-August, the first commercially available ZFS cloud replication target became available at rsync.net. Who cares, right? As the service itself states, "If you're not sure what this means, our product is Not For You."
Of course, this product is for someone—and to those would-be users, this really *will* matter. Fully appreciating the new rsync.net (spoiler alert: it's pretty impressive!) means first having a grasp on basic data transfer technologies. And while ZFS replication techniques are burgeoning today, you must actually begin by examining the technology that ZFS is slowly supplanting.
## A love affair with rsync
Revisiting a first love of any kind makes for a romantic trip down memory lane, and that's what revisiting rsync—as in "rsync.net"—feels like for me. It's hard to write an article that's inevitably going to end up trashing the tool, because I've been wildly in love with it for more than 15 years. Andrew Tridgell (of Samba fame) first announced rsync publicly in June of 1996. He used it for three chapters of his PhD thesis three years later, about the time that I discovered and began enthusiastically using it. For what it's worth, the earliest record of my professional involvement with major open source tools—at least that I've discovered—is my activity on the rsync mailing list in the early 2000s.
Rsync is a tool for synchronizing folders and/or files from one location to another. Adhering to true Unix design philosophy, it's a *simple* tool to use. There is no GUI, no wizard, and you can use it for the most basic of tasks without being hindered by its interface. But somewhat rare for any tool, in my experience, rsync is also very elegant. It makes a task which is humanly intuitive *seem* simple despite being objectively complex. In common use, rsync looks like this:
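A minimal example of such an invocation (the paths and hostname below are placeholders; `-a`, `-v`, and `-z` are rsync's standard archive, verbose, and compress flags):

```
# sync a local folder to a remote host over SSH
# (source path, user, and hostname are placeholders)
rsync -avz /home/me/photos/ backup@backuphost:/backups/photos/
```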
| true | true | true |
Even an rsync-lifer admits ZFS replication and rsync.net are making data transfers better.
|
2024-10-12 00:00:00
|
2015-12-17 00:00:00
|
article
|
arstechnica.com
|
Ars Technica
| null | null |
|
11,168,227 |
https://royaljay.com/development/angular2-tutorial/
|
Angular2 Tutorial: How To Build Your First App With Angular2
|
Doug Ludlow
|
A lot has been written about Angular2 over the past year. Much of the information has quickly become outdated as the API continues to evolve and mature. This fast-paced change has created gaps in documentation, making it difficult for busy developers to stay up-to-date with the latest version of the framework.
I’ve been wanting to test out a number of new technologies; ASP.NET 5 (now ASP.NET Core 1.0), Angular2, TypeScript, and JSPM, to name a few.
Recently, I started a new project and had the opportunity to test out the Angular2 framework. There is always risk associated with using new technologies on a project – lack of up-to-date documentation, project configuration problems, and unforeseen bugs.
I encountered all of the above.
In this **Angular2 tutorial**, I’ll reveal the problems faced and lessons learned along the way so you are better positioned to have success with this exciting new technology.
### ASP.NET Core 1.0
I wanted to try the new ASP.NET, which combines MVC and Web API.
Also, I develop on a MacBook Pro in Parallels and because ASP.NET Core is cross platform, the idea of native development in OS X without having to spin up a Windows VM was enticing.
### Angular2/TypeScript
Upon starting the project, I knew I wanted to use Angular2, but wasn’t so sure about TypeScript. I’d heard a lot about the tool, but didn’t really see any practical reason to use it over regular plain old JavaScript. However, after finding examples all over the web while searching for the best way to get started with Angular2, and since Angular2 is built in TypeScript, I decided to give it a shot.
From using annotations (@Injectable()) to adding metadata to classes or class members, to using lambdas ((a) => a.foo === b;) instead of anonymous functions, there are so many great features and syntax to take advantage of.
### JSPM
Historically, the de facto standard for JavaScript dependency management has been NPM for dev/build dependencies and Bower for browser dependencies. But there has been a lot of innovation out there recently. More and more developers are starting to use npm3 for browser dependencies. The Angular2 team has decided they will only distribute via npm, not bower.
There are ways to make it work with bower. But, I decided to branch out and explore the other options.
From the official Angular2 site, there are various examples using SystemJS to load scripts. SystemJS is built and maintained by Guy Bedford. Guy has also been working on JSPM, a package manager for frontend dependencies that uses SystemJS. One of the benefits of using JSPM over Bower is that it handles resolving and loading dependencies for the user. Instead of having to rely on a complicated grunt/bower task to find, order, concatenate, minify and inject bower dependencies, JSPM maps the dependencies and relies on SystemJS to load it all, simplifying the frontend build process. More on this later.
### Visual Studio Code
I started out using Visual Studio 2015 for the development environment, but eventually found it was too slow with this particular configuration and eventually switched to Visual Studio Code. Code has come a long way recently. In fact, I’d recommend it over Visual Studio 2015. It wasn’t long after using Visual Studio Code that I switched the entire dev environment completely to OS X and powered it off a VM. This set up provides a lot more flexibility.
### Enough Talk, Show Me The Code!
Enough intro. Let’s dive into some code! (See the completed walkthrough on Github.)
### ASP.NET 5/Core 1 Setup
(Feel free to skip this section if ASP.NET is not your thing.)
Let’s get the backend up and running first. I’ll assume you already have ASP.NET 5/Core 1 and NPM installed and updated. If you don’t, see the ASP.NET Getting Started docs and the NPM docs.
First, install the ASP.NET yeoman generator (and yeoman if you don’t already have it):
> npm install -g yo generator-aspnet
Next generate a new project by following the prompts:
> yo aspnet _-----_ | | .--------------------------. |--(o)--| | Welcome to the | `---------´ | marvellous ASP.NET 5 | ( _´U`_ ) | generator! | /___A___\ '--------------------------' | ~ | __'.___.'__ ´ ` |° ´ Y ` ? What type of application do you want to create? Web API Application ? What's the name of your ASP.NET application? AspnetCore1Angular2Jspm create AspnetCore1Angular2Jspm/.gitignore create AspnetCore1Angular2Jspm/appsettings.json create AspnetCore1Angular2Jspm/Dockerfile create AspnetCore1Angular2Jspm/Startup.cs create AspnetCore1Angular2Jspm/project.json create AspnetCore1Angular2Jspm/Properties/launchSettings.json create AspnetCore1Angular2Jspm/Controllers/ValuesController.cs create AspnetCore1Angular2Jspm/wwwroot/README.md create AspnetCore1Angular2Jspm/wwwroot/web.config Your project is now created, you can use the following commands to get going cd "AspnetCore1Angular2Jspm" dnu restore dnu build (optional, build will also happen when it's run) dnx web
Next, cd into the generated project and restore its dependencies:
> cd AspnetCore1Angular2Jspm > dnu restore
Open up Visual Studio Code:
> code .
*Note: If `code .` doesn’t work for you, see Launching from the Command Line.*
Add a basic index.html to the wwwroot folder. Something like this:
```
<!doctype html>
<html>
<head>
    <meta charset="utf-8">
    <title>ASP.NET Core 1/Angular2/JSPM Sample</title>
</head>
<body>
    <p>Hello world!</p>
</body>
</html>
```
You need to tell ASP.NET to serve the file. Open up the Startup.cs and add the following line in the Configure method, anywhere **before** app.UseStaticFiles():
app.UseDefaultFiles();
Run the following and you should see a little “Hello world!” page when you navigate to http://localhost:5000.
> dnx web
*Note: To stop the `dnx web` command at any time, press `^C` (Ctrl + c).*
### JSPM/TypeScript/Angular2 Setup
The folder structure should look similar to this:
```
.
├── Controllers
│   └── ValuesController.cs
├── Dockerfile
├── Properties
│   └── launchSettings.json
├── Startup.cs
├── appsettings.json
├── project.json
├── project.lock.json
└── wwwroot
    ├── README.md
    ├── index.html
    └── web.config
```
Create a package.json by running the following and answering the prompts:
> npm init
Install JSPM. You’ll install it globally first, then install it locally as a dev dependency:
> npm install -g jspm > npm install --save-dev jspm
Configure JSPM by running `jspm init` and answering the prompts with the following responses:
> jspm init Would you like jspm to prefix the jspm package.json properties under jspm? [yes]: Enter server baseURL (public folder path) [./]:./wwwroot Enter jspm packages folder [wwwroot/jspm_packages]: Enter config file path [wwwroot/config.js]: Configuration file wwwroot/config.js doesn't exist, create it? [yes]: Enter client baseURL (public folder URL) [/]: Do you wish to use a transpiler? [yes]: Which ES6 transpiler would you like to use, Babel, TypeScript or Traceur? [babel]:typescript ok Verified package.json at package.json Verified config file at wwwroot/config.js Looking up loader files... system.js system.src.js system.js.map system-csp-production.js system-polyfills.js system-csp-production.js.map system-csp-production.src.js system-polyfills.src.js system-polyfills.js.map Using loader versions: [email protected] Looking up npm:typescript Updating registry cache... ok Installed typescript as npm:typescript@^1.6.2 (1.7.5) ok Loader files downloaded successfully
This will install the TypeScript compiler and anything else it needs in wwwroot/jspm_packages/ and map everything up in the wwwroot/config.js. I’ll explain how that works in a little bit.
*Note: You may want to add `jspm_packages/` to your `.gitignore`.*
For convenience, let’s add the following to the scripts property of the package.json. This will run `jspm install` any time you run `npm install`:
"postinstall": "jspm install"
Add a tsconfig.json file to wwwroot/:
```
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true
  },
  "exclude": [
    "jspm_packages"
  ]
}
```
You will need to install plugin-typescript, a plugin built by Frank Wallis that loads the TypeScript files:
> jspm install ts
Add the typescriptOptions and packages properties to the configuration in wwwroot/config.js, so that it looks like this:
```
System.config({
  baseURL: "/",
  defaultJSExtensions: true,
  transpiler: "typescript",
  typescriptOptions: {
    "tsconfig": true,
    "module": "system"
  },
  packages: {
    "app": {
      "defaultExtension": "ts",
      "meta": {
        "*.ts": {
          "loader": "ts"
        }
      }
    }
  },
  paths: {
    "npm:*": "jspm_packages/npm/*"
  },
  map: {
    ...
  }
});
```
Now, we’re finally ready to install Angular2:
> jspm install angular2
You may notice at this point the map property in wwwroot/config.js has blown up with hundreds of entries. The config.js is a file SystemJS uses to load your app’s dependencies as well as their own dependencies. When you install something with JSPM, it places it inside the jspm_packages folder and adds the path to the map property (along with any dependencies) so SystemJS knows where to look when the app requires an external module.
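For illustration, an entry in that map looks roughly like the following; the package name and version numbers here are made up for the example, not taken from an actual config.js:

```
System.config({
  // ...
  map: {
    // "import x from 'some-package'" resolves to a concrete,
    // versioned path inside jspm_packages
    "some-package": "npm:some-package@1.2.3",
    // the package's own dependencies are pinned in a nested entry
    "npm:some-package@1.2.3": {
      "its-dependency": "npm:its-dependency@4.5.6"
    }
  }
});
```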
*Pretty cool, huh?*
There is one thing left to do in the initial setup. Install the Angular2 dependencies as dependencies of your own. This gets a little tricky because you have to make sure the versions stay in sync with Angular2, otherwise two different copies will be downloaded and imported into the app.
Install reflect-metadata, rxjs, and zone.js. As of this writing, angular2 resolves to version 2.0.0-beta1 . [email protected], [email protected] and [email protected] were all installed along with it as dependencies. If you’re using [email protected], run the following command:
> jspm install [email protected] [email protected] [email protected]
Otherwise you may have to look up the exact versions in the config.js and adjust them before running the above command.
### Building the App
With setup complete, we’re ready to start building. Go ahead and create an app folder inside wwwroot. This is where all of the Angular components, directives and services will live.
Inside wwwroot/app/, create an app.component.ts. It will house the AppComponent, the entry-point component for the Angular2 app. Fill it with this:
```
import { Component } from 'angular2/core';

@Component({
  selector: 'my-app',
  template: '{{welcome}}'
})
export class AppComponent {
  welcome: string = 'Hello from Angular2!'
}
```
Create a boot.ts inside of wwwroot/app/ :
```
import 'reflect-metadata';

// workaround for https://github.com/angular/angular/issues/6007
import Zone from 'zone.js';
window.Zone = Zone;

import { bootstrap } from 'angular2/platform/browser';
import { AppComponent } from './app.component';

bootstrap(AppComponent);
```
This is where you’ll bootstrap the app. Notice the imports at the top for reflect-metadata and zone.js. These are required by Angular2. Take note of the workaround that adds zone.js to the global namespace. Keep it there or bad things will happen 🙂
Once the Angular2 code is complete, load it into the index.html. Replace the innards of the body tag with the following:
```
<!doctype html>
<html>
<head>
    <meta charset="utf-8">
    <title>ASP.NET Core 1/Angular2/JSPM Sample</title>
</head>
<body>
    <my-app></my-app>
    <script src="jspm_packages/system.js"></script>
    <script src="config.js"></script>
    <script>System.import('app/boot')</script>
</body>
</html>
```
Reload the page and you should see *“Hello from Angular2!”*!
### Optimize
You may notice that when the page is loaded the browser made something like 258 requests and downloaded 3.1 MB…
*“What the heck!”* you might be saying, *“I only have two files, app.component.ts and boot.ts!”*
Take a look at the requests and you’ll see typescript.js – the TypeScript compiler – which accounts for 1.8 MB of the download. The rest of the files are Angular. The browser is transpiling the TypeScript on the fly. Pretty cool, but *definitely* not something you want to do in production or really want to download and transpile each individual Angular2 file.
Here’s where the SystemJS **builder** comes in. It creates bundles you can load instead of lazy loading individual files. For development, create a bundle with arithmetic to include any external dependencies the app imports, but exclude any files within the app:
> jspm bundle app/**/* - [app/**/*] wwwroot/bundle.js --inject
The `--inject` flag will inject the list of bundled files into the config.js, so requests for those particular files will be intercepted and the bundle will be served instead.
In other words, instead of 258 requests, the browser will only make 21 requests!
For a production build, remove the arithmetic and let it bundle everything. It’s important to understand that transpiling happens as part of the bundling step and no longer in the browser. In fact, you can even pass a `--minify` flag to the CLI and it will minify the bundle for you.
> jspm bundle app/**/* wwwroot/bundle.js --minify --inject
It is also possible to use the JSPM bundler/builder with gulp/grunt if you’d like more flexibility. See the SystemJS builder documentation for more information.
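As a rough sketch, a gulp task wrapping systemjs-builder might look something like this (the task name and output path are arbitrary choices for the example, and it assumes `npm install --save-dev gulp systemjs-builder`):

```
// gulpfile.js
var gulp = require('gulp');
var Builder = require('systemjs-builder');

gulp.task('bundle', function () {
  // point the builder at the baseURL and the config.js generated by jspm init
  var builder = new Builder('wwwroot', 'wwwroot/config.js');

  // same module expression as the jspm CLI example above;
  // returning the promise lets gulp know when the bundle is done
  return builder.bundle('app/**/*', 'wwwroot/bundle.js', { minify: true });
});
```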
### Testing
There are a few important things to keep in mind when it comes to testing with JSPM and Angular2. The Angular2 team has been putting together a testing guide, which is a great reference, and still suggests using Karma and Jasmine for testing.
Go ahead and pull Karma and Jasmine down, along with the JSPM plugin for Karma and the PhantomJS2 launcher. I’m also going to throw in the Karma spec reporter:
> npm install --save-dev karma jasmine karma-jasmine karma-jspm karma-phantomjs2-launcher karma-spec-reporter
Create a karma.conf.js in the root of the project and make it look like this:
```
/* global module */
module.exports = function (config) {
    'use strict';

    config.set({
        basePath: './wwwroot',
        singleRun: true,
        frameworks: ['jspm', 'jasmine'],
        jspm: {
            // spec files are loaded into the browser as tests
            loadFiles: [
                'app/**/*.spec.ts'
            ],
            // app sources and tsconfig are served on demand by SystemJS
            serveFiles: [
                'app/**/*!(*.spec).ts',
                'tsconfig.json'
            ]
        },
        // Karma serves files under /base, so map the app's paths back to it
        proxies: {
            '/app': '/base/app',
            '/jspm_packages': '/base/jspm_packages',
            '/tsconfig.json': '/base/tsconfig.json'
        },
        reporters: ['spec'],
        browsers: ['PhantomJS2'],
    });
};
```
Replace the value of test in the scripts property of the package.json with the following:
node ./node_modules/karma/bin/karma start
Finally, add a test. Create an app.component.spec.ts in the wwwroot/app/ folder. Fill it with the following:
```
import { AppComponent } from './app.component';

describe('AppComponent', () => {
    let appComponent: AppComponent;

    beforeEach(() => {
        appComponent = new AppComponent();
    });

    it('has the correct welcome message', () => {
        expect(appComponent.welcome).toEqual('Hello from Angular2!');
    });
});
```
If you were to run npm test right now, you would get an error along the lines of **“Potentially unhandled rejection [3] reflect-metadata shim is required when using class decorators.”** This is because you just imported the required reflect-metadata shim in the boot.ts, not the app.component.ts.
But, you don’t want to import it in every test file. ~~Luckily, you can leverage SystemJS again.~~
~~Add a meta property to wwwroot/config.js:~~
meta: { "angular2/core": { "deps": [ "es6-shim", "reflect-metadata", "rxjs" ] } },
~~This tells SystemJS that any time angular2/core is requested, load the dependencies as well.~~
**Update (8 March 2016)**
I found out that this wasn’t working as expected and went another route to pull those dependencies in automatically.
Create a test.ts file in wwwroot/app/ and fill it with the following:
```
import 'reflect-metadata';
import 'es6-shim';
import 'zone.js';
```
Then include it in the loadFiles array before your tests in the karma.conf.js:
```
/* global module */
module.exports = function (config) {
    'use strict';

    config.set({
        ...
        jspm: {
            loadFiles: [
                'app/test.ts',
                'app/**/*.spec.ts'
            ],
            serveFiles: [
                'app/**/*!(*.spec).ts',
                'tsconfig.json'
            ]
        },
        ...
    });
};
```
We already installed reflect-metadata and zone.js earlier. Install es6-shim as well, but be sure to match the es6-shim version with the version that shows in wwwroot/config.js under npm:[email protected].*. For [email protected], [email protected] is the current version:
> jspm install [email protected]
Now Angular2’s dependencies will be loaded before your tests and you should be able to run them without issue.
Run npm test.
You should get a passed test:
    AppComponent
      ✓ has the correct welcome message

    PhantomJS 2.0.0 (Mac OS X 0.0.0): Executed 1 of 1 SUCCESS (0.002 secs / 0.001 secs)
    TOTAL: 1 SUCCESS
### Conclusion
Using newer technologies such as ASP.NET Core, Angular2, and JSPM definitely makes it feel like you’re on the fringe. I’m confident the development experience and productivity will improve, especially once the bugs are all worked out. Staying on top of all the amazing new technologies being released can be a challenge, but it’s great to be part of a community that solves problems, then documents and shares what it has learned for the common good.
### Update (7 March 2016)
Thanks to the valuable feedback provided, here are a few more edits required to ensure the app runs successfully.
Add the following line to the wwwroot/tsconfig.json so that SystemJS won’t try to load source maps:
{ "compilerOptions": { "target": "es5", "module": "commonjs", "emitDecoratorMetadata": true, "experimentalDecorators": true, "sourceMap": false }, "exclude": [ "jspm_packages" ] }
~~Also, modify the wwwroot/config.js by adding the prefix npm: to the angular/core in the meta property so that it looks like so:~~
    meta: {
      "npm:angular2/core": {
        "deps": [
          "es6-shim",
          "reflect-metadata",
          "rxjs"
        ]
      }
    },
~~So as to avoid something like the following error from cropping up:~~
    Error: XHR error (404 Not Found) loading http://localhost:5000/jspm_packages/npm/[email protected]/core
    Error loading http://localhost:5000/jspm_packages/npm/[email protected]/core as "../core"
      from http://localhost:5000/jspm_packages/npm/[email protected]/platform/browser.js
Big thank you to Maulik Patel!
### Updating Angular2
So this blog post is a few weeks old now and Angular2, as expected, has bumped its version a few times since. I thought I might as well update the app too.
This can be done easily through JSPM, like so:
> jspm install angular2
Don’t forget to update the dependencies we’ve installed; remember that we’ve gotta keep ’em in sync. At the time of this writing (7 March 2016), [email protected] relies on the following versions being installed:
> jspm install [email protected] [email protected]
And that’s it. Not too bad. Be sure to check out the Angular2 changelog for change details, and especially any breaking changes, so you can adjust your app’s code accordingly.
| true | true | true |
This tutorial will show you step-by-step how to build and launch an app with Angular2, JSPM and ASP.NET.
|
2024-10-12 00:00:00
|
2016-02-15 00:00:00
| null |
webpage
|
royaljay.com
|
RoyalJayTech
| null | null |
17,296,990 |
https://www.raptitude.com/2018/05/where-success-leaks/
|
The Hole Where All The Success Leaks Out
|
David Cain
|
Each of us has a few professional-level skills—usually ones relating to our jobs, or hobbies we’ve been trained in formally. But when it comes to everything else we do, we’re amateurs.
Being an amateur just means below pro level—you may do some aspects quite well, but you still muck things up that a professional never would. For example, there are dishes I can cook pretty well, but I’m no chef. I chronically overcook vegetables, serve things I haven’t tasted, and who knows what else that would make a proper chef cringe.
There’s a huge upside to being an amateur, however. On the excellent Farnam Street Blog, Shane Parrish dusted off a brilliant insight about effectiveness and expertise, which he found in an old “get better at tennis” book from the 1970s.
Shane isn’t particularly interested in tennis. Neither am I, and chances are you aren’t either. But this insight is so powerful and universally applicable that anyone could use it to drastically improve their performance at virtually anything—any job, any art, any sport, any skill at all.
The author of the book, Simon Ramo, was *very* interested in tennis, however. And he watched a lot of it, looking for what makes some players better than others. One thing he noticed is that the pros weren’t simply *better* than amateurs, but that they won their matches in a completely different way.
When pros meet, matches are decided mostly by a slight edge one player has over the other—in speed, awareness, or some other highly-trained quality. Their rallies go back and forth, both players knowing where they need to be, until one player puts the ball just beyond the other’s stroke. Pro players have all the fundamental elements mastered, and they win by being slightly better than their opponent at one or more of them.
When amateurs meet, they don’t edge each other out by being slightly more skillful. Instead, it’s a contest of who makes the fewest huge, gaping blunders. Amateurs constantly make egregious point- and game-losing mistakes, of the sort that pros no longer make. The outcome is decided by who makes the fewest—or least catastrophic—such mistakes.
And of course it works that way. Getting to the pros is a long, arduous process, one that filters out players with major flaws in their game. Coaches leap on those flaws as soon as they see them and drill them out of their athletes. Amateurs don’t go through this filtering process, so the flaws and bad habits remain, costing them bigtime every single time they play.
Everyone’s strategy, therefore—whether you want to win more amateur matches, or graduate to professional status—should be to identify and eliminate these big, costly rookie blunders, one by one. This is far more effective than getting quicker, hitting harder, or making that one brilliant shot now and then.
The same principle applies across the non-tennis world. Everything has its own list of classic amateur blunders:
**Poker**: slow-playing every good hand; betting without remembering your position
**Frugality**: buying lunch instead of packing one; trying to “tighten the belt” without tracking the numbers
**Meditation**: trying to silence mental chatter; only meditating when you’re in a good mood
**Office productivity**: leaving email notifications on; switching tasks when you hit a hard part
Just one of these blunders, made consistently, can undermine almost everything you’re doing right. Each is like a hole where your success leaks out. If the hole is big enough, or there are multiple holes, it’s hard to get anywhere beyond “struggling,” no matter how good you are at other parts of the game.
The good news: this also means that fixing even one such hole, or starting to, will make you immediately better, and not by just a little.
The boxer who stops dropping her right hand while throwing a jab becomes immediately and permanently better. The freelancer who quits trying to compete on price suddenly has a much easier time getting decent clients and interesting work.
Think about your best-honed skills, whatever they are: designing websites, selling cars, moving furniture, splitting logs, cooking omelets, forming concrete, tending bar, writing papers. Whenever you observe an amateur doing something you’re really good at, you will *always* spot major, self-defeating mistakes they are consistently making.
Amateur blunders take one of two forms: a tendency you’re aware of but don’t think is a big deal, or one you don’t even see. In both cases they’re essentially invisible to us until we either stumble across a better way, or (more often) a veteran points out the problem to us.
I’ve been lifting weights for almost three years now, and while I’m certainly fitter and better for it, I haven’t made very consistent progress. A determined beginner could start today and easily surpass me in six months.
I’ve coasted on this trajectory for a while, and only recently sat down with a pencil and paper to diagnose the problem. When I thought about what accomplished lifters do that I don’t, one thing jumped out: they definitely don’t miss as many workouts as I do. When I get busy, tired, or cranky, I frequently cut a workout short, or I “reschedule” it, and then it usually doesn’t happen at all.
This is a classic rookie mistake: trying to train without a consistent standard. I’m probably doing dumb things too but that’s a big one.
So I sketched up a very modest program, less than half of what I “normally” do, and then resolved to complete every rep of it for three months. The idea is to completely eliminate this one gaping blunder, my casual inconsistency, before trying to improve at the “top end” of things—personal bests, big volume, showy numbers.
Three weeks in, I haven’t missed a set, and the whole endeavor feels completely different. There’s no second-guessing, no bargaining with myself. I’m doing less work and advancing more quickly.
Now that I’m watching my consistency closely, I realize I had probably *never* completed ten consecutive workouts without skipping or shortening at least one of them.
That’s kind of embarrassing, but it’s also a hugely valuable discovery. There was a great big hole in the bathtub the entire time; no wonder it was filling so slowly. Meanwhile, I was fixated on the faucet part of the equation, and how I could crank it open even wider.
What are your chronic amateur mistakes? Think about where you’ve been stagnant, and ask yourself what you do that the pros would never allow themselves to do.
What are the big holes in the tub? If you don’t know, any veteran can surely tell you.
***
You have a tendency to post profoundly helpful, insightful things at exactly the time I most need to read them.
More than that, you have the ability to take small truths that seem so self-evident that they’re almost always overlooked, and you drill right down to the core of them and present them in a way that makes them new again to your readers and makes them (or me, at least) wonder why they forgot them for so long. Very much your style, I think – spotting meaning by stopping, and watching, and actually paying attention to the things others rush by.
Anyhow, I don’t want to get too ebullient here. But thank you. This bit made my night. And gives me a pretty big understanding of how to tackle some things tomorrow.
One small helpful bit here: a good way that comes to mind to find the big, obvious holes you’re talking about is through mental inversion. When you’re trying to improve at something, instead of asking yourself how best to succeed at it, ask yourself how best to fail at it. Come up with a list of big, obvious ways to fail and ask yourself how many of those things you do. Often, there are a lot. It’s just that they’re obscured by a layer of excuses that hides them from notice until you look for them directly.
Ah, I love this mental inversion idea. I’ll look forward to trying it. I suspect that for a lot of things, the answer to “How best to fail at this” will be close to what I normally do :)
Great comment. I dig the inversion idea as well as a complement to what David wrote. Between the two ‘views’ one could potentially really get a handle on where to focus efforts and what not to repeat. Thanks.
Thank YOU. The mental inversion idea bridged a gap for me that will help me actually act on this article’s premise.
Hey, guys,
I’m really glad that you found the mental inversion thing useful. Between that and the advice that David provided from this article, I’ve sat down over the last couple days and asked myself some hard questions. They’re mostly variants on the theme that David discussed here, although occasionally I reframe them. So, some of the areas of my life where I really feel like I’m falling apart – productivity, money management, close social relationships. And then I ask myself, is there a simple, basic, easily corrected thing that I’m failing to do that is sabotaging me? Really simple, easy-to-implement stuff that’s like “if you put 10% effort into this it will take you 50% of the way to where you want to be.”
And, of course, for each area of my life I’m struggling in, there’s something like that. A very simple “basic technique” thing that I’m messing up on which undermines any other attempts I make at improving. And with some small tweaks to my behavior I’m noticing some small effects.
An example: I have trouble managing my money, which is very dangerous because I’m a poor graduate student. Some of the most basic skills, I’ve neglected for years – simple things like tracking my spending. So I downloaded a tracking app (simpler for me than a pad of paper) and am logging my purchases. The simple act of being aware of it has curbed impulse purchases substantially.
An interesting side note – I’m a smoker, and I usually smoked half a pack a day until two days ago when (you guessed it) I started logging my money. Apparently, the simple act of logging my purchases and having to look at a $10 expense for a pack of cigarettes is aversive enough that I couldn’t justify buying a new pack. So I’ve been smoke free for the last two days, and I figure, since I’ve broken through the worst of the withdrawals (today was nasty), I’ll ride this wave for a while and see if I can drop this habit permanently.
I did not expect this; it was actually a very impulsive choice, not premeditated at all. I think that a lot of things had to line up to make it happen, though, so it’s not just the “find the hole where the success leaks out” thing that made me quit. And, of course, these types of changes are only made meaningful by sticking to them long-term: I’m not fatalistic, but I’m hesitant to talk about having “quit” until I’ve proven to myself with my own actions that, yes, I’m sticking with this change.
But still, I think that’s a pretty cool ripple effect. I thought you all might appreciate it.
Hey well done. This notion has really changed my life in the last two years or so: that we’re always closer than we think to making substantial changes. Like you say, the simple change from not tracking spending to tracking spending makes an immediate difference to the outcome, and in many ways it is easier to live that way. It’s just a matter of hopping out of our usual groove to a different way, and often that different way is easy to identify if we just reflect on it a bit, or better, consult someone who is good at what we want to be good at. Best of luck with the no-smoking adventure!
What a simple yet powerful way of looking at things. I seem to spend more time thinking about the things I should (or want to) be doing than actually doing them. As a result, I end up resenting myself for not putting those thoughts in motion more often and almost like a form of punishment, maintain myself in this inertia. It’s a vicious cycle :-(
I suppose we can never do everything we think we should do, because doing takes much more time and energy than thinking. So we always need to be giving ourselves a break, for most of it at least.
Thank you yet again, David. I recently doubled down on my commitment to my marriage and we are communicating through problems that had scared us both for two years.
Your blog post reminds me that I need to make the same level of commitment to succeeding at my freelance work. I hereby commit myself wholly to the success of my freelance career.
Thank you for inspiring and for witnessing it.
By the way, I have completely stepped away from social media. I do occasionally miss some of my online friends, but we swapped emails so I shoot the occasional message out and get one as well. I’ll go back later, but this is good for me. Thanks again for helping me to rethink that.
That’s great. This way of thinking is new to me but so far it really simplifies things. The biggest difference is that you don’t need to commit to “stepping up” or “sorting things out” in some vague general way, you just commit to identifying and fixing one major hole in your “game” so to speak.
> switching tasks when you hit a hard part

That’s me when writing a blog post or doing a new design. And social media is always the culprit. “I’m stuck at the hard part of explaining what I’m trying to explain, so let’s check Reddit”
Ugh. I know I do it but it’s so hard to change habits like that. This is a reminder that I need more discipline, thanks.
Realizing this mechanism in my writing has been huge. There is zero temptation to take breaks when I’m on a roll, and huge temptations when I’m struggling with something. But it’s really the worst time to take a break. I try to quit for the day while I still have some momentum.
switching tasks when you hit a hard part.
This was an Aha! for me as well. Now that I’m aware, I hope to change this. It will be interesting to start paying attention and discover what other holes might be leaking.
Switching tasks when hitting a hard part.
You know David, when that hard part of a task gets hard, I end up more often than not going to this blog and reading your latest entry. What an irony, lol. But alas, stopping this behavior is critical for fixing the big holes. Great article!
I really like your idea of approaching learning and improving a skill as discovering answers instead of tediously accomplishing goals and reaching targets. School has trained people that learning is a means to an end rather than something to be enjoyed in and of itself. This is why many people like the idea of learning something new instead of actually learning it because it’s painful to see the gap between where you are and where you want to be.
When you view learning as something where you’re behind and you need to catch up as opposed to a process driven by curiosity, learning becomes embarrassing rather than something that could be exciting.
Agreed… two years ago I took a writing course at the local university, not having attended a class for probably ten years. And I was amazed at how different it felt, being there purely to learn, rather than to graduate or gain a credential. I feel like I learned five times as much in every class. I’m not sure what the state of public schooling is these days but I remember just wanting it all to be over.
I enjoyed reading this post very much. You’ve definitely got this nailed.
This past year, I’ve read a lot about “learning”, and am fascinated by what differentiates excellent performers/achievers from “standard” ones. My main takeaway is that pros consciously focus on eliminating their weaknesses, while amateurs will settle for a relatively satisfactory level of competence.
This is why, for example, people tend to stagnate after a year of learning an instrument. Once they get good enough to play a few pop songs, they get complacent, and stop improving, even after many, many more years of “practice”.
What are you actively trying to improve at the moment? You mention weight lifting, but I’m curious as to what other skills you’re applying this mindset toward.
I am focusing on four areas this year: fitness, meditation, work habits and relationships. In each area I’m trying to identify the most obvious self-defeating rookie mistake, and I have a pretty good idea for each of them now.
Ah, now I’m curious. What are they? If you don’t mind sharing. I can’t imagine you’re making rookie mistakes in meditation.
I’m also focusing on relationships and also decluttering/organizing. My mistake in relationships up to now has been to presume the other gender has the same mating psychology as I do.
And I’m making pretty much ALL the mistakes you can make in decluttering: not having a place for everything, not putting things back, assuming unpleasant tasks take a long time and therefore, not doing them, etc.
I feel like this with my website. How is it that I’ve been doing it for 6 years, yet blogger x comes along and in 6 months they are making lots of money off their blog and have thousands of views? I imagine it’s the same thing. I haven’t made the commitment to strengthen those “weaknesses.” Good stuff as usual!
Same here… I’m happy with this blog but I’ve been at this almost a decade, and I remember seeing bloggers fly past my numbers in less than a year. That’s good though — it means there are still big holes to plug that will make a big difference.
Numbers don’t mean anything. Popularity is not always linked to quality. Your blog is amazing.
Thank you for the insights! Identifying and eliminating weak areas in what we do is essential in getting better. Some say that the path to perfection is never being quite satisfied with your achievements. Perhaps that is what helps us discover what we lack or identify the “hole”. I would think you would agree that even pros have “holes” to overcome and weaknesses to eliminate. I also feel there is something greater that helps us on our path to getting better in something, and I feel it stems from deep inside, from who we are, from how we have grown and developed so far.
And what about faith, passion, excitement? They surely would add that “spice” or boost to our persistence to keep on track, to identify the “holes”, to simply want to be better …
> Three weeks in, I haven’t missed a set, and the whole endeavor feels completely different. There’s no second-guessing, no bargaining with myself. I’m doing less work and advancing more quickly.
A wise man once said “rules are good decisions made in batches” :)
Haha I should read that guy’s stuff :)
This is quality stuff. I read through the entire article, and there were a lot of in-depth points you made in every part. But yes, “The Hole Where All The Success Leaks Out”, this is real. It took me some time to realize why I struggled so much at my job. I could do well sometimes, but it was always a bathtub that was not going to fill (other sides of the equation — social life, family, long-term goals).
David,
Thank you so much for your article. I really enjoyed the ideas that you present here. It is incredibly beneficial for me to think about fixing the glaring mistakes in my endeavors, because I believe there is a whole lot of low-hanging fruit there, rather than my typical tendency to try to fix everything all at once as I wonder why I am not an expert. I really like the way that you presented the information by viewing it through the lens of someone who really wanted to become an expert at tennis.
Low hanging fruit for sure. Every skillset is different but we’re talking about fixing the what’s most glaring and obvious to any veteran of that skillset. And the more glaring/detrimental the hole is, the greater the gain from addressing it.
Oh my gosh, thank you for writing this. This gave me a huge aha. Thinking about all the things I don’t have compared to competitors and things I should be doing is suffocating, but focusing on the blunders to avoid is so much easier to think about. Thank you.
Great blog and love your insights, as always. Really provoking self awareness. Seeing the unvarnished truth has such a positive effect!
I’ve had that exact insight with respect to chess. Step one of improving your game is to stop making completely horrible blunders that immediately lose you the game. Step two is to notice when your opponent makes blunders so that you actually capitalize on them. Chess is nice, because after a game you can have a computer find your blunders, and your opponents, to see how you are doing. I have many, many games where it was just one blunder after another and neither of us noticed for 5 or 6 moves. If only there was a super-human life coach to review everything else I do looking for the big gaping holes:)
Totally agree, it’s both easier and more worthwhile to avoid a few big mistakes than to accomplish difficult acts.
I think that in business, as in fitness, consistency is the key. Even if it’s obvious, we often get distracted by any new shiny thing.
Great post as always David, you succeeded in going deeper into the topic while keeping everything simple.
David, love this post and I don’t want to add what has already been added to the comments above. I’m interested that you used weight training as an example. I’ve been a personal trainer for 8 years and I’m pretty good at coaching clients but lousy at coaching myself. That’s why I hire a coach to write my programs. Not only do they see the gaps, it exposes me to a different way of doing things.
If you keep showing up and not missing a rep, good things will happen.
The most obvious gap in most people’s life when it comes to most part of their lives is – I think – not getting enough sleep. It’s almost like a sort of meta hole.
I really appreciate the way you consistently share things that are simple and obvious and never appear that way to me until I read your take on them. I also love your personal examples, and have begun incorporating more of that into my own blogging/teaching.
But more than anything, I’m commenting on this one to say thank you for using “her” in reference to your example of a boxer. Yep. This is how we shift the paradigm.
Good stuff! Keep it coming.
>> So I sketched up a very modest program, less than half of what I “normally” do, and then resolved to complete every rep of it for three months. The idea is to completely eliminate this one gaping blunder, my casual inconsistency, before trying to improve at the “top end” of things—personal bests, big volume, showy numbers.
This is exactly what I also figured out! After years of “lifting”, which meant a whole lot of skipping workouts with an occasional workout actually fit in where I pounded myself into the dirt, I finally realized that just showing up is half the battle.
So I transitioned to a very simple 5×5 protocol two days/week: 5 sets x 5 reps, 3 compound exercises each day, with only squats being repeated twice/week. Also, my only rule is: “just show up and get busy for 5min; unless I’m sick enough to stay in bed that day, I show up; if after 5min I’m still not feeling it, I am allowed to stop”.
And guess what? While I do sometimes choose to abbreviate my workout (don’t complete my reps or sets for every exercise), I always show up and I think I’ve only cut things at the 5min-mark like twice ALL YEAR! During about a solid 12 months, I’ve only done 2 5-min workouts. The others have all gone for at least 30min.
I don’t need to tell you the difference in growth this year (even though I’m older and biologically less capable of it) vs all of my previous years. It speaks for itself just by glancing at my physique.
| true | true | true |
Each of us has a few professional-level skills—usually ones relating to our jobs, or hobbies we’ve been trained in formally. But when it comes to everything else we do, we’re amateurs. Being an amateur just means below pro level—you may do some aspects quite well, but you still muck things up that a professional never would. For example, there are dishes
|
2024-10-12 00:00:00
|
2018-05-14 00:00:00
| null |
article
|
raptitude.com
|
Raptitude.com
| null | null |
6,030,293 |
http://www.pittsburghmagazine.com/Best-of-the-Burgh-Blogs/The-412/July-2013/Nine-Pittsburgh-Startups-That-Could-Be-the-Next-Instagram/
|
9 Pittsburgh Startups That Could Be the Next Instagram | Pittsburgh Magazine
|
PM Staff
|
# 9 Pittsburgh Startups That Could Be the Next Instagram
##### Innovations being fostered at AlphaLab include an online marketplace for unused storage and an app for connecting the LGBT community.
How many times a week are you surfing Etsy, Pinterest or Amazon when you’re faced with this conundrum: *I love it, but can I afford it?*
Pittsburgh-based startup **BudgetSimple** has developed an online financial advisor to make that daunting question easy as pie. Users can create straightforward budget plans at the beginning of every month and allocate money for everything from car insurance to big-ticket electronics purchases. There’s even a smartphone app in case you’re debating an impulse buy at a brick-and-mortar store.
BudgetSimple is one of nine innovative tech companies selected by South Side startup incubator **AlphaLab** for its newest cycle of mentoring and development.
During the 20-week program, AlphaLab provides $25,000 in investment capital, office space, strategic support, mentorship and educational sessions to help the companies accelerate the launch and growth of their company. No lawyer fees. No accounting fees. At the end of the program, the startups will present their products to potential investors from around the nation.
Check out the full roster of innovative companies:
- **AthleteTrax** is a Web app that helps college athletic departments improve academic and athletic performance.
- **BudgetSimple** is an online financial advisor that helps you answer the most important question, “Can I afford this?”
- **Collected** helps users easily find and access content across the many Web services used to create and edit documents.
- **Crowdasaurus** hosts brandable crowdfunding communities for businesses and organizations to engage their user bases around funding campaigns.
- **DropKicker** is a social accountability platform that encourages better health through habits.
- **MegaBits** is one of the first massively multiplayer mobile games based on real-world physical locations.
- **ProfilePasser** is a mobile platform that connects high-school athletes with college recruiters and increases interaction at events.
- **Spacefinity** is an online marketplace to share unused storage space and rent it out to others.
- **Wing Ma’am** is a mobile app that connects users with other LGBT women and notifies them of related events going on in their city.
Want to have a chat with these bright individuals? AlphaLab is hosting an **Open Coffee Club** July 25 at its East Carson Street office. It’ll be like a live episode of “The Big Bang Theory.” You can even ask the AthleteTrax guys if their app will eventually expand to **track arrests**.
### What’s going on today?
- It’s $1 Night at North Versailles Bowling Center! After 7 p.m., bowling games, shoe rentals and food cost $1 each.
| true | true | true |
Innovations being fostered at AlphaLab include an online marketplace for unused storage and an app for connecting the LGBT community.
|
2024-10-12 00:00:00
|
2013-07-10 00:00:00
|
article
|
pittsburghmagazine.com
|
Pittsburgh Magazine
| null | null |
|
29,890,730 |
https://blog.goncharov.page/how-to-get-an-online-masters-in-cs-for-a-price-of-your-morning-latte
|
How to get an online Master's in CS for a price of your morning latte
|
Andrey Goncharov
|
# How to get an online Master's in CS for a price of your morning latte
I'll tell you:
- How I got into Georgia Tech's Online Master of Science in Computer Science (OMSCS) program while working and living in Russia.
- Requirements to enroll for foreign students.
- How much it costs.
- My experience with the courses.
- Why I am still pursuing the degree even after I started working at Facebook.
## Who
Hiya!
ℹ️ My name is Andrey G. I am a software engineer from London, UK. Primarily, I am a full-stack web developer (think React, Angular, Node.js), but I also have a keen interest in low-level stuff (hello, C) and finance (love-hate relationship with Pandas).
G. stands for Goncharov. I wanted to save you the pain of reading my Cyrillic last name.
💼 Full-stack (web, blockchain, and even a bit of embedded) at software consultancies (DSR, DataArt) ➡️ Headed front-end development at Hazelcast ➡️ Front-end at Bricks (next-gen spreadsheet web app) ➡️ Full-time maintainer of Flipper at Meta (ex-Facebook).
📝 I write about tech in my small blog.
🎤 Occasionally, I speak at conferences.
🇬🇧 I help people get their Global Talent visas.
🎓 I am currently pursuing a Master's in Computer Science (OMSCS) from Georgia Tech.
❤️ I love math, physics, rational thinking, and figuring out how things work. In my spare time, I enjoy hiking, snowboarding, boxing, and weight lifting.
📫 Stay in touch on Twitter, LinkedIn, and Instagram. Drop me a DM on Matrix - @andrey:goncharov.ai.
🧙 Buy me a coffee to support my endeavours!
## What
As you might have inferred at this point, the online Master's in CS for a price of your morning latte is the one from Georgia Tech. They call it Online Master of Science in Computer Science or OMSCS in short.
What it is:
- It is a proper Master's in CS. Your degree name is going to be the same as the on-campus one.
- It is a course-based Master's. You don't have to write a thesis to graduate. You only need to take 30 hours of classes (10 classes).
- It is taught solely in English. You need to pass a language exam (TOEFL) unless you are a US citizen, a green card holder, or you studied at the US uni before. My gut tells me that you should be fine if you hold UK, Canadian, Australian, or Irish citizenship, but you'd better ask yourself.
- It is a 100% asynchronous program. All lectures are pre-recorded. All office hours are recorded. Recordings are available for all students. All communication with TAs (teaching assistants) and professors goes through Piazza. It is a Reddit-like platform where every class gets its own subreddit every semester. Even exams are asynchronous. They are carried out online in the form of a test. Usually, students have an entire week to pass an exam. Therefore, it doesn't matter in which timezone you live or how busy you are at work.
- It is a Master's from a highly ranked uni. QS World University Rankings puts Georgia Tech in 88th place. CS Rankings places Georgia Tech in 5th place in 2021.
- It is a rigorous and demanding program. You are expected to complete graduate-level work and have graduate-level knowledge. While some classes might take you as little as 5-6 hours a week, other classes might squeeze every last free minute out of your schedule with their workload worth 30 hours a week. I don't want to scare you off. I heard that people without formal CS background still succeeded in the program, but be prepared to make sacrifices and invest in a good chair ;).
- It has a flexible schedule. You can take as little as 1 class per semester. It means 3 classes per year. You can even skip one of the semesters if needed. At this pace, the program is going to take you a little bit over 3 years. If you have some spare time, you can rush through the courses taking 3 classes per semester.
- It is more than affordable. I was not joking about the price of your morning latte. If you take 1 class per semester, you finish in 3 years and 4 months, or 40 months, or roughly 1200 days. With 1 class they charge you $841 per semester. It adds up to $8410 for the whole program. Divided by 1200 days, we get $7 per day. A large latte in London is around 3 GBP. A muffin to accompany your latte is going to be another 2 GBP. 5 GBP is $6.79 at the current exchange rate.
You might have seen CS-7210 Distributed Computing with a stunning 62 hours of weekly workload. It is a brand new class which, I believe, is going to be adjusted over the next few semesters to normalize the hours.
You'll be within your rights to claim that I lied about the Master's being at the same price as your morning latte. First, I sneakily added a muffin to the equation. Second, even with the muffin it is still 21 cents more. In my defense, statista.com claims that 60% of the UK population has a sweet tooth. It implies that they rarely enjoy their latte without a sweet delicious companion. I would also go out on a limb and claim that $6.79 is on par with $7. Nevertheless, I sincerely beg your forgiveness if it caught your eye and completely ruined the article for you.
You might have raised your eyebrows when I mentioned 3 semesters per year. In Russia (and, probably, in some other countries) we have 2: fall and spring. In the US, you can also study on summer.
What it isn't:
- It is not a research program. Reddit folks suggest that there might be a way to do research in scope of OMSCS, but it is limited.
- Georgia Tech will not sponsor your F-1 visa for the *online* program. As a result, you are not eligible for STEM OPT after graduation.
OMSCS offers a vast selection of 56 classes. Most of the classes belong to 1 of 4 specializations. Each specialization is a list of courses. You need to take 5-6 courses from the specialization list and 4-5 courses of your choice from the overall list of classes available to OMSCS students. Each course has a public page with a syllabus and prerequisites. Most courses also have dozens and hundreds of reviews on OMSCentral. Bottom line is that everyone should be able to find something aligned with their interests.
## How0
... to get in.
- You need to have an undergraduate degree in Computer Science or a related field (mathematics, computer engineering, electrical engineering). It might be from any regionally accredited institution in your country. For instance, I graduated from a regional Russian university (Voronezh State University if you are interested) 5 years prior to starting my OMSCS journey. You might be fine with a BA in a totally unrelated field. Reddit people share their stories of getting in with a BA in Philosophy and Psychology. You can browse through various admission threads (like this one) to find a background matching yours and see the outcome.
- It is officially recommended to have a GPA of 3.0 and higher. However, it is not a strict requirement. I got in with a low GPA around 2.98.
- You need to send transcripts from all academic institutions attended. If the transcripts are not in English, you have to translate them. Last time I checked, you need to send them a paper degree with the translation or ask your previous uni to send these documents from the official university domain. I translated my degree myself, and then asked the dean of the Computer Science faculty where I studied before to send it over to Georgia Tech. God bless you, Alexander Krylovetsky, for making my life so much easier!
- Collect 3 recommendation letters. They should come from your previous professors or from your professional contacts. They should be able to testify to your technical skills. In other words, do not ask the headmaster of your previous uni or your CEO to write a recommendation letter. Ask someone credible whom you interacted with directly. Ask a previous professor whose class you enjoyed and did well at. Ask your direct manager at work, or a skip-level manager if they have enough context about your work. I was a bad student before, so I, kind of, burnt all bridges with my potential academic references. In the end, I asked 2 of my skip-level managers from previous gigs and my current direct manager to write my recommendations. They don't have to provide a paper letter. You submit their emails into the system when you apply. Georgia Tech contacts them later on with instructions on how to fill in a recommendation letter for you online.
- Write and submit your statement of purpose and your background statement. Here is my SoP and my background statement if you are interested.
- Pass TOEFL with a minimum score of 100 or IELTS with a minimum score of 7.5. At different times, I did both exams: TOEFL for OMSCS, IELTS for my UK visa. I found them pretty similar. I cannot recommend this YouTube channel enough; it gave me amazing tips on reading, listening, and speaking. I also watched a couple of random videos on YouTube about the writing section, but I can't name anything that stands out. Here is my short advice for every section:
- Reading - do not read the entire text. Instead, skim over the first sentence of every paragraph. Next, carefully read every first and last sentence of each paragraph. Go straight to the questions. You should have enough context to quickly find the right paragraph, and then find your answer.
- Listening - keep *short* notes about the essential information.
- Speaking - memorize several templates on how to answer. They are not interested in how reasonable your answers to their questions are. You do not need to provide facts and proofs. They want to hear your broad vocabulary and well-structured speech.
- Writing - once again, memorize a template of two and you should be fine.
- Pay your admission fee.
- Mind the deadlines. For the fall matriculation it's March 10. For the spring matriculation it's August 10. Yes, you have to apply almost half a year before you start.
- All done!
Remember that in the US the highest grade is 4.0. For example, in Russia it is 5. You might need to convert your grades to calculate your GPA properly.
## How1
... it is going so far.
As of this moment, I have been taking things slowly doing a single class at a time. In the upcoming spring semester I'll try to take 2 for the first time. Wish me luck!
So far I managed to get through:
- CS 6200: Introduction to Operating Systems
- CS 6250: Computer Networks
- ISYE 6644: Simulation and Modeling for Engineering and Science
- CS 7646: Machine Learning for Trading
All these classes were of exceptional quality. Each one of them produced enormous returns for every penny and every minute invested.
A few tips and tricks I learned so far:
- Don't try to come up with a perfect order of taking classes in advance. Each class has a limited number of seats. Many popular classes get filled in a matter of hours if not minutes. Moreover, if you are just starting the program, you'll get to pick classes last. So most high-demand classes are going to be taken by seniors at that point. Don't worry! You'll still get a spot in a class. But it might be not the class you first thought of. It is exactly how I wound up taking "CS 6200: Introduction to Operating Systems" as my first class. Believe me, I don't regret it a single minute! My advice here is to identify a few potential classes you'd like to take. If none of them have seats, use OMSCentral and just take the highest-rated class available.
- Get ready to suffer if you haven't read academic literature for a while ;).
- If you're on a tight schedule, do your projects over the last weekend. This way 99% of your potential questions will already be answered on Piazza.
- Take prerequisites seriously. If you only knew how much I struggled with "ISYE 6644: Simulation and Modeling for Engineering and Science" because of my arrogance and my neglect of the math and statistics prerequisite.
- You're a student now. Start asking for student discounts everywhere! Just because you are not in your twenties anymore doesn't mean you're not eligible.
## Why
... I am still studying after getting into FAANG.
In 2009, when I got into VSU (Voronezh State University), I was a careless 16-year old boy with an emerging dream of becoming a rock star. Not a rock star developer, but the one playing music. It took me a long while to realize that my music career went downhill and to rediscover my passion for computer science. As an unavoidable consequence, I missed tons of opportunities to learn fundamentals from great people. I picked up web development and JavaScript myself. I tried to fill the gaps in fundamental knowledge over the years. Yet, the more I learned, the more I realized how much I don't know. I felt like there is a huge, potentially, infinite surface of unknown. Every time I learn something, I create a tiny island of known. However, without systematic, structured learning, these islands, sometimes, don't have proper connections. That's exactly what OMSCS classes give. They don't only create new dots of knowledge, but also connect the dots.
Another major reason is that if I ever decide to move to the US on an H-1B visa, I'll be eligible for the separate Master's quota. Hopefully, it gives me a better chance of winning the visa lottery.
To be honest, if I ever decide to move to the US, it is, probably, going to be on L-1B. I do enough gambling on the stock market.
Last, I am, kind of, tired of explaining to everyone what a Specialist's degree is. Yes, it could be considered a Master's equivalent, but wouldn't it be better to have a proper Master's?
## Null byte
I hope I have made at least a semi-decent argument for why education is cool. It also does not hurt when it's cheap, right? See you soon in one of the classes ;)
Here are some useful links:
- omscs.gatech.edu/program-info/admission-cri..
- omscs.gatech.edu/program-info/application-d..
- grad.gatech.edu/degree-programs/computer-sc..
- omscs.gatech.edu/additional-app-guidelines
- omscs.gatech.edu/prospective-students/faq
- omscs.gatech.edu/current-courses
- omscs.gatech.edu/program-info/specializations
- bursar.gatech.edu/tuition-fees
- reddit.com/r/OMSCS
- omscentral.com/courses
Check out my new guide on the Global Talent visa with concrete document examples on Gumroad!
P.S. Let's stay in touch! Twitter, LinkedIn, newsletter, RSS, Instagram, Instagram - pick your poison. Feel free to drop me a DM with any questions.
| true | true | true |
How I got into the OMSCS program while working and living in Russia. Why I am still doing it after starting working at FAANG.
|
2024-10-12 00:00:00
|
2022-01-11 00:00:00
|
https://hashnode.com/utility/r?url=https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1641903554998%2FTBFlRE6BP.jpeg%3Fw%3D1200%26h%3D630%26fit%3Dcrop%26crop%3Dentropy%26auto%3Dcompress%2Cformat%26format%3Dwebp%26fm%3Dpng
|
article
|
goncharov.page
|
Andrey Goncharov
| null | null |
31,109,993 |
https://www.wsj.com/articles/sandberg-facebook-kotick-activision-blizzard-daily-mail-11650549074
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,896,956 |
https://arxiv.org/abs/2004.06195v1
|
AiR-ViBeR: Exfiltrating Data from Air-Gapped Computers via Covert Surface ViBrAtIoNs
|
Guri; Mordechai
|
# Computer Science > Cryptography and Security
[Submitted on 13 Apr 2020]
# Title: AiR-ViBeR: Exfiltrating Data from Air-Gapped Computers via Covert Surface ViBrAtIoNs
Abstract: Air-gap covert channels are special types of covert communication channels that enable attackers to exfiltrate data from isolated, network-less computers. Various types of air-gap covert channels have been demonstrated over the years, including electromagnetic, magnetic, acoustic, optical, and thermal.
In this paper, we introduce a new type of vibrational (seismic) covert channel. We observe that computers vibrate at a frequency correlated to the rotation speed of their internal fans. These inaudible vibrations affect the entire structure on which the computer is placed. Our method is based on malware's capability of controlling the vibrations generated by a computer by regulating its internal fan speeds. We show that the malware-generated covert vibrations can be sensed by nearby smartphones via the integrated, sensitive *accelerometers*. Notably, the accelerometer sensors in smartphones can be accessed by any app without requiring user permissions, which makes this attack highly evasive. We implemented AiR-ViBeR, malware that encodes binary information and modulates it over a low-frequency vibrational carrier. The data is then decoded by a malicious application on a smartphone placed on the same surface (e.g., on a desk). We discuss the attack model, provide technical background, and present the implementation details and evaluation results. Our results show that using AiR-ViBeR, data can be exfiltrated from an air-gapped computer to a nearby smartphone on the same table, or even an adjacent table, via vibrations. Finally, we propose a set of countermeasures for this new type of attack.
| true | true | true |
Air-gap covert channels are special types of covert communication channels that enable attackers to exfiltrate data from isolated, network-less computers. Various types of air-gap covert channels have been demonstrated over the years, including electromagnetic, magnetic, acoustic, optical, and thermal. In this paper, we introduce a new type of vibrational (seismic) covert channel. We observe that computers vibrate at a frequency correlated to the rotation speed of their internal fans. These inaudible vibrations affect the entire structure on which the computer is placed. Our method is based on malware's capability of controlling the vibrations generated by a computer, by regulating its internal fan speeds. We show that the malware-generated covert vibrations can be sensed by nearby smartphones via the integrated, sensitive \textit{accelerometers}. Notably, the accelerometer sensors in smartphones can be accessed by any app without requiring the user permissions, which make this attack highly evasive. We implemented AiR-ViBeR, malware that encodes binary information, and modulate it over a low frequency vibrational carrier. The data is then decoded by malicious application on a smartphone placed on the same surface (e.g., on a desk). We discuss the attack model, provide technical background, and present the implementation details and evaluation results. Our results show that using AiR-ViBeR, data can be exfiltrated from air-gapped computer to a nearby smartphone on the same table, or even an adjacent table, via vibrations. Finally, we propose a set of countermeasures for this new type of attack.
|
2024-10-12 00:00:00
|
2020-04-13 00:00:00
|
/static/browse/0.3.4/images/arxiv-logo-fb.png
|
website
|
arxiv.org
|
arXiv.org
| null | null |
24,901,423 |
https://en-global.help.yahoo.com/kb/SLN35505.html
|
help
| null |
Select the product you need help with and find a solution
You have been redirected to this page because the page you requested was not found.
| true | true | true | null |
2024-10-12 00:00:00
|
2017-01-01 00:00:00
| null | null | null | null | null |
|
20,989,213 |
https://www.collectorsweekly.com/articles/cliftons-brookdale-cafeteria/
|
L.A.’s Wildest Cafeteria Served Utopian Fantasy With a Side of Enchiladas
|
Hunter Oatman-Stanford
|
On a decrepit block of Broadway in downtown Los Angeles, hidden behind a dilapidated, aging façade, lies the ghost of a palatial dining hall filled with towering redwoods and a gurgling stream. Known as Clifton’s Brookdale Cafeteria, this terraced wonderland recalls a different time, when cafeterias were classy and downtown living was tops. Against all odds, the Brookdale outlasted attacks from notorious L.A. mobsters and decades of neighborhood decline. Over the last few years, Clifton’s has been closed while staging its comeback, finally being restored to its original Depression-era grandeur [Update: It closed again in 2018.]
“At its height, the Brookdale could seat up to 15,000 people a day. No other restaurant on Earth could do that.”
As the last jewel of a 20th-century cafeteria empire, the Brookdale’s unique vision of paradise has earned a spot in the hearts of many longtime Angelenos. Real-estate developer and entrepreneur Andrew Meieran, the restaurant’s current owner, says he was first drawn to the idea of rehabilitating Clifton’s almost a decade ago, enticed by its quirkiness and his disbelief that it was still in operation. “I just became fascinated by it. It’s this wonderful link to old L.A.,” he says.
The overhaul of the Brookdale restaurant, which Meieran began after purchasing it in 2010, is meant to restore many of its original details while also updating the business for a more modern L.A. “We’re bringing Clifton’s back to the way it originally was, to be the center of the community,” says Meieran. “It was a place where everybody could meet, a place that fueled artistic passions. You had everyone from Jack Kerouac to Ray Bradbury eating here. People were inspired by this complete fantasy environment.”
Clifford and Nelda Clinton opened their first restaurant in 1931, on the site of a run-down Boos Brothers cafeteria at 618 Olive Street, naming it Clifton’s by combining Clifford’s two names. “It was during the height of the Great Depression when we came to Los Angeles from Berkeley,” explains Don Clinton, Clifford Clinton’s son.
“My dad had grown up in San Francisco, working with his father in the Clinton Cafeteria. He sold his interest to his brother-in-law and cousin, and moved south because his ideas were a little more liberal, exotic—progressive even. Dad wanted to feed people even though they couldn’t afford it. If they were hungry, they’d be welcome just the same.” In the thick of the Depression, Clifford Clinton built his restaurant as a place of refuge for those unable to afford a hot meal (one of the neon signs out front read “PAY WHAT YOU WISH”). Soon after the first Clifton’s opened, customers began referring to it as the “Cafeteria of the Golden Rule.”
Having grown up in a family of strong Christian faith, Clinton was taken on family missionary trips to China, and was profoundly affected by the poverty he witnessed. “As a boy, my dad spent time in China on two different trips with his parents,” Don says. “The last trip was when he was 10 or 11 years old, and he saw so much starvation—mothers trying to feed their children by giving them roots or mud to fill them up. Dad was so impressed that he vowed if he were able, he would feed people who were hungry. That was his motivation, the feeling that we were put on Earth to do something good for others, and he carried that theme pretty much throughout his life.”
In an era when profit-oriented businesses often attempt to cast themselves as philanthropic ventures (see the conceit of certain internet entrepreneurs, or delusional luxury retailers like Restoration Hardware), Clifton’s original mission comes as a breath of fresh air. At the Clintons’ second restaurant, the Penny Caveteria—named for its basement-level locale—meals cost only one cent. They were free if you used one of Clifton’s redeemable tickets, which were frequently given out to the homeless.
Long before the Civil Rights movement allowed black Americans to freely patronize white-run establishments, Clifton’s restaurants were integrated. In response to a complaint about his progressive policy, Clinton wrote in his weekly newsletter, “If colored skin is a passport to death for our liberties, then it is a passport to Clifton’s.” Regardless of income or skin color, Clinton wanted everyone who ate at his restaurants to be completely satisfied, so the phrase “Dine free unless delighted” was printed on every check. Though many patrons ate for free, enough customers gave significantly more than they were asked to keep the business afloat.
“The Clintons were true missionaries, in that they wanted to show by deed and example, and did a fantastic job with it,” says Meieran. “They offered self-improvement classes; they provided a barber and ways to clean up if you couldn’t afford them yourself. It was all about the community and providing help, as opposed to just making money. That was never the goal.” Working with local doctors and pharmacists, the company created a fully paid medical plan for its staff. After surgery or a hospital stay, recuperation was provided in the sprawling Clinton family home in the Los Feliz neighborhood of L.A.
Eventually, one of the Caveteria’s regular patrons offered Clinton the chance to open a location in his building at 648 South Broadway. In 1935, this spot would become the Brookdale, the largest cafeteria in the world. Only a few years after opening the Brookdale, the Clintons redesigned the space as a lavish distraction from the country’s financial hardships, with an over-the-top themed environment that made it one of L.A.’s new hotspots.
The Brookdale’s interior was inspired after California’s great national parks, with columns made from actual redwoods, rocks and foliage bursting from the walls, and a stream running right through the dining room. “It was copied after Brookdale Lodge in the Santa Cruz mountains not far from Felton, California,” Don explains, referencing a Prohibition-era hotel whose Brook Room was built around live redwoods and a running stream. “My dad spent a lot of years around there as a boy, and he loved those redwoods, so he wanted to replicate that as interior décor.”
Today, it’s hard to imagine cafeterias as groundbreaking, since they’re now known for drawing the senior-citizen crowd with bland, overcooked food and hospital-style furnishings. But in the 1930s, cafeteria dining was a completely modern innovation, representing the freshest fare available.
“If colored skin is a passport to death for our liberties, then it is a passport to Clifton’s.”
“The cafeteria was a very European idea, and it grew out of the Scandinavian smorgasbord, brought to the United States in the 1880s,” explains Meieran. “Cafeterias originally served top-end, high-quality food because their money was spent on the product instead of the service. Since people were serving themselves, they didn’t have to pay the wait staff as much, so they could spend more money on food.
“In addition, it had to be good because it was actually out there in front of you, and customers would never pick items if they looked or smelled bad. On a normal restaurant menu, you don’t have any idea what you’re going to get until it finally arrives. But in a cafeteria, the food is right up front and personal. And if it don’t look good, you ain’t gonna buy it.”
According to Meieran, the standard Clifton’s menu has changed significantly since the company’s birth. “I actually found all the original recipes for Clifton’s, all the way back to the opening,” Meieran says. “In fact, the recipes go back to the family’s cafeterias in San Francisco, in the late ’20s. It really began with much more extensive, high-quality, eclectic fare. I have pictures of Clifton’s in the 1930s, when they were serving lobster, whole lobsters. They had them all laid out, they had cracked crab, and this whole raw bar. It was beautiful, just incredible.”
“Cafeterias became the original fast food,” he adds, “and they allowed people to time their days better, instead of worrying if they’d be served quickly at a normal restaurant. It was the first real convenience food. The other big factor was that you could serve many more people: At its height, Clifton’s Brookdale location could seat up to 15,000 people a day. No other restaurant on Earth could do that.” Besides their forested main dining hall, the Brookdale also included a tiny, two-seat chapel for spiritual reflecting plus a separate top floor with a more traditional, red-and-white interior.
The Brookdale was such a success, the Clintons decided to give their original restaurant a makeover, transforming it into the Pacific Seas in 1939—a tropical wonderland filled with waterfalls, palm trees, neon lights, tiki furnishings, and a Rain Room, where a fake thunderstorm occurred every 20 minutes. Clinton called the restaurant “a poor man’s nightclub.” Never one to overlook their religious commitments, the Clintons paid homage to the biblical Garden of Gethsemane in a small basement room.
Clifton’s over-the-top interiors also had influential admirers. “Welton Beckett, who designed the original Brookdale, was best friends with Walt Disney,” says Meieran. “Several people who knew him said that Disney went to Clifton’s and was inspired by the design. There had already been fantasy architecture, but Clifton’s took it to a new level.” Disneyland, the pinnacle of fantasy environments, wouldn’t open for another two decades.
Even as Clifton’s was succeeding via the family’s idealistic efforts, the city of Los Angeles was awash with corruption, particularly following the election of Mayor Frank Shaw in 1933. Instead of taking a cut and turning a blind eye to organized crime as previous politicians had, Shaw streamlined the city’s mob rackets, thereby increasing his own payoff. Along with his brother, whom he appointed secretary, Shaw began selling off city appointments and set the going bribe rates for illegal activities like prostitution and gambling.
Clifford Clinton never had his sights set on politics, but in 1936, a city supervisor asked for his opinion on the food-service problems at L.A. County General Hospital. True to his nature, Clinton performed a detailed inquiry that revealed huge misappropriations, resulting in the dismissal of the hospital director. The following year, Clinton joined the county Grand Jury as chairman of a committee to investigate vice. Using his customer base as a huge insider network, Clinton tipped the jury off to L.A.’s widespread corruption, demanding a thorough investigation. Only when they ignored his request did Clinton realize how deep the corruption really ran.
“There was quite the gangster element coming in from the east and becoming more established,” says Don. “Once dad was on the grand jury, he had a badge and credentials to push for change. There were lots of protected houses of prostitution, gambling, and other things that were clearly against the law, but many of the police were being paid off to just wink at that.”
“There had already been fantasy architecture, but Clifton’s took it to a new level.”
Not only was the mayor’s office in on these shady dealings, but the local news conglomerate, controlled by the Chandler family of the “Los Angeles Times,” plus District Attorney Buron Fitts and Chief of Police James E. Davis were all part of the plot. People who attempted to expose their nefarious behavior were blackmailed, or sometimes just murdered.
Yet Clinton didn’t back down. Instead, he filed a minority complaint to the grand jury and established the Citizens Independent Vice Investigating Committee (CIVIC). Clinton soon had evidence of nearly 600 brothels, 1,800 bookies, and 300 gambling houses. In response, his businesses were suddenly attacked: Notices for phony sanitation violations and false taxes were delivered, new permits were denied, stink bombs were left in kitchens and bathrooms, food-poisoning complaints poured in, and buses full of supposedly “undesirable” customers were dropped off at the cafeterias’ entrances.
Things quickly went from bad to worse. In October, a bomb was detonated in the kitchen of the Clinton home. “It was just a couple of days before Halloween in 1937,” says Don, “when the corrupt elements of the police department put a bomb under our house. I was about 11 or 12, and I was sleeping with my brother and sister on the outdoor sleeping porch. The bomb was supposed to be a warning that our dad was getting too close trying to uncover all this corruption in Los Angeles.”
Luckily, nobody was hurt, but only a few months later, a car-bomb critically injured former police chief Harry Raymond, who was working as an investigator for Clinton’s CIVIC group. Shortly after, the director of LAPD’s Special Intelligence Unit was arrested for the crime when bomb parts and other evidence were found at his home.
Clinton still wanted to take down Shaw, so he pushed for a mayoral recall. Despite being blocked from all major news outlets, he found a small radio station that would broadcast CIVIC’s findings four times a day. “My dad finally initiated a recall movement, the first recall of a big-city mayor in America. It made a lot of ripples,” says Don. Though mob bosses tried everything to slow the campaign, Clinton gathered the necessary signatures, and the city overwhelmingly supported his new candidate, Judge Fletcher Bowron, who got nearly double the votes Shaw did.
After winning his war against the Shaw regime, Clinton began working with Dr. Henry Borsook, a biochemist at Caltech, to create a cheap meal supplement to fight hunger. “He went to Caltech in the late ’30s to develop this food product using a derivative from soybeans,” says Don. Borsook’s research led to Multi-Purpose Food (MPF), a high-protein supplement that cost only three cents per meal. Clinton went on to found Meals for Millions in 1946, which eventually produced and distributed millions of pounds of MPF to relief agencies around the globe.
By the late 1940s, Don had taken over the Clifton’s business with his brother, Edmond, and his sister, Jean, and they continued expanding their cafeteria empire to 11 different locations in the Southern California area. As Meieran explains, the décor was no longer limited to kitsch. “One was a sort of European garden, and another one was an Etruscan villa-esque place,” says Meieran.
“There was a very space-age, modern Clifton’s fitting with the whole Googie-architecture style. A couple were very small, and those didn’t really have themes. But Clifton’s Silver Spoon location is a good example: Before, the building had been an old jewelry store, so the Clifton’s interior utilized the existing architecture and location in the Jewelry District, repurposing the jewelry cabinets as display cases.”
The Brookdale building was originally constructed in 1904 as a furniture store, signs of which contractors discovered during the most recent renovation, like a column painted with directions to different store departments. Though the restaurant has undergone alterations every few decades over the last 80 years, many interior elements remain virtually unchanged.
In fact, Meieran’s crew discovered a neon light sealed within the walls, likely powered continuously since 1935. “It was embedded like 8 inches into the wall, and closed in with a piece of plywood and then sheetrock and tile,” says Meieran. The six rows of neon tubing were originally installed to backlight a painted woodland scene in the basement restroom, and then accidentally walled-over in 1949 when part of the space was converted to a storage area.
“The 1960 renovation was when they covered the old façade, and tore out a lot of the interior elements,” explains Meieran. “They painted the woodwork battleship gray and white. They took out the old water wheel and the old wishing well.”
During its prime, the Brookdale was located in the busiest neighborhood of the biggest boomtown on the West Coast, but L.A.’s vast suburban expansion and crumbling transit infrastructure hit its downtown hard. “From the mid ’50s, it started declining, and by the time you get to roughly 1965, development was in full retreat,” Meieran says. “L.A. had an unlimited amount of space at the time: You could build all the way to the beaches in every direction, and to the desert in the other side. And you had all these different centers—places like Century City and parts of Santa Monica and Beverly Hills were booming.
“People and industry started leaving the old-school Financial District because it was too far, a bit dirty, and there were more homeless people. And as traffic got worse, it was harder to get here, and of course, they took out the bloody transit system. By the time you’re in the mid-’60s, downtown was no longer the city’s center.”
Even with its wild interior décor, the Clifton’s chain was still hit hard by shifts in the food industry, and the Pacific Seas location became the first branch to close in 1960. Clifford Clinton passed away in 1969, but his children kept the restaurants going as long as they could. “As leases ran out and fast food took away more and more of the youngsters, cafeterias became a little passé, and we just got down to the final one on Broadway,” Don says. Today, the Brookdale sits like a bizarre time capsule on a block of fast-food joints, cheap electronics stores, and signs reading “CASH 4 GOLD.”
So how did the Brookdale manage to survive these turbulent times? “It was such an incredible concept and so well executed that the Brookdale maintained itself through a lot of inertia and public goodwill,” says Meieran. “They also adapted to a different economic environment as downtown shifted to a different demographic. They started catering to the neighborhood by lowering the quality of food and lowering their prices, or at least maintaining prices when everything else went up. When we bought the place, it was still serving 25-cent coffee, in an era of $3 or $4 coffees. But the quality was commensurate with 25-cent coffee. It was a great place that was slowly evaporating.”
“I think its reputation and ambiance and unique décor were all pulling factors,” Don says, though he admits that toward the end, much of the clientele was made up of lifetime supporters rather than new customers. “Young people didn’t like cafeterias because they had enough of cafeterias, either in the military or in school, and they didn’t care for them very much.”
Whether or not it was hip to eat there, the Clinton family managed to keep meals available to those who couldn’t afford them throughout the Brookdale’s many decades of operation. “We tried to keep that up until the very end. In honor of my dad’s commitment, we felt that it made good sense, and it was the right thing to do, giving back to the community that’s supporting you and having empathy for the down and out. It always worked out.”
Meieran hopes that the current rehab of the Brookdale will draw in a younger generation, as they bring back many original features and even add some new, over-the-top designs. “Clifford Clinton always described it as a forest oasis in the urban jungle,” says Meieran. “We’re going to expand that to include parts of the Pacific Seas and other thematic elements, but I would say it’s an urban fantasy night-life spot and restaurant. The idea is to reinvent and reinvigorate the cafeteria.”
As for the menu, Meieran calls it a “contemporized version of classic cafeteria fare.” In addition to standard comfort foods like macaroni and cheese, Meieran wants the broad nature of his cafeteria’s menu to reflect the eclectic cultures that make up Los Angeles, ranging from Chinese food to Mexican dishes. That should go over well with the Brookdale’s longtime customers, as Don remembers enchiladas being a top seller, moving more than 1,000 orders during an average lunchtime.
Despite several delays, Meieran says the new Brookdale, in some form, will be open within six months. “Almost daily,” he adds, “I am blown away by its history, by its infrastructure, by these weird, little, quirky things, or by the people that come out of the woodwork and say, ‘I went with my grandmother in the ’40s.’ When I first got involved with the restaurant, there was a guy who was 101 and had been at the opening of Clifton’s in the ’30s, and he was still coming in once a month, always walking in on his own power. The barrage of stories is astonishing.”
(*Special thanks to Chris Jepsen, Jesse Monsour, and J. Eric Lynxwiler for the use of their images.*)
I wanted to know if there are any records or photos of the chefs that worked at the Clifton’s on Broadway in DTLA. I was told that my Great Grandfather Thomas Sanchez was one of the long time chefs that worked there in the ’40s. Looking forward to the new Grand Opening.
Frank
Hi Frank,
The LA Historical Society recommended checking out the UCLA Special Collections Library or the Historical Society of Southern California. I didn’t come across many images of the Clifton’s staff on the web, so you might want to try calling those organizations to start with…
Cheers,
Hunter
………Great article, Clifton’s must have inspired many restaurants during his time.
Most shocking is to see how hard he fought corruption, unlike the clintons of today.
Looking forward to Clifton’s returning to the #DTLA landscape with all the food “upfront and personal” for a new generation! Cheers!
Very good to learn of both Clifton’s entrepreneurship and his generosity in passing his good fortune on to feed those who could not afford. Very admirable to see his returning some of what he earned from the people to the people and doing his best to help through public service as well. One of the Angels of Los Angeles. Well done, Mr. Oatman-Stanford!
I’ve been excitedly awaiting the reopening! Is there any plan for publishing a cookbook from the old recipes? How about republishing the gorgeous old postcards, or publishing new ones?
I have eaten here many times starting in the 1940’s. (Maybe before that but the 40’s is what I remember) It was always a must when we took the 5 car from Inglewood to LA. Shopping then to Clifton’s. The food was wonderful and we always ate more than we should. I didn’t know about the other Clifton’s but for me there will only be ONE Clifton’s and that is the one in downtown LA. What great memories.
This is wonderful. I’ve got to figure a way to work it into my vintage novel. I love this and would love to go there. The terrazzo still looks great. I will share on my FB and twitter! Wonderful and GREAT site!
~ Tam Francis ~
http://www.girlinthejitterbugdress.com
Great article Hunter. Here’s a part of the Clifton’s story you may not have known: http://www.laweekly.com/publicspectacle/2013/06/20/my-mother-was-the-mistress-of-the-owner-of-cliftons-cafeteria
Best of luck–cannot wait to patronize the new iteration of Clifton’s. This story was great. It reminded me of another cafeteria/city institution, Sholl’s in Washington DC. Sholl’s was not an imaginative place to eat, but its owners were as philanthropically- and community-minded as the Clintons (minus car bombs!).
Thanks for the link, Ray. Wow – what a story! :)
loved this place as a kid…both of them!
went to the one in West Covina and the one downtown (late ’60s) – it is so true – the whole staff was warm, kind and caring despite the huge crowds they catered to – they always gave you a personal welcome – here’s hoping the new Cliftons will also be open at lunchtime and not just as a nightclub?
My grandmother took me there on the Red Car from Pasadena. I don’t know the year, but it was during the war & there was no sugar on the table. I think I remember a waterfall inside. I was about 5 yrs. old.
Great article, but nothing about the role of Mrs. Clinton in the kitchen — her recipes, her involvement in the kitchen even as she aged.
Great to see these articles….so good to know that there are always good, helpful people in all of our generations,,,,Thanks
I can’t wait till you open so we can go back. I have been going here since I was a little girl, my grandmother and I had lunch her once a week I can’t wait to visit when it opens. I introduced my husband to this place when we got together and he loves the place as well. My family will be there when you open.
I have a rose cut glass window that came out of the Clifton’s restaurant in West Covina. I am trying to get as much information on it as possible, i.e. circa, designer, etc. Any help would be appreciated. Photo is avail to email if needed.
Thank you
Big Basin Redwoods is the first California state park, not a national park. And, while the Clinton family might have some solid civil rights credentials, the SF Clinton’s Cafe mentioned was the site of a gay revolt of sorts, 3 years before the more famous Stonewall Riot in 1969, now considered the touchstone of the gay rights movement.
When IS this place going to reopen? Has the scaffolding come down from the facade—is it, at least, fully restored? Can’t wait!
I’m looking for the name of a cafeteria that was in Santa Monica, CA around 1980-85. It was located on Ocean Avenue, around California street, overlooking the Palisades and the beach. I think it was a French firm (since I saw a duplicate once in Paris). It had patio dining.
Any thoughts on the name?
A couple of years before he died, my father-in-law wanted all of us to go for a meal at Clifton’s; despite having grown up in Pasadena, he’d never been there because his mother was a snob who believed that poor people were simply shiftless and lazy, and was outraged that this place was handing out free meals to “bums”! But his daughter and I had tried it and loved it, so we loaded Mom and Dad into the car one Saturday and went down there.
If the quality of the food had degenerated as the article suggests, we found no proof of that. My plate of braised short ribs with real mashed (NOT whipped) potatoes and green beans was as good as I’d ever eaten, and everyone else was pleased as well. Nobody was happier than Dad, though – he always paid for our communal meals, but was still (let’s say) careful with his money, and the smile on his face when he got past the cash register was wonderful to behold!
I was thrilled to hear that Clifton’s Brookdale will be reopened. Many years back I’d heard that it was closed and was so disappointed that I would never be able to go back there, or to introduce my children and grandchildren to this wonderful cafeteria. Apparently I was misinformed, and it was actually closed much more recently than I thought. Is there any current info on an opening date? I can’t wait!
I’m wondering if a “collection of recipes over the decades” cookbook might become available? I would stand in line all night long to purchase one!! Clifton’s Cafeteria is a huge wonderful childhood memory for me (1950s) as it meant my mom, grandma and sister were downtown shopping. The really best part was the star on my receipt, which allowed me to pick a toy from the Treasure Chest. Please consider publishing a collection of recipes — I would so love to have one. Thanks
While at UCLA in the very late 1950s, I worked in a Beverly Hills art store (Duncan Vail) with a young guy named Bill Tangeman whose father was the organist at the Brookdale. While he played, the caged canaries sang! I hope the canaries will be back.
As a child, we’d go to a Clifton’s after the “Young People’s Concerts” at the old Philharmonic Hall across the street from the north side of Pershing Square. I particularly loved the “Pacific” one with the thatched roof cabanas and rain and thunderstorms every twenty minutes!
And what’s not to like about a fountain that spouts limeade? I also remember mint water (?) coming out of a grotto wall at one of them.
It was the 1940s and I was very young!
I am looking for the old Mac n Cheese recipe. That one was a family favorite. Is there a site with recipes? You stated you had all the recipes. Any chance you would share that with me via email?
Are there pictures of the Garden of Gethsemane? I remember being there as a child.
Would love that wonderful enchilada recipe. I remember eating there when I was 16, working at Kay Jewelers. There was a May Company as well at Eastland Mall. What great memories. I would absolutely love to make those. I am now 65. It’s on my bucket list.
Thanks a Million, if you can send it
I would love the enchilada recipe. Have they ever had a book made of the old recipes? I am looking for a recipe from the ’40s-’50s since most recipes of today are basically fast food type or in the microwave. I am 55 but like to cook the old-fashioned way.
| true | true | true |
On a decrepit block of Broadway in downtown Los Angeles, hidden behind a dilapidated, aging façade, lies the ghost of a palatial dining hall filled with...
|
2024-10-12 00:00:00
|
2014-02-13 00:00:00
|
/uploads/2014/02/brookdale-exterior%2B400x355.jpg
|
article
|
collectorsweekly.com
|
Collectors Weekly
| null | null |
3,361,400 |
http://wbond.net/sublime_packages/community
|
Package Control
| null |
The Sublime Text package manager that makes it exceedingly simple to find, install and keep packages up-to-date.
## Trending
A recent, relative, increase in installs
## New
Just added to Package Control
## Popular
Randomly selected from the top 100
## Labels
Labels with the biggest selection
- language syntax 718
- snippets 503
- color scheme 412
- linting 318
- auto-complete 259
- theme 197
- formatting 164
- text manipulation 124
- javascript 113
- build system 86
| true | true | true | null |
2024-10-12 00:00:00
|
2010-01-01 00:00:00
| null | null | null | null | null | null |
35,525,170 |
https://aaronson.org/blog/whats-the-name-of-this-university
|
What’s the Name of This University?
|
Adam Aaronson
|
# What's the Name of This University?
Amid the cornfields of central eastern Illinois lies a public land-grant research university. What’s it called? For most universities, this question is simple, but in this case, its answer has perplexed students, alumni, and Wikipedia editors for decades.
If you ask the university’s marketing office, their answer is perfectly clear: **University of Illinois Urbana-Champaign**. If you came here for a simple answer, there you go, but buckle up, because there’s a lot more to the story.
To truly understand the nuances of Illinois’s flagship university’s name and why people are so confused about it, we have to take a journey through 156 years of geopolitics, branding, and grammar.
## Land of Lincoln
The year was 1867, and Illinois needed a new school. When Abraham Lincoln signed the Morrill Land-Grant Act five years earlier, the federal government granted every state a piece of land to establish a federally endowed university, and each state got to choose where to put it. The states also kicked out lots of Indigenous people in the process, which the universities occasionally acknowledge to this day.
After a bidding war, the humble town of Urbana won Illinois’s jackpot, and in 1867, a new land-grant university was born: **Illinois Industrial University**. It was founded in Urbana by academic warhorse John Milton Gregory, who was more of a liberal arts guy himself but called the university “industrial” to appease industry-obsessed lawmakers.
Gregory served as president of the university for 13 years until he tossed his papers into the air and resigned in 1880. Soon after, the university was beginning to realize it wasn’t just “industrial,” with burgeoning programs in agriculture, engineering, and Gregory’s favorite liberal arts. So in 1881, a year after Gregory’s resignation, students voted 250–20 to ditch the word “industrial” in favor of a new name. It took four years, but in 1885, the university finally changed its name to the more holistic **University of Illinois**, a name that stuck for a while.
John Milton Gregory died in 1898 and was buried next to Altgeld Hall on the university’s Main Quad. Legend has it, Gregory’s dying wish was to leave a modest legacy and have nothing named after him. So he’d be thrilled to know the university’s Department of History is now housed at Gregory Hall, which is a quick walk away from both Gregory Street and Gregory Drive.
## A tale of two cities
It was the turn of the 20th century, and the University of Illinois was expanding. It was already leaking into Champaign, Urbana’s larger neighbor to the west, but it was time to go north.
In 1896, the Chicago College of Pharmacy joined forces with the university, officially becoming the School of Pharmacy of the University of Illinois. Over the next couple decades, the University of Illinois family also gained a College of Medicine and a College of Dentistry, both up in the Windy City.
Sometime around 1905, letters and publications from University of Illinois administrators gradually started including “Urbana” with the university’s name, probably to distinguish the university’s main campus from its growing medical presence in Chicago. This riled up citizens and business owners of Champaign, who wanted their name on the university that spilled into their city. Champaignians published multiple op-eds in local newspapers arguing that Champaign and Urbana should split the bill. Urbanans rightly pointed out that the bulk of the university was in Urbana, including its administrative offices (and thus the university’s mailing address).
The Urbana vs. Champaign debate heated up, and in September 1906, the university’s Board of Trustees held an actual meeting to resolve it. What came out of this meeting was the name “Urbana-Champaign”—with Urbana first and foremost, like the university itself. Soon after, “Urbana-Champaign” began appearing on official university correspondence, and over the course of the next few decades, it became a commonplace way to refer to the campus. But it wasn’t until 1969 that the university officially codified its new name, the **University of Illinois at Urbana-Champaign**.
*The Illini Union, as seen from the Main Quad (in Urbana, not Champaign)*
If you solved my latest crossword, or if you’re from the area, or if you know too much, you’d know that the metro area including the twin cities of Urbana and Champaign is called Champaign–Urbana (or C‑U, or Chambana, or Shampoo–Banana), not Urbana–Champaign. That’s because Champaign has pretty much always been more populous than Urbana, and metro areas are conventionally named with the more populous cities first, like Dallas–Fort Worth or New York–Newark–Jersey City.
So we have a university campus called Urbana-Champaign, in Champaign–Urbana. And you’re just gonna have to deal with it.
## Drawing the line
You might have noticed another difference between Urbana-Champaign and Champaign–Urbana: Urbana-Champaign is written with a hyphen (-), while Champaign–Urbana is written with the slightly longer en dash (–). This isn’t a mistake, because if it was, I wouldn’t be pointing it out. So what’s going on here?
If you’re big into style guides, you might know that hyphens generally join two parts of one word or name (like post-punk or Anya Taylor-Joy), whereas en dashes join two associated but distinct things (like red–green colorblindness or the Spanish–American War). You can remember that hyphens are shorter, so they connect things more closely than en dashes do.
As far as I could tell, the campus name Urbana-Champaign has always used a hyphen in an official capacity, possibly because Urbana and Champaign are two continuous parts of one campus, or possibly because hyphens are easier to type than en dashes. However, this didn’t stop the Wikipedia article for the school from being titled **University of Illinois at Urbana–Champaign** (with an en dash), after a zealous editor decided it adhered to Wikipedia’s style guide in 2010. The article’s title stayed this way until 2021, when the hyphen triumphantly returned after a lengthy talk page discussion. As the user JustinMal1 put it, “In many ways, the campus is much like a marital union, and marital unions are hyphenated, not en dashed.”
The metro area Champaign–Urbana, on the other hand, takes an en dash, since Champaign and Urbana are two distinct entities that just so happen to be the metro’s two largest cities. Or if you’re typing on a typewriter, you’ll just have to settle for a hyphen.
## Avengers assemble
Remember those medical schools in Chicago? In 1961, they officially became a new campus, called the University of Illinois at the Medical Center. Then in 1965, another Chicago campus was established, named the University of Illinois at Chicago Circle after a nearby freeway interchange. In 1982, these two Chicago campuses consolidated into the **University of Illinois at Chicago** (UIC), a proud member of the University of Illinois family.
And then there was little Sangamon State University, Illinois’s smallest state university in its capital city Springfield, which lies in Sangamon County. In 1995, Sangamon State University was incorporated into the University of Illinois family and renamed the **University of Illinois at Springfield** (UIS).
Since then, these three campuses—Urbana-Champaign, Chicago, and Springfield—have comprised the **University of Illinois System**, whose website is uillinois.edu, not to be confused with Urbana-Champaign’s illinois.edu, and whose legal name is the University of Illinois, not to be confused with the university formerly known as the University of Illinois.
## Error in the system
The University of Illinois System was a well-oiled machine until 2009, when Springfield went rogue and axed the “at” in their name, becoming **University of Illinois Springfield**. The inconsistency remained for 11 years, with the other universities still “at Chicago” and “at Urbana-Champaign.”
Then something in 2020 gave Chicago and Urbana-Champaign some time for self-reflection. That fall, they finally followed Springfield’s lead, quietly removing the “at” and rebranding to the **University of Illinois Chicago** and the **University of Illinois Urbana-Champaign**. But not everyone got the memo.
*Excerpt from the University of Illinois System style guide*
It wasn’t until spring 2021 that the university’s Wikipedia article was moved to remove the “at,” as a result of the same talk page discussion that restored the hyphen. Even still, the press isn’t on the same page about the name of the University of Illinois Urbana-Champaign. You’ll still find the “at” in *The New York Times*, the *Chicago Tribune*, and even style guide goliath *AP*. If you even ask a current student at the university, chances are they won’t know the “at” was removed, since the university never formally announced it.
Well, consider this your announcement. There is no “at” in the University of Illinois Urbana-Champaign.
## What should I call it?
In casual conversation, reciting the 14-syllable University of Illinois Urbana-Champaign every time you refer to the school will get tiring. But lucky us, the university officially recognizes four nicknames for use on “second and subsequent references.” Let’s break them down:
### University of Illinois
This former name of the university still sticks around as an abbreviation of sorts, but the university has mixed feelings about it. Since it’s also the official name of the University of Illinois System, the Office of Public Affairs at the Urbana-Champaign campus declared as of 2018, “Do not use the name ‘University of Illinois’ to refer to this campus.”
But people do anyway. In fact, if you search “University of Illinois” on Wikipedia, it redirects to the Urbana-Champaign campus, not the system.
You might be thinking, aren’t there three Universities of Illinois? What do the Chicago and Springfield campuses think of this? Well, the Urbana-Champaign campus is the O.G., the flagship, and the system’s largest campus to this day, with about 56,000 students compared to UIC’s 34,000 and UIS’s 4,000. As an anonymous redditor posted last year, “nobody on this planet refers to UIC or UIS as ‘The University of Illinois,’” so if you trust that comment’s 50-something upvotes, I don’t think anybody’s feelings are being hurt. But “University of Illinois” is still kind of a mouthful.
### U of I
This is probably the most common way to refer to the school if you’re in Illinois, talking to other people from Illinois. In the Chicago suburbs, where I’m from, it’s what everyone calls the school. It’s also an officially sanctioned shorthand for campus tour guides to use, and the Office of Public Affairs permits it “for in-state and alumni audiences.”
*The mammoth statue at the university’s Natural History Building. I just think he’s neat.*
Only problem is, if you say “U of I” anywhere outside of Illinois, you’ll be met with confused looks. As of now, Wikipedia lists seven different universities on the “U of I” disambiguation page, including neighboring state university and fellow Big Ten member University of Iowa. Not great! But luckily, there’s another option, and it’s the same number of letters.
### UIUC
Every good school has an acronym. Chicago has UIC, Springfield has UIS, and Urbana-Champaign has UIUC.
The acronym UIUC has been in use to some degree since the ’70s, especially by professors and nerds, and especially on the internet. Registered in 1985, uiuc.edu was one of the oldest .edu domains, serving as the university’s website and email domain until it moved to illinois.edu in a 2008 rebrand. UIUC is also the name of the university’s subreddit, which was at one point the largest university subreddit in the country (curse you r/berkeley).
But the acronym isn’t without its downsides. The university doesn’t really use it in any official marketing material, especially since the 2008 rebrand. The acronym is better suited for text than for speech, with the muddle of “you-eye-you-see” often indistinguishable from UIC when spoken aloud. Relatively new in the lifespan of the university, the acronym also leaves a bit of a generational gap. My grandparents, who attended the university in the 1950s, never called it UIUC, and my mom, who has lived in Illinois all her life, had never heard UIUC until I was applying there in high school.
At the end of the day, kids these days still call it UIUC, and if you say it to someone who has been in school in the past decade, they’ll probably know what you’re talking about. And it’s about time someone puts it in a crossword.
*Appearances (or lack thereof) of UIUC in major crossword outlets, per Crossword Tracker*
### Illinois
The university has leaned into this nickname since 2008, and for good reason. It’s iconic, it works in both text and speech, and it’s unambiguous (assuming you’re not talking about UIC, UIS, Illinois State, Illinois Tech, or Illinois College). As the flagship state university of Illinois, it’s metonymous with the state itself, like how Michigan and Minnesota also refer to their respective flagship schools.
The nickname “Illinois” will be especially recognizable to anyone who’s ever looked at the Big Ten standings or a March Madness bracket, since ESPN has no time to rattle off the university’s full name. It’s all over T-shirts and hoodies, it’s in every student’s email address, and it’s plastered at the top of the university’s website.
*The Alma Mater statue, as pictured on illinois.edu. Not to be confused with the university’s alma mater song “Hail to the Orange,” which aptly ends “Victory, Illinois, Varsity.”*
This school has gone through a lot of names in its 156 years, from **Illinois Industrial University** to the **University of Illinois** to the **University of Illinois at Urbana-Champaign** to the **University of Illinois Urbana-Champaign**. But today, if someone asks me where I go to school, my answer will be simple, and it’s been in the name all along:
**Illinois**.
For more adjacent to the University of Illinois Urbana-Champaign, check out these Wikipedia articles I recently wrote on Pinto Bean and Unofficial!
***
### Sources and further reading
- A Brief History of the University of Illinois, University of Illinois Archives (1970)
- History of the name of the University of Illinois at Urbana-Champaign, University of Illinois Archives (2011)
- Urbana-Champaign Campus Designation, Campus Administrative Manual (2018)
- Our name, Office of Strategic Marketing and Branding
- History of the Universities, University of Illinois System
- Writing Style Guide, University of Illinois System
- History, University of Illinois Chicago
- The power of a name, University of Illinois College of Liberal Arts & Sciences (2021)
| true | true | true |
Amid the cornfields of central eastern Illinois lies a public land-grant research university. What’s it called? For most universities, this question is simple, but in this case, its answer has perplexed students, alumni, and Wikipedia editors for decades.
|
2024-10-12 00:00:00
|
2023-04-10 00:00:00
|
article
|
aaronson.org
|
Adam Aaronson
| null | null |
|
7,062,627 |
http://www.wired.com/underwire/2012/02/headless-chicken-solution/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,343,924 |
https://github.com/gokr/spry
|
GitHub - gokr/spry: A Smalltalk and Rebol inspired language implemented as an AST interpreter in Nim
|
Gokr
|
This is the Spry language, inspired by Rebol/Smalltalk/Self/Forth and Nim. Characteristics:
- A dynamically typed minimalistic language with a free form syntax similar to Rebol/Forth
- Parser produces an AST which in turn is interpreted by the interpreter
- Functional in nature and has closures and non local return
- Homoiconic which means code and data has the same form
- Meant to be 100% live and support interactive development
Here are my articles about Spry.

Spry may be interesting to you if:
- You find ideas in Rebol/Ren/Red interesting but would like something different :)
- You love Smalltalk but can imagine a simplified similar language and want to play with multicore or small platforms and more easily use the C/C++/Nim eco system
- You love Nim but want to have a dynamic language running inside Nim
- You find Spry cool and would like to port it to another host language
- ...or you just love freaky programming language ideas!
Spry only depends on Nim, so it should work fine on Windows, OSX, Linux etc, but for the moment **I only use Linux for Spry development**. The shell scripts will probably be rewritten in nimscript and thus everything can be fully cross platform - feel free to help me with that!
The following commands can get you running inside LXC very quickly, tested on Ubuntu 19.04:
Start a Ubuntu 22.04 (Jammy Jellyfish, LTS) LXC machine and login to it:
```
lxc launch ubuntu:22.04 spry
lxc exec spry -- su --login ubuntu
```
Install dependencies, Nim and eventually Spry itself. Note that this is not a minimal Spry but one that includes LMDB, GUI, Python wrapper etc:
```
sudo apt update
sudo apt install gcc pkg-config libgtk-3-dev liblmdb0 libpython2.7
curl https://nim-lang.org/choosenim/init.sh -sSf | sh
export PATH=/home/ubuntu/.nimble/bin:$PATH
echo "export PATH=/home/ubuntu/.nimble/bin:$PATH" >> .profile
nimble refresh
nimble install spry
```
Then make sure Spry works:
```
ubuntu@spry:~$ spry --version
Spry 0.9.4
ubuntu@spry:~$ spry -e "echo (3 + 4)"
7
ubuntu@spry:~$ ispry
Welcome to interactive Spry!
An empty line will evaluate previous lines, so hit enter twice.
>>> 3 + 4
>>>
7
```
Thales Macedo Garitezi also made a Docker image for testing out the Spry REPL (ispry):
- Github: https://github.com/thalesmg/docker-spry
- Docker Hub: https://hub.docker.com/r/thalesmg/spry/
You can run it like this (with or without sudo):
```
docker run --rm -it thalesmg/spry
```
...and that should get you into the REPL.
The following should work on a Ubuntu/Debian, adapt accordingly for other distros.
1. Get GCC and Nim! I recommend using choosenim or just following the official instructions. Using choosenim it's as simple as:
   `sudo apt install gcc pkg-config libgtk-3-dev liblmdb0 libpython2.7`
   `curl https://nim-lang.org/choosenim/init.sh -sSf | sh`

2. Clone this repo. Then run `nimble install` in it. That should hopefully end up with `spry` and `ispry` built and in your path. You can also just run `nimble install spry` but then you have no access to examples etc in this git repository.

3. Try with say `spry --version` or `spry -e "echo (3 + 4)"`. And you can also try the REPL with `ispry`.
So now that you have installed Spry, you can proceed to play with the examples in the `examples`
directory, see README in there for details.
The following should work on OSX.
1. Install Homebrew unless you already have it.

2. Get Nim! I recommend using choosenim or just following the official instructions. Using choosenim it's as simple as:
   `curl https://nim-lang.org/choosenim/init.sh -sSf | sh`
   You can also use brew (although not sure how good it follows Nim releases):
   `brew install nim`

3. Install extra dependencies, at the moment LMDB is one:
   `brew install lmdb`

4. Clone this repo. Then run `nimble install` in it. That should hopefully end up with `spry` and `ispry` built and in your path. You can also just run `nimble install spry` but then you have no access to examples etc in this git repository.

5. Try with say `spry --version` or `spry -e "echo (3 + 4)"`. And you can also try the REPL with `ispry`.
So now that you have installed Spry, you can proceed to play with the examples in the `examples`
directory, see README in there for details.
**NOT UPDATED INSTRUCTIONS**
You can "cheat" and try out Spry using a zip with binaries.
-
First you want to have git installed, and ideally
**with the unix utilities**included so that some of the basic unix commands work on the Windows Command prompt. -
Install Nim using binaries. Just follow the instructions and make sure to answer yes to include the directories in the PATH as
**finish.exe**asks you if you want. NOTE: Currently using Choosenim on Windows will produce a 32 bit Nim and Spry, even on a 64 bit Windows, so I don't recommend Choosenim on Windows just yet. -
There are no dependencies other than some dlls that are included in the Nim bin directory.
-
Clone this repo. Then run
`nimble install`
in it. That should hopefully end up with`spry`
and`ispry`
built and in your path. You can also just run`nimble install spry`
but then you have no access to examples etc in this git repository. -
Try with say
`spry --version`
or`spry -e "echo (3 + 4)"`
. And you can also try the REPL with`ispry`
.
So now that you have installed Spry, you can proceed to play with the examples in the `examples`
directory, see README in there for details.
-
If you want to build the interpreter manually, go into
`src`
and run`nim c -d:release spry`
to build the Spry interpreter, or`nim c -d:release ispry`
for the REPL. It should produce a single binary each. That's the standard invocation to build a nim program in release mode. -
Then go into examples and look at
`hello.sy`
as the next mandatory step :). Its simply Spry source being run by the`spry`
executable interpreter using the "shebang" trick. -
Then you can cd into bench and run
`bench.sh`
which starts by building the standalone Spry interpreter and then use it to run`factorial.sy`
which is a program that calculates`factorial 12`
100k times. It takes 2.7 seconds on my laptop which is quite slow, about 6x slower than Rebol3, 20x slower than Python and 100x slower than Pharo Smalltalk. :) You can run`compare.sh`
to see yourself. With a bit of work removing unneeded silly stuff in the interpreter it should be reasonable to reach Rebol3 in performance. -
Ok, so at this point
**you want to learn a bit more how Spry works**. Not much material around yet since its evolving but you can:
- On Linux or OSX you should be able to build a trivial "IDE", see below.
- Look at
`tests/*.nim`
which is a series of low level Spry code snippets and expected output. - Look at the various
`examples`
- Try running
`tutorial1.sy`
in tutorials, which is just showing we can do interactive tutorials with the repl - Try out the interactive REPL by running
`ispry`
- And of course, read the source code
`spryvm.nim`
. Its hopefully not that messy.
There is also a small experiment of a Spry VM module (src/modules/spryrawui.nim) for making GUI stuff using the excellent libui project. A small trivial little "IDE" written in Spry itself exists and you can build it on Linux or OSX.
**OSX:**Just run`./makeideosx.sh`
in`src`
and if you are lucky that produces a binary file called`ideosx`
. Try running it with`./ideosx`
.**Linux:**Just run`./makeide.sh`
in`src`
and if you are lucky that produces a binary file called`ide`
. Try running it with`./ide`
.
Spry started out as a Rebol inspired interpreter - since I felt the homoiconicity of Rebol was interesting to experiment with. Lispish, but not insanely filled with parenthesis :)
Then I started sliding towards Smalltalk when I added both support for infix arguments (so that a "receiver" can be on the left, similar to Nim), and later even keyword syntax for functions taking multiple arguments. I also changed func definitions to be more light weight (compared to Rebol) like Smalltalk blocks.
Spry is meant to mix with Nim. The idea is to use Nim for heavy lifting and binding with the outside world, and then let Spry be a 100% live dynamically typed language inside Nim. Spry will stay a very small language.
And oh, this is just for fun and I am not a good Nim hacker nor a language implementor!
| true | true | true |
A Smalltalk and Rebol inspired language implemented as an AST interpreter in Nim - gokr/spry
|
2024-10-12 00:00:00
|
2015-06-03 00:00:00
|
https://opengraph.githubassets.com/8ce72e9a455956f4a6d0095ecb29ebd56ca8ffb3fb2844b612caa710148eea19/gokr/spry
|
object
|
github.com
|
GitHub
| null | null |
770,248 |
http://journal.adityamukherjee.com/essay/value-of-apples/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
28,127,978 |
https://www.wsj.com/articles/wework-to-run-co-working-spaces-in-some-saks-fifth-avenue-stores-11628596800
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,641,314 |
http://www.cbc.ca/radio/thecurrent/the-current-for-june-1-2015-1.3095021/elon-musk-out-to-change-the-way-we-live-on-earth-and-in-space-1.3095071
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
14,068,415 |
https://github.com/mohak1712/Insta-Chat
|
GitHub - mohak1712/Insta-Chat: InstaChat offers a new way to read messages of your favourite messengers. It overlays every other app and you can reply from anywhere you want.
|
Mohak
|
InstaChat offers a revolutionary way to read messages of your favourite messengers. It overlays every other app and you can reply from anywhere you want. Sometimes you did not want to quit your current app but also need to read some important messages or reply to them. Thats the point where InstaChat will help you. Use floating chatHeads bubbles like in Facebook for Whatsapp, Telegram and others!
**Read messages from anywhere****Reply to messages from anywhere****Separate notifications for every contact****Chat heads notification to write messages at any time**
**Whatsapp**
**Hangouts****Skype****Telegram****Line****Slack**
**READ_CONTACTS****ACTION_NOTIFICATION_LISTENER_SETTINGS****SYSTEM_ALERT_WINDOW****VIBRATE**
| true | true | true |
InstaChat offers a new way to read messages of your favourite messengers. It overlays every other app and you can reply from anywhere you want. - mohak1712/Insta-Chat
|
2024-10-12 00:00:00
|
2017-03-02 00:00:00
|
https://opengraph.githubassets.com/f217fdb46f4d123f5cbabff7ae75ff3abda39c68bdd3bbca5c4f636c61a7fa46/mohak1712/Insta-Chat
|
object
|
github.com
|
GitHub
| null | null |
24,930,990 |
http://jonathanstray.com/to-apply-ai-for-good-think-form-extraction
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,975,620 |
http://www.guardian.co.uk/world/2013/jul/02/ecuador-rafael-correa-snowden-mistake
|
Ecuador says it blundered over Snowden travel document
|
Rory Carroll
|
Ecuador is not considering Edward Snowden's asylum request and never intended to facilitate his flight from Hong Kong, president Rafael Correa said, as the whistleblower made a personal plea to Quito for his case to be heard.
Snowden was Russia's responsibility and would have to reach Ecuadorean territory before the country would consider any asylum request, the president said in an interview with the Guardian on Monday.
"Are we responsible for getting him to Ecuador? It's not logical. The country that has to give him a safe conduct document is Russia."
The president, speaking at the presidential palace in Quito, said his government did not intentionally help Snowden travel from Hong Kong to Moscow with a temporary travel pass. The Ecuadorean consul in London acted without authority from Quito, he said.
Asked if he thought the former NSA contractor would ever make it to Quito, he replied: "Mr Snowden's situation is very complicated, but in this moment he is in Russian territory and these are decisions for the Russian authorities."
On whether Correa would like to meet him, the president said: "Not particularly. It's a complicated situation. Strictly speaking, Mr Snowden spied for some time."
The comments contrasted with expressions of gratitude the 30-year-old fugitive issued hours later, before Correa's views had been published.
"I must express my deep respect for your principles and sincere thanks for your government's action in considering my request for political asylum," Snowden said, according to a letter written in Spanish and obtained by the Press Association news agency, based in London.
"There are few world leaders who would risk standing for the human rights of an individual against the most powerful government on earth, and the bravery of Ecuador and its people is an example to the world."
Snowden compared the silence of governments afraid of US retaliation with Ecuador's help in his flight to Moscow on 22 June. A temporary Ecuadorean travel document substituted for his cancelled US passport.
"The decisive action of your consul in London, Fidel Narvaez, guaranteed my rights would be protected upon departing Hong Kong – I could never have risked travel without that. Now, as a result, and through the continued support of your government, I remain free and able to publish information that serves the public interest."
The letter will boost Ecuador's reputation with Snowden's supporters but sat awkwardly with the president's attempt to distance Quito from the saga. Correa said Quito respected the right of asylum and appreciated Snowden exposing the extent of US spying, but would not consider an asylum request unless he made it to an Ecuadorean embassy or the country itself – a remote possibility while he remains reportedly marooned in Sheremetyevo airport's transit lounge. "He must be on Ecuadorean territory," the president said.
Earlier on Monday, Moscow confirmed that Snowden had applied for asylum in Russia. The Los Angeles Times said he had made similar applications to a total of 15 countries. In another statement, issued through the campaigning website WikiLeaks, Snowden attacked President Obama for putting pressure behind the scenes on countries to which he had petitioned for asylum.
In his Guardian interview, Correa said his government had not, and would not, give Snowden an authorised travel document to extract himself from Moscow airport. "The right of asylum request is one thing but helping someone travel from one country to another — Ecuador has never done this. "
He said the temporary travel document issued by his London consul on 22 June – and publicly disowned five days later – was issued without official authorisation. "There is a mistake … Look, this crisis hit us in a very vulnerable moment. Our foreign minister was on a 15-day tour. He was in Vietnam. Our deputy foreign minister was in the Czech Republic. Our US ambassador was in Italy."
Narvaez and the WikiLeaks founder Julian Assange, who has sheltered at Ecuador's London embassy for the past year to escape extradition, took matters into their own hands because they feared Snowden risked capture, Correa said.
"The consul, in his desperation, probably he couldn't reach the foreign minister … and he issued a safe conduct document without validity, without authorisation, without us even knowing."
Correa said the consul was a "cultured" man who cited the example of Ecuadorean diplomats in Czechoslovakia giving Jews visas in defiance of their foreign ministry during the second world war.
"Look, he [Assange] is in the embassy, he's a friend of the consul, and he calls him at four in the morning to say they are going to capture Snowden. The [consul] is desperate – 'how are we going to save the life of this man?' – and does it.
"So I told him: OK, if you think you did the right thing, I respect your decision, but you could not give, without authorisation, that safe conduct pass. It was completely invalid, and he will have to accept the consequences."
Narvaez would be "sanctioned", the president said, without elaborating.
Some Ecuadorean diplomats have complained that Assange appeared to usurp Quito but the president said there was no rupture. "Mr Assange continues to enjoy our total respect and is under the protection of the Ecuadorean state."
Correa, a standard bearer for the left in Latin America, has joined European and other Latin Americans leaders in denouncing US espionage.
However, he softened his tone over the weekend and praised the US vice-president, Joe Biden, for a gracious phone call, saying he would consider Washington's request to refuse any asylum claim from Snowden while retaining Ecuador's sovereignty.
| true | true | true |
Ecuador's president reveals the whistleblower was granted a temporary travel card at 4am 'without authorisation or validity'
|
2024-10-12 00:00:00
|
2013-07-04 00:00:00
|
article
|
theguardian.com
|
The Guardian
| null | null |
|
29,742,638 |
https://blog.wesleyac.com/posts/golang-error-handling-long-lines
|
Go, Error Handling, and Big Text Files
| null |
I was recently trying to work with a dataset that was provided to me as a MySQL database dump. Since I'm not a masochist, I certainly wasn't going to run a MySQL database. Instead, I figured, I'd just write a quick little script to parse the database dump and write it out to a nicer format.
I planned to just grab some off-the-shelf SQL parser, write a tiny bit of hacky glue code, and be done — sqlparser-rs was the first thing I reached for, but I quickly found that its support for MySQL was not up to snuff. Abandoning sqlparser-rs, I turned to pingcap/parser, which claimed to target the MySQL dialect. The only catch: I have to deal with writing Go. But it's just a tiny script, how bad can it be?
I started with a simple script to loop through all the lines in the file:
```
package main
import (
"bufio"
"log"
"fmt"
"os"
)
func main() {
file, err := os.Open(os.Args[1])
if err != nil {
log.Fatal(err)
}
scanner := bufio.NewScanner(file)
scanner.Split(bufio.ScanLines)
lines := 0
for scanner.Scan() {
lines++
}
fmt.Println(lines)
file.Close()
}
```
Let's go ahead and run this with a test file:
```
$ go run main.go test.txt
2
```
It's got two lines. Except:
```
$ wc -l test.txt
4 ./test.txt
```
What's going on here? Well, it turns out that when a line is longer than `bufio.MaxScanTokenSize`, which defaults to 65536, the scanner will silently stop scanning. I guess you're supposed to know that:
```
if err := scanner.Err(); err != nil {
log.Fatal(err)
}
```
Is the invocation that you're supposed to use, but the compiler won't warn you, `go vet` won't warn you, `staticcheck` won't warn you. It seems like the Go philosophy is that you should simply read the docs for every function you call, think carefully about what kinds of errors can happen, and then write the code to handle them. If you forget, well, shoulda thought more carefully about it!
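For reference, here is a minimal sketch of the fixed version: it checks `scanner.Err()` after the loop and raises the line-length limit with `scanner.Buffer` (the 10 MiB cap here is an arbitrary choice, not something the library requires):

```
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	file, err := os.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close()

	scanner := bufio.NewScanner(file)
	// Allow lines up to 10 MiB instead of the 64 KiB default.
	// (The cap is arbitrary; pick whatever fits your data.)
	scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	scanner.Split(bufio.ScanLines)

	lines := 0
	for scanner.Scan() {
		lines++
	}
	// Without this check, an over-long line (or any read error)
	// silently ends the loop early.
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(lines)
}
```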
I was interested in giving Go another shot, after having some major compile-speed frustrations with Rust stemming from people being too clever with types, but I don't feel that I can take a language where it's this easy to fuck up error handling seriously.
Error handling can be subtle and nuanced in a lot of ways, but one of my baseline expectations for any modern language is that errors will not be silent: static, default-on tooling should be able to give you a list of every error you aren't handling, and ideally, just reading the source code should make unhandled errors legible without having to have knowledge of library code. I'm sad that Go seems to fail that benchmark.
Update: Someone asked me about the feasibility of writing a static check to catch this error. It seems to me that it should be quite easy, given the static analysis infrastructure that exists already for Go, but the problem is that this kind of API (which Go seems to have a lot of) doesn't express the error semantics in the types, but rather leaves it implicit in the structure of the library code — this means that a static check for this can't be generic, but instead has to specifically know about the error semantics of `bufio.Scanner`, which are only expressed in the docs. It's easy to make a static check for this exact thing, but it's essentially impossible to write a static check that will catch this entire category of error.
| true | true | true | null |
2024-10-12 00:00:00
|
2021-10-21 00:00:00
| null |
website
| null |
Wesley Aptekar-Cassels
| null | null |
27,882,901 |
https://www.foxnews.com/politics/bill-bennet-we-need-to-designate-mexican-cartels-as-foreign-terrorists-like-al-qeada
|
Bill Bennett: 'We need to designate Mexican cartels as foreign terrorists like Al Qaeda'
|
Joshua Nelson
|
After a new government report revealed U.S. drug overdose deaths hit a __new record in 2020__, former National Drug Control Policy Director Bill Bennett said Thursday that the United States needs to designate Mexican cartels as foreign terrorists.
"There is a poison that is coming across. We talked about the getaways. Do the people get away? How about the drugs that get away? " Bennett said on "America Reports." "You guys are very good at showing these huge caches of drugs that are caught and authorities seize. But there’s a lot that we don’t see, and it’s making its way in the country."
Fox News reported that an angel mom argued that the current border policies under the Biden administration allowed __China’s__ criminal "partnership" with __Mexican cartels__ to flourish.
"This partnership in combination with our current border policies have allowed this __fentanyl__ to continuously pour over the Southwest border practically unabated," Virginia Krieger, founder of Parents Against Illicit Narcotics, said Thursday on "America's Newsroom."
Krieger, who lost her daughter Tiffany Leigh Robertson to a fentanyl overdose in 2015, warned that the flow of drugs into the United States has caused a "fentanyl poisoning" crisis, wherein online sellers have targeted unaware American teenagers with fraudulent prescriptions filled with the potent synthetic opioid analgesic.
This, in addition to contaminated party drugs such as cocaine and methamphetamines, has affected an "entirely new population" – mainly teenagers – who were previously not a major sector of the substance abuse community. Krieger said that a significant portion of these teens were first-time users when they overdosed.
According to the __National Institute on Drug Abuse (NIDA)__, fentanyl is between 50 and 100 times stronger than morphine, an opiate used in hospitals to treat serious pain.
In 2020, 93,331 Americans died from drug overdoses, a 29% increase from the previous year.
Bennett suggested that the U.S. put "pressure on China" because that is where the "precursor of drugs are made from fentanyl."
"One thing we need on our own, we don’t need Mexico’s help to designate these cartels as foreign terrorist organizations just like Al Qaeda," Bennett said. "They’re killing a lot more people than Al Qaeda every day. And then we can act in that country, we don’t have to wait for them to cross the border."
"Governor Abbott has urged the president to do this," Bennett added, saying that "others need to as well."
"This is really unbelievable the numbers that we’re seeing. Again, we have to underscore that we’ve never seen anything like this," he said. "Of the 93,000 deaths, Sandra [Smith], 70,000 we think are due to fentanyl. That’s one powerful drug."
The Drug Enforcement Administration (DEA) issued a nationwide report in the summer of 2016 indicating hundreds of thousands of counterfeit prescription pills had been entering the U.S. drug market since 2014, with some of the pills containing deadly levels of fentanyl.
"The current fentanyl crisis continues to expand in size and scope across the United States," according to the __Centers for Disease Control (CDC).__
*Fox News' **Nikolas Lanum** contributed to this report.*
| true | true | true |
After U.S. drug overdose deaths hit a new record in 2020, former National Drug Control Policy Director Bill Bennett said on Thursday that the United States needs to designate Mexican cartels as foreign terrorists.
|
2024-10-12 00:00:00
|
2021-07-15 00:00:00
|
article
|
foxnews.com
|
Fox News
| null | null |
|
15,334,732 |
http://fortune.com/2017/09/25/apple-iphone-8-sales-data/
|
Apple Might Have an iPhone 8 Sales Problem
|
Don Reisinger
|
The iPhone 8’s opening weekend didn’t go nearly as well as Apple might have liked, based on new data.
By the end of its first weekend on store shelves, Apple’s (AAPL) iPhone 8 could only muster a 0.3% international market share across all iOS-based devices, according to new data from researcher Localytics. Apple’s iPhone 8 Plus was able to capture 0.4% of the iOS market, the company’s data shows.
According to Localytics, which has been analyzing iPhone adoption rates each year, the iPhone 8 is far behind the iPhone 7 after its first weekend, when last year’s Apple handset captured 1% share of the marketplace. The iPhone 6, which was released in 2015, nabbed 2% iOS market share during its first weekend, according to Localytics.
The only bright spot so far this year for Apple is with its iPhone 8 Plus, which at 0.4% market share, topped early performance for the iPhone 7 Plus and iPhone 6 Plus, which attracted 0.2% and 0.3% market share, respectively.
Apple released its iPhone 8 line on Friday. The device comes with several upgrades over last year’s iPhone 7 models, including improved performance with its A11 Bionic processor. It also has a new glass finish and is the first iPhone to support wireless charging.
There had been hints that iPhone 8 demand was lower than previous models earlier this month, when the company started offering pre-orders on the iPhone 8. Days after pre-orders were offered, some iPhone 8 models were still available. In years past, whenever Apple has offered new iPhones on pre-order, they sell out of their initial stock in minutes.
Most industry watchers say Apple’s problems aren’t necessarily a big concern for the company. Competitors aren’t stealing iPhone customers, those pundits say, the iPhone X is.
Announced alongside the iPhone 8, Apple’s iPhone X is a major upgrade over previous handsets. The iPhone X comes with a screen that nearly entirely covers its face, and supports a new facial-scanning technology Apple is calling Face ID. It also comes with many of the features users would find in the iPhone 8, including wireless charging and the A11 Bionic chip. And although its starting price is $999, customers don’t seem all that concerned, further bolstering Apple CEO Tim Cook’s contention that the iPhone X offers a good “value” for the price.
Still, the iPhone 8 could prove critical to Apple’s business, and if iPhone X supply is indeed low, as reports have suggested, Apple might be waiting a considerable time to collect on the widespread demand.
For its part, Apple hasn’t announced actual sales data. Localytics’ data is based on an analysis of 70 million iOS devices in use around the world.
| true | true | true |
The iPhone 8's first weekend looks much different than iPhone 7's.
|
2024-10-12 00:00:00
|
2017-09-25 00:00:00
|
article
|
fortune.com
|
Fortune
| null | null |
|
25,115,576 |
https://techcrunch.com/2020/11/16/strava-raises-110-million-touts-growth-rate-of-2-million-new-users-per-month-in-2020/
|
Strava raises $110 million, touts growth rate of 2 million new users per month in 2020 | TechCrunch
|
Darrell Etherington
|
Activity and fitness tracking platform Strava has raised $110 million in new funding, in a Series F round led by TCV and Sequoia, and including participation by Dragoneer group, Madrone Capital Partners, Jackson Square Ventures and Go4it Capital. The funding will be used to propel the development of new features, and expand the company’s reach to cover even more users.
Already in 2020, Strava has seen significant growth. The company claims that it has added more than 2 million new “athletes” (how Strava refers to its users) per month in 2020. The company positions its activity tracking as focused on the community and networking aspects of the app and service, with features like virtual competitions and community goal-setting as representative of that approach.
Strava has 70 million members, according to the company, with presence in 195 countries globally. The company debuted a new Strava Metro service earlier this year, leveraging the data it collects from its users in an aggregated and anonymized way to provide city planners and transportation managers with valuable data about how people get around their cities and communities — all free for these governments and public agencies to use, once they’re approved for access by Strava.
The company’s uptick in new user adds in 2020 is likely due at least in part to COVID-19, which saw a general increase in the number of people pursuing outdoor activities, including cycling and running, particularly at the beginning of the pandemic when more aggressive lockdown measures were being put in place. As we see a likely return of many of those more aggressive measures due to surges in positive cases globally, gym closures could provoke even more interest in outdoor activity — though winter’s effect on that appetite among users in colder climates will be interesting to watch.
Strava’s app is available free on iOS and Android, with in-app purchases available for premium subscription features.
| true | true | true |
Activity and fitness tracking platform Strava has raised $110 million in new funding, in a Series F round led by TCV and Sequoia, and including
|
2024-10-12 00:00:00
|
2020-11-16 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
33,720,129 |
https://trpc.io/blog/announcing-trpc-10
|
Announcing tRPC v10 | tRPC
|
Alex; KATT
|
# Announcing tRPC v10
tRPC provides a great developer experience by enforcing tight, full-stack type bindings through the power of TypeScript. No API contract drift, no code generation.
Since our last major version release in August 2021, the tRPC community has seen substantial growth:
- We now have over 15,000 stars on GitHub
- A Discord community with over 2,000 members
- 100k+ weekly npm downloads
- Nearly 200 contributors
- A growing ecosystem of extensions, examples, and content
**Today, we're launching tRPC v10**. We're excited to share that v10 is already being used in production by many large TypeScript projects. This official release announces general availability to the wider community.
For new projects, you can get up and running with an example application to learn about tRPC v10. For projects that were already enjoying tRPC v9, visit the v10 migration guide.
## Overview of changes
v10 is tRPC's biggest release ever. This is the first time we've made any fundamental changes to the structure of tRPC and we believe these changes unlock new possibilities for fast-moving teams working on cutting edge applications.
### Improved developer experience
tRPC v10 embraces your IDE. We want to unify your types - but we've also brought together your frontend, backend, and editing experience in this version.
With v10, you can:
- Use *"Go to Definition"* to jump straight from your frontend consumer to your backend procedure
- Use *"Rename Symbol"* to give a new name to an input argument or procedure across your whole application
- Infer types more easily for when you'd like to use your tRPC types in your application manually
### Powerful backend framework
In v10, we've revisited the syntax for how you define your backend procedures, opening up more opportunities to bring in your desired logic in healthy ways. This version of tRPC features:
- Reusable middlewares with Context Extension
- Chainable & reusable procedures with the ability to use multiple input parsers
- Flexible error handling with custom error formatting
- Procedure metadata to decorate your procedures with more information
### Massively improved TypeScript performance
TypeScript enables developers to do incredible things - but it can come at a cost. Many of the techniques we use to keep your types tight are heavy work on the TypeScript compiler. We heard community feedback that the largest applications using tRPC v9 were beginning to suffer from decreased performance in developers' IDEs as a result of this compiler pressure.
Our goal is to enhance the developer experience for applications of all sizes. In v10, we've dramatically improved TypeScript performance (especially with TS incremental compilation) so that your editor stays snappy.
## Incremental migration
We've also put in a lot of work to make the migration experience as straightforward as possible, including an `interop()` method that allows (almost) full backward compatibility with v9 routers. Visit the migration guide for more information.
Sachin from the core team has also made a codemod that can do much of the heavy lifting of the migration for you.
## A growing ecosystem
A rich set of sub-libraries is continuing to form around tRPC. Here are a few examples:
- trpc-openapi to easily create REST-compatible endpoints
- create-t3-app to bootstrap a full-stack Next.js application with tRPC
- create-t3-turbo to kickstart your next React Native app with tRPC
- trpc-chrome for building Chrome extensions using tRPC
- Adapters for frameworks like Solid, Svelte, and Vue
For more plugins, examples, and adapters, visit the Awesome tRPC collection.
## Thank you!
The core team and I want you to know: we're just getting started. We're already busy experimenting with React Server Components and Next.js 13.
I also want to give a huuuge shoutout to Sachin, Julius, James, Ahmed, Chris, Theo, Anthony, and all the contributors who helped make this release possible.
Thanks for using and supporting tRPC.
- Follow @trpcio on Twitter.
- Join our Discord community
- Try out tRPC in your browser
| true | true | true |
tRPC provides a great developer experience by enforcing tight, full-stack type bindings through the power of TypeScript. No API contract drift, no code generation.
|
2024-10-12 00:00:00
|
2022-11-21 00:00:00
|
https://og-image.trpc.io/api/blog?input=%7B%22title%22%3A%22Announcing%20tRPC%20v10%22%2C%22description%22%3A%22tRPC%20provides%20a%20great%20developer%20experience%20by%20enforcing%20tight%2C%20full-stack%20type%20bindings%20through%20the%20power%20of%20TypeScript.%20No%20API%20contract%20drift%2C%20no%20code%20generation.%22%2C%22authorName%22%3A%22Alex%20%2F%20KATT%20%F0%9F%90%B1%22%2C%22authorTitle%22%3A%22Creator%20of%20tRPC%22%2C%22authorImg%22%3A%22https%3A%2F%2Fgithub.com%2FKATT.png%22%2C%22date%22%3A%222022-11-21T00%3A00%3A00.000Z%22%2C%22readingTimeInMinutes%22%3A3.305%7D
|
article
|
trpc.io
|
trpc.io
| null | null |
17,700,893 |
http://fortune.com/2018/08/06/walmart-meal-kit-gobble/
|
Exclusive: Walmart Partners with Meal Kit Company Gobble
|
Grace Donnelly
|
In the battle over the future of food purchases, Walmart is making moves to maintain a fighting chance against Amazon and Whole Foods.
On Monday, meal kit company Gobble announced a partnership with the retail giant to sell its products through Walmart’s e-commerce site, Gobble founder Ooshma Garg told *Fortune* in an exclusive.
The business relationship between the San Francisco-based startup and the largest retailer in the world comes after Amazon’s acquisition of Whole Foods in June 2017 sent Walmart and Kroger scrambling to improve their digital offerings. (Kroger announced the addition of meal-kits in June after it acquired Home Chef.)
## E-Commerce’s ‘Last Frontier’
Ordering food online is “the last frontier of e-commerce,” *Fortune*‘s Beth Kowitt wrote in her analysis of the Amazon-Whole Foods deal: About 20% of retail spending goes toward food, but only 2% of those sales currently take place on the Internet, she says.
The battle over online food spending has seen traditional retailers turn to the subscriber model of companies like Blue Apron and Hello Fresh. Overall, U.S. meal-kit sales grew 40.7% last year, according to Earnest Research.
## What Is Gobble?
For Gobble’s founder, the business is more than just making sure raw ingredients are prepared, measured, packaged, and arrive to customers safely.
“To me, food is family,” says Garg, the daughter of a nutritionist who realized the need for quick and healthy dinner options. She talks about meals in these terms, calling Gobble’s product “a cooking equalizer.”
Gobble’s niche in the crowded meal-kit market is dishes that can be prepared in 15 minutes or less and require only one pan. Each kit contains two servings and costs about $24.
The company says nearly 50% of the company’s customers are between 35 and 44 years old (25% of Blue Apron’s customers are in the age group) and on average, customers spend more during their first year with Gobble than any other meal kit service, according to industry research.
Garg says Gobble aims to distinguish itself from the competition by focusing on busy parents and taking time to find out what they need, rather than only prioritizing rapid growth.
## Angel Investment by Alexis Ohanian
She raised about $11 million in Series A funding just a year after the startup launched from the Y Combinator program in order to take the company national in 2015. After raising $15 million in Series B funding last year, Gobble’s investors include Khosla Ventures, Andreessen Horowitz, and Trinity Ventures.
Reddit founder Alexis Ohanian was an angel investor in Gobble, first backing the company in 2012. He says his firm, Initialized Capital, was attracted to the quality of product and the vision of its founder.
“This was before meal kits were a thing,” Ohanian told *Fortune*. “Even then, [Garg] was thinking of this business really intelligently.”
He says the Walmart news is proof that Garg’s vision is paying off.
“Walmart is the largest retailer in the world. It bears mentioning that Amazon is the largest e-commerce company in the world. And clearly there is a battle—not even brewing, there’s a battle being waged right now,” he said. “I think it’s a huge opportunity for Walmart.”
## Walmart v. Amazon
Last year, 40 cents of every dollar spent online went to Amazon. The company could surpass Walmart this year as the world’s biggest seller of apparel, according to Morgan Stanley estimates, and Amazon may match Walmart sales in the U.S. by 2021, according to JPMorgan projections.
This means, as Kowitt described in *Fortune’s* June issue, that retail domination will turn on capturing the items on our plates and in our pantries.
In the meal-kit space, this objective has seen more and more companies trying to find the sweet spot of convenience, variety, and consistency. It’s hard to do and even more difficult to turn a profit, and Gobble is no exception.
*Fortune* estimates Gobble brought in between $25 million to $50 million in annual revenue last year, but the company has not yet made money.
But Ohanian says he believes Garg’s methodical approach and engineering background will pay off in the long-term.
## Walmart’s ‘Options’
“With more than 75 million items on Walmart.com, we continue to look for new options to offer customers. This includes specialty food items like the meal delivery kits by Gobble, farm fresh crates and snack boxes that give customers convenient options to plan and prepare meals,” corporate spokesperson at Walmart told *Fortune* in an email statement confirming the partnership.
Prior to this, Walmart offered customers kit options from companies like Sun Basket, Takeout Kit, and Home Chef, allowing shoppers to try the meal services out before subscribing.
Walmart’s stock took the largest dip in three decades in February after reporting that e-commerce sales declined in the last quarter of 2017, but the company knows about growing food sales.
While Amazon is arguably the most disruptive and innovative company in retail, it’s Walmart that became the biggest retailer in the U.S., in large part, by turning itself into the nation’s largest grocer. Since fiscal 1998, the company has grown grocery sales from 14% of its total U.S. revenue to 56% this year.
“This is a game-changing partnership for Gobble with the world’s largest retailer. It leapfrogs our customer reach to the very highest level. Private companies entering similar deals in the past now show up to 70% of their sales coming from nationally-dominant retailers like Walmart.” says Garg.
Now it’s about keeping up with scale.
## How Will Gobble Scale?
“I have applied the same efficiencies, process engineering, and intelligent automation to Gobble as I did when I managed the supply chain at Walmart,” Steve Robinson, Gobble’s vice president of supply chain and operations, and previously vice president of supply chain at Walmart, told *Fortune* in an email. “It is phenomenal to see my past and present teams now working together.”
The company has built a vertically-integrated East Coast facility and invested heavily in automation over the last year to prepare the new location to handle new product development, food manufacturing, and order personalization, he said.
“Our state-of-the-art supply chain infrastructure is agile and built for scale to capture opportunities such as this new partnership with Walmart. We are ready for more Gobble families who need this level of convenience in their daily lives,” says Robinson.
| true | true | true |
The startup, backed by Reddit founder Alexis Ohanian, will sell its products on Walmart's website beginning Monday.
|
2024-10-12 00:00:00
|
2018-08-06 00:00:00
|
article
|
fortune.com
|
Fortune
| null | null |
|
35,788,856 |
https://arstechnica.com/gaming/2023/04/i-for-one-welcome-our-new-steam-deck-killing-windows-running-overlords/
|
I, for one, welcome our new Steam Deck-killing, Windows-running overlords
|
Kevin Purdy
|
I held off on buying the Steam Deck after it came out a little more than a year ago. At the time, it had lots of bugs, long shipping delays, and some uncertainty about how much commitment its maker, Valve, really had for its latest hardware experiment. I gave in and bought one about two months ago, and I've been enjoying this much-improved device ever since.
But now comes the Asus ROG Ally, which is powered by a new 55 percent faster AMD Ryzen Z1 chip yet costs only $50 more than the highest-end Steam Deck at a purported (store-leaked) $700. It also runs Windows 11 (perhaps in "handheld" mode), making it easier to run many games, including blockbuster AAA titles and massive online titles like *Fortnite*. The Ally is seemingly a bit flatter and lighter than the Steam Deck, and it has a sharper and brighter screen and a refresh rate of up to 120 Hz. It might even run cooler and quieter, according to brief hands-on experiences posted by reporters. It will seemingly arrive in early May or shortly thereafter.
As you might expect, the gaming press—which, like all press, is fundamentally geared toward conflict—is cheering to see another combatant enter the arena. Asus' device is "gunning straight for the Steam Deck," "bad news for Steam Deck," and "big trouble for the Steam Deck." It has one writer "ready to ditch my Steam Deck." The ROG Ally, Giovanni Colantonio writes at Digital Trends, "fixes almost every single problem I have with the Steam Deck."
Normally, I might feel dismayed at this news. I am predisposed to impulse purchases of weird little computers and deep regret upon considering them in hindsight. And yet seeing that a seemingly much-improved version of my handheld PC would be available just two months after I gave in? I'm OK with it.
## Defining what you want from handheld gaming
Consider the Switch. What makes Nintendo's system great for me is not its graphical prowess, which was already a bit underwhelming at its 2017 launch. For most people, it's Nintendo's games, the system's portability, and its deeply embedded pick-up/put-down nature. Nearly every game can be stopped and resumed at a moment's notice (a developer friend once told me that implementing this was the toughest part of his game's Nintendo certification). The Switch was my system for airports, couches, vacations, and other uncertain spans of idle time. The Steam Deck now serves much the same purpose for me, albeit with 150 percent more weight, plus Linux and many more buttons.
| true | true | true |
How the Asus ROG Ally and other handheld PCs could make Steam Decks even better.
|
2024-10-12 00:00:00
|
2023-04-28 00:00:00
|
article
|
arstechnica.com
|
Ars Technica
| null | null |
|
16,477,766 |
https://brauner.github.io/2018/02/27/lxc-removes-legacy-template-build-system.html
|
On The Way To LXC 3.0: Splitting Out Templates And Language Bindings
|
Christian Brauner
|
# On The Way To LXC 3.0: Splitting Out Templates And Language Bindings
Hey everyone,
This is another update about the development of `LXC 3.0`.
We are currently in the process of moving various parts of `LXC` out of the main LXC repository and into separate repositories.
#### Splitting Out The Language Bindings For `Lua` And `Python 3`
The lua language bindings will be moved into the new lua-lxc repository and the Python 3 bindings to the new python3-lxc repository. This is in line with other language bindings like Python 2 (see python2-lxc) that were always kept out of tree.
#### Splitting Out The Legacy Template Build System
A big portion of the `LXC` templates will be moved to the new lxc-templates repository.

`LXC` used to maintain simple shell scripts to build container images for a lot of distributions, including `CentOS`, `Fedora`, `ArchLinux`, `Ubuntu`, `Debian`, and a lot of others. While the shell scripts worked well for a long time, they suffered from the problem that they were often different in terms of coding style, the arguments they expected to be passed, and the features they supported. A lot of the things these shell scripts did when creating an image are not needed any more. For example, most distros nowadays provide a custom cloud image suitable for containers and virtual machines, or at least provide their own tooling to build clean new images from scratch. Another problem we saw was that security and maintenance for the scripts was not sufficient. This is why we decided to come up with a simple yet elegant replacement for the template system that would still allow users to build custom `LXC` and `LXD` container images for the distro of their choice. So the templates will be replaced by distrobuilder as the preferred way to build `LXC` and `LXD` images locally.
distrobuilder is a project my colleague Thomas is currently working on. It aims to be a very simple Go project focussed on letting you easily build full system container images by either **using the official cloud image** if one is provided by the distro or by **using the respective distro’s recommended tooling** (e.g. `debootstrap` for `Debian` or `pacman` for `ArchLinux`). It aims to be declarative, using the same set of options for all distributions while having extensive validation code to ensure everything that’s downloaded is properly validated.
After this cleanup only four `POSIX` shell compliant templates will remain in the main `LXC` repository:
`busybox`

This is a very minimal template which can be used to set up a `busybox` container. As long as the `busybox` binary is found, you can always build yourself a very minimal privileged or unprivileged system or application container image; no networking or any other dependencies required. All you need to do is:
```
lxc-create c3 -t busybox
```
`download`

This template lets you download pre-built images from our image servers. This is likely what most users are currently using to create unprivileged containers.
`local`

This is a new template which consumes standard `LXC` and `LXD` system container images. The `--metadata` flag needs to point to a file containing the metadata for the container; this is simply the standard `meta.tar.xz` file that comes with any pre-built `LXC` container image. The `--fstree` flag needs to point to a filesystem tree. Creating a container is then just:

```
lxc-create c1 -t local -- --metadata /path/to/meta.tar.xz --fstree /path/to/rootfs.tar.xz
```
`oci`

This is the template which can be used to download and run OCI containers. Using it is as simple as:
```
lxc-create c2 -t oci -- --url docker://alpine
```
Here’s another asciicast:
| true | true | true | null |
2024-10-12 00:00:00
|
2018-02-27 00:00:00
| null |
article
|
brauner.io
|
Personal blog of Christian Brauner
| null | null |
9,552,279 |
http://theplate.nationalgeographic.com/2015/05/12/wasabi-more-than-just-a-hot-sushi-condiment/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
30,798,912 |
https://www.newyorker.com/news/essay/the-disillusionment-of-a-rikers-island-doctor
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,553,101 |
https://www.curbed.com/2018/3/7/17087588/home-renovation-unnecessary-mcmansion-hell-wagner
|
There is nothing wrong with your house
|
Kate Wagner
|
A few months ago, I received an email from a woman who had bought a 1964 ranch with all its original interiors: wood paneling, Formica countertops, a blue bathroom, the works. She hosted a housewarming party for her friends and relatives. Six* *different people at the party asked her The Question: “So, when are you going to flip this place?” When they heard that she had no desire to flip the house, which she found to be interesting and charming, her guests were shocked and tried to convince her otherwise. She should try for a return on her investment; the house was dated; it would need future repairs. One guest called the house “plain ugly.” She asked me if I thought the guests were right: Should she think about remodeling?
Remodeling and other house-fussery has become a national pastime. In 2015 alone, Americans spent $326.1 billion on renovating. Previously contained to affluent households and the glossy pages of architecture magazines, remodeling has been transformed by 24/7 media like HGTV and websites like Houzz, Pinterest, and Dezeen. While older media, like early issues of *House Beautiful, *discusses the process as mastering the careful art of interior design, newer media is more neurotic and self-loathing, describing houses in need of renovation with words like “dated”, “immature,” or “wrong.” Whether presented as a self-improvement project (update your house lest you be judged for owning a dated one) or a form of self-care (renovate because it will make you feel better), the home remodel is presented as both remedy and requirement.
Instead of falling prey to this thinking, take a moment to consider this simple idea: There is nothing wrong with your house.
Most of the time, this statement is true (especially if one lives in a house constructed relatively recently). The roof does not leak; the house is warm or cool when it needs to be; there are no structural or electrical issues; nothing is broken or needs to be replaced from routine wear and tear. Why, then, do so many of us feel dissatisfied with our perfectly fine houses?
A fixation on the ills of one’s house is cultural, and has come in many different forms in as many centuries. House-positivity is seen as bizarre. Consider the HGTV series *Love It or List It*, in which the show’s hosts, a realtor and an interior designer, compete to sway a family to leave or stay in their current home. Always, one member of the family (and it’s usually the one who manages the finances) wants to stay, and defends the home—and the family’s life within it—even if it is a little dated or cramped. This person is almost always painted as being wrong or in need of fixing, and though the house is changed regardless, either renovated or discarded, the person who wanted to stay is always a downer and always the loser. Though the show plays on the rivalry between the two hosts, the stay-er is always the most despised character.
Even the show *Fixer Upper*, in which Chip and Joanna Gaines flip mostly postwar-era homes in their signature rustic modern style, is driven by the premise that the house in its original state is somehow wrong. Under the guise of celebrating authenticity (often talking about the “rustic charm” of the homes, yet touching only briefly on their unique architecture), the show erases every trace of it from these often historical houses. Rustic modernism is the perfect example of the simulacrum, the copy for which no original exists: it is historical without much history, a color palette and vague ruralness masquerading as a pure American legacy. The truth is, though there are the occasional episodes featuring truly decrepit properties, the vast majority of the homes featured on the show have *nothing wrong with them. *In fact, the hosts have time and time again muddied up houses (especially midcentury modern ones) with genuinely authentic, or even irreplaceable, interiors.
And then there are the financials: Some episodes feature structural remodels whose costs must reach into at least the tens of thousands of dollars, which could have been more reasonably spent on a house that suited the needs of the family from the start. All of this effort just to lobotomize a ’50s ranch in the name of “authenticity” or “down-to-earth-ness”! The truth exposed by *Fixer Upper *and similar shows is that we are not content with authenticity. Authenticity is incompatible with the more pressing (read: commercial) narrative that, when it comes to our homes, there is always something wrong or in need of improvement.
Prior to mass production and globalization, the kind of room-by-room makeover that dominates our remodeling discourse was the domain of the wealthy. Most changes in the average household came from gradual replacement of household goods with newer or better ones over time, rather than a premeditated overhaul. What we don’t realize is that this shift from partial to total is the outward sign of a more sinister change that occurred during the housing bubble leading up to the Great Recession: Average Americans began thinking of their homes as monetary *objects* to be bought, sold, invested in—*consumed**—*rather than *places* to be experienced, places in which our complex lives as human beings unfold.
Of course, there is nothing inherently wrong about wanting to freshen up one’s home—houses go through many makeovers in their lifetimes, and always have. Believe me, I would love to replace the gross, vacuum-killing high-pile carpet in my apartment, if I owned it.
But the desire to remake our homes often doesn’t come from us. Paint commercials make it seem like a fresh coat of paint will cure all household ills, from depression to marital troubles; hardware store ads imply that building a deck will bring father and son together at last. And then there’s Pinterest and the world it has wrought. At least half of my friends’ Instagram feeds consist of twee shots of wood tables bedazzled with color-coordinated books and succulents in stone pots. (Instagram, specifically, has had a tremendous influence on interior design, creating a landscape of algorithmically satisfying minimalist nowhere spaces, a condition described by the essayist Kyle Chayka as “Airspace.”)
Consciously or subconsciously, our constant remodeling is an effort to make ourselves more acceptable to others, something we should do as “good” homeowners. Like the beauty industry, the home-improvement industry plays on (usually gendered) insecurity—the fear that we are unattractive or inadequate. But the truth is, “other people” don’t have to live in your house, and when they come to visit, they’re there to see you, not your succulents and marble-and-brass side table. It’s time we reconsidered the house as a place instead of an object, to be lived in, rather than consumed; time we stopped thinking of a home as something that constantly has to be “improved”; time we enjoyed the historicity of our old houses, or the personality of our new houses. It’s time for a new era: an era of house-positivity.
*Kate Wagner** is the creator of the viral blog McMansion Hell, which roasts the world’s ugliest houses. Outside of McMansion Hell, Kate is a guest contributor for Curbed, 99 Percent Invisible, and Atlas Obscura. In addition to writing about architecture, Kate has worked extensively as a sound engineer and is currently a graduate student in Acoustics as part of a joint program between Johns Hopkins University and Peabody Conservatory, where her focus is in architectural acoustics.*
| true | true | true |
Mcmansion Hell’s Kate Wagner argues that in a culture that continually celebrates renovation, what our homes actually need is to be left alone.
|
2024-10-12 00:00:00
|
2018-03-07 00:00:00
|
article
|
curbed.com
|
Curbed
| null | null |
|
17,663,422 |
https://moveax.me/nebula-level12
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
20,815,063 |
https://dont-drive-drowsy.glitch.me/
|
Don't Drive Drowsy
| null |
This web app needs a camera to run, so it will need camera permission as well.
The tracker needs proper, even lighting on your face to work.
The alarm sound is meant to wake you up, so it is quite loud.
| true | true | true | null |
2024-10-12 00:00:00
|
2018-01-01 00:00:00
| null | null | null | null | null | null |
18,671,539 |
https://en.wikipedia.org/wiki/Centralia_mine_fire
|
Centralia mine fire - Wikipedia
| null |
# Centralia mine fire
The **Centralia mine fire** is a coal-seam fire that has been burning in the labyrinth of abandoned coal mines underneath the borough of Centralia, Pennsylvania, United States, since at least May 27, 1962. Its original cause and start date are still a matter of debate.[1][page needed][2][3][page needed] It is burning at depths of up to 300 ft (90 m) over an 8 mi (13 km) stretch of 3,700 acres (15 km²).[4] At its current rate, it could continue to burn for over 250 years.[5] Due to the fire, Centralia was mostly abandoned in the 1980s. There were 1,500 residents at the time the fire is believed to have started, but as of 2017 Centralia has a population of 5[6] and most of the buildings have been demolished.
## Background
On May 7, 1962, the Centralia Council met to discuss the approaching Memorial Day and how the town would go about cleaning up the Centralia landfill, which was introduced earlier that year. The 300-foot-wide, 75-foot-long (91 m × 23 m) pit was made up of a 50-foot-deep (15 m) strip mine that had been cleared by Edward Whitney[clarification needed] in 1935, and came very close to the northeast corner of Odd Fellows Cemetery. There were eight illegal dumps spread about Centralia, and the council's intention in creating the landfill was to stop the illegal dumping, as new state regulations had forced the town to close an earlier dump west of St. Ignatius Cemetery. Trustees at the cemetery were opposed to the landfill's proximity to it but recognized the illegal dumping elsewhere was a serious problem and envisioned that the new pit would resolve it.[7]
Pennsylvania had passed a precautionary law in 1956 to regulate landfill use in strip mines, as landfills were known to cause destructive mine fires. The law required a permit and regular inspection for a municipality to use such a pit. George Segaritus, a regional landfill inspector who worked for the Department of Mines and Mineral Industries (DMMI) became concerned about the pit when he noticed holes in the walls and floor, as such mines often cut through older mines underneath. Segaritus informed Joseph Tighe, a Centralia councilman, that the pit would require filling with an incombustible material.[7]
## Fire
> This was a world where no human could live, hotter than the planet Mercury, its atmosphere as poisonous as Saturn's. At the heart of the fire, temperatures easily exceeded 1,000 degrees Fahrenheit [540 degrees Celsius]. Lethal clouds of carbon monoxide and other gases swirled through the rock chambers.
>
> — David DeKok, *Unseen Danger: A Tragedy of People, Government, and the Centralia Mine Fire* (University of Pennsylvania Press, 1986)[8]
### Plan and execution
The town council arranged for cleanup of the strip mine dump, but council minutes do not describe the proposed procedure. DeKok surmises that the process—setting it on fire—was not specified because state law prohibited dump fires. Nonetheless, the Centralia council set a date and hired five members of the volunteer firefighter company to clean up the landfill.[2]
A fire was ignited to clean the dump on May 27, 1962, and water was used to douse the visible flames that night. However, flames were seen once more on May 29. Using hoses hooked up from Locust Avenue, another attempt was made to douse the fire that night. Another flare-up in the following week (June 4) caused the Centralia Fire Company to once again douse it with hoses. A bulldozer stirred up the garbage so that firemen could douse concealed layers of the burning waste. A few days later, a hole as wide as 15 ft (4.6 m) and several feet high was found in the base of the north wall of the pit. Garbage had concealed the hole and prevented it from being filled with incombustible material. It is possible that this hole led to the mine fire, as it provided a pathway to the labyrinth of old mines under the borough. Evidence indicates that, despite these efforts to douse the fire, the landfill continued to burn. On July 2, Monsignor William J. Burke complained about foul odors from the smoldering trash and coal reaching St. Ignatius Church. Even then, the Centralia council still allowed the dumping of garbage into the pit.[7]
Clarence "Mooch" Kashner, the president of the Independent Miners, Breakermen, and Truckers union, came at the invitation of a council member to inspect the situation in Centralia. Kashner evaluated the events and called Gordon Smith, an engineer of the Department of Mines and Mineral Industries (DMMI) office in Pottsville. Smith told the town that he could dig out the smoldering material using a steam shovel for $175. A call was placed to Art Joyce, a mine inspector from Mount Carmel, who brought gas detection equipment for use on the swirling wisps of steam now emanating from ground fissures in the north wall of the landfill pit. Tests concluded that the gases seeping from the large hole in the pit wall and from cracks in the north wall contained carbon monoxide concentrations typical of coal-mine fires.[7]
### Escalation
The Centralia Council sent a letter to the Lehigh Valley Coal Company (LVCC) as formal notice of the fire. It is speculated that the town council had decided that hiding the true origin of the fire would serve better than alerting the LVCC of the truth, which would most likely end in receiving no help from them. In the letter, the borough described the starting of a fire "of unknown origin during a period of unusually hot weather".[9]
Preceding an August 6 meeting at the fire site, which would include officials from the LVCC and the Susquehanna Coal Company, Deputy Secretary of Mines James Shober Sr. expected that the representatives would inform him that they could not afford mounting a project that would stop the mine fire. Therefore, Shober announced that he expected the state to finance the cost of digging out the fire, which was at that time around $30,000 (roughly equivalent to $302,000 in 2023). Another offer was made at the meeting, proposed by Centralia strip mine operator Alonzo Sanchez, who told members of council that he would dig out the mine fire free of charge as long as he could claim any coal he recovered without paying royalties to the Lehigh Valley Coal Company. Part of Sanchez's plan was to do exploratory drilling to estimate the scope of the mine fire, which was most likely why Sanchez's offer was rejected at the meeting. The drilling would have delayed the project, not to mention the legal problems with mining rights.[7]
At the time, state mine inspectors were in the Centralia-area mines almost daily to check for lethal levels of carbon monoxide. Lethal levels were found on August 9, and all Centralia-area mines were closed the next day.[citation needed]
## Early remediation attempts
### First excavation project
Pressed at an August 12 meeting of the United Mine Workers of America in Centralia, Secretary of Mines Lewis Evans sent a letter to the group on August 15 that claimed he had authorized a project to deal with the mine fire, and that bids for the project would be opened on August 17. Two days later, the contract was awarded to Bridy, Inc., a company near Mount Carmel, for an estimated $20,000 (roughly equivalent to $201,000 in 2023). Work on the project began August 22.[7]
The Department of Mines and Mineral Industries (DMMI), who originally believed Bridy would need only to excavate 24,000 cu yd (18,000 m³) of earth,[1][page needed] informed them that they were forbidden from doing any exploratory drilling in order to find the perimeter of the fire or how deep it was, and that they were to strictly follow plans drawn up by the engineers[which?] who did not believe that the fire was very big or active. Instead, the size and location of the fire was estimated based on the amount of steam issuing from the landfill rock.[citation needed]

Bridy, following the engineering team plan, began by digging on the northern perimeter of the dump pit rim and excavated about 200 ft (61 m) outward to expand the perimeter. However, the project was ultimately ineffective due to multiple factors. Intentional breaching of the subterranean mine chambers allowed large amounts of oxygen to rush in, greatly worsening the fire. Steve Kisela, a bulldozer operator in Bridy's project, said that the project was ineffective because the inrush of air helped the fire to move ahead of the excavation point by the time the section was drilled and blasted.[citation needed] Bridy was also using a 2.5 cu yd (1.9 m³) shovel, which was considered small for the project.[citation needed]

Furthermore, the state only permitted Bridy's team to work weekday shifts which were eight hours long and only occurred during the day time, commonly referred to as "first shift" in the mining industry.[10] At one point, work was at a standstill for five days during the Labor Day weekend in early September.[why?][citation needed] Finally, the fire was traveling in a northward direction, which caused the fire to move deeper into the coal seam. This, combined with the work restrictions and inadequate equipment, greatly increased the excavation cost. Bridy had excavated 58,580 cu yd (44,790 m³) of earth by the time the project ran out of money and ended on October 29, 1962.[7]
[edit]On October 29, just prior to the termination of the Bridy project, a new project was proposed that involved flushing the mine fire. Crushed rock would be mixed with water and pumped into Centralia's mines ahead of the expected fire expansion. The project was estimated to cost $40,000 (roughly equivalent to $403,000 in 2023). Bids were opened on November 1, and the project was awarded to K&H Excavating with a low bid of $28,400 (roughly equivalent to $286,000 in 2023).[7]
Drilling was conducted through holes spaced 20 ft (6.1 m) apart in a semicircular pattern along the edge of the landfill. However, this project was also ineffective due to multiple factors. Centralia experienced an unusually heavy period of snowfall and unseasonably low temperatures during the project. Winter weather caused the water supply lines to freeze. Furthermore, the rock-grinding machine froze during a windy blizzard. Both problems inhibited the timely mixture and administration of the crushed-rock slurry. The DMMI also worried that the 10,000 cu yd (7,600 m3) of flushing material would not be enough to fill the mines, thus preventing the bore holes from filling completely. Partially filled boreholes would provide an escape route for the fire, rendering the project ineffective.[7]
These problems quickly depleted funds. In response, Secretary Evans approved an additional $14,000 (roughly equivalent to $141,000 in 2023) to fund this project. Funding for the project ran out on March 15, 1963, with a total cost of $42,420[1][ page needed] (roughly equivalent to $427,000 in 2023).
On April 11, steam issuing from additional openings in the ground indicated that the fire had spread eastward as far as 700 ft (210 m),[7] and that the project had failed.
### Third project
[edit]A three-option proposal was drawn up soon after that, although the project would be delayed until after the new fiscal year beginning July 1, 1963. The first option, costing $277,490, consisted of entrenching the fire and back-filling the trench with incombustible material. The second, costing $151,714, offered a smaller trench in an incomplete circle, followed by the completion of the circle with a flush barrier. The third plan was a "total and concerted flushing project" larger than the second project's flushing and costing $82,300. The state abandoned this project in 1963.[7]
## Later remediation projects
[edit]David DeKok began reporting on the mine fire for *The News-Item* in Shamokin beginning in late 1976. Between 1976 and 1986, he wrote over 500 articles about the mine fire. In 1979, locals became aware of the scale of the problem when a gas-station owner, then-mayor John Coddington, inserted a dipstick into one of his underground tanks to check the fuel level. When he withdrew it, it seemed hot. He lowered a thermometer into the tank on a string and was shocked to discover that the temperature of the gasoline in the tank was 172 °F (77.8 °C).[11]
Beginning in 1980, adverse health effects were reported by several people due to byproducts of the fire — carbon monoxide and carbon dioxide — and low oxygen levels.[ citation needed] Statewide attention to the fire began to increase, culminating in 1981 when a 12-year-old resident named Todd Domboski fell into a sinkhole 4 ft (1.2 m) wide by 150 ft (46 m) deep that suddenly opened beneath his feet in a backyard.
[12]He clung to a tree root until his cousin, 14-year-old Eric Wolfgang, saved his life by pulling him out of the hole. The plume of hot steam billowing from the hole was measured as containing a lethal level of carbon monoxide.
[5]Todd began suffering nightmares of the incident later in life, for which he turned to prescribed medication. He died of an overdose on February 4, 2022.
[13]
## Possible origins
[edit]A number of competing hypotheses have arisen about the source of the Centralia mine fire. Some of them claim that the mine fire started before May 27, 1962. David DeKok says that the borough's deliberate burning of trash on May 27 to clean up the landfill in the former strip mine ignited a coal seam via an unsealed opening in the trash pit, which allowed the fire to enter the labyrinth of abandoned coal mines beneath Centralia.[7]
Joan Quigley argues in her 2007 book *The Day the Earth Caved In* that the fire had in fact started the previous day, when a trash hauler dumped hot ash or coal discarded from coal burners into the open trash pit. She noted that borough council minutes from June 4, 1962, referred to two fires at the dump, and that five firefighters had submitted bills for "fighting the fire at the landfill area". The borough, by law, was responsible for installing a fire-resistant clay barrier between each layer of trash in the landfill, but fell behind schedule, leaving the barrier incomplete. This allowed the hot coals to penetrate the vein of coal underneath the pit, lighting the subsequent subterranean fire. In addition to the council minutes, Quigley cites "interviews with volunteer firemen, the former fire chief, borough officials, and several eyewitnesses" as her sources.[3][ page needed]
[14]
Another hypothesis is that the fire was burning long before the alleged trash dump fire. According to local legend, the Bast Colliery coal fire of 1932, set alight by an explosion, was never fully extinguished.[15] In 1962, it reached the landfill area. Those who adhere to the Bast Theory believe that the dump fire is a separate fire unrelated to the Centralia mine fire. One man who disagrees is Frank Jurgill Sr., who claims he operated a bootleg mine with his brother in the vicinity of the landfill between 1960 and 1962. He says that if the Bast Colliery fire had never been put out, he and his brother would have been in it and been killed by the gases.[7]
Centralia councilman Joseph Tighe proposed a different hypothesis: that Centralia's coal fire was actually started by an adjacent coal-seam fire that had been burning west of Centralia's. His belief is that the adjacent fire was at one time partially excavated, but nonetheless, it set alight the landfill on May 27.[7]
Another hypothesis arose from a letter sent to the Lehigh Valley Coal Company by the Centralia Council in the days after the mine fire was noticed. The letter describes "a fire of unknown origin [starting] on or about June 25, 1962, during a period of unusually hot weather". This may refer to the hypothesis of spontaneous combustion being the reason for the start of the landfill fire, a hypothesis accepted for many years by state and federal officials.[7]
## Aftermath
[edit]In 1984, Wilkes-Barre Representative Frank Harrison proposed legislation, which was approved by Congress, that allocated more than $42 million for relocation efforts (equivalent to $123 million in 2023)[17] Most of the residents accepted buyout offers. A few families opted to stay despite urgings from Pennsylvania officials.[18]
In 1992, Pennsylvania governor Bob Casey invoked eminent domain on all properties in the borough, condemning all the buildings within. A subsequent legal effort by residents to have the decision reversed failed. In 2002, the U.S. Postal Service revoked Centralia's ZIP code, 17927.[4][19]
In 2009, Governor Ed Rendell began the formal eviction of Centralia residents.[20] By early 2010, only five occupied homes remained, with the residents determined to stay.[21] In lawsuits, the remaining residents alleged that they were victims of "massive fraud", "motivated primarily by interests in what is conservatively estimated at hundreds of millions of dollars of some of the best anthracite coal in the world".[22] In July 2012, the last handful of residents in Centralia lost their appeal of a court decision upholding eminent domain proceedings and were again ordered to leave.[23] State and local officials reached an agreement with the seven remaining residents on October 29, 2013, allowing them to live out their lives there, after which the rights of their properties would be taken through eminent domain.[24]
The Centralia mine fire also extended beneath the town of Byrnesville, a few miles to the south. The town had to be abandoned and leveled.[25]
The Centralia area has become a tourist attraction.[26] Visitors come to see the smoke and steam rising from Centralia's empty streets and the abandoned portion of PA Route 61, popularly referred to as the Graffiti Highway.[27]
In April 2020, the road's private owner began covering up Graffiti Highway; the abandoned highway was buried under dirt that month, effectively blocking public access to the road.[28][29][30]
Increased air pressure induced by the heat from the mine fires has interacted with heavy rainfalls in the area, which rush into the abandoned mines to form Pennsylvania's only geyser, the Big Mine Run Geyser, which erupts on private property in nearby Ashland. The geyser has been kept open as a means of flood control.[31]
The fire and its effects were featured on the Travel Channel's *America Declassified* in 2013, on Radiolab's *Cities* episode, and on 99% Invisible's *Mini Stories: Volume 18* episode.[32][18][33]
The film *Silent Hill* draws on these events, although it is set in West Virginia.[34]
## See also
- Carbondale mine fire
- Darvaza gas crater
- Jharia coal field fire
- Laurel Run mine fire
- New Straitsville mine fire
## References
1. Chaiken, Robert F. (1983). *Problems in the Control of Anthracite Mine Fires: A Case Study of the Centralia Mine Fire (August 1980)* (PDF). U.S. Dept. of the Interior, Bureau of Mines. OCLC 609303157.
2. DeKok, pp. 20–21.
3. Quigley, Joan (2007). *The Day the Earth Caved In: An American Mining Tragedy*. New York: Random House. ISBN 978-1-4000-6180-8.
4. Krajick, Kevin (May 2005). "Fire in the Hole". *Smithsonian Magazine*.
5. O'Carroll, Eoin (February 5, 2010). "Centralia, Pa.: How an Underground Coal Fire Erased a Town". *Bright Green blog*, The Christian Science Monitor.
6. "Centralia Loses Another Resident, Home Abandoned". *Centralia PA*. January 2, 2017.
7. DeKok, David (2010). *Fire Underground: The Ongoing Tragedy of the Centralia Mine Fire*. Globe Pequot Press. pp. 19–26. ISBN 978-0-7627-5427-4.
8. DeKok, David (1986). *Unseen Danger: A Tragedy of People, Government, and the Centralia Mine Fire*. Philadelphia: University of Pennsylvania Press. p. 17. ISBN 978-0-595-09270-3.
9. DeKok, p. 25.
10. Sherman, Frasser (2018). "What Hours Are First Shift & Second Shift?".
11. Morton, Ella (June 4, 2014). "How an Underground Fire Destroyed an Entire Town". *Slate*.
12. "Evansville Photos". *Evansville Courier & Press*. Associated Press. February 14, 1981.
13. "The Boy Who Fell into the Hole: Todd Domboski Dead in Altoona". *The Coal Speaker*. February 11, 2022.
14. Quigley, Joan (2007). "Chapter Notes to *The Day the Earth Caved In*" (DOC). p. 8.
15. DeKok, David (2009). *Fire Underground: The Ongoing Tragedy of the Centralia Mine Fire*. Rowman & Littlefield. p. 22. ISBN 978-0-7627-5824-1.
16. "A Modern Day Ghost Town, Centralia Pennsylvania". Sliprock Media LLC.
17. "Washington News Briefs". November 16, 1983.
18. "Cities". *Radiolab*, WNYC Studios.
19. Currie, Tyler (April 2, 2003). "Zip Code 00000". *Washington Post*.
20. Beauge, John (February 15, 2018). "The state no longer owns Centralia's 'Graffiti Highway.' Who does?". *Harrisburg Patriot-News*.
21. Rubinkam, Michael (February 5, 2010). "Few Remain as 1962 Pa. Coal Town Fire Still Burns". ABC News (Australia). Associated Press.
22. Rubinkam, Michael (March 9, 2010). "Pa. coal town above mine fire claims massive fraud". *San Diego Tribune*. Associated Press.
23. Wheary, Rob (June 27, 2012). "Federal appeals court upholds ruling, meaning Centralia residents must move". *Pottsville Republican Herald*.
24. "Agreement Reached With Remaing [sic] Centralia Residents". *WNEP-TV News*. October 30, 2013.
25. Holmes, Kristin E. (October 21, 2008). "Minding a legacy of faith: In an empty town, a shrine still shines". Philly.com.
26. "Visiting Centralia PA – Frequently Asked Questions about visiting Centralia". *Offroaders.com*.
27. "Graffiti Highway, Centralia Pennsylvania". *Centraliapa.org*. September 5, 2014.
28. Williams, David (April 8, 2020). "Pennsylvania's colorful 'Graffiti Highway' is being shut down for good". CNN.
29. Strawser, Justin (April 6, 2020). "Graffiti Highway to be closed by owners". *The Daily Item*.
30. Reed, J. (April 6, 2020). "Work Begins on Centralia's Graffiti Highway; State Police Enforce". *Skook News*.
31. Albert, Jessica (June 17, 2018). "Getting to the Bottom of This Gushing Geyser in Schuylkill County". *WNEP-TV*.
32. "'America Declassified' Hiding in Plain Sight/City on Fire/Rock Star (2013)". IMDb.
33. "Mini Stories: Volume 18". *99% Invisible*. January 9, 2024.
34. Gans, Christophe (March 15, 2006). "Silent Hill Production Diary: On Adapting Silent Hill Lore, The Red Pyramid, and Using 'Centralia' as a Temp Film Title". Sony Pictures.
## External links
- Media related to Centralia mine fire at Wikimedia Commons
| true | true | true | null |
2024-10-12 00:00:00
|
2006-05-23 00:00:00
|
website
|
wikipedia.org
|
Wikimedia Foundation, Inc.
| null | null |
|
34,954,604 |
https://sharegpt.com
|
ShareGPT: Share your wildest ChatGPT conversations with one click.
| null |
ShareGPT
ShareGPT is deprecated. Please use OpenAI's built-in sharing instead. Thanks to everyone who used ShareGPT to share over 438,000 conversations.
| true | true | true |
ShareGPT is a Chrome extension that allows you to share your wildest ChatGPT conversations with one click.
|
2024-10-12 00:00:00
|
2022-12-08 00:00:00
| null |
sharegpt.com
|
Vercel
| null | null |
|
4,965,072 |
http://www.youtube.com/watch?v=3RiIYgCC7Gw
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
25,919,908 |
https://arstechnica.com/gadgets/2021/01/on-the-death-of-centos-red-hat-liaison-brian-exelbierd-speaks/
|
Why Red Hat killed CentOS—a CentOS board member speaks
|
Jim Salter
|
This morning, The Register's Tim Anderson published excerpts of an interview with the CentOS project's Brian Exelbierd. Exelbierd is a member of the CentOS board and its official liaison with Red Hat.
Exelbierd spoke to Anderson to give an insider's perspective on Red Hat's effective termination of CentOS Linux in December, in which the open source giant announced CentOS Linux was to be deprecated immediately—with security upgrades to CentOS Linux 8 ending later in 2021 rather than the 2029 end of support date CentOS users expected.
## The tail mustn’t wag the dog
"CentOS is a [Red Hat] sponsored project," Exelbierd told the Register. "We are the funding agent (the entity which receives and disburses grants), and we also happen to be a heavy contributor. We have learned that open source communities do well with independence. We let those governing bodies govern."
The devil in these particular details, of course, is that "the CentOS board doesn't get to decide what Red Hat engineering teams do." This is the contribution that Exelbierd mentioned earlier—specifically, the labor of Red Hat engineering teams. According to Exelbierd, Red Hat decided "we're going to make some fundamental changes in how we direct our investment," then "went to the CentOS project and said, here is a thing Red Hat is going to do."
That thing was the cessation of Red Hat's support for CentOS Linux while prioritizing its investment in CentOS Stream, which Exelbierd describes as "critical" to Red Hat. "We laid out our case and we said that we're moving our engineering contribution, people time in some cases... we want to call your attention to them because depending on what you decide to do, there are potential liability issues that could result, so we want to make sure you have a plan."
| true | true | true |
“The CentOS Board doesn’t get to decide what Red Hat engineering teams do.”…
|
2024-10-12 00:00:00
|
2021-01-26 00:00:00
|
article
|
arstechnica.com
|
Ars Technica
| null | null |
|
39,965,028 |
https://neon.tech/blog/architecture-decisions-in-neon
|
Architecture decisions in Neon - Neon
|
Heikki Linnakangas
|
The idea behind Neon is to create a new serverless Postgres service with a modern cloud-native architecture. When building for the cloud, it is usually a good idea to separate storage and compute. For operational databases, such a design was first introduced by AWS Aurora 1 and followed by many others 2 3; however, none of those implementations were open source and native to Postgres.
We wanted to make Neon the best platform to run Postgres on. As we started to figure out the details we needed to understand what exactly the architecture should look like for an OLTP cloud database. We also knew that we couldn’t deviate from Postgres. People choose Postgres for many reasons. It’s open source, feature-rich, and has a large ecosystem of extensions and tools. But increasingly, it’s simply the default choice. There are a lot of databases out there with different strengths and weaknesses, but unless you have a particular reason to pick something else, you should just go with Postgres. Therefore, we don’t want to compete with Postgres itself or maintain a fork. We understood that Neon would only work in the market if it doesn’t fork Postgres and gives users 100% compatibility with their apps written for Postgres.
So before we wrote a single line of code, we had some big upfront decisions to make on the architecture.
## Separating storage and compute
The core idea of an Aurora-like architecture of separation of storage and compute is to replace the regular filesystem and local disk with a smart storage layer.
Separating compute and storage allows you to do things that are difficult or impossible to do otherwise:
- Run multiple compute instances without having multiple copies of the data.
- Perform a fast startup and shutdown of compute instances.
- Provide instant recovery for your database.
- Simplify operations, like backups and archiving, to be handled by the storage layer without affecting the application.
- Scale CPU and I/O resources independently.
The first major decision for us was whether we should just use a SAN or an off-the-shelf distributed file system. You can certainly get some of these benefits from smart filesystems or SANs, and there are a lot of tools out there to manage them in a traditional installation. But a smart storage system that knows more about the database and the underlying infrastructure of the cloud provider makes the overall system simpler and gives a better developer experience. We were aware of Delphix, a company built on the premise of providing dev and test environments for database products using ZFS. If we took a similar approach, we would not control the filesystem tier, which would make it hard to integrate efficiently with the cloud and would result in a clunky and expensive solution. We could still sell it to large enterprises, but we knew we could do better. **So the first decision was made: no SANs, no third-party filesystems. Let's build our own storage from first principles.**
## Storage Interface
We started to think about what the interface should be between compute and storage. Since we have many Postgres hackers on the team, we already knew how it works in vanilla Postgres. Postgres generates WAL (write ahead log) for all data modifications, and the WAL is written to disk. Each WAL record references one or more pages, and an operation and some payload to apply to them. In essence, each WAL record is a diff against the previous version of the page. As Postgres processes a WAL record it applies the operation encoded in the WAL record to the page cache, which will eventually write the page to disk. If a crash occurs before this happens, the page is reconstructed using the old version of the page and the WAL.
Postgres architecture gave us a hint of how to integrate our cloud native storage. We can make Postgres stream WAL to Neon storage over the network and similarly read pages from Neon storage using RPC calls. If we did that, Postgres changes would be minimal and we can even hope to push them upstream.
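To make that interface concrete, here is a minimal sketch in Python of a page service built around WAL replay. The names `ingest_wal` and `get_page_at_lsn`, and the record layout, are illustrative assumptions rather than Neon's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

PageKey = Tuple[int, int]  # (relation id, block number) -- illustrative key


@dataclass
class WalRecord:
    lsn: int                          # log sequence number of this record
    page: PageKey                     # page the record applies to
    apply: Callable[[bytes], bytes]   # the "diff": turns the old image into the new one


@dataclass
class PageService:
    """Toy page service: keeps base images plus per-page WAL, replays WAL on read."""
    base_images: Dict[PageKey, bytes] = field(default_factory=dict)
    wal: Dict[PageKey, List[WalRecord]] = field(default_factory=dict)

    def ingest_wal(self, rec: WalRecord) -> None:
        # Committed WAL streamed from Postgres is buffered, indexed by the page it touches.
        self.wal.setdefault(rec.page, []).append(rec)

    def get_page_at_lsn(self, page: PageKey, lsn: int) -> bytes:
        # Reconstruct the page by replaying WAL up to the requested LSN
        # on top of the last materialized base image.
        image = self.base_images.get(page, bytes(8192))  # blank 8 KB page
        for rec in self.wal.get(page, []):
            if rec.lsn <= lsn:
                image = rec.apply(image)
        return image
```

Because reads take an LSN, serving an older page version to a lagging replica is the same code path as serving the latest one.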
It was clear that we would need a consensus algorithm for persisting the WAL – the database is the log, and it has to be incredibly robust. It was also clear that we needed to organize pages so that we could quickly return them when requested by Postgres. What was not clear was whether we should have *two* services – one for the WAL and one for serving pages – or *one* that combines all of it. Aurora has one; SQL Server, which came later, has two. There was a decision to make.
## Separating Page servers and the WAL service
One early decision was to separate the WAL and page service. The WAL service consists of multiple WAL *safekeeper* nodes that receive the WAL from Postgres, and run a consensus algorithm. The consensus algorithm ensures durability, even if one of the safekeeper nodes is down. It also ensures that only one Postgres instance is acting as the primary at any given time, avoiding split-brain problems. Pageservers store committed WAL and can reconstruct a page at any given point of WAL on request from the compute layer.
Separating the WAL service has several advantages:
- The WAL service and the page servers can be developed independently and in parallel.
- It is easier to reason about and verify the correctness of the consensus algorithm when it is a separate component.
- We can use hardware optimized for different purposes efficiently; the I/O pattern and workload of the safekeepers is very different from the page servers – one is append-only, and the other one is both read, write, and update.
## Relationship between compute and pageservers
Does one compute only talk to one pageserver or should we spread out pages from one database across multiple pageservers? Also, does one pageserver only contain data for one or many databases? The latter question is a simple one. We need to build multi-tenancy to support a large number of small databases efficiently. So one pageserver can contain pages from many databases.
The former question is a trade-off between simplicity and availability. If we spread database pages across many pageservers, and especially if we cache the same page on multiple page servers, we can provide better availability in case a page server goes down. To get started, we implemented a simple solution with one pageserver, but will add a pageserver “sharding” feature later to support high availability and very large databases.
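As a rough illustration of the sharding idea, and assuming a simple hash-based placement (not necessarily what Neon ships), pages could be mapped to pageservers like this:

```python
import hashlib

def pageserver_for(tenant_id: str, rel: int, block: int, num_shards: int) -> int:
    """Map a (tenant, relation, block) key to one of `num_shards` pageservers.

    Purely illustrative: a real system would also handle shard splits,
    rebalancing, and caching the same page on more than one server.
    """
    key = f"{tenant_id}:{rel}:{block}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Example: spread a small tenant's first few blocks over 4 pageservers.
print([pageserver_for("tenant-a", rel=1, block=b, num_shards=4) for b in range(8)])
```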
## Treat historical data the same as recent data
The most straightforward model for the page servers would be to replay the WAL as it is received, to keep an up-to-date copy of the database – just like a Postgres replica. However, replicas connected to the storage system can lag behind the primary, and need to see older versions of pages. So at least you need some kind of a buffer to hold old page versions, in case a read replica requests them. **But for how long?** There’s no limitation on how far behind a read replica can lag.
Then we started to think about WAL archiving, backups and Point-in-Time Recovery (PITR). How are those things going to work in Neon? Do we need to build them as separate features or can we do better? Could the storage handle all of those?
PITR is a standard feature in most serious OLTP installations. The canonical use case for PITR is that you accidentally drop a table, and want to restore the database to the state just before that. You don’t do PITR often, but you want to have the capability. To allow PITR, you need to retain all old page versions in some form, as far back as you want to allow PITR. Traditionally, that’s done by taking daily or weekly backups and archiving all the WAL.
You don’t do PITR often, because it has traditionally been a very expensive operation. You start from the last backup and replay all the archived WAL to get to the desired point in time. This can take hours. And if you pick the wrong point to recover to, you have to start all over again.
What if PITR was a quick and computationally cheap operation? If you don’t know the exact point to recover to, that’s OK; you can do PITR as many times as you need to. You could use it for many things that are not feasible otherwise. For example, if you want to run an ad hoc analytical query against an OLTP database, you could do that against a PITR copy instead, without affecting the primary.
If you have a storage system that keeps all the old page versions, such operations become cheap. You can query against an older point in time just the same as the latest version.
We decided to embrace the idea of keeping old page versions, and build the storage system so that it can do that efficiently. It replaces the traditional backups and the WAL archive, and makes all of the history instantly accessible. The immediate question is “But at what cost”? If you have a PITR horizon of several weeks or months, that can be a lot of old data, even for a small database. We needed a way to store the old and cold data efficiently and the solution was to move cold and old data to cloud object storage such as S3.
## Leverage cloud object storage
The most efficient I/O pattern is to write incoming data sequentially and avoid random updates of old data. In Neon, incoming WAL is processed as it arrives, and indexed and buffered in memory. When the buffer fills up, it is written to a new file. Files are never modified in place. In the background, old data is reorganized by merging and deleting old files, to keep read latency in check and to garbage collect old page versions that are no longer needed for PITR. This design was inspired by Log-Structured Merge-trees.
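Here is a minimal sketch of that write path, with made-up file handling to illustrate the idea of immutable layer files plus background compaction; it is not Neon's on-disk format.

```python
import json
import os
import tempfile


class ImmutableLayerStore:
    """Toy LSM-style store: buffer writes in memory, flush to brand-new files,
    never modify a file in place, and merge old files in the background."""

    def __init__(self, directory: str, buffer_limit: int = 1000):
        self.directory = directory
        self.buffer_limit = buffer_limit
        self.buffer = {}    # in-memory, most recent values
        self.files = []     # immutable layer files, oldest first

    def put(self, key: str, value: str) -> None:
        self.buffer[key] = value
        if len(self.buffer) >= self.buffer_limit:
            self.flush()

    def flush(self) -> None:
        # Write the whole buffer to a new file; existing files are never touched.
        fd, path = tempfile.mkstemp(dir=self.directory, suffix=".layer")
        with os.fdopen(fd, "w") as f:
            json.dump(self.buffer, f)
        self.files.append(path)
        self.buffer = {}

    def get(self, key: str):
        if key in self.buffer:
            return self.buffer[key]
        for path in reversed(self.files):   # newest layer first
            with open(path) as f:
                layer = json.load(f)
            if key in layer:
                return layer[key]
        return None

    def compact(self) -> None:
        # Background reorganization: merge layers into one, newest value wins,
        # then garbage collect the superseded files.
        merged = {}
        for path in self.files:             # oldest first, later layers override
            with open(path) as f:
                merged.update(json.load(f))
        fd, new_path = tempfile.mkstemp(dir=self.directory, suffix=".layer")
        with os.fdopen(fd, "w") as f:
            json.dump(merged, f)
        old_files, self.files = self.files, [new_path]
        for path in old_files:
            os.remove(path)
```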
This system, based on immutable files, has a few important benefits. Firstly, it makes compression easy. You can compress one file at a time, without having to worry about updating parts of the file later. Secondly, it makes it easy to scale the storage, and swap in and out parts of the database as needed. You can move a file to cold storage, and fetch it back if it’s needed again.
Neon utilizes cloud object storage to make the storage cost-efficient and robust. By relying on object storage, we don’t necessarily need multiple copies of data in the page servers, and we can utilize fast but less reliable local SSDs. Neon offloads cold parts of the database to object storage, and can bring it back online when needed. In a sense, the page servers are just a cache of what’s stored in the object storage, to allow fast random access to it. Object storage provides for the long-term durability of the data, and allows easy sharding and scaling of the storage system.
## One year later
We built a modern, cloud-native architecture that separates storage from compute to provide an excellent Postgres experience. Different decisions would have made some things easier and others harder, but so far, we haven’t regretted any of these choices. The immutable file format made it straightforward to support branching, for example, and we have been able to develop the page server and safekeeper parts fairly independently, just like we thought. You can get early access to our service and experience the benefits directly.
1. A. Verbitski et al., “Amazon Aurora,” Proceedings of the 2017 ACM International Conference on Management of Data. ACM, May 09, 2017 [Online]. Available: https://dx.doi.org/10.1145/3035918.3056101↩
2. P. Antonopoulos et al., “Socrates,” Proceedings of the 2019 International Conference on Management of Data. ACM, Jun. 25, 2019 [Online]. Available: https://www.microsoft.com/en-us/research/uploads/prod/2019/05/socrates.pdf↩
3. W. Cao et al., “PolarDB Serverless,” Proceedings of the 2021 International Conference on Management of Data. ACM, Jun. 09, 2021 [Online]. Available: http://dx.doi.org/10.1145/3448016.3457560↩
## 📚 Keep reading
- **A deep dive into our storage engine:** Neon's custom-built storage is the core of the platform; get details on how it's built.
- **How we scale an open-source, multi-tenant storage engine for Postgres written in Rust:** keep learning about how we implemented sharding in our storage to allow Neon to host larger datasets and I/O.
- **1 year of autoscaling Postgres:** a review of our autoscaling design. Neon can autoscale your Postgres instance without dropping connections or interrupting your queries, avoiding the need for overprovisioning or resizing manually.
| true | true | true |
The idea behind Neon is to create a new serverless Postgres service with a modern cloud-native architecture. When building for the cloud it usually is a good idea to separate storage and compute. For operational databases such design was first introduced by AWS Aurora 1, followed by many others 2 3, however none of the implementations were […]
|
2024-10-12 00:00:00
|
2022-07-08 00:00:00
|
article
|
neon.tech
|
Neon
| null | null |
|
19,337,490 |
https://www.citylab.com/equity/2019/03/cashless-cash-free-ban-bill-new-york-retail-discrimination/584203/
|
Bloomberg
| null |
| true | true | true | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
40,458,771 |
https://www.tomorrow.io/blog/top-weather-apis/
|
The Best Weather APIs for 2024
|
Kelly Peters
|
## TL;DR:
- Weather APIs provide developers access to current, forecasted, and historical weather data.
- The best weather APIs for 2024 are judged based on functionality & scope, compatibility & ease of implementation, responsiveness & reliability, and cost.
- The top weather APIs in 2024 include Tomorrow.io, OpenWeatherMap, MeteoGroup, Weatherstack, Weatherbit, Weather2020, AerisWeather, Accuweather, and Visual Crossing.
- The all-around best Weather API for 2024 is Tomorrow.io’s Weather API, offering 80+ data layers and a top-rated interface.
Accurate, actionable weather forecasts are vital to the success of many organizations.
In fact, there are entire industries where weather conditions directly impact day-to-day operations, including shipping, on-demand, energy, and the supply chain (to name just a few).
Take utilities, for example. According to McKinsey, a typical utility company sees up to $1.4B in storm damage costs and lost revenue due to outages from storms over a 20-year period.
With actionable weather forecasts, businesses can better prepare for weather events, save money, and improve operations all in the same process.
With the shutdown of the **Dark Sky** Weather API, many people have sought a better weather API for their weather data needs. But how do you know what's the *right* choice for you?
North American Weather Map via Tomorrow.io Weather Intelligence Platform
Whether you’re building an app, adding weather data to your software platform, or embedding data into a home assistant, you may be looking for very different capabilities from a free weather API.
That’s where we can help!
Depending on your priorities, some APIs will be a better fit than others.
We looked at all the top weather APIs on the market in 2024 to help you choose the right one and get started faster.
Scroll down to learn what to consider when choosing the best weather API for your project or business this year.
## What is a Weather API?
A weather API is an application programming interface that allows software developers to access and integrate weather data from various providers into their own applications and websites.
Weather APIs are similar to map APIs in terms of integration and versatility of data sources. Each service provider collects, aggregates, and processes meteorological and other relevant **weather data** then offers access to it via API.
As a developer, you likely already use APIs to perform different tasks and functions in your apps or across your site.
This data includes information such as:
- **Temperature**
- **Humidity**
- **Wind speed**
- **Precipitation**
- **Cloud cover**
- **Visibility**
- **…and more**
For example, if you are building a web or mobile app that needs to pull weather forecast data, the right weather API makes this easy. With this data, you can generate updates and alerts in web and mobile apps, creating solutions uniquely suited to your needs.
From on-demand weather forecasting to planning business operations, the use cases for a weather API are immense.
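For example, a forecast request to a typical REST weather API looks something like the sketch below. The endpoint URL, API key, and response field names are placeholders; check your chosen provider's documentation for the real ones.

```python
import requests  # third-party: pip install requests

API_KEY = "YOUR_API_KEY"                                   # placeholder
BASE_URL = "https://api.example-weather.com/v1/forecast"   # hypothetical endpoint

def hourly_forecast(lat: float, lon: float) -> list[dict]:
    """Fetch an hourly forecast for a lat/lon pair from a generic REST weather API."""
    resp = requests.get(
        BASE_URL,
        params={"lat": lat, "lon": lon, "units": "metric", "apikey": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()               # surface HTTP errors early
    return resp.json().get("hourly", [])  # field name varies by provider

if __name__ == "__main__":
    for hour in hourly_forecast(42.36, -71.06)[:3]:
        print(hour.get("time"), hour.get("temperature"), hour.get("precipitation"))
```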
## How to Choose a Weather API
With the number of weather APIs out there, there are a few things to consider before making your selection.
You should be aware that each has unique capabilities, varying costs, and different degrees of reliability. Some even target specific markets or communities with unique features for agricultural applications or **air quality monitoring**.
Here’s what we recommend you consider.
### Functionality and Scope
While most major weather APIs provide similar core functionality, they can vary in aspects like:
- Data resolution and accuracy
- Length of historical records
- Date/time formats
Selecting an API that provides the specific weather data your app needs (e.g., marine conditions, weather on Mars) without unnecessary features is essential.
Free APIs often have limitations like:
- Number of API calls
- Functionality depth
So if building a commercial app that needs advanced weather data for a business use case, additional costs for premium APIs with more robust capabilities may be warranted.
### Compatibility and Ease of Implementation
Most weather APIs today are based on RESTful architecture, with a handful offering a SOAP alternative. Be sure to pay attention to those subtle differences in date and time formats and well-tested compatibility with the app framework and language you’re coding in.
Another thing to take note of is the quality of API **documentation**.
**While some services have in-depth tutorials and guides, others expect you to figure it out yourself or come armed with previous experience implementing weather APIs**.
Depending on your level of expertise, you may want more documentation to effectively set things up.
### Responsiveness and Reliability
You need a weather API to not only do what you want and well, but you want it to be fast and available. There are several sources for information on uptime and response speed of available weather APIs, but the information there is inconsistent or inaccurate.
**The best way to find out which APIs are reliable and fast enough to make it into the production version of your app is to try them**. Fortunately, most offer a **free trial** option or a freemium subscription.
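One low-effort way to compare candidates during a free trial is to time a handful of identical requests against each API and look at the median latency and error count. The URLs below are placeholders for whichever trial endpoints you are evaluating.

```python
import statistics
import time

import requests

CANDIDATES = {
    "provider-a": "https://api.provider-a.example/forecast?lat=40.7&lon=-74.0",
    "provider-b": "https://api.provider-b.example/forecast?lat=40.7&lon=-74.0",
}

def measure(url: str, samples: int = 10) -> dict:
    """Time repeated GETs and report median latency plus error count."""
    latencies, errors = [], 0
    for _ in range(samples):
        start = time.perf_counter()
        try:
            requests.get(url, timeout=5).raise_for_status()
            latencies.append(time.perf_counter() - start)
        except requests.RequestException:
            errors += 1
    return {
        "median_s": statistics.median(latencies) if latencies else None,
        "errors": errors,
    }

for name, url in CANDIDATES.items():
    print(name, measure(url))
```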
### Cost
Once you’ve determined the needs and scope of your project and narrowed down your list of potential weather API providers, it’s time to consider the price tag.
**The features, uptime, capacity, and responsiveness offered by the free API services are inferior to those of paid options**. That said, even the highest-priced APIs will become a significant expense only if and when your app’s number of API requests is exceptionally high.
## Our Picks for the Best Weather APIs
Based on our research, here are the top free and paid weather APIs available in 2024 based on functionality, price, and data access:
1. **Tomorrow.io Weather API**
2. **OpenWeatherMap**
3. **MeteoGroup**
4. **Weatherstack**
5. **Weatherbit**
6. **Weather2020**
7. **AerisWeather**
8. **AccuWeather**
9. **Visual Crossing**
## 1. The Best Weather API for 2024: Tomorrow.io’s Weather API:
**Tomorrow.io's API** offers an all-in-one endpoint with 80 different data fields, including weather, air quality, pollen, road risk, and fire index, and also includes historical, real-time, and forecast weather data, globally. A lean and flat payload creates a seamless developer experience with **comprehensive** documentation and cutting-edge functionality and features, including:
- **Polygon/polyline locations**: Location types give developers the flexibility to choose the right bounding box to continuously observe inclement weather within that specific lat/long vicinity.
- **Monitoring and alerts**: Customize your own rules for what weather conditions you want to be monitored in which locations, and receive alerts through the Tomorrow.io Weather API when those specific thresholds have been met.
- **Dynamic routes**: Get any of our data fields in real time mapped to a travel route, with granularity in the forecast every step of the way.
Tomorrow.io’s Weather API is **trusted by top companies** such as Microsoft, Uber, JetBlue, World Triathlon, AWS, Ford, and more.
**App Integration/ Format**: AWS, Autodesk, REST Weather API using JSON for the requests and the responses, with HTTPS support
## 2. OpenWeatherMap
OpenWeatherMap offers weather data APIs for different types of timeline data. In a solution inspired by crowdsourcing projects like Wikipedia, **weather data is collected from meteorological broadcast services worldwide and over 40,000 weather stations**. This freemium solution also has a feature-limited free option that allows access to the 5 days/3 hour forecast API, as well as weather alerts and a weather map.
One thing to consider is that the free account for OpenWeatherMap limits your app to 60 API calls a minute at most.
**App Integration/Format**: JSON / XML
**Pricing**: Varies. Free up to Enterprise.
## 3. Meteogroup
Priding themselves by specializing in UK-specific and nautical weather and environmental data, Meteogroup offers four different APIs: **Nautical API (Beta), Point Forecast, Point Observations, and Radar Precipitation Forecast (Beta)**.
If you’re in search of weather data for nautical verticals and UK-based customers, then the MeteoGroup API could be a great fit.
**App Integration/Format**: JSON with HTTPS support
**Pricing**: Unknown
## 4. Weatherstack
Staying in the UK, the Weatherstack API is developed by a UK company that excels in SaaS with products like Ipstack, Currencylayer, Invoicely, and Eversign. **Aimed mostly at websites and mobile apps looking to include a live weather widget at minimal cost**, it offers real-time weather, historical weather, international weather, and more.
**App Integration/Format:** REST API returns JSON formatted responses, and supports JSONP callbacks. HTTPS is enabled for paid subscriptions.
**Pricing**: Varies. Free up to Enterprise.
## 5. Weatherbit
Weatherbit offers 5 different APIs for forecasts, historical data, and other weather data such as air quality, soil temperature, and soil moisture. **Collecting data from weather stations and other traditional sources, Weatherbit uses machine learning and AI to help predict the weather**.
Boasting a 95% uptime and highly responsive API servers, Weatherbit provides a free limited-functionality account for a single API key. If you’re looking to create a commercial app, note that the free API subscription will not be enough and you will have to upgrade to one of the paid plans.
**App Integration/Format**: JSON, HTTPS available for premium subscribers
**Pricing**: Premium pricing starts at $35 / month. The free version (not for use with commercial projects) is limited to 500 API calls/day.
## 6. Weather2020
Weather2020 brands itself as the only provider capable of delivering a 12-week forecast, so it’s great if you’re looking for long-term forecast data. However, there are some questions around the accuracy of forecasting after 10 days. The company also prides itself on being the weather data provider of leading weather apps like 1Weather.
If your focus is on long-range weather forecasting, and you’re willing to take your chances on famed meteorologist Gary Lezak’s forecasting model, Weather2020 is worth checking out.
**App Integration/Format**: JSON
**Pricing**: Premium pricing starts at $9.99 / month. The free version allows for up to 1000 API calls/day with each additional call priced at $0.002.
## 7. AerisWeather
AerisWeather API provides access to weather data and forecasts as well as **storm reports, earthquake warnings, and other unique data for premium subscribers**.
One of the main advantages of AerisWeather API is its documentation, as well as available developer toolkits for easier integration into your app.
**App Integration/Format**: RESTful calls and responses are formatted in JSON and JSONP
**Pricing**: Pricing starts at $23 / month. Free trial available (2 months).
## 8. Accuweather
Accuweather is probably one of the most well-known names in weather. They offer current, historical, and international weather data, along with other specific data like mosquito activity. But when it comes to their API, they’re most well known for their imagery endpoints.
Note that while you can use this API in both commercial and non-commercial applications, including the Accuweather logo is required.
**App Integration/Format**: RESTful calls and responses are formatted in JSON and JSONP.
**Pricing**: Limited trial offers up to 50 calls per day, then three pricing tiers of $25 to $500/month
## 9. Visual Crossing API
Visual Crossing provides instant access to both historical weather records and weather forecast data, globally. The company aims to bring low-cost data and analysis tools to the public for use in data science, business analytics, machine learning, and other applications.
**App Integration/Format**: CSV, JSON, & OData
**Pricing**: 1000 calls per day, then $35/month.
## Choosing the Best Weather API
The weather API that works best for you as a developer depends heavily on your goals, preferences, project scope, and budget. Numerous services offer data on topics ranging from air quality to earthquakes, fire risk index, and historical to forecast weather.
The services and solutions we listed cover all and any needs you might have as a developer.
Did we miss something? Reach out to let us know!
| true | true | true |
A weather API may not seem like an essential tool for business, but that couldn’t be further from the truth. Learn More!
|
2024-10-12 00:00:00
|
2024-01-05 00:00:00
|
article
|
tomorrow.io
|
Tomorrow.io
| null | null |
|
2,038,362 | null | null | null | true | true | false | null |
2024-10-12 00:00:00
|
2023-03-01 00:00:00
| null | null | null | null | null | null |
|
18,806,170 |
https://www.google.com/maps/place/Anfield/@53.4308326,-2.9630187,17z/data=!3m1!4b1!4m5!3m4!1s0x487b21654b02538b:0x84576a57e21973ff!8m2!3d53.4308294!4d-2.96083
|
Anfield · Anfield Rd, Anfield, Liverpool L4 0TH, United Kingdom
| null |
| true | true | true |
★★★★★ · Stadium
|
2024-10-12 00:00:00
|
2024-10-09 00:00:00
|
https://lh5.googleusercontent.com/p/AF1QipPvumyWhHxt_kO-NjuMHZG4x7eow78BIQoFzNCQ=w900-h900-p-k-no
| null | null |
Anfield · Anfield Rd, Anfield, Liverpool L4 0TH, United Kingdom
| null | null |
39,807,228 |
https://dafoster.net/articles/2024/03/24/redis-relicensing-why-is-this-a-problem/
|
Redis relicensing: Why is this a problem?
|
David Foster
|
I find the fire & heat around Redis changing licensing surprising. Maintaining Redis takes effort which cannot be free in a sustainable fashion.
Consider this post which complains Redis isn’t “uphold[ing] the ideals of Free and Open Source Software”, as if it was created as a vehicle to evangelize the FSF and OSI missions. But it was not, it was created to be *useful*, as a data structure server.
I also disagree with that post’s assertion that Redis contributions would have been withheld if it started with a different license. I expect contributions would come from existing users of Redis who wanted new features themselves and were willing to sponsor the related effort, regardless of the license in use.
Certainly if Redis has started under its new license and I was using Redis on Azure - which Microsoft has already bought a commercial license for - then I wouldn’t care that Redis was licensed under the SSPL.
Frankly **I think it would be a win for software sustainability if there was a trend of new software projects being offered under source-available licenses** requiring large cloud providers like AWS, Azure, and GCP to give back financially to the core maintainers.
I plan to be announcing a project of my own soon under a source-available license restricting unlicensed commercial use. I’ve already put several years of effort into this project. No way in hell do I want to “get Jeff'ed”, with someone else selling what I’ve put so much of my own effort into creating without giving anything back.
| true | true | true | null |
2024-10-12 00:00:00
|
2024-03-24 00:00:00
|
article
| null |
David Foster
| null | null |
|
976,075 |
http://kurtdaal.com/post/266240513/approcket-2-0-0-has-arrived
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
30,180,752 |
https://www.nextplatform.com/2022/02/02/amd-dimensions-for-success-in-the-datacenter/
|
AMD “Dimensions For Success” In The Datacenter
|
Timothy Prickett Morgan
|
Only two quarters ago, AMD’s datacenter business – meaning sales of Epyc CPUs plus Instinct GPU accelerators – broke through $1 billion. And it was a big deal.
If current trends persist, this business will break through $1.5 billion in the first quarter of 2022 and $2 billion in either the third quarter or the fourth quarter of this year. With demand from hyperscalers and cloud builders on the rise for Epyc CPUs, some major exascale-class HPC systems using a mix of Epyc and Instinct compute engines, and enterprises starting to turn to AMD instead of an Intel still struggling to unfold its Xeon SP roadmap, there is every reason to believe that this year will be the best one ever that AMD has had in the datacenter. Just like last year was its best ever. And we could be saying each year is AMD’s best ever in the datacenter for at least a couple of years.
“We have set out a roadmap for, frankly, not just 2022, but beyond, which allows very aggressive growth goals,” Lisa Su, AMD’s chief executive officer, explained in a conference call going over the fourth quarter 2021 financial results for the chip supplier. “We work on a regular basis with our customers and our supply chain partners. I would say we have better visibility than we have ever had from a customer demand standpoint, and so that gives us pretty good confidence in terms of what is needed, but there are always going to be some puts and takes. And so we have enough flexibility to do that. But our goal is to dimension for success. At the end of the day, that’s what we want to do is we want to satisfy customer demand.”
Ironically, given the tightness of the semiconductor foundry capacity and the supply chain for chip making components and packaging and testing, the hyperscalers and cloud builders who were the first buyers of Epyc processors when the line was launched in 2017, have to tell AMD what they need. They can’t be coy and only tell AMD what Epycs or now Instinct compute engines they might want off the AMD roadmap because there is no spare capacity anywhere in any foundries and they might not get them when they need them otherwise. Now, AMD is very much part of the capacity planning thanks to the pandemic-induced semiconductor shortages, and has “visibility now multiple quarters and, in some cases, multiple years out,” as Su put it.
This explains why Su and Team have the confidence to put together expanded wafer supply and chip etching contracts with Taiwan Semiconductor Manufacturing Co, as they did as 2021 was coming to a close. They already know what the customers who consume the majority of their datacenter chips are going to want, and when they are going to want them.
AMD has not experienced this since the peak of the Opteron server era nearly two decades ago.
In the quarter ended in December, AMD’s Compute and Graphics group, which sells PC chips and all GPUs, posted sales of $2.58 billion, up 31.8 percent, with an operating income of $566 million, up 34.8 percent.
The Enterprise, Embedded, and Semi-Custom group, which makes Epyc server chips, game console and other custom chips, as well as embedded versions of Ryzen desktop and Epyc server chips, posted $2.24 billion in sales, up 74.6 percent, with operating income of $762 million, up by a factor of 3.1X over Q4 2020.
Take out other corporate investments and expenses and pay the taxes, and AMD overall had $4.83 billion in sales, up 48.8 percent and had a net income of $974 million, off 45.3 percent but against a Q4 2020 when it booked a $1.3 billion tax benefit. Take that out, and net income more than doubled – just like graphics revenues did, overall datacenter CPU sales did, datacenter GPUs in the Instinct line did, sales to hyperscalers and cloud builders did, sales to enterprises did. It looks like game consoles did, too. So many parts of AMD’s business doubled it is hard to imagine why the company didn’t double its sales, too.
On the call with Wall Street, Su said that the datacenter business, which is not broken out specifically, contributed a “mid-20 percentage of overall revenue” and that AMD expected that datacenter percentage of overall sales to rise as 2022 progressed.
Here’s how the AMD datacenter business stacked up against its two groups:
We estimate that datacenter sales comprised 26 percent of overall revenues, or $1.26 billion in Q4 2021. Of this, Epyc CPU sales came to $1.11 billion, up 103.4 percent year on year, and Instinct GPUs accounted for $148 million in revenues, up 105 percent year on year. We took a wild estimate and figure that in Q3 and Q4 of 2021, the custom “Trento” Epyc 7003 CPUs and “Aldebaran” Instinct MI250X GPU accelerators used in the 1.5 exaflops “Frontier” supercomputer pumped about $223 million into AMD’s coffers, and that this is a little less than half of list price for these motors as best as we can estimate. Those supercomputer deals sure did help the top line, and we wonder if the hyperscalers and cloud builders get similar discounts. (Probably not, especially given the wonkiness and pricing in the CPU and GPU markets right now.)
While AMD says that all of its businesses are expected to grow in 2022, overall revenues are only expected to grow by 31 percent for the full year. AMD had $16.43 billion in sales in 2021, and with 31 percent growth, it would hit $21.5 billion. Now, some fun with math. If the datacenter business can grow at an average of 20 percent sequentially, as it did in 2021, then we can map out future datacenter sales pretty easily and then calculate the percentage of overall sales for each quarter to add up to a total revenue of $21.5 billion for all of 2022.
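To make that arithmetic concrete, here is a quick sketch of the projection. The Q4 2021 starting point and the growth rates are the estimates discussed above, and the resulting quarterly figures are illustrative rather than AMD guidance.

```python
# Project 2022 datacenter revenue from the estimated Q4 2021 base of $1.26B,
# assuming roughly 20 percent sequential growth, and compare against total
# revenue growing to about $21.5B for the full year (31% over 2021's $16.43B).
dc = 1.26          # $B, estimated Q4 2021 datacenter revenue
growth = 0.20      # assumed average sequential growth rate
total_2022 = 21.5  # $B, implied full-year 2022 total

quarters = []
for q in range(1, 5):
    dc *= 1 + growth
    quarters.append(dc)

dc_total = sum(quarters)
print(f"Projected 2022 datacenter revenue: ${dc_total:.2f}B "
      f"({dc_total / total_2022:.0%} of a ${total_2022}B year)")
for q, rev in zip(range(1, 5), quarters):
    print(f"  Q{q} 2022: ${rev:.2f}B")
```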
When you do that, here is what the revenue model we came up looks like:
There are many ways to fit curves to what Su said on the call, but this is a nice linear model that might reflect what customers will do, given shipment times for current and future compute engines and the competitive pressure.
And there will be competitive pressure from Intel on both the CPU front with “Sapphire Rapids” Xeon SPs and “Ponte Vecchio” Xe HPC GPU accelerators. As Intel’s Q4 2021 demonstrated, if you can make any compute engine in volume, even if the competition can kick the tar out of it, you can sell all you can make because companies *have to buy compute engines*. And demand is keeping prices high, even for relatively uncompetitive parts, as has been the case throughout the coronavirus pandemic. AMD is going to be able to hold its own against Intel with 96-core “Genoa” Epyc 7004 CPUs and Instinct MI200 GPU accelerators – and then some. But Su is not complacent about it, and is diversifying the compute engine lines to chase more opportunities.
“We always expect the competitive environment to be very strong and very aggressive,” Su explained. “And that’s the way we plan our business. That being the case, I think we’re very happy with the growth that we’ve seen in the business sort of last year. As we look forward, we see opportunities in both cloud and enterprise. On the cloud side, we are in ten of the largest hyperscalers in the world. As they get familiar with us over multiple generations, they are expanding the workloads that they are using AMD on. So we see that across internal and external workloads. In the enterprise segment, we doubled year-over-year here in 2021. We continue to add more field support to have more people get familiar with our architecture. We have very strong OEM relationships. So I feel very good about our server trajectory. And yes, it’s very competitive out there. But we think the datacenter business is a secular growth business. And within that, we can grow significantly faster than the market.”
At best, the server market will grow revenues by maybe a few points, so AMD is growing amazingly faster than the market at large. And by the way, at least some of that datacenter revenue is going to be driven by AMD passing through higher semiconductor manufacturing costs to customers and opportunistic pricing as demand chases too little supply.
Thank you for the very knowledgeable article.
“AMD is going to be able to hold its own…” against Intel, with 96-core Genoa? That is a very conservative estimate, considering you won’t see SPR until after Genoa is released.
AMD “Datacenter” = Instinct + Epyc = “INSTEPYC”
It’s not just feeds and speeds that matter, but availability. If AMD only makes X CPUs and Y GPUs for compute, that’s all it has got. Intel gets what is leftover. As its own Q4 shows. The server market had a killer quarter, and that is why Intel did. I strongly suspect some channel stuffing by Intel, but I can’t prove it.
A facet of this transparency is risk/funding, as we are seeing with TSMC.
~”If you want us to go out on an investment limb to assure ur future needs, we need some prepaid orders.”
TSMC have collected many such billions to fund new fabs – even from Intel. Sweet
| true | true | true |
Only two quarters ago, AMD’s datacenter business – meaning sales of Epyc CPUs plus Instinct GPU accelerators – broke through $1 billion. And it was a big
|
2024-10-12 00:00:00
|
2022-02-02 00:00:00
|
http://www.nextplatform.com/wp-content/uploads/2021/11/amd-mi200-oam-logo.jpg
|
article
|
nextplatform.com
|
The Next Platform
| null | null |
14,300,296 |
https://directorsblog.nih.gov/2017/05/09/muscle-enzyme-explains-weight-gain-in-middle-age/
|
Muscle Enzyme Explains Weight Gain in Middle Age
|
Dr Francis Collins
|
# Muscle Enzyme Explains Weight Gain in Middle Age
Posted on May 9, 2017, by Dr. Francis Collins
The struggle to maintain a healthy weight is a lifelong challenge for many of us. In fact, the average American packs on an extra 30 pounds from early adulthood to age 50. What’s responsible for this tendency toward middle-age spread? For most of us, too many calories and too little exercise definitely play a role. But now comes word that another reason may lie in a strong—and previously unknown—biochemical mechanism related to the normal aging process.
An NIH-led team recently discovered that the normal process of aging causes levels of an enzyme called DNA-PK to rise in animals as they approach middle age. While the enzyme is known for its role in DNA repair, their studies show it also slows down metabolism, making it more difficult to burn fat. To see if reducing DNA-PK levels might rev up the metabolism, the researchers turned to middle-aged mice. They found that a drug-like compound that blocked DNA-PK activity cut weight gain in the mice by a whopping 40 percent!
Jay H. Chung, an intramural researcher with NIH’s National Heart, Lung, and Blood Institute, had always wondered why many middle-aged people and animals gain weight even when they eat less. To explain this paradox, his team looked to biochemical changes in the skeletal muscles of middle-aged mice and rhesus macaques, whose stage in life would be roughly equivalent to a 45-year-old person.
Their studies, published recently in *Cell Metabolism*, uncovered evidence in both species that DNA-PK increases in skeletal muscle with age [1]. The discovery proved intriguing because the enzyme’s role in aging was completely unknown. DNA-PK was actually pretty famous for a totally different role in DNA repair, specifically its promotion of splicing the DNA of developing white blood cells called lymphocytes. In fact, lymphocytes fail to mature in mice without a working copy of the enzyme, causing a devastating immune disorder known as severe combined immunodeficiency (SCID).
Further study by Chung’s team showed that DNA-PK in the muscle acted as a brake that gradually slows down metabolism. The researchers found in these muscle cells that DNA-PK decreases the capacity of the mitochondria, the powerhouses that burn fat for energy. The enzyme also causes a decline in the number of mitochondria in these cells.
The researchers suspected that an increase in DNA-PK in middle age might lead directly to weight gain. If correct, then blocking the enzyme should have the opposite effect and help stop these mice from piling on the pounds.
Indeed, it did. When the researchers treated obese mice with a drug called a DNA-PK inhibitor, they gained considerably less weight while fed a high-fat diet. The treatment also protected the animals from developing early signs of diabetes, which is associated with obesity. Fortunately, there was no sign of trouble in the immune systems of middle-aged mice treated with the DNA-PK inhibitor, presumably because those essential DNA splicing events in lymphocytes had already occurred. Neither was there a sign of serious side effects, such as cancer.
As people age and their weight increases, they also tend to become less physically fit. The new evidence implicates DNA-PK in that process, too. Obese and middle-aged mice treated with the DNA-PK inhibitor showed increased running endurance. With treatment, they ran about twice as long on a tiny mouse treadmill than they would normally.
While the findings are in mice, they suggest that an increase in DNA-PK could explain why it becomes so frustratingly difficult for many of us to stay lean and fit as we age. It also paves the way for the development of a new kind of weight-loss medication designed to target this specific biochemical change that comes with middle age.
Chung says they are now looking for DNA-PK inhibitors that might work even better than the one in this study. But given the fact that DNA-PK has other roles, testing its safety and effectiveness will take time.
While we await the results, the best course to help fight that middle-age spread hasn’t changed. Eat right and follow an exercise plan that you know you can stick to—it will make you feel better. Take it from me, a guy who decided eight years ago that it was time to shape up, stopped eating honey buns, got into a regular exercise program with a trainer to keep me accountable, and lost those 30 pounds. You can do it, even without a DNA-PK inhibitor!
**Reference**:
[1] DNA-PK promotes the mitochondrial, metabolic, and physical decline that occurs during aging. Park SJ, Gavrilova O, Brown AL, Soto JE, Bremner S, Kim J, Xu X, Yang S, Um JH, Koch LG, Britton SL, Lieber RL, Philp A, Baar K, Kohama SG, Abel ED, Kim MK, Chung JH. Cell Metab. 2017 May 2;25(5):1135-1146.
**Links**:
Overweight and Obesity (National Heart, Lung, and Blood Institute/NIH)
Health Tips for Older Adults (National Institute of Diabetes and Digestive and Kidney Diseases/NIH)
Jay H. Chung (National Heart, Lung, and Blood Institute/NIH)
*NIH Support: National Heart, Lung, and Blood Institute; Office of the Director*
Tags: aging, aging process, biochemistry, diabetes, diet, DNA repair, DNA-PK, DNA-PK inhibitor, fat, healthy weight, lymphocytes, metabolism, middle age, mitochondria, muscle, obesity, overweight, physical fitness, SCID, severe combined immunodeficiency, skeletal muscle, weight gain, weight loss, weight loss medication
Is it known if exercise can act as the DNA-PK inhibitor?
Good to know, will focus on eating well and exercise.
Useful studies, thanks!
Wow! I love this blog …
This was a very meaningful post, so informative and encouraging. Thank you.
| true | true | true |
The struggle to maintain a healthy weight is a lifelong challenge for many of us. In fact, the average American packs on an extra 30 pounds from early adulthood to age 50. What’s responsible for th…
|
2024-10-12 00:00:00
|
2017-05-09 00:00:00
|
article
|
directorsblog.nih.gov
|
NIH Director's Blog
| null | null |
|
9,699,401 |
http://5pi.de/2015/02/10/prometheus-on-raspberry-pi/
| null | null |
| true | true | true | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
4,214,561 |
https://www.eff.org/press/releases/three-nsa-whistleblowers-back-effs-lawsuit-over-governments-massive-spying-program
|
Three NSA Whistleblowers Back EFF's Lawsuit Over Government's Massive Spying Program
|
Press Release
|
San Francisco - Three whistleblowers – all former employees of the National Security Agency (NSA) – have come forward to give evidence in the Electronic Frontier Foundation's (EFF's) lawsuit against the government's illegal mass surveillance program, Jewel v. NSA.
In a motion filed today, the three former intelligence analysts confirm that the NSA has, or is in the process of obtaining, the capability to seize and store most electronic communications passing through its U.S. intercept centers, such as the "secret room" at the AT&T facility in San Francisco first disclosed by retired AT&T technician Mark Klein in early 2006.
"For years, government lawyers have been arguing that our case is too secret for the courts to consider, despite the mounting confirmation of widespread mass illegal surveillance of ordinary people," said EFF Legal Director Cindy Cohn. "Now we have three former NSA officials confirming the basic facts. Neither the Constitution nor federal law allow the government to collect massive amounts of communications and data of innocent Americans and fish around in it in case it might find something interesting. This kind of power is too easily abused. We're extremely pleased that more whistleblowers have come forward to help end this massive spying program."
The three former NSA employees with declarations in EFF's brief are William E. Binney, Thomas A. Drake, and J. Kirk Wiebe. All were targets of a federal investigation into leaks to the New York Times that sparked the initial news coverage about the warrantless wiretapping program. Binney and Wiebe were formally cleared of charges and Drake had those charges against him dropped.
Jewel v. NSA is back in district court after the 9th U.S. Circuit Court of Appeals reinstated it in late 2011. In the motion for partial summary judgment filed today, EFF asked the court to reject the stale state secrets arguments that the government has been using in its attempts to sidetrack this important litigation and instead apply the processes in the Foreign Intelligence Surveillance Act that require the court to determine whether electronic surveillance was conducted legally.
"The NSA warrantless surveillance programs have been the subject of widespread reporting and debate for more than six years now. They are just not a secret," said EFF Senior Staff Attorney Lee Tien. "Yet the government keeps making the same 'state secrets' claims again and again. It's time for Americans to have their day in court and for a judge to rule on the legality of this massive surveillance."
For the full motion for partial summary judgment:
https://www.eff.org/document/plaintiffs-motion-partial-summary-judgment
For more on this case:
https://www.eff.org/cases/jewel
Contacts:
Cindy Cohn
Legal Director
Electronic Frontier Foundation
[email protected]
Lee Tien
Senior Staff Attorney
Electronic Frontier Foundation
[email protected]
| true | true | true |
San Francisco - Three whistleblowers – all former employees of the National Security Agency (NSA) – have come forward to give evidence in the Electronic Frontier Foundation's (EFF's) lawsuit against the government's illegal mass surveillance program, Jewel v. NSA. In a motion filed today, the three...
|
2024-10-12 00:00:00
|
2012-07-02 00:00:00
|
article
|
eff.org
|
Electronic Frontier Foundation
| null | null |
|
9,065,374 |
https://medium.com/cuepoint/consider-the-mix-tape-9f37839c1246
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,227,886 |
http://www.how-to-hack.net
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,945,989 |
https://en.wikipedia.org/wiki/British_Airways_Flight_5390
|
British Airways Flight 5390 - Wikipedia
| null |
# British Airways Flight 5390
Accident | |
---|---|
Date | 10 June 1990 |
Summary | Explosive decompression of cockpit window due to poor maintenance procedures |
Site | Didcot, Oxfordshire, United Kingdom 51°36′21″N 1°14′27″W / 51.60583°N 1.24083°W |
Aircraft | |
Aircraft type | BAC One-Eleven 528FL |
Aircraft name | County of South Glamorgan |
Operator | British Airways |
IATA flight No. | BA5390 |
ICAO flight No. | BAW5390 |
Call sign | SPEEDBIRD 5390 |
Registration | G-BJRT |
Flight origin | Birmingham Airport, United Kingdom |
Destination | Málaga Airport, Spain |
Occupants | 87 |
Passengers | 81 |
Crew | 6 |
Fatalities | 0 |
Injuries | 2 |
Survivors | 87 |
**British Airways Flight 5390** was a flight from Birmingham Airport in England for Málaga Airport in Spain. On 10 June 1990, the BAC One-Eleven 528FL suffered an explosive decompression. While the aircraft was flying over Didcot, Oxfordshire, an improperly installed windscreen panel separated from its frame, causing the captain to be partially ejected from the aircraft. He was held in place through the window frame for 20 minutes until the first officer landed at Southampton Airport.[1]
## Background
### Aircraft
The *County of South Glamorgan* was a BAC One-Eleven Series 528FL jet airliner, registered as G-BJRT.[2]
### Crew
The captain was 42-year-old Timothy Lancaster, who had logged 11,050 flight hours, including 1,075 hours on the BAC One-Eleven; the copilot was 39-year-old Alastair Atchison, with 7,500 flight hours, with 1,100 of them on the BAC One-Eleven.[3] The aircraft also carried four cabin crew and 81 passengers.
## Accident
Atchison handled a routine take-off at 08:20 local time (07:20 UTC), then handed control to Lancaster as the plane continued to climb. Both pilots released their shoulder harnesses and Lancaster loosened his lap belt. At 08:33 (07:33 UTC), the plane had climbed through about 17,300 feet (5,300 m)[3]: 3 over Didcot, Oxfordshire, and the cabin crew were preparing for meal service.
Flight attendant Nigel Ogden was entering the cockpit when a loud bang occurred[4] and the cabin quickly filled with condensation. The left windscreen panel, on Lancaster's side of the flight deck, had separated from the forward fuselage; Lancaster was propelled out of his seat by the rushing air from the decompression and forced headfirst out of the flight deck. His knees were caught on the flight controls and his upper torso remained outside the aircraft, exposed to extreme wind and cold. The autopilot disengaged, causing the plane to descend rapidly.[4] The flight deck door was blown inward onto the control console, blocking the throttle control (causing the aircraft to gain speed as it descended), flight documents and check lists were blown out of the cockpit, and debris blew in from the passenger cabin. Ogden rushed to grab Lancaster's belt, while the other two flight attendants secured loose objects, reassured passengers, and instructed them to adopt brace positions in anticipation of an emergency landing.
The plane was not equipped with oxygen for everyone on board, so Atchison began a rapid emergency descent to reach an altitude with sufficient air pressure. He then re-engaged the autopilot and broadcast a distress call, but he was unable to hear the response from air traffic control (ATC) because of wind noise; the difficulty in establishing two-way communication led to a delay in initiation of emergency procedures.
Ogden, still holding on to Lancaster, was by now becoming exhausted, so Chief Steward John Heward and flight attendant Simon Rogers took over the task of holding on to the captain.[5] By this time, Lancaster had shifted several centimetres farther outside and his head was repeatedly striking the side of the fuselage. The crew believed him to be dead, but Atchison told the others to continue holding onto him, out of fear that letting go of him might cause him to strike the left wing, engine, or horizontal stabiliser, potentially damaging it.
Eventually, Atchison was able to hear the clearance from ATC to make an emergency landing at Southampton Airport. The flight attendants managed to free Lancaster's ankles from the flight controls while still keeping hold of him. At 08:55 local time (07:55 UTC), the aircraft landed at Southampton and the passengers disembarked using boarding steps.[6]
Lancaster survived with frostbite, bruising, shock, and fractures to his right arm, left thumb, and right wrist.[4][7] Ogden had frostbite in his face, a dislocated shoulder, and later suffered from post-traumatic stress disorder. There were no other injuries.[7]
## Investigation
Police located the blown-off windscreen panel and many of the 90 bolts used to secure it near Cholsey, Oxfordshire.[3]: 12 Investigators determined that when the windscreen was installed 27 hours before the flight, 84 of the bolts used were 0.026 inches (0.66 mm) too small in diameter (British Standards A211-8C vs A211-8D, which are #8–32 vs #10–32 by the Unified Thread Standard) and the remaining six were A211-7D, which is the correct diameter, but 0.1 inches (2.5 mm) too short (0.7 inch vs. 0.8 inch).[3]: 52 The previous windscreen had also been fitted using incorrect bolts, which were replaced by the shift maintenance manager on a like-for-like basis without reference to maintenance documentation, as the plane was due to depart shortly.[3]: 38 The undersized bolts were unable to withstand the force due to the air pressure difference between the cabin and the outside atmosphere during flight.
(The windscreen was not of the "plug" type – fitted from the inside so that cabin pressure helps to hold it in place, but of the type fitted from the outside so that cabin pressure tends to dislodge it.)[3]: 7
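As a rough illustration of the load involved (a minimal back-of-the-envelope sketch: the pressure differential and panel area below are assumed values chosen for illustration, not figures from the AAIB report), the outward force on the panel is simply the cabin-to-ambient pressure difference multiplied by the panel area, shared across the 90 bolts:

```python
# Back-of-the-envelope estimate of the outward load on a pressurised windscreen panel.
# delta_p and area are illustrative assumptions, NOT values from the AAIB report;
# only the bolt count (90) comes from the accident description above.

delta_p = 40_000.0   # assumed cabin-to-ambient pressure difference in pascals (~0.4 bar)
area = 0.5           # assumed windscreen panel area in square metres
n_bolts = 90         # bolts securing the panel

total_force = delta_p * area            # outward force on the panel, in newtons
force_per_bolt = total_force / n_bolts  # average share carried by each bolt

print(f"Total outward load: {total_force / 1000:.1f} kN")   # ~20 kN with these assumptions
print(f"Average load per bolt: {force_per_bolt:.0f} N")     # ~220 N per bolt
```

Even under these illustrative assumptions every bolt must carry a meaningful share of the load, which is why threads that are undersized in diameter or length, and so cannot develop their full clamping strength, are critical.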
Investigators found that the shift maintenance manager responsible for installing the incorrect bolts had failed to follow British Airways policies. They recommended that staff with prescription glasses should be required to wear them when undertaking maintenance tasks. They also faulted the policies themselves, which should have required testing or verification by another individual for this critical task. Finally, they found the local Birmingham Airport management responsible for not directly monitoring the shift maintenance manager's working practices.[3]: 55
## Awards
First Officer Alastair Atchison and cabin crew members Susan Gibbins and Nigel Ogden were awarded the Queen's Commendation for Valuable Service in the Air; Ogden's name was erroneously omitted from the published supplement.[8] Atchison was also awarded a 1992 Polaris Award for outstanding airmanship.[9]
## Aftermath
The aircraft was repaired and returned to service. In 1993 it was sold to Jaro International and flew with them until they ceased operations in 2001; the aircraft was scrapped in 2002.[citation needed]
Lancaster returned to work after less than five months. He left British Airways in 2003 and flew with EasyJet until he retired from commercial piloting in 2008.[4][7]
Atchison left British Airways shortly after the accident and joined Channel Express (later rebranded as Jet2) until he made his last commercial flight on a Boeing 737-33A from Alicante to Manchester on the day of his 65th birthday on 28 June 2015.[4]
Ogden returned to work, but subsequently suffered from PTSD and retired in 2001 on the grounds of ill health. As of 2005, he was working as a night watchman at a Salvation Army hospital.[7]
## See also
- Sichuan Airlines Flight 8633, a similar accident in which the first officer survived being partially blown out of the cockpit following a failure of the windshield of the Airbus A319 operating the flight
- Southwest Airlines Flight 1380 had an explosive decompression accident following an uncontained engine failure that resulted in a passenger being partially blown out of a window; the passenger later died from her injuries.
## References
1. Ranter, Harro. "ASN Aircraft accident BAC One-Eleven 528FL G-BJRT Didcot". *Aviation Safety Network*. Flight Safety Foundation. Archived from the original on 16 July 2019. Retrieved 28 August 2020.
2. "G-INFO Database". *Civil Aviation Authority*.
3. *Report No: 1/1992. Report on the accident to BAC One-Eleven, G-BJRT, over Didcot, Oxfordshire on 10 June 1990* (Report). Air Accidents Investigation Branch, HMSO. 1 February 1992. ISBN 0115510990. Archived from the original on 29 September 2017. Retrieved 29 September 2017.
4. "Tributes to the reluctant hero of Flight 5390". *The Sunday Post* (Inverness). 5 July 2015. Archived from the original on 24 October 2020. Retrieved 26 April 2020.
5. "June 10, 1990: Miracle of BA Flight 5390 as captain is sucked out of the cockpit – and survives". *BT*. 2018. Archived from the original on 12 January 2019. Retrieved 26 April 2018.
6. "Image of pilot hanging out window captures heroic story 30 years on". *NZ Herald*. Retrieved 17 September 2021.
7. "This is your captain screaming (interview with Nigel Ogden)". *The Sydney Morning Herald*. 5 February 2005. Archived from the original on 10 April 2008. Retrieved 21 April 2008.
8. "No. 52767". *The London Gazette* (Supplement). 30 December 1991. p. 27.
9. Lee, Jim (13 September 2015). "Jet2.com announces significant investment in additional aircraft". *FlyingInIreland.com*. Retrieved 8 March 2023. "His skill and heroism was recognised by the awarding of the Queen's Commendation for Valuable Service in the Air and a 1992 Polaris Award which is the highest decoration associated with civil aviation, ..."
## External links
- Air Accidents Investigation Branch
- Report No: 1/1992. Report on the accident to BAC One-Eleven, G-BJRT, over Didcot, Oxfordshire on 10 June 1990
- Final report
- Transcript of Air Traffic Control communications during the incident (Archive)
- Summary of the Final Report (Archive)
- Database entry (Archive)
- News article showing image of cockpit exterior after landing Archived 12 January 2019 at the Wayback Machine
| true | true | true | null |
2024-10-12 00:00:00
|
2004-12-31 00:00:00
|
website
|
wikipedia.org
|
Wikimedia Foundation, Inc.
| null | null |
|
19,928,036 |
http://www.bbc.com/future/story/20190513-it-only-takes-35-of-people-to-change-the-world
|
The '3.5% rule': How a small minority can change the world
|
David Robson
|
# The '3.5% rule': How a small minority can change the world
**Nonviolent protests are twice as likely to succeed as armed conflicts – and those engaging a threshold of 3.5% of the population have never failed to bring about change.**
In 1986, millions of Filipinos took to the streets of Manila in peaceful protest and prayer in the People Power movement. The Marcos regime folded on the fourth day.
In 2003, the people of Georgia ousted Eduard Shevardnadze through the bloodless Rose Revolution, in which protestors stormed the parliament building holding the flowers in their hands. While in 2019, the presidents of Sudan and Algeria both announced they would step aside after decades in office, thanks to peaceful campaigns of resistance.
In each case, civil resistance by ordinary members of the public trumped the political elite to achieve radical change.
There are, of course, many ethical reasons to use nonviolent strategies. But compelling research by Erica Chenoweth, a political scientist at Harvard University, confirms that civil disobedience is not only the moral choice; it is also the most powerful way of shaping world politics – by a long way.
Looking at hundreds of campaigns over the last century, Chenoweth found that nonviolent campaigns are twice as likely to achieve their goals as violent campaigns. And although the exact dynamics will depend on many factors, she has shown it takes around 3.5% of the population actively participating in the protests to ensure serious political change.
Chenoweth’s influence can be seen in the recent Extinction Rebellion protests, whose founders say they have been directly inspired by her findings. So just how did she come to these conclusions?
Needless to say, Chenoweth’s research builds on the philosophies of many influential figures throughout history. The African-American abolitionist Sojourner Truth, the suffrage campaigner Susan B Anthony, the Indian independence activist Mahatma Gandhi and the US civil rights campaigner Martin Luther King have all convincingly argued for the power of peaceful protest.
Yet Chenoweth admits that when she first began her research in the mid-2000s, she was initially rather cynical of the idea that nonviolent actions could be more powerful than armed conflict in most situations. As a PhD student at the University of Colorado, she had spent years studying the factors contributing to the rise of terrorism when she was asked to attend an academic workshop organised by the International Center of Nonviolent Conflict (ICNC), a non-profit organisation based in Washington DC. The workshop presented many compelling examples of peaceful protests bringing about lasting political change – including, for instance, the People Power protests in the Philippines.
But Chenoweth was surprised to find that no-one had comprehensively compared the success rates of nonviolent versus violent protests; perhaps the case studies were simply chosen through some kind of confirmation bias. “I was really motivated by some scepticism that nonviolent resistance could be an effective method for achieving major transformations in society,” she says.
Working with Maria Stephan, a researcher at the ICNC, Chenoweth performed an extensive review of the literature on civil resistance and social movements from 1900 to 2006 – a data set then corroborated with other experts in the field. They primarily considered attempts to bring about regime change. A movement was considered a success if it fully achieved its goals both within a year of its peak engagement and as a direct result of its activities. A regime change resulting from foreign military intervention would not be considered a success, for instance. A campaign was considered violent, meanwhile, if it involved bombings, kidnappings, the destruction of infrastructure – or any other physical harm to people or property.
“We were trying to apply a pretty hard test to nonviolent resistance as a strategy,” Chenoweth says. (The criteria were so strict that India’s independence movement was not considered as evidence in favour of nonviolent protest in Chenoweth and Stephan’s analysis – since Britain’s dwindling military resources were considered to have been a deciding factor, even if the protests themselves were also a huge influence.)
By the end of this process, they had collected data from 323 violent and nonviolent campaigns. And their results – which were published in their book Why Civil Resistance Works: The Strategic Logic of Nonviolent Conflict – were striking.
**Strength in numbers**
Overall, nonviolent campaigns were twice as likely to succeed as violent campaigns: they led to political change 53% of the time compared to 26% for the violent protests.
This was partly the result of strength in numbers. Chenoweth argues that nonviolent campaigns are more likely to succeed because they can recruit many more participants from a much broader demographic, which can cause severe disruption that paralyses normal urban life and the functioning of society.
In fact, of the 25 largest campaigns that they studied, 20 were nonviolent, and 14 of these were outright successes. Overall, the nonviolent campaigns attracted around four times as many participants (200,000) as the average violent campaign (50,000).
The People Power campaign against the Marcos regime in the Philippines, for instance, attracted two million participants at its height, while the Brazilian uprising in 1984 and 1985 attracted one million, and the Velvet Revolution in Czechoslovakia in 1989 attracted 500,000 participants.
“Numbers really matter for building power in ways that can really pose a serious challenge or threat to entrenched authorities or occupations,” Chenoweth says – and nonviolent protest seems to be the best way to get that widespread support.
Once around 3.5% of the whole population has begun to participate actively, success appears to be inevitable.
“There weren’t any campaigns that had failed after they had achieved 3.5% participation during a peak event,” says Chenoweth – a phenomenon she has called the “3.5% rule”. Besides the People Power movement, that included the Singing Revolution in Estonia in the late 1980s and the Rose Revolution in Georgia in early 2003.
Chenoweth admits that she was initially surprised by her results. But she now cites many reasons that nonviolent protests can garner such high levels of support. Perhaps most obviously, violent protests necessarily exclude people who abhor and fear bloodshed, whereas peaceful protesters maintain the moral high ground.
Chenoweth points out that nonviolent protests also have fewer physical barriers to participation. You do not need to be fit and healthy to engage in a strike, whereas violent campaigns tend to lean on the support of physically fit young men. And while many forms of nonviolent protests also carry serious risks – just think of China’s response in Tiananmen Square in 1989 – Chenoweth argues that nonviolent campaigns are generally easier to discuss openly, which means that news of their occurrence can reach a wider audience. Violent movements, on the other hand, require a supply of weapons, and tend to rely on more secretive underground operations that might struggle to reach the general population.
By engaging broad support across the population, nonviolent campaigns are also more likely to win support among the police and the military – the very groups that the government should be leaning on to bring about order.
During a peaceful street protest of millions of people, the members of the security forces may also be more likely to fear that their family members or friends are in the crowd – meaning that they fail to crack down on the movement. “Or when they’re looking at the [sheer] numbers of people involved, they may just come to the conclusion the ship has sailed, and they don’t want to go down with the ship,” Chenoweth says.
In terms of the specific strategies that are used, general strikes “are probably one of the most powerful, if not the most powerful, single method of nonviolent resistance”, Chenoweth says. But they do come at a personal cost, whereas other forms of protest can be completely anonymous. She points to the consumer boycotts in apartheid-era South Africa, in which many black citizens refused to buy products from companies with white owners. The result was an economic crisis among the country’s white elite that contributed to the end of segregation in the early 1990s.
“There are more options for engaging and nonviolent resistance that don’t place people in as much physical danger, particularly as the numbers grow, compared to armed activity,” Chenoweth says. “And the techniques of nonviolent resistance are often more visible, so that it's easier for people to find out how to participate directly, and how to coordinate their activities for maximum disruption.”
**A magic number?**
These are very general patterns, of course, and despite being twice as successful as the violent conflicts, peaceful resistance still failed 47% of the time. As Chenoweth and Stephan pointed out in their book, that’s sometimes because they never really gained enough support or momentum to “erode the power base of the adversary and maintain resilience in the face of repression”. But some relatively large nonviolent protests also failed, such as the protests against the communist party in East Germany in the 1950s, which attracted 400,000 members (around 2% of the population) at their peak, but still failed to bring about change.
In Chenoweth’s data set, it was only once the nonviolent protests had achieved that 3.5% threshold of active engagement that success seemed to be guaranteed – and raising even that level of support is no mean feat. In the UK it would amount to 2.3 million people actively engaging in a movement (roughly twice the size of Birmingham, the UK’s second largest city); in the US, it would involve 11 million citizens – more than the total population of New York City.
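As a quick check of those figures (a minimal sketch: the population totals below are rough assumptions of about 66 million for the UK and 327 million for the US, not official statistics), the arithmetic is just 3.5% of each population:

```python
# Quick arithmetic behind the 3.5% participation figures quoted above.
# Population totals are rough assumptions, not official statistics.

populations = {
    "United Kingdom": 66_000_000,
    "United States": 327_000_000,
}

threshold = 0.035  # Chenoweth's 3.5% active-participation threshold

for country, pop in populations.items():
    print(f"{country}: {threshold * pop / 1e6:.1f} million active participants")

# United Kingdom: 2.3 million, United States: 11.4 million,
# consistent with the figures quoted in the article.
```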
The fact remains, however, that nonviolent campaigns are the only reliable way of maintaining that kind of engagement.
Chenoweth and Stephan’s initial study was first published in 2011 and their findings have attracted a lot of attention since. “It’s hard to overstate how influential they have been to this body of research,” says Matthew Chandler, who researches civil resistance at the University of Notre Dame in Indiana.
Isabel Bramsen, who studies international conflict at the University of Copenhagen, agrees that Chenoweth and Stephan’s results are compelling. “It’s [now] an established truth within the field that the nonviolent approaches are much more likely to succeed than violent ones,” she says.
Regarding the “3.5% rule”, she points out that while 3.5% is a small minority, such a level of *active* participation probably means many more people tacitly agree with the cause.
These researchers are now looking to further untangle the factors that may lead to a movement’s success or failure. Bramsen and Chandler, for instance, both emphasise the importance of unity among demonstrators.
As an example, Bramsen points to the failed uprising in Bahrain in 2011. The campaign initially engaged many protestors, but quickly split into competing factions. The resulting loss of cohesion, Bramsen thinks, ultimately prevented the movement from gaining enough momentum to bring about change.
Chenoweth’s interest has recently focused on protests closer to home – like the Black Lives Matter movement and the Women’s March in 2017. She is also interested in Extinction Rebellion, recently popularised by the involvement of the Swedish activist Greta Thunberg. “They are up against a lot of inertia,” she says. “But I think that they have an incredibly thoughtful and strategic core. And they seem to have all the right instincts about how to develop and teach through a nonviolent resistance campaign.”
Ultimately, she would like our history books to pay greater attention to nonviolent campaigns rather than concentrating so heavily on warfare. “So many of the histories that we tell one another focus on violence – and even if it is a total disaster, we still find a way to find victories within it,” she says. Yet we tend to ignore the success of peaceful protest, she says.
“Ordinary people, all the time, are engaging in pretty heroic activities that are actually changing the way the world – and those deserve some notice and celebration as well.”
*David Robson is a senior journalist at BBC Future. Follow him on Twitter: **@d_a_robson**.*
| true | true | true |
Nonviolent protests are twice as likely to succeed as armed conflicts – and those engaging a threshold of 3.5% of the population have never failed to bring about change.
|
2024-10-12 00:00:00
|
2019-05-14 00:00:00
|
newsarticle
|
bbc.com
|
BBC
| null | null |
|
23,314,853 |
https://techcrunch.com/2020/05/26/preventing-food-waste-nets-apeel-250-million-from-singapores-government-oprah-and-katy-perry/
|
Preventing food waste nets Apeel $250 million from Singapore's government, Oprah and Katy Perry | TechCrunch
|
Jonathan Shieber
|
Food waste and the pressures on the global food supply chain wrought by the COVID-19 pandemic have captured headlines around the world, and one small startup based in the coastal California city of Santa Barbara has just announced $250 million in financing to provide a solution.
The company is called Apeel Sciences, and over the past eight years it has grown from a humble startup launched with a $100,000 grant from the Gates Foundation to a giant, globe-spanning company worth more than $1 billion and attracting celebrity backers like Oprah Winfrey and Katy Perry, as well as large multi-national investors like Singapore’s sovereign wealth fund.
What’s drawn these financiers and the fabulously famous to invest is the technology that Apeel has developed, which promises to keep food fresh for longer on store shelves, preventing waste and (somewhat counterintuitively) encouraging shoppers to buy more vegetables.
At least, that’s the pitch that Apeel Sciences founder and chief executive James Rogers has been making for the last eight years. It has netted his company roughly $360 million in total financing and attracted investors like Upfront Ventures, S2G Ventures, Andreessen Horowitz and Powerplant Ventures.
“The [food] system is taxed beyond its limit,” says Rogers. “We view our job at Apeel to build the food system and support the weight of a couple of more billion people on the planet.”
Rogers started working on the technology that would become the core of Apeel’s product while pursuing his doctorate at the University of California, Santa Barbara. The first-time entrepreneur’s epiphany came on the road from Lawrence Livermore Laboratory where he was working as an intern.
Driving past acres of California cropland, Rogers surmised that the problem with the existing food supply network wasn’t necessarily the ability to produce enough food; it was that much of that food is spoiled and wasted between where it’s grown and where it needs to be distributed.
In the past, farmers had turned to pesticides to prevent disease and infestations that could kill crops, and preservative methods like single-use plastic packaging or chemical treatments that had the seeds of other environmental catastrophes.
“We’re out of shortcuts,” says Rogers. “Single-use plastic had its day and pesticides had their day.” For Rogers, it’s time for Apeel’s preservative technologies to have their day.
With all the new cash in Apeel’s coffers, Rogers said that the company would begin expanding its operations and working with the big farming companies and growers in Africa, Central America and South America. “To maintain 52 weeks of supply on shelves we need to have operations in the Northern and Southern hemispheres,” Rogers said.
For all of the company’s lofty goals, the company is working with a relatively limited range of produce — avocados, asparagus, lemons and limes. Still, the pitch — and Rogers’ vision — is much broader. “Let’s take what the orange knows and teach it to the cucumber so that it doesn’t have to be wrapped in plastic,” says Rogers. “When you reduce that waste there’s a ton of economic value that is unlocked.”
Right now, the way the business works is through convincing retailers about all that economic value that’s waiting to be unlocked.
In practice, once a company agrees to try out Apeel’s technology, it installs the company’s treatment systems at the back end of its supply chain where all of their vegetable deliveries come in to be shipped to various locations, according to Rogers.
A single run of Apeel’s system can treat 10,000 kilograms of food in an hour, Rogers said. So far this year, Apeel is on track to treat 20 million pieces of fruit with its coatings, the company said.
Apeel Sciences is already working with food retailers in the U.S. and Europe. On average, grocers that use Apeel have experienced a 50% reduction in shrink, a 5-10% growth in dollar sales and an incremental 10% growth in dollar sales when sold in conjunction with in-store marketing campaigns, the company said.
“Food waste is an invisible tax imposed on everyone that participates in the food system. Eliminating global food waste can free up $2.6 trillion annually, allowing us to make the food ecosystem better for growers, distributors, retailers, consumers and our planet,” said Rogers in a statement. “Together, we’re putting time back on the industry’s side to help deal with the food waste crisis and the challenges it poses to food businesses.”
| true | true | true |
Food waste and the pressures on the global food supply chain wrought by the COVID-19 pandemic have captured headlines around the world, and one small
|
2024-10-12 00:00:00
|
2020-05-26 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
998,948 |
http://www.dearstevejobs.com/en
|
dearstevejobs.com
| null |
Buy this domain.
dearstevejobs.com
| true | true | true |
This domain may be for sale!
|
2024-10-12 00:00:00
|
2024-01-01 00:00:00
| null | null | null | null | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.