id (int64) | url (string) | title (string) | author (string) | markdown (string) | downloaded (bool) | meta_extracted (bool) | parsed (bool) | description (string) | filedate (string) | date (string) | image (string) | pagetype (string) | hostname (string) | sitename (string) | tags (string) | categories (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
20,490,533 |
https://github.com/SavchenkoValeriy/emacs-chocolate-theme
|
GitHub - SavchenkoValeriy/emacs-chocolate-theme: 🍫A dark chocolatey theme for Emacs 🍫
|
SavchenkoValeriy
|
Poor doggies can't experience it because of two reasons
Chocolate theme is my personal attempt to develop a complex fully-functional theme for Emacs. It's dark, it's chocolaty, it's vibrant and it's subtle, it's whatever you're looking for in a perfect theme for development.
It's available on Melpa:
`M-x package-install [RET] chocolate-theme [RET]`
`M-x load-theme [RET] chocolate [RET]`
Or using use-package:
```
(use-package chocolate-theme
:ensure t
:config
(load-theme 'chocolate t))
```
Alternatively, to install it manually, download chocolate-theme.el and copy it into the `~/.emacs.d/themes` directory. Then add the following code to your `init.el` configuration:
```
(add-to-list 'custom-theme-load-path "~/.emacs.d/themes")
(load-theme 'chocolate t)
```
All contributions are most welcome!
I obviously don't use all of the modes out there, and `chocolate-theme` can have a few blind spots. If you notice one, feel free to report it. Please add a meaningful example and a screenshot. If you can fix it on your own, I'll try to look at your pull request ASAP! But examples and screenshots will still make it faster.
`chocolate-theme` is based on a great palette from the firewatch-hot-syntax theme for Atom (which in its turn was inspired by firewatch-syntax).
| true | true | true |
🍫A dark chocolatey theme for Emacs 🍫. Contribute to SavchenkoValeriy/emacs-chocolate-theme development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2018-04-19 00:00:00
|
https://opengraph.githubassets.com/b51930fc27bd2a935fed533072ee5ae072f51c62acef19a45b283787cacd0f97/SavchenkoValeriy/emacs-chocolate-theme
|
object
|
github.com
|
GitHub
| null | null |
30,367,712 |
https://www.upf.edu/web/focus/noticies/-/asset_publisher/qOocsyZZDGHL/content/id/255763203/maximized%23.Yg2PenXP0VD
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
33,981,934 |
https://www.youtube.com/watch?v=cqGjhVJWtEg
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
11,348,923 |
http://www.zdnet.com/article/apple-pay-coming-to-mobile-web-with-touchid-by-end-of-2016-report/
|
Apple Pay coming to mobile web with TouchID by end of 2016: Report
|
Jake Smith
|
# Apple Pay coming to mobile web with TouchID by end of 2016: Report
Apple is looking to heavily expand on Apple Pay by letting shoppers complete purchases through the mobile web with their fingerprint, according to Recode citing "multiple sources".
Recode said the feature will be found in the Safari browser on iPhones and iPads that have TouchID. It could be announced at Apple's WWDC in June, with a release by Christmas of this year.
Further, the report mentions Apple is considering bringing the feature to its desktops and laptops, but it's not clear if these plans will lead to an actual consumer product.
The new Apple Pay feature would take on PayPal's One Touch, a similar mobile payment product. Users enter their PayPal credentials to pay directly from the retailer's checkout.
PayPal has signed up 250 of the top 500 retailers, including Macy's, Home Depot, and Abercrombie & Fitch. It's not clear if Apple will partner with companies for the mobile browser payments or bake the functionality into Safari through a keychain-like storage system.
At its media event on Monday, Apple was happy to talk Apple Pay numbers. The company said Apple Pay users in China added 3 million payment cards to the mobile payment service within the first 72 hours after its launch abroad in February.
| true | true | true |
The new Apple Pay feature would let users complete purchases with their fingerprint through TouchID.
|
2024-10-12 00:00:00
|
2016-03-23 00:00:00
|
article
|
zdnet.com
|
ZDNET
| null | null |
|
21,775,258 |
https://www.cockroachlabs.com/blog/2020-cloud-report/
|
GCP Comes Out Swinging Against AWS and Azure in 2020 Cloud Report
| null |
**[THE 2021 CLOUD REPORT IS AVAILABLE. READ IT HERE]**
Since 2017, Cockroach Labs has run thousands of benchmark tests across dozens of machine types with the goal of better understanding performance across cloud providers. If there’s one thing we’ve learned in our experiments, it’s this: benchmarking the clouds is a continuous process. Since results fluctuate as the clouds adopt new hardware, it’s important to regularly re-evaluate your configuration (and cloud vendor).
In 2017, our internal testing suggested near equitable outcomes between AWS and GCP. Only a year later, in 2018, AWS outperformed GCP by 40%, which we attributed to AWS’s Nitro System present in c5 and m5 series. So: did those results hold for another year?
Decidedly not. Each year, we’re surprised and impressed by the improvements made across cloud performance, and this year was no exception.
**DOWNLOAD THE 2020 CLOUD REPORT**
We completed over 1,000 benchmark tests (including CPU, Network Throughput, Network Latency, Storage Read Performance, Storage Write Performance, and TPC-C), and found that the playing field looks a lot more equitable than it did last year. Most notably, we saw that GCP has made noticeable improvements in the TPC-C benchmark such that all three clouds fall within the same relative zone for top-end performance.
The 2020 Cloud Report expands upon learnings from last year's work, comparing the performance of AWS, GCP, and new-to-the-report Azure on a series of microbenchmarks and customer-like workloads to help our customers understand the performance tradeoffs present within each cloud and its machine types.
**What’s New in the 2020 Cloud Report?**
In the 2020 report, we've expanded our research. We:
- **Added Microsoft Azure to our tests**
- **Expanded the machine types tested from AWS and GCP**
- **Open-sourced Roachprod, a microbenchmarking tool that makes it easy to reproduce all microbenchmarks**
You might be wondering, why the jump from 2018 to 2020? Did we take a year off? We’ve rebranded the report to focus on the upcoming year. So, like the fashion or automobile industries, we will be reporting our findings as of Fall 2019 for 2020 in the 2020 Cloud Report.
## How We Benchmark Cloud Providers
CockroachDB is an OLTP database, which means we’re primarily concerned with transactional workloads when benchmarking cloud providers. Our approach to benchmarking largely centers around TPC-C. This year, we ran three sets of microbenchmark experiments (also using open-source tools) in the build-up to our TPC-C tests.
In our full report, you can find all our test results, (and details on the open-source tools we used to benchmark them), including:
- CPU (stress-ng)
- Network throughput and latency (iPerf and ping)
- Storage I/O read and write (sysbench)
- Overall workload performance (TPC-C)
## TPC-C Performance
We test workload performance using TPC-C, a popular OLTP benchmark that simulates an e-commerce business with a number of different warehouses processing multiple transactions at once; we chose it given our familiarity with this workload. Its results can be explained through the above microbenchmarks, including CPU, network, and storage I/O (more details on those in the full report).
TPC-C is measured in two different ways. One is a throughput metric, transactions-per-minute type C (tpmC), also known as the number of orders processed per minute. The other metric is the total number of warehouses supported. Each warehouse is a fixed data size and has a maximum amount of tpmC it's allowed to support, so the total data size of the benchmark is scaled proportionally to throughput. For each metric, TPC-C places latency bounds that must be adhered to in order to consider a run "passing". Among others, one limiting passing criterion is that the p90 latency on transactions must remain below 5 seconds. This allows an operator to take throughput and latency into account in one metric. Here, we consider the maximum tpmC supported by CockroachDB running on each system before the latency bounds are exceeded.
In 2020, we see a return to similar overall performance in each cloud.
Each result above is the maximum tpmC produced by that cloud and machine type when holding the p90 latency below 5 seconds. This is the passing criteria for TPC-C and has been applied throughout any run of TPC-C data in this report.
#### TPC-C Performance per Dollar
Efficiency matters as much as performance. If you can achieve top performance but have to pay 2x or 3x, it may not be worth it. For this reason, TPC-C is typically measured in terms of price per tpmC. This allows for fair comparisons across clouds as well as within clouds. In this analysis, we use the default on-demand pricing available for each cloud because pricing is an extremely complex topic. GCP, in particular, was keen to note that a true pricing comparison model would need to take into account on-demand pricing, sustained use discounts, and committed use discounts. While it’s true that paying up-front costs is expensive, we applied this evenly across all three clouds.
We recommend exploring various permutations of these pricing options depending upon your workload requirements. Producing a complex price comparison across each cloud would be a gigantic undertaking in and of itself, and we believe that Cockroach Labs is not best positioned to offer this kind of analysis.
To calculate these metrics, we divided the maximum tpmC observed by the cost of running each cloud's machine type for three years (i.e., on-demand hourly price * 3 * 365 * 24).
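To make that arithmetic concrete, here is a minimal sketch; the hourly price and tpmC figures below are hypothetical placeholders, not numbers from the report:

```
// A minimal sketch of the price-per-tpmC arithmetic described above.
// The hourly price and tpmC figures are made-up examples.
public class TpmcPricing {

    // Three years of on-demand usage: hourly price * 3 years * 365 days * 24 hours.
    static double threeYearCost(double hourlyPriceUsd) {
        return hourlyPriceUsd * 3 * 365 * 24;
    }

    public static void main(String[] args) {
        double maxTpmc = 25_000;   // hypothetical max tpmC for some machine type
        double hourlyPrice = 0.68; // hypothetical on-demand USD/hour

        // The metric as stated: max tpmC divided by the three-year cost.
        double tpmcPerDollar = maxTpmc / threeYearCost(hourlyPrice);
        // Its reciprocal is the price paid per unit of tpmC.
        double pricePerTpmc = threeYearCost(hourlyPrice) / maxTpmc;

        System.out.printf("tpmC per dollar: %.4f%n", tpmcPerDollar);
        System.out.printf("USD per tpmC over 3 years: %.2f%n", pricePerTpmc);
    }
}
```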
It’s also important to note that these are list prices for all cloud providers. Depending upon the size of your organization--and what your spend is with each provider overall--you may be able to negotiate off-menu discounts.
Again, all three clouds come close on the cheapest price per tpmC. However, this year we see that the GCP n2-highcpu-16 offers the best performance per dollar in the tested machine types. If price is less of a concern, AWS is the best performer on throughput alone, but when is price not a factor?
**Reproduction Steps for the 2020 Cloud Report**
All benchmarks in this report are open source so that anyone can run these tests themselves. As the makers of an open source product, we believe in the mission of open source and will continue to support it. We also vetted our results with the major cloud providers to ensure that we properly set up the machines and benchmarks.
All reproduction steps for the 2020 Cloud Report can be found in this public repository. These results will always be free and easy to access and we encourage you to review the specific steps used to generate the data in this blog post and report.
*Note: If you wish to provision nodes exactly the same as we do, you can use **this repo** to access the source code for Roachprod, our open source provisioning system.*
**Read the Full 2020 Cloud Report**
You can download the full report here, which includes all the results, including the lists of the highest performing machines, more details on TPC-C performance, and microbenchmarks for:
- CPU
- Network Throughput
- Network Latency
- Storage Read Performance
- Storage Write Performance
Happy reading!
| true | true | true |
The 2020 Cloud Report compares AWS, GCP, and Azure on microbenchmarks and customer-like-workloads to understand performance tradeoffs within each cloud.
|
2024-10-12 00:00:00
|
2024-04-12 00:00:00
| null |
cockroachlabs.com
|
cockroachlabs.com
| null | null |
|
32,721,499 |
https://blog.goncharov.ai/getting-a-talent-visa-in-the-uk-for-mortals
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
20,428,077 |
https://www.8bitmen.com/latest-software-tech-programming-news-vol2/
|
Scaleyourapp
|
Shivang
|
## System Design Case Study #5: In-Memory Storage & In-Memory Databases – Storing Application Data In-Memory To Achieve Sub-Second Response Latency
## System Design Case Study #4: How WalkMe Engineering Scaled their Stateful Service Leveraging Pub-Sub Mechanism
## Why Stack Overflow Picked Svelte for their Overflow AI Feature And the Website UI
## A Discussion on Stateless & Stateful Services (Managing User State on the Backend)
## System Design Case Study #3: How Discord Scaled Their Member Update Feature Benchmarking Different Data Structures
## System Design Case Study #2: How GitHub Indexes Code For Blazing Fast Search & Retrieval
## System Design Case Study #1: Exploring Slack’s Real-time Messaging Architecture
## Web Service & Associated IT Roles
## Single-threaded Event Loop Architecture for Building Asynchronous, Non-Blocking, Highly Concurrent Real-time Services
## Understanding SLA (Service Level Agreement) In Cloud Services: How Is SLA Calculated In Large-Scale Services?
## Database Architecture – Part 2 – NoSQL DB Architecture with ScyllaDB (Shard Per Core Design)
## Parallel Processing: How Modern Cloud Servers Leverage Different System Architectures to Optimize Parallel Compute
Zero to Software Architecture Proficiency learning path - Starting from zero to designing web-scale distributed services. Check it out.
Master system design for your interviews. Check out this blog post written by me.
Zero to Software Architecture Proficiency is a learning path authored by me comprising a series of three courses for software developers, aspiring architects, product managers/owners, engineering managers, IT consultants and anyone looking to get a firm grasp on software architecture, application deployment infrastructure and distributed systems design starting right from zero. Check it out.
### Recent Posts
- System Design Case Study #5: In-Memory Storage & In-Memory Databases – Storing Application Data In-Memory To Achieve Sub-Second Response Latency
- System Design Case Study #4: How WalkMe Engineering Scaled their Stateful Service Leveraging Pub-Sub Mechanism
- Why Stack Overflow Picked Svelte for their Overflow AI Feature And the Website UI
- A Discussion on Stateless & Stateful Services (Managing User State on the Backend)
- System Design Case Study #3: How Discord Scaled Their Member Update Feature Benchmarking Different Data Structures
CodeCrafters lets you build tools like Redis, Docker, Git and more from the bare bones. With their hands-on courses, you not only gain an in-depth understanding of distributed systems and advanced system design concepts but can also compare your project with the community and then finally navigate the official source code to see how it’s done.
Get 40% off with this link. (Affiliate)
## Follow Me On Social Media
| true | true | true |
Distributed Systems & Scalability
|
2024-10-12 00:00:00
|
2019-01-01 00:00:00
| null |
website
| null |
Scaleyourapp
| null | null |
6,990,645 |
http://www.echotu.be
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,767,644 |
http://www.java67.com/2014/09/top-10-java-8-tutorials-best-of-lot.html
|
Top 10 Java 8 Tutorials, Classes, and Courses in 2024 - Best of Lot [UPDATED]
|
Javin Paul
|
Hello guys, if you want to learn Java 8, in particular lambda expressions, the Stream API, method references, and the new Date and Time API, and you are looking for the best resources, then you have come to the right place. Earlier, I shared the **best Spring Framework courses** and free Java courses, and today I am going to share the best tutorials to learn Java 8 features. It's been a long time since Java 8 was released, and so many Java 8 tutorials have been written by Oracle, Java bloggers, and others, but which should you read? Which tutorials are worth your time?
Actually, this question was asked to me by one of my readers called John; he was asking me about some good Java 8 tutorials he could read to start with. This encouraged me to take a look at some of the **Java 8 tutorials** available on the internet and form a list of the best. I used just three criteria: content, language, and length. Because tutorials complement books, they should not be like a book. They should be small enough to complete in a few hours and complete enough to give useful information. I also like tutorials that discuss practical problems and examples.
One more thing which I kept in mind while choosing these tutorials is that *this list of Java 8 tutorials* must explain core features like lambda expressions, streams, default or extension methods, annotation changes in Java 8, the Nashorn JavaScript engine, the new Date and Time API, and a couple of other exciting features.
I think the following Java 8 tutorials have all these qualities, and they are currently the best Java 8 tutorials available online.
## List of Best Java 8 Tutorials

Here is my list of Java 8 tutorials, which I recommend to any Java programmer who wants to learn Java 8. These tutorials are suitable for both intermediate and experienced Java programmers; you will learn sufficient details of all key Java 8 features by following good, non-trivial examples.

Most of the tutorials are detailed enough to provide all critical information and engaging enough to hold your attention. So let's first start with the official Java 8 tutorials from Oracle itself.
### 1. What's New in Java 8 by Pluralsight

If you want to learn everything about the new features in Java 8 in a quick time, then this is the best tutorial or course for you. It explains all essential features of Java 8, like lambda expressions, streams, functional interfaces, Optional, the new Date and Time API, and other miscellaneous changes.
You can get this online Java 8 course for free by signing up for a **10-day free trial**, which is more than enough time to also take some other Java 8 courses from Pluralsight, like **From Collections to Streams in Java 8 Using Lambda Expressions** by Richard Warburton, one of the best courses to learn lambda expressions with the collection classes.

### 2. The Complete Java Masterclass on Udemy
This is another excellent online Java 8 course that will teach you everything about lambdas, Collections with Stream, the new Date and Time API, and other unique features. Btw, this course uses IntelliJ IDEA for demonstration and coding, so if you are an Eclipse guy like me, you may be a little bit worried, but don't worry because they provide detailed instructions to install IntelliJ IDEA as part of the course.
IntelliJ IDEA also has many exciting features concerning Java 8, like code completion, compilation support, and debugging Java 8 code, coding lambdas in IDEA is exciting and fun.
Apart from this, Udemy also has some exciting Java 8 courses, like **Java 8 Functional Programming: Lambda Expressions Quickly**, which is a FREE course, and I suggest every Java developer join it. This course has two parts; in the first part you will learn lambdas, and in the 2nd part, you will learn about Streams.
### 3. Official Java 8 Tutorial from Oracle

Oracle has done an outstanding job with Java after acquiring it from Sun Microsystems. All the worries about Java no longer being open source and being destroyed by Oracle are settled by now, and if you still have that in mind, then it will surely go away after using Java 8. They have even started work on Java 9, with features like the money API, modules API, and several others. Coming to this tutorial, it's the first one to look at; even if you don't go along with all the examples, you must first look at this and then follow some other tutorials from the internet.
Oracle has covered almost all essential features of Java 8, like lambda expressions, Stream API, default methods, Functional Programming, bulk data operations, Optional, new Date and Time API, annotation changes in Java 8, and others.
This particular link is for their lambda expression tutorial, but you can find all other topics by following this link. It seems they have just done some CSS changes on their website, which makes reading this tutorial an even more pleasant experience.
I really liked how they have organized the tutorial and how they take you from simple to complex concepts in the matter of a few examples. It's one of the best Java 8 tutorials, you can get for FREE, so don't forget to fully utilize it.
They even have lots of useful articles about date and time, options, and other features which you find on the Java Tech network.
### 4. Java 8 Tutorial by Benjamin Winterberg

When I first came across this tutorial, I was thinking of it as "Yet Another Java 8 Tutorial," but I was wrong; it's very well written and probably better than the official Java 8 tutorials from Oracle for impatient developers. It's not as detailed as the official Java 8 tutorials from Oracle, but plainly detailed enough to give you exposure to all critical details. It's worth all the seconds you spend on this tutorial: well organized, with helpful and straightforward examples.
Benjamin Winterberg has done a fabulous job, kudos to him. Along with the content, the presentation is also excellent, and once you start reading you will only get up after finishing it; there's not much scrolling you can do there :)
### 5. Lambda Expressions in Java 8

Dr. Dobb's is one of the respected sites among developers, and they often introduce new concepts with excellent detail. This article has done justice to the most important feature of Java 8, the lambda expression. It explains the concept right from the start, as in why you need lambda expressions, and if you are one of those lost in the cryptic syntax of lambda expressions (I was, almost a year ago), then this article is something you can look forward to.
It explains the syntax of lambda expressions, how lambdas do type inference, and how lambda expressions work.
The focus is on how you write clean code using lambdas, and how functional interfaces, method references, and constructor references help you toward that goal.
It also touches on the tricky topic of lexical scoping used in lambda expressions, along with default methods and static methods on interfaces.
If you are not convinced yet to read this tutorial, you will be after knowing that the author of this article is none other than Cay S. Horstmann, professor of computer science at San Jose State University, a Java Champion, a frequent speaker at computer industry conferences, and author of the **Java SE 8 for the Really Impatient** book.

### 6. Overview of JDK 8 Support in NetBeans IDE
Out of the big three Java IDEs (Eclipse, NetBeans, and IntelliJ IDEA), I think NetBeans has the best support for Java 8. Their code assist and quick tips helped me a lot to master the syntax of lambda expressions and where I can use a lambda. It can be really confusing when you find you can pass lambdas to some methods but not to others, as you can only pass a lambda expression to a method whose parameter is a functional interface, annotated with @FunctionalInterface, or a SAM type (a class or interface with just one abstract method).
The NetBeans IDE really helps here; it will suggest when you can use lambdas, when you can use method references, where it is legal, and where it is illegal.
So along with any of these tutorials or books, NetBeans should be your best companion in your Java 8 journey.
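To make that rule concrete, here is a minimal sketch of my own (not code from the NetBeans tutorial); `Greeter` is a hypothetical interface invented for illustration:

```
import java.util.function.Predicate;

// A lambda is only legal where the target type is a functional interface:
// an interface with exactly one abstract method (a "SAM type").
@FunctionalInterface
interface Greeter {
    String greet(String name); // the single abstract method
}

public class SamDemo {
    public static void main(String[] args) {
        // Legal: Greeter has one abstract method, so a lambda fits.
        Greeter g = name -> "Hello, " + name;
        System.out.println(g.greet("Java 8"));

        // Also legal: java.util.function.Predicate is a functional interface.
        Predicate<String> nonEmpty = s -> !s.isEmpty();
        System.out.println(nonEmpty.test("abc")); // true
    }
}
```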
### 7. Maurice Naftalin's Lambda FAQ

This is one of the premier resources for lambda expressions and Java 8. You will get answers to all your queries related to lambda expressions here, e.g., why Java needs lambda expressions, how lambda expressions work internally, and all the syntax and semantics of Java 8 lambdas. This is actually one of the most authoritative sources on lambdas. This list of frequently asked questions is maintained by Maurice Naftalin, author of **Mastering Lambdas: Java Programming in a Multicore World**, one of the better books to learn lambda expressions. Sooner or later, you will come to this site, but since you know about it now, it's worth paying a visit and getting an idea of what you can learn about lambda expressions in Java 8.
### 8. My Java 8 Streams Examples

I love streams for their expressiveness, lazy optimization, and ease of coding, and I compiled this collection while trying several different examples. It contains tried and tested code snippets of the Stream API. You can find how to filter using streams, how to collect results, *finding max and min*, averages, and working with integer streams there. If you love to learn by following different kinds of how-to examples, then you will find this tutorial useful. It's more focused on how to do things in Java 8 but also gives you essential details while trying those examples.
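For a taste of the kinds of snippets such a collection contains, here is a small self-contained sketch of my own (the numbers are arbitrary):

```
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StreamSnippets {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(3, 7, 1, 9, 4);

        // Filtering and collecting results into a new list
        List<Integer> big = numbers.stream()
                .filter(n -> n > 3)
                .collect(Collectors.toList());
        System.out.println(big); // [7, 9, 4]

        // Finding max and min
        Optional<Integer> max = numbers.stream().max(Integer::compareTo);
        Optional<Integer> min = numbers.stream().min(Integer::compareTo);
        System.out.println(max.get() + " " + min.get()); // 9 1

        // Average, using a primitive IntStream to avoid boxing
        double avg = numbers.stream().mapToInt(Integer::intValue).average().orElse(0);
        System.out.println(avg); // 4.8

        // Working with integer streams directly
        int sum = IntStream.rangeClosed(1, 5).sum();
        System.out.println(sum); // 15
    }
}
```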
### 9. Java 8 Tutorials through Katas: Berlin Clock (Easy)

Programmers love katas; I do, don't you? And what could be better than learning new Java 8 concepts through classical programming katas? Viktor Farcic and Jordi Falguera have done an excellent job of teaching Java 7 and 8 coding styles through problems like Fizz Buzz, Berlin Clock, Tennis Game, Prime Factors, String Permutations, Word Wrap, Mars Rover, Bowling Game, and Reverse Polish Notation. Some of the katas are in Scala, which gives you the extra challenge of converting them into Java 8 code. So if you like to practice some programming katas using lambda expressions, Streams, and the new Java 8 utilities, this site is for you.
### 10. My Java 8 Lambda Expression and Streams Example

What can be better than sharing your own experience? In this article, I have shared Java 8 lambda expression and Stream wisdom with some easy-to-follow examples. This is a guide for the busy developer who wants to learn by following and doing examples. You will find how you can replace your old anonymous-class way of coding with brand new lambda expressions.
I have given examples of how single abstract method interfaces, like Runnable, Comparable, Comparator, and ActionListener, can benefit hugely from lambda expressions and functional interfaces.
Since Java 8 treats any SAM-type interface as a functional interface, you can pass lambda expressions wherever these interfaces are expected.
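A minimal before-and-after sketch of that idea (my own illustration, not code from the article):

```
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LambdaVsAnonymous {
    public static void main(String[] args) {
        // The old way: an anonymous class implementing a SAM interface
        Runnable oldStyle = new Runnable() {
            @Override
            public void run() {
                System.out.println("Running the old way");
            }
        };
        oldStyle.run();

        // The Java 8 way: a lambda expression for the same interface
        Runnable newStyle = () -> System.out.println("Running with a lambda");
        newStyle.run();

        // Comparator benefits in the same way
        List<String> names = Arrays.asList("Paul", "Al", "Katherine");
        names.sort(Comparator.comparing(String::length)); // method reference
        System.out.println(names); // [Al, Paul, Katherine]
    }
}
```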
### 11. A more in-depth look into the Java 8 Date and Time API

When someone talks about Java 8, including myself, the main focus is always on lambda expressions, but there are some other significant changes which are just as crucial as lambdas. After a lot of criticism of Java's handling of Date, Time, and Calendar, Java has now sorted out its mistakes by giving us a brand new Date and Time API. This API is the result of accumulated experience of how a programming language should handle dates and times.

Though the API looks largely inspired by the Joda date and time API, it's exceptionally well designed. One of the critical things is the separation of dates from a human and a machine point of view. You have classes like Instant, Duration, LocalDate, and LocalTime for your day-to-day needs. Java's handling of time zones is also better in this new API; I hope it provides some way to reduce errors due to confusing names, like Asia/HongKong and Asia/Hong_Kong (only one of them is valid).

As suggested, this article provides a more in-depth look into the Java 8 Date and Time API. It's an example-driven article, and I am sure you will love it.
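A short sketch of that human/machine split and the time-zone handling (my own example, not code from the article):

```
import java.time.Duration;
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class NewDateTimeDemo {
    public static void main(String[] args) {
        // Machine-oriented time: an exact point on the timeline
        Instant now = Instant.now();
        Duration tenMinutes = Duration.ofMinutes(10);
        System.out.println(now.plus(tenMinutes));

        // Human-oriented time: dates and times without a zone
        LocalDate date = LocalDate.of(2014, 3, 18); // Java 8's release date
        LocalTime time = LocalTime.of(9, 30);
        System.out.println(date + " " + time);

        // Time zones use IANA identifiers; "Asia/Hong_Kong" is the valid spelling
        ZonedDateTime hk = ZonedDateTime.now(ZoneId.of("Asia/Hong_Kong"));
        System.out.println(hk);
    }
}
```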
### 12. Java 8 Stream Tutorial

I became a fan of Benjamin Winterberg after reading his Java 8 tutorial, so when he published the next two articles in this series, I was quick to read them. Like his previous article, he has done justice to Streams as well. I am really impressed by how he covers the concepts one by one in an order that looks perfect. Since there are a lot of things to learn about Streams, there is a good chance that one or two will be missed.
But this article covers most of them: different kinds of stream operations (intermediate and terminal), lazy evaluation of streams, filtering, transformation, parallel execution, special Stream classes for primitives, and several others.
You will not only learn the basics, like what a Stream is in Java 8 and how it works, but also some advanced operations, like how to use the flatMap, map, and reduce functions in Java 8, filtering, collecting results, and so on.
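As one concrete taste of those operations, here is a tiny sketch of my own combining flatMap, map, and reduce:

```
import java.util.Arrays;
import java.util.List;

public class MapFlatMapReduce {
    public static void main(String[] args) {
        List<List<Integer>> nested = Arrays.asList(
                Arrays.asList(1, 2),
                Arrays.asList(3, 4));

        // flatMap flattens the nested lists, map transforms each element,
        // and reduce folds the stream down to a single value.
        int sumOfSquares = nested.stream()
                .flatMap(List::stream)    // 1, 2, 3, 4
                .map(n -> n * n)          // 1, 4, 9, 16
                .reduce(0, Integer::sum); // 30
        System.out.println(sumOfSquares);
    }
}
```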
### 13. Java 8 Nashorn Tutorial

This is another gem by Benjamin Winterberg, and it's actually one of the rare great articles on Java's new JavaScript engine, Nashorn. For those who don't know, Java has supported the execution of JavaScript code since Java 1.6 via Rhino (Java's legacy JavaScript engine from Mozilla). Nashorn is in the same league as Google V8 (the JavaScript engine of the Chrome browser) and Node.js. Though I have yet to become proficient in utilizing this powerful feature of Java, I like this article. It's good for learning not just jjs (Java's command-line tool to execute JavaScript) but also how to run JavaScript code from a Java class itself. If you have just started exploring this feature, this is the article you should read.
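The basic pattern for running JavaScript from a Java class looks roughly like this; a minimal sketch using the standard javax.script API, not code from the tutorial:

```
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class NashornDemo {
    public static void main(String[] args) throws ScriptException {
        // Obtain the Nashorn engine via the standard javax.script API
        // (bundled with JDK 8; removed in later JDKs).
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

        // Evaluate JavaScript directly from Java
        Object result = engine.eval("var x = 6 * 7; 'answer: ' + x;");
        System.out.println(result); // answer: 42
    }
}
```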
### 14. Java 8 StampedLocks vs. ReadWriteLocks and Synchronized

The Takipi blog has some interesting Java 8 tutorials; one of them is Java 8 StampedLocks vs. ReadWriteLocks and Synchronized. Tal Weiss, CEO of Takipi, has done a great job of explaining how StampedLock fares against the classic synchronized keyword and the better-performing alternative, ReadWriteLock. He has nicely compared the performance of these three locks in various scenarios, like 19 readers vs. 1 writer, 16 readers vs. 4 writers, 10 readers vs. 10 writers, and 5 readers vs. 5 writers.
The conclusion is somewhat intuitive but realistic: on average, the central lock provided by the synchronized keyword still performs better. If you are interested, you can further check these best multithreading and concurrency courses to learn Java concurrency in depth.
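For readers who haven't seen StampedLock, its optimistic-read pattern looks roughly like this (a sketch along the lines of the JDK documentation's example, not code from the Takipi post):

```
import java.util.concurrent.locks.StampedLock;

// A minimal sketch of StampedLock's optimistic-read pattern (new in Java 8),
// the feature the benchmarks above exercise.
public class Point {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {
        long stamp = lock.writeLock(); // exclusive write lock
        try {
            x += dx;
            y += dy;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead(); // lock-free optimistic read
        double cx = x, cy = y;
        if (!lock.validate(stamp)) {           // a writer intervened; fall back
            stamp = lock.readLock();
            try {
                cx = x;
                cy = y;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }
}
```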
## Books to Refer

Btw, if you are like many programmers who like to follow a book from start to end to learn concepts in the right order and with completeness, then you can take help from **Java 8 in Action** by Raoul-Gabriel Urma, Mario Fusco, and Alan Mycroft, which explains lambda expressions, the Stream API, and other functional features in great detail. You can also read Java SE 8 for the Really Impatient by Cay S. Horstmann, which explains all the crucial features, including functional and non-functional ones like the new Date and Time API. In short, both are great books to learn Java 8.
These were some of the **best Java 8 tutorials** you will find online at this moment. Though these tutorials don't cover all the features introduced in Java 8, you will get an excellent overview of most of the core features, like Streams, lambdas, the new Date and Time API, repeatable annotations, Nashorn, and others. If you are hungry for more Java 8 tutorials, resources, and books, you can also take a look at my earlier article about some of the best Java 8 books.
I personally like to thank all authors for taking their time and sharing knowledge, making it easy for us to understand the power of Java 8. Great job, guys.
Other **Programming and Technology articles** you may like:
- The Complete Java Developer RoadMap
- 10 Things Java Developer Should Learn in 2024
- 10 Programming Languages to explore in 2024
- 10 Frameworks Programmers Should Learn in 2024
- The Complete Web Developer RoadMap
- 20 Java Books You Can Read in 2024
- 5 Free Spring Framework Courses for Java Programmers
- 5 Free Core Java Courses for Beginners
- 10 DevOps Courses for Experienced Java Programmers
- These are the best courses to learn Java 8 and Java 9
- Top 5 Courses to learn Spring Boot in 2024
- Top 5 Courses to learn Microservices in Java
- 15 Java 8 Stream and Functional Programming Interview Questions
Thanks for reading this article. If you like these best Java 8 tutorials, books, and courses, then please share them with your friends and colleagues. If you have any feedback or comment, then please drop a note.
**P.S.** If you are a beginner and want to learn Java from scratch, then you should check out these **best Java Programming courses from Udemy**, one of the best resources to learn Java in depth. They are also among the most up-to-date courses and cover not just Java 8 features but several other key features from recent Java releases.
The best tutorial for Java 8 is the official one, http://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html; it is complete and comprehensive, yet easy to understand.
Instead of reading tutorials here and there, I guess just check out any good book on Java 8.
Agreed, you will learn much better with a book like Java 8 in Action than these random tutorials.
Can you please let me know from where I can download Java 8 books in PDF format? E.g., I want Java 8 in Action and Java 8 for the Really Impatient in PDF format so that I can read them on my smartphone. Thanks.
Here is another list of great books to learn Java 8 in quick time.
"All the worries of Java no longer being open source and destroyed by Oracle are settled by now..." How can you say this?
@Devesh, just look at the progress and the number of Java releases after Oracle took over Java from Sun. Java 8 barely came out in 2014 and Java 9 is coming shortly. Java is certainly a priority in Oracle's mind, and even if not, they are not ignoring it.
great
Does anyone think that Python will take over Java in the future, or is it already taking over?
@Anonymous, Python is a great language and it has already replaced Java in the United States as one of the first few languages in academics, e.g., colleges and universities, but Java is just so huge and is used all over the place; you can read my post on where Java is used in the real world to get an idea. So, I don't think Python can totally replace Java, but yes, it can complement it. You can also read my opinion on Python vs. Java and which language to learn first.
Where is the Java code for the classification function (as shown in the photo)?
Keep this going please, great job!
| true | true | true |
Java Programming tutorials and Interview Questions, book and course recommendations from Udemy, Pluralsight, Coursera, edX etc
|
2024-10-12 00:00:00
|
2022-07-08 00:00:00
| null |
java67.com
|
Javinpaul
| null | null |
|
3,074,266 |
http://www.thetechscoop.net/2011/10/04/apple-announces-iphone-4s/
|
The Tech Scoop - Technology for Innovators
|
William
|
## Increase Your Business Growth With Performance-Driven VPS Hosting Services
The modern business environment is changing rapidly and it’s quite essential to adopt the best practices in order to keep up with the competition. One of the key elements that can be beneficial for a business is VPS hosting services which can help take your…
| true | true | true |
#description
|
2024-10-12 00:00:00
|
2021-01-03 00:00:00
| null |
website
|
thetechscoop.net
|
The Tech Scoop
| null | null |
23,394,385 |
https://uxdesign.cc/accessibility-and-what-it-really-means-d7303a390b84
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,727,109 |
http://www0.cs.ucl.ac.uk/staff/d.silver/web/Teaching.html
| null | null |
Website moved to this link.
| true | true | true | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
3,767,350 |
http://blog.phpfog.com/2012/03/28/share-your-code-get-some-cash-introducing-our-jumpstarts-program/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
11,591,815 |
http://www.acm.org/awards/2015-technical-awards
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
26,800,813 |
https://www.nytimes.com/2021/04/13/science/ants-brains-queen.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
21,875,994 |
https://youtu.be/Hp4fiNjiIkM
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,065,021 |
http://nssdc.gsfc.nasa.gov/planetary/lunar/apolloloc.html
|
Apollo: Where are they now?
| null |
# Apollo: Where are they now?
## Current locations of the Apollo Command Module Capsules
(and Lunar Module crash sites)
The Apollo Command Module Capsules are on display at various sites throughout the U.S.
and the world. The Apollo Lunar Modules were deliberately targeted to impact the Moon
to provide artificial moonquake sources for seismic experiments. The list below gives
the locations of these displays and impacts.
**Command Module**
Fernbank Science Center, Atlanta, Georgia
**Command Module**
Frontiers of Flight Museum, Dallas, Texas
**Command Module**
Chicago Museum of Science and Industry, Chicago, Illinois
**Command Module "Gumdrop"**
San Diego Air and Space Museum, San Diego, California
**Command Module "Charlie Brown"**
Science Museum, London, England
**Lunar Module "Snoopy"**
In heliocentric orbit
**Command Module "Columbia"**
The National Air and Space Museum, Washington, D.C.
**Lunar Module "Eagle"**
Jettisoned from the Command Module on 21 July 1969 at 23:41 UT (7:41 PM EDT)
Impact site unknown
**Command Module "Yankee Clipper"**
Virginia Air and Space Center, Hampton, Virginia
**Lunar Module "Intrepid"**
Impacted Moon 20 November 1969 at 22:17:17.7 UT (5:17 PM EST)
3.94 S, 21.20 W
**Command Module "Odyssey"**
Kansas Cosmosphere and Space Center, Hutchinson, Kansas
(formerly at Musee de l'Air, Paris, France)
**Lunar Module "Aquarius"**
Burned up in Earth's atmosphere 17 April 1970
**Command Module "Kitty Hawk"**
Visitor's Center, Kennedy Space Center, Florida
(formerly at U.S. Astronaut Hall of Fame, Titusville, Florida)
**Lunar Module "Antares"**
Impacted Moon 07 February 1971 at 00:45:25.7 UT (06 February, 7:45 PM EST)
3.42 S, 19.67 W
**Command Module "Endeavor"**
USAF Museum, Wright-Patterson Air Force Base, Dayton, Ohio
**Lunar Module "Falcon"**
Impacted Moon 03 August 1971 at 03:03:37.0 UT (02 August, 11:03 PM EDT)
26.36 N, 0.25 E
**Command Module "Casper"**
U.S. Space and Rocket Center, Huntsville, Alabama
**Lunar Module "Orion"**
Released 24 April 1972; loss of attitude control made a targeted impact impossible.
Impact site unknown
**Command Module "America"**
NASA Johnson Space Center, Houston, Texas
**Lunar Module "Challenger"**
Impacted Moon 15 December 1972 at 06:50:20.8 UT (1:50 AM EST)
19.96 N, 30.50 E
**Command Module**
California Science Center, Los Angeles, California
**Command Module**
Naval Aviation Museum, Pensacola, Florida
**Command Module**
NASA Visitor Center at Great Lakes Science Center, Cleveland, Ohio
**Command Module**
Oklahoma History Center, Oklahoma City, Oklahoma
Impact Sites of the Apollo LM's and SIVB's
More details on Apollo lunar landings
Landing Site References
- the control network used and more precise coordinates for the landing, ALSEP, and LRRR sites
Table of Anthropogenic Impacts and Spacecraft
American Spacecraft
- Other spacecraft locations
Apollo home page
Lunar home page
The Apollo 11 mission - Images, audio clips, and a brief history.
Author/Curator:
Dr. David R. Williams, [email protected]
NSSDCA, Mail Code 690.1
NASA Goddard Space Flight Center
Greenbelt, MD 20771
+1-301-286-1258
###### NASA Official: Dave Williams, [email protected]
Last Updated: 28 March 2023, DRW
| true | true | true | null |
2024-10-12 00:00:00
|
2023-03-28 00:00:00
| null | null | null | null | null | null |
2,212,177 |
http://www.tbray.org/ongoing/When/201x/2011/02/12/The-Internet
|
Millions of Hovels
|
Tim Bray
|
There’s a really interesting piece in the New York times about black-hat SEO, The Dirty Little Secrets of Search. Normally I’d just tweet a link, but it has this wonderful paragraph that totally captures the sad part of the Internet, the way I see it. I read it three times in a row, nodding all the while.
...the landscape of the Internet ... starts to seem like a city with a few familiar, well-kept buildings, surrounded by millions of hovels kept upright for no purpose other than the ads that are painted on their walls.
I think anyone who has any reasonably-popular online property feels the truth of that statement in their gut.
And the whole piece is worth reading, by the way.
From: You know who (Feb 12 2011, at 11:44)
so true, indicative of human nature albeit nothing new, I know a BlackHat SEO who has been awarded by luminaries for his ability, in the army, shit floats
From: dr2chase (Feb 12 2011, at 13:45)
Since the hovels seem to survive on copied content, what stops people from auto-filing DMCA takedowns, and/or confiscating their domains (whatever it was that was done to the alleged P2P promoters)? Copyright is copyright; I've got a blog, and some of it has been copied (specifically, instructions for setting a wrist watch).
Google could make that easier, they probably even have all the evidence on their servers.
From: Michael Norrish (Feb 12 2011, at 17:42)
It's not even as if the hovels have the ads painted on their outsides. They're predicated on people being foolish enough to step inside them to see the ads on the inside!
From: Maintainer (Feb 13 2011, at 01:33)
@ dr2chase
So true. I knew I had a problem when I discovered my content splattered across spam sites. I spent much of the weekend fighting off scrapers and spiders; I'm trying to build a moat to keep them out.
Vast stretches of the net are a desolate wasteland. The good news is I've always wanted to live long enough to see the apocalypse; this might be as close as I come.
From: Andre Peeters (Feb 13 2011, at 02:11)
That's why people invented glasses to block out the hovels (aka ad-block software). It makes the world/internet more beautiful (readable). Me, I just ignore the ads totally: on websites, in magazines, ...
From: len (Feb 14 2011, at 07:15)
Yep. As predicted, it turned into Blade Runner, a technology ghetto where a few dollars buys "the finest kind".
T-Bone Burnett said that comparing the music industry, the old school was/is restricted, an access based system but that tended to ensure quality over quantity. Can't say I disagree. OTOH, watching the Grammies last night, I was impressed with the short burst Norah Jones trio, Barbra as always, and Lady Antebellum. Dylan and the gaggle of pounders was dreadful. Not being a fan of the fashionista acts (gaga, paltrow, etc), I turned those off and went back to learning a Curtis Mayfield classic for a gig later this week. The ratio of good to crap was not healthy.
In other words, the technology can clean up the messiness but confers no particular quality where none existed to begin with. IMO, that's what the web does: it adds volume, but without the filtering of experience over talent, it tends toward a ghetto. The masses, as it turns out, really aren't a good judge of that for which they have no training. Yet another Internet myth crashes and fizzles like cheap fireworks.
From: Mike (Feb 14 2011, at 21:02)
So the JC Penney site is a "hovel"? It's a pretty usable and well designed site, and pretty handy for us expats, cheaper than Lands End, for instance.
I think the comments somehow have gotten off track from the topic of the New York Times article. The article is not about trash websites; it's about how even good websites are resorting to black hat. It's a "Don't bring a knife to a gunfight" issue. It's about the fact that being #1 in Google gets you 45 percent of the clicks, and being #2 gets you 17 percent. Interesting topic, but not a topic about "hovels."
From: len (Feb 17 2011, at 05:31)
On the other hand, tricking search engines is now part of the web marketing strategy and bemoaning that is like complaining about the weather. It's the environment.
There are no good excuses for poorly executed web sites using inappropriate technology. I am mildly surprised in the job hunt by sites that use MS docs as data entry forms which are clumsy and require one to submit a resume with the same information.
Better sites parse the resume, extract that into forms, then ask for verification. It's a fast easy task. Apparently internal policies requiring print copies are more important than ease of entry for the minority that use doc forms and that leaves a bad impression in an industry where web tech is assumed and one knows producing a print from a well executed form is no challenge.
Are they are serious about the job and how well their customers are served by their products? As an applicant I ask myself if I want to endure the experience of working for a company that spends top dollar on graphics but treats data entry as a second class design task.
From: Nikki Sterno (Feb 18 2011, at 07:00)
All this proves is that Google's ranking algorithm is pretty sucky, and anyone with even a finger in SEO could have told you that years ago. Google puts too much weight on inbound links. It's hurting everyone except the full-time windbags on the net who blog about nada.
From: len (Feb 23 2011, at 17:33)
It isn't sucky, precisely. Inbound linking is easily gamed, which is another way of saying "marketed". If you believe in the notion of the wisdom of crowds, you may be comfortable with this idea (and with Jersey Shore). Otherwise you deal with concepts such as authority, and with the problem of political entities that can effectively drown out the right ideas or facts because they aren't testable or tested.
See Piltdown Man.
From one point of view, it is the inevitable effect of free will and freedom when web surfing. Or, as the web's inventor said, it is "a social problem, not a technical problem".
It is simpler to just hook the microphones to the amplifiers and hit the on button with the volume fully cranked. It is unprofessional. In the simple case, it hurts the ears in the same room. In the complex case, it ruins the day in every adjacent room, to the extent of the power of the amplifier. In the case of the web, that edifice spans the world.
"as the twig is bent....
| downloaded: true | meta_extracted: true | parsed: true | filedate: 2024-10-12 | date: 2011-02-12 | image: /ongoing/misc/podcast-default.jpg | pagetype: website | sitename: Ongoing By Tim Bray |
31,924,075 | https://twitter.com/steventey/status/1542146213155405830 | x.com | markdown: null | downloaded: true | meta_extracted: true | parsed: false | filedate: 2024-10-12 | sitename: X (formerly Twitter) |
1,115,280 | http://mixergy.com/y-combinator-paul-graham/ | How Y Combinator Helped 172 Startups Take Off - With Paul Graham - Business Podcast for Startups |
Andrew: This interview is sponsored by Wufoo, which makes embeddable forms and surveys that you can add to your website right now. Check out Wufoo.com. It’s also sponsored by Shopify.com, where you can create an online store right now, within five minutes, and have all the features that you need to keep selling online. Check out Shopify.com. And it’s sponsored by Grasshopper, the virtual phone system that entrepreneurs love because it has all the features that they need, and can be managed directly online. Here’s the interview.
Hey, everyone, it’s Andrew Warner, founder of Mixergy.com, home of the ambitious upstart. You guys know what we do here. I interview entrepreneurs about how they build their business to find out what the rest of us can learn from their experiences. And, Paul, I’m going to ask you about that smile in a minute because I’d love your feedback on some of the ways that I do my interviews here.
Today, I’ve got Paul Graham. He is the co-founder of Y-Combinator, which funds and advises startups. He’s also an author whose essays are devoured by entrepreneurs. He’s a hacker who administers Hacker News, and previously he co-founded Viaweb, which he sold to Yahoo. The smile, by the way, that I was asking you about. A lot of times when I do my interviews, I start them off with “home of the ambitious upstart,” and I look at the person who I respect, who I brought on here to do this interview, and I wonder, does he think it’s ridiculous, or is this a charming, interesting part of the program? What do you think?
Paul: The ambitious upstart part?
Andrew: Yeah.
Paul: Well, what I was thinking when I heard it is, “Oh, good. This is my kind of place.”
Andrew: Oh, good.
Paul: I wasn’t thinking about whether it was a particularly good marketing tagline.
Andrew: Oh, good. Then if you think it’s your kind of place, then it is a good tagline for me.
Paul: Yeah, ambitious and upstart, that’s exactly what I’m looking for.
Andrew: Perfect. As I told you earlier, I’m going to be structuring this as the biography of Y-Combinator and along the way, I’d like to see what you’ve learned as you funded, I think it’s 172 startups, you told me.
Paul: One hundred and seventy-two, yeah.
Andrew: Wow. Well, let’s start off with the essay since that’s how many entrepreneurs get to know you. Why did you start writing about entrepreneurship?
Paul: Well, I write about whatever I’m thinking about. So I just started thinking about entrepreneurship. I think that started because I had to give a talk to the Undergraduate Computer Society at Harvard. In fact, that’s what Y-Combinator grew out of. So I had to give a talk to these undergrads and I thought, “What shall I tell them?” What could I tell? So I thought, “All right. I’ll tell them how to start a startup.” So I gave this talk about how to start a startup and I put it online later as an essay called, “How to Start a Startup.” Y-Combinator literally grew out of that talk.
Andrew: What happened? How does a talk become a revolution?
Paul: Well, what happened was I was giving them advice, sort of in real time, that they should raise money from angels. The best thing was to raise money from angels who had themselves made their money from doing a startup because then they could give you advice, as well as money. And I noticed all these guys were looking at me sort of expectantly, in fact, sort of like baby birds. I thought, “Jesus, they’re all going to email me their business plans.” So I said, “Not me.” Right? Because I had never done any angel investing at that point and I didn’t want to start. But then I immediately felt guilty.
I went to dinner with some of them afterward and I thought, “Even though these guys are just undergrads, I bet a lot of these guys could do it. They could figure it out.” So I thought, “All right, all right. I’ll start angel investing.” So Y-Combinator happened just because I wanted to start angel investing. Originally, Y-Combinator was just going to be like regular angel investing, asynchronous, not this whole batch model. We discovered that later, by accident.
Andrew: Who are the original startups that were so good, were so inspiring, that you said, “I’ve got to back these guys”?
Paul: Well, Reddit, the founders of Reddit were at that talk. They came on a train from Virginia.
Andrew: From Virginia, they told me, that they got over to see you.
Paul: That’s a long train ride. So I was impressed by their determination. That was actually why we funded them because they were so determined that they came on the train all the way from Virginia to hear that talk.
Andrew: What else was it about them? Because I’m sure there are a lot of people now who will come and travel a long way to see you and they don’t have that magic. But I talked to Alexis and he said that you didn’t like his original business idea, but you liked something about him to bring him back. And I asked him what it was that Paul Graham saw in him, and he couldn’t define it. So now that I’ve got you here, what is it about him? Maybe from there we can extrapolate.
Paul: [inaudible 00:04:50] Steve. He and Steve were both smart, and they were determined, and they seemed flexible. They seemed like they really wanted to start a startup. But it’s a little bit misleading to ask what I liked because Jessica, her nickname in Y-Combinator is The Social Radar. I actually have bad judgment of character. I’m not good at judging people. Really, I’m kind of bad at it. I know I’m bad at it. But Jessica, she is very rarely wrong. There will be somebody I meet and I really like, and she says, “You know, there’s something off about him.” And it always turns out she’s right, always. So I’ve learned, like when Jessica says, “These people are good,” or “These people are bad,” I should really listen. A lot of people don’t realize.
Strangely enough, I’m actually sitting where the founders sit in Y-Combinator interviews. This is what the world looks like. What I see is what the world looks like to them, what’s behind me is the back of the room. We bring these people in for interviews, and I think a lot of them think Jessica is some kind of secretary, or something like that because she’s the one who smiles at them and greets them and remembers their name.
We pepper them with questions about technical stuff during the interview and she doesn’t ask as many questions. What happens after the interview is they walk out. Basically, I’ll turn to Jessica and say, “Okay. Should we fund them?” Right? Because she is such a fabulous judge of people, and she really liked the Reddits. She called them The Muffins because she thought they were cute. So Jessica was really happy when we ended up funding the Reddits after all because she was bummed that we had to reject them.
Andrew: Okay. Let’s pause here and go back a little bit and talk about the people who started Y-Combinator with you. In that talk, you said that it’s important to get the right team together. On your team, it sounds like from the start you had Jessica Livingston, who was an investment banker at Adams Harkness before. You had Robert Morris, who was your partner at Viaweb and a long time friend.
Paul: And Trevor Blackwell. He worked on Viaweb too.
Andrew: Oh, Trevor Blackwell from Viaweb also.
Paul: Yeah, it was basically the same three people from Viaweb.
Andrew: Oh, so Jessica Livingston was at Viaweb with the three of you? I’m sorry, we had a little bit of a lag. Was Jessica Livingston . . .
Paul: No, no. No, she was not.
Andrew: Okay, so was it three of you?
Paul: Right, yeah.
Andrew: How did you find Jessica Livingston, and what was it about her?
Paul: I was [inaudible 00:07:23] her.
Andrew: Sorry?
Paul: I was going out with her.
Andrew: How did she go from girlfriend to partner?
Paul: Well, when we were going to start . . . we were going to start investing. And we didn’t know anything about the logistics of investing. She had one of these weird securities licenses. I think it’s called a Series 7, or something like that. She actually knows about this whole world. To this day, I have only skimmed our legal agreements. But she keeps this stuff in line. She knows how to do the mechanics of investing and we didn’t know how to do it. If it hadn’t been [inaudible 00:08:01] just the three of us, we would have never started this ourselves because who would have done all that crap. Or, sorry, who would have done all that very important work?
Andrew: Okay. So now you got to know her. What about Robert Morris and Trevor Blackwell? What did they bring to the partnership?
Paul: They’re very smart. I mean, they’re the smartest people I know. They’ve done the startup themselves. I mean, they have basically the same experience I do, in the same companies, and I can work with them. And Robert is our sys admin.
Andrew: I’m sorry?
Paul: Robert is our sys admin.
Andrew: I see.
Paul: We’re probably the only company that has a full professor at MIT as their grumpy sys admin.
Andrew: All right. So let’s go back then. Jessica Livingston sees these guys. She says, “I like them, there’s something about them. Let’s invest.”
Paul: Do you mean the Reddits?
Andrew: The Reddits, right. Yes, so you guys bring them in. What kind of structure, what kind of support system did you have back then?
Paul: It was pretty much the same as now. That very first summer. The reason we decided to invest in startups in batches all at once was because we didn’t know what we were doing, and so we thought, “Well, you know what we’ll do? We’ll have a summer program.” I don’t know if you have a programming background, but everybody treats summer jobs as kind of throw-away jobs as a programmer. You’re not expected to get a lot done. It’s just so the company can decide later if they want to hire you after you graduate.
For both the hirers and the employees, summer jobs are kind of like a throw-away thing. So we thought, “Well, as long as everybody treats summer jobs as a throw-away thing, we’ll have this summer program, and if it turns out to be a disaster, no one will blame us. We can learn how to be investors at the same time these guys learn how to be startup founders.” The fastest, the most efficient thing is since it was going to be a summer program, it was synchronous. All these startups would get founded at once, just like everybody who works for Microsoft for the summer sort of shows up at about the same time.
So the whole doing things in batch, we discovered by accident. But it worked so well, we decided, “All right, well, we’re going to keep doing this starting batches and startups all at once.” Initially, though, it was just an accident.
Andrew: What kind of help did you give them as you were doing this? I know about the weekly dinners that Alexis Ohanian said that he got a lot out of. I know that the entrepreneurs got to talk to each other and show each other their progress. What else did you do to support them along the way?
Paul: We got all their paperwork done. And we got them set up cleanly, so if there was some weird gotcha about the IP, like they had started working on the thing with their previous employer, we would tell them, “No, rewrite that code.” So by the time they got to the end, at demo day, they were like a clean start out, with no weird gotchas that would make investors barf, like former co-founders who were gone, but still had 30% of the company, or they didn’t own their IP, or someone hadn’t signed some agreement or something like that. And they were properly incorporated as Delaware C-corps instead of whatever broken LLC they came in with. All that stuff is not nothing. That’s actually the kind of stuff Jessica does.
Demo day, everybody is in good shape as a company. We also gave them a lot of product advice. It turned out we had a knack for this, from having worked so long making web apps ourselves, literally since the beginning.
Andrew: How did you do that? That’s another thing that Alexis told me, that the idea for Reddit came from you and a lot of the entrepreneurs who you have backed have told me [inaudible 00:11:40].
Paul: The idea for Reddit was a combination of us and them, okay? We told them we didn’t like their original idea and we said, “Come back and we’ll talk about other ideas,” right? But we didn’t say, “Look, here is a wireframe. Build this.”
Andrew: I see, right. You said, “Here’s an idea, go run with it.” And they did go out there, and actually it was an idea that you guys came up with together?
Paul: Yeah.
Andrew: But you led them in the direction.
Paul: But they came back. I called them on the phone and said, “We loved you guys, even though we rejected you. If you come back, we’ll figure out something new for you to do.” They came back the next day, and we talked for quite a long time, and we cooked up something in that conversation.
Andrew: I see. So how do you do that? I mean, yes, you did create one of the early web apps. Yes, you did create Viaweb, and you saw it all the way through to a sale to Yahoo, and you got to see the inside of a big company like Yahoo. But that still doesn’t seem like enough experience to be able to sit down with an entrepreneur and say, “Here, I’m going to help guide you in that direction.” What else was it?
Paul: I have a weird ability to do this, or something like that. I don’t know why. But I often worry that it will stop working. But it’s almost like some weird knack that people have. Like people who can tell you that December 21st, 1960 was a Tuesday, you know what I mean?
It’s like that. Because it’s true, it wasn’t that I had such an enormous amount of experience. Somehow I seem to be able to look at a web app and think, “No, this is wrong, this is right.” And now I can say it, “Well, it’s because I’ve worked with 172 startups. Now I have tons of experience, probably more than anybody else.” But I seem to always have some kind of natural ability to do this.
Andrew: Okay. So what did you learn from that first class? What did you learn from that first experience of working with a batch of entrepreneurs on new companies?
Paul: We learned a lot of stuff. We learned that some of the things we had done by accident were really good. Like funding a whole bunch of startups at once is really good because they can all help one another. We learned that young people can actually successfully start startups. I think in “How to Start a Startup,” I said, “You should probably be like 24, or 25, or something like that before starting a startup.” Some of these guys in that first Y-Combinator batch . . . Sam Altman was 19 and his was the best startup in the batch. So lo and behold, you can start a startup even younger than we thought. Which is not to say that is the only age you want to start a startup, but the age range of potentially successful startups extends frighteningly low.
Andrew: I think, actually, you said that the cutoff was 38 and you’re over 38, running essentially a startup with Y-Combinator. How do you . . .
Paul: No, I’m not. There’s one big difference between Y-Combinator and a product company. We do not have customers who can call us any random time because something is breaking. And that is what makes a startup so hard. The closest thing I have to that is Hacker News, but if Hacker News is down for a day, it wouldn’t be the end of the world. Whereas Viaweb was down for like 10-minutes, we would have a lot of really angry people calling us. So there’s a big difference. I can go on vacation. A startup founder cannot go on vacation because who’s going to watch things?
Andrew: I see. I happen to know that the guys from Airbnb prided themselves on getting as many minutes of your time as they could while you were building out their product. But you’re saying there’s a limit to when they could call you. If they had a question in the middle of the night, at 2:00, they’d have to wait till the next day to call you.
Paul: Yeah, I mean, the kind of questions that founders ask me are not usually the kind of things that have to be answered at 2:00 a.m.. Sometimes they are, like if someone’s getting acquired, or something like that. But even then, M&A guys aren’t doing stuff [inaudible 00:15:46] the morning.
Andrew: Right, and it’s not breaking at 2:00, well, it might, but not often.
Paul: Yeah, what can happen at 2:00 a.m. are like usually technical problems.
Andrew: All right. So let’s talk about, then, the second group of people. What did you do differently as you were assembling that group?
Paul: Well, the second batch, the biggest, most obvious thing that was different was we were in California. We decided in our second batch that we would try doing one in California, partly because it’s much nicer in California in the winter. So we could be self-indulgent and also ambitious at the same time. So we decided we didn’t want somebody to say, “We’re going to be the Y-Combinator of Silicon Valley,” right? After we’d started it because we could kind of see pretty early on people would start copying us. In fact, I’m kind of surprised it took as long as it did. But we didn’t want somebody to say, “We’re the Y-Combinator of Silicon Valley.” We wanted to be the Y-Combinator of Silicon Valley. So we thought, “All right. We’ll do a batch out in Silicon Valley.”
We decided at the last minute to do it in Silicon Valley. The application form for that batch said on it, “We don’t know where it’s going to be. It might be in California. It might be in Boston. If you can’t do it in someplace, tell us here.” Right? And the only way we could get a space in time was to carve out a piece of Trevor’s robot company building, which is where I’m sitting now. To this day, we’re in the middle of this robot company. It’s kind of entertaining, actually. There are all these big robots driving around.
Andrew: This is [inaudible 00:17:16] any thoughts [inaudible 00:17:17].
Paul: [inaudible 00:17:18] did we do? What else did we do differently?
Andrew: Yeah.
Paul: Not that much else, really. I mean, it was in the winter, so there was no question of it being a summer job for people.
Andrew: Did you get the sense, by then, of the kind of entrepreneur you wanted, that maybe there was a kind of entrepreneur that you thought you wanted but wasn’t a good fit? Was there [inaudible 00:17:41]?
Paul: We were getting better. We gradually got better. And we’re still not very good, even though we’re much better than we were when we first started. But we started to learn, for example, that it mattered a lot how much people actually wanted to do a startup. People who really did think of it just as a summer job at the end of the summer, they would go back to school, just like people do at a summer job. So we were starting to learn determination was the most important thing.
Andrew: How can you find determination? How can you know that somebody is determined for real, and not just, “This is it. I’ve got to . . . ” It’s not just a temporary thing?
Paul: That’s actually our single hardest problem, telling how determined people are. I mean, there are two things we care about: how determined people are and how smart they are. We can tell in a 10-minute interview how smart someone is. You just hit a few tennis balls across the net at them and see how hard they hit it back. Or if they whiff entirely.
But telling how determined someone is in a 10-minute interview, we are often fooled. And actually the Y-Combinator alumni kind of hose us here because they tell people how to pretend to be determined during the interview. We meet these guys during the interview and they seem like real butt kickers, and lo and behold as the challenges of doing a startup emerge, they kind of fall apart.
So we’re often fooled. It’s hard to tell, it’s hard to tell. You can’t just ask people, “So are you really determined?” It’s pretty obvious what the answer to that’s supposed to be. It’s hard.
Andrew: What kind of things does an entrepreneur do to fool you into thinking that they’re really determined?
Paul: Well, seeming really tough and calm during the interview. Why am I telling people [inaudible 00:19:31]?
Andrew: Because you guys are going to get better and as long as this information is out there, you might as well get it all the way out there and even the playing field.
Paul: That’s the danger of live interviews. Matt Maroon of Blue Frog Gaming, he was a professional poker player, I mean, talk about poker-faced. So he came in for his interview and he just seemed absolutely unflappable. We thought, “Boy, this guy is tough. This guy is not a wimp.” And actually, we were right. He was really tough. But to this day, he is genuinely unflappable, and I probably would want to fund more people who are really good poker players. I’ve noticed, empirically, there seems to be a high correlation between playing poker and being a successful startup founder.
Andrew: But then that’s not fooling you, that’s just . . .
Paul: That’s how you seem to tell.
Andrew: He had a calm, strong presence about him. What else? Did they tell you specific stories about the time that they sold candy in elementary school?
Paul: Oh, yes. Actually, that’s a good one, if people have evidence of their determination. For example the Airbnb guys, at one point they were running out of money in their startup. They’d been working on their startup for quite a while before Y-Combinator. I think maybe around a year or the bulk of a year. At one point, they were out of money, and they made their own packaged breakfast cereal with an Obama and McCain theme. You could buy either one, and we were like . . . yeah, yeah. I have a box of Obama-Os on the shelf behind me, in fact. I’ll tell you, I think they made like $30,000. They designed the box [inaudible 00:21:19].
Andrew: They designed it, they sold it.
Paul: [inaudible 00:21:22] Yeah. You know, as soon as we heard that story, they were in, basically. There’s often a point in the interview where we all kind of look at one another and decide, “Okay. We’re funding these guys.” At that point, the remainder of the interview, we’re just chatting. As soon as we heard that story it was all over. They were in.
Andrew: What about Kevin Hale and the Wufoo guys? They also had similar experiences, didn’t they?
Paul: Yeah.
Andrew: What drew you to them?
Paul: They had Particletree. They had made Particletree, and I knew Particletree. I don’t know how I knew it, but I knew about this website and I knew it was really good. So I knew they could make good things. They were not just thinking, “Oh, maybe we’ll start a startup.” And then a few months later, they’ll say, “Oh, maybe we won’t.” Right? Obviously, they had some practice doing projects together and they could work well together.
Andrew: I see. So maybe one of the ways . . .
Paul: The Wufoo guys were terribly nervous during their interview.
Andrew: Were they?
Paul: Yeah. Oh, they were so nervous. I mean, we try everything we can to make people calm during the interview. Now we have people waiting outside, talking calmly to them before the interview because we don’t want people to be flustered during the interview. Our goal is not to try and break them. If people are nervous, all it does is add noise to the interview. We need all the signal strength we can get so we want people to be calm. But the Wufoos were only our second batch. We didn’t have anything to help people be calm or we didn’t write these instructions about how to ace your interview or anything like that.
Now there’s this thing we send to people about how to do well in the interview. So Wufoos were so nervous, and after the interview, reactions were divided about whether to fund them. And I said, “No, they’re not stupid. They were just nervous.” It’s true. That was it. They were just nervous.
Andrew: There’s another situation where Kevin told me that he had one vision for what he wanted to do and you had another. You said, “Oh, what you’re looking to build are forms.” And he said, “No, not forms. Forms are these ugly things that don’t make sense. Nobody wants to be in the form business.” And he said, “No, Paul Graham said this is the opportunity.” In fact, he said, I think that the guys at Y-Combinator, he didn’t just say you, but the people sitting across the table from him helped him come up with the idea.
Paul: In the interview?
Andrew: Yes.
Paul: It often happens that the idea for the startup gets crystallized even in that 10-minute interview. What we do in the interview, we don’t ask, “So where do you see yourself in five years?” Or, “Why are manhole covers round?” Or some crap like that. I mean, what we do in the interview is we just start doing Y-Combinator. The first 10-minutes of Y-Combinator is the 10-minutes of the interview. So if we like a group, we just start. “Okay. What about this idea? What about this? You’ve tried this.” And that’s why people come out of the interview thinking, “Oh, my god. They asked us so many questions.”
Our goal isn’t to badger them with questions and see what happens. The goal is to figure out the startup. That’s why there are so many questions. What should the startup do? It’s a momentous question. You’re going to spend years working on this.
Andrew: I see. How do you know if the business idea is going to be big enough? How do you know if there’s enough money in the form business? How do you know if there’s enough of a market around Reddit, or do you even know that?
Paul: We don’t. We don’t know that, and we don’t worry about that, that early. We don’t care that much about the idea. I mean, it would be bad if it were an obviously terrible idea. Like, start a new search engine with no features that are any different from Google. If someone was determined to do that as their idea, we would reject it. Okay? Actually, they’d be stupid. We should reject them. But unless it’s an obviously terrible idea, it’s not the idea that’s important. At this stage, we care about the founders. We’re going to have three months to figure out the perfect [inaudible 00:25:26] to the idea.
Andrew: Okay. Did you, by the way, get into search? I think you did. One of your startups was going to get into search, or . . .
Paul: It’s okay if people are doing a search that’s not exactly the same thing as Google. So for example, Octopart does electronic parts search. Those guys are really good, so that’s search. There’s a startup in the current batch that hasn’t launched yet, or at least hasn’t outed themselves, what I see as doing search. But again, that’s a specific vertical. That’s okay. That’s fine.
Andrew: Okay. Yeah, and web . . . I never know how to pronounce them, even though I use this.
Paul: WebMynd.
Andrew: WebMynd, that’s how you pronounce them? Web M-Y-N-D, of course. WebMynd.
Paul: They don’t do search themselves. Well, they do sort of do search. Yeah, they put search over on the right-hand side of Google Search.
Andrew: Yeah, they enhance Google Search.
Paul: Yeah, yeah. Okay, so they do search.
Andrew: Okay.
Paul: That’s funny, I thought of them as this plug-in for Google, but they do actually have to do some amount of search to make that work.
Andrew: Okay. I don’t want to stay too much on Wufoo, but I know them best because I’ve used them for years. So let’s continue there. You decided to back them. Did you have, at the time, an idea for an exit, or did you say, “This is an interesting . . . ”
Paul: No.
Andrew: No. What was your thinking in the future, here?
Paul: [inaudible 00:26:40] take care of themselves. It’s so impossible to predict something like that. You know, the founders themselves don’t know. I mean, think of all the stories about . . . like Larry and Sergey. They founded this company and it’s worth something like $200 billion. I don’t know what Google’s market cap is, but it’s gigantic. When they first started out, they were walking around to the existing search engines, trying to sell the technology to them for a couple million. So if the founders themselves can be off by many orders of magnitude about the exit, it’s stupid to even think about it. You just want to fund people who are good, and some of them will go public, and some of them will just explode on the starting line, and there’s not much you can do about it.
Andrew: Okay. So then they went through the program, they had a product at the end. And then you and Paul Buchheit invested in them. At that point, did you say, “Now I see an exit for myself. Now I see that these guys can go public or be sold”? Is it important there?
Paul: No, [inaudible 00:27:42]. Even then you can’t predict exits. They really are genuinely unpredictable. It’s better just to not think about it.
Andrew: To just say, “This is a good business, I see it growing. I’ll support them.”
Paul: Yeah, these guys are good. If they have some kind of exit at some point in the future, it’ll be good for them. We’re all in the same boat. We have the same kind of stock. So you have to assume that if they do something that’s good for them, it’ll be good for you too.
Andrew: Okay. All right. So let’s suppose somebody’s listening to this, and says, “Man, Paul Graham is incredible. He can help shape an idea, he can help draw out the best in you. I don’t have Paul Graham in my neighborhood.” Or, “Maybe I’m too old to fit the criteria.” Or, “Maybe I just want to fund my business myself. How can I find somebody like Paul Graham?”
Paul: Well, the first two don’t matter. You don’t have Paul Graham in your neighborhood. Most of the people who do Y-Combinator don’t come from the Bay area. They come from all over. They come from all over the world. So that doesn’t matter. And the thing about age, that doesn’t matter either. We funded a bunch of people over 40. I don’t think we’ve ever funded anyone over 50, but some of the most successful startups have been funded by people over 40.
The third thing, you’re determined to fund a company yourself, well, why? If we’re willing to give you money, why not take it. I suppose you might not like the dilution, but a lot of people think it’s a good deal. So really, you could, if you wanted to, in all those cases. But if someone wanted to find someone local to advise their startup, I guess the best thing to do would be to do what I suggested in that original talk about how to start a startup. Find somebody who’s done it themselves and that’s the person to ask.
Andrew: I see. A lot of people don’t have this gift. A lot of people can’t, if you bring them an idea, rattle off a solution. I watched Jason Calacanis on This Week in Startup. Some guy will call up with an idea and he’ll kind of bat the idea back and forth with them and brainstorm until there’s something that’s fundable there. I don’t know a lot of people who can do that. How do you find those people?
Paul: You know, I don’t know. I don’t know.
Andrew: How important do you think it is to have that [inaudible 00:29:52]?
Paul: Well, it’s very important, if that’s what you’re looking for from the guy. If your idea is already perfect, then any startup founder can tell you, “Okay. Here’s how to approach VCs. Here’s the right point to hire people. Here’s what not to do.” But if you still need to work on the idea, then you need to find somebody who can lob ideas back and forth, and I don’t know. I don’t know, actually, how you could recognize people like that, because there are probably a lot of people who can do it badly. And, as a founder, how could you tell the difference between someone who does it well and someone who does it badly? You can’t. So I don’t know. I don’t know. Why do I come here?
Andrew: Here’s something that I noticed about you from your writing. You seem to be looking for a formula for success in entrepreneurship that you’ll try to list all the reasons why companies fail and then hopefully at the end you’ll be left with reasons that they succeed, or you’ll try to figure out what it specifically takes. Am I reading it right?
Paul: Well, whenever you’re writing an essay, you don’t want to just do a lot of hand-waving and never get to the point. My goal in writing any essay is to make the strongest statement that you can make without being false.
So whenever I’m writing anything, I’m trying to think, “All right. What do I tell people here? How do you get to the heart of the matter?” So it’s not just in the essays about startups. Everything I write, I’m trying to figure out what the heart of the matter is, otherwise, it’s useless.
Andrew: And are you also trying to find formulas? Is there a formula here that people could apply? Could say, “Look, this is a guy who has now worked with 177 startups.” Did I get that right, or 172?
Paul: One hundred and seventy-two.
Andrew: One hundred and seventy-two. Let me make that clear on my paper. “He probably knows the formula at this point, or if he doesn’t, when he hits 372 he’ll have the formula.” Do you agree at that point, there will be one?
Paul: No, no. There is not a formula, like an itemized list of stuff we could suggest to people, “Do this, do that.” Startups vary.
But there are definitely some patterns. There are some things that work and some things that if you do them, they’re going to hose you. So launching pretty fast almost always works. Being highly engaged with your customers almost always works. Sitting around spending a long time noodling on the idea is almost always a mistake. It’s like a form of procrastination that you can convince yourself is work.
Andrew: I see. Okay, all right. Let’s take a look at some of the questions that people have put up on Hacker News. Max Cline has asked me about your personality. I’ve noticed it here too. Are you always this calm? Do you lose it?
Paul: No, I’m not always this calm. But I saw that question too and he asked if I threw chairs. No, I definitely don’t throw chairs. If I’m really mad, I’ll just sort of talk coldly to someone. Like this.
Andrew: I see. I’ve heard people . . . here’s the thing. A lot of entrepreneurs are now studying you for years. They’re in this community. You’re the leader of this Hacker News community of entrepreneurs who are developers. If you don’t like their ideas or disapprove of their progress, or if they perceive that either of those are true, they’re hurt. I’ve talked to a few people who felt that way. What are you noticing?
Paul: Did it help when I disapproved? Did it seem like a wake-up call, or did it merely depress them?
Andrew: I don’t know. I know that . . .
Paul: Well, there’s a big difference. I mean, if it’s a wake-up call, that’s good. And if it merely depresses them, then that’s bad.
Andrew: Well, I don’t know that I can make sweeping generalizations. I haven’t had that many conversations like that.
Paul: Well, when you find out, let me know. I’m trying to get better at this.
Andrew: You are? Are you actively trying to get better at the way that you communicate with them?
Paul: Oh, yes.
Andrew: How? What have you done that helps you communicate with them better? How do you bring out the best in people?
Paul: Well, one thing I’ve been trying to figure out is how to tell which people to keep nagging and which people to give up on. Because there are some people when basically we made a mistake. There’s always going to be some, we get to demo day, and we know they’re not going to raise money, and they probably know they’re not going to raise money, and they’re going to go back and get jobs.
There’s always some percentage that are just doomed. So the ones that are doomed, it’s just tormenting them for me to keep nagging them and encouraging them. Because they’re not going to make it. Whereas there are others who are on the borderline, who might fail and might succeed, and then if I nag them and nag them and nag them, I can maybe push them over the threshold. So it’s one of these situations where right on this threshold you have two extremes of what you want to do. The guys who are just good enough, you want to nag enormously, and the ones who are not going to be good enough you want to nag not at all. It’s not a continuous function. There’s a step there and so it’s kind of hard to optimize. But I spend some time thinking about that.
We spend time thinking about all aspects of how to make Y-Combinator better, not just how to give people advice, but how to pick startups, how to match them up with investors. It’s all new. Most of the stuff we’re doing is stuff people haven’t done in exactly that form before. So we can’t help thinking about how to try to do it better, how to try to do it better because we’re so bad at it.
Andrew: Really?
Paul: Really, yeah.
Andrew: You still consider yourself bad at it?
Paul: Oh, god, yes. Yes, we think of ourselves as just utterly terrible at picking startups.
Andrew: Why? What’s bad about the . . .
Paul: Because our hard choices are so often wrong, that’s why. We have tons of evidence about how bad we are.
Andrew: What do you mean? What’s your percentage of bad companies to good ones would you say?
Paul: At least a third are just disasters. In the venture business, generally, a lot of the investments are failures. Even a venture fund, which has a lot more at stake and spends a lot more effort on due diligence than our 10-minute interviews, even a venture fund, half the investments will be failures. So everyone in the venture business is bad. Maybe if we had more experience in the venture business, we would take this badness for granted and think, “Oh, well, actually, we’re really good if only a third of our investments are miserable failures.” But we’re not in the venture business and so it seems intolerable. We’re so often fooled.
Andrew: So the four of you might sit around, maybe with some entrepreneurs and try to bat around ideas about why that third didn’t work out.
Paul: Amongst ourselves, amongst ourselves. We talk about startups that we picked that we’re really glad we picked, and we say, “How do we recognize more people like that?” There are startups we were fooled by, and we think, “How do we stop being fooled in the future?” You know? We learned a lot from interviewing Wufoo. We learned how to tell the difference between people who are nervous and people who are lame. And we were figuring that out in that interview. So we get better from practice.
Andrew: Okay. What about the bad ones? What have you noticed that is disastrous? What kind of people?
Paul: Wimps.
Andrew: Wimps.
Paul: Yeah, it takes a [inaudible 00:37:02].
Andrew: What’s a wimp look like?
Paul: They have a certain body language. You should really ask Jessica. Jessica is the expert at telling when people are going to wimp out. She’s so much more sensitive to this. She has a much more natural ability. When we were writing code in college, she was judging people’s characters. So both by nature and training, she’s so much better than I am. If I want to know if someone’s a wimp, basically, the high bit is ask Jessica. She’s the one who will tell you.
Andrew: I do have to have her here on Mixergy. I don’t know if you know, back when I had a bookcase behind me for I guess the first year of doing these interviews, I often put her book behind me, “Founders at Work,” knowing that if somebody saw my interview and recognized “Founders at Work,” even though the cover was a little blurry, and it was hidden behind . . . well, it wasn’t hidden. It was out there, but it was a little hard to see. If they recognized it, and they loved it, then they’re the kind of people I want in my tribe here in Mixergy.
Paul: Yeah, same with us. Actually, that book was . . . I mean, the way she chose who to interview for that book was who we ourselves wanted to hear the stories of.
Andrew: How did that factor into Y-Combinator, the book itself?
Paul: She was working on “Founders at Work” before we started Y-Combinator. The book came before Y-Combinator, and it was part of the reason we started it. That was one of the reasons I was thinking about startups so much because, for a long time, I didn’t think that much about startups. I was working on programming languages and then spam filters. But I was talking to her about startups for the book.
Andrew: I see, so she was telling you what she saw. What was it about that that inspired you to look into startups? What was it about the stories that she was collecting?
Paul: It wasn’t so much the specific stories. She hadn’t done a lot of interviews yet, but she was sort of thinking about this book when we were talking about whom she might go and interview. So we were just talking a lot about what startups are really like. And she didn’t know what startups were really like. For example, one of the big mistakes that people make about startups, people out in the regular world, and even founders, to some extent, they think it starts because there’s some brilliant idea, and success is fore-destined. I told her, “No, the idea changes a lot. People start out, they’re not even sure they want to start a company.”
Google is the perfect example of this. So I would tell her what things were actually like in the startup world, and she was shocked. And yet she had worked for this investment bank that thought of itself as being involved in technology companies and no one in the company had a clue what startups were really like. So she was just astonished to hear all these stories from us and other startups founders who knew about what things were really like.
Andrew: Is that how you met? Well, no, she didn’t come to . . . you met her before she even wrote the book.
Paul: Oh, yeah. I think we’d been dating for over a year before YC got started.
Andrew: Okay. I’m going to go to another question from Max Cline. He asks a lot of interesting questions here. Saying, “If Y-Combinator company becomes a lifestyle company, can you guys still profit from the business, or do you need a [inaudible 00:40:20]?
Paul: No. I mean, there’s got to be an exit for equity holders to get any money. I mean, maybe in the future there’s some model where companies pay dividends instead of an exit, but we’ve never tried to get anybody to do that. We don’t have any real hopes about it, so no.
Andrew: Okay. So you’re thinking when you’re investing, you’d like to be able to sell out the company, you’re personally invested [inaudible 00:40:49] or go public.
Paul: There’s got to be some sort of liquidity and getting bought and going public are the two big forms of liquidity now. Although, there’s evolution in this world. Look at Facebook. Facebook stock is now liquid. And they’ve neither been bought or got public. So who knows what will happen in the future. In that effect, there’s no difference from us and anybody else doing venture investing. The only way to get any money out of this startup is some form of liquidity.
Andrew: I see. Why the name Y-Combinator?
Paul: It’s a trick in the lambda calculus. It’s a programming trick. I realized later that it was related to what we do, that the Y-Combinator is sort of self-referential in the way that Y-Combinator is. But, initially, I wanted to call it Y-Combinator just because I thought the Y-Combinator was a really cool thing. So it would be the perfect name for picking out the kind of people that we wanted. Hackers would look at this and think, “That’s so cool. They’re named after the Y-Combinator. There must be something going on here.” And suits would look at it and think, “Y-Combinator, what’s that?” That was what we wanted. We wanted hackers to notice us, and suits, we didn’t care.
Andrew: I see. And apparently it’s working. Well, I noticed you guys, but I didn’t know what the meaning was. Actually, I got to be honest. I still don’t understand it. I even saw it in Wikipedia before I did this interview. I probably would be the wrong person for you guys to back, then. I’m sure of that.
Paul: No, the Y-Combinator is notoriously one of the most contorted ideas in computer science. It’s the kind of thing you wouldn’t even think that something like this would be possible. I, myself, I can’t sit down and write out the Y-Combinator for you in lambda calculus. I have to look it up too. It’s not the kind of thing you actually use day to day in programming very much. It’s more of mathematical interest than practical interest.
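For readers who want to see the trick Graham describes, here is a minimal sketch in Python. The classic untyped lambda-calculus form is Y = λf.(λx. f (x x)) (λx. f (x x)); in an eagerly evaluated language like Python you need the eta-expanded variant, often called the Z combinator, or the definition recurses forever. This is standard textbook material, not anything specific to the firm.

```python
# The Y combinator computes fixed points: Y(f) behaves like f(Y(f)), which
# lets an anonymous function call itself. Python evaluates arguments eagerly,
# so we use the eta-expanded Z combinator to delay the self-application.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A factorial with no self-reference anywhere in its own definition:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(10))  # 3628800
```

The self-reference Graham mentions is visible in the x(x) applications: the function is handed a copy of itself to call, much as Y-Combinator is a company whose business is starting companies.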
Andrew: Okay. All right, what about this? All the companies that you guys back, within three months you’re able to get this beautiful user experience. I always know instantly what they’re about. I can always navigate them quickly, and they look beautiful, so beautiful that I’d be proud to show them as my own website if I owned them. Within three months, you’re able to do that. How do you do that? How do you get that user experience?
Paul: I mean, I nag them. Some of them are great already. But the ones that aren’t great, I nag. I say, “Look, people showing up at some website, they don’t care about it as much as you, the founders, care about it. What you care about with most web apps is the person who shows up randomly. You don’t care about the person who’s already signed up for your service. They’re already sold. All they need is a little login button up in the right-hand corner. So what you care about is the person who randomly clicks on your website and has their finger poised over the back button.” Just think how many websites you visit every day, and most of them are no good. You just click on back, and you go on with your life.
So you’re designing your website for the guy who’s just about to leave, he’s just on the cusp of even caring what you do. You know what your website does, but he doesn’t and he doesn’t even care that much. So you have to tell him. You have to say, “This website is about such and such,” right? And you have to tell him what’s he’s supposed to do there. The button we want you to click on is this big red one, in the upper left-hand corner. That’s what we want you to do. So at least he knows what he’s supposed to do. You know, I don’t want to click on it, or I do, but the thing that kills you is ambivalence, where he goes, “I don’t even know what this website is about.”
How many times have you clicked on some link to some website and you think, “What is this startup even for?” You know?
Andrew: But there’s the curse of knowledge, as I think they said in the book “Made to Stick” that you’ve worked so hard on the site, you understand everything about it. To try to simplify it for somebody who’s brand-new is really hard there. And then to do it all in such a short period of time, to figure out design, is tough.
Paul: [inaudible 00:45:02]
Andrew: Do you guys have somebody on board who does that? Sorry.
Paul: No. It’s just like writing an essay. You have this big, complicated situation. You need to boil it down to its essentials. So I just look at the startup, and I think if I were writing something about it, how would I describe it. This is the essential thing. This is what they should say. We don’t have a graphic designer on staff, although we’ve been thinking of it. We have a lawyer now. We have a lawyer on retainer, who fixes the startup’s ordinary, everyday legal problems. That turns out to be huge. That is great. That has saved so much money and trouble. So we’re thinking of getting a graphic designer on retainer too, but we just haven’t got around to it yet.
Andrew: I notice that you have legal documents available to entrepreneurs, and have had for a long time: documents that they can use for investors, and documents, I guess, that they use along that process. What about a set of documents for entrepreneurs who are just starting to team up, where they can spread the ownership of the company properly and ensure that they each own the IP?
Paul: We have that.
Andrew: You do have that [inaudible 00:46:04]
Paul: [inaudible 00:46:05] paperwork online. I mean, maybe it isn’t. But I thought it was, I think it is. Yeah, I think it is.
Andrew: Okay. So [inaudible 00:46:11].
Paul: [inaudible 00:46:12] for starting a company, in the Series AA documents.
Andrew: I’m sorry. We just lost the connection for a little bit. So if I and this guy Wallflower on Hacker News decided to partner up because we like the way we’ve exchanged ideas here in the comments, we can go to Y-Combinator, get a legal document that we can use to split up ownership of the business, and then start working?
Paul: I think so. I think so.
Andrew: I’ll have to look, and if somebody in the chatroom knows, I’d love to see what you guys think of that. I’d love to see if you can find it, and maybe link us to it.
Paul: Go to Google and search for Series AA Y-Combinator. That’s what it’s called. Series AA.
Andrew: [inaudible 00:46:49]
Paul: Yeah, I think so.
Andrew: How do you find a partner? If you’re just working on your own, and you’re looking for somebody to team up with, how do you find that person?
Paul: You should work together with them. The two biggest ways people find co-founders are to go to school with them or to work with them at the same company, because you don’t really know what someone’s going to be like until you’ve worked with them on stuff. They might seem smart, but they’ll turn out to be flakes or something like that. So what I would tell people, I think having a co-founder is very important. We’ve seen tons of evidence of this. We do fund some number of single-founder startups and they do worse than startups with multiple founders.
There’s a lot of empirical evidence too. If you look at all the startups, all the technology companies that are most successful, very few of them have single founders. Even companies that seem now like they have one guy, like Oracle, initially they had more than one founder. He just came to the fore. Same with Microsoft, or Apple. Initially, you need a couple guys to spread the load over.
So having a co-founder is very important and so what I would do is if you don’t have a co-founder, find a co-founder because not having a co-founder is going to kill you. That’s the part that’s going to kill you. So fix the part that’s going to kill you. Spend six months trying to find somebody that you can work with and then do the startup, instead of rushing into it unprepared.
Andrew: I see. So find somebody who you can maybe work with on a small project, maybe on Hacker News, maybe at a bar camp or some other event.
Paul: The project you work on together does not have to be the startup. It’s just as well if it isn’t, because then you don’t have to figure out who’s in charge of what, how to split the intellectual property, and what to work on. Just work together with them on some open-source projects for a couple months and then you’ll know if they’re good.
Andrew: All right. Let’s talk about Hacker News. Why did you launch Hacker News?
Paul: Well, originally, I just wanted some kind of application to test this new programming language, Arc, on. If you’re going to write a programming language, you ought to write some kind of application in it to make sure it’s actually good for writing programs. So I wanted to write some kind of program in it and I had tried to convince the Reddits to create a sub-Reddit for startups. They were taking forever to make up their mind about what was the right way to implement sub-Reddits. They took a long time to figure out how to do sub-Reddits. They had [inaudible 00:49:20] sub-Reddit first and that was a one-off. But general-purpose sub-Reddits, where you could create one about any topic, came much later.
Eventually, the combination of wanting to write some kind of application in Arc and getting tired of waiting for the Reddits to make a startup sub-Reddit made me decide I would start a website about startup news. And that’s what Hacker News was originally called: Startup News. But after six months, we changed it to Hacker News because we got sick of reading about nothing but startup stuff.
Andrew: Was it to find news stories for yourself, or did you have a vision for what this community would do?
Paul: Well, we already had a whole bunch of YC founders at that point. We probably had 150, 200 founders. So we had this community of people who were interested in the same stories and they were the original users of startup news. It was like a news aggregator for those 200 people.
Andrew: I see. Okay. And who’s managing it now? Who’s deciding what stories get killed? Who’s the person behind . . .
Paul: Me, me. There are a bunch of editors, but I spend . . . it’s shocking. Probably one of the biggest surprises in my life is how much time gets sucked up by Hacker News. There’s just so much crap. There’s enormous amounts of spam, there’s like 1001 varieties of semi-trolls, some of them well-meaning, some of them just crazy. God, they suck up a lot of time. The whole site just sucks up a lot of time. I try to automate as much as I can, but it’s not that automatable.
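Graham’s essay “A Plan for Spam” describes the statistical approach most mail filters ended up adopting, and a toy version shows which part of moderation is automatable. Below is a minimal naive-Bayes-style scorer in Python; the training data and scoring details are invented for illustration, and nothing here is meant to reflect how Hacker News actually filters.

```python
import math
from collections import Counter

def train(spam_docs, ham_docs):
    """Count how often each token appears in spam and in legitimate text."""
    spam, ham = Counter(), Counter()
    for d in spam_docs:
        spam.update(d.lower().split())
    for d in ham_docs:
        ham.update(d.lower().split())
    return spam, ham

def spam_probability(text, spam, ham):
    """Combine per-token spam odds (with add-one smoothing) into one score."""
    log_odds = 0.0
    for tok in text.lower().split():
        p = (spam[tok] + 1) / (spam[tok] + ham[tok] + 2)
        log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))

spam_model, ham_model = train(
    ["buy cheap pills now", "cheap replica watches now"],
    ["the meeting moved to tuesday", "patch fixes the parser bug"],
)
print(round(spam_probability("cheap pills shipped now", spam_model, ham_model), 3))
# ~0.947: flag or downrank anything above a chosen threshold
```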
Andrew: Is there a payoff in that, I mean in all the work that you’re spending there?
Paul: Well, it’s a great source of people applying to Y-Combinator, I think. I never tried to track it. But it means huge numbers of hackers spend every day looking at this little orange Y in the left-hand corner. That can’t be bad. At least they know about us. At least people know about us. So I think it’s good, but I’ve never actually tried to measure whether it’s good. It’s more like it just got started, and I got stuck into working on it, and now I spend too much time on it, but what am I going to do, shut it down? I don’t really make any conscious decisions about it.
Andrew: Who are the other editors?
Paul: YC founders. I think there are around 30 of them. They’re all people I know and trust, and they have good judgment and care about the site. But I’m not even sure, actually, who they are. I’m not sure which people are editors. There’s something I can go and look at and see who the editors are, but I don’t know.
Andrew: Okay. And if there’s somebody who’s stories automatically get killed, or specific sites that are automatically killed, that’s probably you saying, “No, I don’t want this as part of my community.”
Paul: Sites getting killed, anyone can do. Actually, anyone can ban a user too. But usually most of the people who get banned are obvious spammers or trolls, really egregious trolls who obviously just think of themselves as trolls. Editors will ban them.
Andrew: Okay, okay. Can you say why certain sites get banned, even if they are in the hacker space, or in the news space? Are there certain things that you just don’t want, certain kinds of stories that don’t fit?
Paul: Well, most of the sites, the huge majority of the sites that are banned, are just spammers. I don’t think there are an awful lot of sites that are banned that are related to the stuff that gets talked about on Hacker News. We wouldn’t ban a site unless it was . . . I think cracked.com was banned, for example, because it’s crap. It’s deliberately filled with fluff. I just doubt there’s anything interesting on there that would engage some intellectual curiosity. But that’s the kind of site that’s not a spammer site that’s actually banned. I’m not sure, though. It might not be banned. I’m not sure.
Andrew: Okay. All right. Finally, let me start asking you a little bit about my work, here. I love the community that you built there, that you built on Hacker News and around Y-Combinator and the influence it’s had on this whole space. I feel like who do I want to reach, who do I really . . . I don’t want to get a huge audience, I want to get the right audience. I’m looking at the people who care about your work and I’m saying, “They, to me, are the right audience.” How can I serve that audience well, Paul, with my interviews?
Paul: Well, it seems like you are. From what you say, it sounds like Hacker News is identical with your audience. And you seem to be pretty popular on Hacker News, so it seems like you're doing fine. I don't think there's anything you have to change dramatically.
Andrew: I have your kind of curiosity, though, about what it takes to build a successful company, and what the entrepreneurs are thinking of, and who they are. Do you have a sense of what you’d like to see, or how to bring that out of entrepreneurs?
Paul: Well, you want to ask different questions than news reporters generally ask. There are all these traditions in the news business that people now take for granted of asking sort of shallow questions that create controversy. For example, a sort of classic, old-fashioned, dumb-ass reporter, if they were interviewing Larry and Sergey, they would say, “So, Larry. What about China?” Right? I’m like, who gives a fuck about China? It’s just some political controversy. There’s nothing intellectually deep about it.
What I care about is things like: when did they make the architectural decision to make their search engine work on a whole bunch of crappy, cheap computers? That's important. That's not current events, and it doesn't generate a lot of controversy or cheap, short-term interest. So I would say the way to serve this audience . . . I mean, this is the kind of audience that doesn't go for that kind of crap, mostly, unless there are people there that I would rather not have there. But be deep. Ask the questions that matter, instead of the kind that merely elicit controversy. It seems like you're pretty good at that.
Andrew: I’m trying. I do feel like those other stories get more attention.
Paul: In the short-term, yeah, sure. But this is a different audience.
Andrew: I see. Okay.
Paul: Ask the questions that would be helpful to someone starting a startup.
Andrew: Do you see somebody who does that well now, who brings out those key moments in entrepreneurs?
Paul: You mean an interviewer who asks . . .
Andrew: Yeah, an interviewer or a writer.
Paul: [inaudible 00:56:03] questions. Yeah, I do see someone like that fairly frequently, in fact.
Andrew: You know what? Does she blog? I’d like to see a chapter of the next “Founders at Work” every week on her site. I could read that all day long. I know she doesn’t have the time for it anymore.
Paul: [inaudible 00:56:22] do one of these interviews. It takes her more than a week to do one of these interviews.
Andrew: I see. I get that. I wish she had more time to be able to do that. Would you please give her the space to go do that?
Paul: You know, that is her deepest wish. If she is watching this, she’ll be laughing so much at this point because that’s what she would like the most too, to be able to spend more time on the new version of “Founders at Work.” She’s working on a new edition, with a bunch of new interviews.
Andrew: Oh, wow.
Paul: Yeah, the big problem in her life is that she has to spend all her time on random crap and doesn’t get to spend enough time on the book.
Andrew: I’m going to have to ask her.
Paul: That’s the problem that everyone writing a book has, incidentally.
Andrew: That there just isn’t enough time to do it while you’re doing everything else.
Paul: Yeah, they end up having to . . . because the book doesn't have deadlines and other things do. Do you have deadlines?
Andrew: Yeah.
Paul: Like this interview, it happens at a particular time. The crap work has evolved this protective mechanism to not get ignored, called deadlines. So if you look at what most people do, instead of their great vision for their life, they spend their time doing things that have deadlines.
Andrew: Yeah, like email. Answer it.
Paul: Yeah, yeah. So one of the secrets to getting stuff done is to be able to blow off stuff, even stuff that seems important.
Andrew: Let's see. Piss off some people who have imposed a deadline on you, just so you can get the stuff that you really care about done.
Paul: Yeah, you probably have to piss people off to get really hard work done.
Andrew: All right. Well, thank you for doing this interview with me. I’m so glad to finally get to meet you and I hope I get to meet you at some point in person too.
Paul: Yeah, you should drop by dinner. Send me an email.
Andrew: I’m still in Buenos Aires, but when I’m back in the US, I’d love to come [inaudible 00:58:0].
Paul: You’re in Buenos Aires?
Andrew: Yeah.
Paul: That is the Internet for you.
Andrew: Yeah. All right. Well, thank you. If you’re ever down here, we’ll have you over for a steak and a malbec. If not, I’ll wait till I get back to the US.
Paul: All right. Thank you very much. Nice to talk to you.
Andrew: All right, and thank you all for watching. If you have any feedback or comments, how can I become a better interviewer? Who else should I be interviewing? Please, bring it on. I always love to hear that stuff. Bye.
Paul: Bye.
# How Y Combinator Helped 172 Startups Take Off
Want to see how much impact Paul Graham can have on a startup? Here are 3 examples from past Mixergy interviews. The first is Alexis Ohanian, who told me that his life changed when he headed to snowy Boston over spring break so he could hear Graham talk about startups. Graham ended up investing in Alexis's company through what became the seed funding firm Y Combinator, but the amazing part wasn't the money. It was that Y Combinator helped him move past a bad business idea that he and his partner had spent a year on, and discover a better one, which became Reddit, the social news site that was sold to Conde Nast within 2 years of launching.
Then there’s Kevin Hale who told me that when he interviewed with Y Combinator, he and his co-founders had an idea for an elaborate content management system. During the interview, despite initial resistance, they were convinced to create a form builder instead. The business became Wufoo, the startup that reached profitability within 9 months.
Finally, a few weeks ago, I talked to the founders of Airbnb. When they joined Y Combinator, they had a site that gave travelers an affordable alternative to hotels by matching them with locals who had space in their homes. They had a national presence, but they were constantly struggling for cash. Y Combinator gave them some funding to keep going, but they told me it was Graham's suggestion that they focus on just one city till they got their product right, which changed everything. Within a few months, they had a better product and they were finally profitable.
How does Graham do it? That’s what I wanted to find out in this interview.
#### Paul Graham
Y Combinator

Paul Graham is a partner at Y Combinator, which gives startups seed funding and mentorship. He's known for his work on a new Lisp dialect called "Arc," his essays, and for founding and administering Hacker News. Previously, he co-founded Viaweb, which was sold in 1998 and became Yahoo! Store.
25,122,478 | https://uicoach.io/ | UI Coach | UI Coach
Improve your UI/UX Design skills by designing real-world projects, UI Coach makes it easy for you to practice your craft with Project ideas, Color palettes, Font pairings, and inspirations from award winning and nominated websites.
12,458,263 | http://katiehempenius.com/post/tinder-profile-analysis/ | What do people mention in their Tinder profiles? | Katie Hempenius
# What do people mention in their Tinder profiles?
September 8, 2016
## An analysis of 10,000 US Tinder Profiles
### Interests
Let me guess - you like to travel, listen to music, and have a dog. And you work somewhere?
### Describing Physical Appearance
Guys rarely mention their physical appearance (except height); women like to mention their tattoos.
### Describing Personality
Potential clichés: Identifying as a nerd, spontaneous, adventurous, sarcastic, or awkward.
### About This Analysis
This analysis was generated using data from the profiles of 10,000 Tinder users across 22 US cities and towns. The median age of male users was 26, the median age of female users was 23.
### How was this made?
This was created using the Charles web proxy application, tinderjs npm module, and Mongo. Graphics were made in InDesign.
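The post doesn't include the analysis code, but the counting step is simple once the scraped profiles are in Mongo; a minimal sketch, assuming a `profiles` collection with a `bio` text field (both names hypothetical, not taken from the post):

```js
// Hypothetical sketch of the keyword-counting step. The collection layout
// (`tinder` database, `profiles` collection, `bio` field) is an assumption.
const { MongoClient } = require('mongodb');

async function countMentions(keyword) {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  try {
    const profiles = client.db('tinder').collection('profiles');
    // Case-insensitive whole-word match against the profile text.
    return await profiles.countDocuments({
      bio: { $regex: new RegExp(`\\b${keyword}\\b`, 'i') },
    });
  } finally {
    await client.close();
  }
}

countMentions('travel').then((n) => console.log(`travel: ${n} profiles`));
```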
22,802,488 |
https://www.nytimes.com/2020/04/06/world/europe/boris-johnson-coronavirus-hospital-intensive-care.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,650,682 |
https://zombiecodekill.com/2016/10/06/the-legacy-of-pieter-hintjens/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
24,229,791 |
https://limeade.so
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
18,167,290 |
https://www.reddit.com/r/btc/comments/9mb6ex/bitcoin_cash_contentious_forks_outcome_deficient/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
34,288,996 | https://github.com/tja/hue-picker | GitHub - tja/hue-picker: 💡 Philips Hue Color Picker | Tja
**Hue Picker** is a bare-bones web server and application that controls a single Philips Hue light.
It gives control over an individual Hue light without opening up the entire home automation network. For instance, it allows children to manage the light in their own room without being able to mess around with all other lights or home automation appliances.

The *Hue Picker* web application can be installed as an app on the iOS or Android home screen to simplify its usage further.
Pre-built binaries are available on the release page.
Download the binary, make it executable, and move it to a folder in your `PATH`:
```
curl -sSL https://github.com/tja/hue-picker/releases/download/v0.2.0/hue-picker-`uname -s`-`uname -m` >/tmp/hue-picker
chmod +x /tmp/hue-picker
sudo mv /tmp/hue-picker /usr/local/bin/hue-picker
```
*Hue Picker* needs to be registered with the local Hue bridge. This is done as follows:

`hue-picker register`

The tool instructs you to press the button on the Hue bridge. Once the button is pressed, three pieces of information will be printed: the Hue bridge's **Host** address, the **Bridge ID**, and the **User ID**.
The next step is to find the ID of the light that should be controlled. Using the Host address and Bridge ID, simply do:

`hue-picker list --host="192.168.0.40" --user="YDbjwv...4arRIk"`

The tool will output the list of rooms and associated lights. Each light is prefixed with the light ID in brackets (e.g. `[00:17:88:01:02:07:21:13-0b]`). Note down the light that *Hue Picker* should control.
*Hue Picker* serves a web application via its built-in web server. Using the previously gathered information, the server can be launched like this:

`hue-picker serve --host="192.168.0.40" --user="YDbjwv...4arRIk" --light="00:17:...:21:13-0b"`

Once started, the web application can be opened at http://localhost:80/. Note that the port number and network interface can be changed via the `--listen` parameter.

Run `hue-picker serve --help` to see the list of all available options.
*Hue Picker* will look for a configuration file `config.yaml` in several places, in the following order:

- `/etc/hue-picker/config.yaml`
- `$HOME/.config/hue-picker/config.yaml`
- `$PWD/config.yaml`

Command line parameters and configuration file options are named the same.
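As an illustration, a `config.yaml` matching the `serve` invocation above might look like the following; the key names are an assumption based on the note that configuration options are named the same as the CLI parameters:

```yaml
# Hypothetical config.yaml; keys assumed to mirror the CLI flags shown above.
host: "192.168.0.40"                 # Hue bridge address
user: "YDbjwv...4arRIk"              # user ID printed by `hue-picker register`
light: "00:17:88:01:02:07:21:13-0b"  # light ID printed by `hue-picker list`
listen: ":8080"                      # optional web server address/port (format assumed)
```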
Furthermore, *Hue Picker* can be configured via environment variables. Simply take the upper-cased command line parameter and prefix it with `HUE_PICKER_` — e.g. `--host` becomes `HUE_PICKER_HOST`, `--bridge` becomes `HUE_PICKER_BRIDGE`, etc.
Command line parameters override configuration file options, which override environment variables.
Copyright (c) 2022–23 Thomas Jansen. Released under the MIT License.
34,396,934 | https://moores.samaltman.com/ | Moore's Law for Everything | Sam Altman
# Moore's Law for Everything
My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe. Software that can think and learn will do more and more of the work that people now do. Even more power will shift from labor to capital. If public policy doesn’t adapt accordingly, most people will end up worse off than they are today.
We need to design a system that embraces this technological future and taxes the assets that will make up most of the value in that world–companies and land–in order to fairly distribute some of the coming wealth. Doing so can make the society of the future much less divisive and enable everyone to participate in its gains.
In the next five years, computer programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly-line work and maybe even become companions. And in the decades after that, they will do almost everything, including making new scientific discoveries that will expand our concept of “everything.”
This technological revolution is unstoppable. And a recursive loop of innovation, as these smart machines themselves help us make smarter machines, will accelerate the revolution’s pace. Three crucial consequences follow:
- This revolution will create phenomenal wealth. The price of many kinds of labor (which drives the costs of goods and services) will fall toward zero once sufficiently powerful AI "joins the workforce."
- The world will change so rapidly and drastically that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want.
- If we get both of these right, we can improve the standard of living for people more than we ever have before.
Because we are at the beginning of this tectonic shift, we have a rare opportunity to pivot toward the future. That pivot can’t simply address current social and political problems; it must be designed for the radically different society of the near future. Policy plans that don’t account for this imminent transformation will fail for the same reason that the organizing principles of pre-agrarian or feudal societies would fail today.
What follows is a description of what’s coming and a plan for how to navigate this new landscape.
## The AI Revolution
On a zoomed-out time scale, technological progress follows an exponential curve. Compare how the world looked 15 years ago (no smartphones, really), 150 years ago (no combustion engine, no home electricity), 1,500 years ago (no industrial machines), and 15,000 years ago (no agriculture).
The coming change will center around the most impressive of our capabilities: the phenomenal ability to think, create, understand, and reason. To the three great technological revolutions–the agricultural, the industrial, and the computational–we will add a fourth: the AI revolution. This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly.
The technological progress we make in the next 100 years will be far larger than all we’ve made since we first controlled fire and invented the wheel. We have already built AI systems that can learn and do useful things. They are still primitive, but the trendlines are clear.
## Moore's Law for Everything
Broadly speaking, there are two paths to affording a good life: an individual acquires more money (which makes that person wealthier), or prices fall (which makes everyone wealthier). Wealth is buying power: how much we can get with the resources we have.
The best way to increase societal wealth is to decrease the cost of goods, from food to video games. Technology will rapidly drive that decline in many categories. Consider the example of semiconductors and Moore’s Law: for decades, chips became twice as powerful for the same price about every two years.
In the last couple of decades, costs in the US for TVs, computers, and entertainment have dropped. But other costs have risen significantly, most notably those for housing, healthcare, and higher education. Redistribution of wealth alone won’t work if these costs continue to soar.
AI will lower the cost of goods and services, because labor is the driving cost at many levels of the supply chain. If robots can build a house on land you already own from natural resources mined and refined onsite, using solar power, the cost of building that house is close to the cost to rent the robots. And if those robots are made by other robots, the cost to rent them will be much less than it was when humans made them.
Similarly, we can imagine AI doctors that can diagnose health problems better than any human, and AI teachers that can diagnose and explain exactly what a student doesn’t understand.
“Moore’s Law for everything” should be the rallying cry of a generation whose members can’t afford what they want. It sounds utopian, but it’s something technology can deliver (and in some cases already has). Imagine a world where, for decades, everything–housing, education, food, clothing, etc.–became half as expensive every two years.
We will discover new jobs–we always do after a technological revolution–and because of the abundance on the other side, we will have incredible freedom to be creative about what they are.
## Capitalism for Everyone
A stable economic system requires two components: growth and inclusivity. Economic growth matters because most people want their lives to improve every year. In a zero-sum world, one with no or very little growth, democracy can become antagonistic as people seek to vote money away from each other. What follows from that antagonism is distrust and polarization. In a high-growth world the dogfights can be far fewer, because it’s much easier for everyone to win.
Economic inclusivity means everyone having a reasonable opportunity to get the resources they need to live the life they want. Economic inclusivity matters because it’s fair, produces a stable society, and can create the largest slices of pie for the most people. As a side benefit, it produces more growth.
Capitalism is a powerful engine of economic growth because it rewards people for investing in assets that generate value over time, which is an effective incentive system for creating and distributing technological gains. But the price of progress in capitalism is inequality.
Some inequality is ok–in fact, it’s critical, as shown by all systems that have tried to be perfectly equal–but a society that does not offer sufficient equality of opportunity for everyone to advance is not a society that will last.
The traditional way to address inequality has been by progressively taxing income. For a variety of reasons, that hasn’t worked very well. It will work much, much worse in the future. While people will still have jobs, many of those jobs won’t be ones that create a lot of economic value in the way we think of value today. As AI produces most of the world’s basic goods and services, people will be freed up to spend more time with people they care about, care for people, appreciate art and nature, or work toward social good.
We should therefore focus on taxing capital rather than labor, and we should use these taxes as an opportunity to directly distribute ownership and wealth to citizens. In other words, the best way to improve capitalism is to enable everyone to benefit from it directly as an equity owner. This is not a new idea, but it will be newly feasible as AI grows more powerful, because there will be dramatically more wealth to go around. The two dominant sources of wealth will be 1) companies, particularly ones that make use of AI, and 2) land, which has a fixed supply.
There are many ways to implement these two taxes, and many thoughts about what to do with them. Over a long period of time, perhaps most other taxes could be eliminated. What follows is an idea in the spirit of a conversation starter.
We could do something called the American Equity Fund. The American Equity Fund would be capitalized by taxing companies above a certain valuation 2.5% of their market value each year, payable in shares transferred to the fund, and by taxing 2.5% of the value of all privately-held land, payable in dollars.
All citizens over 18 would get an annual distribution, in dollars and company shares, into their accounts. People would be entrusted to use the money however they needed or wanted—for better education, healthcare, housing, starting a company, whatever. Rising costs in government-funded industries would face real pressure as more people chose their own services in a competitive marketplace.
As long as the country keeps doing better, every citizen would get more money from the Fund every year (on average; there will still be economic cycles). Every citizen would therefore increasingly partake of the freedoms, powers, autonomies, and opportunities that come with economic self-determination. Poverty would be greatly reduced and many more people would have a shot at the life they want.
A tax payable in company shares will align incentives between companies, investors, and citizens, whereas a tax on profits does not–incentives are superpowers, and this is a critical difference. Corporate profits can be disguised or deferred or offshored, and are often disconnected from share price. But everyone who owns a share in Amazon wants the share price to rise. As people’s individual assets rise in tandem with the country’s, they have a literal stake in seeing their country do well.
Henry George, an American political economist, proposed the idea of a land-value tax in the late 1800s. The concept is widely supported by economists. The value of land appreciates because of the work society does around it: the network effects of the companies operating around a piece of land, the public transportation that makes it accessible, and the nearby restaurants, coffeeshops, and access to nature that makes it desirable. Because the landowner didn’t do all that work, it’s fair for that value to be shared with the larger society that did.
If everyone owns a slice of American value creation, everyone will want America to do better: collective equity in innovation and in the success of the country will align our incentives. The new social contract will be a floor for everyone in exchange for a ceiling for no one, and a shared belief that technology can and must deliver a virtuous circle of societal wealth. (We will continue to need strong leadership from our government to make sure that the desire for stock prices to go up remains balanced with protecting the environment, human rights, etc.)
In a world where everyone benefits from capitalism as an owner, the collective focus will be on making the world “more good” instead of “less bad.” These approaches are more different than they seem, and society does much better when it focuses on the former. Simply put, more good means optimizing for making the pie as large as possible, and less bad means dividing the pie up as fairly as possible. Both can increase people’s standard of living once, but continuous growth only happens when the pie grows.
## Implementation and Troubleshooting
The amount of wealth available to capitalize the American Equity Fund would be significant. There is about $50 trillion worth of value, as measured by market capitalization, in US companies alone. Assume that, as it has on average over the past century, this will at least double over the next decade.
There is also about $30 trillion worth of privately-held land in the US (not counting improvements on top of the land). Assume that this value will roughly double, too, over the next decade–this is somewhat faster than the historical rate, but as the world really starts to understand the shifts AI will cause, the value of land, as one of the few truly finite assets, should increase at a faster rate.
Of course, if we increase the tax burden on holding land, its value will diminish relative to other investment assets, which is a good thing for society because it makes a fundamental resource more accessible and encourages investment instead of speculation. The value of companies will diminish in the short-term, too, though they will continue to perform quite well over time.
It’s a reasonable assumption that such a tax causes a drop in value of land and corporate assets of 15% (which only will take a few years to recover!).
Under the above set of assumptions (current values, future growth, and the reduction in value from the new tax), a decade from now each of the 250 million adults in America would get about $13,500 every year. That dividend could be much higher if AI accelerates growth, but even if it’s not, $13,500 will have much greater purchasing power than it does now because technology will have greatly reduced the cost of goods and services. And that effective purchasing power will go up dramatically every year.
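As a quick back-of-the-envelope check of that figure under the essay's stated assumptions (a minimal sketch in Python):

```python
# Back-of-the-envelope check of the ~$13,500 figure under the essay's assumptions.
companies = 50e12 * 2  # $50T of US market cap, assumed to double over a decade
land = 30e12 * 2       # $30T of privately-held land, also assumed to double
haircut = 0.85         # ~15% drop in asset values caused by the new tax
tax_rate = 0.025       # 2.5% annual tax on both asset classes
adults = 250e6         # US adults receiving the distribution

annual_pool = (companies + land) * haircut * tax_rate
print(f"${annual_pool / adults:,.0f} per adult per year")  # -> $13,600
```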
It would be easiest for companies to pay the tax each year by issuing new shares representing 2.5% of their value. There would obviously be an incentive for companies to escape the American Equity Fund tax by off-shoring themselves, but a simple test involving a percentage of revenue derived from America could address this concern. A larger problem with this idea is the incentive for companies to return value to shareholders instead of reinvesting it in growth.
If we tax only public companies, there would also be an incentive for companies to stay private. For private companies that have annual revenue in excess of $1 billion, we could let their tax in equity accrue for a certain (limited) number of years until they go public. If they remain private for a long time, we could let them settle the tax in cash.
We’d need to design the system to prevent people from consistently voting themselves more money. A constitutional amendment delineating the allowable ranges of the tax would be a strong safeguard. It is important that the tax not be so large that it stifles growth–for example, the tax on companies must be much smaller than their average growth rate.
We’d also need a robust system for quantifying the actual value of land. One way would be with a corps of powerful federal assessors. Another would be to let local governments do the assessing, as they now do to determine property taxes. They would continue to receive local taxes using the same assessed value. However, if a certain percentage of sales in a jurisdiction in any given year falls too far above or below the local government’s estimate of the property’s values, then all the other properties in their jurisdiction would be reassessed up or down.
The theoretically optimal system would be to tax the value of the land only, and not the improvements built on top of it. In practice, this value may turn out to be too difficult to assess, so we may need to tax the value of the land and the improvements on it (at a lower rate, as the combined value would be higher).
Finally, we couldn’t let people borrow against, sell, or otherwise pledge their future Fund distributions, or we won’t really solve the problem of fairly distributing wealth over time. The government can simply make such transactions unenforceable.
## Shifting to the New System
A great future isn’t complicated: we need technology to create more wealth, and policy to fairly distribute it. Everything necessary will be cheap, and everyone will have enough money to be able to afford it. As this system will be enormously popular, policymakers who embrace it early will be rewarded: they will themselves become enormously popular.
In the Great Depression, Franklin Roosevelt was able to enact a huge social safety net that no one would have thought possible five years earlier. We are in a similar moment now. So a movement that is both pro-business and pro-people will unite a remarkably broad constituency.
A politically feasible way to launch the American Equity Fund, and one that would reduce the transitional shock, would be with legislation that transitions us gradually to the 2.5% rates. The full 2.5% rate would only take hold once GDP increases by 50% from the time the law is passed. Starting with small distributions soon will be both motivating and helpful in getting people comfortable with a new future. Achieving 50% GDP growth sounds like it would take a long time (it took 13 years for the economy to grow 50% to its 2019 level). But once AI starts to arrive, growth will be extremely rapid. Down the line, we will probably be able to reduce a lot of other taxes as we tax these two fundamental asset classes.
The changes coming are unstoppable. If we embrace them and plan for them, we can use them to create a much fairer, happier, and more prosperous society. The future can be almost unimaginably great.
Thanks to Steven Adler, Daniela Amodei, Adam Baybutt, Chris Beiser, Jack Clark, Ryan Cohen, Tyler Cowen, Matt Danzeisen, Steve Dowling, Tad Friend, Lachy Groom, Chris Hallacy, Reid Hoffman, Ingmar Kanitscheider, Oleg Klimov, Matt Knight, Aris Konstantinidis, Andrew Kortina, Matt Krisiloff, Scott Krisiloff, John Luttig, Erik Madsen, Preston McAfee, Luke Miles, Arvind Neelakantan, David Oates, Cullen O’Keefe, Alethea Power, Raul Puri, Ilya Sutskever, Luke Walsh, Caleb Watney, and Wojchiech Zaremba for reviewing drafts of this, and to Gregory Koberger for designing it.
21,466,812 | https://www.yubico.com/2019/11/yubico-reveals-first-biometric-yubikey-at-microsoft-ignite/ | Blog
As we enter into the last few months of 2024, Yubico has a lot in store with a continued focus on innovation and garnering feedback from our valued community around the world. After an exciting year for the company filled with numerous announcements spanning new and expanded partnerships, product updates, innovations and prestigious recognitions, we're […]

In a world that's more connected than ever before, cyber attacks are more rampant than ever with bad actors continuing to take advantage of human error. Despite ongoing advancements in security technologies and processes, it continues to be fairly simple to compromise user credentials through phishing and social engineering attacks are made even easier with […]

The October 17, 2024 deadline for European Union (EU) Member States to implement the NIS2 Directive into their respective national legislations is fast approaching. We first highlighted NIS2 and the new requirements in a blog post back in March 2023, and now that the deadline is looming, all businesses across the EU must closely monitor […]

Zero Trust Architecture (ZTA) represents a paradigm shift in cybersecurity strategy, moving away from the traditional perimeter-based security model to one that assumes no implicit trust, even within the network. In compliance with Executive Order 14028 to improve the nation's cybersecurity, the Office of Management and Budget (OMB) released M-22-09 mandating all federal agencies to […]

Break glass accounts are crucial accounts that provide access to critical systems during a variety of emergencies. Microsoft's recent announcement on the enforcement of multi-factor authentication (MFA) for Microsoft Entra ID sign-ins highlights the impact on break glass accounts and improved security postures: "We have heard your questions about break glass or 'emergency access' accounts. […]

As the rate and complexity of credential theft and phishing attacks on enterprises continue to increase rapidly, so do the number of server-based attacks. YubiHSM 2, the leading nano-form factor hardware security module (HSM), offers organizations superior protection from these attacks for sensitive data against theft and misuse. The new YubiHSM 2 (v2.4) – officially […]

Yubico has had an impressive first half of the year, with plenty of exciting developments as the company continues to scale. To get the inside scoop, I caught up with Yubico's CEO, Mattias Danielsson, to chat about the standout moments so far, the current landscape of the cybersecurity industry, and how Yubico is making a […]

Operational Technology (OT) is a critical component of several industries as it powers the systems that control the distribution of power, water and other utilities, drives the machinery that powers manufacturing, and controls everything from traffic lights to tanker ships. With the OT space under constant threat from cyber attacks, it's more important than ever […]

*NOTE: This blog was originally published on August 1, 2024 and has been updated to reflect the shipping timeline of the keys on August 13, 2024.* Following the release of 5.7 firmware on the YubiKey 5 Series and Security Key Series in May, we are excited to announce that YubiKey Bio Series – FIDO Edition […]

Yubico has worked closely with Microsoft for over a decade to keep businesses around the world and the Microsoft solutions they use both secure and phishing-resistant. Recognizing the importance of multi-factor authentication (MFA), Microsoft recently mandated that MFA be used by all Azure users – a critical move to require stronger authentication for end users […]
32,855,125 | https://github.com/rustq/colorid.js | GitHub - rustq/colorid-wasm: The unique 4-colors-ID string generator in WASM | Rustq
Color as Identity - The unique 4-colors-ID string generator in `WASM`.

The performance of `ColorID` is better than `UUID` and `NanoID` (the algorithm of `ColorID` is actually very similar to `UUID.V4`).

If we use `ColorID` to represent identities in social networks, we can easily generate personalized social information for users, such as avatars, NFTs, etc.

A `ColorID` consists of 4 colors in the RGB channel, made from 12 unsigned 8-bit numbers (`u8`), so the theoretical total of `ColorID`s is `(2^8)^12` = `2^96` ≈ `7*10^28`. That means that even if a `ColorID` were generated every second for each of the world's 7.8 billion people, it would take 300 billion years to use up all `ColorID`s. `ColorID` also provides safety by using a hardware random generator, so it can be used in clusters as well.

Because of the four color theorem, if we need to color the regions of any avatar or NFT so that no two adjacent regions have the same color, four colors are enough!
`$ npm i colorid-wasm`
`in module`
```
import * as wasm from "colorid-wasm";
await wasm.default();
wasm.colorid(); // #5B34F9-#34F9DF-#F9DF4E-#DF4EB5
```
`in React`
```
React.useEffect(() => {
import("colorid-wasm").then(async wasm => {
await wasm.default();
wasm.colorid(); // #631707-#4D4E40-#5BBD69-#4FC6B4
})
}, []);
```
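Going the other way, a `ColorID` is just four hex colors joined by dashes, so decoding one back into its 12 raw bytes takes only a few lines; a plain JavaScript sketch (not an API exported by the library):

```js
// Decode a ColorID like "#5B34F9-#34F9DF-#F9DF4E-#DF4EB5" into 4 RGB triples.
// Plain JavaScript illustration; not part of colorid-wasm's API.
function decodeColorID(id) {
  return id.split('-').map((hex) => {
    const n = parseInt(hex.slice(1), 16); // strip '#' and parse the 24-bit value
    return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff]; // [r, g, b] as u8
  });
}

console.log(decodeColorID('#5B34F9-#34F9DF-#F9DF4E-#DF4EB5'));
// -> [[91, 52, 249], [52, 249, 223], [249, 223, 78], [223, 78, 181]]
```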
`wasm benchmark`
```
uuid v4 68.29999995231628 ms
colorid-wasm 73.70000004768372 ms
crypto.randomUUID 78.20000004768372 ms
nanoid 123.90000009536743 ms
```
```
$ cd colorid
$ cargo test
$ cargo build
$ cd ../
```
```
$ cd colorid-wasm
$ wasm-pack build --release --target web
```
```
$ npm i
$ npm run http-server
```
`$ npm run cypress:open`
55,655 |
http://www.uigarden.net/english/why-do-people-become-attached-to-their-products
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
20,507,819 |
https://medium.com/@DavidMaidment/writing-a-2d-platform-game-engine-in-golang-2a83666c35f1
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
17,375,998 | https://github.com/kiranz/just-api | GitHub - kiranz/just-api: :boom: Test REST, GraphQL APIs | Kiranz
Just-API is a declarative, specification-based test framework for `REST` and `GraphQL` APIs. Users can test APIs without writing code, but they can also tap into code when they want to. It reads API test specifications from YAML files and runs them in serial/parallel mode. Test reports can be generated in several formats, including HTML and JSON.
In simple terms, users build a test suite by providing a set of request and response validation specifications in a YAML file. Each suite can have one or more specs. Just-API builds the request, sends it to the server and validates the response as per the specification. One can choose to validate any or all of the following:

- Status code
- Headers
- Cookies
- Response JSON body
- Response JSON schema

*or provide a custom JavaScript function to validate the response.*

Find more here
- Runs test suites in parallel/serial mode
- Supports all widely used HTTP methods
- Supports x-www-form-urlencoded requests, Multipart requests, File uploads
- Built-in Response Validation Constructs(Headers, Cookies, Status code, JSON body, JSON schema)
- Custom Response validator functions
- Supports running custom inline or module javascript sync/async functions
- Supports Hooks (Before All, After All, Before Each, After Each, Before Test, After Test)
- Custom suite configuration
- Chained Request flows
- Define/override Request path, query params, path params, headers, body at runtime
- Suite and test context for reuse
- Supports importing specs from one or more test suites
- Intrasuite and Intersuite spec dependencies
- Reusing test specification
- Retry failed tests
- Looping: Generate 'n' number of tests with a list
- Built-in HTML, JSON reporters
- Can generate reports in multiple formats for the same run
- Logging HTTP request/response data for failed tests
- Proper error reporting
- Can run tests matching with a given pattern/string
- Skipping tests with specification
- Disable or Enable redirections
- Reports test duration
- Allows user to plug-in custom reporters
To run just-api, you will need Node.js v10.x.x or newer.
`$ npm install just-api`
Following is a simple example showing usage of Just-API.
```
$ mkdir specs
$ vim specs/starwars_service.yml
```
Write the following suite in your editor:
```
meta:
name: Star Wars suite
configuration:
scheme: https
host: swapi.co
base_path: /api
specs:
- name: get Luke Skywalker info
request:
path: /people/1/
method: get
response:
status_code: 200
headers:
- name: content-type
value: !!js/regexp application/json
json_data:
- path: $.name
value: Luke Skywalker
```
Back in the terminal
```
$ ./node_modules/.bin/just-api
✓ get Luke Skywalker info (1216ms)
Done: specs/starwars_service.yml (Passed)
0 skipped, 0 failed, 1 passed (1 tests)
0 skipped, 0 failed, 1 passed (1 suites)
Duration: 1.3s
```
Following example tests a GraphQL API that returns Person info for a given name.
Create a YAML suite and run just-api.
```
meta:
name: GraphQL Starwars service
configuration:
host: swapi.graph.cool
scheme: https
specs:
- name: Get Details of a character
request:
method: post
headers:
- name: content-type
value: application/json
payload:
body:
type: json
content:
query: >
{
Person(name: "Luke Skywalker") {
name,
id,
gender
}
}
variables: null
operationName: null
response:
status_code: 200
json_data:
- path: $.data.Person.name
value: "Luke Skywalker"
```
When you need to test complex chained API flows, run dependencies in hooks to fetch pre-requisite data and pass it to actual test.
Following example shows how to run dependencies using a hook, get data and validating response with a custom validator function.
```
meta:
name: Starwars suite
configuration:
scheme: https
host: swapi.co
base_path: /api
specs:
- name: get R2-D2 info
request:
path: /people/3/
method: get
response:
status_code: 200
json_data:
- path: $.name
value: R2-D2
- name: search R2-D2 info
before_test:
run_type: inline
inline:
function: !js/asyncFunction >
async function() {
var response = await this.runSpec('get R2-D2 info');
var jsonData = JSON.parse(response.body);
this.test.query_params = { name: jsonData.name };
}
request:
path: /people
method: get
response:
status_code: 200
custom_validator:
run_type: inline
inline:
function: !!js/function >
function() {
var jsonData = JSON.parse(this.response.body);
var r2d2 = jsonData.results.find(result => result.name === 'R2-D2');
if (!r2d2)
throw new Error('R2-D2 not returned in search results');
}
```
Note: You can also place custom JS functions in a module and specify the function name, module path in YAML to import.
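For illustration, such a module could export a plain function with the same shape as the inline validators above; the exact YAML keys for pointing a spec at a module function aren't reproduced here (check the documentation), so treat this as a sketch of the module side only:

```js
// validators.js - sketch of a reusable custom validator module (hypothetical
// file name). The body mirrors the inline validator above; `this.response`
// is assumed to be available in module functions just as in inline ones.
module.exports.checkR2D2InResults = function () {
  const jsonData = JSON.parse(this.response.body);
  const r2d2 = jsonData.results.find((result) => result.name === 'R2-D2');
  if (!r2d2) throw new Error('R2-D2 not returned in search results');
};
```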
More advanced stuff can be done with Just-API. Documentation says it all. Take a look at Just-API Website for detailed documentation.
If you are looking to use Docker to run Just-API, you might want to checkout Just-API docker boilerplate here
Kiran [email protected]
If this project helps you in anyway, Please consider making a donation
NOTE: Node `v10.x` is recommended, since `v12.x` has a gulp compatibility issue.

- Install deps: `npm install`
- Install gulp: `npm install -g gulp`
- Install test files: `gulp`
- Install test API: `npm run install_testapi`
- Run test API: `npm run start_testapi`
- (in a new window) `npm test`

`test/cli/src/suites/[suite].spec.yaml` contains sample suites/specs. `test/cli/[suite].spec.js` contains JS chai/mocha test assertions about the sample suites/specs.
You may need to create/modify both a sample suite/spec and corresponding JS assertion
TODO: add linter/hinter/prettier or whatever spec is used
3,374,010 |
http://drdobbs.com/architecture-and-design/232200738
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
40,588,067 |
https://twitter.com/itsandrewgao/status/1798389860711166130
|
x.com
| null | null | true | true | false | null |
2024-10-12 00:00:00
| null | null | null | null |
X (formerly Twitter)
| null | null |
12,786,000 |
http://shadandjulia.com/never-use-upwork-ever/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,700,753 | https://github.com/github/view_component | GitHub - ViewComponent/view_component: A framework for building reusable, testable & encapsulated view components in Ruby on Rails. | ViewComponent
A framework for building reusable, testable & encapsulated view components in Ruby on Rails.
See viewcomponent.org for documentation.
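As a taste of the API, a minimal component pairs a Ruby class with a template and is rendered like any other view call; this sketch follows the basic pattern from the documentation (see viewcomponent.org for the authoritative version):

```ruby
# app/components/example_component.rb
class ExampleComponent < ViewComponent::Base
  def initialize(title:)
    @title = title
  end
end
```

```erb
<%# app/components/example_component.html.erb %>
<span title="<%= @title %>"><%= content %></span>
```

Rendered from any view (the block becomes `content`):

```erb
<%= render(ExampleComponent.new(title: "my title")) do %>Hello, World!<% end %>
```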
This project is intended to be a safe, welcoming space for collaboration. Contributors are expected to adhere to the Contributor Covenant code of conduct. We recommend reading the contributing guide as well.
ViewComponent is available as open source under the terms of the MIT License.
39,357,642 | https://github.com/ROCm/HIPIFY | GitHub - ROCm/HIPIFY: HIPIFY: Convert CUDA to Portable C++ Code | ROCm
HIPIFY is a set of tools that you can use to automatically translate CUDA source code into portable HIP C++.
Documentation for HIPIFY is available at https://rocmdocs.amd.com/projects/HIPIFY/en/latest/.
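To make the translation concrete, here is the flavor of rewrite the tool performs on a toy fragment (illustrative only; exact output can vary by tool version):

```cpp
// Before (CUDA)
#include <cuda_runtime.h>
cudaMalloc(&d_buf, n * sizeof(float));
square<<<blocks, threads>>>(d_buf, n);
cudaMemcpy(h_buf, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);

// After (HIP) - the runtime calls are renamed, while the kernel launch
// syntax carries over because HIP supports the same triple-chevron form.
#include <hip/hip_runtime.h>
hipMalloc(&d_buf, n * sizeof(float));
square<<<blocks, threads>>>(d_buf, n);
hipMemcpy(h_buf, d_buf, n * sizeof(float), hipMemcpyDeviceToHost);
```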
To build our documentation locally, run the following code.
```
cd docs
pip3 install -r .sphinx/requirements.txt
python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html
```
To build `CUDA2HIP` (CUDA APIs supported by HIP) documentation, run the following `hipify-clang` command. This builds the same content as Supported CUDA APIs.
```
hipify-clang --md --doc-format=full --doc-roc=joint
# Alternatively, you can use:
hipify-clang --md --doc-format=full --doc-roc=separate
```
To generate this documentation in CSV, use the `--csv` option instead of `--md`. Instead of using the `full` format, you can also build in `strict` or `compact` format.
To see all available options, use the `--help` or `--help-hidden` `hipify-clang` option.
32,737,173 | https://www.theregister.com/2022/09/06/sk_hynix_11b_fab/ | SK hynix to invest $11b building fab for next upturn | Dan Robinson
# SK hynix to plow $11b into South Korean fab complex
## Leadership believes path to recovery is through investing in times of crisis
Chipmaker SK hynix intends to invest nearly $11 billion over the next five years on a new manufacturing plant in South Korea, showing confidence in the future despite the current dip in demand for semiconductors.
The memory chip biz announced that it plans to break ground in October on the M15X (eXtension), a new semiconductor fabrication facility at the Cheongju Technopolis industrial complex, which SK said is being built "in preparation for future growth."
A total of ₩15 trillion (about $10.9 billion) will be spent on the site over the next half a decade to construct the fab and set up production facilities, the company said. The fab will be a two-story building equivalent in size to its existing M11 and M12 manufacturing plants combined.
The plant is expected to start production of memory chips in 2025, but according to Reuters, SK hynix declined to say whether these will be DRAM components or NAND flash memory for storage devices. Nor would SK indicate the plant's expected production capacity.
The announcement comes as demand for semiconductors is said to be weakening across multiple sectors, including PCs and smartphones, as consumers rein in spending in response to surging inflation and worsening economic outlook.
Last month, Korean chipmakers reported their first fall in shipments for almost three years, with figures from the Korean national statistics office showing that semiconductor shipments in July were down 22.7 percent compared with the same month last year.
But SK hynix is aware of this, saying it is investing in the M15X plant in preparation for the next upturn, which it expects to come in 2025.
Vice chair and co-chief executive Park Jung-ho said that the company had grown into a global supplier over the past 10 years by investing during times of crisis.
"As we look to prepare for the next 10 years now, I believe starting the M15X will be a first step to lay foundation for a solid future growth."
Demand for memory chips may be rapidly falling in the wake of the global economic slowdown and instability of the supply chain, but experts foresee that the business will start to recover steadily from 2024 and rebound in 2025, claiming that the memory business cycle has become less volatile in recent years.
SK hynix is already planning an additional semiconductor manufacturing plant, which is to be known as its M17 fab. The company said it will decide on the construction plan after reviewing the overall business environment, including possible changes in the semiconductor business cycle. ®
| true | true | true |
Leadership believes path to recovery is through investing in times of crisis
|
2024-10-12 00:00:00
|
2022-09-06 00:00:00
|
article
|
theregister.com
|
The Register
| null | null |
|
768,955 |
http://www.readwriteweb.com/archives/trim_to_go_open_source_community_owned.php
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,170,788 |
http://liveminutes.com/blog/2013/08/06/start-up-life-19-gifs-we-can-all-relate-to/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,765,145 |
http://www.youtube.com/watch?v=F7pYHN9iC9I&feature=youtu.be
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
40,706,994 |
https://arstechnica.com/gadgets/2024/06/apple-reportedly-plans-a-return-to-chasing-thinness-in-its-hardware-designs/
|
After a few years of embracing thickness, Apple reportedly plans thinner devices
|
Andrew Cunningham
|
Though Apple has a reputation for prioritizing thinness in its hardware designs, the company has actually spent the last few years learning to embrace a little extra size and/or weight in its hardware. The Apple Silicon MacBook Pro designs are both thicker and heavier than the Intel-era MacBook Pros they replaced. The MacBook Air gave up its distinctive taper. Even the iPhone 15 Pro was a shade thicker than its predecessor.
But Apple is apparently planning to return to emphasizing thinness in its devices, according to reporting from Bloomberg's Mark Gurman (in a piece that is otherwise mostly about Apple's phased rollout of the AI-powered features it announced at its Worldwide Developers Conference last week).
Gurman's sources say that Apple is planning "a significantly skinnier iPhone in time for the iPhone 17 line in 2025," which presumably means that we can expect the iPhone 16 to continue in the same vein as current iPhone 15 models. The Apple Watch and MacBook Pro are also apparently on the list of devices Apple is trying to make thinner.
Apple previewed this strategy with the introduction of the M4 iPad Pro a couple of months ago, which looked a lot like the previous-generation iPad Pro design but was a few hundredths of an inch thinner and (especially for the 13-inch model) noticeably lighter than before. Gurman says the new iPad Pro is "the beginning of a new class of Apple devices that should be the thinnest and lightest products in their categories across the whole tech industry."
| true | true | true |
Thinness is good, as long as it doesn’t come at the expense of other things.
|
2024-10-12 00:00:00
|
2024-06-17 00:00:00
|
article
|
arstechnica.com
|
Ars Technica
| null | null |
|
39,054,357 |
https://ec.europa.eu/commission/presscorner/detail/en/ip_24_282
|
Antitrust: Commission seeks feedback on commitments offered by Apple over practices related to Apple Pay
|
COMM; DG; UNIT
| null | true | true | false |
The European Commission invites comments on commitments offered by Apple to address competition concerns over access restrictions to a standard technology used for contactless payments with mobile devices in stores (Near-Field Communication – ‘NFC').
|
2024-10-12 00:00:00
|
2024-10-04 00:00:00
|
website
|
europa.eu
|
European Commission - European Commission
| null | null |
|
25,713,182 |
https://kinbiko.com/posts/2021-01-10-function-types-in-go/
|
Function Types in Go
| null |
A while ago, I saw a question in a Go developer Slack channel that went something like this (translated):
When would you want to define a function type like
`type MyFunc func(msg string) error`
? Do you have any examples?
I replied with a `http.HandlerFunc`
example (more on that below), but there were many developers smarter than me in that Slack channel, and I ended up learning a lot from the other answers to this question.
This article will cover what I took away from the ensuing discussion.
## Type definitions
Let’s start off by talking about what the `type`
keyword does.
The most common type definitions usually look something like:
```
type MyStruct struct {
// Fields...
}
```
In the case of a struct, or in the case of an interface it might look something like:
```
type MyDependency interface {
DoSomething(ctx context.Context) error
}
```
I daresay more than 90% of the types I define look like the above. However, you may create a type to identify a function signature as well, as defined in the original question. To better understand when we might want to create such a type, let’s have a look at the spec for inspiration. Among other things, the spec has the following to say about type definitions:
A type definition creates a new, distinct type with the same underlying type and operations as the given type, and binds an identifier to it. The new type is called a defined type. It is different from any other type, including the type it is created from.
A defined type may have methods associated with it.
The last sentence is key. Let’s see why!
### The elegance of `http.HandlerFunc`
Chances are you’ve played around with the `net/http`
package at least once when learning Go.
You may remember that many of the functions in `net/http`
revolve around a type called `http.Handler`
.
`http.Handler`
is a single-method interface:
```
type Handler interface {
ServeHTTP(ResponseWriter, *Request)
}
```
It may be tempting to believe that you have to create your own struct that implements the `ServeHTTP`
method, but the `net/http`
package provides a very convenient shortcut for us:
**Whenever you need a http.Handler you can write a http.HandlerFunc** instead.
```
type HandlerFunc func(ResponseWriter, *Request)
```
This works because **the HandlerFunc type implements Handler**.
In other words, the **HandlerFunc has a ServeHTTP(ResponseWriter, *Request) method**.
The implementation of this method invokes itself:
```
func (f HandlerFunc) ServeHTTP(w ResponseWriter, r *Request) {
f(w, r)
}
```
Now that we have an appreciation of how the `net/http`
package works, we can generalise this pattern further:
I can improve my package’s developer experience by defining a
`Func`
type that implements a key single-method interface in my package.
You can imagine this pattern being useful where a maintainer of a package regrets requiring an interface (where a function would do), but also doesn’t want to break backwards compatibility.
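For illustration, a hypothetical package could apply the same trick; the `Notifier` interface and every name below are invented for this sketch:

```go
package notify

// Notifier is the key single-method interface of this hypothetical package.
type Notifier interface {
	Notify(msg string) error
}

// NotifierFunc adapts a plain function to the Notifier interface,
// mirroring the http.HandlerFunc trick.
type NotifierFunc func(msg string) error

// Notify implements Notifier by invoking the function itself.
func (f NotifierFunc) Notify(msg string) error {
	return f(msg)
}
```

A caller that only has a function in hand can now pass `NotifierFunc(myFunc)` anywhere a `Notifier` is required, with no new struct needed.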
### Function types as lightweight interfaces
I think a lot of us have ingrained the pattern of requesting interfaces when constructing a new type.
This pattern scales well and is good practice in general, but as hinted at above there are cases where interfaces are overkill.
In this case, requesting a function might be a good solution.
By naming your function signature (by defining a function type) you avoid excessively long function signatures just like you would with interfaces.
Moreover, your users don’t have to cast their functions, as they must when converting a plain function to `http.HandlerFunc`; a minimal sketch of the contrast follows.
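Here is a runnable illustration using only the standard library (`helloHandler` is a placeholder name):

```go
package main

import (
	"fmt"
	"net/http"
)

func helloHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "hello")
}

func main() {
	// http.Handle takes the Handler interface, so a plain function must be
	// converted to the HandlerFunc type explicitly:
	http.Handle("/hello", http.HandlerFunc(helloHandler))

	// http.HandleFunc takes a function type directly; no conversion needed:
	http.HandleFunc("/hello2", helloHandler)
}
```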
After reflecting on this pattern I decided to change some interfaces into function types in a package I wrote.
The diff of this change may be more illustrative than what I can write in words in this blog post.
Note that this *is a breaking change*, but the changes required of users of your package are trivial in most cases.
To see why, remember that methods are functions too – and as such they can adhere to function type definitions.
Where users previously passed in an implementation of an interface, they now have to pass in the function of that implementation.
I.e. changing `someDep`
to `someDep.FuncImplementation`
when invoking your package’s function.
In fact – your users are now free to rename this function (or even make it private)!
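A hedged sketch of what the migration looks like from the user's side (every name here is invented):

```go
package main

import "fmt"

// The package now asks for a function type where it previously asked for a
// single-method Logger interface.
type LogFunc func(msg string)

// Process is the package API; it takes the function type directly.
func Process(log LogFunc) {
	log("processing")
}

// The user's existing implementation of the old interface still works,
// because methods are functions too.
type stdoutLogger struct{}

func (stdoutLogger) Log(msg string) { fmt.Println(msg) }

func main() {
	dep := stdoutLogger{}
	// Previously: Process(dep). Now the user passes the method value instead:
	Process(dep.Log)
}
```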
### Use function definitions to create default arguments
We can utilize methods on function types when the arguments of a function are almost always the same:
```
type MyComplicatedFunc func(a string, b int, c *someReallyComplicatedStruct, d onlyThingThatMatters)
func (f MyComplicatedFunc) invoke(d onlyThingThatMatters) {
f("some default string", 4321, &someReallyComplicatedStruct{ /* values that don't change. */}, d)
}
```
This should make invoking this function with the ‘default’ arguments internally in the package very simple.
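Internal usage might then look something like this, continuing the sketch above (names are still invented):

```go
// realWork is the full-argument implementation assigned to the function type.
var realWork MyComplicatedFunc = func(a string, b int, c *someReallyComplicatedStruct, d onlyThingThatMatters) {
	// ... do the actual work ...
}

// Callers inside the package supply only the argument that matters.
func doWork(d onlyThingThatMatters) {
	realWork.invoke(d) // every other argument takes its default
}
```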
Note: This is a neat trick, but just because you can doesn’t mean you should. I don’t really endorse this pattern, and haven’t really seen it used in production code anywhere.
### Use type aliases for documentation
Occasionally, a function parameter could use some additional documentation (including when the parameter itself is a function).
You **could** define a function type in order to attach some additional godocs; however, I recommend using type aliases in this case.
The only difference is adding a `=`
between the type name and its definition.
```
type MyComplicatedFunc = func(a string, b int, c *someReallyComplicatedStruct, d onlyThingThatMatters)
```
The difference between type aliases and type definitions is outside the scope of this article, but one key result is that you cannot attach methods to this type. Type aliases emphasise the intent to only add documentation. Additionally, since this contrived example shows a very long function signature, you can see how type aliases (and type definitions) can make code shorter and more readable.
| true | true | true |
A while ago, I saw a question in a Go developer Slack channel that went something like this (translated): When would you want to define a function type like type MyFunc func(msg string) error? Do you have any examples? I replied with a http.HandlerFunc example (more on that below), but there were many developers smarter than me in that Slack channel, and I ended up learning a lot from the other answers to this question.
|
2024-10-12 00:00:00
|
2021-01-10 00:00:00
| null |
article
|
kinbiko.com
|
kinbiko.com -- Roger Guldbrandsen
| null | null |
38,347,809 |
https://www.abortretry.fail/p/the-history-of-os2
|
The History of OS/2
|
Bradford Morgan White
|
The IBM Personal Computer model 5150 was released on the 12th of August in 1981 for an introductory price of $1565 (roughly $5285 in 2023). For this rather large sum of money the buyer received an Intel 8088 CPU clocked at 4.77MHz, 16K RAM, a tone generator, a 5.25” floppy disk drive, a 63.5W switching PSU, five internal 8 bit ISA slots, and MS-DOS. This machine could be configured with more hardware and higher pricing. While more powerful equipment was available to IBM for use in building this computer, the market was not positioned for it. The microcomputer market was 8 bit, and those business professionals using micros were typically using either an 8080/Z80 running CP/M or an Apple ][ with its 6502 for VisiCalc, while many home micros were used either with BASIC or with bootable software. Importantly for IBM, MS-DOS had some limited source compatibility with CP/M thanks to source code translators for Intel’s 8080 and 8088 as well as MS-DOS’s nature as an enhanced CP/M clone of sorts. These factors all amounted to a 16 bit machine that could utilize 8 bit components, and an operating system that was easy on developers.
Much of this design was predicated on some rather incorrect assumptions by IBM. They felt that small businesses would buy the majority of 5150s, while the large businesses would continue to use mainframes and dumb terminals. They thought that a few departments in large businesses would put the 5150 to use for local, non-connected work, and they felt that the 5150 would be used for a single task throughout the entire workday (like spreadsheets or word processing). Of course, IBM also thought that they’d only sell around a quarter million 5150s throughout a 5 year product lifespan. This final thought was found to be incorrect before the official release of the machine. On the 11th of August in 1981, IBM held a preannouncement showing of the IBM PC model 5150 the annual ComputerLand Dealers of North America conference at the Toronto Exhibition Center. The dealers present placed orders that almost hit that quarter million figure in just that single day. Another quarter million orders followed on the 12th from other dealers.
These incredible sales figures made the limitations of the 5150 painfully apparent. People wanted more. IBM quickly bumped the base RAM to 64k, and the company released the IBM XT model 5160 on the 8th of March in 1983. The XT further increased base RAM to 128K, added three more expansion slots, added a 10MB HDD as standard, and removed the five pin DIN cassette interface. This model was followed by the XT 286 model 5162 and the AT model 5170. Obviously, IBM PC compatible (more or less) hardware is with us to this day, but MS-DOS is not (except in hobbyist machines or emulators). It was in the Intel 80286 that an opportunity for improvement was found.
With the Intel 8088 or 8086 and the CPUs before them, all memory addresses that any software used were mapped to *real* physical addresses. This isn’t precisely so with an 80286. At boot, an 80286 isn’t too different from an 8086. It’s a 16 bit CPU that uses real addressing. It is, however, faster and it can access more memory (1MB+64k on the 80286 vs 640K on the 8088/8086). The major difference was that after boot, the 80286 could be switched into protected mode. In this mode, memory access wasn’t a battle royale. The CPU had four *rings* of memory access numbered zero to three where zero is kernel mode, one and two are usually used by device drivers, and three is user mode. When memory is accessed, the CPU uses a descriptor table loaded into memory to resolve the location, keeping applications from competing for the same RAM. In protected mode, a 286 could access 16MB of RAM. All of this meant that user code couldn’t access the OS kernel or device drivers, more memory was available, and memory was cleanly separated between running applications.
With the launch of the IBM AT and its multitasking capable CPU, IBM promised a multitasking operating system for the system. For this operating system, IBM turned to their software partner Microsoft once again. Bill Gates, however, really didn’t want to make a multitasking operating system for the 80286 which he famously declared to be a “*brain dead*” chip. The 80386 was released just a year after the release of the IBM 5170 and was an improvement in every way over the 80286. Yet, IBM has *Corporate Directives* that govern its behavior. Corporate Directive 2 from 1956, signed by Thomas J. Watson Jr, says that when IBM makes a promise to customers it will keep that promise regardless of the cost. IBM didn’t care whether or not the 80286 was a good product; it had promised its customers an operating system that would utilize the features of the 286 and so it would make that OS.
The IBM PC line was under the control of the Entry Systems Division within IBM, and it was also this team working with Microsoft on the operating system. As noted in my articles on Windows 1, Windows 2, and Windows 3, Microsoft was simultaneously working on Windows while attempting to keep IBM’s OS their primary concern. IBM’s system was an honest effort by Microsoft, and it was only the hard work and dedication of a few employees that kept Windows alive. The joint development agreement that created this new operating system was entered into on the 10th of June in 1985. For both IBM and Microsoft, a multitasking and protected mode DOS system was attempted under various names. The one that stuck was CP/DOS from within IBM (at least as far as I can tell from the JDA). While the product stuck, the name did not. In the mainframe world of IBM, there was the System/360 which ran OS/360. In an attempt to combat both clone makers and the technical limitations of the PC line, IBM was working on the PS/2 (PS standing for Personal System) line of computers which would run OS/2.
From June of 1985 to April of 1987, OS/2 was always *coming*. The sense that I get from folks within Microsoft working on various teams (at least from what they’ve written) is that OS/2 had become a bit of vaporware much like Windows 1.0 was before it. It was certainly the focus of the systems group, but there were some culture issues between IBM and Microsoft that made development quite difficult and strained. Still, despite the troubles made apparent during development, OS/2 1.0 was announced on the 2nd of April in 1987 with release set for the fourth quarter of that year, a deadline that was met with the release taking place in December of 1987.
This first release was text mode only. The OS did feature an API for controlling video display as well as keyboard and mouse routines. These features circumvented the need for BIOS and direct hardware access which wouldn’t have been possible on the 286 in protected mode. The extended edition (pictured above) also included a database engine, DBM, descended from IBM’s DB/2 which survives as DB2 LUW. It also added Communications Manager which provided multiple 3270 and 5250 emulated sessions for IBM mainframe customers. OS/2 1.0 required a minimum of an 80286 with 1MB of RAM. It supported FAT filesystems with a maximum partition size of 32MB. It provided a 16 bit, protected mode, multi-threaded, text mode, preemptively multitasked operating system with segmented virtual memory support. As a bonus, it supported compatibility with PC-DOS, interprocess communication (shared memory, pipes, queues, semaphores), and dynamic linking. While multitasking, this first version allowed only a single application to be on-screen at any given time. Switching between them was done via `ctl + esc`
which would bring the Program Selector back to the foreground.
MS-DOS was outrageously popular and successful in the market place. The IBM PC and its clones effectively killed the entire market for CP/M machines. This was great for both IBM and Microsoft, but it made future operating system development incredibly difficult. To have any success at all, OS/2 needed to support MS-DOS applications. So, when running DOS applications, the user was greeted with a single, full screen, foreground DOS session that would not run in the background. OS/2 applications would remain active in the background, but DOS applications could not. The setup meant that the CPU was frequently switching between protected and real mode, so device drivers had to support dual mode operation to limit the number of switches made for performance concerns. To make all of this work on 6MHz 286, both the kernel and the DLLs were written in assembly with optimizations both for speed of execution and for size.
There is a difference between the Microsoft and IBM versions of OS/2 that clearly stems from Bill Gates’ feelings about the 286. In the Microsoft release, should the kernel detect an Intel 80386, MS-DOS real mode applications would run via instructions to switch modes between real and protected. In the IBM version this didn’t happen. IBM only implemented the method employed for the 80286 which was to triple fault the CPU, trigger a shutdown cycle, have the motherboard reset the CPU, and have BIOS skip POST and jump to a specified memory address immediately following the return of CPU execution. For retro enthusiasts, try to find Microsoft’s OS/2 for your 386.
In November of 1988, OS/2 1.1 was released. This was big despite being a point release. Codenamed Trimaran, 1.1 brought the Presentation Manager to OS/2, increased the supported partition size, added dual boot support, and bumped the RAM requirement to a 3MB minimum. Despite having windows allowing multiple applications to be visible at one time, MS-DOS was still a fullscreen affair.
OS/2 version 1.2 was released in October of 1989 bringing enhancements for the Presentation Manager, adding the High Performance File System (HPFS), adding REXX, and adding installable filesystem support. Version 1.3 was released in December of 1990, and was the first release to be developed by IBM without Microsoft. This release focused on RAM usage reduction and cut the requirement to 2MB. This release also added support for Adobe Type Manager fonts.
Much like Windows 1 and 2, OS/2 wasn’t selling particularly well. Partially, this was due to system requirements. RAM and disk space were both extremely expensive in the 1980s, and OS/2 required more than most systems had at the time. The OS came on 1.44MB floppy disks as well, which were still somewhat new (much other software at the time had free 5.25” disks for which one could mail in a request if it didn’t ship on 5.25”). To make matters in the market worse, OS/2 sold at a price of $325 (around $843 in 2023) for the Standard Edition, or $795 (around $2063 in 2023) for the Extended Edition. If one were to have purchased 1.0, they’d at least be able to get an upgrade to 1.1 for free. This pricing isn’t unwarranted given that OS/2 cost nearly a billion dollars per year to develop, which could put total cost of development at around two billion (or around $5.7 billion in 2023). For IBM, the low sales volumes for OS/2 were coupled with low sales volumes for IBM’s PS/2 machines. These machines had switched to a new BIOS and to IBM’s proprietary MicroChannel Architecture bus (as opposed to ISA). In the minds of those at Microsoft, it’s rather easy to imagine that they felt tied to a dying company, with IBM having sold just under two hundred thousand copies by November of 1989.
Microsoft invested heavily in marketing Windows 3, and Windows 3 was a massive success. Microsoft’s early Windows products were neither as stable nor as advanced as OS/2, but they were cheaper both in dollars and in system requirements. Additionally, Microsoft rightly viewed other software developers as absolutely crucial to the success of their platform while IBM thought of that same group as just more customers. Microsoft’s marketing game was also substantially better. On a more subjective note, Windows was far easier to use. In Windows, setting up a printer was a simple two step process: install the driver, set any configuration in Control Panel. In OS/2 1.2, a user must install the device driver for the printer, set up a printer queue, create a printer object, associate the device driver with the object, associate the queue with the object, set up the COM port configuration for the serial/parallel printer, use the spool command to direct output to the port, and finally set any optional settings desired.
Initially, IBM saw Microsoft as a valuable partner, and they offered to help Microsoft with a promotional rollout of Windows 3 in exchange for the rights to the software itself (in stark contrast to the DOS deal). Bill Gates didn’t like this at all. His refusal was felt as a sort of betrayal within IBM which led directly to a corporate divorce between the two companies. Their settlement allowed for cross compatibility, the use of each others software technologies developed through September of 1993, and a payment of an undisclosed sum to IBM from Microsoft. Microsoft pivoted from OS/2 to NT which was not included in the divorce agreement. Microsoft’s new system would have more similarity to VMS than to prior OS/2 releases, and IBM would make a wholly new OS/2 2.0 that was fully 32 bit.
OS/2 2.0 was released in April of 1992 (the same month as Windows 3.1) at a price of $195 (about $420 in 2023) while Windows was around $45 dollars cheaper. This was a 32 bit, protected mode, multi-threaded, preemptively multitasking operating system with paged virtual memory capable of running software for OS/2, for Windows 3.0, and for DOS (version 5). This time multiple DOS applications could be run simultaneously in the Workplace Shell (WPS). This version required a minimum of a 386SX and 4MB of RAM. Version 2 shipped with a boot manager for those wanting to multiboot, and it supported object orientation with IBM’s System Object Model. The Workplace Shell was a major improvement over the Presentation Manager of the prior release, and this GUI involved a small deal with Commodore for parts of the look and feel of the system. In return for some of Commodore’s design, Commodore received a license for REXX which was seen in AmigaDOS 2.0. The WPS was object oriented to a degree that still seems cool. Want to print? Drag it to the printer. Want to set a color? Drag the color from the color palette. This version was marketed as *a better DOS than DOS, and better Windows than Windows*. This is somewhat more than marketing hype. In OS/2, Virtual DOS Machines were preemptively multitasked and Windows applications were run in separate VDMs. As a result, any one DOS or Windows application being run could not interfere with any other DOS or Windows application. This is something that was not true at all for DOS or for Windows themselves. In one of the few good marketing moves for OS/2, IBM installed Microsoft Flight Simulator and started a dozen instances of it concurrently, and they all operated flawlessly.
OS/2 2.0 was more successful than its predecessors. Lotus, WordPerfect, Borland, and Novell along with roughly 250 others all declared their intent to support OS/2 with their products. By October of 1992, IBM had shipped over one million seven hundred thousand copies. While not even in the same ballpark as Windows, this was a serious improvement.
IBM released OS/2 2.1 in May of 1993. This release improved support for non-IBM hardware, added Advanced Power Management support (APM), added PCMCIA support, the Multimedia Presentation Manager/2 (MMPM/2) became included by default, and Windows 3.1 support was added. Version 2.11 SMP shipped in July of 1994 with support for symmetric multiprocessing with support for up to 16 CPUs, and this version was only shipped with SMP hardware. There was a non-SMP version of 2.11 but this was mostly a bugfix release. By the end of 1993, IBM had secured just shy of four percent of the desktop market, but it had begun to claw its way into both the server market and the embedded market.
OS/2 Warp version 3 was released in October of 1994. There were once again updates to the GUI; the most visible of which was a floating dock that reminds me of CDE. This version also brought internet connectivity to OS/2, lowered system requirements, broadened hardware support, and shipped with a basic office suite called IBM Works. This release shipped in two versions. One was nicknamed “Blue Spine” and was a full installation. The cheaper version was “Red Spine” which was intended as an upgrade from Windows 3. Warp also bumped the Windows compatibility to version 3.11. Being an IBM product, OS/2 was also ported to PowerPC with Warp.
Windows 95 completely dominated the OS market following its release in August of 1995. As a result of Microsoft’s clear success, OS/2 Warp 4 didn’t get as much attention as it otherwise would have when it launched in September of 1996. Warp4 improved most areas of the system and it included Java and the JDK, VoiceType, OpenGL, OpenDoc, Win32 compatibility, and Netscape among other packages. IBM also released several dedicated server products: Warp Server (February of 1996) and Warp Server Advanced (September of 1996), WorkSpace on-Demand (November of 1997), and Warp Server for e-Business (April of 1999). By the end of 1996, OS/2 had gained nearly thirteen percent of the server market. Having reached a height of almost five percent of the desktop market, by the end of 1996 OS/2 stood at just over three percent of the desktop market. While OS/2’s dedicated server versions were technically excellent and a good value, NT was rapidly overtaking OS/2, and Linux was rising as well.
The last version of OS/2 was IBM Warp version 4.52 released in 2000 with sales ceasing in 2001. In these later years, most sales were to banks and insurance companies where IBM’s mainframes were common. This customer base leads to some rather amusing finds up to the present day.
IBM doesn’t like to break promises as noted earlier. To this end, IBM granted the rights to continue sales and development of OS/2 to Serenity Systems International under an OEM license. This did not, however, give Serenity Systems full access to OS/2 sources or to internal developer documentation at IBM. While improvements to the product were made, they weren’t enough to make OS/2 competitive in the market. Serenity’s product was named eComStation, and it mostly served IBM’s largest OS/2 customers while they endeavored to find replacement products.
In February of 2015, Arca Noae bought Serenity Systems. Under the development and management of Arca, OS/2 (now named ArcaOS) has seen some modernization. For example, Arca added ACPI, AHCI, NVME support, improved USB support, a package manager that utilizes RPMs, SMBv4, Kerberos, and higher screen resolutions (the limit is 65535x65535x32bpp).
Ultimately, OS/2 didn’t fail for any technical reason. OS/2 was a great product. This was even felt by many within Microsoft prior to the divorce, and OS/2 was used internally for the development of other Microsoft products. The failure of OS/2 was down to IBM’s culture and business practices. IBM didn’t put enough marketing behind OS/2, and they also failed to see the value of independent software developers. Adding to this that IBM’s share in the desktop hardware market was dwindling due to missteps with PS/2 and MCA, IBM ought to have tried to get OEMs onboard to have OS/2 preinstalled and they failed to do so. IBM was a company continuing to operate as though it were the computer market despite this not having been the case for a while.
| true | true | true |
Getting a Divorce
|
2024-10-12 00:00:00
|
2023-10-02 00:00:00
|
article
|
abortretry.fail
|
Abort Retry Fail
| null | null |
|
17,652,504 |
https://medium.com/@jproco/what-i-tell-people-who-ask-if-they-should-work-for-a-startup-de3a8d4fd819?source=friends_link&sk=341d36a689339627a4603c89ee2e31c8
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
2,142,313 |
http://www.nybooks.com/articles/archives/2011/feb/10/portrait-artist-young-man/?pagination=false
|
Portrait of the Artist as a Young Man | Geoffrey O’Brien
|
Geoffrey O’Brien; GEOFFREY O’BRIEN
|
James Kaplan’s *Frank: The Voice* is authentically a page-turner, a strident tabloid epic constructed out of facts—or more precisely out of the disparate and sometimes contradictory testimony of scores of participants in Frank Sinatra’s early life. There is certainly enough testimony to choose from; pieces of Sinatra, variously skewed and distorted, are scattered all over the latter part of the twentieth century. But they hardly converge into a unified portrait: confronted with the multitude of Sinatras that one must attempt to resolve into a single plausible person, there is a gathering sense of unsettling dissonance quite at odds with the perfected harmonies of his greatest recordings.
Kaplan limits himself to the first third of Sinatra’s trajectory, the rise and fall and resurrection preceding the long run of now-classic albums for Capitol, the raucous heyday of the Rat Pack, and the final enthronement as Chairman of the Board. His book is thus a Portrait of the Artist as a Young Man—following Sinatra at close range up to the moment when he retrieves a faltering career by winning the supporting actor Oscar for *From Here to Eternity*—except that the subject refuses to sit still long enough to provide a stabilized image.
The broad plot line has the advantage of an arc of compelling simplicity: a young man emerges out of nowhere driven by limitless desire to succeed, gets everything as if by magic, comes close to losing it all, then gets it back with interest. In outline it is a triumphal narrative with the same appeal as the life of Caesar or Napoleon, with the further advantage that in the realm of show business such a story can have a happy ending, concluding not with exile or assassination but with a legacy’s eternal perpetuation, as The Voice continues to permeate the world through reissued recordings.
When a columnist refers to Sinatra as a “demigod,”1 the facetiousness may mask a genuinely worshipful emotion; Sinatra’s stature as emblem of uncontestable supremacy and durability continues to make him a mythic hero. It was not nothing to have occasioned the ubiquitous unattributed bit of wisdom: “It’s Sinatra’s world, the rest of us just live here.” The conscious aim of his existence may well have been to become the only individual who could elicit such an encapsulation. Being the best singer ever wasn’t the half of it; Sinatra as folk hero is the man who remolds the world according to his own desires, breaking all rules and laying down new ones at whim, and finally, with the supreme self- indulgence of one who cannot be touched, epitomizing and even mocking his own cliché in endless encores of “My Way.”
But triumph is not precisely Kaplan’s subject. He wants to get inside the headlong rush of Sinatra’s career, and find the inner connections in a life ranging from unparalleled lyrical expression to unpredictable violent explosiveness; he tries to slow down the familiar show-biz montage long enough to wrest some sense of actuality from anecdotes many of which have been told and retold many times over. What he gets to—by means of a piling up of day-to-day, night-to-night detail that yields an almost neurological realism—is a core of discomfort and anxiety, whose outward manifestation as often as not was a barely restrainable impulse to control, if not to attack. Sinatra, a solitary who ruled crowds by seductive magnetism and surrounded himself with courtiers, had once been an adolescent alone in his room listening to Bing Crosby on his Atwater-Kent, and imagining how he would conquer the world through the power of his voice. Even he, though, could hardly have imagined the riotous effect he would have on the teenage girls of America, or that it was his fate to usher in the era of a new sort of mass idolatry.
Kaplan shows the young Sinatra as an almost infinitely ambitious neat freak whose sense of order ultimately extended to every aspect of his life, present and future: “Virtually every move he made in his life had to do with the furtherance of his career.” The smallest uncertainty made things uncomfortable both for him and for those around him: “When he was afraid, he liked to make others jump.” The power of the talents he discovered in himself—the talent for singing and the far greater talent for selecting, understanding, and interpreting what he sang—was harnessed to a mass of suspicions, resentments, and self-protective rages. In the recording studio he would savor the disciplined release of an unfathomably powerful force: a perfectionist who often enough achieved something like perfection, he made it his business to create a situation as close as he could get to total control. Elsewhere discipline was erratic and situations often got out of hand. “Frank’s entire life,” Kaplan sums up, “seemed to be based on the building and the release of tension. When the release came in the form of singing, it was gorgeous; when it took the form of fury, it was terrible.”
The book’s tone often approaches the melodramatic, but it is melodrama honestly come by. This was a life lived, at least in these less guarded early years, as if to leave just such a gaudy record behind, as the producer Mitch Miller (one of many colleagues whom Sinatra finally put firmly in his place) once suggested:
Frank was a guy—call it ego or what you want—he liked to suffer out loud, to be dramatic. There were plenty of people, big entertainers, who had a wild life or had big problems, but they kept it quiet. Frank had to do his suffering in public, so everyone could see it.
If Sinatra, despite many striking screen performances, from *Eternity*’s Maggio to *The Manchurian Candidate*’s Major Marco, never quite created a movie persona equal to his gifts, it was because his real movie was his life, a spectacle whose excesses, emotional swings, casual cruelties, and hair-trigger outbursts went well beyond anything Hollywood was likely to attempt.
And he did not live it alone: while the book’s central focus might be taken as the difficulty of being Frank Sinatra, it was, by Kaplan’s reckoning, clearly not much easier being Tommy Dorsey (“ever restless, insatiably ambitious”) or Buddy Rich (“volatile, egomaniacal”) or Lana Turner (“an empty shell of a human being”) or Nelson Riddle (“a dour, caustic, buttoned-up Lutheran”) or Jimmy Van Heusen (“foul-mouthed, obsessed with sex and alcohol”) or, least of all, Ava Gardner, who when she enters the scene takes over the book pretty much the way she seems to have taken over Sinatra’s psyche.
We find ourselves—not for the first time, and surely not the last—deep in the phantasmagoric realm of twentieth-century stardom, wandering among dream-fabricators whose own lives seem dreamed. Frank Sinatra dances with Lana Turner and stares at Ava Gardner (freshly divorced from Artie Shaw following her earlier divorce from Mickey Rooney) while she dances with Howard Hughes. Among all these adepts of self-invention, Sinatra triumphs by the consciously directed energy and single-minded calculation he brings to the task—until (in the legend that his life has become) he comes up against the insuperable Ava. That at least is one way to read the evidence; the other would be to imagine Sinatra maneuvering always on the edge of chaos, as bewildered as any onlooker by the gale force of his early trajectory.
Kaplan starts the story straight out of the womb, with Sinatra’s difficult birth, a birth that left permanent scars and deformities (a misshapen left ear) and that both he and his mother evidently barely survived: “They just kind of ripped me out and tossed me aside,” he once confided to a lover, still nursing resentment at being neglected by the doctor who was struggling to save his mother’s life. The trauma of birth is succeeded by the trauma of an alternately abusive and coddling mother-and-son relationship—she liked to dress him in Fauntleroyish clothes when not beating him with a stick—that Kaplan sees as the “textbook” source of Sinatra’s “infinite neediness, an inability to be alone, and cycles of grandiosity and bottomless depression.”
We are given a quick and monstrous sketch of Dolly Sinatra, who is made to loom implicitly in gargoyle fashion over all her son’s subsequent doings, having implanted in him, by her tyrannical capriciousness and relentless manipulation, a permanent distrust of intimate relationships. Within her Hoboken community, as midwife, sometime abortionist, local operative of the Democratic party, hanger-on of mob-connected bootleggers at the bar she opened with her husband Marty in the 1920s, she displayed the same ferocity of ambition that Sinatra would bring to bear on a grander scale. Marty Sinatra, by contrast—an illiterate Sicilian-born prizefighter whose fighting career petered out early—impresses most by his absence and silence. “I’d hear her talking and him listening,” Sinatra would recollect. “All I’d hear from my father was like a grunt…. He’d just say, Eh. Eh.”
However tough Hoboken may have been, Sinatra, thanks to his mother, enjoyed a fairly privileged status: he had a charge account at a major department store, an extensive wardrobe, and, at eleven, his own bedroom and his own state-of-the-art radio. By his own account he had, as well, access to a different kind of privilege: “Late in life Sinatra told a friend that as a child he had heard the music of the spheres.” That inner concert was supplemented before long by the big-band jazz that was flowering just as he entered adolescence. He was permeated by the work of all those musicians—by Count Basie, Duke Ellington, Art Tatum, Fats Waller, Tommy Dorsey, and perhaps most importantly Billie Holiday—just as later on (after being impressed by Jascha Heifetz’s violin technique) he would soak up the classical music that he collected and, it seems, listened to with the same fanatical attention to detail he brought to everything musical. (His musical knowledge was acquired by ear, and though he made recordings as a conductor he never really learned to read music.) He did not finish high school; he made no serious attempt to work at anything other than singing, and claimed to have had from the start an absolute conviction that he would succeed. After his triumphant opening at New York’s Riobamba club in 1943 he told a young reporter: “I’m flying high, kid. I’ve planned my career. From the first minute I walked on a stage I determined to get exactly where I am.”
Stories like this often gain some of their dramatic effectiveness from the recounting of early obstacles and setbacks, but in truth Sinatra’s career at its outset has the monotony of what in retrospect seems like nearly unopposed success. He did the requisite scuffling, but by nineteen he was already appearing on Major Bowes’s radio amateur hour as a member of the short-lived Hoboken Four, and then touring under the major’s aegis; at twenty-three he joined Harry James’s band (insisting on keeping his own name when James wanted to call him Frankie Satin); the same year he left James for Tommy Dorsey, of whom he said: “The only two people I’ve ever been afraid of are my mother and Tommy Dorsey.” In 1940 he had his first number one record (“I’ll Never Smile Again”) and in 1942, having in turn left Dorsey, he made what turned out to be a legendary opening as a solo act at the Paramount. Jack Benny, who introduced him on stage, described what happened: “I thought the goddamned building was going to cave in. I never heard such a commotion…. All this for a fellow I never heard of.”
At every step Sinatra was learning from everyone he encountered, perfecting himself as a performer, and savoring, on stage and off, the adulation of what his publicist George Evans called “a great herd of female beasts…all in heat at once.” (It was Evans who helped orchestrate the apparently random hysteria that swept over Sinatra’s audiences, and who certified his singularity by tagging him as The Voice.) He became a technician of breath control; a deeply informed student of the American songbook, with a singular capacity to discern hidden beauties in outdated or discarded material; a song interpreter who actually (in an era of pretty voices crooning lyrics presumed to be interchangeable) cared about what the words meant and how each played its part in a narrative arc.
A 1939 review in *Metronome* remarked on “the pleasing vocals of Frank Sinatra, whose easy phrasing is especially commendable,” but of course the phrasing had not been easily arrived at. He had realized early on that the relaxed fluidity of his role model Bing Crosby was not a style he could ever duplicate. His singing emerged from difficulties confronted. The carefully nurtured clarity of his diction, the precise accentuation by which he marked out the syntactic logic of the lyrics, insisted from the start that the words actually be heard. His singing could never be background music: something was being imparted, in a seamless wedding of word and tone, and attention had to be paid. He could afford to go easy on the schmaltz, and the emotional content of the songs came through all the more clearly.
It was as if he removed all obstacles separating the song from the listener. He did not so much express himself as expose, with objectivity and an almost oppressive clarity, the full measure of what the song had in it. Once he had sung a song—“Begin the Beguine” or “Autumn in New York” or “The Song Is You” or “A Foggy Day” or “Violets for Your Furs”—it stayed sung, in just his way, with every breath and rhythmic accent and knowing slur and catch in the throat permanently attached to it.2
The war years were Sinatra’s moment of early glory; he dominated the record charts, had his own radio programs where he hawked Vimms Vitamins and Lucky Strikes; signed a generous five-year contract with MGM; and headlined stage performances in front of ever wilder audiences, culminating in the so-called Paramount Riot of October 1944, a year in which he earned the then-enormous income of $84,000. George Evans’s publicity machine conjured up an appropriate storybook image of Sinatra as boyishly enthusiastic young husband and father, enjoying a cozy domestic life with his teenage sweetheart Nancy Barbato and their two children Nancy and Frank Jr. (a third, Tina, would be born in 1948). His 1945 hit “Nancy with the Laughing Face,” by Phil Silvers and Jimmy Van Heusen, became a convenient emblem of family happiness, whether its object was taken to be his wife or his daughter, and even though, as Kaplan reveals, the song was originally called “Bessie with the Laughing Face” and was retitled to curry favor with Sinatra.
In any event his private life was of a rather different character; he was rarely home and had established the pattern of sexual compulsiveness that provides the ground bass for Kaplan’s narrative, a compulsiveness fully in keeping with Sinatra’s insomnia, his fear of boredom, his fear of being alone. “In truth,” Kaplan writes,
there were probably even more affairs than the hundreds he’d been given credit for…. His loneliness was bottomless, but there was always someone to try to help him find the bottom.
Booze helped too—and books. He had developed the habit of reading on the long bus trips between gigs, and had by the mid-1940s evolved into a left-leaning intellectual of sorts, attentive to public affairs and a spokesman for racial and religious tolerance. His gorgeous interpretation of Earl Robinson’s “The House I Live In,” with its Popular Front resonance (“But especially the people—that’s America to me”), overwhelms by its impression of utter sincerity—but then, so does his interpretation of “Nancy with the Laughing Face.”3
In the midst of a life of desperate restlessness Sinatra managed to project, as needed, whatever personality the situation required. A *New York Times* critic, Isabel Morse Jones, was biased against Sinatra until she interviewed him in 1943 and came away writing things like: “He is just naturally sensitive…. He is a romanticist and a dreamer and a careful dresser and he loves beautiful words and music is his hobby. He makes no pretensions at all.” The sometimes vicious columnist Louella Parsons was persuaded that Sinatra was “warm, ingenuous, so anxious to please.” Sinatra had the ability to convince nearly everyone of his sincerity, let them down over and over, and still win them back. His relentlessly self-serving behavior could provoke disappointment tinged with disbelief, except among those intimates who knew him best and whose task it was to cater to his mood swings. They called him The Monster.
Sinatra’s irresistible ascent, coupled with the unlovely portrait Kaplan paints of him, begins to generate a certain monotony until the moment in the late 1940s when he starts to lose altitude, to the delight of so many who had been repelled by his perceived arrogance, skeptical of his having gotten out of the draft thanks to a perforated eardrum, and appalled by, or envious of, the “squealing, shouting neurotic extremists who make a cult of the boy.” The latter characterization was by the columnist Lee Mortimer, who would earn a role in Sinatra’s career crisis when Sinatra, enraged by repeated jabs in Mortimer’s column, assaulted him outside Ciro’s restaurant in Los Angeles in April 1947, administering an apparently fairly ineffectual beating while calling him (in one version) a “degenerate” and “fucking homosexual.”
The negative publicity attendant on this incident was nothing compared to the cloud cast by Sinatra’s presence in Havana during the notorious Mafia conclave at the Hotel Nacional in February of the same year. His unconcealed socializing with a group including Willie Moretti (an old New Jersey acquaintance of Sinatra’s) and Lucky Luciano caught the attention of the journalist Robert Ruark, who proceeded to draw national attention to Sinatra’s
curious desire to cavort among the scum…. Mr. Sinatra, the self-confessed savior of the country’s small fry…seems to be setting a most peculiar example to his hordes of pimply, shrieking slaves.
(Kaplan makes somewhat less of such mob ties than other writers have done, chalking them up in large part to the adulation of a lifelong wannabe, while acknowledging the likelihood of many favors given and received.)
Bad news piled up. A trade journal asked in 1948: “IS SINATRA FINISHED?” He was having vocal problems and at times losing his voice altogether. Audiences at his shows had already been shrinking; his record sales were off; newcomers like Perry Como and Eddie Fisher were eclipsing him. His recent movies had been bombs, and when *The Frank Sinatra Show* debuted on CBS in 1950, *Variety* took note of its “bad pacing, bad scripting, bad tempo, poor camera work and overall jerky presentation.” After his first marriage ended in divorce, he had to give up his Palm Springs house and was often broke. The IRS was after him for unpaid taxes.
Right-wing columnists who had always despised him redoubled their attacks in a rapidly changing political climate. (Sinatra’s demoralization can be gauged from an internal FBI report from 1950 which claims that he offered the agency his services in providing confidential reports on “subversive elements” in the entertainment field; the offer was rejected.) Estes Kefauver called him before his committee investigating organized crime, although he was permitted to testify in secret. (Did he know Frank Costello? “Just to say hello.” Joe Adonis? “Just ‘hello’ and ‘goodbye.'”) Within a few years Sinatra had lost his movie contract with MGM, his radio contract for *Your Hit Parade*, his television contract with CBS, his agency contract with MCA, his recording contract with Columbia.
At the same time that his career was apparently falling apart, his affair with Ava Gardner had gone public and by 1951 they were married, a marriage that would last just under two years (a good deal longer than Ava’s two previous marriages). This well-publicized and subsequently much-analyzed debacle—a series of violent quarrels and passionate (and, eventually, not so passionate) reunions, punctuated by a number of apparently halfhearted suicide attempts or threats on Sinatra’s part—has become the crucial episode in capsule lives of Sinatra, in which he plumbs the limits of suffering and emerges a great artist, boyish seductiveness burned away into a harsher and more rueful emotional realism. (“Ava taught him how to sing a torch song,” in Nelson Riddle’s formulation.)
Kaplan dutifully chronicles the endless comings and goings of a marriage marked mostly by separation, and sums up Ava’s contribution to Sinatra’s life:
Like Frank, she was infinitely restless and easily bored. In both, this tendency could lead to casual cruelty to others—and sometimes to each other. Both had titanic appetites, for food, drink, cigarettes, diversion, companionship, and sex…. Both distrusted sleep…. Both hated being alone.
It might suffice to say that he had met the female Frank Sinatra—a woman he could neither dominate nor leave alone—and leave it at that. In a cold light this pair of ennui-ridden insomniacs might seem a poor substitute for Antony and Cleopatra, but who would want to see them (any more than Antony and Cleopatra) in a cold light? If they exist for us at all it is as figures of myth, or what in this latter era has passed for such. Ava Gardner persists as a presence made numinous by the cinematographers who so lovingly framed and lighted her in *The Killers*, *Pandora and the Flying Dutchman*, and *Mogambo*. However indifferent she may have been to movie stardom as a career, she was clearly not indifferent to the power she was able to exert just by being there, and still exerts: just as The Voice continues to create an idealized world through its reverberations, resistant to even the most squalid of biographical details.
To get at anything like an anchored sense of reality in that maze of reflections and echoes—that peculiar arena where their inner lives were parsed on a daily basis by the odious likes of Walter Winchell and Louella Parsons and George Sokolsky—must have been as difficult for Gardner and Sinatra as for the audience of the open-ended public spectacle in which they played out their roles.
One closes Kaplan’s book with a dark and sodden sense of the world in which those lives were improvised, a world neatly summed up in an offhand remark attributed to the agent “Swifty” Lazar: “Losers have the *time* to be nice.” The glittery splendor of Sinatra’s Oscar-winning moment of redemption for his performance as Maggio has little warmth in it, to say the least. Then, turning up the volume on one of the records he made not long afterward, after he had recouped his losses and settled into a second and more lasting preeminence, it all gets washed away by a wave of upbeat insouciance—
When skies are cloudy and gray
They’re only gray for a day
So wrap your troubles in dreams
And dream your troubles away
—as Sinatra works with Nelson Riddle to create a separate reality invulnerable to ordinary suffering, the reality of song at the heart of his maze.4 And then, again with Nelson Riddle, at the end of one of those ballads that he seems to slow down to a pace almost intolerably slow, so as to sound the very bottom of each note and each thought, he inscribes in the final phrase what might be taken as a symbolic suicide note, a record of inner transcendence, or a demonstration of how deeply he had lost himself in his art: “‘Scuse me while I disappear.”5
1. Bob Shemeligian, “Rat Pack Made Copa Room Special,” *Las Vegas Sun*, August 5, 1996.
2. For “Begin the Beguine,” “Autumn in New York,” and “The Song Is You,” see the box set *A Voice in Time, 1939–1952* (Sony, 2007); for “A Foggy Day” and “Violets for Your Furs,” see *Songs for Young Lovers* (Capitol, 1954).
3. “The House I Live In,” as recorded for the 1945 short film directed by Mervyn LeRoy, can be found in the box set *Sinatra in Hollywood, 1940–1964* (Warner Bros./Reprise, 2002).
4. “Wrap Your Troubles in Dreams,” *Swing Easy!* (1954).
5. “Angel Eyes,” *Frank Sinatra Sings for Only the Lonely* (1958).
| true | true | true |
James Kaplan's Frank: The Voice is authentically a page-turner, a strident tabloid epic constructed out of facts—or more precisely out of the disparate and sometimes contradictory testimony of scores of participants in Frank Sinatra's early life. There is certainly enough testimony to choose from; pieces of Sinatra, variously skewed and distorted, are scattered all over the latter part of the twentieth century. But they hardly converge into a unified portrait: confronted with the multitude of Sinatras that one must attempt to resolve into a single plausible person, there is a gathering sense of unsettling dissonance quite at odds with the perfected harmonies of his greatest recordings.
|
2024-10-12 00:00:00
|
2011-02-10 00:00:00
|
article
|
nybooks.com
|
The New York Review of Books
| null | null |
|
4,040,022 |
http://stevehanov.ca/blog/index.php?id=132
|
20 lines of code that will beat A/B testing every time
| null |
# 20 lines of code that will beat A/B testing every time
A/B testing is used far too often, for something that performs so badly. It is defective by design: Segment users into two groups. Show the A group the old, tried and true stuff. Show the B group the new whiz-bang design with the bigger buttons and slightly different copy. After a while, take a look at the stats and figure out which group presses the button more often. Sounds good, right? The problem is staring you in the face. It is the same dilemma faced by researchers administering drug studies. During drug trials, you can only give half the patients the life saving treatment. The others get sugar water. If the treatment works, group B lost out. This sacrifice is made to get good data. But it doesn't have to be this way.
In recent years, hundreds of the brightest minds of modern civilization have been hard at work not curing cancer. Instead, they have been refining techniques for getting you and me to click on banner ads. It has been working. Both Google and Microsoft are focusing on using more information about visitors to predict what to show them. Strangely, anything better than A/B testing is absent from mainstream tools, including Google Analytics, and Google Website optimizer. I hope to change that by raising awareness about better techniques.
With a simple 20-line change to how A/B testing works, **that you can implement today**, you can *always* do better than A/B testing -- sometimes, two or three times better. This method has several good points:

- It can reasonably handle more than two options at once: A, B, C, D, E, F, G, and so on.
- New options can be added or removed at any time.

But the most enticing part is that **you can set it and forget it**. If your time is really worth $1000/hour, you really don't have time to go back and check how every change you made is doing and pick options. You don't have time to write rambling blog entries about how you got your site redesigned and changed this and that and it worked or it didn't work. Let the algorithm do its job. This 20 lines of code automatically finds the best choice quickly, and then uses it until it stops being the best choice.

## The Multi-armed bandit problem

Picture from Microsoft Research

The multi-armed bandit problem takes its terminology from a casino. You are faced with a wall of slot machines, each with its own lever. You suspect that some slot machines pay out more frequently than others. How can you learn which machine is the best, and get the most coins in the fewest trials?

Like many techniques in machine learning, the simplest strategy is hard to beat. More complicated techniques are worth considering, but they may eke out only a few hundredths of a percentage point of performance. One strategy that has been shown to perform well time after time in practical problems is the *epsilon-greedy* method. We always keep track of the number of pulls of the lever and the amount of rewards we have received from that lever. 10% of the time, we choose a lever at random. The other 90% of the time, we choose the lever that has the highest expectation of rewards.

```python
import random

# per-option counters: how many times each option was shown (trials)
# and how much reward it has earned (rewards).
counts = {"orange": {"trials": 1, "rewards": 1},
          "green":  {"trials": 1, "rewards": 1},
          "white":  {"trials": 1, "rewards": 1}}

def choose():
    if random.random() < 0.1:
        # exploration!
        # choose a random lever 10% of the time.
        choice = random.choice(list(counts))
    else:
        # exploitation!
        # for each lever, calculate the expectation of reward:
        # the total reward given by that lever divided by the
        # number of trials of the lever.
        # choose the lever with the greatest expectation of reward.
        choice = max(counts,
                     key=lambda c: counts[c]["rewards"] / counts[c]["trials"])
    # increment the number of times the chosen lever has been played.
    # (store test data in redis, choice in session key, etc.)
    counts[choice]["trials"] += 1
    return choice

def reward(choice, amount=1):
    # add the reward to the total for the given lever.
    counts[choice]["rewards"] += amount
```

## Why does this work?

Let's say we are choosing a colour for the "Buy now!" button. The choices are orange, green, or white. We initialize all three choices to 1 win out of 1 try. It doesn't really matter what we initialize them to, because the algorithm will adapt. So when we start out, the internal test data looks like this.

| Orange | Green | White |
|---|---|---|
| 1/1 = 100% | 1/1 = 100% | 1/1 = 100% |

Then a web site visitor comes along and we have to show them a button. We choose the first one with the highest expectation of winning. The algorithm thinks they all work 100% of the time, so it chooses the first one: orange. But, alas, the visitor doesn't click on the button.

| Orange | Green | White |
|---|---|---|
| 1/2 = 50% | 1/1 = 100% | 1/1 = 100% |

Another visitor comes along. We definitely won't show them orange, since we think it only has a 50% chance of working. So we choose Green. They don't click. The same thing happens for several more visitors, and we end up cycling through the choices. In the process, we refine our estimate of the click through rate for each option downwards.

| Orange | Green | White |
|---|---|---|
| 1/4 = 25% | 1/4 = 25% | 1/4 = 25% |

But suddenly, someone clicks on the orange button! Quickly, the browser makes an Ajax call to our reward function:

`$.ajax(url:"/reward?testname=buy-button");`

and our code updates the results:

| Orange | Green | White |
|---|---|---|
| 2/5 = 40% | 1/4 = 25% | 1/4 = 25% |

When our intrepid web developer sees this, he scratches his head. What the F*? The orange button is the *worst* choice. Its font is tiny! The green button is obviously the better one. All is lost! The greedy algorithm will always choose it forever now!

But wait, let's see what happens if Orange is really the suboptimal choice. Since the algorithm now believes it is the best, it will always be shown. That is, until it stops working well. Then the other choices start to look better.

| Orange | Green | White |
|---|---|---|
| 2/9 = 22% | 1/4 = 25% | 1/4 = 25% |

After many more visits, the best choice, if there is one, will have been found, and will be shown 90% of the time. Here are some results based on an actual web site that I have been working on. We also have an estimate of the click through rate for each choice.

| Orange | Green | White |
|---|---|---|
| 114/4071 = 2.8% | 205/6385 = 3.2% | 59/2264 = 2.6% |

## Edit: What about the randomization?
I have not discussed the randomization part. The randomization of 10% of trials forces the algorithm to explore the options. It is a trade-off between trying new things in hopes of something better, and sticking with what it knows will work. There are several variations of the epsilon-greedy strategy. In the epsilon-first strategy, you can explore 100% of the time in the beginning and once you have a good sample, switch to pure-greedy. Alternatively, you can have it decrease the amount of exploration as time passes. The epsilon-greedy strategy that I have described is a good balance between simplicity and performance. Learning about the other algorithms, such as UCB, Boltzmann Exploration, and methods that take context into account, is fascinating, but optional if you just want something that works.
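For illustration, here is what the epsilon-decreasing and epsilon-first variants mentioned above might look like, reusing the `counts` structure from the listing earlier. The particular decay schedule and exploration budget are my own assumptions, not something the post prescribes:

```python
import math
import random

def choose_decreasing(counts, total_plays):
    # epsilon-decreasing: explore heavily at first, then less over time.
    # The 1/sqrt(plays) schedule below is one illustrative choice of decay.
    epsilon = min(1.0, 1.0 / math.sqrt(total_plays + 1))
    if random.random() < epsilon:
        return random.choice(list(counts))  # exploration
    # exploitation: play the option with the best observed reward rate
    return max(counts, key=lambda c: counts[c]["rewards"] / counts[c]["trials"])

def choose_first(counts, total_plays, explore_for=1000):
    # epsilon-first: pure exploration for a fixed budget, then pure greedy.
    if total_plays < explore_for:
        return random.choice(list(counts))
    return max(counts, key=lambda c: counts[c]["rewards"] / counts[c]["trials"])
```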
## Wait a minute, why isn't everybody doing this?
Statistics are hard for most people to understand. People distrust things that they do not understand, and they especially distrust machine learning algorithms, even if they are simple. Mainstream tools don't support this, because then you'd have to educate people about it, and about statistics, and that is hard. Some common objections might be:
Now we even have AI to really help marketers crush their A/B tests.
Check out the post I just wrote around A/B testing levels of sophistication and how technology is changing things in 2017:
www.retentionscience.com/ab-testing/
I just found out that Kameleoon does it.
"Multi-armed bandit tests
(Adaptive traffic distribution)"
from kameleoon.com/en/pricing-ab-testing-personalization.html
Also conductrics.com/real-time-optimization/ state "Using machine learning methods".
Also vwo.com/features/ with their "Multivariate Testing".
Probably you are not the first one to come up with this idea, but you sure gave it to some of the players listed above :-) Good work!
Love the site UX btw. It's super straight forward and works well. Fancyness is overrated.
Re. this comment system: The UI is a lot more interesting than most blogs. Everything wraps the blog entry. Very simple and straightforward. I like it. But below the article is the expected place for a comment "box". Why not add a "Post a comment" button or link that takes you up here?
Like how you don't have to sign up tho.
The ε-first algorithm makes more sense than an ε-greedy one, in this context, because the total number of trials is approximately infinite. You can get a solid answer about the 'best' in a tiny fraction of the total page views the UI will encounter, but waiting for the results of a low-ε ε-greedy algorithm would be a major logistical headache.
If your ε is high enough to make the ε-greedy algorithm converge on the right result quickly, it's high enough to be a continual nuisance to users afterwards. You'll always want to *stop* the test and implement something, and going for the ε-greedy algorithm just slows that process down.
An ε-decreasing algorithm would work, but I don't think there's a compelling reason to choose it over ε-first.
What I would like to add to this discussion is that there are many other factors than just statistical testing power. Quality of the variations, how different the variations are, are variations being dropped in real time, can new tests be queued asynchronously, is a meaningless test automatically stopped? And so on.
To solve these systemic problems, and to automate everything in the process of A/B testing banners except for the bulk input of actual changes, is the reason I founded Perfectbanner.
I believe the pseudo code should say "
# calculate the expectation of reward.
# This is the total reward given by that lever divided by
# number of trials of the lever.
"
instead of "
# calculate the expectation of reward.
# This is the number of trials of the lever divided by the total reward
# given by that lever.
"
The higher the reward earned by the lever, the higher should the expectation of the reward be.
Totally incorrect. You've got into the wrong way of thinking, like most - ROI and conversion rate are just as important as CTR.
What if you have Ad A, B and C... A has the best CTR yet B has a much lower CTR but a better ROI and Conversion stats?
The A/B test will come out with the right results if you give them enough impressions.
As you say, the problem is similar to the multi-armed bandits, BUT it is NOT the same problem. There are important differences that are not taken into account in your model:

0. Some form of stability must be ensured for the visitors. You can show A, then B. But you can't show A, B, B, A, over and over again. (That's the easiest problem to solve on this list.)

1. The number of slot machines is static, whereas the number of options to try is constantly expanding and contracting as your design evolves.
You are NOT planning on testing 3 options endlessly. You want to test a number of variations as time goes on. Each new option goes against the established options. So unless you reset all the counters every time you introduce a new button color or font, new options will be far more volatile than their established counterparts. This could result in new options dropping below the threshold of existing options long before a statistically significant sample is reached and take a very long time to resurface. I.e.:
To measure a large 50% improvement on a new button from an existing 2% conversion rate, you need about 2700 visitors at a 95% confidence level. But by the time you reach 100 visitors, your conversion rate could have fallen below 2%, and from that point on, how long before it receives enough visits to prove its worth.
2. On a slot machine, one lever pull is equivalent to another. It does not matter who pulls the lever, what time of the day the lever is pulled, what day of the week the lever is pulled, what period of the year it is, what website the machine was visiting before pulling the lever, etc.
For your website visitors/slot machines, all these factors matter and more. How many people buy translation during the week-end? Not too many. How many people buy toys 2 weeks before Christmas? This method does not account for these differences. A button tested on Saturday afternoon on a translation website will be massively penalized compared to a button tested on Tuesday afternoon. Similarly, if your website gets slashdotted, you may have a sudden spike in visitors who might be either totally uninterested about buying your product (they just want to read a cool thing you wrote) or completely determined in signing up for your cool new service. And then there are seasonal items. Your Halloween themed "buy" button might perform quite well during Halloween, but how long will it remain to the top after Halloween?
3. Fashion and design trends. On the web, the context changes.
By context, I mean overall standards and conventions in web design. That glossy button of yours that has accumulated outstanding conversion rating over time is just not "in" any more. Visitors are now used to more subtle interfaces like the one of Facebook. Unless you have a mechanism to decay the value of clicks over time you will end up with "winning" options that endure when they shouldn't.
The problem here is that for this system to work, it needs to run in a controlled, static environment for a significant period of time. And you really don't have that:
Imagine you are in a casino trying to run your algorithm on a wall of one armed bandits. 5 minutes after you start, a bunch of guys come in and start playing on the same machines as you. Then repairmen come and add twice as many machines. Half of those are a new model. Then they update the OS of 4 of the machines. On and on. Does it still feel like your algorithm would work in that environment? That's far closer to the environment in which most websites operate
Each time a change is introduced, you need a full reset, but this algorithm is useful only when it is allowed to run long enough to reach a statistically significant result, and that makes it poorly suited for website testing, at least for most websites. It might work for huge websites where the numbers of visitors are so large that statistical significance can be reached before the context changes, with some tweaks taking into account things like the time of day/week/season, running a counter reset on unexpected variations (if button A suddenly converts 3x as much as before when it had already been tested often enough to reach statistical significance, something is up)
AB testing, on the other end, does not suffer as much from this volatile environment because it does not try to favor results before statistical significance is reached. On Sunday or at night, both A & B suffer/benefit equally, whereas with your system, a design which might have been ahead on Friday by a large margin may lose its lead over the week-end, get overtaken on Monday, finally recover towards the end of the day only to tank again overnight. Some potentially interesting options might take a long time to reach statistical significance.
Another advantage of AB testing, and possibly the most important issue here, is that it teaches us and helps us understand what is happening. After a few tries, you can work out some general rules like: "bigger buttons are better", "fewer options are necessary for non qualified visitors, but qualified visitors (from such and such websites) will fill out longer forms", "Pictures can control the attention of visitors", etc. which can then guide your UI evolution.
This can be done to a point with your system, but it's much less reliable: "A" has a conversion rate of 60% and "B" a conversion rate of 22%, but "B" has been tested 600k times in the low season and "A" has only been tested 900 times in the high season. Is "A" really better than "B"? You can't really compare A and B which prevents you from learning as much as you would in AB testing. You are stuck guessing and are unable to extract and verify the rules that work for your website.
Also, AB testing is more hands on which forces you to think with the data: Your Halloween button will never live through Christmas because of the fantastic sales from October/November with AB testing (or your usual "best" button will be shown 90% of the time through Christmas because the new Christmas buttons didn't get a good enough conversion early in December and by the time they recovered from that, the Christmas buying season is over)
There is this old quote - I don't remember where it's from or how it goes exactly, but I think it is pretty applicable here: "When trying to write software that learns by itself, you find out that it doesn't... but you do."
I wasn't quite planning on posting a thousand words in the comments. If you feel like responding to it, I would be happy to hear about it. You can contact me at [email protected].
It wouldn't hurt to describe the slightly more sophisticated algorithms based on probability matching (search for "bayesian bandit" to find my first blog entry and check out the October posting where I give references to the original literature. The code really isn't any more complex than epsilon greedy and Bayesian Bandits dominate the performance of epsilon greedy and require no knobs. Even better, they also handle contextual problems which are impossible to deal with using epsilon greedy.
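For readers who want to follow up on this, here is a minimal sketch of the Thompson-sampling flavor of a Bayesian bandit; the Beta(1,1) priors are an assumption for illustration, not something from the comment:

```python
import random

def choose_bayesian(stats):
    # stats maps each option to (clicks, misses).
    # Sample a plausible click-through rate from each option's Beta
    # posterior, then play the option whose sample is highest. Options
    # with little data produce wide samples, so they still get explored.
    def sampled_rate(option):
        clicks, misses = stats[option]
        return random.betavariate(clicks + 1, misses + 1)  # Beta(1,1) prior
    return max(stats, key=sampled_rate)

# example: orange has 2 clicks in 5 shows, green 1 in 4, white 1 in 4
print(choose_bayesian({"orange": (2, 3), "green": (1, 3), "white": (1, 3)}))
```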
I built an example using this method using Redis and Codeigniter to see it in action.
How to build can be seen at:
glynrob.com/database/redis-in-codeigniter/
Full code is available on GitHub if anyone wants to try it for themselves.
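For the curious, the Redis bookkeeping can be as simple as two hashes per test. A sketch using redis-py, with key names invented for illustration:

```python
import random
import redis

r = redis.Redis()

def choose(test, options):
    def expectation(option):
        trials = int(r.hget(f"{test}:trials", option) or 1)
        rewards = int(r.hget(f"{test}:rewards", option) or 1)
        return rewards / trials
    if random.random() < 0.1:
        choice = random.choice(options)          # exploration
    else:
        choice = max(options, key=expectation)   # exploitation
    r.hincrby(f"{test}:trials", choice, 1)       # atomic counter update
    return choice

def reward(test, choice):
    r.hincrby(f"{test}:rewards", choice, 1)

# usage: button = choose("buy-button", ["orange", "green", "white"])
```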
So we actually optimize the explore versus exploit trade-off through probability matching.
And we are in beta ;-) check out PersuasionAPI.
I don't think your scenario would ever play out. The example has all 3 buttons being initialized with a 100% "click" rate.
Orange could not possibly be shown 19 times in a row without it being clicked on because the algorithm displays the best possible (i.e. highest click-ratio) button each visit.
If Orange is shown 19 times in a row, it's because of a statistical anomaly, AND also because 19 people in a row clicked on that Orange button.
As soon as 1 of those people don't click on Orange, the success rate for Orange drops below 100% and the script jumps to another colour, i.e. Green. One failure on Green and we're on to White. One failure on White and we're back on Orange.
What you're missing is that the selection of what button gets displayed isn't random, it's based on the success rate of what has already worked before it.
In the best possible case, you will need the next 19 page views after the 20th to display Green, _and_ for Green to be clicked every time, for the next 19 visits in order to determine that Green is the optimal choice. This best possible case would yield a distribution over the expectations of clicks of [19/39, 20/39, 00/39]. How likely is it for this algorithm to display Green these last 19 page views?
For simplicity, let's assume that a visitor will click the Green button from now on if it is present, and further assume that a visitor will no longer click the Orange or White buttons if they are present. With this simplifying assumption, we have made the probability of displaying Green on the 28th trial independent of the probability of displaying Green on the 27th trial, and so on until the 39th trial. But we only explore 10% of the time and we have two exploratory options, so the probability of displaying Green any given time is 0.05. The probability of displaying Green the next 19 times is thus 0.05^19 which equals to 19.0735 x 10^-26
That means that you'd have to run through an additional 100 trillion x 1 trillion trials in order to actually display Green 19 times! Let's not forget about the best case assumptions: no user will ever click on White or Orange these 100 trillion x 1 trillion times, they will only click on the Green and they will do so each of the extremely rare 19 times you display it.
Those are impossible odds!
Yet the algorithm works pragmatically. It works because users are Randomly in disagreement, but directionally in Agreement. In other words, this algorithm works because its behaviour mirrors user behaviour. Your population of users will, with some probability alpha, agree on what is the best looking button or copy-edit or whatever. And with probability 1-alpha they will choose something else. But the space of something else is large, and each user will choose a different thing in that space from the others. So the critical error with this approach is actually its greatest strength.
It marginalizes random user disagreements to the point where they become totally insignificant.
> In recent years, hundreds of the brightest minds of modern civilization have been hard at work not curing cancer. Instead, they have been refining techniques for getting you and me to click on banner ads.
I will have to give this some more thought. Feel free to take a look at what we do - Vidyard dot com.
Some of the hurdles to adoption that I could see would be performance-related.
Firstly, relatively-fresh weightings should be available to the client. That seems like it requires either making the edge-caching of your pages fairly short, or making a server-side call on many requests.
Currently, split-testing (a/b/c/d/e, etc. - not sure why someone would limit to just a/b) on high-traffic sites can determine treatment-groups using hashes of experiment ids and unique identifiers for the user from the CDN. This is what we do at Wikia.
Since we cache most pages for 24 hours in our CDN, we can bake experiment configurations into the page (the weights in split-testing typically do not change over the course of a day).
Can you think of a similar way to get relatively-fresh weighting changes to the client? Perhaps using the 24h stale weights by default and then making async requests for fresher data while the page is idle? That seems like it should work. Thoughts?
---
Secondly, a treatment event needs to be logged for every active experiment every time that a user is treated (eg: to say either that they clicked or didn't click). With split-testing, you only need to send a treatment-event the first time a user is treated with a new experiment and they stick in this group until the experiment ends or the configuration switches them out of it. Nothing strikes me as a good solution to that problem. Seems like you'll have to just take that performance-hit.
Thanks again for the post! Would love to hear your ideas on performance for this method :)
=====
Initial parameters for A/B test:
{pA} - 0.005 (0.5%) success chance (click, sale etc.) of group A
{pB} - 0.01 (1%) success chance of group B
Initial parameters for new approach:
{p} - 0.2 (20%) of viewers are assigned to random group
{pA} - 0.005 (0.5%) success chance of group A
{pB} - 0.01 (1%) success chance of group B
Each test consisted of 10k impressions; 100k tests were performed. Results:
A/B:
Group B "won" in 99,844% cases, total successes (group A+B) over all tests: 7,5 mln.
New approach:
Group B "won" in 98,827% of cases, but provided 9,5 mln successes. (almost 27% better than A/B test!)
So, it seems that although A/B tests more quickly answer which group is "better" they also generate less sales/click/whatever.
I'm not a statistician but I would love to see math that give sound explanation to above results.
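For anyone who wants to poke at these numbers, here is a rough reconstruction of the simulation described above, using the stated parameters; the bookkeeping details are my guesses:

```python
import random

P = {"A": 0.005, "B": 0.01}  # true success chances, as stated above

def run_bandit(impressions=10_000, epsilon=0.2):
    trials = {"A": 1, "B": 1}
    wins = {"A": 1, "B": 1}
    successes = 0
    for _ in range(impressions):
        if random.random() < epsilon:
            group = random.choice(["A", "B"])  # 20% assigned at random
        else:
            group = max(trials, key=lambda g: wins[g] / trials[g])  # greedy
        trials[group] += 1
        if random.random() < P[group]:
            wins[group] += 1
            successes += 1
    return successes

def run_ab(impressions=10_000):
    # classic A/B: a 50/50 split for the whole test
    return sum(random.random() < P[random.choice(["A", "B"])]
               for _ in range(impressions))

# 1,000 repetitions here for speed; the comment above used 100k
print(sum(run_bandit() for _ in range(1000)),
      sum(run_ab() for _ in range(1000)))
```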
Then you'll have a graph of each of these as an independent probability (i.e. how often you get blue and how often you get large font). Then you could also do some linking between the lists to show what the other option was when it was clicked (i.e. Blue and Large) and then analyze that.
Or am I missing something.
There are some algorithms that will work very well to solve a big problem with A/B testing. Change.
As seasons, cycles, traffic, external marketing, tv, display, customers, markets, businesses, strategies - Change - so do the results of that A/B test you declared 6 months ago.
Just because it did 12.5% better in 2 weeks in February does not mean it will self optimise with the CHANGE going on. So people delude themselves that they are still getting the 12.5% (in their mind) when in reality, they don't know.
Hmmmm.
To solve this though, I often test after going live with 5/10% splits to verify the control performance still tracks lower. This helps to convince people who think the split or multivariate test has suddenly driven conversion rates down.
Extend this idea further and use an evolutionary genetic algorithm to select, test and then repeat verify against runners up, the original control and random new items fed in. Something that can learn and adapt to patterns in an evolutionary way will be far better at automatic tuning of raw assets into optimal recipes. It will also keep performing long after your last a/b test finished, and will adapt better to change sources you have little control over.
I want to build this! I want to build it now! If anyone else feels the same and can help, I'd love to do it.
Craig.
Your real point though is about sampling, and well, who cares if it's 50/50 or 90/10; you will get statistical significance quicker with closer ratios and fewer variations. 90/10 with 5 different results will take a long time for smaller sites to become significant.
Every tool I have used allows for more than A/B (2 variations) so that isn't an argument.
Yes people understand an even distribution, why complicate it, people aren't doing enough of this as it is, let alone making the barrier to entry higher still.
I work in a digital agency and people use the tools available to them. Nobody here is a programmer, so they use tools like Optimizely and Visual Website Optimizer - because they're easy!!! I can't stress this enough: simple for marketers to use & understand is key here. The super-clever guys at Google and Microsoft can do whatever they like; the normal 99% of people need simple & easy to understand. Storing data in Redis or whatever is a million miles beyond their capabilities.
I'm off to implement this all over the place. Thanks for the post... really.
Ie, let's say you have some ideas for how to change about your website, banner, etc. but you only want to have a small pool at a time, ie. given a queue of ideas from your marketing/design team, the site will only make use of the ideas 3-4 at a time. As the process finds a winner over a period of time and stability is achieved(configurable), the "losers" in the pool are kicked out and replacements are injected into the batch from the queue.
In this way, elements can be auto-evaluated, judged, and swapped out. I'm sure some enterprising coder can also include a report attached to each option and put the losers in the fail bucket and add the winners to the high performer bucket.
And yeah, consistent display of the site for a given user. Though letting it mix things up periodically is a good thing as well... :)
Wing wingtangwong.com
I built an Excel sheet that simulates this algorithm for 6 items of differing (theoretical) click-through rates. I also used a genetic optimization algorithm to test whether 10% is the right number for randomization.
Over 15k trials, I found something interesting. The average click-through rate declined (almost linearly) as the amount of randomization went up. Of course, then I suspected that we were losing click through's because we couldn't get to the "right" answer quickly enough.
But I measured the theoretical loss - the people who would have clicked on the best option had it been presented to them. (I measured this by pulling a random number for each trial and if it was small enough to clear the highest hurdle, but not small enough to clear the hurdle presented, it was an 'unnecessary loss.')
I found that unnecessary losses stayed relatively the same for all randomization trials. That was a surprise.
So, try for a low randomization number. That seems to indicate that the randomization is just there in case things change. For a static set of options and static customers, no randomization gets us to the "right" answer fastest with the best average click-through rates.
Anyways, happy to share my spreadsheet with anyone who wants to see it.
Your idea is straight forward, and easy to understand.
You can modify the code to populate its results for each 1000 (or any number) visits. It would be interesting to see how people's reactions change with time.
Implemented the logic as a WordPress plugin with shortcodes.
You can find it at GitHub or a google search for "FlowSplit"
https://github.com/EkAndreas/flowsplit/wiki/1---Introduction
There's no "Like" button, but if there was, consider it pressed.
I've been working on a few algorithms for cross-item comparison: if "click me" is blue and the banner is X size, and so on for the different combinations I want to test. So basically I'm creating test sets over individual items, which proves more coherent with web design. I am really testing which style sheet / page design (still randomly provided and measured) seems to get the most time, clicks, navigation, etc., and pulling all of those factors into a scorecard. I know it seems like a bunch of work, but once you have the scripts it works for any page / site you build from then on, and having it automated saves so much time down the road. And talk about great stats to provide to your clients.
- sometimes (often) we want user affinity, ie. we want the same user to have the same behavior, each time he sees the button.
- sometimes choices are made early / are static, eg. I'm generating emails with 2 templates... Once sent, this is difficult to get it back :)
If anyone as some advice to handle these cases...
for other situations (and they are lot), this is a great idea.
Gilles
I just wouldn't categorize it any differently.
This is kinda like quick sort. In most cases, with a random distribution, it will run O(n log (n) ), but in the worst case, mathematically, it's O(n^2). I think the algorithm described above could really make great improvements over the current standard A/B testing, but anyone that uses it needs to know the pros and cons so that it can be tweaked properly. In some cases, maybe the 90-10 split could be more optimized at 20-80 or 30-70. It really depends on what kind of data you have and finding a "sweet spot" for it. With careful analysis of the specific application, it could prove to be very powerful... but you do have know what's going on and make accurate assumptions about the data. The situation that I thought of where this would not be optimal is if you have a lot of data initially for on of the tests that doesn't match the eventual CTR after x number of views.
I thought Optimizely had this capability too but upon examining their interface I don't see that feature. Anyways nice post.
sean
Also - 10% seems awfully high once we start converging on a solution. Can anybody demonstrate that (a) my intuition is wrong or (b) that there's a way to improve upon this?
This is an interesting approach. Thanks!
When doing this, you or others might also want to consider the statistical significance of the results. Running a chi-squared test of the results on a periodic basis will begin to tell you when/if the variances you are seeing are statistically significant. And it's not stats for stats' sake - there's real benefit there. It means that you can more quickly and confidently find your winner, stop the test and swap over to the option that performs the best.
I'm not a statistician (probably the opposite of that) but at my last job I took an Excel spreadsheet our analytics team was using to verify the validity of test results and brought it online. It took a little digging to find the equation being run by the "chitest()" function in excel but once I did, it turned into about 50 lines of code to prep the data and run the chi-squared test.
If I have time I'll try to generalize that code to work with N buckets (we were just testing A & B) and post it somewhere.
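As a point of comparison, if SciPy is available the N-bucket version is nearly a one-liner; a sketch (not the commenter's Excel port), using the article's real-site numbers:

```python
from scipy.stats import chi2_contingency

# one row per option: [clicks, non-clicks], from the article's results table
observed = [[114, 4071 - 114],   # orange
            [205, 6385 - 205],   # green
            [59,  2264 - 59]]    # white

chi2, p_value, dof, expected = chi2_contingency(observed)
# a small p-value suggests the click-through differences are not just chance
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
```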
It's statistically sound because it doesn't manipulate the percentages; rather it just concentrates on one of the options, in a sense making its percentage "more accurate" in the sense of developing more N.
When N is small for all variants, natural fluctuations will cause various options to be "best," switching a lot, which in fact is just what you want it to do.
If you were concerned about being a little more statistically sound in the low-N period, you could just say "If the total number of trials is less than some threshold T, display the buttons round-robin." That way you can set T = 100 or something like n_choices * 50, and that gives all the options a "fair start."
Nice!
Of course, if you're not worried about SEO (such as when you have a big brand) then great, but while I do not enjoy the very simple A/B testing, I will still use it, as my rankings will not be affected.
Just a thought :-)
- assuming that you are working with an ecommerce site, you should be consistent: as a user passes through more than one product page, he must always see the same color of that button; it will be confusing for that specific user to see all the rainbow colors on the “add to cart” button.
- there should be also a normal/control group, an unbiased group that will receive the old version of the button; you want to see the increase in CTR/conversions/whatever with respect to the control group; but.. I think that somehow your approach is shorter than the one with A/B variant, where you should do at the end a follow-up test running only the winner in order to validate the result.
So... it seems a very nice approach, but there is some software (SaaS) already doing this, not with 20 lines of code of course (and for a lot of money). There are MVT or A/B SaaS tools that, instead of leaving the owner to choose from the possible winners, automatically choose the winner during the test.
One little thing I am afraid of: if you want to test different "creatives"/banners/images/html-banners on different sections of a product page, for example, you have to wire these 20 lines of code into each zone that has multiple variants to choose from. So, it can get quite messy inside the script source that generates that specific web page.
The approach using SaaS that uses section-divs where you upload creatives/banners/images it's easier than writing 20 lines of codes for each zone that we want to test especially if you are not a programmer, or you cannot hire one for this task. The reports are also nice.. but, this programming approach you are proposing is quite cost effective I suppose.
Anyway, it's worth exploring this solution. It could be more cost effective for many little A/B tests.
Nice article! Thx.
PS. Sorry for my English as I'm not a native English speaking guy.
An important feature of A/B tools is that a specific user always see the same option, so they avoid the "It was not like this yesterday" effect.
You should check Webtrends Optimize, that one is a really innovative tool. It not only selects the best option to your average audience but also selects the best option for different types of users, based on where they come from for example.
In your example where A is initially unpopular, B will then be shown, but *only for as long as it's successful*. If it's not popular either then its expectation will very rapidly decrease until A starts getting shown again.
All other things being equal, this method will show you which option gives you the best CTR, which is all you really care about anyway.
I'd be really keen to understand this further. Would it be possible to drop you an email? If you drop me an email with my name followed by an "at" symbol, ending with gmail period com, I'll respond.
There's a startup called Conductrics that's doing this as a service. Really cool stuff, especially when I have no formal background in statistical modeling.
| true | true | true | null |
2024-10-12 00:00:00
|
2009-07-20 00:00:00
| null | null | null | null | null | null |
28,762,263 |
https://abishekmuthian.com/smart-watch-to-smart-clock/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
334,761 |
http://www.latimes.com/news/science/la-sci-vole16-2008oct16,0,3174437.story
|
Losing a mate leaves hole in vole
|
Denise Gellene
|
# Losing a mate leaves hole in vole
Scientists have confirmed what poets have long known: Absence makes the heart grow fonder.
Working with mouse-like rodents called prairie voles, scientists have found that close monogamous relationships alter the chemistry of the brain, fostering the release of a compound that builds loyalty but also plays a role in depression during times of separation.
The scientists found that after four days away from their mates, male voles experienced changes in the emotional center of their brains, causing them to become unresponsive and lethargic. When given a drug that blocked the changes, however, lonely voles emerged from their funk.
The same chemical is found in human brains, and scientists said the research could provide insight into treating human grief and separation.
“Whenever you form a pair bond, it changes your neurochemistry,” said Larry J. Young, a neuroscientist at Emory University in Atlanta and an author of the study. “If you lose that partner, it has a dramatic impact on the brain.”
Experts noted that human relationships are more complex than animal bonds and involve culture, socialization and rational thought. Thus, there may be little to learn from the depressed voles.
“When humans grieve they don’t just give up and sit like lumps,” said George Bonanno, a psychologist at Columbia University’s Teachers College who studies the process of bereavement. “They have purposeful behavior even when they are feeling lousy.”
Still, Young said the experiment might help explain the longing people feel for partners who are absent or who die. The study, published Wednesday in the journal Neuropsychopharmacology, might also shed light on why couples remain in relationships that are bad for them, he said.
Prairie voles, which are found in the wild through much of North America, are used to study monogamy because they are among the few animals that pair up like humans. Although the voles may occasionally stray from the nest, they eventually return to their lifelong partners to help raise litter after litter.
The brain chemical corticotropin-releasing factor, or CRF, has a key role in maintaining those loyal bonds, researchers found.
After separating nine male voles from their partners, Young and colleagues from Emory and the University of Regensburg in Germany tested the animals’ ability to cope with stress.
When placed in a pool of water, the voles passively floated instead of trying to swim. In a second test, the animals failed to struggle when suspended by their tails.
The animals displayed “depressive behaviors,” Young said. “They become more passive, more likely to give up.”
When researchers killed the voles and looked inside their brains, they found elevated levels of CRF, which is known to have a role in depression.
A control group of 10 male voles that had been separated from a male sibling displayed no depressive behaviors or increases in CRF, researchers said, indicating there was something special about monogamous bonds.
Voles that received a drug that blocked CRF behaved normally when separated from their female mates, according to the study, which was funded by the National Institutes of Health, and other research organizations and foundations.
Several pharmaceutical companies are developing drugs that act on CRF as treatments for depression and anxiety-related disorders.
In some cases, researchers said, the pain of separation can serve a useful purpose. Young said uneasy feelings that come with separation may keep males close to their nests, where they can protect offspring from predators, for example.
C. Sue Carter, a professor of psychiatry at the University of Illinois at Chicago who works with voles, cautioned against anthropomorphizing the animals, adding there is more than one way to interpret their behavior.
“What humans call depression might be an adaptive strategy,” she said. The passive voles might simply be conserving their energy for more important things, she suggested, such as searching for a new mate.
--
| true | true | true |
Monogamous rodent's brain chemical could help in grief treatment.
|
2024-10-12 00:00:00
|
2008-10-16 00:00:00
| null |
newsarticle
|
latimes.com
|
Los Angeles Times
| null | null |
29,843,880 |
https://lostgarden.home.blog/2021/12/12/value-chains/
|
Value chains – A method for creating and balancing faucet-and-drain game economies
|
Jon Selin
|
# INTRODUCTION
**The problem with picking up sticks**
Recently I was designing the harvesting and crafting system for our Animal Crossing-like game Cozy Grove when I ran into a problem: picking up a stick is not that fun.
The core activities in a life sim are generally not full of mastery and depth. You chop trees. You dig holes. You pick up sticks. In isolation, each of these is *dull*. Our playtesters would harvest a leaf pile, get some sticks, and then put down the controller. They’d turn to me and ask “Uh, okay, where is the game?”
If we were following the standard advice on prototyping core mechanics, we might as well stop development right there. Clearly the core was not fun. We tried extending the loops out from 5-seconds (gathering), to 30 seconds (wandering), to 5-minutes (selling). No luck. My playtesting group hated the game.
Yet life-sims do exist! And they are delightful. Clearly there’s more to establishing value in a game than just perfecting a ‘fun’ core mechanic.
**Discovering value chains**
It wasn’t until we spent 12 months building out the rest of the game – the crafting, the decorating, the daily pacing structures – that players finally began to value picking up sticks. Because it turns out the value of sticks was entirely driven by their utility in reaching future goals. And if those future goals don’t exist, the sticks have no value.
You tend to see this scenario in high retention, progression focused games:

- The core mechanic is not the sole or even the primary driver of player value.
- Value for a particular action comes from how it facilitates subsequent activities.
- Often players engage in *long chains of rote economic activity* in order to reach their actual final goal.
**Why you should care**
Understanding how to generate meaning with sticks is not an idle concern! High retention, progression focused economic systems are at the heart of most games as a service (GaaS). There’s a huge demand for economy designers who know how to build and balance robust game economies that provide rich value to players.
Getting your economy design wrong costs time and money. Very often, it can kill the game. Yet game economies are also a rarely discussed black art. So it is hard to know where to start. And hard to hold constructive conversations with your team. There’s an inherent complexity to the topic that makes matters even worse.
So let’s try to improve the situation.
**What this essay covers**
We’ll cover the following.
- Chapter 1: What is a value chain?
- Chapter 2: Balancing value chains
- Chapter 3: Architecture of multiple value chains
- Chapter 4: Establishing endogenous meaning in games
The whole essay is around 30+ pages and can be a bit technical. Feel free to take it slowly. But if you are interested in game economy design, this is a good crash course.
# CHAPTER 1 – WHAT IS A VALUE CHAIN?
We’ll model game economies and associated activities as **endogenous** (self-contained) **value networks**. These networks are composed of **value chains**. The value chains combine to form a full **faucet-and-drain** economy.
**Basic structure of a value chain**
This is the shorthand I use when jotting these out on paper.
- You get a stick!
- Which lets you make a lamp
- Which lets you decorate your house
- Which satisfies your need for self expression, the ultimate motivational anchor for wanting the stick in the first place.
Notice the structure:

- Each node contains an **output** of some (currently unspecified) action.
- The nodes are connected to one another in a **linear** fashion that’s easy to read. No strange loops or spaghetti-like diagrams.
- The chain terminates with an **anchor node** representing player motivations.
There’s a lot that’s not specified here in the shorthand version. We’ll get into more verbose versions below.
But you can see some useful traits
- By jotting this down, you are forced to consider the direct purpose of each resource.
- And how it relates to ultimate player motivations.
- If the chain is broken, imbalanced or obfuscated in some way, players will stop finding value in the early steps of the chain.
**Inputs and outputs**
Now let’s look at a more verbose description of a value chain. In practice each node is composed of three elements:
- **Action**: What the player (or the game) is doing to cause a change in the world.
- **Inputs**: Resources that the action requires. These can be tangible resources or abstract concepts like time.
- **Outputs**: Resources that are the result of the action. Again, they can be concrete or abstract in nature. The anchor node is always abstract since it represents an internal psychological state.
The diagram above is pretty, but hard to quickly put together. In practice, I use a text-based format that can be typed out in any basic text editor. Feel free to adapt the formatting to your project; it is the ideas that matter.
In purely text format, we get something like:
1. *ChopTree* (-treeHealth & -player time) -> +stick
2. *Craft* (recipe & -stick & -rag) -> +lamp
3. *Decorate* (lamp, other decorations) -> Decorated Space
4. **Decorated Space -> Self-expression anchor**
What this says is:
**Step 1: “ChopTree (-treeHealth & -player time) -> +stick”**
Player chops a tree and gets a stick.
- The **–** symbol means that the action consumes treeHealth (a variable on the tree) and player time. This makes the action a sink in economic terms.
- The **&** symbol means that this action takes both health AND player time. If one is missing, the action can’t be performed.
- The **+** symbol means that this activity is a source for sticks.
- The **->** symbol splits up input activities from output resources.
- The action is italicized for clarity.
**Step 2: “Craft (recipe & -stick & -rag) -> +lamp”**
Then the player crafts with a recipe, stick resource and rag resource.
- The stick and rag have the **–** symbol next to them, indicating this action is a sink for those resources and they are removed from the game economy.
- The recipe is not consumed. It has no **–** next to it.
- The output of this action is a lamp. Notice the **+** symbol signalling a source.
**Step 3: “Decorate (lamp, other decorations) -> Decorated Space”**
Then the player decorates with the lamp.

- The **,** symbol represents options that are valid inputs. Decoration can occur with the lamp OR any other decoration.
- There’s no concrete new resource here that is produced, but we do get a decorated space as an output.
**Step 4: “Decorated Space -> Self-expression anchor”**
Finally, the player serves their goal, which is to express themselves. Self-expression is a strong intrinsic motivation for some players and acts as the anchor for the entire chain.
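To make the shorthand concrete, here is one minimal way a node and the chain above could be written down in code. This is my own sketch; the names (`Node`, `consumes`, `requires`, `produces`) are illustrative, not from the original notation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    action: str                                        # what the player/game does
    consumes: List[str] = field(default_factory=list)  # '-' inputs (sinks)
    requires: List[str] = field(default_factory=list)  # inputs not consumed
    produces: List[str] = field(default_factory=list)  # '+' outputs (sources)

stick_chain = [
    Node("ChopTree", consumes=["treeHealth", "playerTime"], produces=["stick"]),
    Node("Craft", requires=["recipe"], consumes=["stick", "rag"], produces=["lamp"]),
    Node("Decorate", requires=["lamp"], produces=["decoratedSpace"]),
    Node("SelfExpressionAnchor", requires=["decoratedSpace"]),  # terminal anchor
]
```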
**Key concepts when working with value chains**
Now that you have a definition of value chains, let’s look at how they tie into some other important game design concepts.
**Value chains are one way of structuring your game’s internal economy**
Internal economies refers to the practice of modeling economies as a network of the following basic operations:

- **Tokens**: Resource tokens that flow between various nodes in the network of player or system operations.
- **Sources**: Nodes that create new tokens and add them to the flow.
- **Pools**: Nodes that accumulate or hold some number of tokens.
- **Transforms**: Nodes that transform tokens into other tokens.
- **Sinks**: Nodes that destroy those tokens.
For a rich description of how internal economies work, read Joris Dormans’s book *Game Mechanics: Advanced Game Design*. He goes into common design patterns and explains the ideas in more detail.
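A minimal sketch of those five operations as code may help; the class shape below is my own simplification, not Dormans’s:

```python
class InternalEconomy:
    def __init__(self):
        self.pools = {}  # pools: token name -> amount currently held

    def source(self, token, n=1):
        # sources create new tokens out of nothing
        self.pools[token] = self.pools.get(token, 0) + n

    def sink(self, token, n=1):
        # sinks destroy tokens, if enough exist
        if self.pools.get(token, 0) < n:
            return False
        self.pools[token] -= n
        return True

    def transform(self, inputs, outputs):
        # transforms turn one bundle of tokens into another,
        # e.g. {"stick": 1, "rag": 1} -> {"lamp": 1}
        if any(self.pools.get(t, 0) < n for t, n in inputs.items()):
            return False
        for t, n in inputs.items():
            self.pools[t] -= n
        for t, n in outputs.items():
            self.pools[t] = self.pools.get(t, 0) + n
        return True

eco = InternalEconomy()
eco.source("stick")
eco.source("rag")
eco.transform({"stick": 1, "rag": 1}, {"lamp": 1})  # craft a lamp
```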
You can describe almost any economic system in a game using these basic elements. But that flexibility also can be overwhelming and hard to communicate.
Value chains are a specific sub-case of an internal economy. They make use of all the basic operations but in a far more restrictive manner that makes both the construction of economies and more importantly, their subsequent *analysis* easier.
**Value chains are a form of “Faucet-and-Drain” economy**
In a typical real-world economy, tokens circulate in enormous pools that slosh back and forth due to feedback loops and emergent market dynamics. These are enormously complicated and difficult to visualize.
However, value chains focus on a simplified cartoon economy known as a faucet-and-drain economy, defined by strong sources and strong sinks, limited object lifespans and limited circulation of goods.
*Example of a ‘faucet-and-drain’ economy from Ultima Online, **The In-game Economics of Ultima Online**, Zachary Booth Simpson, 1999.*
A faucet and drain economy (like the one visualized for Ultima) might seem complicated at first glance, but it has some greatly simplified attributes:

- **Faucets**: Resources in the game are generated from nothing as needed. They are virtual, so we can make as many as we want (or as few).
- **Transforms**: The resources flow mostly one way into a series of transforms to produce various other elements players desire. These are all designed game activities.
- **Drains**: We get rid of excess materials. Again, they are digital, so there are no unexpected externalities like pollution or landfills. We press a button and boop, they are erased from existence.
These faucets and drains map directly onto the various portions of the value chain.
- **Early stage of the value chain has sources**: Players perform actions (the core gameplay!) and generate a steady flow of base resources.
- **Mid stage of the value chain transforms resources**: Those resources are transformed into a very small number of intermediate resources.
- **End stage of the value chain has sinks**: Finally, players pay resources into sinks that help them gain access to whatever their anchor motivations might be. As a result, there is a steady stream of goods being destroyed.
Faucet-and-drain economies have some really useful attributes for economy designers.
- **Constant demand**: There's a nice velocity of goods flowing through the economy and out via the sinks. This means we can easily incentivize players to continue engaging in gameplay actions that generate resources.
- **Limited feedback loops**: There's limited pooling of excess resources. This leads to simpler dynamics and fewer unexpected feedback loops.
- **Easier explanation**: Real economies are complex and hard to talk about. A faucet-and-drain economy is much easier to explain to players. This lets them form models of cause-and-effect and helps with their long term planning and engagement.
- **Easier balancing**: You can usually trivially balance sources and sinks against one another. If there isn't enough of a resource being created, the designer can tweak some number so player actions generate more. Or decrease the cost of sinks. If there is too much of a resource being generated, the designer can increase the cost of sinks or reduce the sources.
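As a toy illustration of that last point, balancing often reduces to comparing two rates; every number below is invented:

```python
# Invented numbers: does the stick faucet keep pace with the lamp drain?
sticks_per_hour = 240          # source: what harvesting actions generate
sticks_per_lamp = 2            # sink: what crafting consumes
target_lamps_per_hour = 100    # the pace we want players to progress at

supported = sticks_per_hour / sticks_per_lamp  # lamps/hour the faucet allows
if supported < target_lamps_per_hour:
    print("starved: boost the source or cheapen the sink")
elif supported > 1.5 * target_lamps_per_hour:
    print("flooded: sticks will pool; add sinks or slow the faucet")
else:
    print("close enough: tune within this band")
```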
**Game economies as cartoon economies**
Note: You don’t see many pure faucet-and-drain economies in the real world. All real-world economists need to deal with the messy reality that real-world extraction of resources and making of objects is incredibly expensive. You can’t wave a magic wand to increase some source of scarce goods. So instead, we see more circular economies where limited rival goods are created infrequently and then circulate within a competitive market for a long period of time. Your modeling must include supply chains, warehousing, environmental externalities, transaction costs and more.
Game economies are special since there is zero cost to creating, transforming and destroying new digital resources. If we want every person in the game to get a puppy, we snap our digital fingers and it is done.
This means every economic feature associated with supply chains, transaction costs and various externalities is a **design choice**; we include them if they make the game better. As digital economy designers, we can use these special powers to make our job as game economy designers easier and the experience of playing the game better for our players.
**Value chains are structures for creating demand**

A node creates demand and value for players to engage with earlier nodes in the chain. You can say that a node **‘pulls’** resources up from those early nodes.
The powerful sinks at the end of each value chain act to maximize flow of resources through the chain. We usually want clear systems of cause and effect so players can easily plan ahead, so building murky pools of inscrutable assets can hurt gameplay if not done with care.
Big pools reduce pull. However, see “The emotions of scarcity and abundance” below for more detail on how planned scarcity and abundance (within a narrow band of outcomes) can drive desired player emotions.
**Value chains connect to real world motivations through Anchors**
Traditionally, when we think of value, we often think of valuable goods as those that serve a physical need like food or shelter. However, in games, there is no actual need for food or shelter to satisfy. A game will not feed you if you are hungry. Instead, valuable digital goods are those that serve a player’s psychological needs. Game food in a game like Valheim fulfills a player fantasy of survival (mastery over the environment), not actual hunger.

When designing value chains, anchors are how we define and represent these psychological needs. And by inserting them at the end of each value chain, we call out **how meaning is carried through** the earlier stage economic nodes. A powerful anchor can be a big reason why earlier nodes have value. If you don’t identify how each node serves your psychological anchors, you’ll very likely end up with disconnected nodes of isolated atomic meaning.
**Other sources of meaning**: Each node can always generate meaning. It can contain a beautiful interaction. Or a delightful puzzle. Or moment of insight. By no means am I saying that only economic value networks provide meaning or value! However, if you can connect these small moments of value together by a self-reinforcing value network, you build a self-consistent space for players to form and pursue meaningful goals.
**Possible anchors**: There are lots of motivations that act as anchors. For example, SDT (Self Determination Theory) lists:

- **Autonomy**: Do you accept the decisions you have made?
- **Competence**: Do you feel like you are gaining mastery or competence in your actions?
- **Relatedness**: Are you supported and do you support others in a pursuit of self determination?
Or you could reference work by Quantic Foundry, which has tried to map out some of the player motivations found in popular games:
- Destruction
- Excitement
- Competition
- Community
- Challenge
- Strategy
- Completion
- Power
- Fantasy
- Story
- Design
- Discovery
**Player interviews**: Ultimately, these models are only a starting place for finding your game’s anchors. You’ll discover that needs are nuanced and perhaps best uncovered by talking to your players and finding what resonates with them. When you hear that your game changed someone’s life for the better, don’t roll your eyes. You are likely observing an anchor for a value chain.
I remember once someone told me that my cooperative factory building game helped restore his faith that people can be good. And how he made lifelong friends just through playing the game. Those shared goals, trust and long-term friendship are powerful anchors that made the atomic actions of placing road tiles and cranes very meaningful.
The deep meaning behind value chains is this: By understanding anchors you start to understand how your game provides tangible value to your player. Games are never ‘just games.’ People play them (and keep playing them) because games add real value to their lives. You need to design with that goal in mind.
**Can economic systems serve intrinsic motivation?**
At the most basic level, extrinsic motivations are when you feel forced to do an action in order to attain some other purpose. Intrinsic motivations are when you want to do an action for the sake of the action itself.
It is around this point that folks trained in the pop-psychology pits of YouTube game design start worrying that economic systems of this sort promote coercive extrinsic motivators. The actual answer is complicated.
Some things to keep in mind that are usually not discussed in a typical like-share-subscribe rant.
- **Extrinsic vs intrinsic motivation is a spectrum**: It is not a binary. A lot of things we do are a little extrinsically motivated and a little intrinsically motivated. So if you are desperate for black and white morality, be aware this is a topic that is mostly shades of gray in practice.
- **Individual perception matters**: People often move from being extrinsically motivated to being intrinsically motivated for the **same exact action and reward** as they incorporate the action into their personal feelings of self-determination. For example, at one point I made a french press coffee each morning to wake up and get my caffeine. It was extrinsically motivated, rote behavior. But then I started thinking of myself as a coffee drinker and over time the rote activity turned into a ritual that I truly enjoy. People shift and change. Perception is often more important than the exact mechanics or numbers involved.
- **Intrinsic motivation is often a journey**: The story about coffee suggests another truth. Players don’t start out intrinsically motivated. Often they are just playing around, following one bright sparkly to the next. Over time they discover how a set of rote actions serves one of their unmet deeper human needs. This can take time and learning! But by the end of their personal journey, the previously rote actions are transformed. They become time spent with purpose and intention.
*Why *you choose to perform a rote action as well as your level of *personal buy-in *to this choice have a *huge* impact on whether or not an activity is seen as intrinsically or extrinsically motivated. If you are interested in this topic, I highly recommend doing a deep dive into SDT since it is one of the few experimentally verified models of key factors involved in shaping motivation.
Some tips
- **Design each action node to support feelings of autonomy, competence and relatedness.** The more each moment in the game supports feelings of self-determination, the more likely players feel intrinsically motivated.
- **Ensure each anchor supports feelings of autonomy, competence and relatedness.** If your ultimate anchors are tied to materialistic numbers going up, like “Make as much money as possible”, you really aren’t supporting any of the key factors related to intrinsic motivation.
- **“Mastery” is often a short-term motivator**: Historically games have focused on helping players feel competence by teaching them novel skills. For example, you learn to double jump for the first time or beat a new boss puzzle. But eventually players learn the skill and chunk it into a rotely executed tool. True evergreen mastery mechanics are rare and expensive to invent. But ‘infinite mastery’ doesn’t need to be the goal of every mechanic in your game! It turns out that rote competence in service of other unmet needs still triggers feelings of *competence*! Think of mastery in terms of the player creating cognitive tools they can then apply to serving higher-order needs.
Things to listen for in playtests
- **Coercion**: Do players say they feel coerced into doing something? If you hear this, you are leaning too heavily towards extrinsic motivators.
- **Changes in perception**: Do players say “This didn’t meet my initial expectations, but now I really enjoy it”? That suggests players are transitioning from extrinsic motivation to intrinsic motivation. Ask them why they enjoy it. There’s a very good chance you’ll discover some powerful anchors in your design (that you can then amplify or reveal sooner!)
**The big picture:** There’s a fun study that suggests games that lean heavily on intrinsic motivation tend to improve players’ well-being, and they tend to have longer-term retention and improved monetization, while games that lean heavily on extrinsic motivations tend to harm players’ well-being.
So even though the topic is messier than suggested by internet moralizing, it is still worth building in the factors of intrinsic motivation into every step of each value chain.
# CHAPTER 2 – BALANCING VALUE CHAINS
**Why you balance your economy**
One of the critical jobs you’ll perform as an economy designer is making sure your economy is **balanced**. As game developers, we are selling amazing experiences and a poorly balanced economy leads to a crappy player experience.
From the player perspective, an imbalanced economy produces complaints that look like the following:
- X activity or resource seems pointless.
- X is boring
- I didn’t even notice X
- I don’t understand why I’m playing.
These mundane phrases are some of the most important pieces of player feedback a designer can hear. Your players are not being stupid. They are giving you incredibly valuable signals on what is wrong with your game.
**Role of value chains in economy balance**
There are many potential root issues that drive this sort of feedback. The trick is finding them. For example, an abundance in one location might be driven by a lack of sinks further down the chain. If you don’t know the structural dependencies of your economy, you’ll struggle to pinpoint the root cause of a player’s report.
Value chains provide an analytical framework that helps you do the following:
- **Define each resource** in the game and why each is valuable.
- **Define structures** for how resources relate to one another in a meaningful way.
- **Analyze, pinpoint and fix** issues where resources or activities are not valuable.
Every prototype you make starts out poorly balanced. And then you iterate on the balance to make it better. Value chains **speed up iteration** by simplifying the underlying problems and helping you identify and classify observed problems faster. They help you **reduce the cost of fixes** by **targeting** specific problems while limiting ripple effects.
**Balance from the technical perspective**
From a technical perspective, we can define a **balanced value chain** as one where there is a strong enough set of anchors and associated sinks to consistently pull resources up from all nodes along the chain.
- The player is **motivated** at each node to perform game activities in order to reach subsequent nodes (and ultimately the anchor).
- The player doesn’t face an **overabundance** of a particular resource that swamps sinks or makes exercising an earlier node’s activity meaningless.
- The player doesn’t face extreme **scarcity** of a much-needed resource that makes grinding an earlier node laborious or irritating. This can lead to pacing delays or grinding burnout.
There are a few key steps to balancing a value chain. Each of these is a major topic we’ll cover in detail.
- **Step 1**: The structure of the value chain must have clear links of cause and effect carrying over from each node all the way to the anchor.
- **Step 2**: You need to identify the types of sources and sinks used in your value chain.
- **Step 3**: You need to match the power of your sinks with the power of your sources. For example, exponentially increasing sources should be matched by exponentially increasing costs on sinks. Mismatches here result in nearly impossible-to-balance economies.
**Step 1: Debug the structure of the value chain**
At the most fundamental level, a value chain is a connected series of economic actions. If the links in the chain don’t connect to one another at a structural level, the chain fails. This is super useful!
- We can look at the structure of any specific chain and quickly identify structural errors without taking into account the massive complexity of the whole economy.
- We can often further focus on a single node in a single chain to identify the issue with great specificity.
Errors at the structure level are great to catch early since they often result in economies that are impossible to balance. Defining and debugging the structure of your chains is a wonderful pass on any economy design task.
**Issue: Break in the value chain**
The most common root issue is that there is no subsequent link in the value chain! Most actions in games quickly get mastered, chunked up and turned into a rote task. If there isn’t a reason to do the action, it becomes as meaningless as a disconnected doorbell.
**Solution**
- Write out the value chain for this action.
- Make sure there’s a consistent chain of nodes all the way to an anchor.
**Issue: Lack of sufficiently compelling anchor**
Identifying compelling anchors is a rarer skill. So many designers just leave out this step entirely. However, the game then falls flat and they don’t know why.
**Solution**
- Your intrinsically rewarding anchors are often very related to your game’s core pillars or promises.
- Do the exercise of asking what players really want out of your game in terms of need fulfillment or core motivations.
- See if you can have your value chains directly contribute to these ideas. Your game will become stronger. You’ll also gain a culling device for eliminating features that don’t serve the pillars of your game.
**Issue: Visibility on the chain of cause and effect necessary to reach the anchor**
In games, players engage in interaction loops that teach skills on how to manipulate the world. Interaction **loops and arcs** (also known as skill atoms or learning loops) are the fundamental iterative sequence of modeling, deciding, acting, processing and responding that occurs within any computer mediated interaction. It is the heart of any interaction design.
Here’s how interaction loops map onto value chains
- Each interaction loop directly corresponds to the **action element inside a node of the value chain**. For example, there is an interaction loop about learning to harvest leaves. And that maps onto the value chain node about harvesting leaves and getting sticks.
- Exercising an interaction loop yields **emotional reactions**. There’s evocative stimuli (ooh, pretty jewels go pop!), mastery, autonomy and more.
- A player exercises an interaction and learns **cause and effect**. A chunked skill always results in a lesson, or cognitive tool, for how an interaction can manipulate the game. For example, players learn that if they harvest a leaf pile, they’ll get sticks.
- The player can then use their newly acquired tool to **pursue goals**. This corresponds to a subsequent node in the value chain. For example, if a player wants to build a decoration, they now know they need a stick.
Interaction loops are recursive in nature and occur at pretty much every level of gameplay. I’ve written a lot about them over the years and they are essential to almost every part of my design practice. But that’s far too much to cover in this relatively small essay that I’m desperately trying to keep focused on game economies. If you want more information on interaction loops, check out this presentation: Interaction Loops – Public
The important part is there are many ways a specific interaction loop can go wrong. The interaction loop may have the wrong affordances. The game could provide poor feedback to the player. The player might not have learned foundational skills. Etc, etc.
This is also a deep topic. For more information on how to diagnose issues with interaction loops and skill chains see: Building Tight Game Systems of Cause and Effect
From the perspective of analyzing value chains, you should know that a failure inside a single node of the chain can destroy the value of all subsequent nodes along the chain. Knowing these dependencies can help you backtrack and find the root causes. If you can track a big economic issue down to a single interaction inside a single node, you can make far more targeted changes.
**Issue: Visibility of the anchor**
You may have a strong anchor for a value chain, but only long term players end up figuring it out. And new players, because they don’t see how the game fulfills their needs, decide to leave early.
**Solution**
- Identify your value anchors and tell them to the player at the very start of the game. This is your **player promise**.
- They won’t be able to experience the satisfaction of the anchor immediately, but they’ll know what they are working towards. This should give them long-term goals and perspective on the long-term payoff of current tasks.
- The player promise can be couched using a **narrative frame**. For example, a need for completion and accomplishment is often couched as ‘beat the game’. A need for dominance and mastery is often couched as ‘beat the final boss’. These simple frames help contextualize the abstract psychology of an anchor as a familiar concept. This can provide enough visibility on an anchor to justify earlier actions.
**Issue: Weak player motivation associated with the anchor**
Motivations and their associated narrative frames are not universal! Many players don’t care about dominance or mastery. In our life-sim, Cozy Grove, players were actively repulsed when mastery elements were experimentally added. If you present the wrong audience a game about beating the final boss, they will leave immediately because their true needs are not being met.
**Solution**
- Talk to target players. Tell them your player promises. See if they are excited! If you are a new developer, pound into your head that there is no universal gamer profile. Nor is there a game that is perfect for everyone. (You’ve been lied to if you believe this.)
- If they aren’t excited, you need to either find a different set of player promises or a different target audience.
- Don’t be afraid to workshop your player promises until you find a strong audience fit. A mismatch here can cause your game to fail before you even start.
**Step 2: Identify types of sources and sinks**
In more freeform descriptions of internal economies, there are innumerable ways of adding resources and extracting resources from the economy. Once you start including feedback loops, pools, conditionals on actions and other attributes of an internal economy, you might as well have written a full Turing complete simulation. Such a system is difficult to explain, difficult to reason about, and difficult to balance. (Check out Machinations.io if you are interested in exploring what these simulations can look like. It is a wonderful tool.)
In my personal practice building game economies, I’ve hit upon a relatively robust simplification where I categorize sources and sinks into a few common categories. There are certainly edge cases that these types will not cover. However, by restricting your economy design to well-defined and easily manipulated components, you make balancing far easier.
In this section we’ll talk about how to approximate complex economic structures with various curves.
- **The 5 major types of sources**: Capped, Trickle, Grind, Investment and Random
- **The 4 major types of sinks**: Fixed, Repeatable, Exponential, Competitive
- **The 3 big balancing challenges**: Scarcity, Abundance, Variability
**Source Type: Capped (Constant)**
In this type of source, there’s a fixed amount of the resource that comes into the game via some action node (or nodes).
- Player completes an action. I sometimes use the metaphor of ‘turning a crank’ where the player needs to execute a full interaction loop of: mental model -> decision -> action -> rules processing -> feedback -> updated mental model and resources.
- The **total amount** that comes from the action is **fixed**.
- Executing the action again (if even possible) does not provide more of the resource.
There are several variations on capped sources (for example, a one-time quest reward, or a chest placed at a fixed location in the world).
Capped sources are one of the most common sources, especially in single player games with a fixed completion point. They tend to be used in easy to control systems, but can be a little brittle. We’ll get into that more when we discuss numerical balancing.
**Source Type: Trickle (Linear)**
In this type of source, there’s a fixed **rate **of a given resource coming into the world during a given time period.
- Again, like with capped sources, a trickle resource delivers resources whenever an action node is completed.
- However, unlike capped sources, you tend to get a consistent amount every single time the action is repeated. Forever.
**Accumulation challenge with trickle sources**: On larger time scales, you can accumulate an infinite amount of that resource. Say you earn 10 gold every day for signing into the game. After 10 days, you have 100 gold. After 100 days, you have 1000 gold. After 2 years, you have 7300 gold.
When players have an excess of a resource, they are less economically motivated to engage with the production nodes of its value chain. Though they may still perform related actions for the intrinsic joy of it, the marginal value of gaining an additional resource is low.
We’ll keep seeing excess accumulations show up as one of the failings of an imbalanced economy.
**Variation: Limited actions per time period**: In Animal Crossing, there are 6 rocks that spawn in the world and can be mined once a day. This is still a rate limited source, but the limit is placed on the actions the player can perform, not the amount of resources produced.
**Variation: Capped pools fed by a trickle source**: A common type of trickle resource is energy in a F2P mobile title. Energy recharges each day and can then be spent on a limited number of actions. Some elements of this pattern:

- The action of the energy production node is simply ‘waiting’. Time passes and you automatically get more energy. (You can pay, but that turns this into an investment source, which we’ll cover in detail below.)
- There’s a **capped pool**, which is a pool that holds the energy resource. It is capped in that it only holds some maximum amount. After reaching the cap, any additional energy is lost.
Capped pools are one partial solution to the accumulation challenge. In our gold example above, imagine that gold feeds into a treasure chest that can only handle 100 gold. If you wait 10 days, you’ll have 100 gold. If you wait two years, you’ll still have 100 gold.
Capped pools are unfortunately not a complete solution. Someone who diligently empties the pool every day still will be able to spend all 7300 gold over two years. So you still need a mechanism for dealing with excess.
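Here’s a minimal sketch of this pattern (all numbers hypothetical, in Python): a trickle source feeding a capped pool. The cap limits hoarding between sessions, but a diligent spender still extracts the full lifetime trickle.

```python
# Toy model: a trickle source feeding a capped pool.
# All numbers are hypothetical, for illustration only.
TRICKLE_PER_DAY = 10   # gold granted per day
POOL_CAP = 100         # anything above this cap is lost

def simulate(days: int, spend_per_day: int) -> tuple[int, int]:
    """Returns (gold left in the pool, total gold ever spent)."""
    pool, spent = 0, 0
    for _ in range(days):
        pool = min(pool + TRICKLE_PER_DAY, POOL_CAP)  # overflow discarded
        take = min(pool, spend_per_day)
        pool -= take
        spent += take
    return pool, spent

print(simulate(730, spend_per_day=0))   # (100, 0): the hoarder is capped
print(simulate(730, spend_per_day=10))  # (0, 7300): the diligent spender drains it all
```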
**Source Type: Grind (Linear)**
A grind source is one where players can spend near unlimited external resources such as time or money to increase the amount of a given resource. Again, you’ve got an action the player performs on a node that generates resources. But they can **grind **that action by performing it as many times as they desire.
Though on the surface this looks a lot like a trickle or capped source, from a balancing perspective it is very different.
- The total amount is mostly uncapped. It is limited only by how much a player wants to grind overall.
- The rate is also mostly uncapped. It is limited by how much a player wants to grind in a day.
The most common example is that the player invests more time by repetitively performing an action again and again. Though time is limited in reality, in practice we often balance our games by assuming a certain moderate level of engagement. And someone who plays 14 hours a day, 7 days a week can grind out surprising amounts of a resource.
**Variability Challenge**: The big problem with this source is that it is highly variable, which makes it hard to balance. A player might not grind at all. Or they could grind 18 hours a day for 300 days. In one case, you’ve got scarcity. In another case, you’ve got overabundance. Both players will complain that the economy is poorly balanced.
The pattern of play varies how much a grind source produces. In the chart above, we see a bump for player 1 during the weekend. They may experience a huge glut of resources as a result.
It is often good to convert this into a capped or trickle source. Or pair it with an exponential sink.
**Source Type: Investment (Exponential)**
A common structure in internal economies is the **positive feedback loop**.
- Player does an action
- This gives them resources.
- But these resources ‘feed back’ into the original action. They can **invest** the resources to do more of the action.
- Which in turn gives them even more resources that let them do the action even more.
Positive feedback loops result in hard to balance economies.
- Early on in the investment cycle, these sources produce small quantities of a resource.
- Later on, positive feedback produces exponential growth of a resource.
- But this exponential growth only happens if the feedback loop is being actively exploited by a smart player.
- So we end up with **scarcity** early on and then are hit with **abundance** and **variability** later. It can be a huge pain.
An example of an investment resource might be fruit trees in Animal Crossing. When you start out, you feel great harvesting a single fruit tree. But each harvested fruit can be planted to grow yet more trees, feeding the loop.
**Simplification: Treat positive feedback loops as exponential sources**: For years, I’ve been relying on a straightforward simplification: I treat positive feedback loops as exponential sources. I **design defensively** and assume the worst case scenario where feedback loops are going to get out of control for some players. (A minimal sketch follows the list below.)
This cartoon model of a positive feedback loop has several benefits
- Instead of dealing with complex, tricky to communicate diagrams that chart out the exact structure of a feedback loop, we can just say a particular node is an investment source. This lets us continue to deal with the economy using targeted value chains.
- When we get to balancing sources and sinks, it unlocks clear numerical tactics for sopping up abundant resources. Instead of some mysterious dynamic system of emergence, the source becomes just another common type of math curve.
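As a sketch of the defensive, worst-case reading (hypothetical numbers, in Python): assume a max-efficiency player who reinvests everything, and the reinvestment loop’s output is simply an exponential curve.

```python
# Toy reinvestment loop: fruit can be spent on more trees, and each
# tree yields fruit every day. All numbers are hypothetical.
FRUIT_PER_TREE = 3   # fruit produced per tree per day
TREE_COST = 6        # fruit needed to plant one more tree

def worst_case_trees(days: int) -> list[int]:
    """Assume a max-efficiency player who reinvests everything."""
    trees, fruit, history = 1, 0, []
    for _ in range(days):
        fruit += trees * FRUIT_PER_TREE
        new_trees, fruit = divmod(fruit, TREE_COST)
        trees += new_trees
        history.append(trees)
    return history

print(worst_case_trees(10))  # [1, 2, 3, 4, 6, 9, 14, 21, 31, 47]
# Growth is roughly 1.5x per day (1 + FRUIT_PER_TREE / TREE_COST), so
# for balancing purposes we can treat the node as a plain exponential source.
```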
**Variation – Increasing the starting baseline**: We can help eliminate scarcity during the early stages of an investment source by adding a trickle source to fall back on.
Animal Crossing’s fruit is not a pure investment source. Instead, they start you out with a set of ‘wild trees’ that let you harvest at least some baseline quantity of fruit each day even if you haven’t engaged with the investment loops.
**Source Type: Random (Noise function)**
Some sources come with high variability. The most common of these in video games is a loot drop table, but almost any game that uses dice to determine resource rewards is using a random source.
- Random sources have a distribution of outputs: They can be a normal curve, exponential distribution, pure random noise or some other histogram. These will usually have some sort of central tendency where on average you can have a typical result.
- Random sources are really just a noise function applied on top of one of the other source types. So you can have capped, trickle, grind or investment sources with randomness.
**Simplification: Use the mean of normal distributions and convert to a less random source type:** I tend to work with average outcomes when looking at how random sources contribute to the economy long term. This turns a random source into one of the other sources (capped, trickle, grind, investment).
This simplification doesn’t work for very short term variability balancing, but can be highly effective for understanding scarcity and abundance in longer lasting games.
**Variation: Constrained randomness:** You almost never want pure randomness in an economy. Among your million players, there will be that one person who rolls 1s for most of their game and attains 1% of the progress of the typical player. The system isn’t broken. Sometimes true randomness results in crap outcomes.
If possible, use systems like ‘drawing-without-replacement’ (decks of cards with a discard pile work this way) or various pity systems that guarantee a drop after X draws. These ensure that the outlying experiences aren’t substantially different from the average experience.
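As one hedged sketch of a pity system (the drop rate and threshold below are invented, in Python): a counter guarantees the rare drop after enough failed draws, bounding the worst case.

```python
import random

# Hypothetical numbers: 5% base drop rate, guaranteed on the 30th draw.
BASE_RATE = 0.05
PITY_LIMIT = 30

def draw(misses: int) -> tuple[bool, int]:
    """Returns (dropped?, updated miss counter)."""
    if misses + 1 >= PITY_LIMIT or random.random() < BASE_RATE:
        return True, 0           # drop! reset the pity counter
    return False, misses + 1     # miss; edge closer to the guarantee

# No player ever goes 30 draws dry, so the outlier experience
# stays close to the average experience.
misses = 0
for _ in range(100):
    dropped, misses = draw(misses)
```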
Again, we aren’t interested in techniques that allow you to balance any system. We are interested in building systems that are easy to balance.
**Sink Type: Fixed (Constant)**
Now we get into **sinks**. These extract resources from the game and thus limit accumulation and pooling. You’ll see that these fit into categories very similar to sources (constant, linear, exponential)
The simplest sink is the **fixed sink**. When an action on a node in the value chain occurs, a fixed amount of resource is removed from the game. This is not repeatable. This is the mirror of a capped source.
There are lots of examples of fixed sinks
- A powerup you can purchase once. This takes a fixed amount of currency out of the economy.
- The one time cost in XP to earn the next level in an RPG. This takes a fixed amount of XP out of the economy.
- A boss you can beat a single time and in the process it uses up healing or mana potions.
Most fixed-length games make heavy use of fixed sinks. You put them all in a spreadsheet and tally them up. This tells you how much you can give out from capped sources.
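The spreadsheet version of this is just two sums. A minimal sketch with hypothetical line items (Python):

```python
# Hypothetical content budget for a fixed-length game.
capped_sources = {"tutorial reward": 50, "forest chest": 120, "boss drop": 200}
fixed_sinks = {"sword upgrade": 150, "bridge repair": 100, "final door": 100}

total_in = sum(capped_sources.values())   # 370
total_out = sum(fixed_sinks.values())     # 350

# We generally want sinks at or slightly above sources. Here the budget
# is slightly loose: 20 gold pools up by the end of the game.
print(f"sources: {total_in}, sinks: {total_out}, surplus: {total_in - total_out}")
```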
**Sink Type: Repeatable (Linear)**
The mirror of a trickle source is a **repeatable sink**. Every time the action for a node is performed, a fixed amount is removed. However, unlike a fixed sink, the action can be repeated multiple times.
Some common examples of repeatable sinks
- Damage being done to someone’s health bar. Each time the attack repeats, the same amount of health is lost.
- The crafting cost for a crafting recipe that can be crafted multiple times.
- A lamp in Valheim you need to regularly refuel or else it goes out.
- The cost to buy a consumable item in the store that replenishes each day.
- The cost to purchase a tree in Animal Crossing.
**Why not distinguish trickle sinks and grind sinks?** You’ll notice that there’s no mirror sink to the grind source. You *can* absolutely have a trickle sink that only allows a certain amount of some resource to be destroyed in a given time period. Or a grind sink where players must grind to remove more of a resource.
In practice however, these distinctions tend not to matter too much. Repeatable sinks are naturally limited by the supply of a resource. So we don’t get the specific runaway cases like ‘grinding’ that need special attention like we do with sources.
**Sink Type: Exponential (Exponential)**
The mirror of the investment source is the **exponential sink**. In this sink, each incremental (linear) increase in output requires a geometrically increasing input quantity. This means there’s always room to sop up more. (See the sketch after the examples below.)
Some examples of exponential sinks
- Each additional level for an RPG character costs exponentially more than the last level.
- In an idle game, each upgrade to an idle resource generator costs exponentially more than the previous upgrade.
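A minimal sketch of the RPG-leveling case (base cost and growth rate are hypothetical, in Python): each level costs geometrically more, so cumulative sink capacity keeps pace with fast-growing sources.

```python
BASE_COST = 100   # XP to reach level 2 (hypothetical)
GROWTH = 1.4      # each level costs 40% more than the last (hypothetical)

def level_cost(level: int) -> int:
    """XP consumed going from (level - 1) to level."""
    return round(BASE_COST * GROWTH ** (level - 2))

costs = [level_cost(lv) for lv in range(2, 12)]
print(costs)       # [100, 140, 196, 274, 384, 538, 753, 1054, 1476, 2066]
print(sum(costs))  # cumulative XP this sink can absorb over ten level-ups
```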
**Sink Type: Competitive (Adaptive)**
There’s a specific type of sink that doesn’t have a clear mirror source. A **competitive sink** is a form of adaptive sink. In a competition between multiple players, whoever puts in the largest amount of a resource gets the largest prize.
- **Pro**: The nice thing about this sort of sink is there’s no top end, so it can suck up lots of resources.
- **Con**: However, it can only be paired with competitive motivational anchors. And only a tiny percentage of the population is motivated by competition (mostly young males). So there are limited types of games where you can use this.
Examples of competitive sinks
- Guild vs Guild competition in a game like Clash of Clans
- Armies battling in an RTS game.
There are lots of variations of this type of sink
- **Races**: Players try to reach a specific goal. Whoever reaches it first wins. There can be vast and expensive chains around training and other improvements to enhance your ability to get ahead of others.
- **Leaderboards**: There are more than two players competing and the positions are ranked relative to one another. So someone comes in 1st place, 2nd place, 3rd place, etc. Often the rank is measured within a league or session window.
**Mixing and matching sinks**
All these types can also be mixed and matched. Idle games use leveling as a repeatable sink whose cost increments exponentially each time you level. Leveling can also be capped at a fixed number of levels. So short term, a sink is exponential, but long term it is fixed.
Like sources, it helps to classify a sink within a given time window.
**Step 3: Match the power of sources and sinks**
You can essentially classify these sources and sinks according to their power:

- **Constant**: These are your capped sources and fixed sinks. This is the lowest power. (x^0)
- **Linear**: Trickle sources, grind sources, repeatable sinks. (x^1)
- **Exponential**: Investment sources, exponential sinks. This is the highest power. (k^x)
- **Adaptive**: Competitive sinks (A>B). These are special cases.
When balancing a value chain, it is immensely helpful to have lower power sources feed into equal or higher power sinks.
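A sketch of why this pairing rule matters (all curves hypothetical, in Python): an exponential source swamps a linear sink, while a slightly stronger exponential sink keeps the same source taut.

```python
# Compare cumulative production against cumulative sink capacity.
# All curves are hypothetical, stated per day of play.
def cumulative(f, days: int) -> float:
    return sum(f(d) for d in range(1, days + 1))

invest = lambda d: 10 * 1.2 ** d   # exponential (investment) source
repeat = lambda d: 12              # linear (repeatable) sink
expo   = lambda d: 12 * 1.25 ** d  # exponential sink, slightly higher power

for days in (10, 50, 100):
    surplus_vs_repeat = cumulative(invest, days) - cumulative(repeat, days)
    surplus_vs_expo = cumulative(invest, days) - cumulative(expo, days)
    # vs. the linear sink: surplus explodes (the sink is swamped).
    # vs. the exponential sink: surplus goes negative (the chain stays taut).
    print(days, round(surplus_vs_repeat), round(surplus_vs_expo))
```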
**The challenge of overabundance**
A slight amount of periodic scarcity or feelings of abundance can provide necessary emotional variation to your experience. However, if you don’t balance your sink types, then as time goes on, a resource starts to accumulate in a pool somewhere and can’t be spent. This creates **overabundance**.
When there is no chance of scarcity or when there’s no use for those pooled resources, people stop caring.
- It may be trivial to satisfy or even exhaust their motivational anchors. If you are motivated by status and you can instantly buy all the high status clothing, why bother continuing to exercise that value chain?
- There is no longer a pull on the earlier nodes in your value chain that produce that resource. If you have all the sticks you’ll ever need, why bother ever harvesting another dirt pile?
For example
- In CRPGs where you can sell things to vendors, you end up with millions of gold. But you have max level equipment. So what is the point?
- In MMOs, players eventually have access to a 1000 useless +10 swords. This is known as mudflation and creeps up on games over the course of years. Why keep killing rats?
Of course not all value chains need to last forever.
- If you’ve created a fixed sink for a given value chain and thus have planned its eventual obsolescence, then it may be fine to let resources accumulate.
- Or the player may enjoy messing about with a large amount of resources in a creative mode. They find this intrinsically rewarding and don’t need the careful scaffolding of motivations that a taut value chain provides.
But intrinsic motivation is something that most people need to slowly work their way towards. They benefit from practicing an activity for long enough to know they enjoy it and eventually understanding how it serves their needs. This onboarding via explicit affordances and feedback seems particularly important when a player is operating within the artificial cartoon value structure of a video game.
So reducing overabundance, at least in the early portion of a game and especially in games without creative anchors, is usually a good starting point for balancing your economy.
**The good news here is that big desirable sinks make it trivial to balance most faucet and drain economies.** They create a need for resources which in turn causes the players to engage in actions all the way down the value chain.
**Balancing a fixed-length game**
The easiest example is the fixed-length game. These are ones where you can map out the economy for the entire play experience, from start to some finite completion. These are good learning projects for new economy designers.
- **List your sources in a spreadsheet for each node of the chain**: If you have a fixed-length game composed of a series of capped sources, make a spreadsheet and add up the resources players will encounter over the course of the game. You can also sum up trickle sources since you know when the game will end.
- **List your sinks for each node of the chain**: Now make sure you have a set of fixed sinks that consume those resources. Include repeatable sinks as well. Again, add them up!
- **Golden path modeling**: For trickle sources and repeatable sinks you’ll need to decide how many times an ideal player interacts with each. This ‘golden path’ won’t be followed by every player, but it helps you approximate how much they’ll earn and spend (see the sketch below).
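A minimal golden-path tally (all counts hypothetical, in Python): estimate how many times the ideal player exercises each repeatable node, then sum both sides.

```python
# Hypothetical golden path for a short, fixed-length campaign.
# Each entry: (node, amount per use, expected uses on the golden path)
sources = [("quest rewards", 50, 12), ("daily herb patch", 5, 30)]
sinks   = [("potion crafting", 20, 25), ("gear upgrades", 75, 4)]

earned = sum(amount * uses for _, amount, uses in sources)          # 750
spend_capacity = sum(amount * uses for _, amount, uses in sinks)    # 800

# Sink capacity slightly exceeds earnings, keeping the chain taut.
print(f"earned {earned}, spendable {spend_capacity}, slack {earned - spend_capacity}")
```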
When picking which sources and sinks to pair, I use the following general rules of thumb:
- Capped sources can be paired with fixed sinks.
- Trickle sources can be paired with repeatable sinks.
- For a fixed-length game, investment sources effectively turn into capped or trickle sources, so you can use the previous two rules. They can also be paired with exponential or competitive sinks; these are higher power and will sop up your sources, but they can be overkill, introducing mechanical complexity you may not need.
**Balancing an ongoing game**
For an ongoing game, you again set up your per chain spreadsheet of sources and sinks.
- Since time keeps going, you don’t know how long a player will play.
- One trick is to think in terms of balancing within a period of time. How do flows add up within a week, a month, or a season of play? If you can find that repeatable ‘long session’, you can model it out and find how sources and sinks balance (sketched below).
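A sketch of the time-window trick (hypothetical weekly numbers, in Python): if the net flow over your repeating window is positive, resources accumulate week over week.

```python
# Net flow over one repeating weekly window (hypothetical numbers).
weekly_sources = {"daily login": 7 * 10, "weekly quest": 100}
weekly_sinks   = {"consumables": 7 * 15, "rotating shop": 50}

net_per_week = sum(weekly_sources.values()) - sum(weekly_sinks.values())
print(net_per_week)       # +15 per week: a slow leak toward overabundance...
print(52 * net_per_week)  # ...that becomes 780 pooled gold after a year
```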
When picking which sources and sinks to pair, I use the following general rules of thumb:
- Capped sources can be paired with fixed sinks. These usually show up only during fixed-length sub-games within the live game. Examples include tutorials and limited-time events. You may want to avoid fixed sinks outside of these situations. Time is infinite and long-term players will overwhelm fixed sinks.
- Trickle sources should be paired with a strong repeatable, exponential or competitive sink.
- A grind or investment source will always swamp a fixed or repeatable sink. Instead pair them with exponential or competitive sinks.
You almost never want a true exponential source. Long term, these make your life painful. They are hard to model mentally, and small mistakes result in extreme resource pooling.
- Try capping your investment sources. Any exponential power upgrades in an RPG are usually controlled with a hard level cap. Another common technique is to limit the number of investment slots you can use. Both these options turn an investment source into a more manageable trickle source.
- Idle games pair an exponential source with a slightly higher power exponential sink. Even these structures don’t last forever since exponential investment waits start becoming boring. So they rely on hard resets via ascension mechanisms to escape the trap of their exponential sources.
**Lean towards taut chains**
When balancing sources and sinks, you typically want the sinks to be a little larger than the sources.
- Not too much or that results in dead points in the spend where players suffer from painful scarcity.
- Not too little because then you get pooling of resources and lack of pull on activity nodes within your value chain. The chain should always be pulled somewhat taut.
If you’ve done your work matching source and sink power and you’ve isolated your value chains (see multi-chain architecture below), the exact balance numbers are less important than you might imagine.
**The emotions of scarcity and abundance**
Once you get your general economy balance under control, there’s a huge amount of emotion you can extract from relatively minor variations in tautness.
- **Scarcity**: When players feel scarcity, they’ll be highly motivated to search out and harvest scarce resources. They’ll experience anxiety and a tendency to hoard.
- **Variability**: When players feel high variation in availability, they experience similar emotions.
- **Abundance**: However, if you give them abundance, they’ll momentarily feel a sense of freedom and can invest in non-scarcity-driven behaviors. Note this is different from the extreme overabundance mentioned above when we discussed imbalanced economies.
- **Hedonic adaptation**: However, if they experience abundance for too long, your value chains grow slack. Players stop finding meaning in earlier nodes and just rely on their pooled resources. The joy of abundance returns to a baseline.
The best games create an ebb and flow between scarcity and abundance within a narrowly controlled band of economic outcomes. A well balanced economy is a tool for driving rich player experiences.
You can play with these much like playing notes or pacing on a musical instrument. For example, in Cozy Grove, we made harvesting very reliable. Most sources were capped daily so they acted as slow trickle sources. And in general, we leaned towards abundance. Not always. Various events or quests would suck up resources from the player’s hoarded supplies so abundance wasn’t 100% reliable. This helped combat hedonic adaptation. The result is a very low-stress, cozy economic pacing.
**Note: Mimicking sink *power* by increasing sink *magnitude***
In a fixed length game, you can always approximate a higher power sink with a large magnitude low power sink.
For example, I have a game that lasts 10 turns. It has a trickle source of gold that produces 5 gold per turn.
- **Option A**: I could pair the trickle source with a repeatable gold sink that lets me buy 1 victory point for every 10 gold. By the end of the game I’ll be able to purchase 5 victory points with no leftover gold.
- **Option B**: However, I could also create a single fixed sink that lets me purchase 5 victory points for 50 gold. I’ve taken a simple fixed sink and just increased the magnitude enough that it sops up my gold income.
On the surface, these look like the same end result. But they aren’t the same experience.
- Large sinks can feel grindy since players have to put in a lot of effort before they can spend. There’s a music-like pacing to how players interact with economies. Beware of large gaps where players lose track of the tune.
- If you end up changing the length of your game, you need to immediately go back in and rebalance all your sinks (or sources if you want to approach it from earlier in the chain.) In general, perfectly pairing sources and sinks of the same power that are balanced only by magnitude adds brittleness to your economic architecture.
- In a long-term ongoing game, you cannot mimic a higher power sink with a much higher magnitude sink. In the long run, a higher power source will always swamp a lower power, yet high magnitude sink. Just the way the math works (feel free to graph it!)
In general, I try to avoid replacing sink power with sink magnitude. It is a bad habit to get into.
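Here is the “feel free to graph it” check as a sketch (hypothetical numbers, in Python): however large the fixed sink, a higher power (here, linear) source eventually crosses it.

```python
# A linear (trickle) source vs. a one-time, high-magnitude fixed sink.
TRICKLE_PER_DAY = 10
BIG_FIXED_SINK = 5000  # however large you set this...

day, earned = 0, 0
while earned <= BIG_FIXED_SINK:
    day += 1
    earned += TRICKLE_PER_DAY

print(day)  # ...there is always a crossover day (here, day 501)
# The same argument holds one power up: an exponential source eventually
# swamps any linear sink. Magnitude cannot buy power.
```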
**Issue: Content treadmills**
Long term, heavy use of capped sources and sinks lead to content treadmills. A content treadmill is when you need repeated injections of new content to keep your Game-as-a-service (GaaS) running.
From an economic perspective, in order to extend the game you need to add more sources and more matching sinks. Each of these requires a fixed amount of content. It can be better to invest in repeatable sinks.
**Issue: Marginal value erodes over time with repeated actions**
Even trickle sources with strong sinks can wear out. Imagine you get one apple a day and you eat one apple a day. Trickle source, repeatable sink. What is the value of the apple to the player on the second day?
In a simple model, the player has zero memory for the previous day. So they should look at the apple on the second day, realize they are hungry and be absolutely delighted to get a new apple.
**Burnout**: In practice, each new apple provides a decreasing psychological benefit. Players slowly get bored with yet another apple. Repetition matters in experiential goods. More of the same, even though it provides the same functional benefit, will provide less novelty or mastery benefit.
**Leverage**: High-leverage content consists of actions within your value chain that can be repeated many times *without* burnout. Most actions can only be repeated a small number of times before players get bored. The term ‘leverage’ comes from content that yields a high ratio of gameplay relative to the cost of producing that content.
For example, the classic leverage on exploiting a weak point in a Nintendo boss is ‘three’. The first time you learn their weakness. The second time you practice exploiting the weakness. And the last time you demonstrate your mastery. But that action starts to get boring if you are asked to repeat it the fourth time because it is usually just rote pattern execution.
For more information on how to build high leverage content architectures see Designing Game Content Architectures.
**Solution – content recharging**: The good news is that humans forget. An apple every single day might be low value. But if you let people forget about apples, and then give them an apple two months from now, that apple might again have high value. You can recharge content and regain some degree of leverage at the rate it takes for players to forget about that content.
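As a toy model only (every constant below is invented, in Python): novelty decays with each repeat and recovers as players forget, which is why spacing content out restores some of its value.

```python
import math

# Toy novelty model: value decays with repetition, recovers with time away.
BASE_VALUE = 1.0
DECAY_PER_USE = 0.5   # each repeat halves the remaining novelty
RECHARGE_DAYS = 60    # time constant for forgetting

def perceived_value(uses: int, days_since_last: float) -> float:
    worn = BASE_VALUE * DECAY_PER_USE ** uses
    recovered = (BASE_VALUE - worn) * (1 - math.exp(-days_since_last / RECHARGE_DAYS))
    return worn + recovered

print(perceived_value(5, 0))   # ~0.03: the daily apple is nearly worthless
print(perceived_value(5, 60))  # ~0.64: two months later it sparks joy again
```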
# CHAPTER 3 – ARCHITECTURE OF MULTIPLE VALUE CHAINS
Most games have multiple value chains. If you were to lay out all the value chains on a single piece of paper, you’d find that certain nodes are present in multiple chains. This creates a crisscrossing spaghetti of resource flows that is the complete **value network** for your game.
It is helpful to organize this ball of spaghetti in a fashion that is easy to understand and manipulate. Patterns for organizing your value network are your **value architecture**.
There are an infinite number of value architectures out there. But we want to focus on sorting our chains in ways that best satisfy the following goals:

- **Independence**: Each individual chain is easy to independently balance so that it doesn’t accidentally unbalance other chains.
- **Modularity**: In GaaS, you want the option to easily retire old chains and add new ones as the game ages.
**The most common architecture: Parallel value chains**
The safest structure is to keep your value chains parallel to one another so they don’t overlap. Each set of action nodes is served by a set of unique resources and the player doesn’t need to make trade-offs between chains. You can still feed multiple chains into the same motivational anchor as long as the anchor is multi-dimensional enough to be better satisfied by a little variety.
**Benefits**
This mostly satisfies our goals
- This lets you balance each chain in isolation.
- It is easy to add a new value chain for an event and then remove it when the event is over. Or if there’s a piece of content that players are burning out on, we can retire it without upsetting the balance of the rest of the game.
**Issues**
- **Lots of bespoke resources**: Each value chain needs its own resources that are not used in other value chains. As parallel chains multiply, so do resources, and you’ll need a reasonable inventory system to track them all. This can add a lot of cognitive load for new players.
- **Fewer emergent interactions between systems**: Since economic systems are isolated, you get fewer ‘interesting’ feedback loops. This is an intended outcome of the architecture, but worth acknowledging what you give up by adopting it.
Most long term GaaS evolve towards some flavor of architecture that contains multiple parallel value chains. It shows up again and again in various MMOs and F2P games. Players may not enjoy the explosion of currencies and resources that result, but they serve a real architectural need.
**Architecture for applying buffs**
A very common architectural structure folks build with value chains is to create a buff or boost that increases the efficiency or effectiveness of some other node in a different action chain.
In Jesse Schell’s terminology, these often take the form of ‘virtual skill’. A player purchases a +10 sword of smiting that boosts the amount of damage they can do to an enemy, thus allowing them to kill it faster.
**Efficiency anchor**: New designers often think that this virtual skill primarily serves a skill mastery anchor, like getting better at fighting in real life.
In practice, it serves as an efficiency motivation. (Or a power fantasy. The exact anchor depends in large part upon theming and audience) It has little to do with player learning and everything to do with a number being modified. The action in the economic node becomes cheaper to perform.
There are lots of possible efficiency boosts you can trivially build into your value chains. Basically any form of cost (time, money, resources, complexity) can be reduced by a boost.
**Adding sinks to boosts**: And you can add additional sinks into the Apply Boost node. For example, you might get a spell that increases damage output. But it requires mana to cast. Remember, every node is an opportunity to add another sink if you need one.
This is a very flexible and useful pattern that once you understand it, you’ll start seeing it everywhere.
**Issue: Multiple undifferentiated inputs to node**
Sometimes however, you have to cross value chains. There are helpful and unhelpful ways of doing this.
Consider the following unhelpful scenario that is unfortunately baked into most RPG systems. Here we have multiple nodes producing an undifferentiated resource.
In this example, we have two value chains that merge into one.
- Value chain A: You can spend time and health killing Monster A
- Value chain B: Or you spend those same resources killing Monster B.
- In both cases, you get XP that you spend on leveling up. The value chain continues on after that, but we’ll just look at this snippet of the whole.
We see a few things happen in this design pattern when a long-term player groks the full value chain topology:

- First they realize they can make a **choice**. They can invest their time in killing either monster A **or** monster B.
- Next they realize that if both monster A and monster B are plentiful, their time is always limited. So for **efficient play**, they should focus on killing the monster that gives the most XP for time spent. Let’s assume that’s Monster A.
- As the player gains expertise, they’ll start to completely ignore Monster B. Even though it has book value, the marginal value comparison means that it is in practice valueless to the player.
This pattern has major implications on your economic design. In MMOs you may create 100s of enemies. Or hundreds of raids or quests. Yet, players will insist on playing only one or two. All that content you spend so much time and money developing is essentially wasted.
This structure is very difficult to balance since players will only engage with both paths if the two sources are perfectly equal. And they never are. Even in cases of mathematical equality, there are cultural, habitual or aesthetic factors that cause players to prefer one path over another. This architectural decision ends up invalidating big swathes of content.
**Solutions**
- **Cap each source**: If there are a limited number of times you can exercise Source A, and also a cap on Source B, then players need to engage with both in order to satisfy the subsequent node. This is the most common answer for single-player games. Here you have a fixed budget of content and can plan out exactly how much players should consume before they unlock the next elements.
- **Multiple currencies**: For more complex economies, it can be far more robust to use multiple differentiated input resources. The next section goes into more detail.
**Pattern: Multiple differentiated inputs to node**
Now let’s consider an alternative topology
Same as before, you spend time and health killing Monster A. But this time, you get a unique resource: Horns. And Monster B gives Gems. And in order to level up, you need both Horns AND Gems.
This setup has a very different set of player choices:

- The player must engage with both Monster A and Monster B to level up. If they only kill Monster A, they’ll lack Gems. If they only kill Monster B, they’ll lack Horns.
- The level-up node creates a strong **pull** on the earlier action nodes, giving these actions clear value.
Players can choose the order that they engage with Monster A or Monster B, but they cannot ignore them. If there’s substantial content associated with those earlier nodes, you guarantee that it will be seen as valuable and that players are incentivized to exercise it.
**Pattern: Overflow from one chain to another**
Suppose you want a player to pursue one value chain for a while and then switch over to a different value chain later in the game’s progression.
In this example
- Killing a monster gives gems
- Players can spend gems to level up.
- However leveling up is capped at level 10
- Once players finish leveling, they can pour excess gems into crafting decorations.
This overflow pattern is useful when you have fixed sinks. You set up cascading pools so that when one is filled, the excess can flow into others.
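A minimal sketch of cascading pools (the cap is hypothetical, in Python): gems fill the leveling pool first, and only the overflow reaches the decoration chain.

```python
# Cascading pools: fill leveling first, overflow into decorations.
LEVEL_POOL_CAP = 1000   # gems needed to hit the level cap (hypothetical)

def route_gems(gems: int, level_pool: int) -> tuple[int, int]:
    """Returns (updated level pool, gems overflowing to decorations)."""
    to_level = min(gems, LEVEL_POOL_CAP - level_pool)
    return level_pool + to_level, gems - to_level

level_pool, deco = 0, 0
for drop in [400, 400, 400]:            # three monster-killing sessions
    level_pool, overflow = route_gems(drop, level_pool)
    deco += overflow

print(level_pool, deco)  # 1000 200: leveling is done, the rest flows onward
```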
This can be a useful pattern as well if your game is serving multiple player motivations. Which is almost always the case, since any sufficiently large player population will contain multiple playstyles driven by multiple motivations.
- Say you have some players who love to decorate and others who like to progress.
- You want both to keep performing the core loop of killing monsters.
- So you use this structure to pull gems from the core activity, but then give players a choice on how they want to spend their hard-won resources. Each path is anchored on a different motivation.
**Pattern: Lock-and-key choices**
Earlier we covered how a single currency can lead to choices where players pick the most efficient path and ignore the rest. In a large, loosely controlled economy, this can cause major balance issues. However, there are more controlled variants where the player’s choice of how to spend resources is the most interesting part of the game.
The common elements of this pattern include
- **Key resource**: There’s a capped source producing a resource. This is the metaphorical ‘key’ in a ‘lock-and-key’ node-resource pair.
- **Lock node**: Gated nodes are unlocked with key resources.
- **Choice**: There are always more available lock nodes than key resources, so players need to make clear choices about which option to invest in.
- **Opportunity costs**: By selecting a node to invest in, you lose the option of gaining resources from the other lock nodes.
Common examples of this
- **Worker placement**: The player gains access to a very limited number of workers. Those workers may be assigned to limited jobs to produce other resources or buffs. Or other workers! Sometimes there is a cost to place the worker. Or a cost to remove the worker. But critically, there are never enough workers to fill all the possible production stations, so choices must be made.
- **Skill trees**: The player gains access to a very limited number of skill points. Those points are assigned to unlock skills in a predictable skill tree. This creates both a buff for the player and opens up the chance to unlock future skills further down the tree.
Designers have a lot of control using this value chain pattern. They can change the benefits of each lock node and balance them against one another. They can control how many choices are valid by altering the amount of key resources. Content is invalidated when players make a choice, but the amount and impact of that is up to the designer.
New designers often mistake **multiple undifferentiated inputs** as serving the same role as **lock-and-key choices**, but once you know the structure of the value chains, you can see they are quite different. Lock-and-key choices always ensure a strong pull (and a taut chain) while multiple undifferentiated inputs result in dangling chains that are left unexercised.
# CHAPTER 4 – ENDOGENOUS VALUE NETWORKS
*“According to the dictionary, one definition of endogenous is “caused by factors inside the organism or system.” Just so. A game’s structure creates its own meanings. The meaning grows out of the structure; it is caused by the structure; it is endogenous to the structure.”*
– Greg Costikyan, “I Have No Words & I Must Design”
This quote has stayed with me for almost two decades. Value chains are a method of formalizing this fundamental truth into a useful design tool. They start to get at the heart of how meaning is constructed within a game.
**Value modeled as value networks**
“Meaning” and “value” are vague terms that we need to define more clearly in order to design. Value chains model “value” in terms of common elements of a game (actions, resources) arranged in a network topology. This allows us to get far more explicit about what value we are designing into our game. We gain visible levers and knobs we can manipulate.
**Value networks are internally self-supporting**
Most elements in a game have value due to their **relative relationships** with other elements in the game.
- If you take away the other elements earlier or later in the value chain, the game loses meaning.
- Change the balance or nature of the relationship between elements, and the game loses its meaning.
**Games as artificial spaces**
Most game value networks are **artificial**. They are arbitrary and cut off from reality. This artificial space is often called the ‘magic circle’ within which gameplay exists. This artificiality provides such creative freedom! We are building **cartoon worlds** that don’t need to mimic the difficult-to-work-with structures found in **natural **economies.
For example, consider designing a giraffe refuge in the natural world:
- Does anyone even want a giraffe refuge? How are you going to pay for it?
- Then designers need to take into account years of law, history, logistical issues associated with limited physical space, connections to adjacent spaces, and whether or not your neighbor is allergic to giraffes.
- There is an immensity of constraints and unexpected feedback loops that are impossible to fully capture in any simplified model.
None of those rules apply in a game about building a giraffe refuge.
- We can set up artificial rules where giraffes are plentiful, land is plentiful and everyone loves giraffes.
- We can create grokkable linear value chains and eliminate undesired feedback loops.
- We can intentionally design an artificial world where it is easier to build playful giraffe-centric activities within.
**Ultimately a game’s magic circle is anchored in reality**
The “magic circle” is the conceptual boundary where a player opts into the value structure inside a game. Players opt into the magic circle of a game by saying “You know what? I know this virtual stick isn’t real. But I’m going to play along and act as if it has meaning.”
But in the end, we should never forget that the reason why the player participates in the game’s value network is because they are seeking real-world value. This is why every value chain ends with a motivational anchor. Personal needs fulfillment always pierces the boundary of the magical circle.
- **Unmet player needs**: Play is a seeking behavior. You have unmet needs, but you don’t know how to fill them. So you experiment in a safe fashion to understand your options. This last step is the definition of play.
- **Game makes player promise**: A new game makes a promise to the player, usually rooted in the meeting of some need. Diablo promises power and mastery. Animal Crossing promises a relaxing respite. World of Warcraft promises mastery and friendship. This is the hook that gets you sucked into a game.
- **Onboarding**: And there’s a grace period. Because the point of play is to wander about for a bit and figure out how to meet your needs. Even players know that need fulfillment can’t happen immediately. Mastery can take many hours. Social bonding can take weeks. Players need to build up the tools. They need to understand the path forward. So players willingly run through tutorials. They willingly follow the chain of quests. Onboarding runs on goodwill that their needs will eventually be met. This step introduces players to the early stages of the value chain.
- **Understanding the path towards fulfillment**: That goodwill runs out. As soon as the promise is made, a timer is ticking and the player is thinking in the back of their mind “How is this game going to fulfill its promise?” The job of the game is to paint that path. And demonstrate real progression towards it. If the game doesn’t help the player understand how all this (expensive) playful activity will ultimately fulfill a key motivational drive, they will stop playing. The game must connect the dots. This is making the value chain visible to the player.
- **Demonstrating need fulfillment**: In as short a timeframe as possible, the game should provide player experiences that fulfill the needs as they were promised. This is the end anchor of the value chain.
So all of our elaborate value scaffolding *does *need to serve the player’s needs in the end. Every cartoon, hyper-designed endogenous game system contains a connection to the real world. Because games are played by real humans with real human needs.
**CONCLUSION**
Value chains should give you a strong framework for planning and balancing your game economy. You’ll be able to pinpoint issues and communicate targeted balance fixes using a common language. The technique targets faucet-and-drain economy designs, but since this remains the dominant method used across most popular genres, you should be well equipped.
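To make the framework concrete, here is a minimal sketch of how a single linear value chain might be represented for an automated balance check. Everything in it (node names, rates, the three-sticks-per-fence recipe) is invented for illustration, not taken from any shipped game:

```python
# A minimal sketch (not from the article) of a linear value chain used
# for a faucet-vs-drain balance check. All names and rates are invented.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    kind: str          # "faucet", "transform", "drain", or "anchor"
    rate: float = 0.0  # expected units produced or consumed per minute

# Faucet -> transform -> drain, ending in a motivational anchor.
chain = [
    Node("shake_tree", "faucet", rate=6.0),      # sticks gained per minute
    Node("craft_fence", "transform", rate=2.0),  # fences made per minute
    Node("decorate_camp", "drain", rate=2.0),    # fences spent per minute
    Node("pride_in_camp", "anchor"),             # the real-world need it serves
]

STICKS_PER_FENCE = 3  # hypothetical recipe cost

def check_balance(chain: list[Node]) -> str:
    """Compare stick inflow from faucets against stick outflow at drains."""
    inflow = sum(n.rate for n in chain if n.kind == "faucet")
    outflow = sum(n.rate * STICKS_PER_FENCE for n in chain if n.kind == "drain")
    if inflow > outflow:
        return f"surplus: {inflow - outflow:.1f} sticks/min pile up (inflation)"
    if inflow < outflow:
        return f"deficit: {outflow - inflow:.1f} sticks/min short (starvation)"
    return "balanced"

print(check_balance(chain))  # -> balanced
```

Running it prints `balanced`; bump the faucet rate and the same check flags the surplus that would show up in-game as stick inflation.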
**Next steps**
Game economy design is a much richer topic of technique and practice than I could possibly cover in this paper. Many modern designers find they devote years of their career to learning the nuances specific to their genre and their community’s needs. If you are interested further in this topic, I highly recommend the following:
- **Breakdowns**: Take one of your favorite games. Identify the individual value chains. Be sure to include the anchors! Make notes on the architectural elements such as branching or choice built into the chains. Ask yourself what you could have done differently to serve your unmet needs better. Also do this exercise with one of your least favorite games.
- **Game jams**: Very few large teams will give an unproven designer the responsibility to design an economy from scratch. However, many of the fundamentals can be practiced on smaller game jam-sized projects. Limit the number and length of your value chains. But try them out! Try out strange new architectures. Playtest! Balance these tiny games. The lessons you learn scale to larger projects.
**Open questions**
There are also many further areas of investigation for those interested in extending value chains as a design tool.
- **Trade**: How do value chains map to more open economies with features like player-to-player trade?
- **Visualization**: Is there value in reconstituting value chains into a more traditional spaghetti diagram? Such visualization tools don’t yet exist. But you should be able to composite value chains together automatically and perhaps even summarize them.
- **Ethics**: Can we use economy design for good? The use of value anchors deliberately centers **human needs** as the primary driver of value. Yet the world is rife with reductive, selfish ideologies that flatten the richness of humanity to mere numbers (homo economicus, libertarianism, much of current crypto.) Economy design is an amoral tool. It requires ethics, compassion and a keen eye for spotting externalities in order to avoid causing immense systemic harm.
**References**
- **I Have Not Words & I Must Design**, Greg Costikyan: http://www.costik.com/nowords2002.pdf
- **Interaction loops**: https://docs.google.com/presentation/d/1Ge1IvULT9cYKQluLkIll4haUR52thSKXbsreYBEasnM/edit?usp=sharing
- **Building tight game systems of cause and effect**: https://lostgarden.home.blog/2012/07/01/building-tight-game-systems-of-cause-and-effect/
- **Internal economies**, Joris Dormans: https://www.amazon.com/Game-Mechanics-Advanced-Design-Voices/dp/0321820274
- **Designing Game Content Architectures**: https://lostgarden.home.blog/2021/01/04/designing-game-content-architectures/
- **A big little idea called legibility**, Venkatesh Rao: https://www.ribbonfarm.com/2010/07/26/a-big-little-idea-called-legibility/
- **The fundamental attribution error of economics**: You think a thing has inherent value, but in reality the majority of its value is derived from its contextual position in a value network.
| true | true | true |
INTRODUCTION The problem with picking up sticks Recently I was designing the harvesting and crafting system for our Animal Crossing-like game Cozy Grove when I ran into a problem: picking up a stic…
|
2024-10-12 00:00:00
|
2021-12-12 00:00:00
|
article
|
lostgarden.com
|
Lostgarden
| null | null |
|
3,185,984 |
http://www.google.com/support/forum/p/reader/label?lid=2642d938ed0ab7d4&hl=en
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,933,108 |
http://www.publicknowledge.org/blog/gpl-does-not-depend-copyrightability-apis
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
34,349,092 |
https://www.engadget.com/nasa-mars-insight-lander-rip-194514276.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,756,808 |
http://www.c-span.org/Live-Video/C-SPAN3/#
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,078,533 |
http://laktek.com/2011/10/06/thank-you-steve/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
3,147,602 |
http://instantscreenshot.com
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
594,597 |
http://chromeshorts.com/
|
Google Chrome
| null |
| true | true | true |
The official YouTube channel for the Chrome browser, OS, Web Store, and Chromebooks.
|
2024-10-12 00:00:00
|
2024-10-10 00:00:00
|
https://yt3.googleusercontent.com/7Qy-xiYrM2DHUEVFTBkok5ei6n_qEnH9XRwBECZnsl_E02VxrLDYcU7svibdYL1YYs9sZKe6KA=s900-c-k-c0x00ffffff-no-rj
|
profile
|
youtube.com
|
YouTube
| null | null |
20,766,707 |
https://thenewstack.io/why-are-so-many-developers-hating-on-object-oriented-programming/
|
Why Are So Many Developers Hating on Object-Oriented Programming?
|
David Cassel
|
# Why Are So Many Developers Hating on Object-Oriented Programming?
Object-oriented programming: Some developers love it — but some *hate* it.
Object-Oriented Programming (OOP) is that long-standing programming paradigm — a coding style, a school of thought, a practice taught in schools — that preaches the importance of organizing your code into larger meaningful “objects” that model the parts of your problem. These handy objects bundle together all the necessary variables for describing each possible “state” of your model’s components — as well as all of the methods (subroutines or functions) necessary for changing each variable’s data.
This is supposed to match the way people actually think in the real world, arranging their code into meaningful chunks with relationships that are obvious and intuitive. You end up with different families of objects, all discretely interacting with each other and swapping messages about the state of their data or whatever changes should be made next.
But in practice, detractors claim, OOP doesn’t always work out this way.
The “haters” camp includes Ilya Suzdalnitski, a senior full-stack engineer who last month posted a 6,000-word essay dubbing OOP a “trillion-dollar disaster.” And it turns out he’s not the only one with strong feelings. After writing a follow-up essay, he found that “The two articles combined have been read *about half a million times* in about a month!”
So what’s the big beef? By making things more complex, “OOP *fails* at the only task it was intended to address,” Suzdalnitski argues. Object-oriented programs instead end up with variables and their mutable states “shared promiscuously between various objects.”
“In most cases, OOP programs turn out to be one big blob of global state, [which] can be mutated by anyone and anything without restrictions,” he tells me in an email.
Suzdalnitski also believes that object-oriented code is harder to refactor and harder to unit test, and his essay builds up to a bold pronouncement that “It is impossible to write good and maintainable object-oriented code…”
And pity the poor developers. “Precious time and brainpower are being spent thinking about ‘abstractions’ and ‘design patterns’ instead of solving real-world problems, ” he wrote.
Suzdalnitski sees another major issue: concurrency. “OOP started in the era when our CPUs had one single core, and programmers hadn’t had to worry much about things like concurrency…” he tells me in his email. “One of the primary reasons for the rise in popularity of functional programming (as well as languages like Go and Rust) is their ability to effectively tackle concurrency.”
In fact, his essay ends by touting *functional* programming as the superior alternative — a competing paradigm in which problems are modeled not with objects but with pure mathematical functions and an emphasis on avoiding changes in state. Suzdalnitski sang its praises three weeks later with a satirical essay that appeared to dabble in reverse psychology. (“Functional Programming? Don’t Even Bother, It’s a Silly Toy.”)
It argues, for example, that functional programming “makes refactoring and unit testing unnecessarily easy,” and that it’s a misguided programming paradigm “based on mathematics (which obviously is inferior and has no applications in the real world apart from academia).”
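For readers who have not seen the distinction the essays keep invoking, here is a tiny illustrative sketch (ours, not Suzdalnitski’s) of the mutable object state the critics dislike next to the pure function the functional camp prefers:

```python
# A tiny illustration (ours, not from either essay) of the contrast the
# two camps argue about: hidden mutable state versus pure functions.

class Account:
    """OOP style: state lives inside the object and is mutated in place."""

    def __init__(self, balance: float) -> None:
        self.balance = balance

    def deposit(self, amount: float) -> None:
        self.balance += amount  # shared mutable state; call order matters


def deposit(balance: float, amount: float) -> float:
    """FP style: a pure function; no mutation, same inputs give same output."""
    return balance + amount


acct = Account(100.0)
acct.deposit(25.0)                  # acct.balance is now 125.0
new_balance = deposit(100.0, 25.0)  # 125.0; the original value is untouched
```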
## Getting a Reaction
After Suzdalnitski dared to challenge this long-standing programming paradigm, it’s been interesting to read the heated reactions from other developers. Suzdalnitski’s first essay attracted 174 responses on Medium, including one from Texas-based software engineer Jesse Dickey, who argues that the name itself is a misnomer. “You are not really programming in objects, you are programming in classes. So you could almost want to call it class-oriented programming…” (He adds later that classes are more properly thought of as “custom types.”)
And a link to Suzdalnitski’s first essay also turned up in 10 different forums on Reddit, sharply dividing other real-world developers over which paradigm was ultimately superior — functional or object-oriented programming:
*“I use oop languages, but write all my code (mostly typescript) in a more fp way. Its ridiculously simpler. Functions (pure as possible) and data live separately. Refactoring a messy oop codebase is such a nightmare, compared to a more pipelineish fp solution…”*
*“I looked at ‘pure functional programming’, but this was a disaster to work with. It is like working with a spreadsheet, without a spreadsheet…”*
And soon commenters were debating whether to blame the paradigm or the programmers.
*“Software has to be built by average [programmers], not elite programmers. If your grand paradigm or stack requires elite programmers, it will likely fail over time, as elite programmers are harder to keep around…”*
*“OOP as a paradigm incentivizes code obfuscation (which is for some reason confused with abstraction), while [functional programming] tends to make things as explicit as possible…”*
Eventually and inevitably, this controversy spread to another essay online, arguing back with an equally provocative title: “Developers who hate on OOP don’t know how to use it.” It was written by Gary Willoughby, a UK-based software engineer whose profile says he indulges “in the craft of software development as a creative endeavor.”
## Passing Messages
In the middle of this back-and-forth, I wondered how Suzdalnitski felt about all the reactions his essays were getting. So I decided to reach out and ask him about his own long journey through the world of computer programming paradigms — and where this passion for functional programming really came from.
“I’ve been programming for most of my life, and the more experience I got under my belt, the more I started realizing that my code was bad, really bad,” Suzdalnitski told me in an email. It had been hard to figure out *why* the code was bad — but it wasn’t for a lack of trying. “I’ve invested a ridiculous amount of time over the years into my OOP education. It’s ironic that the return-on-investment was rather minuscule…”
But even simple features were taking an “unreasonably large amount of time” to implement — and the bigger the codebase, the harder it got. “Things started breaking more, and more often.”
But then in 2014, Suzdalnitski discovered F#, the multi-paradigm programming language released by Microsoft in 2010. “The language seemed weird, and elegant at the same time… But the idea of functional programming stuck with me.”
## ‘OOP Is Dangerous’
Over the years he began applying the ideas of functional programming to the C# code he was writing — and then the company where he was working completed their migration to JavaScript. And ever since that day, “I’ve tried really hard to find use cases for OOP and could never find any.”
“In non-OOP languages, like JavaScript, functions can exist separately from objects. It was such a relief to no longer have to invent weird concepts (SomethingManager) just to contain the functions.”
And today this has all convinced him that “OOP is dangerous. Nondeterminism inherent in OOP programs makes the code unreliable.” As the program executes, its flow can take many, many different paths — thanks to all of those different objects, with new objects sometimes even created on-the-fly. “The devastating effects that this seemingly innocent programming paradigm is having on the world economy is hard to comprehend.”
He thinks his strong opinions made some people angry. But a half-million page views later, I had to ask: had he heard anything that made him change his mind, or see things in a different light?
“There was one comment that really made me pause to think,” Suzdalnitski says — the idea that there’s an almost institutional undercurrent that keeps producing a glut of OOP programmers. “It makes sense for the managers to keep using OOP because the cheap OOP developers are easy to hire, and many fresh graduates are familiar with OOP.” But unfortunately, Suzdalnitski believes that they’ll end up paying for it down the line. Or, to put it another way, “OOP is prevalent because cheap OOP developers are readily available, while functional programmers are typically more smart, and more expensive…”
“The final product of course typically takes longer to deliver, is hard to maintain, and is often buggy due to OOP non-determinism.”
So Suzdalnitski began writing his online essays, because “If I can inspire a thousand people to question the benefits of OOP and give functional programming a try, then they will start writing better and more reliable code.” And with 500,000 views, he thinks he’s succeeded, setting the world on a path to happier developers, happier users, and companies saving money…
But he also says he had a more selfish goal.
“To make OOP companies think twice before contacting me for consulting.”
| true | true | true |
Does Object Oriented Programming really make it easier for programmers to develop? Of is an alternatve like functional programming a better way to go?
|
2024-10-12 00:00:00
|
2019-08-21 00:00:00
|
article
|
thenewstack.io
|
The New Stack
| null | null |
|
36,528,984 |
https://rust-lang.github.io/mdBook/
|
mdBook Documentation
| null |
# Introduction
**mdBook** is a command line tool to create books with Markdown.
It is ideal for creating product or API documentation, tutorials, course materials or anything that requires a clean,
easily navigable and customizable presentation.
- Lightweight Markdown syntax helps you focus more on your content
- Integrated search support
- Color syntax highlighting for code blocks for many different languages
- Theme files allow customizing the formatting of the output
- Preprocessors can provide extensions for custom syntax and modifying content
- Backends can render the output to multiple formats
- Written in Rust for speed, safety, and simplicity
- Automated testing of Rust code samples
This guide is an example of what mdBook produces. mdBook is used by the Rust programming language project, and The Rust Programming Language book is another fine example of mdBook in action.
## Contributing
mdBook is free and open source. You can find the source code on GitHub and issues and feature requests can be posted on the GitHub issue tracker. mdBook relies on the community to fix bugs and add features: if you’d like to contribute, please read the CONTRIBUTING guide and consider opening a pull request.
## License
The mdBook source and documentation are released under the Mozilla Public License v2.0.
| true | true | true |
Create book from markdown files. Like Gitbook but implemented in Rust
|
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
8,538,069 |
http://efanzines.com/SFC/ScratchPad/scrat023.pdf
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
978,566 |
http://lush.sourceforge.net/lush-manual/746b4329.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
25,452,078 |
https://www.theverge.com/2020/12/16/22176273/google-stadia-ios-beta-live-now-mobile-safari-iphone-ipad-cloud-gaming
|
Stadia comes to the iPhone and iPad with new iOS beta
|
Nick Statt
|
Google Stadia has finally made its way to iOS over a year after launch. The company’s mobile web beta for the iPhone and iPad, first announced last month, is launching today. That means any Stadia user on either its free tier or its paid Stadia Pro subscription will be able to access their library of Stadia games on Apple devices.
Google, like other competing cloud services, is using mobile Safari due to Apple’s restrictions on cloud gaming apps that mean platforms like Stadia can’t exist in their current form on the App Store. You can access Stadia through its website on Safari or by creating a home screen icon that will turn the service into a progressive web app, so it acts almost identically to a native one.
Unlike Nvidia’s GeForce Now or the planned mobile web version of Microsoft’s xCloud, however, Google Stadia has a free tier without restrictions and now offers two free-to-play games available (*Destiny 2* and *Super Bomberman R*), with more to come. (Nvidia also offers a free tier for GeForce Now, but it has a one-hour session limit.) That means anyone with a Gmail account looking to try Stadia can give it a shot on an iPhone or iPad with minimal effort.
Google Stadia now works on both Android and iOS, thanks to its new mobile web app
That accessibility could be key for Stadia’s growth going forward. Many of the early struggles of Stadia, and of the many failed or otherwise unknown cloud platforms that came before it, have to do with a mix of technical issues and economic hurdles, roadblocks that make actually using the service as your primary gaming platform more cumbersome and costly than it is beneficial. But Stadia is in a much different place now than it was at launch. The service not only has a free tier and free-to-play games, but it also has access to high-profile holiday releases like *Assassin’s Creed Valhalla* and the just-released *Cyberpunk 2077*.
CD Projekt Red’s new open-world sci-fi game has been plagued by bugs and performance issues mainly affecting players on last-gen game consoles, which is a boon for the Stadia version. Google had to shut down a promotion for the game that awarded free Stadia controllers and Chromecast Ultra devices to anyone who preordered or purchased *Cyberpunk 2077* on Stadia up to a week after its release due to overwhelming demand.
Adding iOS support may add to the momentum Stadia is experiencing right now. I’ve had access to the beta on *The Verge*’s Stadia test account for the last week or so where I’ve been testing *Cyberpunk 2077* and other games on my iPad Pro and iPhone 11 Pro. It works remarkably well, even with the built-in touch controls.
I wouldn’t recommend relying on those touch controls for anything that requires precise input, but it was nice to know I could still maneuver the *Destiny 2* interface using my iPhone touchscreen to perform simple tasks, like carousing the in-game Tower hub to pick up bounties or check my character’s inventory.
Instead of touch, you’re better off using either a Stadia controller or one of the supported Bluetooth gamepads like Microsoft’s Xbox One controller or the Sony DualShock 4, and those controllers work seamlessly via mobile Safari with no issues I’ve encountered so far.
Google’s ‘Cyberpunk 2077’ promotion ended early due to overwhelming demand
I will say that you have to rely on a Wi-Fi connection for reliable play on iOS unless you happen to be the owner of a rather rare and situational Ethernet to Lightning or USB adapter accessory. That means you’re not going to get super smooth visuals or performance all of the time.
Still, a lot of the visual hiccups you might experience from using Stadia on an average Wi-Fi connection on a larger screen are not as noticeable when playing on the iPhone or iPad. In particular, I’ve found playing *Cyberpunk 2077* on my iPad Pro to be a pretty consistent and solid experience, more so in some cases than on my PlayStation 5 where I find the game often crashes numerous times during a single play session.
Due to Apple’s restrictions, Google says you will need to perform a tiny workaround to get the Stadia web version on your iOS device’s home screen as a progressive web app, and it’s created this graphic to explain it:
The big caveat right now is that there are not a whole lot of great games on Stadia that cater to mobile players. I don’t see anyone going out of their way to boot up the new *Assassin’s Creed* or *Cyberpunk 2077* on an iPhone screen, except to marvel at the novelty of it. I think the iPad is primarily where Stadia on iOS will shine for the players who have a nice enough screen, a fast enough connection, and a controller to use.
But iOS support opens up a lot of avenues for Stadia — not just to bring it more players looking for a more robust mobile gaming solution but also to promote cloud gaming to developers making the kinds of games fit for mobile screens. If Google cozies up to more indie developers and starts supporting more of the less graphical-intensive experiences you might see on, say, a Nintendo Switch, that could make Stadia a much more competitive platform.
| true | true | true |
Stadia comes to iOS at last through a mobile web app for Safari.
|
2024-10-12 00:00:00
|
2020-12-16 00:00:00
|
article
|
theverge.com
|
The Verge
| null | null |
|
7,223,111 |
http://techcrunch.com/2014/02/11/snapchat-snapfroot/
|
Snapchat Hacked By Fruit Smoothie Enthusiast | TechCrunch
|
Catherine Shu
|
If one of your friends randomly sends you a photo of a smoothie on Snapchat, don’t go to the URL on the picture. It’s a hack that has affected several accounts, as a Twitter search shows.
Wired writer Joe Brown was one of the users who suffered a Snapchat fruiting. A Snapchat spokesperson told him that the startup did not see any evidence of “brute-force tactics,” and that someone had likely gotten a hold of his email and password and accessed his account on the first try. Snapchat told us:
“Yesterday a small number of our users experienced a spam incident where unwanted photos were sent from their accounts. Our security team deployed additional measures to secure accounts. We recommend using unique and strong passwords to prevent abuse.”
The spam looks like this (once again, **don’t go to the URL**; it sells weight-loss supplements, if you really must know).
Thanks eveybody I’m definitely visiting snapfroot pic.twitter.com/vfio4vBJha
— dan jacovelli (@danjacovelli) February 12, 2014
This is the latest of several high-profile hacking incidents Snapchat has suffered. Back in December, millions of users’ phone numbers were exposed by a group that wanted to call attention to Snapchat’s security flaws.
In response, Snapchat came up with “Snap-tchas,” but hackers found workarounds within a few hours. A few days ago, a security researcher found a vulnerability that could allow hackers to crash your phone through Snapchat.
While Snapchat’s security flaws have been getting a lot of attention recently, it’s worth noting that you should try not to use the same usernames and passwords for multiple sites and apps, and be very wary of third-party services that ask for your Snapchat information.
| true | true | true |
If one of your friends randomly sends you a photo of a smoothie on Snapchat, don't go to the URL on the picture. It's a hack that has affected several accounts, as a Twitter search shows. Wired writer Joe Brown was one of the users who suffered a Snapchat fruiting. A Snapchat spokesperson told him that the startup did not see any evidence of "brute-force tactics," and that someone had likely gotten ahold of his email and password and accessed his account on the first try. We've emailed Snapchat for more information. The spam looks like this (once again, don't go to the URL).
|
2024-10-12 00:00:00
|
2014-02-11 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
26,113,655 |
https://www.sciencedaily.com/releases/2021/02/210211091352.htm
|
'Gamechanger' drug for treating obesity cuts body weight by 20 percent
| null |
# 'Gamechanger' drug for treating obesity cuts body weight by 20 percent
- Date:
- February 11, 2021
- Source:
- University College London
- Summary:
- About one third (35 percent) of people who took a new drug for treating obesity lost more than one-fifth of their total body weight, according to a major global study.
One third (35%) of people who took a new drug for treating obesity lost more than one-fifth (greater than or equal to 20%) of their total body weight, according to a major global study involving UCL researchers.
The findings from the large-scale international trial, published today in the New England Journal for Medicine, are being hailed as a "gamechanger" for improving the health of people with obesity and could play a major part in helping the UK to reduce the impact of diseases, such as COVID-19.
The drug, semaglutide, works by hijacking the body's own appetite regulating system in the brain leading to reduced hunger and calorie intake.
Rachel Batterham, Professor of Obesity, Diabetes and Endocrinology who leads the Centre for Obesity Research at UCL and the UCLH Centre for Weight Management, is one of the principal authors on the paper which involved almost 2,000 people in 16 countries.
Professor Batterham (UCL Medicine) said: "The findings of this study represent a major breakthrough for improving the health of people with obesity. Three quarters (75%) of people who received semaglutide 2.4mg lost more than 10% of their body weight and more than one-third lost more than 20%. No other drug has come close to producing this level of weight loss -- this really is a gamechanger. For the first time, people can achieve through drugs what was only possible through weight-loss surgery."
Professor Batterham added: "The impact of obesity on health has been brought into sharp focus by COVID-19 where obesity markedly increases the risk of dying from the virus, as well as increasing the risk of many life-limiting serious diseases including heart disease, type 2 diabetes, liver disease and certain types of cancers. This drug could have major implications for UK health policy for years to come."
The average participant in the trial lost 15.3kg (nearly 3 stone); this was accompanied by reductions in risk factors for heart disease and diabetes, such as waist circumference, blood fats, blood sugar and blood pressure and reported improvements in their overall quality of life.
The trial's UK Chief Investigator, Professor John Wilding (University of Liverpool) said: "This is a significant advance in the treatment of obesity. Semaglutide is already approved and used clinically at a lower dose for treatment of diabetes, so as doctors we are already familiar with its use. For me this is particularly exciting as I was involved in very early studies of GLP1 (when I worked at the Hammersmith Hospital in the 1990s we were the first to show in laboratory studies that GLP1 affected appetite), so it is good to see this translated into an effective treatment for people with obesity."
With evidence from this trial, semaglutide has been submitted for regulatory approval as a treatment for obesity to the National Institute of Clinical Excellence (NICE), the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA).
**About the trial**
The Phase III 'STEP'* randomised controlled trial involved 1,961 adults who were either overweight or had obesity (average weight 105kg/16.5 stone; body mass index 38kg/m2), and took place at 129 sites in 16 countries across Asia, Europe, North America, and South America.
Participants took a 2.4mg dose of semaglutide (or matching placebo) weekly via subcutaneously (under the skin) injection; similar to the way people with diabetes inject insulin. Overall, 94.3% of participants completed the 68-week study, which started in autumn 2018.
Those taking part also received individual face-to-face or phone counselling sessions from registered dietitians every four weeks to help them adhere to the reduced-calorie diet and increased physical activity, providing guidance, behavioural strategies and motivation. Additionally, participants received incentives such as kettle bells or food scales to mark progress and milestones.
In those taking semaglutide, the average weight loss was 15.3kg (nearly three stone), with a reduction in BMI of -5.54. The placebo group saw an average weight loss of 2.6kg (0.4 stone), with a reduction in BMI of -0.92.
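As a quick sanity check on these averages (our arithmetic, not from the study, using the reported 105kg average starting weight):

\[ \frac{15.3\ \mathrm{kg}}{105\ \mathrm{kg}} \approx 14.6\%, \]

i.e. the mean loss was roughly 15% of starting body weight, consistent with about a third of participants clearing the 20% threshold while others lost less.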
Those who had taken semaglutide also saw reductions in risk factors for heart disease and diabetes, such as waist circumference, blood fats, blood sugar and blood pressure and reported improvements in their overall quality of life.
**About the drug**
Semaglutide is clinically approved for use in patients with type 2 diabetes, though it is typically prescribed in much lower doses of 1mg.
The drug possesses a compound structurally similar to (and mimics) the human glucagon-like peptide-1 (GLP-1) hormone, which is released into the blood from the gut after meals.
GLP-1 induces weight loss by reducing hunger, increasing feelings of fullness and thereby helping people eat less and reduce their calorie intake.
While the STEP study has been through Phase I and II trials, assessing the 2.4mg doses for safety, in the Phase III trial some participants reported side effects from the drug including mild-to-moderate nausea and diarrhoea that were transient and generally resolved without permanent discontinuation from the study.
The international trial was funded by the pharmaceutical company Novo Nordisk.
* Semaglutide Treatment Effect in People with Obesity (STEP)
**Story Source:**
Materials provided by **University College London**. *Note: Content may be edited for style and length.*
**Journal Reference**:
- John P.H. Wilding, Rachel L. Batterham, Salvatore Calanna, Melanie Davies, Luc F. Van Gaal, Ildiko Lingvay, Barbara M. McGowan, Julio Rosenstock, Marie T.D. Tran, Thomas A. Wadden, Sean Wharton, Koutaro Yokote, Niels Zeuthen, Robert F. Kushner.
**Once-Weekly Semaglutide in Adults with Overweight or Obesity**.*New England Journal of Medicine*, 2021; DOI: 10.1056/NEJMoa2032183
| true | true | true |
About one third (35 percent) of people who took a new drug for treating obesity lost more than one-fifth of their total body weight, according to a major global study.
|
2024-10-12 00:00:00
|
2024-10-12 00:00:00
|
article
|
sciencedaily.com
|
ScienceDaily
| null | null |
|
40,217,334 |
https://github.com/yawaramin/dream-html
|
GitHub - yawaramin/dream-html: Render HTML, SVG, MathML, htmx markup from your OCaml app
|
Yawaramin
|
Copyright 2023 Yawar Amin
This file is part of dream-html.
dream-html is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
dream-html is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with dream-html. If not, see https://www.gnu.org/licenses/.
An HTML, SVG, and MathML rendering library that is closely integrated with Dream. Most HTML elements and attributes from the Mozilla Developer Network references are included. Almost all non-standard or deprecated tags/attributes deliberately omitted. CSS support is out of scope. htmx attributes supported out of the box.
Note: Don’t want to use Dream? You can use the eDSL without it! Just use the `pure-html` package instead of `dream-html`.
- TyXML is a bit too complex.
- Dream's built-in eml (Embedded ML) has some drawbacks like no editor support, quirky syntax that can be hard to debug and refactor, and manual dune rule setup for each view file
- In general string-based HTML templating is suboptimal and mostly driven by familiarity.
```
let page req =
let open Dream_html in
let open HTML in
(* automatically injects <!doctype html> *)
html [lang "en"] [
head [] [
title [] "Dream-html" ];
body [] [
h1 [] [txt "Dream-html"];
p [] [txt "Is cool!"];
form [method_ `POST; action "/feedback"] [
(* Integrated with Dream's CSRF token generation *)
csrf_tag req;
label [for_ "what-you-think"] [txt "Tell us what you think!"];
input [name "what-you-think"; id "what-you-think"];
input [type_ "submit"; value "Send"] ] ] ]
(* Integrated with Dream response *)
let handler req = Dream_html.respond (page req)
```
Attribute and text values are escaped using rules very similar to standards-compliant web browsers:
```
utop # open Dream_html;;
utop # open HTML;;
utop # #install_printer pp;;
utop # let user_input = "<script>alert('You have been pwned')</script>";;
val user_input : string = "<script>alert('You have been pwned')</script>"
utop # p [] [txt "%s" user_input];;
- : node = <p>&lt;script&gt;alert(&#x27;You have been pwned&#x27;)&lt;/script&gt;</p>
utop # div [title_ {|"%s|} user_input] [];;
- : node = <div title="&quot;&lt;script&gt;alert(&#x27;You have been pwned&#x27;)&lt;/script&gt;"></div>
```
Make sure your local copy of the opam repository is up-to-date first:
```
opam update
opam install dream-html
```
Alternatively, to install the latest commit that may not have been released yet:
```
opam pin add dream-html git+https://github.com/yawaramin/dream-html
```
A convenience is provided to respond with an HTML node from a handler:
`Dream_html.respond greeting`
You can compose multiple HTML nodes together into a single node without an extra DOM node, like React fragments:
`let view = null [p [] [txt "Hello"]; p [] [txt "World"]]`
You can do string interpolation of text nodes using `txt`
and any attribute which
takes a string value:
`let greet name = p [id "greet-%s" name] [txt "Hello, %s!" name]`
You can conditionally render an attribute, and void elements are statically enforced as childless:
```
let entry =
input
[ if should_focus then autofocus else null_;
id "email";
name "email";
value "Email address" ]
```
You can also embed HTML comments in the generated document:
```
div [] [comment "TODO: xyz."; p [] [txt "Hello!"]]
(* <div><!-- TODO: xyz. -->Hello!</div> *)
```
You have precise control over whitespace in the rendered HTML; dream-html does not insert any whitespace by itself–all whitespace must be inserted inside text nodes explicitly:
```
p [] [txt "hello, "; txt "world!"];;
(* <p>hello, world!</p> *)
```
You can also conveniently hot-reload the webapp in the browser using the
`Dream_html.Livereload`
module. See the API reference for details.
One issue that you may come across is that the syntax of HTML is different from
the syntax of dream-html markup. To ease this problem, you may use the
bookmarklet `import_html.js`
provided in this project. Simply create a new
bookmark in your browser with any name, and set the URL to the content of that
file (make sure it is exactly the given content).
Then, whenever you have a web page open, just click on the bookmarklet to copy its markup to the clipboard in dream-html format. From there you can simply paste it into your project.
Note that the dream-html version is not formatted nicely, because the expectation is that you will use ocamlformat to fix the formatting.
Also note that the translation done by this bookmarklet is on a best-effort basis. Many web pages don't strictly conform to the rules of correct HTML markup, so you will likely need to fix those issues for your build to work.
Run the test and print out diff if it fails:
```
dune runtest # Will also exit 1 on failure
```
Set the new version of the output as correct:
```
dune promote
```
Surface design obviously lifted straight from elm-html.
Implementation inspired by both elm-html and ScalaTags.
Many languages and libraries have similar HTML embedded DSLs:
- Phlex - Ruby
- Arbre - Ruby
- hiccl - Common Lisp
- scribble-html-lib - Racket
- hiccup - Clojure
- std/htmlgen - Nim
- Falco.Markup - F#
- htpy - Python
- HTML::Tiny - Perl
- j2html - Java
- Lucid - Haskell
| true | true | true |
Render HTML, SVG, MathML, htmx markup from your OCaml app - yawaramin/dream-html
|
2024-10-12 00:00:00
|
2023-04-22 00:00:00
|
https://opengraph.githubassets.com/d0554e2fe6d8775b86b016a3e6b24de21e98f938a0858eb0abe8f114921116a8/yawaramin/dream-html
|
object
|
github.com
|
GitHub
| null | null |
39,083,195 |
https://www.nature.com/articles/s41598-024-52005-7
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,044,164 |
https://gist.github.com/2838122
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
32,614,959 |
https://freedom-to-tinker.com/2022/08/03/the-anomaly-of-cheap-complexity/
|
The anomaly of cheap complexity
|
Andrew Appel
|
Why are our computer systems so complex and so insecure? For years I’ve been trying to explain my understanding of this question. Here’s one explanation–which happens to be in the context of voting computers, but it’s a general phenomenon about all our computers:
There are many layers between the application software that implements an electoral function and the transistors inside the computers that ultimately carry out computations. These layers include the election application itself (e.g., for voter registration or vote tabulation); the user interface; the application runtime system; the operating system (e.g., Linux or Windows); the system bootloader (e.g., BIOS or UEFI); the microprocessor firmware (e.g., Intel Management Engine); disk drive firmware; system-on-chip firmware; and the microprocessor’s microcode. For this reason, it is difficult to know for certain whether a system has been compromised by malware. One might inspect the application-layer software and confirm that it is present on the system’s hard drive, but any one of the layers listed above, if hacked, may substitute a fraudulent application layer (e.g., vote-counting software) at the time that the application is supposed to run. As a result, there is no technical mechanism that can ensure that every layer in the system is unaltered and thus no technical mechanism that can ensure that a computer application will produce accurate results.
[Securing the Vote,page 89-90]
So, computers are insecure because they have so many complex layers.
**But that doesn’t explain ***why*** there are so many layers**, and why those layers are so complex–even for what “should be a simple thing” like counting up votes.
**Recently I came across a really good explanation:** a keynote talk by Thomas Dullien entitled “Security, Moore’s law, and the anomaly of cheap complexity” at CyCon 2018, the 10th International Conference on Cyber Conflict, organized by NATO.
Thomas Dullien’s talk video is here, but if you want to just read the slides, they are here.
As Dullien explains,
A modern 2018-vintage CPU contains a thousand times more transistors than a 1989-vintage microprocessor. Peripherals (GPUs, NICs, etc.) are **objectively** getting more complicated at a superlinear rate. In his experience as a cybersecurity expert, the only thing that ever yielded real security gains was controlling complexity. His talk examines the relationship between complexity and failure of security, and discusses the underlying forces that drive both.
Transistors-per-chip is still increasing every year; there are 3 new CPUs per human per year. Device manufacturers are now developing their software even before the new hardware is released. Insecurity in computing is growing faster than security is improving.
**The anomaly of cheap complexity.** For most of human history, a more complex device was more expensive to build than a simpler device. This is not the case in modern computing.** It is often more cost-effective to take a very complicated device, and make it simulate simplicity,** than to make a simpler device. This is because of economies of scale: complex general-purpose CPUs are cheap. On the other hand, custom-designed, ** simpler**, application-specific devices, which could in principle be much more secure, are very expensive.
This is driven by two fundamental principles in computing: *Universal computation, *meaning that any computer can simulate any other; and *Moore’s law*, predicting that each year the number of transistors on a chip will grow exponentially. ARM Cortex-M0 CPUs cost pennies, though they are more powerful than some supercomputers of the 20th century.
The same is true in the software layers. A (huge and complex) general-purpose operating system is *free, *but a simpler, custom-designed, perhaps more secure OS would be very expensive to build. Or as Dullien asks, “How did this research code someone wrote in two weeks 20 years ago end up in a billion devices?”
Then he discusses hardware supply-chain issues: “Do I have to trust my CPU vendor?” He discusses remote-management infrastructures (such as the “Intel Management Engine” referred to above): “In the real world, ‘possession’ usually implies ‘control’. In IT, ‘possession’ and ‘control’ are decoupled. Can I establish with certainty who is in control of a given device?”
He says, “Single bitflips can make a machine spin out of control, and the attacker can carefully control the escalating error to his advantage.” (Indeed, I’ve studied that issue myself!)
Dullien quotes the science-fiction author Robert A. Heinlein:
“How does one design an electric motor? Would you attach a bathtub to it, simply because one was available? Would a bouquet of flowers help? A heap of rocks? No, you would use just those elements necessary to its purpose and make it no larger than needed — and you would incorporate safety factors. Function controls design.”
Heinlein,The Moon Is A Harsh Mistress
and adds, “Software makes adding bathtubs, bouquets of flowers, and rocks, almost free. So that’s what we get.”
Dullien concludes his talk by saying, “When I showed the first [draft of this talk] to some coworkers they said, ‘you really need to end on a more optimistic note.” So Dullien gives optimism a try, discussing possible advances in cybersecurity research; but still he gives us only a 10% chance that society can get this right.
**Postscript: ** Voting machines are computers of this kind. Does their inherent insecurity mean that we cannot use them for counting votes? No. The consensus of election-security experts, as presented in the National Academies study, is: we should use optical-scan voting machines to count paper ballots, because those computers, *when they are not hacked,* are much more accurate than humans. But we must protect against bugs, against misconfigurations, against hacking, by always performing *risk-limiting audits, *by hand, of an appropriate sample of the paper ballots that the voters marked themselves.
| true | true | true |
Why are our computer systems so complex and so insecure? For years I’ve been trying to explain my understanding of this question. Here’s one
|
2024-10-12 00:00:00
|
2022-08-03 00:00:00
|
article
|
freedom-to-tinker.com
|
Freedom to Tinker
| null | null |
|
17,986,763 |
https://www.nytimes.com/2018/09/12/business/dealbook/wall-street-great-recession.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,749,078 |
http://ellipticnews.wordpress.com/2013/05/22/joux-kills-pairings-in-characteristic-2/
|
Joux kills pairings in characteristic 2
| null |
Antoine Joux has announced a new discrete logarithm record. This time the field is GF( 2^(257*24)). The computation took about “550 CPU hours” (i.e., less than a month on one CPU). As is typical for this new type of algorithm, the majority of the computation was in the descent step, rather than in linear algebra or relation collection.
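For a sense of scale, a quick back-of-envelope note (ours, not part of the announcement):

\[ \left|\mathrm{GF}\bigl(2^{257 \cdot 24}\bigr)\right| = 2^{6168}, \]

so the logarithm was computed in the multiplicative group of a 6168-bit binary field. In general, a pairing on a curve over GF(2^p) with embedding degree k transports the curve’s DLP into the multiplicative group of GF(2^(kp)).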
The relevance for pairing-based cryptography is the following: If one was using supersingular curves (of genus 1 or 2) in characteristic 2 then one would have a group over GF( 2^p ) for some prime (e.g., p = 257) and the pairing would map into an extension of this field of degree 4 or 12. Hence, the target group would be contained in the multiplicative group of GF( 2^(p*24)). Parameters of this size, or even smaller, have been proposed previously in the literature (such as in my “eta pairing” paper with Barreto, O’hEigeartaigh and Scott). The fact that such a large discrete log instance can be solved in a relatively short time has major impact on the security of pairings in small characteristic. The parameters in the eta pairing paper should not be used. It seems prudent now to use pairings only in large prime characteristic for all practical applications.
— Steven Galbraith
Hi Steven, I agree that using small characteristic pairings are a bad idea now. However, I don’t think the coffin has been firmly nailed shut just yet!
In particular, the two recent breaks – Joux’s in GF(2^(24*257)) and our earlier one in GF(2^(24*255)) (announced here https://listserv.nodak.edu/cgi-bin/wa.exe?A2=NMBRTHRY;fe9605d9.1304) – are very special cases which are far easier to break than other extensions of GF(2^24) of a similar size, for which one needs a more complicated extension-defining polynomial than X^256 = aX^{\pm 1}, which then kills much of the factor base reduction achievable via automorphisms. Furthermore, the descent then becomes slower as well. So attacking a really interesting case such as SS curves over GF(2^1223) will take considerably more work.
That’s a good comment Rob. But these results still make char 2 pairings hard to sell. If you were proposing curves in char 2 for pairings then you would need to provide an explicit justification of why the parameters resist the new index calculus algorithms in the finite field. It is a bit like people proposing composite extension fields for standard ECC (which still happens from time to time): they need to justify why such fields are not vulnerable to known Weil descent attacks. Such arguments can probably be made, but they will always be treated with scepticism and caution.
— Steven Galbraith
| true | true | true |
Antoine Joux has announced a new discrete logarithm record. This time the field is GF( 2^(257*24)). The computation took about “550 CPU hours” (i.e., less than a month on one CPU). As i…
|
2024-10-12 00:00:00
|
2013-05-22 00:00:00
|
article
|
wordpress.com
|
Ellipticnews
| null | null |
|
18,728,851 |
https://techcrunch.com/2018/12/20/fbi-ddos-booter-sites-offline/
|
FBI kicks some of the worst 'DDoS for hire' sites off the internet | TechCrunch
|
Zack Whittaker
|
The FBI has seized the domains of 15 high-profile distributed denial-of-service (DDoS) websites after a coordinated effort by law enforcement and several tech companies.
Several seizure warrants granted by a California federal judge went into effect Thursday, removing several of these “booter” or “stresser” sites off the internet “as part of coordinated law enforcement action taken against illegal DDoS-for-hire services.” The orders were granted under federal seizure laws, and the domains were replaced with a federal notice.
Prosecutors have charged three men, Matthew Gatrel and Juan Martinez in California and David Bukoski in Alaska, with operating the sites, according to affidavits filed in three U.S. federal courts, which were unsealed Thursday.
“DDoS for hire services such as these pose a significant national threat,” U.S. Attorney Bryan Schroder said in a statement. “Coordinated investigations and prosecutions such as these demonstrate the importance of cross-District collaboration and coordination with public sector partners.”
The FBI had assistance from the U.K.’s National Crime Agency and the Dutch national police, and the Justice Department named several companies, including Cloudflare, Flashpoint and Google, for providing authorities with additional assistance.
In all, several sites were knocked offline — including downthem.org, netstress.org, quantumstress.net, vbooter.org and defcon.pro and more — which allowed would-be attackers to sign up to rent time and servers to launch large-scale bandwidth attacks against systems and servers.
DDoS attacks have long plagued the internet as a by-product of faster connection speeds and easy-to-exploit vulnerabilities in the underlying protocols that power the internet. Through its Internet Crime Complaint Center (IC3), the FBI warned over a year ago of the risks from booter and stresser sites amid a wider concern about the increasing size and scale of powerful DDoS attacks. While many use booter and stresser sites for legitimate services — such as to test the resilience of a corporate network from DDoS attacks — many have used them to launch large-scale attacks that can knock networks offline. When those networks support apps and services, those too can face downtime — in some cases affecting millions of users.
Some of the sites named in the indictments reported attacks exceeding 40 gigabits per second, large enough to knock some websites offline for a period of time.
Specifically in the complaint, the Justice Department said Downthem had more than 2,000 customer subscriptions and had been used to carry out over 200,000 attacks.
But booter sites have largely been put to the wayside for larger attacks, such as the botnet-powered attack that knocked Dyn, a major internet powerhouse relied on by many tech companies, offline.
Thursday’s seizures mark the latest in a string of law enforcement action aimed at booter services. Earlier this year, U.S. and European authorities took down webstresser.org which prosecutors claimed to help launch more than six million attacks.
When reached, the FBI did not comment beyond the Justice Department’s statement.
Two hackers behind 2016 Uber data breach have been indicted for another hack
| true | true | true |
The FBI has seized the domains of 15 high-profile distributed denial-of-service (DDoS) websites after a coordinated effort by law enforcement and several
|
2024-10-12 00:00:00
|
2018-12-20 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
23,594,683 |
https://medium.com/dad-stuff/whats-important-to-a-father-3223ce67620a
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
39,279,121 |
https://medium.com/cognite/postgres-can-do-that-f221a8046e
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
25,855,606 |
https://aioverload.tumblr.com/post/640894461105274880/jaw-dropping-ways-machine-learning-is
|
Jaw-dropping ways machine learning is revolutionizing recycling
|
Aioverload
|
## Jaw-dropping ways machine learning is revolutionizing recycling
## Introduction
They say that one man’s trash is another man’s treasure. This saying is especially true for waste made from materials that can be reused and turned into consumer products, e.g. shoes, bags, toys, and many other household items. While this is good for our planet, it also brings a big and fast-growing issue: how to make recycling effective and less expensive. Luckily, recent progress in robotics and artificial intelligence (AI) is helping to take on the problem.
The issue is not as apparent as it might seem. Recycling has become more and more common, partly due to efforts to make it as easy as possible. In most cities in the United States, recycling bins are part of standard rubbish collection, with one separate bin for recyclable products: plastics, glass, metal, and paper. Even though that degree of convenience has produced high recycling rates, the U.S. method has become an issue in and of itself.
## The single-stream recycling quandary
When curbside recycling was first introduced to Americans in the early 1990s, the challenge was to encourage people to alter their behavior and take up recycling. The solution was what the sector refers to as ‘single-stream recycling’: in a nutshell, every type of recyclable material goes into one bin. It worked. Over the past 20+ years, more and more communities adopted recycling, partly because it was a good way to turn trash into profit.
But this effort has not been all good news. Even though the convenience of single-stream recycling has raised recycling rates and enabled efficient collection of waste, it has had an accidental fallout: contamination of materials. Because people have been encouraged to throw everything recyclable into one bin, they have become less selective about what they put in it. The result is that roughly 20-25% of what arrives at recycling facilities today is unusable trash. Part of the issue stems from confusion about which materials can and cannot be recycled. When in doubt, consumers turn to what is called ‘wish-cycling’ (throwing in unrecyclable items that they hope will be recycled), e.g. garden hoses, running shoes, and even plastic wrap.
Additionally, even when a material is recyclable, single-stream collection creates other problems for recycling plants. Some recyclable materials become unrecyclable when they are crushed together in transport: paper is soaked with liquids, shattered glass gets ground into plastics, and the issues do not end there. Recyclable plastic bags can get caught in conveyors while materials are being sorted, causing holdups and even damaging costly machinery. On top of that, few people want to do this dull, dirty, unsafe work, which makes it hard to maintain a qualified workforce.
## China Rules out U.S. Recycling
Two years ago, China stopped taking most of the United States’ recycled plastics because of contamination issues. Contamination is a big problem for recycling facilities: if you put food leftovers or any unrecyclable material in a recycling bin, you contaminate the high-quality plastic and make it unrecyclable. Below are some of the reasons why a majority of plastics never get recycled:
- The recycling sector has faced enormous challenges when it comes to finding good reselling marketplaces.
- In the last couple of years, recycled products being returned to the United States from other countries because of contamination and low-quality feedstock have exposed clear variance in the quality of what is recycled in the United States.
- Recycling costs a lot of money, so many people figure it is easier to send rubbish to landfills instead.
- Billions of dollars are lost to landfills because of material contamination and poor recycling methods.
In the past, China recycled most of the United States’ waste materials, so its refusal forced a sudden need to resolve the recycling crisis. Instead of properly sorting recyclable from unrecyclable materials, the U.S. dumps all waste into one bin. As a result of China rejecting U.S. waste products, many communities gave up on recycling completely because of rising costs.
This is the problem businesses are now trying to solve. As they take it on, many are turning to artificial intelligence and machine learning for solutions.
## Is Automated Sorting the Solution?
Robotic recycling sorting uses AI and robotics to sort plastic so we don't have to. With the help of advanced cameras and software, businesses can rely on AI to sort recycling and reduce the health risks that come with human labor. A report from the University of Illinois School of Public Health states that recycling employees are twice as likely to be injured on the job as other workers.
## How does robotic sorting work?
Cameras and computer-vision systems trained to recognize specific materials guide robotic arms over the conveyors toward their targets. Oversized, sensor-equipped grippers attached to the arms snag recyclable items out of the trash stream and drop them into the appropriate bins.
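A minimal sketch of that classify-and-route loop in code (illustrative only: `classify_frame`, `move_to`, and `pick_and_drop` are hypothetical stand-ins for a trained vision model and a robot controller, not any vendor's actual API):

```python
# Minimal sketch of a vision-guided sorting loop. Everything here is
# illustrative: the classifier and arm are hypothetical placeholders.
from typing import Iterable, List, Tuple

Detection = Tuple[str, float, float, float]  # (label, confidence, x, y)

BIN_FOR = {
    "PET": "plastics", "HDPE": "plastics",
    "glass": "glass", "aluminum": "metal", "cardboard": "paper",
}

def classify_frame(frame) -> List[Detection]:
    """Placeholder for the camera-plus-model step; returns detected items."""
    return []  # a real system would run an object detector here

def sort_stream(frames: Iterable, arm) -> None:
    for frame in frames:                        # one snapshot of the conveyor
        for label, conf, x, y in classify_frame(frame):
            if conf < 0.8 or label not in BIN_FOR:
                continue                        # uncertain or unrecyclable: let it pass
            arm.move_to(x, y)                   # position the gripper over the item
            arm.pick_and_drop(BIN_FOR[label])   # route the item to the right bin
```

A production system would also have to track items as the belt moves and tune the confidence threshold per material, but the core decision loop looks roughly like this.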
Recycling robots still work alongside people, but businesses have found that they can sort twice as fast. Leaders in the sector have developed robots that can distinguish different colors, textures, shapes, and sizes of plastics, making it far easier to sort rubbish.
This method can increase the quality of the output and, in some cases, double its resale value. As quality standards grow more rigorous, businesses are racing to find dependable solutions to this problem.
## Advantages and Disadvantages of Robotic Sorting
## A few pros of robotic sorting are:
- Reliance on manual sorters goes down: Materials recovery facilities across the United States are struggling to hire and retain employees. One AI-powered robot can replace the roughly 1.5 workers needed to pull cartons off the line, freeing those workers for tasks that require human attention and speeding up the process.
- Faster sorting off the line: Current robotic methods use a camera to examine every item coming off the line and draw on data accumulated over time to decide where it goes next. The robots continually update and expand their database of materials, which pays off later. As the advantages of artificial intelligence grow, robots may soon be able to make all needed adjustments on their own.
- Improved knowledge: Robotic sorters work by identifying particular types of material, building up knowledge over time about what kinds of materials arrive each day and how they differ. That knowledge can also be shared with other facilities nationwide that use robotic sorting techniques.
- AI's advantages: These robots can store and process information far faster than people can, adding to a database the recycling community can use to improve its results.
- Quality control: Robots pick plastics faster and more accurately and direct them to the right bins, which ensures that more plastic can be reused. Faster, more accurate "hands" mean better yields.
## Some cons of robotic recycling are:
- It's expensive: Currently, plants must process large volumes of material just to justify buying these robots. Over time, though, the robots can lower hiring costs until the returns outweigh the expense.
- Supply and demand: Buyers often won't commit to taking a recovered material until they know they can count on plants producing substantial tonnages. Plants, in turn, don't want to commit labor hours to a new stream until they know it's profitable.
- Robots require a constant power supply: This adds to an already costly investment and may scare operators away from integrating them into their sorting facilities.
## Conclusion
Many industries today harness robots to do tiresome work that would take people far longer to complete, and the recycling sector is no exception. Thanks to advances in AI, recycling robots can sort materials more effectively than people. Even though the technology is still in its infancy, we should not underrate its power; the full magnitude of its impact on the recycling sector is yet to be seen.
## Bibliography

- https://www.prescouter.com/2018/12/the-holy-grail-of-recycling-ai-powered-robots/
- https://insights.roboglobal.com/robots-are-tackling-the-next-big-global-challenge-recycling
| true | true | true |
Introduction They say that one man's trash is another man's treasure. This saying is especially true for waste that is made from materials that can be reused and turned into consumer products, e.g....
|
2024-10-12 00:00:00
|
2021-01-21 00:00:00
| null |
article
|
tumblr.com
|
Tumblr
| null | null |
28,931,205 |
https://apnews.com/article/coronavirus-pandemic-science-business-lawsuits-nuclear-weapons-15c0540921706abe2c0088100b644e8c
|
Dozens of US nuclear lab workers sue over vaccine mandate
|
Susan Montoya Bryan
|
# Dozens of US nuclear lab workers sue over vaccine mandate
ALBUQUERQUE, N.M. (AP) — Workers at one of the nation’s premier nuclear weapons laboratories face a deadline Friday — be vaccinated or prepare to be fired.
A total of 114 workers at Los Alamos National Laboratory — the birthplace of the atomic bomb — are suing over the mandate, saying exemptions have been unduly denied and their constitutional rights are being violated by Triad National Security LLC, the contractor that runs the lab for the U.S. Department of Energy.
It will be up to a state district judge whether to grant an injunction to prevent employees from being fired while the merits of the case are decided. A hearing was underway Thursday.
The lawsuit alleges that lab management has been harassing employees and has created a hostile work environment. The complaint outlines the experiences of many of the workers, including one who was screamed at for not being vaccinated and was told by a fellow crew member that he and his family deserved to die.
The lab has declined to comment on the lawsuit and has not answered questions about the current vaccination rate among employees, whether any exemptions have been approved or what will happen to employees who refuse to be inoculated when Friday rolls around.
The plaintiffs include scientists, nuclear engineers, project managers, research technicians and others who have some of the highest security clearances in the nation for the work they do. Some employees said many of those who could lose their jobs are specialists in their fields and would be difficult to replace in the short term.
Some of the employees who are part of the lawsuit have worked for Los Alamos lab for decades, while others are newer hires who have relocated to New Mexico from other states and countries. Thirty-four of them are named in the lawsuit and 80 have opted to remain anonymous, citing fears of retaliation.
While the lab said last week that more than 96% of workers had at least one shot, it’s not known yet how many have received a second dose. Some workers have estimated that the percentage of those fully vaccinated by Friday will be lower.
Some employees have estimated the lab could lose anywhere from 4% to 10% of the workforce because of the mandate.
“In any organization there are people, not always recognized, who quietly make the work of others possible. Lose them, and you are in trouble,” said Greg Mello of the Los Alamos Study Group, a watchdog group that has been monitoring lab activities for years.
The lab currently employs nearly 14,000 and is among the largest employers in New Mexico. It’s also located in a county that is among the most affluent in the U.S. because of its high population of Ph.Ds.
Attorney Jonathan Diener, who is representing the workers in their lawsuit, said the case includes a wealth of scientific information to consider, but he was hopeful the judge would make a decision soon because people’s lives stand to be upended.
The lawsuit cites statements made over the last year by top officials in the U.S. and with the World Health Organization in which they noted that there is more to be learned about how the vaccines reduce infection and how effective they are when it comes to preventing infected people from passing it on.
“The fact that the vaccines have only been shown to reduce symptoms of the recipient and not prevent infection or transmission is a fact extremely important to plaintiffs’ claims,” the lawsuit states.
Since the lab’s vaccination rate already is thought to be high, Mello said forcing the few holdouts to get shots would make no epidemiological difference.
“If LANL doesn’t have herd immunity at this point, there is no basis for the mandate. LANL is not being scientific,” he said.
Some of the workers have raised similar arguments, saying the high degree of scrutiny that is required of them when working with nuclear weapons or other high-level projects is not being applied on the vaccine front despite the lab’s extensive modeling work for the state on spread and other COVID-19 related trends.
Lab Director Thomas Mason has said the pandemic has had a serious impact on the lab, citing higher numbers of COVID-19 cases in unvaccinated employees. However, employees who are pushing back said the cases among the unvaccinated would naturally be higher because the lab had removed vaccinated employees from its regular testing pool.
At Sandia National Laboratories, based in Albuquerque, all employees and subcontractors must be fully vaccinated by Dec. 8 or file for an exemption by Friday. Lab managers made COVID-19 vaccinations mandatory for new hires on Sept. 13.
So far, more than 88% of Sandia employees, interns, post-doctoral staffers and contractors at sites in New Mexico and California are fully vaccinated.
In New Mexico, nearly 72% of people 18 and over are fully vaccinated. That percentage hasn’t moved much in recent weeks as more people are pushing back against the vaccines.
| true | true | true |
Scientists and other workers at one of the nation’s premier nuclear weapons laboratories face a deadline Friday — be vaccinated or prepare to be fired.
|
2024-10-12 00:00:00
|
2021-10-14 00:00:00
|
https://dims.apnews.com/dims4/default/861ec3f/2147483647/strip/true/crop/3000x1688+0+238/resize/1440x810!/quality/90/?url=https%3A%2F%2Fstorage.googleapis.com%2Fafs-prod%2Fmedia%2F460f02da7e7347a3b9a682a4f624d10e%2F3000.jpeg
|
article
|
apnews.com
|
AP News
| null | null |
3,754,519 |
http://dribbble.com/shots/485988-P-ixel-is-coming-soon
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
38,318,888 |
https://www.unqualified-reservations.org/2013/03/sam-altman-is-not-blithering-idiot/
|
Sam Altman is not a blithering idiot
| null |
## Sam Altman is not a blithering idiot
TL;DR Sam Altman is not a blithering idiot. That’s what’s so scary. When the most elitest minds of a society are full of blithering idiocy, that society is probably doomed.
It’s normal for geniuses to be crazed. But Sam Altman (whom I don’t know, but SF is a small town and I probably know someone who knows him) isn’t a genius and he definitely isn’t crazed. He didn’t analyze any ten-dimensional gauge theory or predict any chiral axigluons. He’s just an entrepreneur, i.e., a natural leader, a commodity which especially in our present dire straits is or at least ought to be worth a whole Bolivian prison full of cocaine-tooting particle physicists.
What I find exceptionally terrifying is that Altman’s blithering idiocy
looks and sounds *exactly like sober good sense*. Read it. You’ll agree.
The basic problem with our society is a disconnect between consensus reality and actual reality. We actually have no shortage of natural leaders. But they cannot actually lead us anywhere. They are operating in consensus reality rather than actual reality. Their joysticks are not plugged in. When the consensus is nonsense, sober good sense is nonsense. Nonsense is no use to anyone.
Really? Can blithering idiocy sound like sober good sense? Let’s try some:
> All of that said, in an absolute sense I’d much rather live in the world of today than 1950—it’s tough for me to imagine living in a world without the Internet. However, in the same way that one can feel acceleration but not velocity, people seem more sensitive to the annual rate of improvement than the absolute quality of life. So even though people should be happier in an absolutely better world, no one wants to stand still on the hedonic treadmill.
>
> Most of us want our lives to get better every year—the hedonic treadmill is a pain that way.
The world of 1950! Oh, honey.
Okay, let’s apply a *gross* reality check. You’re an alien. You’re
observing Earth with an infinitely powerful telescope from Alpha
Centauri. You have a simple question. Since 1950, has human
civilization—or American civilization, which amounts to pretty much
the same these days—advanced or declined?
Apparently the easiest way for Sam Altman to answer the question is to
trade it for a different one. He is not alone in this. He asks: since
1950, has human *technology* advanced or declined? Clearly, the
alien, you, I, and Sam Altman all have the same answer to this question.
Any question with an obvious answer is a stupid question. “Is an iPad more advanced than a Smith-Corona?” is a stupid question. Who asks stupid questions? Obviously, blithering idiots.
But we can compose an interesting question by *factoring out* the
stupid question. Which world would Sam Altman rather live in? 2013,
with iPads and teh Internet? Or 1950—with iPads and teh Internet?
In a sense, this 1950 is just as real as the “real” 1950. Neither exists. Sam Altman cannot pack his bags and move to either the real 1950 or my imaginary super-1950. Both exist only as thought experiments. It is not hard to construct or define the super-1950, though—one run of a time machine, with a printout of Wikipedia, would be pretty much all the real 1950 needed. Send the technology back to 1945, and you’ll have iPads by ’55 at the latest. Those guys got things done.
The interesting (and scary) question this thought-experiment asks is
whether, *aside from technical progress*, human civilization has
advanced or declined since 1950. In actual reality, this too is a
stupid question. The answer is no less obvious—I assert. But
consensus reality thinks I’m crazy. So with stupid we must begin.
Aside from technical progress, has human civilization advanced or declined since 1950? By what criterion? If Moore’s Law is not our benchmark of a successful civilization—what is?
Sam Altman has his answer. The hedonic treadmill! Never having sold a startup for $43 mil, there are many subtle, refined and delicious forms
of hedonism of which I am as innocent as a cat of tennis. But allowing
(for a moment) this implicit assumption that the purpose of civilization
is the *satisfaction of human desires*, we can refer only to
Maslow’s
pyramid of needs.
At the base of the pyramid are air, water and food. How does 2013 do at
supplying oxygen, hydration and nutrition? How did 1950 do? Just
fine. Gentlemen, a draw. (The food of 2013 is certainly *tastier*
than that of 1950, at least in America. But this is not the base of the
pyramid.)
Next up: safety and security. All right, gentlemen. Let’s have another thought-experiment.
Picture the Earth—our beautiful, blue, spinning globe. Take all the habitable land area and color it white—as a neutral background for our thought experiment.
Now, select the subset of this beautiful planet on which a sober,
sensible, civilized person, such as Sam Altman, would consider it
prudent and safe to wander,
“on
foot and alone,” carrying his iPad, at night. Leave that part white. Color the other part brown. Then, from the brown subset, select the
further subset in which Sam Altman, carrying his iPad, would not
consider it prudent and safe to wander *in the daytime*. Color
that part *black*. (Why can’t Google Maps do this?)
Then do the same for Sam Altman’s grandfather, in 1950, with his
portable Smith-Corona. Then, repeat the exercise for 1900. (Part of
the reason this is such a useful mental exercise, and unfortunately such
a difficult one, is that it requires you to actually *know* what
the world was like in 1950 or 1900. If your way of getting this
information starts with statistical tables, ur doin it rong. There are
these things called “books” which will help you out.)
If you perform this exercise accurately, or at least if you get the same results as me, you’ll see a 20C quite indistinguishable from Stage III melanoma. And this progress continues, to rousing applause and general self-congratulation, right up into our own dear official NYT-approved 2013. Hey, been to Egypt lately? What’s that Google guy up to these days? Is he still tweeting?
Speaking of Twitter, I experience this reality check in person every day. My daughter’s preschool is at 10th and Howard, two blocks from Twitter HQ. It’s actually not too hard to park at 10th and Howard. So the average number of what my daughter calls “dangerous persons,” and what to an ordinary grownup are basically indistinguishable from zombies, that I and my bubbly daughter have to navigate around between the car door and the security guard is… I don’t know… 1.3? 1.7? It can’t be much more than 2.
They’re just zombies and really they’re not that dangerous. (I think
even zombies can sense that when I’m with a child, I am more dangerous
to them than the converse.) On the other hand, a couple months ago a
techie (not a Twitter employee) was hit on the head and killed,
presumably by zombie or zombies unknown,
at
11th and Mission. Right around the corner from Tweet Central. Besides
a couple of perfunctory boilerplate stories, and of course the victim’s
wife, no one noticed. No one cared. Why would they? Their iPads have
*4.7 million pixels*.
Suffice it to say that if you don’t know how 1950 would have reacted to exactly the same event, in exactly the same place, you don’t know anything about 1950.
Actually, my daughter’s preschool is *literally in a ruin*—that
is, a (nicely renovated) space which used to be part of a Catholic
church. (The preschool is the former convent. The rest of the church
remains a
ruin
proper.) Where are the people who used to pray in this church? They
*fled*. Why? Because they were
*afraid for
their physical safety*.
I know, I know. It’s
gauche to even
bring this kind of stuff up. It’s not part of our consensus reality. It’s not part of our consensus *history*. When it comes to
*actual* history, however, the global decline of security in the
second half of the 20th century is (I assert) the salient phenomenon of
our era. Much as the fall of the Roman Empire is the salient phenomenon
of 4th-century AD Europe. (Note that while our historians would
desperately love to find one, just one, member of the exquisitely
literate 4th-century AD European culture who would even *mention*
that the Roman Empire was falling apart, no such luck. It’s all
wall-to-wall Prudentius and Sidonius.)
Consider our alien in Alpha Centauri. His telescope is just a
telescope. He no speaka the English. He is absolutely invulnerable to
our most respected propaganda authorities and in particular has no way
to read the great Harvard scholar Steven Pinker—truly a Prudentius for
our age—who has discovered through elaborate statistical models that
the 20C was not, in fact, the golden age of
titanic
mass murder and
brazen
petty crime, but the dawning of a new age of Aquarius in which all will
have peace and prosperity. (Even Pinker is a piker next to the Times,
which has published at least 547 stories about NYC’s miraculous conquest
of its
blatantly
managed crime statistics, and precisely 2 about the
hospital
statistics which show a parallel doubling in actual assault victims. It’s always so easy to lie to those who want to be lied to—you hardly
even *need* statistics.)
But his is an excellent telescope. So our alien can *see* the fact
that many parts of all, and all parts of some, American cities that were
thriving in 1950, have now fallen into chaos and ruin. On the other
hand, he can gaze admiringly at the thriving cafes of University Avenue
in Palo Alto, Ausonius’
Moselle
born anew, full of beautiful young people adoring the perfectly
antialiased individual subpixels of their new Retina iPads. Which of
these phenomena will he find more relevant? Which is the narrative,
which the distraction?
Continuing the comparison to the fall of Rome, one of the interesting
features we see is that while technological competence is certainly an
indicator of a successful civilization, it is also a *lagging*
indicator. Civilization produces technology, not the other way around. When civilization falls, technology is not the first but the last thing
to fall. Yes, technology does decline in the fall of Rome. No, it has
not declined in our era—though its advance has certainly slowed a
great deal. But the centuries of European technology decline are
400–700 AD, a point at which surely any historian would admit that the
Roman polity has already been going to the dogs for two centuries
minimum.
Am I too hard on Sam Altman? After all, he admits there’s a problem. He doesn’t admit *this* problem—but isn’t his point basically the
same? That something isn’t working? My America is going to the dogs
and lies in ruins all around me. His America has just turned the
friction up too high on its hedonic treadmill. But it’s the same, isn’t
it? Sort of?
Realizing that something in the 20th-century model of governance, as taught by the best and brightest of Harvard, Stanford, the NYT and other
fine institutions of papally infallible veracity, isn’t working out
*quite right*, is indeed a step in the right direction. Everybody’s going to have their own particular beef. Mine, as we’ve
seen, is that 75 years of this rigorously scientific system of
government has reduced what was once America’s
fourth-largest city
to a demon-haunted
slum—and while extreme, this outcome is anything but an exception.
But so far as I can tell, from the Sam Altman perspective, this is just a nitpick. Or maybe it’s really sad, but would have happened to any regime. Napoleon, Cato the Elder, Pericles, Peter the Great—morons! None of them could have done a damned thing to save Detroit, Oakland, Baltimore, etc. Not where Harvard failed! It’s always easy to attribute bad outcomes to irresistible forces of nature, acts of God, etc. Even if you can’t identify the force of nature and you don’t believe in God.
No—Sam Altman is concerned with something entirely different. He is
concerned with a *number*. This number is about 2, he asserts,
when it should be more like 5. He calls this number “growth.” Or to
be more exact, an even more interesting number, “real growth.” Growth
is of course a good thing, especially if you’re a startup, but not if
you’re a tumor.
Once again, this remarkable
number—GDP
growth—is an essential part of the 20C tradition of economic
governance. Where would we be without
Abba Lerner? Well,
I don’t know. Where would Detroit be without Abba Lerner? Where was
Detroit… before Abba Lerner? Perhaps the problem with Sam Altman
is just that everything he knows about economics he learned at Stanford,
whereas everything I know about economics I learned by watching
*Hardcore Pawn*.
What does this number actually mean? It clearly means
*something*. We know it goes up when people are happy, generally,
and down when they are sad. Perhaps the great historical puzzle of the
20C is the need to explain this strange phenomenon, broadly defined, of
Keynesian economics—which on the one hand seems to
make
no sense at all, but on the other hand seems to, kind of, work. At
least on a local level.
Well, okay, I lied. Yes—*Hardcore Pawn* is very important to me. But
really I’m a mercantilist, and everything I know about economics I
learned by reading
Friedrich List. Well, him and
Mises. Odd
bedfellows I know. But I really believe there is nothing in (to use its
old name) political economy which is outside the philosophy of these two
fine Teutonic gentlemen,1 opposites though they were. Please allow me to
explain “growth” in this unusual Austro-mercantilist idiom.
Growth is the change in a number called “GDP.” To an Austro-mercantilist there are two kinds of GDP, which we call AGDP (actual GDP, i.e., an actually measured number) and FGDP (fudged GDP, which is AGDP multiplied by a mysterious fudge factor). With its usual fine subtle sense of irony, 20C economics calls AGDP “nominal GDP” and FGDP “real GDP.”
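In symbols (a sketch of the two definitions above; the "mysterious fudge factor" is just the reciprocal of the official GDP deflator):

$$\text{FGDP} = \text{AGDP} \times f, \qquad f = \frac{1}{\text{deflator}}, \qquad \text{real growth} = \frac{\text{FGDP}_t}{\text{FGDP}_{t-1}} - 1$$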
20C economics is especially adept at accounting identities. The late great Murray Rothbard once parodied one of its finest absurdities, the equation of exchange, as: “the amount of water that runs off the ground is the same as the amount of rain that falls from the sky.” Which is true. But not particularly useful to the weather forecaster.
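For reference, the identity Rothbard was parodying is the quantity-theory equation of exchange (standard notation, not in the original text):

$$MV = PQ$$

where $M$ is the money stock, $V$ its velocity, $P$ the price level, and $Q$ real output: total spending ($MV$, the rain) must equal total receipts ($PQ$, the runoff).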
So let’s apply one of these identities to understand AGDP. Forget foreign trade, which we’ll add back in in a sec, and consider an isolated unit like a planet. How do we measure the economic production of the planet? We arbitrarily divide economic actors, with some difficulty but not too much, into “consumers” and “producers”—i.e., peasants and businesses. AGDP is simply the total sales of all businesses to all peasants.
I.e., if all businesses were One Giant Business—you can see the attraction of this approach to the mid-20C central planner, snorting boa-thick lines of pure administrative Bolivian as he plays SimCity with real peasants—AGDP is simply the gross revenues of this monster. And because the amount of water that runs off the ground is the same as the amount of rain that falls from the sky, AGDP is also the number of dollars that consumers spent this year at OGB.
I like this approach because it reduces the veiled mystery of “growth” to a crass urban reality that everyone can understand. What does “growth” mean? It means: “spend more, comrades!” If growth is good by definition, spending is good by definition. Because the amount of water that runs off the ground is the same as the amount of rain that falls from the sky. Aggregate demand is your friend. Spend more, comrades! It’s good for the economy.
How do you increase AGDP? There are two and only two ways. One, give the peasants more dollars (and/or reduce their debt). Two, make them less thrifty and more prodigal. Why is AGDP not increasing fast enough for the likes of Sam Altman? Or for that matter, Paul Krugman? Because both these levers are already pushed down to the floor. Or at least the second one is. The first one… we’ll get there.
But wait—why increase AGDP? Why is more spending inherently better than less spending?
There are two answers to this question, corresponding to your understanding of the purpose of an economy. The first is the false position held by Austrians, and by Keynesians when they want to confuse you: the belief that the purpose of economic activity is the satisfaction of human desires. More spending means more production, and more production means more satisfaction. This perspective, of course, originates with 18th- and 19th-century liberals and utilitarians. You can see it all over Sam Altman’s hedonic treadmill.
This false position leads us down the path of FGDP. Let’s explore.
Why the fudge factor? Because our goal is not to merely assess the
*price* of all goods produced, a mere number which can be measured
by mundane if fallible techniques, and still worse can actually be
*defined*—but their *value*, that is, *hedonic
utility*. This is an almost spiritual and essentially qualitative and
personal assessment. In usual 20C style, we damn the torpedoes and jam
this subjective quality into an objective quantity by any means
necessary. Otherwise, how would we model it?
For example: how much more fun of a computer is an iPad than an Apple II? Is it 37.6 times more fun? Or 198.2 times more fun? Or even 547.9? It would seem clear, to anyone not a blithering idiot, that any process which claims to be able to derive any such number is retarded at best and may well constitute felony math abuse.
Not at all! The Bureau of Labor Statistics is, in fact, in possession of exactly this figure. Here’s how they do it. Since Apple was selling computers continuously from Apple II to iPad, we can look at the period when both the Apple II and Apple III were on sale, divide the list price of the Apple III by the Apple II; later, the Mac 512K by the Apple III, and so on until we reach the iPad. This process is called hedonic regression. It is thoroughly official—approved of by both Harvard and the US Government. So who’s the blithering idiot now?
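A toy version of that chaining (a sketch only: the prices below are invented placeholders, and actual BLS hedonic regression fits price against product attributes rather than naively chaining list prices):

```python
# Chained "quality ratios" between overlapping Apple models, as caricatured
# above. All prices are invented placeholders, not BLS or Apple figures.
overlaps = [
    ("Apple II -> Apple III", 1298.0, 3495.0),
    ("Apple III -> Mac 512K", 3495.0, 2495.0),
    ("Mac 512K -> iPad",      2495.0, 499.0),
]

quality_ratio = 1.0
for _step, old_price, new_price in overlaps:
    quality_ratio *= new_price / old_price  # "how much more computer is this?"

# Note what the parody is pointing at: naive chaining telescopes, so the
# "hedonic" ratio collapses to last price / first price.
print(f"an iPad is 'officially' {quality_ratio:.2f}x the computer an Apple II was")
```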
What’s especially awesome is to imagine a world with a sane financial
system in which the quantity of money (perhaps Bitcoin) is fixed, and so
are the time preferences of consumers. As a result, AGDP does not
change. AGDP growth is *zero by definition* in this model.
But in such a world, nothing prevents the hedonic quality of technology from increasing. Computers can get better. Indeed they should. And FGDP can increase—though all FGDP increases are due to the GDP deflator, i.e., hedonic fudge.
Now, consider the logical situation of the Altman position (also held, I think, by Peter Thiel—and certainly not uncommon among otherwise sensible thinkers) in this constant-money, constant AGDP model. The argument is that we want “real” (FGDP) rather than “inflationary” (AGDP) economic growth, and the only path to this “real” growth is technology advancement.
True enough! In fact, it’s so true that it’s… a tautology. With constant AGDP, we’ve removed all the variables from the equation. Oops.
When AGDP inflation, which none of these technology prophets wants to talk about, falls out of the equation, the argument that “we need technical progress to spur growth” turns into… simply the hedonic treadmill. It expands to “2013 isn’t fun enough, so let’s get more and better technology to make 2014 more fun than 2013, because otherwise our fun sensors are all burned out and 2014 will be boo-oring.”
Well, sure! I, for instance, am seriously confused about why everyone in 2013 is still wearing headphones with wires. What’s with Bluetooth A2DP? Doesn’t it work? And definitely, they should speed it up with that Google Glass thing. If that ain’t hedonic I don’t know what is. Especially when it comes to pr0n. Certainly the BLS is falling down on the job if pr0n isn’t part of their GDP deflator. OMG, pr0n—what could be more hedonic than pr0n? Google Glass + pr0n—that’s what. As Glenn Reynolds puts it: faster, please!
But, you know, this “2013 isn’t fun enough” argument really doesn’t tell us anything at all about why Detroit is a ruin. It doesn’t tell us why Baltimore is a ruin. It doesn’t tell us why Oakland is a ruin. It doesn’t tell us why Stockton is sinking into the Sacramento River, or why Newark could absorb all of Mark Zuckerberg’s personal net worth without becoming anything at all like Cherry Hill.
What’s happened here is that, starting with an absurdity, we have
reasoned ourselves into nonsense. Or at least, into pr0n. Our premise
was that the purpose of economic activity is hedonism, i.e., the
satisfaction of human desires. Certainly, technology can satisfy human
desires. Pr0n can too, and so can heroin. (In fact, it would be
interesting if the BLS could determine which is more hedonic: the
original iPad, plus heroin, or the Retina with 4.7 million pixels and
maybe just a little codeine to take the edge off. This could be
computed easily, and with impeccable quantitative rigor, by combining
Apple price series with the latest data from Silk Road.)2
This absurdity happens to be the consensus reality. Yet it is no less
absurd. Let’s put on our John Carpenter sunglasses and look at the
*real* reality, terrifying though it is. Surely if you can read
all the way down in an essay this long, you can handle the real reality.3
In actual reality, we are trying to answer the question: how should America be governed? We are therefore reasoning from the perspective of the State. Since sovereignty is conserved, the State is always and everywhere absolute and omnipotent. Therefore, the hedonic satisfaction of its citizens, who are in fact its slaves, is not and cannot be a goal. It may be a means to an end, of course. As when we administer heroin through the barracks water supply to reward Camp #127 for exceeding its uranium production targets three months in a row.
Well, see. I told you reality was scary. I don’t actually believe absolute government, which is always and everywhere the reality, implies totalitarian government. USG is an absolute government as well. I am not a big USG fan. But I don’t seem to find myself in the uranium mines.
In general, the classic 20C phenomena of totalitarianism appears not in absolute governments that are secure and invulnerable, but in extremely weak ones that in consequence have to take extraordinary measures to repress their enemies. This (among other things) is the difference between Louis XIV and Stalin. USG’s great virtue is that its monopoly of power is far more secure than Louis XIV’s, so it doesn’t have to give a damn what I post on my stupid blog.
But if we are analyzing real governments in the real world, our financial analysis has to be rooted in political reality. The political reality is that “citizens” are not owners of their government, but rather assets—in other words, slaves. Our only hope is for a regime that’s more Thomas Jefferson and less Simon Legree. Fortunately, as we’ll see, this analysis aligns the financial interests of the State with our own interests as human beings.
What are the financial interests of the absolute State? To maximize the
value of its productive assets. The State’s assets are (a) land and
buildings, (b) equipment, and (c) human chattel. We understand how to
value and manage (a) and (b) just fine. But most of its equity consists
of (c)—an asset not really taught in most business schools. (Fortunately you still have those yellow old stacks of
*DeBow’s Review*.)
There is another way to ask whether, excluding advances in technology (which do fall under (c), since technology is a human ability—but hard to monopolize), America is a more valuable nation in 2013 than it was in 1950. We can ask: is the average American a better human being than his or her ancestors of 1950? I.e.: has the USG cultivated its human capital, or wasted it?
For example: is this person—this asset, this slave—a harder worker? We’ll assume the State cannot change his IQ, because I have seen no evidence that it can—but is he more knowledgeable? Is he more moral, more physically healthy, wiser and more prudent? A better father, a better mother?
Again, I believe the answer is obvious. There are certainly some ways in which the average American of 2013 is a better person than his grandfather. He is probably a better feminist, for instance. He is much less likely to be an anti-Semite, homophobe, etc. These factors don’t really affect his economic value, but perhaps they’re worth mentioning anyway.
On the other hand, the American of 2013 is *much* more likely to be
a meth-head, a thug or ho, a worthless trustafarian slacker, etc., etc.,
etc. Especially when we look at non-elite ethnic
subpopulations—“cracker” Scots-Irish, African Americans, etc. (though if we listen to
Ron Unz, even the Jews are going to the dogs)—I don’t think any serious
person could really claim that the average American is superior as a
human being to his grandparents. You might as well assert that the
original iPad was teh greatness but this Retina crap they’re making
these days is just lame.
What’s notable about this interpretation is that, again, your interests and your government’s are just about perfectly aligned. You don’t want to be a heroin addict. Washington doesn’t want its slaves to be heroin addicts. You want to be a better person—more informed, more reliable, more capable. As a better person, you are a better and more valuable capital asset. You augment your government’s market cap. Back to Sam Altman:
> Most of us want our lives to get better every year—the hedonic treadmill is a pain that way.
As “hedonic” implies, “better” means “more fun.” Obviously this is the attitude you’d expect from someone born in the Bush administration. Could it be any other way?
Us old Nixon fogeys have pretty much exhausted the hedonic treadmill. There’s not much left of your hedonic treadmill after the 17th time
cleaning up baby hork in the middle of the night. At that point (yes,
new parents, it *does* get better) a nice glass of wine and a
dinner out with your wife is more or less the hedonic equivalent of a
meth-fueled threeway with strippers.
Most of *us* want to become better people every year. We’re pretty
confident, perhaps falsely, that this will lead to more hedonic rewards
in the long run or at least has the best chance of doing so. But this
isn’t the goal. The goal, believe it or not, is to become better
people. And ideally our children will be even better than us. So
again—the market cap goes up.
Everything I’m saying here (including the economics) was said by Carlyle
more than 150 years ago, notably in
*Chartism*. The apotheosis of the hedonic principle is the immortal
*Pig-Philosophy*. Briefly, Carlyle tells us, the difference between man and beast is that
maximization of hedonic utility is always and everywhere the method of a
beast. Not coincidentally, it is also the method of a toddler. And it
is also the method of the Austrian economist, although he at least
realizes that the “utility function” is qualitative and subjective
rather than quantitative and objective, and adds
time preference.
To Mises and Rothbard, the human being as economic actor is a very smart pig, often willing to exchange less slop today for more slop tomorrow. This is not at all the view of Carlyle—nor is it the view of List. Of course, from the economic perspective of the State, slop production is all that matters. But the human being is not only an economic actor—nor is the State only an agency of production. What we’d really like to see is a model in which there is no tension between Pig-Philosophy (which must be acknowledged as true) and actual human civilization.
We are now in a position to attack the mystery of “growth.” Why, if economic hedonism is such a shallow and easily debunked philosophy, do so many people take “growth” so seriously?
All subterfuges and evasions to the contrary, the basic economic problem faced by 20th-century governments (somewhat less in the 19C; far more in the 21C) is unemployment. The cause of unemployment is simple: in an industrial economy, most human beings are economically useless. They are not productive assets at all. They are liabilities. For a brief transitional period, they could still be used as industrial robots. This period is close to its end.
For instance, suppose a Sam Altman were given plenary power over the US economy, reorganized into One Giant Business. His mission: cut costs, while maintaining production. His methods: eliminate white-collar busywork (real estate agents, lawyers, medical billing clerks, etc.); replace human industrial robots with actual industrial robots; and when all else fails, replace high-cost American labor with low-cost Indians housed in barracks and fed only on lentils, Dubai-style.
Does anyone doubt that aggressive and autocratic application of these methods could reduce US employment by 5 to 10 percent a year for at least a decade? Indeed, as the Singularity nears, the future of work becomes clear—there is an IQ threshold below which any human, no matter how cheap to feed, is a liability. Classic unskilled manual labor remains productive in some domains—gardening, housecleaning, and so on. Perhaps this will be true for another decade or two. It will not be true indefinitely. As the machines get smarter—assuming they get smarter—the threshold will rise. Eventually, the only human beings worth employing will be Sam Altman and his friends. Then, at last, even they will be laid off. Universal unemployment is the definition of the Singularity.
Now, it’s important to note that from a strictly *economic*
perspective, there is no problem here at all. The absolute State as
Pig-Philosopher has a simple answer. As Stalin put it—no person, no
problem. These surplus human robots can simply be sacrificed, like
worn-out lab mice. At this point they stop being liabilities and become
assets again, since they can be sold as organs or at least organ meats. Certainly, when the State itself becomes a computer, this logic will be
irresistible. Let’s call this approach to human liabilities Solution A.
From a *political* perspective, Solution A is a nonstarter. Hopefully it will always remain a nonstarter. If we are entirely wedded
to Pig-Philosophy, we can explain this by saying that sacrificing human
liabilities (especially within earshot) actually damages the capital
value of the non-sacrificed pigs, because it terrifies and demoralizes
them. But is this true? Might it not motivate them, instead? Whatever. There are more things in my philosophy than pigs, and yours
too, and while I am quite willing to take a King I draw the line
at a Computer-King—especially if the Computer-King is programmed
entirely with Pig-Philosophy. Since I am more tolerant in this regard
than most, I just don’t see Solution A happening.
We move on to Solution B, which I think is the solution most people believe in. Work? Who the hell wants to work? Work is anti-hedonic by definition. If it didn’t have negative utility, it wouldn’t be work. So, it’s supposed to be a problem that in the future, work will be obsolete, and we’ll be able to produce goods and services without any human labor at all? That doesn’t sound like a problem to me. It sounds like a victory.
The problem with Solution B is that we’ve already tried it, quite extensively. You see Solution B every time you go to the grocery store. Next to the button marked “Debit/Credit” is one marked “EBT.” Ever pressed that one? Even just by mistake? It’s the Solution B button. America has entire cities that have moved beyond anti-hedonic labor disutility and entered the gleaming future of Solution B. One of them is called “Detroit.”
Solution B is not the culmination of human civilization, it turns out, but its destruction. Even in terms of mere Pig-Philosophy, it is destructive, because it ruins a human asset. If we appraise humans as robots, we see that this is a special kind of robot: it rusts up if not continually operating. As beasts, we are beasts who evolved to work. Our species achieved world domination as a result of our capacity for work. To feed and entertain a human being, without requiring productive effort or at least some simulation of it, is in the end just a way to destroy him—not too different from Solution A.
There are some human beings, Sam Altman presumably among them, who are
natural aristocrats. They can acquire the resources they would need to
never work again, and *still continue to work*. While this is
lovely, we need to face the reality that the human species is what it
is. The population does not consist largely or even significantly of
natural aristocrats. Not, for instance, in Detroit. “Dead corpses, the
rotting body of a brother man, whom fate or unjust men have killed, this
is not a pleasant spectacle; but what say you to the dead soul of a man,—in a body which still pretends to be vigorously alive, and *can drink rum*?”4 Carlyle knew all about *Hardcore Pawn*.
Beyond the creepy A and B, all solutions to the problem involve a State which compels, through economic or other means (it hardly matters), humans who are not economically productive to submit to work or some simulation thereof. For instance, especially with the Oculus Rift, technology is beginning to present us with a Solution C, which combines physical imprisonment with virtual enrichment. It’s not clear what a life-scale virtual environment would consist of, but it would surely involve work or something like it. I don’t find Solution C particularly creepy, but I may be alone in this. It is certainly less creepy than A or B. I suspect that if it was done right, the customers would vastly prefer it to their present vile circumstances. But I also suspect it will never happen.
Beyond A, B, and C, we enter the domain of solutions which involve distorting labor markets to integrate these human liabilities into some semblance of a normal institution of production. Solution D is the obvious approach and has been practiced by regimes around the world since Cheops was a little boy: to keep the peasants fit, healthy and happy, pay them to do otherwise unnecessary work. Like, you know, building pyramids.
There is an apocryphal anecdote which illustrates Solution D perfectly. It probably never happened. A famous American economist—Milton Friedman, perhaps—is visiting China, perhaps in the ’80s, and sees a construction project where workers are digging a canal, with picks and shovels. “Why not use bulldozers?” the economist suggests.
“But Professor Friedman,” his host points out, “this is a jobs project.”
“Oh!” says the apocryphal professor. “Well, in that case, why are they using picks? Why not give them spoons?”
While this is meant to illustrate the supposed idiocy of Solution D, it
actually illustrates the design space. The purpose of Solution D is to
lose as little money as possible, while maintaining the human quality of
your assets and preventing them from degenerating into *Hardcore Pawn*
customers, 10th St. zombies or other revolting parodies of the human
condition.
Digging ditches with appropriate hand tools is a simple and almost ennobling, in its own small way, form of manual labor which is ideally suited to the condition of most humans, delicate aristocrats perhaps excepted. (It is possible to construct makework for delicate aristocrats, but it takes more imagination.) Digging ditches with spoons is a degrading punishment appropriate only for refractory pedophiles. Since there is never any shortage of ditches you’d rather have than not, there is no need to issue spoons—unless the purpose of the project is exemplary degradation.
So what’s the problem? Why isn’t USG sending its millions of gangstaz, its hundreds of thousands of zombies, and its uncountable hordes of ordinary young people who just can’t find a damn job, to self-improvement-through-labor facilities where they create gleaming new national parks, which no one ever visits, on the North Slope of Alaska? It might seem illiberal, but it can’t be—FDR did it.
In general, makework programs are restricted to strong governments. Ours is a large government, but by no means a strong one. FDR’s was a strong one. When a strong government wants to “create jobs,” it just hires people. If the product is useless and the work is just makework, it says so. The strong are confident and can tell the truth. A weak government has to shroud the truth in a cloak of lies—it has to convincingly pretend that our great nation may be doomed without substantial and immediate improvements to “Gates of the Arctic National Park.”
Oddly, makework, a superior solution by any standard, is a softer
political target than good old Solution B welfare. Makework has to be
defended by lies, whereas welfare is indefensible. The defenders of
welfare are therefore forced into the brazen fortress of property—they
and their clients must assert that they are *entitled* to these
emoluments, which is of course the one thing they ain’t. But, having
established adverse possession, they make a pretty good go of retaining
it.
Politically, the ideal way to apply Solution D is to make the actual
work as separate as possible from the source of funding. This brings us
back to AGDP and “growth.” (You didn’t think we were
*digressing*, did you?)
Politically, the best way to fund and operate makework is to make it indistinguishable from the rest of the economy. If you tax productive citizens $1T a year to employ 10 million Americans to build, with hand tools, a 1:1 copy of Rome at the base of Mount Igikpak, you create a giant political target for stupid unruly peasants who persist in not understanding the genius of Lord Keynes. If instead, you manage to inflate aggregate consumer spending by $1T, you create ordinary jobs for ordinary Americans all across America—because where does that $1T go?
Some of it winds up as profit, of course—but most of it goes into the costs of production, i.e., labor costs. I.e., creating jobs. The love is spread all over the country as a delicious buttery layer of prosperity. No one votes against prosperity. Ever.
Of course, you’re still annoying the people you tax. Nor is it possible to pump tax money directly into consumer spending. It has to go through government spending instead. This makes two targets: the taxation and the spending. It can be managed. But it also can be improved.
Suppose you borrow instead of taxing? This is better because no one feels the bite. However, government borrowing—which, as we’ve discovered, is identical in every way to mere “money printing,” i.e., equity issuance, since fiat currency is government equity and there is no real accounting distinction between Fed and Treasury, or between Fed notes and Treasury obligations—still has a downside. The downside is that it’s reported and has to appear in the newspapers—alarming the stupid peasants who persist in not understanding the genius of Lord Keynes, instead believing that Washington has to pay back its debts as though it were some stupid peasant.
Moreover, even the most dedicated bureaucrat who is more Keynesian than Keynes himself feels a slight sense of alarm about the indiscriminate use of this privilege, because somewhere in the back of his reptile brain he understands what he is doing: imposing a stealth capital levy on the wealthy, by diluting the dollar supply. Suppose dollar holders evaded this tax, which after all is a tax, by switching to some other monetary asset? Gold? Bitcoin? Honus Wagner baseball cards? Besides, if the dollars have to pass through Washington, the spending side remains a target.
Therefore, absolutely the best way to inflate AGDP is to *increase
private-sector capitalization*, generating a
wealth effect. Moreover, there are two ways to do this, since there are two forms of
capital asset: debt and equity. Debt is dangerous because it has to be
paid back. More on this in a moment. So we have a second-best way to
inflate AGDP, convincing the private sector to borrow more; and a
first-best way, making the stock market and real estate go up.
The latter is solution D-1, the absolute bestest way (from a political perspective) to create jobs, and the mainstay of the Greenspan-Bernanke era of American prosperity. In short, our actual reality. The former is solution D-2, as practiced in the great nation of China. (And, wonderfully, Angola.)
Is there a downside? Of course there is. Capitalization (debt or equity) is supposed to reflect actual capital. When you increase capitalization (debt or equity) without a corresponding creation of productive assets, you are storing up trouble. Excessive market cap is like nuclear waste. If it gets out, as in 2008—you have a problem. Fortunately, this problem can always be solved by solution D-3, printing money to buy nuclear waste.
Note that all of this is a question of financial alchemy. None of it
has *anything at all* to do with the resolution on your iPad, or
other technology ingredients that go into computing the FGDP hedonic
deflator. It is all a matter of AGDP. It’s true that AGDP inflation is
reflected in FGDP, but the number that matters is AGDP—the quantity of
money, not the quality of products. If you try to solve the problem of
inadequate AGDP inflation by improving the GDP deflator, i.e., by better
technology, you are committing cargo cult economics. You are trying to
cause a cause with an effect.
Is there any problem at all with this insane machine? Sure there is. It’s insane, after all. Its insanity is totally disproportionate to its actual purpose, i.e., employing otherwise idle and useless humans. As we’ll see, a sane regime could accomplish the same goal far more sanely.
A financial system is a central planning mechanism. To the extent that a planning mechanism, whether automatic or bureaucratic, is sane, it instructs economic actors to make sane and rational decisions, like investing in productive assets. To the extent that it instructs economic actors to do insane things, like building empty cities in the middle of Mongolia, its automatic and supposedly free-market nature is no different from the bureaucratic insanity of a Gosplan, or the autocratic insanity of a Houphouët-Boigny. Of course, it still fulfills its actual mission of creating jobs. But not without significant and unnecessary financial weirdness, whose only purpose is to pretend that the machine is not in fact a makework scheme.
In America, the consequence of job creation through AGDP inflation is the notorious FIRE economy, a central-planning system in which the only income source is asset-price inflation, and the employment created involves Mexican immigrants installing granite countertops and nice white ladies selling real estate to each other. For instance, this is essentially the economy of Ohio, once one of the world’s great industrial centers. As a way for human beings to spend their time, frankly, it seems lame and depressing. Is it that much better than digging ditches with spoons?
Moreover, nuclear waste is dangerous. A critical point is passed in an event like that of 2008, which might be described as the transition from debt capitalism to debt communism. Under debt capitalism, it is possible to sustain the illusion that both borrower and lender are private-sector agents. Under debt communism, a state we have now attained, borrowing remains a private action, but the Fed is now and forever the lender of first resort.
The bottom line is that in business terms, what’s wrong with the American economy is very simple. It loses money. In order to keep operating on an even keel, it needs to borrow roughly $1.2T a year. In other words, we have a simple way to get 2% AGDP growth per year—expand the debt bomb by 2% a year. We can also inflate the stock and real estate markets, which is better, of course, because equity does not create obligations. It’s a way of enriching the rich at the expense of the poor, but hey, what else is new?
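For those two numbers to line up, the implied total stock of debt is (a back-of-envelope inference on my part, not a figure from the text):

$$\frac{\$1.2\text{T/yr of new borrowing}}{2\%\text{/yr expansion of the debt bomb}} \approx \$60\text{T of total public-plus-private debt}$$

which is roughly in line with total US credit-market debt at the time.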
A money-losing economy, like a money-losing restaurant, sucks. It sucks in all kinds of ways that have no apparent connection to finance. The entire dining experience is grim. This, indeed, is the experience of the entire “old economy” outside the little bubbles of Silicon Valley and Wall Street. My in-laws live in Columbus. Columbus sucks. Even with Chairman Ben’s $85-billion-a-month bond-buying “recovery.” It is more and more palpably a Soviet restaurant.
The fantasy of the money-losing economy, like that of the money-losing restaurant, is that if enough money is pumped in, eventually the “pump will be primed” and the engine will restart on its own. On this theory, America has been expanding its debt bomb since the 1930s. Yo, it’s not working. I’d be happy to bet any sum of money on what would happen if Chairman Ben turned off the QE. If you are into professional betting, bet on zero interest rates for the infinite future. “Capitalism” with zero interest rates is a mockery and a monstrosity—but there is no alternative within the system.
Worse, we will face a 2008 all over again when the private sector’s debt
becomes so large that even borrowing from an infinite lender is no
longer a market operation. Goldman Sachs can borrow at 0%, but you
can’t. Debt has to be serviced. Debt saturation happens. When it
happens, debt
deflation begins—and in a debt-deflation phase, even *borrowing
at 0%* is no longer a market operation. Once debt deflation begins,
there is no alternative but reflation by direct government spending, in
the classic
“Bernanke
helicopter” mode. While the political system is always capable of
helicopter drops in theory, it may not be capable of it in practice. And if it can achieve direct inflation, it has still entered a purely
Soviet mode in which all investment is directed by the State.
And all this, just so that marginally employable Americans with an IQ of 95 can have jobs. Which many of them can’t. Evaluated as a job creation mechanism, which is what it is, this insane financial inflation machine earns no better than a C–. Granted, it has put us so far behind the debt 8-ball that if you turn it off, you go straight from C– to F–. Which was about to happen in 2008 before Bernanke fired up his helicopter. It’s a trap! And the bait isn’t even that tasty.
So are there other solutions? Is there a Solution E? An F? Well, sure. They’re not politically possible. Nothing other than what we’re doing now is politically possible, at least, not without regime change. As Hunter S. Thompson put it, that bothers me the way VD bothers a Hell’s Angel. If not less.
E is a factor we left out: foreign trade. As it happens, the US with its disastrous 37% labor-force nonparticipation rate (i.e., the real measurement of “unemployment”, which is commonly cited in terms of the meaningless benefit-claims number), besides borrowing $1.2T a year, runs a trade deficit of $600B a year. I.e., 3% of US GDP. What does this mean?
What it means is that if USG *entirely eliminated foreign trade*, closing its ports like Tokugawa Japan, US businesses would experience an immediate 3% jump in gross revenue, and hence in employment. Of course, this would involve a boom in import-substitution industries and a bust in export industries, but the net effect would be a boom. $600B ain’t nothing. The hedonic effect, of course, would be negative—but as we’ve seen, inadequate hedonism is anything but our problem.
We could do even better than this. We could eliminate imports while maintaining exports. Of course, we would be admitting the mercantilist reality of world trade, something our Asian trading “partners” already understand. Does it hurt that much to say: “Friedrich List was right”? Let’s say that retaliation would cut our exports not to zero, but just in half. In that case, we have $0 in imports and $650B in exports, meaning a net gain in revenue to US businesses of roughly $1.2T—and that’s not counting a multiplier effect of money spent over and over again.
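Back-solving from the author’s own figures makes the claim checkable (a sketch; the gross trade levels are inferred, not stated in the text). A $600B deficit alongside $650B of post-retaliation exports implies gross exports of about $1.3T and gross imports of about $1.9T:

$$
\begin{aligned}
\text{full autarky:}\quad & I - E = 1.9 - 1.3 = \$0.6\mathrm{T} \text{ gained};\\
\text{no imports, half exports:}\quad & I - \tfrac{E}{2} = 1.9 - 0.65 \approx \$1.2\mathrm{T} \text{ gained}.
\end{aligned}
$$

The asymmetry is the whole argument: zeroing imports hands domestic producers the full $1.9T of substitution, while even a harsh 50% retaliation only removes $650B of export revenue.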
Again, we’d see some hedonic pain. We’d also see something like a 10% boost in AGDP overnight, as all the crap we buy from China now had to be made in America. Which means a titanic economic boom perhaps unparalleled in history, except at the inception of the Third Reich when Hitler adopted more or less the same autarkic policies. Less fun—more prosperity.
Call me crazy, but I don’t believe mercantilism—which, before Adam Smith, was no more than conventional wisdom in political economy—is inseparable from yet another persecution of the Jews. Indeed, any pre-liberal mercantilist would regard the combination of free trade, massive trade deficits, and massive unemployment as economic insanity on a par with persecuting the Jews.
(I’m sure Professor Krugman, who can be accused of many things but not of misunderstanding the power of aggregate demand, understands this perfectly. Which makes him yet another concern troll—i.e., a person concerned not with solving the problem, but with exploiting it.)
As List puts it, free trade is the weapon of the strong. England and later America adopted free trade when we were strong. Well, face it, we’re not strong anymore. But we keep hitting ourselves over the head with the weapon. Why? It’s simple: blithering idiocy.
And yes, there is also a Solution F. Solution F is a reality we’ll eventually have to face: technology restriction. Actually, Solution E is a special case of Solution F, because foreign imports are best considered as a technology of production. From the perspective of the American economy, there is no difference between production by Chinese workers, and production by robots—both imply production which does not employ American workers.
It is hard to imagine technology restriction working, because we have to get past imagining this terribly powerful tool being wielded by our utterly incompetent and corrupt rulers. The same problem exists in contemplating effective protectionism. The most obvious outcomes of both these tools simply amount to featherbedding if not outright theft. As a result, protectionism has gained a bad name, and technology restriction is well outside the policy landscape. Yet in actual reality, the problem is not with the tool, but with the wielder. Once we admit that USG isn’t working and has to go, we can imagine replacing it with something that doesn’t suck—and can actually wield such a tool.5
I am not suggesting across-the-board technology restriction, general medieval stasis, low-res iPads, banning Google Glass, or anything of the kind. My idea of Solution F involves targeted technology controls designed to create market demand for the type of unskilled human laborers that modern industry has made obsolete, but that we are politically unwilling to kill and sell as organ meat. Being so unwilling, we have no choice but to provide these people with a way to survive as human beings—preferably as human as possible.
For instance, two forms of semi-skilled labor well-known to be good for the human soul are (a) craftsmanship and (b) farming. Compared to the demand for these professions that once existed, both have been essentially eradicated. How many meth-heads, thugz, etc., are there in America whose great-great-grandparents were craftsmen, farmers, or both?
Consider one targeted technology restriction: no plastic toys. If my children are going to have toys, these toys will be made from wood, with hand tools, by Americans, in America.
Results: (a) negative financial impact on parents who need to buy toys for their children, and might have to increase their toy budgets; (b) negative hedonic impact on children, whose toy bins are no longer filled with brightly colored Chinese plastic crap; (c) negative economic impact on China, which is not our country, so who cares; (d) gigantic economic boom in the American wooden toy industry, providing employment to any fool who can whittle.
How can anyone contemplating these outcomes not agree with me that (d) considerably outweighs the sum of (a), (b) and (c)? Or take agricultural labor, for which an arbitrary level of demand can be created simply by banning industrial farming techniques. Every ghetto rat in America today could find employment as an organic slow-food artisan. Crap—even a 10th Street zombie can milk cows. We’d have to pay them for their work, of course. We already pay them for not working. Is this better for us? For them? WTF, America?
Is this unrealistic? Of course it’s unrealistic. To be exact, it is completely inconsistent with consensus reality, to the point of seeming utterly bizarre. I doubt a Sam Altman would even be able to evaluate it. Normal sane people, especially rich ones, are socially well-integrated and live in consensus reality, i.e., a Plato’s cave of pure blithering idiocy.
And yet, I assert, in *actual* reality—my Solutions E and F are slam-dunk no-brainers. (We’d still need a way to safely shut down the Solution D debt bomb, but that’s a matter for another post.) Will this gap ever be bridged? Will we ever escape from the 20th century? Almost certainly not. But for whatever crazy reason, I still feel the need to point out that we could. Perhaps it’s just because I’m a nut.
5. As Moldbug writes in “Carlyle in the 20th Century”:
Observe the fascist or socialist State again, through the eyes of the orthodox libertarian or classical liberal. We see an 800-pound gorilla on acid, whooping it up at the wheel of a running bulldozer. Your libertarian says: stop that bulldozer! Your Carlylean says: stop that gorilla!
A bulldozer, well-made, well-maintained and well-operated, is a positive force in the world. But only if it is controlled by a *man* and not a gorilla. If you saw a bulldozer driven by a *qualified* bulldozer operator, dear libertarian, would you cry: stop that bulldozer! I think not. You might be amazed at all the good works a qualified bulldozer operator can work with a bulldozer.
| true | true | true |
Unqualified Reservations by Mencius Moldbug. Ebook links and full chronological archive. An introduction to neoreaction (NRx).
|
2024-10-12 00:00:00
|
2009-04-14 00:00:00
| null | null | null | null | null | null |
37,658,050 |
https://blog.diarupt.ai/introducing-diarupt/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,295,631 |
http://blog.appbrain.com/2012/07/more-monetization-options-banners-in.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
716,196 |
http://www.pleaseletmedesign.com/projects/iq-font/
|
This domain name has been registered with Gandi.net
| null |
| true | true | true |
This domain name has been registered with Gandi.net. It is currently parked by the owner.
|
2024-10-12 00:00:00
|
2023-01-01 00:00:00
| null | null | null | null | null | null |
26,415,128 |
http://homepages.cwi.nl/~steven/enquire.html
|
Enquire: Everything you wanted to know about your C Compiler and Machine, but didn't know who to ask
| null |
Steven Pemberton, CWI, Amsterdam.
This is a program that determines many properties of the C compiler and machine that it is run on, such as minimum and maximum [un]signed char/int/long, many properties of float/[long] double, and so on.
As an option it produces the ANSI C float.h and limits.h files.
As a further option, it even checks that the compiler reads the header files correctly.
It is a good test-case for compilers, since it exercises them with many limiting values, such as the minimum and maximum floating-point numbers.
The C source contains a long preamble comment that explains how to compile and run it.
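To give a flavor of the technique, here is a minimal sketch of my own (not Enquire's actual source, which is far more careful about overflow, trapping, and compiler bugs): quantities like the width of a char or the machine epsilon of a float can be probed at run time, with no reliance on limits.h or float.h.

```c
#include <stdio.h>

int main(void)
{
    /* Count the bits in a char: shift a 1 left until it wraps to 0. */
    unsigned char uc = 1;
    int char_bits = 0;
    while (uc != 0) {
        uc = (unsigned char)(uc << 1);
        char_bits++;
    }

    /* Maximum of unsigned int: all bits set. */
    unsigned int umax = ~0u;

    /* Float epsilon: halve eps while 1.0f + eps/2 is still
       distinguishable from 1.0f; the cast forces rounding to
       float precision rather than a wider register. */
    float eps = 1.0f;
    while ((float)(1.0f + eps / 2.0f) != 1.0f)
        eps /= 2.0f;

    printf("bits per char:    %d\n", char_bits);
    printf("unsigned int max: %u\n", umax);
    printf("float epsilon:    %g\n", eps);
    return 0;
}
```

On a typical IEEE-754 machine with 8-bit chars and 32-bit ints this prints 8, 4294967295, and roughly 1.19209e-07; the real program derives dozens of such values the same way and can emit them as ready-made limits.h and float.h files.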
| true | true | true | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
6,692,914 |
http://insidemovies.ew.com/2013/11/07/star-wars-episode-vii-release-set-for-december-18-2015-breaking/
|
Dotdash Meredith - America's Largest Digital & Print Publisher
| null |
| true | true | true |
Dotdash Meredith is America's largest digital and print publisher, with brands including PEOPLE, Better Homes & Gardens, Allrecipes, Investopedia, Verywell, and more! Learn about career opportunities, our leadership team, and how we can help you reach audiences across our network of trusted brands.
|
2024-10-12 00:00:00
| null | null |
dotdashmeredith.com
|
dotdashmeredith.com
| null | null |
|
11,862,550 |
http://www.economist.com/news/science-and-technology/21699898-fraud-bureaucracy-and-obsession-quantity-over-quality-still-hold-chinese
|
Schrödinger’s panda
| null |
# Schrödinger’s panda
## Fraud, bureaucracy and an obsession with quantity over quality still hold Chinese science back
CHINA seems to swing from insecurity about its science to hubris. In 2015, when Tu Youyou, a pharmacologist, became the first scientist to win a Nobel prize for work carried out in China, the state media’s reaction was not to celebrate her ground-breaking medicinal chemistry. Rather, they claimed that the award was a recognition of traditional Chinese medicine—something she said had little to do with the work that won her the award.
This article appeared in the Science & technology section of the print edition under the headline “Schrödinger’s panda”
| true | true | true |
Fraud, bureaucracy and an obsession with quantity over quality still hold Chinese science back
|
2024-10-12 00:00:00
|
2016-06-02 00:00:00
|
Article
|
economist.com
|
The Economist
| null | null |
|
16,498,353 |
https://medium.com/@kushnickbruce/we-solved-net-neutrality-400-billion-broadband-scandal-is-the-evidence-94cc9d7b279c
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,138,057 |
https://www.forbes.com/sites/ciocentral/2018/01/12/what-a-world-without-net-neutrality-looks-like/
|
What A World Without Net Neutrality Looks Like
|
CIO Central Guest
|
Imagine this: You're at work one day and you see colleagues gathered around a monitor, watching breaking news. The day’s storm has required the local school to close early and you are seeing live news footage of parents picking up their kids. But you’re still at the office because you didn’t get the notification. You were in meetings all day and not able to check your email or voicemail. You’re frustrated because the school didn’t send you a text message which you could have looked at while you were in your meetings. Later you learn the school actually **had** sent you a text message with instructions about an early pickup -- on the same platform it uses for other important parent notifications -- but you didn’t get the message this time. Why not?
You might be surprised to learn that you didn’t receive this critical message because your wireless carrier blocked it without informing you and without asking for your permission. It unilaterally denied you this vital message without regard to who sent it, the message’s content, or even that you specifically chose to get it. What will the next message be that you don’t get?
This scenario is real and prevents consumers from receiving tens of millions of messages every year. Importantly, it previews a world without net neutrality. Because the Federal Communications Commission (FCC) just rolled back net neutrality protections, the status quo with text messaging - where carriers decide unilaterally what organizations are allowed to text with you - could spread to everything you do on the Internet. You might not have access to some websites due to indiscriminate blocking by your internet provider, and some other websites might work so slowly that they don’t function correctly or allow you to view videos or other content.
Text messages were not protected by net neutrality rules that protected voice and broadband communications. Because of this, wireless carriers have been free to block messages that consumers have actively opted in to receive, and expect to receive, from schools, non-profit organizations, government organizations, and businesses -- without explanation.
Wireless carriers say message blocking protects consumers by preventing spam and fraud. That’s an important issue to tackle, but the carriers are not providing any transparency into how they decide which messages to block and they are not alerting the senders (or receivers) that those messages were blocked. The industry needs to work together to address spam, but do so without blocking legitimate messages that consumers have opted in to receive.
While they say eliminating spam is their objective, wireless carriers are offering senders an option that costs 500x more to guarantee delivery, or steering them toward a competing service of their own. This isn’t just about protecting consumers; it is anti-competitive behavior.

Innovative services that enable schools to communicate with parents and students via text message are already impacted, and parents could be denied critical messages without any explanation or notification from their wireless carrier.
Non-profit organizations like CareMessage, whose partner clinics serve more than one million low-income Americans, note that when their appointment reminders get blocked by wireless carriers, without permission or notification, underserved patients could miss important reminders about taking their medication, or even go without medical care.
When important messages like these get blocked, wireless carriers simply point to their right to control the traffic on their network. But this blocking would never be tolerated under the rules that protected voice calls and broadband access.
Twilio estimates that wireless carriers block more than 100 million text messages that consumers opt-in to receiving each year. In light of the recent rollback of net neutrality protections, odds are that this pattern of having your communication arbitrarily blocked by your wireless carriers will only accelerate.
That’s because the same wireless carriers that are blocking and throttling consumers’ important text messages also happen to be internet service providers. Some of these companies insist that they will continue to operate the way they have when net neutrality protections were in place. However, without rules that protect consumers from arbitrary blocking, filtering, or paid prioritization, there’s no doubt these same providers will take a similar approach to broadband services as they have with text messaging.
The reason this offends us is that wireless carriers have a monopoly status over our communications. For any individual customer, a phone carrier has a monopoly over access to their device. Without neutrality regulation, carriers wield unilateral power over who can communicate with you. That's why our society has always ensured that power couldn't be used in unfair ways.
It's tempting to say that carriers are operating a business and should be free to set prices and discriminate among the services they offer as part of their product. Yet consumer access to Internet services is a duopoly, or monopoly, in many places. This is not a coincidence. In decades past, municipalities granted monopolies to phone companies, and later cable companies, to run wires through their towns. In exchange for those monopolies, municipalities asked for fair access to those wires. The Communications Act of 1934 established "common carriage" to ensure that those monopolies - free from competition - couldn't abuse the power they'd been granted. Again in the 1970s and 1980s, when cable television sprang up, municipalities asked for fair access in exchange for a monopoly right to run wires through our cities - at the time, that fairness was in the form of public access television.
Today, however, the Internet has far exceeded in importance the original intention of those writs. Not only is the Internet a lifeline of communications - for both television and phone calls - it's a lifeline of innovation and of our economy.
These unfair behaviors don’t just hurt the organizations sending the messages, they impact you. You may have to pay more to get these critical messages. Or you may be left wondering if you are getting these critical messages at all.
Fundamentally, whether you’re on the internet, making a phone call or sending and receiving messages, your communication should simply go through, free of interference from your service provider.
**Your Voice Matters. Be Heard.**
So what can you do? While the current FCC is rolling back the existing regulations that protect consumers’ access to a free and open internet, Congress can and should take action. Every citizen and taxpayer needs to tell their congressional representative that a future without net neutrality protections is unacceptable and they should act now to preserve open and accessible communications.
To make sure your voice is heard, text "Resist" to 50409 or call 1-844-USA-0234, and you'll be connected to the representatives for your area.
A world without net neutrality is here today but doesn’t have to persist tomorrow.
| true | true | true |
Fundamentally, whether you’re on the internet, making a phone call or sending and receiving messages, your communication should simply go through, free of interference from your service provider.
|
2024-10-12 00:00:00
|
2018-01-12 00:00:00
|
article
|
forbes.com
|
Forbes
| null | null |