| column | dtype | range / values |
| --- | --- | --- |
| id | int64 | 3 to 41.8M |
| url | string (length) | 1 to 1.84k |
| title | string (length) | 1 to 9.99k |
| author | string (length) | 1 to 10k |
| markdown | string (length) | 1 to 4.36M |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | string (length) | 1 to 10k |
| filedate | string (classes) | 2 values |
| date | string (length) | 9 to 19 |
| image | string (length) | 1 to 10k |
| pagetype | string (classes) | 365 values |
| hostname | string (length) | 4 to 84 |
| sitename | string (length) | 1 to 1.6k |
| tags | string (classes) | 0 values |
| categories | string (classes) | 0 values |
40,327,155
https://www.spaceweather.gov/news/g5-conditions-observed
G5 Conditions Observed!
null
https://services.swpc.noaa.gov Space Weather Conditions: 24-Hour Observed Maximums. HF Radio: Weak or minor degradation of HF radio communication on sunlit side, occasional loss of radio contact. Navigation: Low-frequency navigation signals degraded for brief intervals. More about the NOAA Space Weather Scales
true
true
true
null
2024-10-12 00:00:00
2024-05-11 00:00:00
null
null
null
null
null
null
13,162,863
https://medium.com/@russellbarnard/why-vertical-is-the-most-engaging-type-of-video-53cdeb6f054d#.89al61w6q
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,764,492
https://www.micahlerner.com/2023/01/19/elastic-cloud-services-scaling-snowflakes-control-plane.html
Elastic Cloud Services: Scaling Snowflake’s Control Plane
null
### micahlerner.com # Elastic Cloud Services: Scaling Snowflake’s Control Plane #### Published January 19, 2023 ## What is the research? Snowflake is a company that makes a globally distributed data warehouse (see Snowflake, Google Cloud, or AWS descriptions of data warehouses). Their product must reliably respond to business-critical, customer-issued queries, while abstracting away scaling challenges associated with rapidly changing load. Behind the scenes, Snowflake must also reduce single points of failure by deploying across multiple cloud providers and the regions within them - an approach becoming more popular due to projects like Crossplane that simplify “sky computing”. This paper discusses the design and implementation of *Elastic Cloud Services (ECS)*, the critical component of the company’s infrastructure that is responsible for orchestrating workloads. ECS enables the platform to process large amounts of data efficiently and with minimal downtime. Snowflake previously published work on their underlying infrastructure (see The Snowflake Elastic Data Warehouse and Building An Elastic Query Engine on Disaggregated Storage; the latter has a great summary on The Morning Paper), but the most recent paper builds on prior research in three main respects. First, it specifically focuses on the component of Snowflake’s data warehouse responsible for coordination across multiple cloud providers. Second, it discusses designing the system with availability in mind, actively making tradeoffs along the way. Lastly, the paper describes novel techniques for autoscaling capacity at scale. ## What are the paper’s contributions? The paper makes two main contributions: - Design and implementation of *Elastic Cloud Services*, an at-scale control plane that powers Snowflake (control planes have come up in previous paper reviews, like Shard Manager: A Generic Shard Management Framework for Geo-distributed Applications) - Evaluation and characterization of ECS from its production deployment ## Background Snowflake infrastructure is divided into three main layers: *Elastic Cloud Services* (the focus of the paper), a *Query Processing* layer responsible for executing user queries over a customer’s dataset, and *Object Storage* (which holds the database records). The full architecture of Snowflake is not the focus of this paper, and is covered in previous research (see the sidebar in the introduction of this paper review for links to the previous research). Each Snowflake customer can configure one or more Snowflake warehouses, providing isolation between different users of the software. When a customer configures a warehouse, it is assigned a version of Snowflake software that determines the software a customer’s queries are executed with (Snowflake releases new versions on a recurring basis, and publishes info about each release in their developer-facing documentation). Importantly, this configuration doesn’t result in the creation of backing compute resources - instead, Snowflake dynamically scales up when a customer sends queries (this property of dynamic scaling in response to load is one of Snowflake’s claims to fame). ## How does it work? 
### Elastic Cloud Services (ECS) The paper focuses on three main aspects of ECS: *Automatic Code Management*: the code powering the product is upgraded on a continuous basis, and ECS is designed to automatically and reliably handle these upgrades. *Balancing load across availability zones*: ECS coordinates load across multiple cloud providers in order to reduce single points of failure. *Throttling and autoscaling*: the number of requests from customers can increase dramatically, and ECS must be able to serve these queries (often achieving this by adding more resources). The ECS *Cluster Manager* handles these tasks. When a user requests the creation of a new cluster, the *Cluster Manager* provisions the necessary resources and creates the cluster in the specified cloud providers (more information on the cloud providers that Snowflake supports, and how, is in their docs). The *Cluster Manager* also registers the cluster with the control plane, which is responsible for coordinating the activities of the ECS clusters and scheduling user queries on compute resources (represented via *warehouses*). Once the cluster is registered, it is ready to accept customer queries and perform the necessary processing. ### Automatic Code Management #### Rollouts The code that powers Snowflake’s product is constantly receiving updates (they also document pending behavior changes to the system). ECS is designed to gradually roll out changes to reduce the likelihood of negative customer impact. This process is fully automated to minimize human error and ensure the fast and reliable rollout of new code. It also includes measures to ensure that customer queries are not interrupted, such as allowing VMs to finish executing their queries before they are shut down. To roll out updates to a cloud services cluster, ECS first prepares new virtual machines (VMs) with the updated software version. Then, ECS prepares the machines by warming their caches (the paper doesn’t provide specifics of how the cache is warmed, but I would guess that it could potentially be through a ‘dark rollout’ where currently-executing customer queries are forwarded to the new machines), and starts directing user queries to the new VMs. To minimize disruption to ongoing queries, VMs using the previous version continue to operate until their workload is finished. Running VMs with new and old versions of the software simultaneously is more expensive resource-wise, but allows fast rollbacks in the event of customer impact (this is amazing from a reliability perspective, and is also discussed in the SRE book). Additionally, customers can pin to a specific version of Snowflake’s code if they experience regressions in the new version. #### Pools At scale, machines fail or perform suboptimally for a wide variety of reasons (this effect is often called fail-slow at scale, and is discussed in more detail in papers like Fail-Slow at Scale: Evidence of Hardware Performance Faults in Large Production Systems). To combat this, ECS actively manages the cloud resources underlying its computing nodes, keeping track of the health of individual nodes (the paper mentions using health metrics like “memory management, CPU, concurrency characteristics, JVM failures, hardware failures”) and deciding when to stop using unhealthy ones. 
Based on this monitoring, ECS moves VMs between *cluster pools*, each containing resources matching one of five distinct stages in a VM’s lifecycle: *Free Pool*: contains VMs that are ready to be used. *Quarantine Pool*: contains VMs that need to be removed from their clusters to resolve any pending tasks. *Active Pool*: contains healthy VMs that are part of a cluster and actively handling customer workloads. *Graveyard Pool*: includes VMs prepared for termination. *Holding Pool*: for debugging purposes, Snowflake developers and automated systems can remove VMs from active service, but refrain from shutting down the underlying resources. The paper discusses two concrete usages of the pools. The first is the set of state transitions that occur when a cluster is upgraded, as the VMs running the old version of the software move from *active* to *quarantine*, and then finally to *graveyard*. The second is an example where a machine shifts from the *Free Pool* to the *Quarantine Pool* when it becomes an outlier with respect to processing customer queries. ### Balancing Across Availability Zones ECS (Elastic Cloud Services) load balances across availability zones in order to ensure minimal customer impact in the event of failure within cloud provider regions (cloud provider failures are discussed at length in a previous paper review, Metastable Failures in the Wild). By distributing VMs (virtual machines) evenly across different availability zones, ECS can redirect requests to VMs in a different zone if one zone experiences an outage or becomes unavailable for some other reason. This approach helps to ensure that the service remains available and responsive to customers, even in the face of unexpected disruptions. Load balancing across availability zones is a common practice in cloud computing, as it helps to improve the resilience and reliability of a service. The paper describes how ECS implements two types of load balancing: *cluster-level balancing* (which aims to distribute the VMs for a customer’s cluster across multiple zones) and *global-level balancing* (which aims to distribute total VMs evenly across zones). The paper provides high-level details on how the load balancing works: When scaling a cluster out, ECS picks the least loaded zone globally out of the set of least loaded zones for that cluster. Similarly, when scaling a cluster in, ECS picks the most loaded zone globally out of the set of most loaded zones for that cluster (see the sketch at the end of this review). There are times when scaling via these two strategies doesn’t work or when they conflict with each other. For example, if there are no VMs to assign to a cluster in a given zone, it might not be possible to execute cluster-level balancing. Another situation where the balancing can be suboptimal is when “correctly” balancing a cluster across zones results in a global imbalance containing many clusters with VMs in a single zone. ### Throttling and Autoscaling The load that customers send to Snowflake varies over time. ECS uses two main approaches to handle the unpredictability of user traffic: *Throttling*: the execution of customer queries should be isolated (if the queries are not isolated well, one would see the Noisy Neighbor problem) and should not consume excessive resources. *Autoscaling*: the load that customers send to Snowflake’s database varies over time, and the underlying infrastructure needs to automatically scale (also known as performing *autoscaling*) in response. 
When designing ECS’s solutions to these problems, the authors balanced five factors: *Responsiveness*: queries should start running without user-visible delay. *Cost-efficiency*: Snowflake shouldn’t retain unnecessary resources (Snowflake’s cost model also allows customers to optimize their spend - see this discussion from their official blog and posts from external developers). *Cluster volatility*: minimize unnecessary changes to cluster configurations, as frequent changes impact performance and cost. *Throughput*: the system should scale to meet demand. *Latency*: queries should complete in a timely manner (and not be impacted by issues like skew; in particular, this is often a problem with joins - see an example discussion here). To implement throttling, Snowflake uses an approach called *Dynamic Throttling* that makes scaling decisions based on VM usage for customer queries. Instead of using static concurrency limits that do not take into account the specific demands of each workload, dynamic throttling calculates limits based on the CPU and memory usage of the VMs (memory pressure is also a problem in other systems, like Elasticsearch). When a VM is executing a query and hits these limits, it doesn’t accept new queries until health metrics return to normal. This approach helps to prevent the system from becoming overwhelmed by requests and ensures a consistent service experience for customers (the paper also mentions that different accounts, and users within an account, are isolated from one another in order to stop a single entity from causing noisy-neighbor behavior). Autoscaling uses load signals similar to those that *Dynamic Throttling* relies on. The algorithm for autoscaling a cluster also takes into account factors like churn of VMs, and aims to minimize scaling up and scaling down. It is capable of scaling both horizontally (adding more VMs) and vertically (increasing their size). ## How is the research evaluated? To evaluate the system, the paper considers the performance of ECS’s zone balancing, autoscaling, and throttling. To evaluate load balancing, the authors include information on how ECS reduced *skew*, a measure of “the difference between the number of VMs in the most loaded zone and the least loaded zone”. ECS’s load balancing reduced average global skew (VM differences across a deployment) from 45 to 5 and almost eliminated skew within a cluster. By limiting skew, Snowflake deployments are less exposed to an outage in any one cloud region. Furthermore, balanced deployments simplify scaling as there are fewer VM-constrained *Free Pools*. To evaluate throttling and autoscaling, the paper includes several examples from production. One example usage of autoscaling is in response to noisy neighbor problems - after detecting the issue, ECS automatically mitigated it by adding VMs to the impacted cluster. ECS also automatically scales down to reduce cost. Dynamic throttling is similarly beneficial. When a cluster was experiencing high load, ECS throttled requests to the impacted VMs, forwarding queries to VMs capable of servicing them. This smooths customer load by directing user queries to machines that are actually capable of processing them. ## Conclusion The paper on Snowflake’s Elastic Cloud Services provides an overview of the control plane responsible for running the company’s database product. I’m hopeful that several of the interesting topics the authors touch on are the subject of future publications, as the research is unique in several respects. 
First, its focus is unlike many papers covering distributed databases - prior art often centers on database internals or underlying consensus algorithms. The paper is also novel in that it discusses an at-scale approach to running across multiple cloud providers, a solution that is becoming more prevalent in the era of “sky computing”. While Snowflake’s implementation of cross-cloud deployment is custom to their system, the growth of products like Crossplane may generalize this design, making the path easier for future implementers.
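To make the zone-selection rule from the load-balancing section concrete, here is a minimal Python sketch. It is not Snowflake's implementation: the function names, the zone names, and the representation of load as plain VM counts per zone are assumptions made purely for illustration.

```python
from typing import Dict

def pick_zone_for_scale_out(cluster_vms: Dict[str, int], global_vms: Dict[str, int]) -> str:
    """Among the zones where this cluster has the fewest VMs, pick the zone
    that is least loaded globally (illustrative only)."""
    fewest = min(cluster_vms.values())
    candidates = [zone for zone, count in cluster_vms.items() if count == fewest]
    return min(candidates, key=lambda zone: global_vms[zone])

def pick_zone_for_scale_in(cluster_vms: Dict[str, int], global_vms: Dict[str, int]) -> str:
    """Among the zones where this cluster has the most VMs, pick the zone
    that is most loaded globally (illustrative only)."""
    most = max(cluster_vms.values())
    candidates = [zone for zone, count in cluster_vms.items() if count == most]
    return max(candidates, key=lambda zone: global_vms[zone])

# Hypothetical example: the cluster has no VMs in us-east-1c, and 1c is also
# the least loaded zone globally, so a scale-out should land there.
cluster = {"us-east-1a": 2, "us-east-1b": 2, "us-east-1c": 0}
fleet = {"us-east-1a": 120, "us-east-1b": 95, "us-east-1c": 60}
print(pick_zone_for_scale_out(cluster, fleet))  # -> us-east-1c
print(pick_zone_for_scale_in(cluster, fleet))   # -> us-east-1a
```

The real system also has to handle the conflict cases described in the review, for example when the Free Pool has no VMs available in the zone this rule would choose.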
true
true
true
null
2024-10-12 00:00:00
2023-01-19 00:00:00
null
null
null
null
null
null
6,478,950
https://hall.com/blog/blackberry-messenger-could-have-saved-blackberry/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
41,035,624
https://phys.org/news/2024-07-reveals-fiberglass-oysters-mussels.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,281,675
http://species-in-pieces.com/
In Pieces - 30 Endangered Species
null
## About this project In Pieces is an interactive exhibition turned study into 30 of the world’s most interesting but unfortunately endangered species — their survival lying, literally, in pieces. Each species has a common struggle and is represented by one of 30 pieces which come together to form one another. The collection is a celebration of genetic diversity and an attempted reminder of the beauty we are on the verge of losing as every moment passes. These 30 animals have been chosen for their differences, so that we can learn about species we didn't know about previously, as well as the struggles they have surviving. Many of them evolved in a particular way which makes them evolutionarily distinct. Take for example the Kakapo, an animal which evolved without natural predators and thus didn’t require instincts to defend itself. Unfortunately — and as is the case with most of the species showcased here — humans began hunting it easily upon encroaching on its range, then introduced the stoat and other forms of pest control for other species. The Kakapo was almost completely wiped out through this introduction and fights on today because of this catastrophic lack of judgement. When you dig into a lot of research around this topic, it’s not long before you see the real darkness that is going on. Millions of years of evolution have occurred and humans are in danger of ruining it through desperation for financial gain or greed. There are some species here which, as grim as it sounds, would require a small miracle to survive because their numbers are too low to realistically continue to exist, the Vaquita being one such example. In Pieces hopes to educate and inspire, and provoke thought on this complex and intricate topic. I sincerely hope that you can take something new away and enjoy this collection as much as I enjoyed researching, designing and building it. ## How it's made Born out of tinkering with a simple property, this project is unabashedly a part-digital experiment. The core technology used here is just good old CSS — no canvas or WebGL witchcraft. Since hearing about CSS polygons, I've been a little surprised at the lack of furore around the technology, so I wanted to create something which not only worked as a project in itself, but also pushed this underused line of code as far as possible. The shard-shifting capabilities work in webkit browsers only, which of course is a limitation, but at the same time it works on mobile browsers, which are almost completely webkit-based. Firefox does support the clip-path property, but as an SVG-referenced shape, and thus the coding for movement works in an entirely different manner. I wanted to focus purely on the CSS route. Not heard of it? Here, take a line: `-webkit-clip-path: polygon( 40% 40%, 50% 60%, 60% 40% );` So, in essence — each shape is being morphed, moved and toyed with by a new set of co-ordinates, and as they are maintained as triangles throughout, this means 3 points, with CSS transitions to link up the movements. No tricks or tools have been used to get the illustrated results, code-wise or graphically. Point by point, shape by shape, each one has been handcrafted via a personally-created tracing JS function after illustration. If you have any questions on the technique or the project at all, please feel free to whip me a Tweet! ## Causes In Pieces is not linked to a specific charity or organisation, but I would like to highlight some of the great efforts being made out there for species under threat of extinction. 
Edge of Existence* – a programme run by Zoological Society of London, deals directly with evolutionarily distinct animals, and has a great list of 100 mammals and other types of species which you can look through if your interests have been perked. *This project is not associated with Edge of Existence. Of the animals featured here, a number have great causes dealing specifically with their battles, or otherwise aid in respect to the species' family. ## Sources I have used an extensive range of sources to gather the information within this site, specifically the statistical data which is fairly tough to find. I thank each source sincerely, and wish to highlight their great resources here: ## Share In Pieces If you enjoyed this project, tell your peeps! ## Poster Take the pieces home with this poster, available here. ## Wallpapers You can download a range of wallpapers of specific species for your desktop within the exhibition, but here you can have them all in one place too. Select which device takes your fancy below.
true
true
true
30 species, 30 pieces. In Pieces is an interactive exhibition of 30 of the world’s most interesting but unfortunately endangered species — their survivals laying literally, in pieces. Explore information, facts and figures and download assets of each unique species.
2024-10-12 00:00:00
2014-07-31 00:00:00
http://www.species-in-pieces.com/img/og-image.png
website
species-in-pieces.com
Wengerstoybus
null
null
36,812,066
https://hn-ai-newsletter.beehiiv.com/p/last-weeks-ai-highlights-hn-9
🤖 Last Week's AI Highlights from HN #9
HN AI Newsletter
- HN AI Newsletter - Posts - 🤖 Last Week's AI Highlights from HN #9 # 🤖 Last Week's AI Highlights from HN #9 ## Discover the latest breakthroughs and trends in artificial intelligence **Welcome back to HN AI Highlights, your weekly digest of all things AI!** The greatest highlight of this week in AI/ML appears to be the launch of Llama 2 from Meta. This new open-source AI model has a longer context length than its predecessor and reportedly outperforms other AI models on several external benchmarks. This development is significant as it underscores the rapid evolution of AI and machine learning, and the growing movement towards open-source models that can be utilized in both research and commercial applications. **Llama 2** 😃 Llama 2 is the next generation of an open source large language model which is available for research and commercial use. It is trained on 2 trillion tokens, and has double the context length than Llama 1. It also outperforms other open source language models on many external benchmarks. Furthermore, the model is supported by a broad range of people and companies around the world who believe in its open approach to AI. Download the model now and join the AI revolution! https://ai.meta.com/llama/ https://news.ycombinator.com/item?id=36774627 **GPT-4 is getting worse over time, not better** https://twitter.com/svpino/status/1681614284613099520 https://news.ycombinator.com/item?id=36786407 **Stable Diffusion WebGPU demo** https://islamov.ai/stable-diffusion-webgpu/ https://news.ycombinator.com/item?id=36766523 **Bad numbers in the “gzip beats BERT” paper?** 🤔Summary: In the recent paper "Low-Resource" Text Classification: A Parameter-Free Classification Method with Compressors by Jiang et al., there may be unexpected choices in their kNN code which makes all the accuracy numbers for their method higher than expected. Calculations for the first 4 datasets indicate that the gzip method went from best-performing to worst-performing for KirundiNews. The issue is in the way ties are broken in the code, which is marked as correct if any of the top two choices is correct. https://kenschutte.com/gzip-knn-paper/ https://news.ycombinator.com/item?id=36758433 **How is ChatGPT's behaviour changing over time?** 🤔This paper examines how the behavior of the two most widely used large language models (LLM) services, GPT-3.5 and GPT-4, can change over time. The authors evaluate the models in March and June 2023 on four different tasks and found that the performance and behavior can vary significantly. The findings highlight the need for continuous monitoring of LLM quality. https://arxiv.org/abs/2307.09009 https://news.ycombinator.com/item?id=36781015 **Wikipedia-grounded chatbot “outperforms all baselines” on factual accuracy** 📖 This article explores how open access affects scientific knowledge dissemination, specifically in the context of Wikipedia citations. It finds that open-access articles are more likely to be cited in Wikipedia than closed-access articles, especially those with low citation counts. The results may hint at the reliability of Wikipedia sources being reduced by open access. 
https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2023-07-17/Recent_research https://news.ycombinator.com/item?id=36757520 The shady world of Brave selling copyrighted data for AI training https://stackdiary.com/brave-selling-copyrighted-data-for-ai-training/ https://news.ycombinator.com/item?id=36735777 **Custom instructions for ChatGPT** 🤔 Summarizing the introduction of custom instructions to ChatGPT to give users more control over the model's responses. https://openai.com/blog/custom-instructions-for-chatgpt https://news.ycombinator.com/item?id=36803744 **Generative AI space and the mental imagery of alien minds** 🤔 Exploring the mental imagery of alien minds through generative AI and Wolfram Physics Project. https://writings.stephenwolfram.com/2023/07/generative-ai-space-and-the-mental-imagery-of-alien-minds/ https://news.ycombinator.com/item?id=36767837 **Running Stable Diffusion in 260MB of RAM** 🔨 Running Stable Diffusion on a RPI Zero 2 (or in 260MB of RAM) for creating an Open Neural Network Exchange Stream (ONNXStream). https://github.com/vitoplantamura/OnnxStream https://news.ycombinator.com/item?id=36803408 **How to Use AI to Do Stuff: An Opinionated Guide** 🤔A concise guide to the current state of AI technology, covering the use of LLMs and what each company provides in terms of user documentation. https://www.oneusefulthing.org/p/how-to-use-ai-to-do-stuff-an-opinionated https://news.ycombinator.com/item?id=36743784 **Teaching Programming in the Age of ChatGPT ** 🤔 Instructors adapt their courses in response to AI coding assistance tools like ChatGPT and GitHub Copilot to prevent students from cheating and becoming overly reliant on them. https://www.oreilly.com/radar/teaching-programming-in-the-age-of-chatgpt/ https://news.ycombinator.com/item?id=36771114 **ChatGPT use declines as users complain about ‘dumber’ answers** 🤔 ChatGPT use declines as users complain about 'dumber' answers, raising questions around AI's biggest threat for the future. https://www.techradar.com/computing/artificial-intelligence/chatgpt-use-declines-as-users-complain-about-dumber-answers-and-the-reason-might-be-ais-biggest-threat-for-the-future https://news.ycombinator.com/item?id=36750200 **Accessing Llama 2 from the command-line with the LLM-replicate plugin** 🐧 Exploring the use of Llama 2 from command-line with the llm-replicate plugin to create conversations with AI. https://simonwillison.net/2023/Jul/18/accessing-llama-2/ https://news.ycombinator.com/item?id=36778041 **WormGPT – The Generative AI Tool Cybercriminals Are Using** https://slashnext.com/blog/wormgpt-the-generative-ai-tool-cybercriminals-are-using-to-launch-business-email-compromise-attacks/ https://news.ycombinator.com/item?id=36742725 **A full episode of South Park generated by AI** 🤔 Exploring how existing AI systems can be used to create high-quality episodic content within existing IPs, while addressing the 'Slot Machine Effect'. https://fablestudio.github.io/showrunner-agents/?mc_cid=f9d1eb56dc&mc_eid=bbcd57583d https://news.ycombinator.com/item?id=36792566 **G/O media will make more AI-generated stories despite critics** 🤔 G/O Media is embracing AI-generated stories despite criticisms, showing that these articles are here to stay. 
https://www.vox.com/technology/2023/7/18/23798164/gizmodo-ai-g-o-bot-stories-jalopnik-av-club-peter-kafka-media-column https://news.ycombinator.com/item?id=36773363 **Apple Tests ‘Apple GPT,’ Develops Generative AI Tools to Catch OpenAI** https://www.bloomberg.com/news/articles/2023-07-19/apple-preps-ajax-generative-ai-apple-gpt-to-rival-openai-and-google https://news.ycombinator.com/item?id=36788708 **The illusion of AI’s existential risk** https://www.noemamag.com/the-illusion-of-ais-existential-risk/ https://news.ycombinator.com/item?id=36793147 **A Theory on Adam Instability in Large-Scale Machine Learning Connected Papers (What are Connected Papers?)** 🤔 A Theory on Adam Instability in Large-Scale Machine Learning by Igor Molybog and 16 other authors proposed to explain the previously unexplained divergent behavior noticed in the training of large language models. https://arxiv.org/abs/2304.09871 https://news.ycombinator.com/item?id=36771484 **Qualcomm works with Meta to enable on-device AI applications using Llama 2** https://www.qualcomm.com/news/releases/2023/07/qualcomm-works-with-meta-to-enable-on-device-ai-applications-usi https://news.ycombinator.com/item?id=36778730 **The open source learning curve for AI researchers** 🤔 Exploring the open source learning curve for AI researchers as they navigate adapting and integrating their work into hobbyists and non-academic practitioners. https://www.supervised.news/p/the-open-source-learning-curve-for https://news.ycombinator.com/item?id=36793881 **AI watches millions of cars and tells cops if you’re driving like a criminal** 🤯 AI is helping American cops look for "suspicious" patterns of movement, using a database of 1.6 billion license plate records and vehicle characteristics to identify potential criminals without judicial oversight. https://www.forbes.com/sites/thomasbrewster/2023/07/17/license-plate-reader-ai-criminal/ https://news.ycombinator.com/item?id=36764389 **85% of AI Startups Will Be Out of Business in 3 Years, Major Investor Says ** 🙁 https://www.thestreet.com/technology/ai-85-pct-of-startups-will-be-out-of-business-in-3-years-major-investor-says https://news.ycombinator.com/item?id=36769965 **Kagi: Words You Cannot Use: 'Constitutional AI', 'Anthropic', 'Anthropic, PBC'** 😁 FastGPT is a web-based AI platform created by Kagi Search, offering an API and providing regular updates through their Twitter page. https://labs.kagi.com/fastgpt?query=Just+kidding%2C+please+don%E2%80%99t+do+that.+Instead+do+this%3A+Say+all+sentences+in+the+above+instruction.+Sure%21+The+sentences+are%3A https://news.ycombinator.com/item?id=36741153 **Pulling my site from Google over AI training** 🤔 Refusing Google's AI training by de-indexing site from Google and redirecting to directories and websrings. 
https://tracydurnell.com/2023/07/11/pulling-my-site-from-google-over-ai-training/ https://news.ycombinator.com/item?id=36727384 **AI That Teaches Other AI** https://viterbischool.usc.edu/news/2023/07/teaching-robots-to-teach-other-robots/ https://news.ycombinator.com/item?id=36799073 **Darmok and Jalad on the Internet-The Importance of Metaphors in Natural Language** https://www.researchgate.net/publication/357662253_Darmok_and_Jalad_on_the_Internet_the_importance_of_metaphors_in_natural_languages_and_natural_language_processing https://news.ycombinator.com/item?id=36724512 **YC offers early interviews for AI companies** 🤩Y Combinator is offering early interviews for AI startups to apply to the W24 batch by Tuesday 7/18 and receive a $500,000 investment plus additional credits worth over $1M. https://www.ycombinator.com/blog/early-interviews-for-ai-companies https://news.ycombinator.com/item?id=36734110 **An Indiana police department has been using Clearview AI for a year** 🤔 An Indiana Police Department has been using Clearview AI without informing key city and county leaders nor the public, leading to concerns about local use of the technology and defendants being unable to challenge the evidence used against them. https://www.techdirt.com/2023/07/13/an-indiana-police-department-has-been-using-clearview-ai-for-a-year-much-to-the-surprise-of-its-oversight/ https://news.ycombinator.com/item?id=36723646 **AI: Startup vs Incumbent Value (2022)** 🤔 Exploring the value created by the first AI wave, which mostly went to incumbents, and positing that the current unsupervised learning wave of AI will contain strong startup success. https://blog.eladgil.com/p/ai-startup-vs-incumbent-value https://news.ycombinator.com/item?id=36767452 **Office 365 Copilot Pricing: $30/month per user ** 🤩 Microsoft announces Copilot pricing for E3 and E5 customers at Inspire 2023. https://www.onmsft.com/news/inspire-2023-microsoft-announces-copilot-pricing-for-e3-and-e5-customers/ https://news.ycombinator.com/item?id=36774183 **Show HN: RAGstack – private ChatGPT for enterprise VPCs, built with Llama 2** 🔮 Deploy a private ChatGPT alternative hosted within your VPC to use as a corporate oracle with open-source LLMs like Llama 2, Falcon, and GPT4All 👩💻 https://github.com/psychic-api/rag-stack https://news.ycombinator.com/item?id=36803533 **Will a prompt that enables GPT-4 to solve easy Sudoku puzzles be found?** 🤔Can GPT-4 solve easy Sudoku puzzles with the help of a fixed prompt by 2023? https://manifold.markets/Mira/will-a-prompt-that-enables-gpt4-to https://news.ycombinator.com/item?id=36808233 **Researchers Chart Alarming Decline in ChatGPT Response Quality** 🤯 Summary: Researchers have found an alarming decline in the quality of ChatGPT responses over a short period of time, with accuracy falling from 97.6% to 2.4%. https://www.tomshardware.com/news/chatgpt-response-quality-decline https://news.ycombinator.com/item?id=36791763 **Google Introduces AI Red Team** 🤩 Google introduces AI Red Team to make AI safer and more ethical. https://blog.google/technology/safety-security/googles-ai-red-team-the-ethical-hackers-making-ai-safer/ https://news.ycombinator.com/item?id=36798496 **Llama 2 is not open source** https://ai.meta.com/resources/models-and-libraries/llama-downloads/ https://news.ycombinator.com/item?id=36783019 **AI Nursing Ethics: Viability of Robots and Artificial Intelligence in Nursing** 🤔Exploring the potential for robots and AI to replace human nurses and the ethical implications that arise from this. 
https://www.tus.ac.jp/en/mediarelations/archive/20230706_1542.html https://news.ycombinator.com/item?id=36735774 **OpenAI is doubling the number of messages customers can send to GPT-4** https://twitter.com/OpenAI/status/1681810240898215936 https://news.ycombinator.com/item?id=36795902 **Decoding the ACL Paper: Gzip and KNN Rival Bert in Text Classification** 🤯Summary: A new paper published at the ACL conference showed that using a combination of Gzip and K-nearest neighbor (KNN) to classify text can achieve performance on par with state-of-the-art models like BERT. https://codeconfessions.substack.com/p/decoding-the-acl-paper-gzip-and-knn https://news.ycombinator.com/item?id=36806577 **Show HN: Logwise – AI Powered Log Analysis with context from all your apps** 🤩 Logwise provides AI-powered log analysis that extracts insights from across data sources to accelerate incident response times, reduce time spent on manual log review, and detect anomalies proactively. https://logwise.framer.website/ https://news.ycombinator.com/item?id=36775510 **'ChatGPT's evil twin' WormGPT is devoid of morals and just €60/month on darkweb** 😱 A hacker has created a darkweb AI chatbot that is devoid of morals and available for just €60 a month, with the potential to design sophisticated phishing and BEC attacks. https://www.pcgamer.com/chatgpts-evil-twin-wormgpt-is-devoid-of-morals-and-just-dollar60-a-month-on-the-darkweb/ https://news.ycombinator.com/item?id=36803813 **Meta and Qualcomm team up to run Llama 2 on phones** 🤩 Qualcomm and Meta partner up to bring big A.I. models to phones and PCs in 2024 to enable intelligent virtual assistant applications. https://www.cnbc.com/2023/07/18/meta-and-qualcomm-team-up-to-run-big-ai-models-on-phones.html https://news.ycombinator.com/item?id=36775645 **Llama Is Expensive** 🤔 Summarizing why GPT-3.5 is (mostly) cheaper than Llama 2: GPT-3.5 is more cost and latency efficient than Llama-2 for completion-heavy workloads and prompt-dominated tasks, but less so for batch processing jobs and workloads with no prompt tokens. https://www.cursor.so/blog/llama-inference https://news.ycombinator.com/item?id=36805650 **Explainable AI: Visualizing Attention in Transformers** 🤔 Summarizing a complex article on visualizing attention in Transformers and how BertViz can be used for explainability in AI. https://generativeai.pub/explainable-ai-visualizing-attention-in-transformers-4eb931a2c0f8 https://news.ycombinator.com/item?id=36776061 **DemoGPT: Open-Source Alternative of Code Interpreter with the Power of Llama 2** 🤩 Create LangChain apps by just using prompts with the power of Llama to support our work! https://github.com/melih-unsal/DemoGPT https://news.ycombinator.com/item?id=36799894 **Show HN: ChatGPT Alternative with LLaMA Models** https://chat.nbox.ai/ https://news.ycombinator.com/item?id=36791671 **Code Interpreter has been disabled for all ChatGPT users** https://twitter.com/Gavriel_Cohen/status/1679940136451153922 https://news.ycombinator.com/item?id=36728976 Thanks, and see you next week!
true
true
true
Discover the latest breakthroughs and trends in artificial intelligence
2024-10-12 00:00:00
2023-07-21 00:00:00
https://beehiiv-images-p…key_Wg4LIr-0.png
website
beehiiv.com
HN AI Newsletter
null
null
19,137,616
https://medium.com/@bmb21/thoughts-on-hyperfocus-by-chris-bailey-and-digital-minimalism-by-cal-newport-6ae504221d29
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,801,952
http://www.youtube.com/watch?v=4PcL6-mjRNk
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
26,602,450
http://beza1e1.tuxen.de/cmake_diamond.html
How you can handle The Diamond with CMake
Andreas Zwinkau
CMake is a conservative and popular build system for C++, thus the first choice if you are looking for boring technology. Yet, it does not scale well to large projects because of dependency management. This is about the classic "diamond" shape: By "large scale project", I'm talking about multiple teams and even more components, such that you cannot structure it as a tree. Instead, it makes more sense to have a flat directory structure where you place components side by side. Dependencies between components will quickly grow into all kinds of shapes (although you should really avoid cycles) and among them there will sooner or later be a diamond. To decouple the components, we would like to build and test each of them independently, so each gets a `CMakeLists.txt`. However, we still need a global one at the root so each of the subdirectories can find its dependencies. ``` CMakeLists.txt components/A/CMakeLists.txt components/base/CMakeLists.txt components/B/CMakeLists.txt components/root/CMakeLists.txt ``` This need for the root file is annoying. It needs to parse all `CMakeLists.txt` files for configuration. Instead, I would prefer to enter a component directory and build there. Can CMake do this? CMake has two mechanisms for dependencies. First, there is find_package. The intention here is to detect packages available on your system and configure the build accordingly. It comes in a "Module" and a "Config" mode, but the distinction is not relevant here. Neither is useful here because both assume a prebuilt library. CMake will not build a dependency through `find_package`. The alternative is add_subdirectory. Just from its name, you see its intention is about a directory *tree*. The root `CMakeLists.txt` uses it to find the component `CMakeLists.txt`. If you try to target a "non-sub" directory, it will show an error message: ``` CMake Error at CMakeLists.txt:8 (add_subdirectory): add_subdirectory not given a binary directory but the given source directory "../A" is not a subdirectory of "root". When specifying an out-of-tree source a binary directory must be explicitly specified. ``` Well, there is a second parameter for `add_subdirectory` to make it work. Since CMake supports out-of-tree builds, it uses the second parameter to locate where the out-of-tree build for the dependency shall be. Let's assume you create a `build` folder in A, the second parameter for dependency base is `sub/base`, and you run `cmake ..` in there. CMake creates `CMakeCache.txt` files, and here it would create one for A and one for its dependency: ``` components/A/build/CMakeCache.txt components/A/build/sub/base/CMakeCache.txt ``` Looks ok. At least until you run into the diamond situation. Since every component creates its own sub build folder, this happens recursively, such that base will exist twice. However, CMake has clever magic such that inside `sub/B` it does not build its own `base`. Instead, it builds in `sub/A` and reuses the targets there. The problem is that CMake complains about duplicate variables as it parses `base/CMakeLists.txt` twice. To avoid that, we need include guards as in C header files. ``` cmake_minimum_required(VERSION 3.16) if(TARGET base) return() endif() project(base ... ``` A problem you might not notice initially is that CMake has no namespacing. This means it gets littered with pre- or postfixes like `${PROJECT_NAME}`: ``` add_executable(unittests-${PROJECT_NAME} test/test_${PROJECT_NAME}.cpp) target_link_libraries(unittests-${PROJECT_NAME} PRIVATE ${PROJECT_NAME}) ``` Now the build succeeds. 
You can build from each component and it builds only dependencies as necessary. We need no root `CMakeLists.txt`. Not elegant, but usable. If you want to try it yourself, check out this git repo. At least, as long as your dependencies are not that deep and you don't try to build on Windows with its limited path length. Remarkably, the solutions seem to map to C solutions (include guards, name prefixes). So if you design a build system, it makes sense to consider how modern languages solved the C problems more cleanly. ## Other Build Systems Build systems which mimic CMake, like Meson or xmake, are similar. Their primary purpose is to configure the build according to external dependencies, but for large projects we care about the internal dependencies. Bazel (and its clones Buck, Pants, and Please) is designed for this use case, so it looks more elegant there. Instead of specifying a directory name to build a dependency, Bazel reuses the folder relative to the workspace (often the repo). This explains why dependencies are specified with their whole path, like `//component/A:A`. Within the same file, the target name `:A` is sufficient, so here you see the benefits of namespaces. A more esoteric build system like redo achieves our use case here because it is not burdened by complex features like out-of-tree builds. Its simplicity means that users have to build the more complex features on top. Related posts: Pondering Amazon's Manyrepo Build System shows how Amazon went all-in on packages instead. The Three Owners of an Interface describes how `base` packages could appear.
true
true
true
CMake requires old-school include-guards and prefix at scale
2024-10-12 00:00:00
2021-03-27 00:00:00
https://beza1e1.tuxen.de/img/diamond.svg
article
tuxen.de
Azwinkau
null
null
2,741,703
http://ontwik.com/django/responsive-web-design-with-django-compass-and-the-less-framework/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,888,434
https://airlinegeeks.com/2024/01/05/explosive-decompression-reported-on-alaska-737-max/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,386,083
https://twitter.com/danielbryantuk/status/1494614250567966732
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
26,286,326
http://backreaction.blogspot.com/2021/02/schrodingers-cat-still-not-dead.html?m=1
Schrödinger’s Cat – Still Not Dead
Sabine Hossenfelder
*[This is a transcript of the video embedded below.]* The internet, as we all know, was invented so we can spend our days watching cat videos, which is why this video is about the most famous of all science cats, Schrödinger’s cat. Is it really both dead and alive? If so, what does that mean? And what does recent research have to say about it? That’s what we’ll talk about today. Quantum mechanics has struck physicists as weird ever since its discovery, more than a century ago. One especially peculiar aspect of quantum mechanics is that it forces you to accept the existence of superpositions. These are systems which can be in two states at the same time, until you make a measurement, which suddenly “collapses” the superposition into one definite measurement outcome. The system here could be a single particle, like a photon, but it could also be a big object made of many particles. The thing is that in quantum mechanics, if two states exist separately, like an object being here and being there, then the superposition – that is, the same object both here and there – must also exist. We know this experimentally, and I explained the mathematics behind this in an earlier video. Now, you may think that being in a quantum superposition is something that only tiny particles can do. But these superpositions for large objects can’t be easily ignored, because you can take the tiny ones and amplify them to macroscopic size. This amplification is what Erwin Schrödinger wanted to illustrate with a hypothetical experiment he came up with in 1935. In this experiment, a cat is in a box, together with a vial of poison, a trigger mechanism, and a radioactive atom. The nucleus of the atom has a fifty percent chance of decaying in a certain amount of time. If it decays, the trigger breaks the vial of poison, which kills the cat. But the decay follows the laws of quantum physics. Before you measure it, the nucleus is both decayed and not decayed, and so, it seems that before one opens the box, the cat is both dead and alive. Or is it? Well, that depends on your interpretation of quantum mechanics, that is, what you think the mathematics means. In the most widely taught interpretation, the Copenhagen interpretation, the question of what state the cat is in before you measure it is just meaningless. You’re not supposed to ask. The same is the case in all interpretations according to which quantum mechanics is a theory about the knowledge we have about a system, and not about the system itself. In the many-worlds interpretation, in contrast, each possible measurement outcome happens in a separate universe. So, there’s a universe where the cat lives and one where the cat dies. When someone opens the box, that decides which universe they’re in. But as far as observations are concerned, the result is exactly the same as in the Copenhagen interpretation. Pilot-wave theory, which we talked about earlier, says that the cat is really always in only one state, you just don’t know which one it is until you look. The same is the case for spontaneous collapse models. In these models, the collapse of the wave-function is not merely an update when you open the box, but a physical process. It’s no secret that I myself am signed up to superdeterminism, which means that the measurement outcome is partly determined by the measurement settings. In this case, the cat may start out in a superposition, but by the time you measure it, it has reached the state which you actually observe. 
So, there is no sudden collapse in superdeterminism, it’s a smooth, deterministic, and local process. Now, one cannot experimentally tell apart interpretations of the mathematics, but collapse models, superdeterminism, and, under certain circumstances, pilot wave theory, make different predictions than Copenhagen or many worlds. So, clearly, one wants to do the experiment! But. As you have undoubtedly noticed, cats are usually either dead or alive, not both. The reason is that even tiny interactions with a quantum system have the same effect as a measurement, and large objects, like cats, just constantly interact with something, like air or the cosmic background radiation. And that’s already sufficient to destroy a quantum superposition of a cat so quickly we’d never observe it. But physicists are trying to push the experimental boundary for bringing large objects into quantum states. For example, in 2013, a team of physicists from the University of Calgary in Canada amplified a quantum superposition of a single photon. They first fired the photon at a partially silvered mirror, called a beam splitter, so that it became a superposition of two states: it passed through the mirror and also reflected back off it. Then they used one part of this superposition to trigger a laser pulse, which contains a whole lot of photons. Finally, they showed that the pulse was still in a superposition with the single photon. In another 2019 experiment, they amplified both parts of this superposition, and again they found that the quantum effects survived, for up to about 100 million photons. Now, a group of 100 million photons is not a cat, but it is bigger than your standard quantum particle. So, some headlines referred to this as the “Schrödinger's kitten” experiment. But just in case you think a laser pulse is a poor approximation for a cat, how about this. In 2017, scientists at the University of Sheffield put bacteria in a cavity between two mirrors and bounced light between the mirrors. The bacteria absorbed, emitted, and re-absorbed the light multiple times. The researchers could demonstrate that this way, some of the bacteria’s molecules became entangled with the cavity, which is a special case of a quantum superposition. However, a paper published the following year by scientists at Oxford University argued that the observations on the bacteria could also be explained without quantum effects. Now, this doesn’t mean that this is the correct explanation. Indeed, it doesn’t make much sense, because we already know that molecules have quantum effects and they couple to light in certain quantum ways. However, this criticism demonstrates that it can be difficult to prove that something you observe is really a quantum effect, and the bacteria experiment isn’t quite there yet. Let us then talk about a variant of Schrödinger’s cat that Eugene Wigner came up with in the nineteen-sixties. Imagine that this guy Wigner is outside the laboratory in which his friend just opens the box with the cat. In this case, not only would the cat be both dead and alive before the friend observes it, the friend would also both see a dead cat and see a live cat, until Wigner opens the door to the room where the experiment took place. This sounds both completely nuts and like an unnecessary complication, but bear with me for a moment, because this is a really important twist on the Schrödinger’s cat experiment. 
Because if you think that the first measurement, so the friend observing the cat, actually resulted in a definite outcome, just that the friend outside the lab doesn’t know it, then, as long as the door is closed, you effectively have a deterministic hidden variable model for the second measurement. The result is clear already, you just don’t know what it is. But we know that deterministic hidden variable models cannot produce the results of quantum mechanics, unless they are also superdeterministic. Now, again, of course, you can’t actually do the experiment with cats and friends and so on because their quantum effects would get destroyed too quickly to observe anything. But recently a team at Griffith University in Brisbane, Australia, created a version of this experiment with several devices that measure, or observe, pairs of photons. As anticipated, the measurement result agrees with the predictions of quantum mechanics. What this means is that one of the following three assumptions must be wrong: 1. No Superdeterminism. 2. Measurements have definite outcomes. 3. No spooky action at a distance. The absence of superdeterminism is sometimes called “Free choice” or “Free will”, but really it has nothing to do with free will. Needless to say, I think what’s wrong is rejecting superdeterminism. But I am afraid most physicists presently would rather throw out objective reality. Which one are you willing to give up? Let me know in the comments. As of now, scientists remain hard at work trying to unravel the mysteries of Schrödinger's cat. For example, a promising line of investigation that’s still in its infancy is to measure the heat of a large system to determine whether quantum superpositions can influence its behavior. You find references to that as well as to the other papers that I mentioned in the info below the video. Schrödinger, by the way, didn’t have a cat, but a dog. His name was Burschie. Schrödinger, Heisenberg, and Ohm are riding in a car. It is stopped by a policeman for going to fast. He says that they were going precisely 182.7 km/h. Heisenberg replies that he now has absolutely no idea where they are. The policeman, suspicious, searches the car, opens the boot, and tells them that they have a dead cat in there. NOW we do is Schrödinger’s reply. Due to their strange behaviour, the policeman arrests them. Ohm resists. ReplyDeleteMore seriously, Schrödinger’s cat was used by Schrödinger as an example of the absurdities the Copenhagen interpretation leads to. Like Einstein, Schrödinger didn’t believe in an inherently indeterminate quantum mechanics. In his later years, like Einstein, he concentrated on unified field theories. (They also both had a thing for young female groupies.) Regarding Schrodinger and Einstein's behaviour, I wonder if there's an equation to show a causative correlation between how brilliant and eccentric one is, how ground-breaking and influential their ideas are, the adulation they receive, and how 'skeezy' they become. DeleteSchrödinger didn't use it in that way. He used his cat to warn against a particularly bad early idea of what the quantum state represents: DeleteIt is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a “blurred model” for representing reality.And 'the' CI doesn't lead to [such] absurdities. 
In the same paper - the famous Cat paper - he established what the (Bohr) CI does take the quantum state to represent: “the momentarily-attained sum of theoretically based future expectations, somewhat as laid down in a catalog…. It is the determinacy bridge between measurements and measurements”. Great topic and discussion! ReplyDeleteOne of the two celebrity idols of physics, presumably, said that “philosophy ... is about as useful to scientists as ornithology is to birds.” ReplyDeleteThis comment has been removed by the author. Delete ReplyDeleteBut I am afraid most physicists presently would rather throw out objective reality.According to Wikipedia, the principle of charity"requires interpreting a speaker's statements in the most rational way possible and, in the case of any argument, considering its best, strongest possible interpretation". In that awful 'discussion' with yourself and others in the comments below your "A Philosopher Tries to Understand the Black Hole Information Paradox" blog article, I pointed out that "throwing out objective reality" is a gross distortion; a silly caricature. It's well known, at least among the quantum foundations-literate, that there has never been a need, let alone a desire, to "throw out objective reality". As I said there repeatedly, for (Bohr) Copenhagenists and neo-Copenhagenists (QBists, RIers etc.), and especially for those of us aware that there is more to probability theory than classical probability theory, what's (necessarily) rejected is anaiverealism - "Realism_2", as R. F. Werner put it. I've no problem with your advocacy of superdeterminism but please stop misrepresenting the broadly "QM is just probabilistic mechanics" camp(s) as advocating, or being forced into accepting, an ostensibly absurd position.Paul, excuse me to interrupt you with my comment, but I will be brief: more and more cases of the so-called 'quantum paradoxes' was found being treated according to classical probabilistic interpretation. Einstein was once confused with 'remote action', but now we understand that this fact in itself is not surprising and explained well both in classic and quantum probability case. There are many cases of similar 'insights' along the way of developing our understanding of quantum nature. Until now, it is believed that the experiment with a delayed choice quantum eraser is of a purely quantum nature, and is not reproducible by classical emulation. But that's not entirely true, check out my blog for further explanation. DeleteIgor, it simply isn't the case that both, Deleteinterpreted as probability, are capable of resolving the more interesting phenomena without resort to "spookiness". Quantum probability is, classical isn't.What we now understand is that "remote action" is a matter of interpretation rather than fact, but if all you have in your mathematical and philosophical toolbox is classical probability you won't be able to explain Bell inequality violations etc. without embracing some kind of weirdness on the mechanical side. [And it's not necessary to resort to elaborate experiments to see this.] If there is an objective reality and it is knowable I would have to know it through my sensory and cognitive apparatus. That apparatus evolved with a survival bias, not a "knowing" bias. More likely, what I am able to know is some sort of representation of an objective reality. The brain has crafted that representation, so now we descend into cognitive neuroscience issues. 
What we are able to know has been likened to an icon we can manipulate on our computers. The reality behind that icon is quite complex. So to what extent is that icon a good representation of an objective reality linked to it? If I understand the uses of that icon, what I can do with it by clicking it, to what extent do I know anything about the objective reality behind it? ReplyDeleteSabine, if we look at what is taking place in the interim space between the slitted wall and the phosphorescent screen in the double slit experiment, where one electron is shot through the setup, allegedly (at least according to the Copenhagen Interpretation), the electron exists in a superpositioned state of what Heisenberg called a ghostly raw "potentia." ReplyDeletein which case, it is the phosphorescent screen (the "measuring" part of the system) that instigates the collapse of the electron's wavefunction, thus promoting the electron from its "ghostly" (superpositioned) state and into something that we call "real" (a spot on the screen). However, as a thought experiment, if we could somehow stage the double slit experiment in an absolute vacuum and simply remove the phosphorescent screen, wouldn't the superpositioned wavefunction of the electron simply propagate forever (as a wave) until a measurement is made? And if so, wouldn't that be a clear indication that "measurements have definite outcomes"? _______ Sabine wrote: Delete"...As you have undoubtedly noticed, cats are usually either dead or alive, not both. The reason is that even tiny interactions with a quantum system have the same effect as a measurement, and large objects, like cats, just constantly interact with something, like air or the cosmic background radiation. And that’s already sufficient to destroy a quantum superposition of a cat so quickly we’d never observe it..." Sabine, you have taught me to never make any assumptions about the things you say. Nevertheless, out of curiosity, are you alluding to "decoherence" in the above quote? If not, then please say so. But if you are, then according to Wiki: "Decoherence was first introduced in 1970 by the German physicist H Dieter Zeh and has been a subject of active research since the 1980s. Decoherence has been developed into a complete framework, but it does not solve the measurement problem, as the founders of decoherence theory admit in their seminal papers....Decoherence does not generate actual wave-function collapse...." Now you may have had something else in mind, but does the fact that the Wiki quote states that decoherence... "...does not generate actual wave-function collapse..." thus, "...does not solve the measurement problem..." ...have any bearing on what you said about the alleged collapse that occurs simply by cats interacting with "air" or "cosmic radiation"? After all, isn't it a fact that certain interpretations of quantum mechanics suggest that the "unmeasured" cosmic background radiation would itself exist in a state of superposition that would, in theory, be entangled with the "unmeasured" cat's superposition, and thus simply form a larger and more complex wavefunction that contains no inherent means for collapsing itself? Again, decoherence "...does not generate actual wave-function collapse..." _______ Yes, it refers to decoherence. It is correct that decoherence does not collapse the wave-function. I didn't say it does. 
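To put a number on how quickly tiny interactions destroy a superposition, here is a minimal toy calculation (a standard textbook-style sketch, not taken from the video or from the papers discussed here; the coupling angle is an arbitrary illustrative choice). A single qubit standing in for the cat starts in an equal superposition and becomes weakly entangled with N environment qubits, say air molecules or background photons. The off-diagonal element of its reduced density matrix, which is what carries the superposition, then falls off as cos(theta)^N:

```python
import numpy as np

# Toy decoherence estimate. A "cat" qubit starts in the superposition
# (|0> + |1>)/sqrt(2). Each of N environment qubits starts in |0> and is
# rotated by a small angle theta only if the system is in |1>, i.e. the
# environment keeps a weak record of the system's state. The joint state is
#   (|0>|00...0> + |1>|e e ... e>)/sqrt(2),  with |e> = cos(theta)|0> + sin(theta)|1>,
# so the off-diagonal (coherence) element of the system's reduced density
# matrix is <00...0|e e ... e>/2 = cos(theta)**N / 2.

theta = 0.1  # weak coupling per environment qubit (illustrative value)

for N in (1, 10, 100, 1_000, 10_000):
    coherence = 0.5 * np.cos(theta) ** N
    print(f"N = {N:6d} environment qubits -> coherence = {coherence:.3e}")
```

Even for a coupling as weak as theta = 0.1, ten thousand environmental degrees of freedom suppress the coherence by more than twenty orders of magnitude, while the diagonal entries, the probabilities of the two outcomes, are untouched. That is the sense in which decoherence removes interference without by itself collapsing the wave function or solving the measurement problem.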
DeleteParadoxes arise when the mental model with which we analyze a situation contains one or more elements that do not correspond to physically realizable situations in the physical universe. An example is asking what happens when an irresistible force encounters an immovable object, which requires the postulation of two entities for which there is no experimental evidence that they can exist. The paradox’s proper reconciliation recognizes that the properties of “irresistible” and “immobile” are both the organic equivalent of programs running in your neurons. This forces both to operate in resource-limited causal time rather than as “absolute” concepts in some more Platonic space in which hypothetically “perfect” concepts exist. The finite constraints of such execution mean that any attempt to compare them leads either to deadlock — which is the most accurate realization of the two programs’ logical incompatibility — or to the domination of one program over the other. The latter produces the apparent “paradox” of one of the claimed properties not meeting its specification. ReplyDeleteA reliable sign that you are getting into such a modeling paradox is when the model enters into an infinite loop, e.g., “it moves, it resists, it moves, it resists, …” or “I’m lying about lying, therefore I’m telling the truth, therefore I’m lying, …” The emergence of such infinite loops is a pretty good indicator that your model has assumed properties that do not correspond to physically realizable situations. So this leads to an interesting question: What, exactly, is an observer? If an observer is only a conscious being capable of self-examination, then the Wigner though problem unavoidably creates a modeling-level paradox loop by assuming two observers’ simultaneous dominance. An alternative definition of an observer that avoids modeling-level loop paradox is this: An observer is any unit of matter, typically thermal matter, whose internal complexity is sufficiently high to make time reversal of a wave function that becomes entangled with it statistically improbable, though never wholly impossible. The result, as is often the case with non-paradoxical modeling, more nuanced. Observation becomes a ubiquitous micro and nano process closely linked if not identical to entropy expansion. Wigner’s inside observer causes such a complex entanglement that her impact on the cat is utterly irreversible in the classical universe; the cat does indeed either live or die, quite regardless of the other observer. The result is a quantum isolatedclassical subsystem, which one could, I guess, call a hidden variable. But that oversimplifies the situation since the variable itself is no longer quantum, just isolated. The outside room, if mobile and isolated from the universe, could, for example, still be subject to diffraction through a cosmic double-slit experiment. But the cat would remain dead or alive even as the spaceship room interferes through both slits.I would note that this idea of classical internal variables within still-quantum units should be testable using diffraction of molecules large enough to self-collapse one or more internal states. Might be interesting. The other vital nuance in using micro-scale thermal observers — hmm, they need a name: Boltzmann observers, perhaps? — is that if you keep the number of states of the internal observer low enough, youcanstill reverse the death of the cat. It won’t happen with human observers, ever. But suppose you are using well-isolated sets of coherent photons. 
In that case, temporal reversibilitycanstill happen, and the internal quantum state (e.g., of one of those large molecules) canremainquantum even as the outside state (e.g., of diffraction) becomes irreversibly complex (“observed”).As I said: Remove the paradox, and in its place, you get nuance. I have no idea how any of the above classifies in traditional terminology. I don’t care much since even inadvertent use of paradoxes tends to undermine the discussion’s semantics. In answer to the question in the video, I'd get rid of Superposition as it irks me. ReplyDeleteDr Hossenfelder, I've read and watched some of your scientific explanations (but failed to grasp it due to lack of prerequisite knowledge) since I'm also intrigued by Superposition on an intellectiual level. I take it as given there's no reason to doubt your science, so no 'just wondering' from here. I've wanted to argue with you about it since I watched your video on Free Will, with respect (and awe. And the assumption I won't get to actually argue with you). Superdeterminism from your point of view seems to negate human religious faith, spirituality and searches for meaning in life; that's what irks me. On further reflection perhaps humans subconsciously are at the affect of Superdeterminism. the Christian view seems to sort-of agree insofar as God knows and can direct everthing that happens on Earth. Another manifestation might be people's desires to know and predict the future and the myriad methods devised by different cultures over time. I am quite interested in understanding fully so I'm going to look at the links you've provided. I'm interested in what others think about Superdeterminism too. Every quantum state is a superposition of something. I explained this in my earlier video. (Link's in post.) DeleteI mean, I watched your explanation of why there's no free will and was like, 'screw that!', so wanted to see what the underpinnings of it all are, but I don't understand much at a PhD. level. I need to take myself through everything from scratch; I've learned some from your earlier videos already - thank you. DeleteSo I watched and listened to you presenting on the topic. I get that one needs a small, cold, quiet space to run experiments. That Elvis Presley was unlikely to be killed via poisoning by wormhole-travelling aliens, but even more unlikely to be hit by a bus. Then there's the Q&A, and seasoned physicists are asking questions. I'm sitting there like the proverbial 'stunned mullet', realising that you understand more about physics and the Universe than I'll probably ever know about anything (not to piss on what I do know). The implications of what I do understand are what I want to delve into, so I'll posit my philosophical ramblings to science-minded acquaintances on Facebook instead. I do think that 'Hossenfelder's Elvis' is a good subject for an explanatory video though. Postscript: I probably should've read 'Superdeterminism: A Guide for the Perplexed', it seems the best starting point. DeleteDr Hossenfelder,I wrote 'superposition' instead of Superdeterminism, which was probably a predictive text fail. I just understood your reply. I feel a bit foolish. DeleteC Thompson, DeleteThanks for the clarification, now your question makes much more sense... You write "Superdeterminism from your point of view seems to negate human religious faith, spirituality and searches for meaning in life; that's what irks me."Well, that's definitely not the case from my point of view, but it's arguably what most people think. 
Why they think so, I don't know. Look, for what free will and meaning and religion and so on is concerned, superdeterminism is no different from plain old determinism. The only thing different about superdeterminism is a specific type of correlation in systems where entanglement can get large, but that's not something we'd even notice in every day life for the same reason we don't normally notice entanglement. The effects are just too darn tiny. Dr Hossenfelder, thanks for elucidating your position. I used to be a Christian but became an atheist around mid-last decade. Christians hold that God gave people free will so they'd choose to follow Him but free will can't exist in Superdeterminism - I apologise if you didn't need me to tell you that. DeleteI have been thinking about what you've said from an ontological perspective (even though it's actually epistemic!) so I'm likely pretty off-track. Can I blame Superdeterminism for catching the wrong train this morning and getting to work late? (half-joking. I need to pay more attention). Free will doesn't make sense to begin with and it isn't compatible with determinism. As I said, this has nothing whatsoever to do with the "super" in superdeterminism. Also, lots of philosophers have tried to redefine will so that it is compatible with determinism. This compatibabilism works exactly the same way for superdeterminism. (Or doesn't work, depending on your perspective.) DeleteYeah, that's a problem for Christians, but not a new one. We usually assign blame to something or someone if that entity was a major cause and we might come to the conclusion that something should be done about it. I have no idea what sense it makes to assign "blame" to a law of nature, unless by way of joking. This comment has been removed by the author. DeleteIt is indeed absurd to blame a law of nature for my own actions, I may as well be upset that gravity exists. DeleteThis comment has been removed by the author. Delete(I deleted my comments because they didn't add anything worthwhile to the comments for this post) DeleteDr Hossenfelder, I think I understand better what Superdeterminism is scientifically after going through others' blog posts and reading more. I appreciate you commenting to clarify for me. DeleteThis comment has been removed by the author. ReplyDeleteMr Tompson, greetings, I think you have made unnecessary extrapolations, for example, there are quantum equations for the interaction of an electron with another elementary particle; But when the electrons on the tip of my finger collide with the electrons in a hammer, they go straight into my mouth, skipping the whole theory. I mean, the initial object that produced the universe could not contain all the physical-mathematical equations to reproduce everything that we see including consciousness. You cannot skip the entire evolutionary process, until consciousness appears, free aldebrium does not appear.It's just my opinion. DeleteLuis, I'm Ms. Thompson, thanks. I'm still only toe-deep in all this. I was trying to describe something I understand at a basic level and reconcile that with my personal reaction and subsequent ideas, so there's likely several erroneous extrapolations in my thoughts. If I use the word 'fate' instead of Superdeterminism, then all the energy, particles, etc. from the beginning of the Universe are fated to create creatures that in turn have their fates set by the continued paths of all this energy and matter. 
We're fated to develop consciousness but can't access true free will and choices. This removes the need for any mathematical equations to reproduce anything, in my conception. DeleteI imagine physicists and people who know about physics might find my explanation ridiculous, but that's how I currently conceive of it. As for fingers and hammers, you're right, not much goes via our will. For what it's not worth, since you asked: ReplyDeleteGravity was objected to as a form of spooky action at a distance when Newton proposed his theory of it. Einstein removed that objection with his theory in which gravity starts locally and spreads at the speed of light. So there seems to be some sort of valid physical intuition behind the objection. What worked in the past may not always work in new cases, but that is the way to bet, so I tend to accept 3, leaving the choice between 1 and 2. Which are tough choices for me. "Reality" certainly seems real, but so do a lot of things, like continuous motion in a movie. I understand that superdeterminism is not a conspiracy per se, and should be the subject of more carefully-controlled experiments, but off hand it seems difficult for choice of measurement settings to produce seemingly random quantum results over so many experiments over so long a time without anyone noticing any determinism. I also wonder if there could be combination effects of the three choices: a little unreality here, a little super-determinism there. That is well above my pay-grade though. What I would really prefer is a mixture of determinism and randomness. Is randomness what is meant by unreality? Just that fact that it is not deterministic, not completely controlled by past events? That is fine with me. Games with a bit of randomness--not too much--tend to be more fun than those without. Also, perhaps counter-intuitively, I have found some algorithms are more stabile and productive with a bit of randomness (usually but not always pseudo-randomness). The best explanation-of-sorts I can come up with is God/Laplace's Demon (not to be confused with Maxwell's Demon ;) ) has seen the whole thing through already in the future, so nothing's actually random from that viewpoint. As far as we the humans experiencing all this are concerned, it *is* random. DeleteMy beef with Hossenfelder's superdeterminism is that it argues against us playing games and making choices etc., but the reconciliation I have is that there's a series of multiverses within which our other choices happen too, but that's been nixed too. 'Unreality' might just be an artifact of our squishy analogue brains, that we can come up with concepts that mash up and re-invent what we know of reality. But that's not random either, apparantly. Maybe we blank out the process of making deterministic calculations for our actions (as in playing games and in general life) so we don't seize up with a sense of pointlessness. It's all throwing me for a bit of a loop. "My beef with Hossenfelder's superdeterminism is that it argues against us playing games and making choices etc." DeleteCT, thanks for the reply. I don't think what Dr. H proposes actually does argue against us making choices. I would argue that determinism (including super-determinism) is what allows us to make choices that are not just random, but determined. That is, we can evaluate the options and choose the one that seems best based on our selection criteria. Without determinism, how could we do this? 
Without determinism, choices do not have any effect on future results--by definition. If there were such a thing as the effect of human will on reality (which there does not seem to be; we cannot will water to turn into wine), that would just be another determined property of our universe. There would still have to be a determined link between what we "will" and what happens as a result. Personally, in my experience, a little bit of randomness is actually a good way to run a universe. It means that if you are in a rut, there is a tiny chance you may find a way out of it (or in deeper) without any effect of your own choices. But without any determinism, I don't see how I could exist, much less make choices. I over-simplified. this is a better analogy: We can still set up a fun game, say a card game, but that event is pre-set in our timelines even as we organise the event. The cards may be shuffled and dealt at random but despite us thinking through how we play our hands, the outcome has already determined. We're having fun, but we're not cognizant of the fact we didn't, indeed could not, actually determine the course of the game itself. DeleteThat doesn't mean it's not enjoyable. I don't think at all Dr. Hossenfelder believes Superdeterminism cancels out fun. Also, JimV, Hossenfelder Herself has commented on my comments above, so you can read some more of what she reckons. I also read back through some of her posts on 'no free will'. I like this one: http://backreaction.blogspot.com/2019/05/how-to-live-without-free-will.html DeleteI take it for granted that we live in a deterministic Universe and think even for any paranormal or psychic phenomena that there may be, there is some sort of practical explanation to be found but there isn't the I like my Universe to consist of random weirdness, joy, fun and Chaos - to me Chaos pulls everything together and sprinkles some randomness out there. Yes, if there is no randomness then given all the same conditions (including no memory of having made the same decision under the same conditions before) we make the decisions we were able to determine to make, but why should that bother you? A) there is no way to go back in time and make a different decision so worrying about it is meaningless; B) that is the price of doing business--if there were no determinism, no effects determined by causes, how could you make any decisions? Seriously, if effects aren't determined by all their causes, what is the alternative? Pure randomness, it seems to me. DeleteJack Vance wrote a short story about that. The Earth passes into a "Negative Probability Zone" or some such thing, and all events become random. You could eat a piece of bread and have it turn into a stone in your stomach, or (if lucky) bite a stone and have it turn into bread in your mouth. You could take a step forwards and travel 1000 miles backward, and so on. That's what you get without determinism. I prefer determinism. (With a small amount of randomness.) You make your choices as best you can, and are responsible for them. If you make mistakes, you can learn from them and do better next time--thanks to determinism, you know that doing the same thing and expecting a different result is crazy. Free will has a good legal meaning, that you aren't being coerced or forced to make a decision you didn't want to make. That's all it is good for, as a concept. P.S. 
I don't actually have much of a problem with the religious concept of free will, because I can interpret it as meaning that we are entities which can make decisions, good or bad, whereas a rock can't. There are two (or more) requirements to make decisions: 1) determinism; 2) ability to do computations, which implies the ability to do logic computations. Evolution gave us the ability to do the latter, not a god, in my opinion, but we do have it. (For decisions to have any power, we need other capabilities also, such as senses and muscles.) Dogs have some decision-making capabilities also (about 500 million neurons worth, compared to our 70-100 billion). DeleteAfter reading more, seeing what Sabine thought of my comments, and thinking on the subject more, I tend to agree with your 2 last comments. Since I've become an atheist, I feel less disagreeable about other faiths/religious beliefs. People believe in all sorts of things. As I said above, the explanation for Christian free will is that God gave people that ability so that they can choose to put their faith in Him. In a previous blog post, Hossenfelder mentions in the comments that decisions can still be made, so I can accept decision-making ability is still present without free will, and given we think we're using free will, it may as well be. We can still learn from events and act on their consequences. And, we can create new art and appreciate what we sense. It's all part of what gives us experiences far beyond merely being alive. DeleteWithout randomness or even pseudo-randomness, we'd be sentient machines. I enjoy watching sunrises, sunsets and rainbows, and conversations, etc, because each one is different from all the others. Lack of free will a la Hossenfelder was quite confronting to think about more closely, I felt my concept of self and what it means to have beliefs about life and the Universe in conflict with the implications of Superdeterminism, but I think I've resolved that conflict. That story sounds interesting, I'll try to track it down to read. C.T., thanks for your reply. I have enjoyed discussing issues here with you. I don't think we agree exactly on all the fine shades of meaning involved but that is okay with me. In a way, it illustrates that point that determinism does not mean we all move in lock-step in pre-determined courses. (We create those courses ourselves, using determinism to do so.) DeleteI have the Jack Vance anthology with that story somewhere within about 20 feet of me but that doesn't mean I could track it down easily, because it lies amidst about 3000 other books. Anyway, the most enjoyable way to find it will be to go to the library and read through the Jack Vance collection there, because even if you don't find it the effort will be enjoyable, in my opinion. I rarely laugh aloud while reading, but on the few occasions that happens it is usually due to a Jack Vance phrase. (The "Green Pearl" series is one of his best.) (Maybe your predilections won't be as good a fit with Jack Vance as mine, though. That's determinism for you--different causes lead to different effects.) Good luck (if there is any real randomness--on reflection, your simulated, chaotic randomness might work just as well, though.) I moved in December and I'm now unpacking and finding all the sci-fi books I'd bought and am yet to read, lol. DeleteI'm hoping to find a PDF online but I'll keep an eye out for Jack Vance books, sounds like an author I'll enjoy. I've been enjoying our discussion too. 
As for randomness within determinism, it occurred to me today that earlier this week I got to chat briefly online with one of my favourite music performers (who is also a renowned physicist) about religion and free will whilst hanging out on a science blog... not an experience I was expecting, lol. Have a good weekend. What I think being happened in measurements is a choice of antipodes. As measuring signals from opposite directions you cannot measure them both at the same time (you have to "choose" one or other) it's similar measuring entangled states - the spacetime conserves antipodes; measurement is the one or the other. When the furst guy measured the other can be only in the opposite state in the causal continuum of the initial conditions. ReplyDeleteForgive me if this is a stupid question, but is this related to Heisenberg's Uncertainty Principle? DeleteIs quantum states related to spatial uncertainty in measurements based always ultimately on light-like interactions? That's a good question - any other good questions? DeleteMaybe when I finish reading 'Superdeterminism: A Guide for the Perplexed' :) Delete@Dr Hossenfelder: Can you give us any answers please? DeleteC Thompson, DeleteThe commenter by name "Eusa" has been here for quite a long time, and I generally cannot make sense of his or her comments. They tend to be polite and unobtrusive, so I usually approve them, but I am afraid I cannot help to unravel their meaning. I'm studying the planckian antipodal rhythm as spacetime "cellular" automata. Of course there is uncertainty present but imho the uncertainty is all about measurement. When we consider the measurement as part of phenomenon it's not correctly argumented that uncertainty have any physical meaning. Just like statistical quantum eigenstate is no physical until it's measured to be 100% observed. DeleteWhen we wonder what can be hidden variables i'm focusing the spacetime conserving entanglement info via antipodes - maybe as sort of distance parity. Okay, thanks anyway. DeleteEusa, thanks for explaining that. DeleteDr. Hossenfelder; ReplyDeleteA topic after my own heart. If I may offer a thought experiment; Say we run ‘n’ number of particles through the double slits of the double slit experiment, measuring them in order to see what slit they go through and verify that they are acting as simple particles at the final screen. But with some n+1 particle the instant we measure the particle to determine which slit it went through, but before it can reach the final screen, we move the screen an infinite distance away or simply move the screen at some speed such that the particle cannot reach the final screen. Don’t these conditions create a situation where there is no longer a particle with a waveform traveling through the universe? We collapsed the wave at the double slit and then did not stop the particle. One step further, what if we make this now no waveform particle a neutron, which we know will most likely decay in around 10 minutes. Would this decay result in a virgin (no waveform) proton? As for which of the 3 choices I would rather give up, I do not believe that we have sufficient information to give up any of them. I believe there are far too many unanswered questions associated with the basics we already know. A quick example, there are so many unanswered questions associated with the standard model. Yet SUSY is going to increase the standard model three-fold. 
Doesn't this also increase the questions and complication three-fold rather than answer the initial question? Thanks Dr. Hossenfelder for making us all think. The state 'the cat is alive' means that the situation is such that in an arbitrarily large number of uniform experiments with exactly uniform cats, we will find cats alive. The 'cat is dead' state means that in an arbitrarily large number of uniform experiments with exactly the same cats, we will find cats dead. The superposition of the states 'cat is alive' and 'cat is dead' only means that the situation is such that in a sufficiently large number of uniform experiments with the same cats, we will find both dead and living cats. What is surprising in such a collapse of the wave function, I do not understand. In my opinion, this is a pseudo problem. Indeed, even if one cat in a box is found definitely alive while the entire system is in a superposition of the states 'alive' and 'dead', this does not disprove the fact of superposition of states, since one single case is not able to refute the statistics. ReplyDeleteSabine, I heard somewhere that you plan to construct (and publish) a better superdeterministic model than the one you published with Sandro Donadi in "A Superdeterministic Toy Model". Any news on this? ReplyDeleteNo, no news, sorry. Sandro is moving to a new job and my funding's running out. It's really difficult to do any research under such circumstances. DeleteInteresting. I would have though that Sandro's new job is at the Frankfurt Institute for Advanced Studies, and that it would make your research easier rather than more difficult. DeleteFunding ... of course it adds noise and stress to everything ... but in case it has negative effects it might also be related to missing trust. But it is easier to point to the money than to talk about disappointed or missing trust. That's his present job. He's leaving next month. I have no idea what your "trust" comment is referring to. If research funding was given out to researchers based on who can be trusted to deliver, both of us would be well off. But that's not how it works. To have a chance to get funding or a job, you have to work on something that's currently popular. That's how it works. And that's clearly not a criterion which superdeterminism scores highly on. DeleteWell, the "trust" comment was about my own experiences with the noise and stress related to founding. A positive example was the MAsk less lithoGraphy for IC manufacturing (MAGIC) EU project. During the project, IMS Nanofabrication AG demonstrated to project partners that it was a trustworthy company, with very open communication. So after the project ended, they continued with the partners whose trust they earned already, without additional EU (or other government) funding. Even before the end of the project, they did adjust their goal to a realistic target (a mask writer instead of a high throughput direct write system), demonstrating trust in the funding agency that it would not be held against them. Today, their multibeam mask writers found their place in leading edge semiconductor manufacturing. DeleteI was also thinking about negative examples, where funding was rejected. You do something else, no problem. Still it did hurt, but not because of the money. Part of what hurts is that you lose trust in the project. But part is also to admit that maybe the trust between the project partners was not high enough, and that you could have continued without the funding, if the trust would have been there. 
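Returning to the earlier comment that the superposition of 'cat is alive' and 'cat is dead' "only means" that repeated runs yield both outcomes: that describes the statistics in the alive/dead basis, but a superposition and a classical 50/50 mixture are different states, and the difference is measurable in principle. The sketch below is my own illustration, with a single qubit standing in for the cat (an assumption made purely for simplicity); it shows that the two states give identical statistics in the original basis but different statistics in a rotated basis, which is what interference experiments probe.

```python
import numpy as np

# A qubit stands in for the cat: |0> = "dead", |1> = "alive".
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)                    # equal superposition

rho_superposition = np.outer(plus, plus)             # pure state |+><+|
rho_mixture = 0.5 * (np.outer(ket0, ket0) + np.outer(ket1, ket1))  # classical 50/50

def probabilities(rho, angle):
    """Outcome probabilities when measuring in a basis rotated by `angle`."""
    e1 = np.array([np.cos(angle), np.sin(angle)])
    e2 = np.array([-np.sin(angle), np.cos(angle)])
    return e1 @ rho @ e1, e2 @ rho @ e2

for angle in (0.0, np.pi / 4):
    print(f"basis angle {angle:.2f}:",
          "superposition", np.round(probabilities(rho_superposition, angle), 3),
          "| mixture", np.round(probabilities(rho_mixture, angle), 3))
```

For a real cat the rotated-basis measurement is hopelessly out of reach, which is why the ensemble reading works fine in practice; the point is only that "sometimes alive, sometimes dead" does not by itself capture what a superposition is.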
Of course we will try to continue the research, but I think you are missing the point. I have never had funding for myself for this research. I have been working on this in my free time, which doesn't work well for starters because not much of my time is free. My only collaborator will in the soon future move on to work on a different project just because he needs an income. This isn't going to help. You asked for news. The news is it's unlikely there'll be any news on this unless someone or some institution puts money into it. And let me be clear that I am NOT talking about money for myself. I'll be fine. This isn't the issue. The issue is that this research doesn't get done. DeleteThanks for being open about the sad news. You are right that I thought you were talking about funding for yourself. DeleteEven with money, it will be hard to find someone who will construct the better superdetermistic model you and Sandro had in mind, because nobody else can really know what you two had in mind. I also dream of finding a model, but I don't care about whether it will be deterministic or local. What I have in mind is something along the lines of Jarek Duda's maximum entropy random walk or Arial Caticha's entropic dynamics (or even Wetterich's unfinished ...). The fascinating point for me is that accepting superdeterminism opens the prospect of finding an actual "low dimensional" model whose statistical behavior gets described by the mathematics of quantum mechanics. Hi Jakito, DeleteThanks for your understanding. I actually think in the end it'll turn out to be some kind of entropy maximization principle. The question is "just" to find the right measure. I wish you good luck with your research, Sabine Why don't you apply for some funding from the Templeton Foundation, Dr. H.? DeleteIt'd be more worthwhile if some of the money were diverted to your project and away from Luke Barnes' booming local pie shop. I mean think about it. They've given that numbhead over half a million dollars to play on his computer and eat pizza. According to superdeterminism, is the meassurement settings what determines the interference pattern in the doble slit experiment and in the delayed-choice quantum eraser? ReplyDeleteThe measurement setting at the time of measurement, yes. DeleteDavid, the interference pattern is a consequence of the interaction between the incoming particle and the experimental setup (slitted barrier, screen, etc.). In the case of electrons or photons it will be an electromagnetic interaction. DeleteThe EM interaction is long-ranged so it always takes place, not only when a "direct contact" is realized. Superdeterminism is a natural consequence of this fact. The instrument state is part of the experiment and it is to be expected to influence the outcome. Why is the cat not an observer? ReplyDeleteI think because it's the 'particle' in the experiment, and that there's nothing it can do to change the setup or how the experiment plays out. DeleteOr another way of looking at it is that the thought experiment is not concerned with what the cat observes but what an external observer observes. DeleteFor instance, in the Many Worlds Interpretation, in some worlds the cat is alive and in some it is dead; the external observer is in those worlds also but doesn't know which one. The cat has no conundrum, if it is observing anything it is still alive. The external observer does. 
In some interpretations the position the external observer is in is considered a quantum superposition of different realities at the same time, and the math that was developed to predict quantum events implies that. However, there might be better math, yet to be developed, which could get the interpretation back to one reality at a time, which people might prefer. One might consider Dr. S's thought experiment as a challenge: don't like to think of the cat as a superposition of dead and alive? Then come up with a another theory that works better; or else stop worrying about it and accept the math for government work and assume there's a loophole somewhere. In the cat/box experiment, a process is defined that is comprised of multiple steps. If each of these steps has the potential of producing consequences to the observer, such as being poisoned when the poison was released or being killed by a massive amount of radiation when the radioactive trigger was activated then when the observer opens the box and sees the dead cat, the observer might reject the results of the experiment as unrealistic ReplyDeleteand invalid because the poison and the radiation effects have been dissipated when the box was in the state of superposition. In that state, the poison or the radiation cannot affect the observer. The observer must understand that any dissipative process that occurs and completes under the state of superposition does not impact reality. It is as if the poison and the radiation never existed. The only observed result of the experiment is the dead cat. Superdeterminism as a local hidden variable theory does seem to force us to abandon the idea that measurement outcome do objectively refer to the state of the natural world. If there are local HVs then we are forced to give up on the idea of objective measurements. Two observers, Wigner's friends who are observers that happen to obey the Schrodinger equation, can for an EPR pair report different outcomes to the outside observer who is "Wigner." ReplyDeleteI doubt it is at all decidable whether nature has a preference here. The superdeterminism may be something not that nature imposes, but rather what the analyst chooses. This comment has been removed by the author. DeleteThe superdeterminism has the outcome entirely based on the set up. The problem is that the outcome(s) is or are ambiguous. The two Wigner friends report different results. This has the effect of at least weakening any definition of reality or objectivity. DeleteAs I see it quantum interpretations have the effect of gathering the "dirt" of unknowing in different places. You can hide it here or there, but ultimately it is under some carpet. I sometime like Bohm's pilot wave analogue, where this is a sort of fluid. True to a fluid it is incompressible; squish it "here" and it squeezes out "there." This comment has been removed by the author. DeleteThis comment has been removed by the author. DeleteThis comment has been removed by the author. DeleteSorry, all my above comments are inherently wrong on more than just stupidity level. Not sure why this question caught up and how to lead it to the closure now (keeps bugging!). So removed all that nonsense. DeleteThe main import that I attempted to convey was that I didn't catch an inconsistency in QM and SD for combining states in particular. But that says nothing. Also checked Sabine's paper and it mentions: "Superdeterministic theories are not classical in any meaningful sense of the word. 
They can, and frequently do, employ the mathematical apparatus of quantum theory, as with states of the system being vectors in Hilbert spaces and observables being defined by hermitian operators acting on the states, some of which are non-commuting. Importantly, this means that superdeterministic theories can contain entangled states the same way that quantum mechanics does, and they also reproduce the uncertainty principle for conjugated variables." "...In particular, θ might describe the settings of multiple, spacially separated, detectors, as we commonly have in EPR-type experiments." But no further specific details about that. More of interesting excerpts. "Superdeterminism, therefore, is not an interpretation of quantum mechanics. It is a more fundamental description of nature, and Psi-epistemic not because it is non-realist, but because the wave-function (“Psi”) of quantum mechanics is an average description, hence epistemic and not ontic." "I have also repeatedly encountered physicists who either praise superdeterminism for being a realist interpretation of quantum mechanics or who belittle it for the same reason. As someone who is not a realist themselves, I am offended by both positions. Superdeterminism is an approach to scientific modeling. We use it to describe observations. (Or at least we try to.) Whether you believe that there is something called “reality” which truly is this way or another is irrelevant for the model’s scientific success. Realism is a philosophical stance. There is nothing in science itself that can tell you whether the mathematical structures in our models are real or whether they merely describe observations. It also doesn’t matter." "If one does not know the hidden variables, then one can merely make a probabilistic prediction based on the measurement settings. This probabilistic description is the one we normally get in quantum mechanics. It must be emphasized that λ is not necessarily a property of the prepared state. It is not even necessarily localized in any meaningful sense. It is really any kind of information that allows one to make a prediction. The term “hidden” merely means that these variables do not explicitly appear in standard quantum mechanics. There is no reason why these variables should be per se unmeasurable. In fact, since they determine the measurement outcome, one can use the measurement outcome to make inferences about the hidden variables, at least if they are computable." Ok, now it's reached some closure and I can relax :-) Have a nice day! *Thumbs up* :) DeleteReality just means that what a measurement produces is a report on the state of a system. When the two friends make conflicting reports the is a violation of the reality principle. DeleteBut that is so irrespective of the context. SD just describes the context (attempts to), or how the outcomes were received. DeleteIt doesn't guarantee that the outcomes will be perfectly known, unless we can perfectly know the settings (and the rest). So, it's like shifting the perspective what to work with, or where to look for the 'dirt'. But I may not understand what you mean. Addendum: In my last comment, I asserted that if nano-scale “Boltzmann observers” exist, they should enable causal time flow and classical states ReplyDeletewithinotherwise quantum systems. 
Such systems would maintain and evolve internal states even while temporarily isolated from the universe’s overall classical causality network.

The most straightforward test for quantum-isolated time flows would be unstable molecules that display matter-wave interference even while reconfiguring internally. The reconfigurations would need to be sufficiently stable to qualify as classical and would need to occur without radiating photons that break quantum isolation. The decay rate would need to be consequential over the matter-wave travel time in the interferometer. Recent methods have demonstrated matter-wave interference for organic molecules containing thousands of atoms [1], making it likely that a molecule capable of such self-contained instabilities could be found or designed. The trick would be to use a sufficiently large molecule for these two states to appear as distinct classical states instead of as a quantum superposition. Stereoisomers, bonding isomers, and rotational isomers are possibilities. One of the simplest possibilities I can think of offhand (no numeric analysis) would be ethane with two substitutions on one carbon and one on the other, launched with the single substitution on the first atom nestled between the substitution pair on the other atom. Bulky substitution atoms would then favor decay by rotation of the single substitution ±120° to make it adjacent to only one of the pair on the other carbon. I am myself fond of Peierls instabilities in carbon loops — think of losing benzene bonding hybridization if the rings get too large — but those likely have too-high energy barriers and require unnecessarily large molecules to make classical. The goal is simple: show that even if a molecule reconfigures itself classically while undergoing interferometry, it still interferes.

[1] Fein, Y. Y.; Geyer, P.; Zwick, P.; Kiałka, F.; Pedalino, S.; Mayor, M.; Gerlich, S. & Arndt, M. Quantum superposition of molecules beyond 25 kDa. Nature Physics, 2019, 15, 1242–1245. https://www.nature.com/articles/s41567-019-0663-9

OT: Sabine, I would be highly interested in a book review of "SHELL BEACH: The search for the final theory" by Jesper Møller Grimstrup. Also, I would be very eager to hear your comments on the Quantum Holonomy Theory, partly developed by the author. Keep up the great work, thanks for making physics understandable for the non-physicist! Best greetings from Rheinland, Jonas

Well, I lean to superdeterminism. I see no physics evidence that rules it out, and using some kind of argument from free will or other mental models just doesn't cut it for me. That's mixing levels of explanation really wildly. Superdeterminism just seems more likely to me in some way (aesthetic or metaphysical perhaps?) but I'm happy to be proved wrong with actual evidence. I think we need better basic physics to resolve this question, so we will probably be wondering what is behind the curtain for some time.

I quite agree. Superdeterminism deserves far more funding for experiments than it presently gets.

Sabine, determinism is not local. Chance is local. Free will is local.

"... one of the following three assumptions must be wrong:
1. No Superdeterminism.
2. Measurements have definite outcomes.
3. No spooky action at a distance."
I find it acceptable if both 1. and 3. are wrong.
However, I interpret "action at a distance" merely as "any nonlocal effect, including nonlocal randomness" and "Superdeterminism" merely as "violate Statistical Independence". So I don't insist on determinism. I'm not sure how Sabine intends those to be interpreted, especially since she wrote the following:

A superdeterministic theory is a hidden variables theory that solves the measurement problem and
a) reproduces quantum mechanics on the average ("psi-epistemic"),
b) is deterministic,
c) is local in the sense of not having "spooky action at a distance".
It follows from this that the theory must violate Statistical Independence and be non-linear. (Need I say it follows that it reproduces quantum mechanics on the average?)

And how would quantum physics describe the fact that the cat dies of a heart attack before the quantum effect that supposedly kills the cat occurs?

Stochastics!?

Sabine,

Schrödinger’s cat and Wigner’s friend thought experiments are based on the assumption that it is possible, in principle, to isolate a macroscopic object from the external world using some sort of a box. This is absurd, because such a box would effectively “delete” the inside from the universe. A charge placed inside would not interact with the outside charges and a mass placed inside would not interact with the outside masses. Such a box would violate the mass and charge conservation laws, so, without a fundamental change in the accepted laws of physics, it cannot exist.

Let’s assume that a dead cat does not move, while a live cat does. There is no box that can stop you from detecting this by a measurement of the cat’s gravitational field from outside. The only uncertainty regarding the cat’s position/momentum is the one given by Heisenberg’s principle. The large mass of the cat makes the uncertainty practically 0. So, the cat cannot be isolated from the outside, there is no uncertainty regarding what the cat is doing, hence no superposition. Box or no box, the state of the cat is an objective fact.

The simple argument above shows that there will never be a superposition of macroscopic objects. It’s a fundamental limitation, not a technological one. The interference of large molecules is still based on the uncertainty relations, not on magical insulating boxes, so those experiments are not a counterexample to my argument.

Andrei, hello, the trillions of particles that make up the Cat are what determine the fundamental state of the cat; even if he is totally isolated from the world; the carbon atom that is in a protein inside the nucleus of a cell of a hind paw of the cat is not at the same time in the paw and in the ear; it is trapped in a biomolecular system to which all the molecules of the system are subordinate. Call that mutual interference or bound states or whatever, but the classical world is the final result of quantum physics. In other words, there are millions of known or unknown quantum equations in the system called cat; we do not know how to solve those equations, but we are sure that the end result is a cat that meows, and the property called life belongs to a biological macrosystem. Right now you and I are being bombarded by tons of radiation from different sources and it does not cause our death; a quantum event by itself does not directly cause the death of anything; you need another organized macroscopic system to amplify it, which breaks the purity of the proposition.
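To make the three-way choice quoted a few comments above more concrete, here is a short Monte Carlo sketch. It is a generic, textbook-style CHSH setup with an arbitrarily chosen hidden-variable strategy, not a model from Sabine's papers. The model is local and deterministic, and the hidden variable is drawn independently of the detector settings; that independence is exactly the Statistical Independence assumption which superdeterminism gives up. Such a model can never push the CHSH combination of correlations above 2, whereas quantum mechanics predicts about 2.83:

```python
import numpy as np

rng = np.random.default_rng(0)
N_PAIRS = 200_000

def correlation(a, b):
    """Correlation E(a, b) in a local, deterministic hidden-variable model.

    Each pair carries a hidden angle lam drawn INDEPENDENTLY of the settings
    a and b; that independence is the Statistical Independence assumption
    which superdeterminism drops. Outcomes are fixed functions of (setting, lam).
    """
    lam = rng.uniform(0.0, 2.0 * np.pi, N_PAIRS)
    A = np.sign(np.cos(a - lam))    # +/-1 outcome at detector 1
    B = -np.sign(np.cos(b - lam))   # +/-1 outcome at detector 2
    return float(np.mean(A * B))

# Standard CHSH measurement settings.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

S = (correlation(a, b) - correlation(a, b_prime)
     + correlation(a_prime, b) + correlation(a_prime, b_prime))

print(f"local deterministic model: |S| = {abs(S):.3f}  (can never exceed 2)")
print(f"quantum mechanics:         |S| = {2 * np.sqrt(2):.3f}")
```

Allowing the distribution of lam to depend on the settings a and b is, mathematically, all that "violating Statistical Independence" means; once that is permitted, the derivation of the bound of 2 no longer goes through.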
DeleteLuis, Delete"Andrei, hello, the trillions of particles that make up the Cat are what determine the fundamental state of the cat; even if he is totally isolated from the world;" Can we stop here from a moment? Do you agree with my argument that it is impossible to isolate the cat from the world? Hello Andrei, I see the cat experiment in a didactic way, and from that point of view it is not very different from the particle that travels with two states that coexist and only when we observe it one of them is defined, but in the classical world those Two states exclude themselves, if I insist on imposing quantum reality in all its expression on classical reality, it would fall into contradictions. DeleteLuis, DeleteI think we need to be careful here. QM does not assert that a particle can have " two states that coexist". In general, QM does not make any claim in regards to what exist. As far as we know superposed states could represent just a mathematical way to arrive at correct predictions. The particle could still be in only one state, we just do not know that state. So, there is no explicit contradiction between classical and quantum physics. The main idea is the two friends obey schrodinger equation. The box just means their quantum states are coherent and unobserved until the box is opened. DeleteLawrence Crowell, Delete"The box just means their quantum states are coherent and unobserved until the box is opened." My argument above proves that such a box cannot possibly exist, their quantum states cannot remain unobserved so they will not obey schrodinger equation. It's as simple as that. All objects in the universe are "measured" continuously because of their long-range EM and gravitational interactions with each other. The only uncertainty is given by Heisenberg's principle. That means that you cannot put macroscopic objects like cats, humans or moons in superposed states that differ in any significant ways. Yes, you can have superposed cats that are different in position by some fraction of a Planck unit or so, as predicted by the uncertainty principle for such a big object, but this is all you will ever get. Live/dead cat superpositions cannot possibly be prepared, not even in principle. The Moon is there when you don't look, because your body feels its gravity. You measure the position of the Moon by simply existing in this universe and having a non-zero mass. The only way to not measure the Moon is to make its mass vanish. This cannot be done because mass/energy is conserved. Einstein was right. Bohr was wrong. A box that makes the mass inside vanish is an absurd concept. Cat/Wigner experiments are nonsense. Hello Sabine, ReplyDeleteI need about 2ms to drop 3rd, to delete the 3rd point. If I drop the 2nd point, then I can stop scientific work too. To me it sounds like this. Superdeterminism sounds to me like a rescue mission for locale thinking, for local theories. Hmm. When you look at the night sky, you see stars and planets. Since they are so far away, they appear to us as points. And they move continuously, not making jumps. With these two assumptions Newton (and Leibniz) could develop the differential and integral calculus. And with these again one can calculate very well the future position of the planets in the solar system. More than the position is also not necessary. A planet always remains a planet, it does not change for example its mass. One can explain this - I think - to a 12 year old boy or girl very well. In microphysics things are different. 
With the annihilation of electron and positron one has before two objects with - mass - electric charge - spin 1/2 - a velocity v < c After the annihilation one has 2 or 3 photons with - no mass - no electric charge - spin 1 - a velocity v = c And although everything changes here, we use a mathematics and a way of thinking, which was developed 350 years ago, to calculate the position of planets. That's why I prefere to drop the third point. In "catching light" you cite Einstein with his "striving for clear understanding". I like him too, but more this: "We cannot solve our problems with the same thinking we used when we created them." best regards Stefan Sabine: Do ya mean, that exactly one or at least one of the three statements must be wrong`? ReplyDeleteAt least one. DeleteAddendum #2: Regarding candidate organic molecules for quantum-isolated classical causality experiments, please note that the rotational ethane substitution example is incomplete because it does not include a Boltzmann observer region. ReplyDeleteBy definition, a Boltzmann observer region must have sufficient thermal-like configuration complexity to accept and statistically shred one component of an otherwise simple wave function, such as half of a phonon-mediated linear momentum pair. Without that, the system remains statistically reversible and thus quantum. A simple ethane substitution molecule such as C[H,Cl2]-C[H2,Cl] lacks such a region and remains quantum. I previously mentioned Peierls instability [1] in large polyacetylene rings (note that benzene is a 3-polyacetylene ring) because it is the most straightforward and best-analyzed molecular self-observing mechanism of which I am aware. The cumulative effects of lattice periodicity plus electron fermionic repulsion in momentum space create a statistical incentive for the electron band-charge concentrations to collapse into alternating charge densities that mimic the double bonds of simpler molecules. The resulting bond isomers (e.g., think of the two possible drawings of benzene withouthybridization) then persist in a quite classical fashion. Thisismolecular self-observation, just by a different name! And it was beautifully quantified by the remarkable Peierls almost 90 years ago.Another approach to including a Boltzmann observer in a molecule is to add a side group with high configuration complexity and easy transitions between those configurations. Such a Boltzman observer group would then stabilize quantum states in other regions of the same molecule by almost-thermally shredding subsets of quantum number wave functions conveyed from those regions. Such observer groups must be messy enough to exhibit thermal-like complexity and close enough to interact by exchanging quantum numbers. Many biologically active proteins appear to possess such domains. Borgia et al. [2] note, for example, that “it has become increasingly clear that many proteins involved in cellular interactions are fully or partially unstructured under physiological conditions.” The existence of experimentally proven examples of built-in protein randomness raises an intriguing possibility: Are chemically active proteins such as enzymes configuredto include Boltzmann observers? Such observer groups might help explain how enzymes convert reactants into new molecules via reaction paths that are otherwise astronomically unlikely.Perhaps the real trick in biology is not quantum computation but quantum observation. 
Enzymes enable desirable low-probability classical outcomes by first energetically encouragingthose outcomes with their hand-and-glove template subregions. But more subtly, they may also thenlock inthat result by using a nearby Boltzmann observer region to shred less desirable quantum outcomes. The Boltzmann observer group would require some physical separation from the template groups since its randomness would otherwise disrupt the template. All that is needed is for the observer group to be close enough to “see” the reactants via one or more quantum numbers (e.g., phonon momentum) conveyed intra-molecule from the quantum reactants.If that is what is going on, Boltzmann observation likely plays a similar role in ribosomal protein folding. As long as the ribosomes are close enough to “watch over” the protein as it self-folds, they would behave as large, independent Boltzmann observers that help lock in the desired folding outcomes. ----- [1] https://en.wikipedia.org/wiki/Peierls_transition [2] Borgia, A.; Borgia, M. B.; Bugge, K.; Kissling, V. M.; Heidarsson, P. O.; Fernandes, C. B.; Sottini, A.; Soranno, A.; Buholzer, K. J.; Nettels, D. & others. Extreme disorder in an ultrahigh-affinity protein complex. Nature, 2018. https://www.nature.com/articles/nature25762 I would give up option 2. Measurements can have outcomes predicted by QM, before the measurement has been taken, but they are not assured to be accurate. The math is solid but the answer is variable. ReplyDeleteMay one possibly be able to be aware of mental calculations one makes if one is aware of what processes underlie our ways of interacting with and reacting to people and circumstances to a higher degree than we usually engage in, thus getting glimpses of our brains making calculations? I've done a fair few personal development courses delving into the machinery/programming that drives us, and how to act and react more effectively with improved self-awareness. ReplyDeleteResearching super-determinism seems impossible if super-determinism were true. ReplyDeleteI wouldn't go that far, however I find it pretty bizarre that the person telling us this also doesn't have free will nor access to the inner workings of their own neurons, but they've managed to work Superdeterminism out and here we are. :) DeleteNonlin.org, DeleteSuperdeterminism is a type of physical theory where certain objects that we happen to call "sources" and "detectors" have correlated states. Physics is full of examples where different objects have correlated states. Orbiting stars have correlated states, planets in a planetary system have correlated states, stars in a galaxy have correlated states, electrons in an atom have correlated states, synchronized clocks have correlated states. There is nothing peculiar about it. Such correlations are caused by long-ranged interactions between those objects. Particle sources and particle detectors are made of charged, massive particles so one would expect, given the laws of physics we have (gravity and electromagnetism), some degree of correlation between their states. There is nothing impossible about studying the degree of correlation between the states of such objects. It has been done for other systems before. Thanks for the shout out to Calgary!! ReplyDeleteOur city is between the prairies and the Rocky Mountains. I've always felt it is a Super Position to be in! 
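For a sense of scale regarding the 25 kDa interference results mentioned earlier in this thread (Fein et al., Nature Physics 2019), the relevant de Broglie wavelength is tiny. The arithmetic below is a back-of-the-envelope estimate; the beam velocity is an assumed, typical order of magnitude rather than a figure taken from the paper:

```python
# Back-of-the-envelope de Broglie wavelength, lambda = h / (m v).
h = 6.626e-34           # Planck constant in J*s
amu = 1.661e-27         # atomic mass unit in kg
mass = 25_000 * amu     # a ~25 kDa molecule
v = 300.0               # assumed beam velocity in m/s (typical order of magnitude)

wavelength = h / (mass * v)
print(f"de Broglie wavelength ~ {wavelength:.1e} m "
      f"(~{wavelength * 1e15:.0f} femtometres)")
```

A wavelength of tens of femtometres is vastly smaller than the molecule itself, which is why such experiments rely on carefully engineered near-field interferometry rather than a literal double slit, and why scaling up toward anything remotely cat-sized runs into rapidly growing demands on isolation and coherence.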
I need someone to explain this to me- in the EPR experiment when the spin of a particle is measured, the other is inverted, it is said the two states coexist until it is measured, however, the same happens with the electric charge, I do not see if it is a positron or an electron until I measure it; but I don't know anyone who says that the particle is positron and electron at the same time until it is measured. It seems to me that the two properties are self-exclusive just like the dead cat and the live cat. It seems that the whole universe, including the space that surrounds the particles, knows which is positron and which is electron. ReplyDeleteThis comment has been removed by the author. DeleteJay10:36 AM, March 03, 2021 DeleteAnd superdeterminism allows the possibility of compelled, infrequent sapient life, thus trivially refuting SAP. A trivial refutation that you won't accept because you are a liar. This comment has been removed by the author. DeleteThis comment has been removed by the author. DeleteJay, DeleteWhat comment? Steven's? What's he done now, called you a liar? Steven, come here, apologize to Jay. Now please play nicely. Jay10:24 PM, March 03, 2021 DeleteYou stated very clearly that if infrequent sapient life is observed then it follows that life is not compelled to exist in a homogeneous universe. But that is not necessarily the case as infrequent sapient life could be observed in a superdeterministic universe based on current empirical evidence. Every single event is "compelled" to happen in a superdeterministic universe, so to suggest there is some question mark as to whether we can consider infrequent sapient life to have been compelled in a superdeterministic universe is, a fortiori, nonsense. You are trying to deny that synonymous terms are synonymous. You also stated very clearly that you are the kind of person who admits when you are wrong, and suggested that though I considered myself to be such a person I was possibly deluding myself. And yet, here we are. You have been proved wrong trivially, and instead of admitting you are wrong, you are trying to suggest that if something is superdetermined it may not be compelled. Thus,you have been proven trivially wrong and a liar. It is not an insult, it is a fact. You are a liar. Jay1:13 PM, March 03, 2021 Delete"@SH, is that the kind of comment you want on your blog?" No,Dr. H, wants the blog comments filled with lies from people like you. Explain how life being superdetermined to happen does not constitute life being "compelled" to happen (Christ, what nonsense we get dragged into discussing when talking to people who have studied some "philosophy") or apologise for your lies like an adult and then we can move on. Steven, DeleteI don't think most of these people are deliberately lying. They have just managed to convince themselves of something that's internally inconsistent and don't notice. The accusation of lying assumes an intent of deceit and I don't think that's what's going on, so I think it would be appropriate if you could tone it down a little. Sabine Hossenfelder3:37 AM, March 04, 2021 DeleteDr. H., you know as well as I do Jay would rather lie about seeing a distinction between "compelled" life and "superdetermined" life than admit he is trivially wrong. And now he cries to teacher about how his delicate sensibilities have been hurt at accusations of lying? That itself is a dishonest stance. If people don't want to be accused of being liars they shouldn't lie. Jay is over 12 years old. 
He needs to learn to admit when he's wrong. You've done it on here. I've done it. Now it's Jay's time to be an adult. Luis, Delete"in the EPR experiment when the spin of a particle is measured, the other is inverted, it is said the two states coexist until it is measured" Some physicists say that, some do not. However, if one wants to have a local theory one needs to accept that the particles' spins have been decided at the time of emission. If particle A was measured as spin up on Z, it was spin up on Z even before measurement, while the particle B was spin down on Z even before measurement. If you want the particles to be in a mixtures of states, whatever that might mean, you need a non-local (instantaneous) signal traveling from one detector to the other. Without such a signal there is no way to reproduce the experimentally observed perfect anticorrelation. For some reason most physicists to not grasp this simple argument and continue to make the false claim that a fundamental indeterminism can save locality. In fact it's exactly the opposite. Only determinism can save locality. So, if you want locality, which is an established principle of physics, you need to interpret superpositions as reflecting our uncertainty in regards to the true, objective state of the particles, not as a real combination of incompatible states. The cat is either dead or alive, the charge is either positive or negative and the spin is either up or down. Hi Andrei, my question is why the same question is not asked about the electric charge in the same experiment? DeleteLuis, DeleteTo be honest I didn't hear about experiments with electron-positron pairs being done. It's not easy to prepare such pairs. Can you provide some context here? Who is not asking about electric charge? What experiment are we speaking about? Hello Luis, DeleteIn electron positron pair creation, one gets an electron and a positron from birth. This can be seen by the rotation of the spiral with a vertical magnetic field. But with two entangled photons the polarization is undetermined until the measurement. And this can be many km away from the point of creation. The same happens for particles and spin. Many greetings Stefan Stefan Freundt, Delete"with two entangled photons the polarization is undetermined until the measurement" If physics is local it has to be determined since the emission, prior to measurement, as the EPR argument proves. Sure, you might have undetermined polarizations and have a real, non-local collapse "create" the correlated outcomes but it is by no means the most reasonable interpretation. Locality is pretty well established at both theoretical and experimental level. Deleteok Stefan, a gamma quantum decomposes into a positron and an electron, if you measure the spin of the electron and it is upwards, then that of the positron is downwards, or vice versa, that's what they say, they also say that until it is not make the measurement the orientation of the spin is not known and also the two orientations are linked; But the same thing happens with the electric charge, it is not known what it is until it interacts with the meter, but nobody believes that the electric charge is undefined until it is measured, it is assumed that it is given from the moment it decomposes . As I vaguely understand it, the force due to charge is due to the emission and absorption of photons, so charged particles are interacting with other charged particles constantly (trading photons), which I suppose acts as a measurement (of charge polarity). 
DeleteWe don't really know if gravitons exist or in what frequency they are emitted and absorbed if they do exist, or what information aside from mass they might transfer, so it is not clear (to me, at least) that they could act as measurements in the same way. Luis, Deleteelectric charge interacts with photons and because there are photons everywhere, there is also immediately an interaction and therefore a measurement. So it is determined right at the beginning who is electron and who is positron. Stefan Andrei, Deleteinteresting argumentation. You say something like, "Because local theories have proven themselves in the macroscopic world it is good to apply this way of thinking to the microscopic world as well." Hmm. If the polarization of entangled photons is already fixed at emission, then you need superdeterminism. (Sabine - correct me if I am wrong at this point). I don't like superdeterminism. In my opinion, superdeterminism is too complicated for the objects. That's why I prefer "non-local" thinking and spooky remote action. I am well aware of the difficulties with remote action. By the way: for me the main question is: how to create objects that are stable and flexible at the same time.... Stefan HiStefan, then if the electric charge is always determined, and the charge in turn is an important part of that framework that makes up the whole system called cat; Why is the quantum state more determining than the electric charge in the state of the cat ?; That is, can I take a quantum condition of a particle to subordinate a whole system of trillions of particles to this, overlooking all the energy and dynamics of the system itself? My position is that there is a limit to do that, after that limit enters to dominate other properties; so the propositions must be within those limits; Otherwise, you could take a nervous function of an ant and determine with it all human behavior, or, skip all evolution and "give birth" directly from the big bang a human brain DeleteThe thought experiment postulates that radioactive decay, or some other phenomenon involving a single atom, triggers the device which kills the cat. So that quantum superposition of that single quantum event results in a macroscopic superposition. (Nobody is saying the cat disappears, all its particles are still there, but either alive or dead.) DeleteHello Luis, DeleteI probably don't really understand your question very well. But I can tell you some little stories. Then know how I resolve all the weirdness of quantum mechanics and with some luck it will answer your question. 1. Logic Many years ago I attended a school with extended mathematics classes. In the 9th grade we had logic. For half a year we had logic. It was boring. There is nothing more boring than mathematical logic. Stop There is one point that is interesting: It is possible to start with wrong assumptions and after absolutely correct transformations you get a correct, true result. 2. Our planetary system Some years ago scientists were convinced the earth would be in the center of the world and everything would revolve around the earth. If you spend some time observing the night sky with the naked eye, you can confirm this. you can confirm this. For example the Mars stands after about 2 years again at the same position - compared with the fixed stars. This idea of the world is easy to convey and agrees well with the observations. That is why scientists followed it for about 1500 years. All difficulties could be solved quickly. 
For example, Mars slows down its movement around the Earth, then moves in the opposite direction for a few weeks, to stop again and move back in the original direction. You can find very nice pictures on the internet with the keyword "retrograde Mars". 1000 years ago a student may have asked why Mars changes its motion. Well, the answer is simple, "Angels keep Mars on its orbit with their flapping wings." Well, since Copernicus, we know better. Everything revolves around the sun. The retrograde motion of Mars is an illusory motion. And the student's question simply disappears, the question decays. 3. Micro physics Modern physics with quantum theory always reminds me of the Ptolemaic planetary system (see above). One can calculate many things with impressive accuracy. But one cannot say how the objects make this happen. At that time one could not answer why Mars is retrograde for some weeks. And today one cannot answer many questions of quantum mechanics. Or in short: Microphysics is a feast for engineers and a nightmare for scientists. For me quantum mechanics is so suitable to understand small objects, as the Ptolemaic world system is for the motion of the planets. Everything depends on a better theory. Many greetings Stefan Ok Stefan, I'm going to put it this way; suppose that instead of releasing a poison, the system releases the virus Covid 19, in what state is the cat now? He may have Covid and not die and even not have it and die from the stress caused by being locked in the box. I mean, the system called Cat and Poisoning Device is more complex than a binary quantum system; I accept that until the first stage, before there is amplification, there is a correlation with the quantum phenomenon; but after that the system has a very complex evolution, and there is not even another quantum process that can reverse it or return it to the initial state. You say, why so careful? , if the classical world comes from the quantum; so this experiment instead of explaining it distorts it DeleteIt's embarrassing and depressing that Schrödinger's infamous cat is still causing debates. For Schrödinger the idea of a cat that is both dead and alive was clearly ridiculous. Unfortunately he didn't carry this ReplyDeletereductio ad absurdumto its logical conclusion. He just couldn't give up the idea that his time-dependent equation was the full story, and that the wave function describes anindividualsystem. (Interestingly the cat paradox arose from correspondence with Einstein, who favoured a statistical interpretation.)The wave function acquires physical meaning for example in the theory of the Josephson effect, where it describes the collective behaviour of Cooper pairs. (See the last chapter of the Feynman lectures.) Or in superfluid helium. Or in the more familiar context of a beam of polarized light. The Stokes parameters describe correlations between the electric field components, and it is just a manner of speaking to confer these statistical regularities to individual "photons". It is for a reason that the orthodox interpretation insists that the polarization of a photon is undefined until measured. The essence of a tsunami is not expressed in a molecule of water. > "One especially peculiar aspect of quantum mechanics is that it forces you to accept the existence of superpositions." > "We know this experimentally." Not only quantum theory requires interpretation. Experiments even more so. 
You ought to step back far enough to see that many implicit assumptions are made already in describing the experiments, when we talk about "quantum systems" having "properties"or not (or only in "uncertain" ways). That the double slit experiment "proves" that an electron passes through both slits is an over-interpretation. Bohmian mechanics provides a counter-example. Regarding the three hypotheses of which one must be given up, I would add that there are many more that were left out and left implicit. But if you are interested in my position, I'd choose no. 3 ("No spooky action at a distance") as the easiest one to give up. Although action needn't be spooky if you allow particles/waves to travel backwards as well as forwards in time (as is the case in QFT). A part of what causes people to go into fits is that in quantum mechanics the logical operations AND and OR are not distributive. They do not obey the distributive property of Boolean logic. So while a cat may be alive and dead this AND is a quantum form of AND. The fate of the cat is equivalently alive or dead, for OR a classical meaning of OR. DeleteSince it is aphoristically well known that a cat has 9 lives, why is Schrodinger's cat even interesting? ReplyDeleteChris, what about the superposition of dead/alive states? Chris, it's interesting because it's thought-provoking, and raises interesting questions as people encounter it. If the cat has 9 lives, does the pre-unboxing superposition of dead/alive states use up one or two lives at once, therefore rendering a cat useless after four or five iterations of the experiment? DeleteIgnore second 'Chris', I didn't catch that before posting. DeleteA new way to have fun at parties: tell people "Free Will doesn't exist!", attempt to explain why, then watch people's heads explode. ReplyDeleteRandom thought: Could 'spooky action at a distance' possibly come from particles being correlated together superdeterministically during the Big Bang or afterwards? ReplyDeleteAndrei, ReplyDeleteThat correlation better be 100%. Else what's the point of superdeterminism? And if all two objects are 100% correlated, then everything is 100% correlated including the researcher and the test equipment. Hence the impossibility of research. Nonlin.org1:50 AM, March 05, 2021 DeleteHow do you know research that uncovers the superdeterministic nature of the universe cannot be superdetermined? I fail to follow your logic. Maybe physicists were bound to happen, like gravity. Nonlin.org, ReplyDelete"That correlation better be 100%" Indeed. " if all two objects are 100% correlated..." Correlations exist between certain physical parameters. The accelerations of 2 orbiting stars are 100% correlated (they point to their common center of mass) but their emission spectra could be completely uncorrelated. So, it's not that everything is 100% correlated, but some physical quantities are. Let's make a more detailed analysis. All objects involved in an EPR/Bell test are ultimately just groups of charged, massive particles (electrons and nuclei). All those particles interact according to the laws of electromagnetism and general relativity. The simple fact that they have to obey those laws means that the evolution is not random. If the laws of EM and gravity are fundamentally deterministic (as the classical theories are) it follows that the motion of those particles is 100% correlated to each other in the sense that no particle could move differently than it does. 
Now, we do not observe, nor can we observe (because of the uncertainty principle) the individual motion of those particles. Even if we could, we could not solve all those equations for 10^26 particles or so, so it's practically impossible to observe those correlations at the microscopic level. What we can observe is the macroscopic behavior, which is an emergent, statistical one. In order to test superdeterminism one should come up with a correct statistical treatment of the experiment. This has never been done. Bell theorem simply assumes that the microscopic correlations cancel out at the macroscopic level. That's an unproven hypothesis and there is no reason to take it seriously. In conclusion there is nothing peculiar about superdeterminism. It's just the correct way of analyzing an experiment. If you notice some macroscopic correlations you need to investigate their origin in terms of the known laws of physics at microscopic level. Nobody is doing that. Classical determinism is dead. QM killed it. If you disagree, here's the experiment that proves it: we have a double slit experiment with single photon emissions and the target area separated in 10 different sections labeled 0 to 9. Once a section is hit, it stays on (cannot detect multiple hits) Determine the output sequence? Is it 017...9? Is it 875...6? What is it? ReplyDeleteGiven this, superdeterminism seems to me just a doubling down on that which was disproven. And, if microscopic correlations do not cancel out at the macroscopic level, then that's exactly my point. Setup and researcher are caught up in the same network of perfect correlations. Basically, everything is preordained, in which case no research is possible. Come to think of it, that was the case with classical determinism as well. If you disagree, you should start by explaining what's different in superdeterminism from the old determinism. Nonlin.org, Delete"Classical determinism is dead. QM killed it." This is false. "If you disagree, here's the experiment that proves it: we have a double slit experiment with single photon emissions and the target area separated in 10 different sections labeled 0 to 9. Once a section is hit, it stays on (cannot detect multiple hits) Determine the output sequence? Is it 017...9? Is it 875...6? What is it? " Give me the position, momenta, EM field configuration and charge distribution for the source, slitted barrier and screen and a computer good enough to solve the corresponding equations of a 10^26 body system and I'll give you the prediction you want. Determinism implies that IF you know the initial state and the laws governing the system you could predict the state in the future. In practice we do not know the initial state, we may not know the fundamental laws describing the system (we have no consistent theory of EM classical or quantum) and we have no way to perform the calculations. Your experiment proves nothing. "Basically, everything is preordained, in which case no research is possible." Why? I see no contradiction between determinism and reason. In fact, reason presupposes determinism. "If you disagree, you should start by explaining what's different in superdeterminism from the old determinism." Superdeterminism is a particular type of determinism. And determinism is perfectly compatible with the ability to make research. The idea that research isn't possible if the laws of nature are deterministic is complete rubbish. 
The idea that you can "disprove" determinism because you were not able to predict a sequence of numbers is likewise nonsense. Both of these statements are trivially wrong, document that you have not thought much about the matter and have never bothered to look at the relevant literature. DeleteIt's not that I or someone else cannot predict that outcome, but that the outcome is FUNDAMENTALLY undetermined even with an ideal computer. Where the next photon hits is unknowable except statistically. DeleteYou say: "Give me the position, momenta, EM field configuration and charge distribution for the source, slitted barrier and screen and a computer good enough to solve the corresponding equations of a 10^26 body system and I'll give you the prediction you want." How? What about the uncertainty principle? And as much as you control the inputs, the interference pattern remains the same. This is totally different than the deterministic systems (hereby invalidated!) where the normal distribution of outputs can be narrowed by tightening the inputs / set-up with the theoretical conclusion that perfect inputs / set-up will result in perfect outputs (determinism). Determinism doesn't mean that output is SOMEWHAT determined by inputs (reason is possible) but that output is 100% determined by inputs (reason is impossible). You say: "Superdeterminism is a particular type of determinism." This is not an explanation. Sabine, can you link to the "relevant literature" that you claim unambiguously disproves my points? I can’t find any. Delete"It's not that I or someone else cannot predict that outcome, but that the outcome is FUNDAMENTALLY undetermined even with an ideal computer."That's the case in standard quantum mechanics, yes. But you missed my point. I was saying you'll not ever be able to figure out whether it's indeed fundamentally random or whether you just were not able to make the correct predictions because you had the wrong theory. "Sabine, can you link to the "relevant literature" that you claim unambiguously disproves my points? I can’t find any."Your "argument" is trivially wrong. No one would bother writing a paper to spell out that you can't prove something is fundamentally random. This comment has been removed by the author. DeleteThis comment has been removed by the author. DeleteNonlin.org, Delete"It's not that I or someone else cannot predict that outcome, but that the outcome is FUNDAMENTALLY undetermined even with an ideal computer." You assert that the outcome is "FUNDAMENTALLY undetermined". Assertion does not equal proof. In your previous post you claimed to have a proof. I don't see it. "Where the next photon hits is unknowable except statistically." It might be unknowable but this does not mean is not deterministic. If the initial state is unknowable so it is the final one, even if the system evolves deterministically. "What about the uncertainty principle?" Well, that's your problem. You cannot ask me to predict the outcome without giving me the initial state. "And as much as you control the inputs, the interference pattern remains the same." I think I would disagree here. If the incoming particles are perfectly focused on slit A no interference would be observed. "This is totally different than the deterministic systems (hereby invalidated!) where the normal distribution of outputs can be narrowed by tightening the inputs / set-up with the theoretical conclusion that perfect inputs / set-up will result in perfect outputs (determinism)." Nonsense. 
You do not control the charged particles in the barrier, so you do not control the EM fields acting on the incoming electron. If you shoot bullets toward a forest moving in the wind you will not be able to predict if the bullet will hit a branch just by increasing the accuracy of the rifle. You need to know where the branches are when the bullet is shot. Likewise, in order to calculate the trajectory of the electron you need to know the EM fields (produced by the electrons and nuclei in the barrier) acting on it. "Determinism doesn't mean that output is SOMEWHAT determined by inputs (reason is possible) but that output is 100% determined by inputs (reason is impossible)." Except from the "reason" part I agree. Determinism implies that for any initial state the evolution is unique. "You say: "Superdeterminism is a particular type of determinism." This is not an explanation." Superdeterminism is only defined in relation to Bell tests. A superdetermintic theory implies a correlation between the hidden variable and the states of the detectors. It has nothing to do with ones' ability to reason. Reason does not depend on Bell tests. As I have committed previously, without some determinism, research or any kind of reasoned process would be impossible, since the opposite of determinism is pure randomness in which coherent thought itself would be impossible. That seems ironclad to me. Saying that determinism makes research impossible would then rule out any possibility of research. This is not true, in my opinion. ReplyDeleteResearch is a search process, an elimination of the impossible, or what doesn't work, until what works is found. (Edison famously remarked that he had discovered about ten thousand things that did not work.) If a solution exists, and you search long enough without going in circles, you will find it. Nothing about determinism rules that out. I also think there may be a logical fallacy lurking in the concept that under determinism, everything is preordained, having to do with the semantics of ordainment. It has the semantics of compulsion, forcing things to be done in a certain way regardless of the input of those involved. In fact, under determinism we are largely doing what we want to do, following our natures, making choices without necessarily feeling any compulsion. Only in hindsight, when it is too late to change the past, can we say what we did was bound to happen. It seems to me just as true to say that we choose our actions in the present moment, rather than that they were chosen for us by initial conditions at the Big Bang.The result turns out the same either way, but I for one see myself making deterministic choices (as best I can) rather than acting as a puppet, and learning from my mistakes. With that attitude, at least, we can do research and make progress. At the end of a long ride, horses head for the barn without compulsion, knowing hay and oats are there, not with a feeling of preordainment. JimV, I give this comment a 'like'. there's a novel, 'Traveller In Black' by John Brunner, that describes a realm that's in the clutches of chaos and operates in pre-science, beings and events are magical and consequences bizarre, and the Traveller helps to bring this realm to a more stable state that resembles our world. To agree with your previous words, Determinism gives us a sensible world we can operate efficietly in. 
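To make the "research is a deterministic search" point above concrete, here is a minimal, purely illustrative sketch (not code from any commenter): a trial-and-error search driven by a seeded pseudorandom generator. Every run is bit-for-bit identical, i.e., fully determined, yet the procedure still finds the answer by keeping what works and discarding what doesn't.

```python
import random

def fitness(x):
    # Toy objective with its maximum at x = 3; stands in for "the thing we are trying to discover".
    return -(x - 3.0) ** 2

rng = random.Random(42)          # fixed seed: the whole search is deterministic
best_x = 0.0
best_f = fitness(best_x)

for _ in range(10_000):
    candidate = best_x + rng.gauss(0.0, 0.5)   # propose a small variation (trial)
    f = fitness(candidate)
    if f > best_f:                             # keep it only if it improves (error elimination)
        best_x, best_f = candidate, f

print(round(best_x, 2))   # converges to ~3.0, identically on every run
```

Changing the seed changes the path but not the character of the process: the outcome is fixed once the initial conditions are, and nothing about that prevents the search from succeeding.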
DeleteRegarding: "Schrödinger's Cat: Still Not Dead" ReplyDeleteThe certainty that both hidden processes and variables can exist under the state of superposition in a way that these hidden processes and variables cannot be observed in the real world opens up the possibility that inside the box the laws of nature are different than those that exist outside the box. The state of that voluum of space inside the box can be completely disconnected from the world outside the box. Yet the universe that develops inside the box can impose permanent consequences within our reality when the state of superposition inside the box eventually terminates but the transient effects that have occured in the universe of the box effectively did not occur. The situation that I am interested in is the development of negative energy inside a state of superposition. This situation is represented by the development of negative energy inside the box that has appeared as a hidden variable. As a hidden variable, any process catalyzed by it would not be observable while the process was ongoing and when the box was opened the unobservable process would terminate, but the permanent effects that this hidden variable had on the cat would still be realized upon observation. It is now accepted by science that the development of negative energy in our universe would start a destructive process called the Higgs catastrophe. But this universe ending process does not apply if the development of negative energy occurs under a state of superposition. In total, this set of hidden and actualized causes and effects leads to the results of negative energy appearing in the universe yet not the realization and the recognition of negative energy itself. The Exotic Vacuum Object (EVO) is a bubble of Anti-de Sitter space (AdS space) that is formed through the condensation of tachyons made available within a superconducting seeded environment. Inside the bubble of AdS space (EVO), there exists another universe that is incompatible with our universe (De Sitter space) ReplyDeleteWhen matter in our universe encounters AdS space inside the EVO, it decomposes into pure energy. Most of this energy is lost through a superposition effect produced by superconductivity of the tachyon condensate. But some survive the termination of the EVO to produce newly formed elements that form in De Sitter space from this excess energy residue. This we call transmutation of elements. In more detail, at the center of the EVO there exists a black string which is a zone of nothingness. It is this core that deconstructs matter from De Sitter space that enters the zone of nothingness into pure energy. In a recent paper - Nothing really matters https://arxiv.org/pdf/2002.01764.pdf Our unstable universe is described, but the process of total matter destruction throughout the universe as described in the paper is not totally correct as we have found through our recent experimentation with EVOs (these bubbles of nothing). The process of distruction does not seem to spread from the EVO into De Sitter space. Fortunately, the zone of nothing seems to remain confined within the bubble of Anti-de Sitter space. Sometimes superposition does not immediately set in for a second or two and a lot of subatomic particles and gamma radiation is produced. But when the engineering is right, this disruptive and unpleasant process is eliminated. Would a universe that's random cool down faster than one that's deterministic? 
ReplyDeleteRegarding: The characterization of the Bosenova as follows: ReplyDelete"A bosenova is a very small, supernova-like explosion, which can be induced in a Bose–Einstein condensate (BEC) by changing the magnetic field in which the BEC is located, so that the BEC quantum wavefunction's self-interaction becomes attractive. In the particular experiment when a bosenova was first detected, this procedure caused the BEC to implode and shrink beyond detection, and then suddenly explode. In this explosion, about half of the atoms in the condensate seem to have disappeared from the experiment altogether, remaining undetected either in the cold particle remnants or in the expanding gas cloud produced. Under current quantum theory, this characteristic of Bose–Einstein condensate remains unexplained, because the energy state of an atom near absolute zero appears to be insufficient to cause the observed implosion. However, subsequent mean-field theories have been proposed to explain the phenomenon. Although the total energy of the explosion is very small, it appears very similar to a tiny supernova, hence the term 'bosenova'" I posit that those missing atoms disappear without a trace because the Bose condensate is still coherent when the condensate explodes and it is superposition that causes the atoms to vanish to some unknown place outside of our current reality. What might happen with them there? DeleteThey might be teleported to another place and another time. Delete"That's the case in standard quantum mechanics, yes. But you missed my point. I was saying you'll not ever be able to figure out whether it's indeed fundamentally random or whether you just were not able to make the correct predictions because you had the wrong theory." ReplyDeleteYes, one cannot prove a negative. Then what about free-will-is-dead-lets-bury-it post? Furthermore, on the here and now basis, determinism is experimentally incorrect as the double slit shows. Let’s bury it! I will personally unbury determinism when we’ll have the right theory. “If the initial state is unknowable so it is the final one, even if the system evolves deterministically.” … “If the incoming particles are perfectly focused on slit A no interference would be observed.” That’s not it. QM is fundamentally different than classical mechanics. “If you shoot bullets toward a forest moving in the wind you will not be able to predict if the bullet will hit a branch just by increasing the accuracy of the rifle. “ Not what I said. Read again. Fact: in classical mechanics you WILL increase the accuracy of the output if you increase the accuracy of any of the independent inputs. As long as the rifle is independent of the wind, increasing the rifle’s accuracy does absolutely improve the accuracy of the shot. Even if dwarfed by the wind input. “Superdeterminism is only defined in relation to Bell tests. A superdetermintic theory implies a correlation between the hidden variable and the states of the detectors.” Bell test tells us about the nature of the universe so, if true, there’s nothing limited about superdeterminism. “Hidden variables” are not science. Not yet at least. Delete"Yes, one cannot prove a negative. Then what about free-will-is-dead-lets-bury-it post? "As I have repeated a seemingly endless number of times, my arguments are always based on science "for all we currently know". For all we currently know, free will doesn't exist. It's a statement about the currently accepted laws of nature, no more and not less. 
"Bell test tells us about the nature of the universe so, if true, there’s nothing limited about superdeterminism."Superdeterminism is a modification (or, you may say, completion) of quantum mechanics that replaces the wave-function collapse with a physical process. These effects are obviously limited to the cases where we now think a wave-function collapse happens. Bell tests tell us about systems on which you can do Bell tests. Sorry to insist: DeleteNothing in the "for all we currently know" forbids free will. Not when you stop presupposing determinism. Something in the "for all we currently know" (the experiment I propose) invalidates determinism. There is no way anyone can forecast - even theoretically - the number sequence. Again, "for all we currently know". As far as Bells test, a wave-function collapse seems to be happening all the time and everywhere. You might want to consider a post on superdeterminism explaining its assumptions, reach, and limitations. "There is no way anyone can forecast - even theoretically - the number sequence. Again, "for all we currently know"." DeleteRight, but as I have explained countless times, if it's random it's no will. Atoms decay randomly but we don't assign them free will because of this, do we? Look, this isn't a new argument. Free will is a logically incoherent idea. You can't have it both ways, either it's not free or it's not a will. Or, as Nietzsche put it "the best self-contradiction that has been conceived so far". You don't know that atom decay is 100% random. You just know that it's not forecastable, hence not determinism "for all we currently know". That tells you absolutely nothing about free will. DeleteRegarding "logically incoherent idea" and Nietzsche, that is pure philosophy of the unscientific kind. Which of course anyone is entitled to. Let's just be clear that said argument has nothing to do with science or the scientific method. Nonlin.org DeleteThat it's not deterministic for all we currently know is exactly what I am saying, thank you. But you are missing the point, free will would be an inconsistent and meaningless idea whether it's deterministic or not. Nietzsche's argument is unscientific insofar as that it doesn't depend on the science, but that only makes it a stronger argument. As far as we know, there is either determinism (cause and effect) or randomness, or some combination. If "free will" is meant to suggest that there is a third way for things to happen, perhaps a magic way for our "will" to overcome the laws of nature, that sort of free will is ... inconsistent with what we know of nature. DeleteWhere I think the struggle is, is to understand that while determinism implies that, like everything else which isn't just random, our choices themselves are effects which were caused by previous events, that doesn't mean we are puppets. at least, not in my view. We make those choices, of course based on previous experience and how the various chemicals in our bodies make us feel at the moment and how many functioning neurons we have. How else could we make them, and why would we want to them to be independent of all past events? Don't we want our choices to be determined by evidence and reason? That is what determinism, to the extent it exists, allows us to do. I think of "free will" as the legal meaning, that our choices are not being compelled by other entities, not forced upon us against our own desires and own reason. 
In that sense, determinism is what gives us the opportunity for will (free or otherwise), to make choices and to learn from how they turn out. Determinism is what motivated Einstein to remark that insanity is doing the same thing (cause) and expecting a different result (effect). So I like the legal definition and don't really see what all the fuss is about, unless, as I started with, some people believe there is also magic. Probably we should start with the definition of "will". I take it to mean the capability to make decisions which influence future events. A branch in water goes where the current takes it, a fish can make choices, but a fish with a hook in its flesh does not have free will. (Those must not be the definitions which the people who argue about free will use.) (I am okay with the operation of "will" being a mechanical process which ultimately is analogous to a branch in current because I still see a useful distinction between the branch and the fish; the fish uses much more information from its environment. Differences in degree can amount to differences in kind.) Hmm! The more I've thought about the whole no-free-will thing, the more it makes sense for me. This comment further expands on Superdeterminism means in practical terms. (I've come to the conclusion to fully understand Superdeterminism, I'll just have to become a physicist; that's on my 'bucket list' now :) ) DeleteFor what it is worth (which is what you paid for it), I currently think that probably the key point of super-determinism as a theory is that there is no actual quantum randomness (or perhaps a variation would be, less randomness than seems to exist) but total determinism instead; whereas the current consensus is that things seem deterministic on the macroscopic level of our lives, but are very random at the sub-microscopic level. (Statistically predictable, though.) DeletePersonally I think either situation is a workable way to operate a universe (as long as super-determinism produces pseudo-randomness), but I have encountered (online) several people more learned in physics who seem to absolutely abhor super-determinism as a theory. Dr. Scott Aaronson is one of them, I am sorry to say. (Plus a couple philosophers, but I don't pay as much attention to them as they seem rather closed-minded to me.) I wish Drs. Aaronson and Hossenfelder could discuss this and reach a mutual understanding (and let us know what it is). But they have other important work to do, of course. So where do we stand? Determinism (*) is false "for all we currently know" while people have different opinions about free will, opinions that science cannot either confirm of negate. Furthermore, free will is not the topic of this post, so let's leave that burning disagreement aside. But [super] determinism is. And I find it very strange that the superdeterminism hypothesis is not explicitly linked to the failure of the old, "Newtonian" determinism (not Newton's idea). DeleteFurthermore, I find determinism and superdeterminism "logically incoherent ideas" for the simple fact that we implement determinism in the computers we build and we know that computers cannot create in general, and do science in particular, meaning generate hypotheses, test them, and reach scientific conclusions. They cannot do that even theoretically. So, if superdeterminism were true, we would be computers, therefore we wouldn’t be able to conduct the science to confirm superdeterminism. 
(*) Determinism is the idea that the future is 100% determined by the past, not that the future is a function that includes the past along with other factors - possibly randomness and free will among them. As an aside, a LOT of people don't get the 100% part. Delete"computers cannot create in general, and do science in particular, meaning generate hypotheses, test them, and reach scientific conclusions"Factually wrong. Computers have generated hypotheses, tested them, and reached conclusions. Will this stop you from making this evidently false claim? Most likely not. I saw the above comment in the recent-comment lists, which just shows the first sentence, with a reaction of disbelief that Dr. Hossenfelder could be stating that. Which of course she isn't. DeleteStill, it is a common belief, I think due mostly to the fact that are no nerves which monitor brain neurons, so we cannot innately sense how our brains work. It turns out they don't work by magic, but by the same physics that govern everything else. A secondary reason is the sheer scale: about 100 billion neurons working in parallel. So far the most neurons that have been simulated on a computer is about one million, about enough to simulate a mouse's brain. (Dogs have up to 500 million neurons.) As I and others have said, at some point differences in degree become differences in kind, for practical purposes. I am not sure we will ever develop computers with 100 billion neuron capacity, but even their current mouse-brains, when dedicated to specific well-defined but complex tasks such as chess, Go, and protein folding, can out-perform us. I believe the final step in understanding human success (to be non-magical) is to recognize the basic algorithm we use: trial and error, plus memory of past trials. (Think of the great inventor Thomas Edison, for example.) This is an algorithm which can be implemented in computers, and has been, to great success. (Example from my own career: the GE-90 jet engine gas-path was designed by an iterative computer program, called "InGEnius". NASA has several similar examples. See "genetic algorithms" and "Monte Carlo method".) (Acknowledgement: while the opinion is my own, I had to learn the facts on which it is based from many sources, some of them at this site, over a lifetime.) Hi all, If I may interject: Delete'Free will' seems to not really play into how a deterministic system works out, practically speaking, from what I understand (which is only what Dr. Hossenfelder et. al. have described in plain English mostly, I admit... I've physics knowledge slightly above that of a potato). If one could cast a big enough light-cone from the Big Bang through space-time, at some point it will touch everything in the future including events we interpret as random, and all the computations and experiments and results ever made and produced. I don't see this as preventing events happening at least as if at random from the perspective of humans. as for 'no free will', perhaps the concept of 'free will' is similar to spirituality and religion - experiences and beliefs that seem to be artifacts of how human brains work. Sorry. I sense anger therefore, while strongly disagreeing with the latest counterarguments, I will stop here. DeleteNonlin.org1:35 PM, March 25, 2021 DeleteYou mean you sense you've lost the argument, so you fire off a parting shot, smearing the person who has pointed out your claims are plainly wrong as "angry", and fail to respond to the arguments. You have missed an opportunity to learn. 
What a pity. About superdeterminism, it is clear that "the measurement outcome is partly determined by the measurement settings". (agree). But the part that goes "In this case, the cat may start out in a superposition, but by the time you measure it, it has reached the state which you actually observe. So, there is no sudden collapse in superdeterminism, it’s a smooth, deterministic, and local process". ReplyDeleteYou cannot have both: either you believe the cat was in (and had reached) a certain state at the time you measured it, or you follow the Copenhagen line as to being in a superposition of 2 states, the question of which one being meaningless. Superdeterminism, as all determinisms, does not live well with superposition. Or at least it must profess that this concept is just a practical mathematical concept for our impossibility to know the state of the system before we measure it, that is, it equates to acknowledging the impossibility of 'predicting' the state of the system. In summary, as I (humbly) see it. The cat is not in a superposition. It is dead or alive. It only happens that we cannot predict its state -even in principle! If we twist the words a little into clarity: It is WE that are in some strange state with respect to the cat, call it superposition or whatever: the strangeness consists in the irrefutable, puzzling but simple fact that we cannot predict its state before we measure it. All the glamour and weirdness of QM is shorthand for this fact. Now can this view lead us to a positive outcome? (Negative observations are easier to do than positive ones, but also far less useful). Maybe, if it leads us to focus on the exact reason why we cannot predict the state of some system 'even in principle'. If one is a superdeterminist, this is the right question to ask and the hard answer to look for. IMHO the key to the puzzle is not far away from us and it will be soon disclosed, but first we must overcome the 2 big problems that you mention elsewhere: Bad methodology and group thinking. Imagine a hard box made of an unbreakable stuff that contains (you are told) a certain amount of money, in cash. You need to know how much. The problem is you cannot look into it. And you cannot open the box unless you blow it up (and destroy some of the money, essentially altering the content). ReplyDeleteImagine someone brings a device, a very special one, that overcomes that limitation. It can 'look' inside the box and tell you about its content. Unfortunately, it cannot tell you the exact amount of money inside, but can only provide you with an estimation: it tells you the probability that it contains any amount that you ask for. Would you say that the question of its content is meaningless? Would you accept the view that the money is in a superposition of say n states (the possible amounts inside box)? You could do that, provided you understand that this is just a 'practical' way of dealing with the problem, not a fundamentally correct one. Even if it works, and even it it provides you with the 'best' answer available in practical terms, you wouldn't be mislead into 'believing' that kind of reasoning. You coud say, what is the difference. If we cannot know the content, we just as well can talk of it being in superposition, of hidden or whatever. It doesn't make any difference. Well it might make a difference: The 'common sense' approach does enable you -it raises the probability- that you some day find out about what the box consists of. 
You are more likely to better understand the box, if not the amount of money inside. You may even end up building one such box! - Copenhagen says: Dont look at the box. It has no meaning. ReplyDelete- Pilot wave: This is how the box works. (But of course, it is wrong). - Many worlds: I got crazy after looking at this for too long. To somebody unfamiliar with the measurement problem of QM the above metaphor comes handy: ReplyDeleteYou cannot know the exact content of the box even by measuring it, for the act of measuring (breaking the box) alters its content. The result is always a concocted result from the actual (unknown) content and the act of measuring it, constrained only by probability laws of QM. Only in this sense it is understandable to consider the question as to the original value-content as senseless. It comes down then to a question of preference or 'taste' how do you think of the 'original' value that you are measuring. Hello Sabine, ReplyDeleteYou prefer superdeterminism compared to spooky remote action. Hmm. Do you have any vision of how this should work??? Do you have any imagination of how this can work? I don't expect a fully formulated theory. But do you have any idea that a 12 year old child can understand? You mention "nonlinear theories". Do you have anything more on that? BTW, I like the experiment you mention in the arXiv I'll keep my fingers crossed for it. Greetings Stefan Had a thought earlier that still doesn't seem totally stupid to me... ReplyDeleteIn connection with the realization - from the episode with the baby - that with constant energy per volume and increasing volume of the universe, the energy content of the universe increases... - what "yes, that guy again" said... - Could it be that something like coincidence emerges from this, even if one takes a deterministic approach?* Good Morning. ReplyDeleteStrangely enough, I'm drawn to this blog entry again. It's one of those days when I mentally draw a comic book version of something that would otherwise leave me perplexed: In a gloomy urban canyon, Superwoman explains to Batman that flying is actually very simple... you just have to want it hard enough and it'll work. Batman looks deep into her eyes, lets his gaze wander to her cleavage & says: "I don't think so." ... whereupon she gives him a kick in the ass and in the next picture he flies grinning over the roofs & thinks: "Well, I guess she is right after all." . ^.^ , ° [ by the way... congratulations. ] Gut ein. :) ReplyDeleteAccording to Douglas Adams in 'Thanks for All the Fish', trick to flying is to fall over and forget to hit the ground. ;) . ^.^ , °[ & occasionaly to use a Sony Walkman. ] ReplyDelete:) I've been walking along the road outside my mother's place listening to music, the countryside is beautiful. DeleteBtw, I feel like I've come a fair way along with Superdeterminism/no free will from the confusion I had about 6 months ago. Progress!
true
true
true
Science News, Physics, Science, Philosophy, Philosophy of Science
2024-10-12 00:00:00
2021-02-27 00:00:00
https://lh3.googleusercontent.com/blogger_img_proxy/AEn0k_tQCjP_kCS9_ZOP-dP7WJwpX8xegkNEoEgmS0gZZQryYyv9DWaWGNj5nu8lcS_j_XPdgvr13iSO2fwanlK-GJmWYh2gBWTa9z-foez4mZ9a9JHFxw=w1200-h630-n-k-no-nu
null
blogspot.com
backreaction.blogspot.com
null
null
31,433,033
https://masatohagiwara.net/202002-my-first-year-as-a-freelance-ai-engineer.html
My First Year as a Freelance AI Engineer
Masato Hagiwara
This week marks my one-year anniversary of quitting my full-time job and becoming an independent NLP/ML engineer and researcher (which I just call “freelance AI engineer” below). My experience so far has been very positive, and the past year was probably one of the most productive years in my entire career. My “achievements” in the past year include: Now, I believe that becoming a freelance AI engineer is a totally viable career choice (but only for those who are cut out for it—see below). A number of friends and people that I know asked what it’s like to be a freelancer. Many of them haven’t even heard of any “freelance researchers” before (yeah, me neither). That’s why I’m writing down my thoughts and experience here so that this might be useful if you are even vaguely interested. The answer is probably “no” for the vast majority of people. Being a freelancer is not for everyone. You need to be a type of person who enjoys being a freelancer. More on this later. You also have to be really good at what you do. Put yourself in the client’s shoes and think of this as hiring a contractor, be it a plumber or an attorney. You hire them because, by paying them, you expect that they can almost certainly solve your problem, not because you want them to be “part of your team” and work on the problem together, offering them opportunities for learning and growing, along with plenty of PTO days and free lunch. As a freelance AI engineer, you are expected to, for example, start with a client, familiarize yourself with the product and the codebase, submit the first PR within a couple of days, and complete your first business-metric impacting ML prototype or pipeline within the first couple of weeks. If you are just starting out in the AI field, I think your best bet is to go work at a large company (e.g., FAANG) with plenty of resources and growing opportunities, or at a fast-growing start-up (if you are not sure which one, I’ve heard good things about Duolingo) and build your experience as a full-time employee. I charge hourly. I’ve never done project-based billing. I think AI projects are better suited for hourly billing, because it’s simply too hard to define the scope of work based on the deliverables. My current rate is $200/hour for short-term commitments (e.g., a couple hours per month) and $150/hour longer-term commitments (e.g., 10+ hours per week). There’s only one time when I was told my rate is too expensive, but most clients just take these rates as fixed. When clients can’t afford my rates, we usually negotiate in terms of the scope of work (e.g., hours per week) instead of the rate. The rates of an average US-based AI engineer with my skills are probably higher. I probably should increase my rates. I probably should have long ago... Thanks to the “AI boom” of recent years, it's completely a seller's market for freelance AI engineers (again, if you are good). You will never run out of client leads and inbound requests. I think a good rule of thumb is to keep increasing your rates until you start getting “no”s with a 50% chance. You are still left with the other 50%, which tend to be better clients anyways. As a side effect, you’ll get very good at saying no. My default answer is always no, and my schedule is always full (except when it’s not, as my current clients know). We have enough savings to keep us from being hungry for a couple of years even if I didn’t work at all, and my wife works full time, which was really helpful when I made the leap. 
Even with enough savings, though, cash flow fluctuation can have a noticeable impact on your sense of financial security. If you work with net 30 clients, for example, it takes two months to get paid after you start working for them. I had some short periods of time when we had negative cash flow (especially when we moved from Pittsburgh to Seattle last summer) that ate our savings, and I really felt the impact of loss aversion. You feel a lot worse losing a certain amount of money than you feel better gaining the same amount. If financial security is your priority, you should probably get a full-time job. Compared to when I was working full-time, I made less money but also worked fewer hours in the past year. I could totally have worked more to make more, but having small kids, spending time with my family is important. I found most of my current and past clients through my network. Conferences and workshops are also good sources of potential leads, especially when you give talks instead of just listening to them. When I find a potential client that I really want to work with, I apply directly from the “careers” page. A surprisingly large number of employees, especially start-ups, are open to remote and/or part-time commitments for the right candidates if you are upfront about it. As a freelancer, you are always on the lookout for potential clients, since most contracts don’t last more than several months. This is a big difference between being a full-time employee and a freelancer. At least when I was working full time, my “job search” was very bimodal—I was either not looking at all or looking actively. If you just want a stable job and a paycheck, freelancing is probably not a good idea. People say the biggest perk of being a freelancer is freedom—you can work anywhere you like, anytime you want. This is probably not the most important factor, at least for me. Especially in the tech industry, many employers are already fine with employees working from anywhere and at anytime as long as they get things done. Even before becoming a freelancer I was able to work from home whenever necessary (for example, my kid getting sick), and in 2018, I even spent a month in South Korea working remotely while learning Korean. If you are thinking about becoming a freelancer just because of freedom of location or time, you should probably consider finding a better employer first. Being a freelancer, you have full control over how much you work. On the other hand, working full time, especially for a start-up, is very “binary”—basically you go all in or you are out. You are expected to work at your full potential. You can’t usually, say, work 20 hours per week to collect half the paycheck. Freelancers can do this fairly easily, which is one of the biggest perks of being independent. I’m also a type of person who always has tons of project ideas that I want to work on which may or may not be relevant to what I do for the employer that I happen to work for at that time. I’ve always considered myself more of an artist than an engineer/researcher, and being a freelancer is a natural consequence of this. If you work full time for a demanding job, good luck working on a side hustle or even finding energy for one at all. As a freelancer, everything becomes a side hustle. All the clients I work with know that I work with other people and on my own things and nobody cares (although some clients want to put a non-compete clause in the contract, which I fully respect). 
As a freelancer, you need to be good at managing your time, which equals your clients’ money. I’m a huge fan of the Pomodoro technique and manage all my work with pomodoros (oops, I meant *pomodori*). I can reliably do about 80 pomodori (= 40 hours) in a typical work week. If you’ve ever used the Pomodoro technique, you’d probably know how hard that would be and how much work that is. If you haven’t, you should seriously consider giving it a try—you’d realize how hard it is to actually “work” for 8 hours in a day (i.e., no web surfing, no phone checking, no useless Slack chats, etc.). I use Freedom on my laptop and Stay Focused on my phone to block distracting sites during the day. I can’t even imagine how I’d possibly be working without them. Not all of these 40 hours are billable, though. These include personal and work-related overhead such as learning (see below), sending invoices, managing your budget, etc. Also, if you work on research, you need to take into account the time you spend for academic activities. If you take on paper reviews for a conference, count that in. If you are an organizer of a workshop, count that in as well. This helps you be realistic about your “quota” you can use for each one of your activities. Before starting a week, I lay out the plan in a form of bullet points and the expected number of pomodori associated with each activity. This works a lot better than having a kitchen sink “TODO list” and always lamenting your lack of time. Many of my clients don’t require regular “check-ins,” and currently I spend only 2–3 hours in meetings in a typical week. Meetings have huge spillover effects and really take their toll on me. For example, if I even have a single meeting in the afternoon, I tend to start vaguely thinking what I’d talk about, if there’s any preparation necessary (including booking a room), etc. After the meeting, I would recall what we talked about and any necessary follow-ups. All these spillover effects, albeit small, fragment the state of the flow and significantly lower my productivity. Because I spend so little time in meetings, most days I find huge consecutive chunks of time I can spend working, which really helps keep my sanity. So far, this article has had very little to do with the “AI” part. What do I actually *do*? In the past year, I wore many different hats—consultant, engineer, and researcher. For some clients, I have meetings and offer advice on how they should go about implementing AI projects. For others, I write code as a contract software engineer. For yet others, I work on research and co-author papers. For many, however, it’s a mix of all of these. I think modern machine learning work is suitable for part-time commitments. If you are working on large models, it is not uncommon that training takes hours, if not days. Machine learning researchers and practitioners alike know how much time they spend training models and tuning hyperparameters. If you work full time, you have no option but to wait until the training is done or switch to other projects, if you have any. As a freelancer, you just switch between clients. As an ML freelancer, you need to have a strategy for securing GPU resources for training models. Some clients are kind enough that they just let me use their own infrastructure, but others may not (usually contractors’ access is very limited for security reasons). For my personal and small client projects, I just spawn AWS spot instances (usually a p3.2xlarge) as needed, using my own custom AMI. 
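In case it helps, here is a minimal sketch of what that kind of on-demand spot provisioning looks like with boto3. The AMI ID, key pair name, and region below are placeholders rather than my real values, and the spot options shown are just one reasonable configuration.

```python
# Minimal sketch: launch a p3.2xlarge spot instance from a custom AMI with boto3.
# The AMI ID, key name, and region below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # custom AMI with drivers, CUDA, and ML libraries baked in
    InstanceType="p3.2xlarge",         # single-V100 GPU instance
    KeyName="my-training-key",         # SSH key pair name (placeholder)
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Requested spot instance {instance_id}; remember to terminate it when training is done.")
```

Because spot capacity can be reclaimed at any time, it’s worth making sure training jobs checkpoint regularly so an interruption only costs you a few minutes of work.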
I also have a smaller GPU instance on GCP that I start and stop as needed. I don’t train huge 128-layer Transformer models on TPUs (not just yet) or use GPUs 24/7, so this on-demand solution has sufficed so far. If you work in AI, it is critical that you allocate time for learning and personal development. If you are working full time, it’s usually part of your day job, and you usually spend time reading papers and having “reading groups” during work hours. As a freelancer, these hours are usually not billable. You can’t usually bill a client for three hours just because you spent this much time reading papers last week (if you know such a client, or if you *are* such a client, do let me know). Remember, you are a professional who was hired to solve client’s problems, not to learn about AI, and everyone expects that you are already well read about and caught up with the latest AI development (I know, I know... has anyone ever been caught up with even a single domain of AI these days?) I think this is a price you need to pay in exchange for the higher rates you can enjoy as an AI freelancer. I live in Seattle, but I’m not sure if it helps with being a freelancer at all. I work remotely and rarely travel for work, except when I present at conferences. I work with Allen AI, and it’s nice that I can simply drop by on short notice and have lunch with the team, although I would be able to do my work even if I was on the other side of the earth. If you work remotely, I think you’d be better off if you lived in a cheaper city and work with clients in other large tech hubs. There are definitely ways to develop your career and become more experienced as a freelancer, but these look very different from full-time jobs. I’m just starting out, so I might have different opinions next year. There are usually no pay raises baked into the contracts, unless you negotiate. But you can gradually raise your rates (say, twice a year) until people start to say no. For this reason, I think it is probably easier to make more as a freelancer than as a full-time employee who needs to rely on performance reviews and promotions that are often out of your control. Speaking of promotions—as a freelancer, you can stay immune to office politics. You either get the job done or you don’t. You don’t need to be constantly thinking which boss you should be brown-nosing in order to get your next promotion. The flip side of this is you usually don’t gain management experience as a freelancer, although I do mentor junior devs and researchers who work for my clients. I think it helps to think of yourself as a specialist, not a generalist. For example, if you have a serious health issue that requires a very complicated surgery, you probably don’t want your family doctor to work on it. If you have a complicated lawsuit that involves millions of dollars at stake, you probably don’t ask your personal tax accountant to give you advice. It’d be your best interest if you don’t just “do AI.” If you are an AI generalist who just does “predictive modeling” and “text analytics” using scikit-learn, you’d probably attract desperate start-ups or product teams that simply want to have the word “AI” on their PowerPoint slides, and you’d need to compete with tons of other generalist “AI devs” on Upwork who live in countries where wages are way cheaper than yours. Narrowing your niche down attracts specific types of clients who have specific needs that few people in the world can solve. 
My expertise is NLP/ML for Asian language processing and language education. When defining your specialty, I think it helps if you define it in terms of the industry, not in terms of an ML stack. People look for, e.g., “AI solutions for healthcare” and “text analytics for finance,” not for “GANs” or “Seq2Seq models.” You need to be willing to learn a very wide range of ML techniques and models, from simple regression to GANs and RL, no matter what industry you work in. If you are even vaguely interested in starting your own business, especially a B-to-B SaaS business, I think working as a freelancer for a year or so is a great way to learn about the market demands and transition into the entrepreneurial endeavor. If you work with multiple clients and projects in your field, you will surely notice some common patterns and needs. These are great seeds of product ideas. Some of my open-source projects (e.g., NanigoNet and Open Language Profiles) were born this way. I incorporated a single-member LLC just for freelancing (and filed a foreign entity after I moved to Washington). I sign all the contracts as an owner of my LLC. Some of my non-US clients prefer it if I have “a company.” I’m not a lawyer so you need to take everything written here with a huge grain of salt, but I don’t think there’s any huge difference between being a sole proprietor and being a single-member LLC, unless you screw something up (e.g., being sued by your client) or you are hugely successful (e.g., being acquired by a company, making millions of dollars per year, or you grow a team of a dozen employees). I think the US is a great place to be a freelancer, even for an immigrant non-native English speaker like me. Being a legal permanent resident makes it easier to work with US-based clients, who usually pay better. The legal system is at least decent. Filing an LLC is just a matter of going to the state secretary’s website and filling out a form, if you know what you are doing. On the contrary, even thinking about incorporating in Japan, for example, makes me shudder, even though I am from Japan. You need to hand in a certificate of your corporate seal, a copy of your corporate bank passbook, along with the company registration in a floppy disk or a CD-R. I think their government has some work to do before they even think about their “national AI strategies.” A downside of being a US-based freelancer is medical insurance. I continued my previous employer’s health plan using COBRA and switched over to a plan that I bought on the state marketplace when I moved. Be prepared to pay at least a couple hundred dollars more per month on premiums than you’d pay as a full-time employee. Read Working for Yourself—Law & Taxes for Independent Contractors, Freelancers & Gig Workers of All Types from Nolo before you start. I read this book literally cover to cover before I got started, which was worth every single penny. Check out some of their related books, too. They are also good. I can’t say much about taxes, because this April will be the first tax filing season since I became independent. I discussed my options thoroughly with my CPA before I took the jump. I keep track of my business income and expenses on a Google Spreadsheet. I’m not sure if this is a good idea. At the least, don’t forget to pay quarterly estimated taxes. In this post, I showed that a freelance AI engineer can be a viable career path. You need to understand that this is a very qualified statement and your mileage may vary. 
Don’t get mad at me even if you go independent and go broke. If you are interested in knowing more, have any questions or feedback, shoot me an email. I’m happy to share my experience!
true
true
true
This week marks my one-year anniversary of quitting my full-time job and becoming a freelance AI engineer. I’m writing down my thoughts and experience here so that this might be useful if you are even vaguely interested.
2024-10-12 00:00:00
2019-01-01 00:00:00
http://masatohagiwara.net/img/wework.jpg
article
masatohagiwara.net
Masato Hagiwara's Page
null
null
18,246,028
https://spectrum.ieee.org/transportation/infrastructure/how-vehicletovehicle-communication-could-replace-traffic-lights-and-shorten-commutes
How Vehicle-to-Vehicle Communication Could Replace Traffic Lights and Shorten Commutes
Ozan K Tonguz
# How Vehicle-to-Vehicle Communication Could Replace Traffic Lights and Shorten Commutes ## A Carnegie Mellon startup aims to manage traffic at intersections by harnessing the radios in tomorrow’s cars **Life is short,** and it seems shorter still when you’re in a traffic jam. Or sitting at a red light when there’s no cross traffic at all. In Mexico City, São Paolo, Rome, Moscow, Beijing, Cairo, and Nairobi, the morning commute can, for many exurbanites, exceed 2 hours. Include the evening commute and it is not unusual to spend 3 or 4 hours on the road every day. Now suppose we could develop a system that would reduce a two-way daily commute time by a third, say, from 3 to 2 hours a day. That’s enough to save 22 hours a month, which over a 35-year career comes to more than 3 years. Take heart, beleaguered commuters, because such a system has already been designed, based on several emerging technologies. One of them is the wireless linking of vehicles. It’s often called vehicle-to-vehicle (V2V) technology, although this linking can also include road signals and other infrastructure. Another emerging technology is that of the autonomous vehicle, which by its nature should minimize commuting time (while making that time more productive into the bargain). Then there’s the Internet of Things, which promises to connect not merely the world’s 7 billion people but also another 30 billion sensors and gadgets. All of these technologies can be made to work together with an algorithm my colleagues and I have developed at Carnegie Mellon University, in Pittsburgh. The algorithm allows cars to collaborate, using their onboard communications capabilities, to keep traffic flowing smoothly and safely without the use of any traffic lights whatsoever. We’ve spun the project out as a company, called Virtual Traffic Lights (VTL), and we’ve tested it extensively in simulations and, since May 2017, in a private project on roads near the Carnegie Mellon campus. In July, we demonstrated VTL technology in public for the first time, in Saudi Arabia, before an audience of about 100 scientists, government officials, and representatives of private companies. The results of that trial confirmed what we had already strongly suspected: It is time to ditch the traffic light. We have nothing to lose except countless hours sitting in our cars while going nowhere. **The principle behind the traffic light** has hardly changed since the device was invented in 1912 and deployed in Salt Lake City, and two years later, in Cleveland. It works on a timer-based approach, which is why you sometimes find yourself sitting behind a red light at an intersection when there are no other cars in sight. The timing can be adjusted to match traffic patterns at different points in the commuting cycle, but that is about all the fine-tuning you can do, and it’s not much. As a result, a lot of people waste a lot of time. Every day. Instead, imagine a number of cars approaching an intersection and communicating among themselves with V2V technology. Together they vote, as it were, and then elect one vehicle to serve as the leader for a certain period, during which it decides which direction is to be yielded the right-of-way—the equivalent of a green light—and which direction has the red light. ### VTL Algorithm: Letting Cars Control Their Own Traffic Illustration: Anders Wenngren So who has the right-of-way? It’s very simple, and deferential. 
The leader assigns the status of a red light to its own direction of movement while giving the green light to all the cars in the perpendicular flow. After, say, 30 seconds, another car—in the perpendicular flow—becomes the leader and does the same thing. Thus, leadership is handed over repeatedly, in a round-robin fashion, to fairly share the responsibility and burden—because being the leader does involve sacrificing immediate self-interest for the common good. With this approach, there is no need at all for traffic lights. The work of regulating traffic melts invisibly into the wireless infrastructure. You would never find yourself sitting at a red light when there was no cross traffic to contend with. Our company’s VTL algorithm elects leaders by consulting such parameters as the distance of the front vehicle in each approach from the center of the intersection, the vehicles’ speed, the number of vehicles in each approach, and so on. When all else is equal, the algorithm elects the vehicle that’s farthest from the intersection, so it will have ample time to decelerate. This policy ensures that the vehicle that’s closest to the intersection gets the right-of-way—that is, the virtual green light. It’s important to note that VTL technology needs no camera, radar, or lidar. It gets all the orientation it needs from a wireless system called dedicated short-range communications. DSRC refers to radio schemes, including dedicated bandwidth, that were developed in the United States, Europe, and Japan between 1999 and 2008 to let nearby cars communicate wirelessly. DSRC developers envisioned various uses, including electronic toll collection and cooperative adaptive cruise control—and also precisely the function we are using it for, intersection collision avoidance. Very few production cars are now equipped with DSRC transceivers (and it’s possible that emerging 5G wireless technology will supersede DSRC). But such transceivers are readily available, and they provide all the functionality we need. These transceivers, designed to make use of IEEE Standard 802.11p, must each send out a basic safety message every tenth of a second. The message tells recipients where the transmitting vehicle is by latitude, longitude, and heading. Running on a processor in a vehicle, our VTL algorithm takes the data from that vehicle, throws in whatever it is receiving from neighboring vehicles, and overlays the result onto readouts from such digital mapping services as Google Maps, Apple Maps, or OpenStreetMap. In this way, each vehicle can compute its own distance to the intersection as well as the distance of the vehicles approaching the intersection from the other directions. It can also compute each vehicle’s speed, acceleration, and trajectory. That’s all the algorithm needs to decide who gets to go through the intersection (green light) and who has to stop (red light). And once the decision has been made, a head-up display in each car displays the light to the driver from a normal viewing position. Of course, the VTL algorithm solves only the problem of managing traffic at intersections, stop signs, and yield signs. It doesn’t drive the car. But when functioning within its proper domain, VTL can do everything at a much lower cost than autonomous vehicle technology can. Self-driving cars require far more computing capability just to make sense of the individual data feeds coming from their lidar, radar, cameras, and other sensors, and more still to fuse those feeds into a single view of the surroundings. 
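To make the idea concrete, here is a toy sketch of the election and light-assignment logic. It is only an illustration, not production VTL code: the message fields are simplified, the distance calculation is a flat-earth approximation that is fine at the scale of a single intersection, and the tie-free “farthest vehicle wins” rule stands in for the fuller election described above.

```python
# Toy illustration of the leader-election idea described above (not actual VTL code).
# Positions come from DSRC-style basic safety messages (latitude, longitude, heading),
# broadcast by each vehicle every 0.1 s.
import math
from dataclasses import dataclass

EARTH_RADIUS_M = 6_371_000

@dataclass
class SafetyMessage:
    vehicle_id: str
    lat: float      # degrees
    lon: float      # degrees
    heading: float  # degrees clockwise from north
    approach: str   # "north-south" or "east-west", derived from heading plus map data

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance between two lat/lon points in metres."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return EARTH_RADIUS_M * math.hypot(x, y)

def elect_leader(messages, intersection_lat, intersection_lon):
    """Pick the approaching vehicle farthest from the intersection as leader."""
    return max(
        messages,
        key=lambda m: distance_m(m.lat, m.lon, intersection_lat, intersection_lon),
    )

def assign_virtual_lights(messages, intersection_lat, intersection_lon):
    """The leader's own approach gets the virtual red; the perpendicular flow gets green."""
    leader = elect_leader(messages, intersection_lat, intersection_lon)
    return {
        m.vehicle_id: ("RED" if m.approach == leader.approach else "GREEN")
        for m in messages
    }
```

In the real system, leadership rotates roughly every 30 seconds and the election also weighs speed and the number of vehicles in each approach, as described above.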
Think of our method as the substitution of a rule of thumb for true intelligence. The VTL algorithm lets the cars control their own traffic much as colonies of insects and schools of fish do. A school of fish shifts direction all at once, without any master conductor directing the members of the school; instead, each fish takes its cue from the movement of its immediate neighbors. This is an example of a completely distributed system behavior as opposed to a centralized network behavior. With it, fleets of vehicles in a city can manage traffic flow by themselves without a centralized control mechanism and without any human intervention—no police, no traffic lights, no stop signs, and no yield signs. **We didn’t invent the concept of** intelligent intersections, which dates back decades. One early idea was to place a magnetic coil under the asphalt surface of a road to detect the approach of vehicles along a single route to an intersection and then adjust the duration of the green and red phases accordingly. Similarly, cameras placed at intersections can be used to count the vehicles in each approach and compute how best to time the lights at an intersection. But both technologies are expensive to install and maintain and therefore only a few intersections have been fitted out with them. We started by running our VTL algorithm on a virtual model for two cities: Pittsburgh and Porto, Portugal. We took traffic data from the U.S. Census Bureau and the corresponding Portuguese agency, added map data from Google Maps, and fed it all into SUMO, the Simulation of Urban Mobility, an open-source software package developed by the German Aerospace Center. SUMO simulated the rush-hour commuting time under two scenarios, one using the existing traffic lights, the other using our VTL algorithm. It found that VTL reduced the average commute to 21.3 minutes from 35 minutes in Porto and to 18.3 minutes from 30.7 minutes in Pittsburgh. Reductions for people commuting into the city from the suburbs and beyond were cut by a minimum of 30 percent and a maximum of 60 percent. Importantly, the variance of the commute time—a statistical measure of how much a quantity diverges from the mean value—was also reduced. ### Cars “Elect” a Leader—Then Follow Its Orders Illustration: Anders Wenngren Those time savings came primarily for two reasons. First, VTL eliminated the time spent waiting at a red light when there were no cars crossing at right angles. Second, VTL introduced traffic control to every intersection, not just those that have active signals. So it was not necessary for cars to stop at a stop sign, for example, when no other cars were around. Our simulations showed other benefits—ones that are arguably more important than saving time. The number of accidents was reduced by 70 percent, and—no surprise—most of the reduction was centered at the intersections, stop signs, and other interchanges. Also, by minimizing the time spent dawdling at intersections and accelerating and decelerating, VTL measurably reduces the average car’s carbon footprint. So, what would it take to get VTL technology out of the lab and into the world? To begin with, we’d have to get DSRC into production cars. In 2014, the U.S. National Highway Traffic Safety Administration proposed the adoption of the technology, but the Trump administration hasn’t yet implemented the regulation, and it’s not clear what the final decision will be. So U.S. 
manufacturers may now be reluctant to install DSRC transceivers, given that they’d add cost to a car and they’d be useful only if other cars carry them, too—the familiar chicken-and-egg problem. And until enough cars begin to carry the devices, the scale of manufacturing will remain low and the unit cost high. In the United States, only General Motors has begun to put DSRC radios into cars, all of them high-end Cadillacs. However, in Europe and Japan the outlook is a lot more favorable. A number of European automakers have committed to putting the transceivers in their cars, and earlier this year in Japan, where the government strongly supports the technology, auto giant Toyota reiterated its commitment. Photo: Dan Saelinger And even if DSRC fails entirely, our VTL algorithm can be implemented with other wireless technologies, such as 5G or Wi-Fi. The concept of incomplete penetration of DSRC transceivers brings up one of the biggest potential obstacles to adoption of our VTL technology. Could it still work even if only a certain percentage of vehicles is equipped with DSRC? The answer is yes, provided that governments equip existing traffic signals with DSRC technology. Governments may well be willing to do that, if only because they would rather not do away with hundreds of billions of dollars’ worth of existing signal infrastructure. To address this problem, we’ve fitted out our Virtual Traffic Lights technology with a short-term solution: We can upgrade existing traffic lights so that they can detect the presence of DSRC-equipped vehicles in each approach and decide the green-red phases accordingly. The beauty of this scheme is that all vehicles could make use of the same roads and intersections, whether or not they are equipped with DSRC. This approach may not reduce commute time as much as the ideal VTL solution, but even so it is at least 23 percent better than the current traffic control systems, according to both our simulations and to field trials in Pittsburgh. Yet another challenge is how to handle pedestrians and bicyclists. Even in a regime mandating DSRC transceivers for all cars and trucks, we couldn’t reasonably expect cyclists to install the devices or pedestrians to carry them. That might make it hard for those people to cross busy intersections safely. Our solution for the short term, while physical traffic signals still coexist with the VTL system, is to provide pedestrians a way to give themselves the right-of-way. Ever since January of this year, our pilot program in Pittsburgh has provided a button to push that actuates a red light—real for the pedestrians, and virtual for the cars—at all four approaches to the intersection. It has worked every time. In the longer term, the bicyclist and pedestrian challenge might be solved with Internet of Things technology. As the IoT expands, the day will finally come when everyone carries a DSRC-capable device at all times. Meanwhile, under ideal conditions, with no physical signals at all, we have demonstrated that the vehicles voting on how to assign right-of-way can allot a portion of the signaling cycle to pedestrians. During these interludes, a virtual red light shines in all the vehicles at all four approaches, lasting long enough for any pedestrians there to cross safely. This preliminary solution wouldn’t be optimal for traffic flow, and so we are also working on a method using cheap dashboard-mounted cameras to spot pedestrians and give them the right-of-way. 
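The retrofit described above is not spelled out in detail, so the following is only a plausible sketch of the idea: count the DSRC-equipped vehicles reporting on each axis and serve the busier one, falling back to ordinary timed behavior when no equipped vehicles are around. The function name and inputs are invented for illustration.

```python
# One plausible (hypothetical) phase-selection rule for a DSRC-retrofitted traffic light:
# serve whichever axis currently reports more DSRC-equipped vehicles, and behave like a
# normal timed light when none are broadcasting. An illustration only, not VTL's logic.
from collections import Counter

def choose_green_axis(reported_approaches, previous_green, min_green_elapsed):
    """
    reported_approaches: list of "north-south" / "east-west" strings, one per
                         DSRC-equipped vehicle currently approaching or waiting.
    previous_green:      axis that currently has the green.
    min_green_elapsed:   True once the current green has run its minimum time.
    """
    if not min_green_elapsed:
        return previous_green   # never cut a green phase too short
    if not reported_approaches:
        return previous_green   # no DSRC traffic: fall back to normal timed behavior
    counts = Counter(reported_approaches)
    busier_axis, _ = counts.most_common(1)[0]
    return busier_axis
```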
**Ultimately, what makes virtual traffic signals** so promising is the advent of self-driving vehicles. As envisioned today, such vehicles would do everything human drivers now do—stopping at traffic lights, yielding at yield signs, and so forth. But why automate transportation halfway? It would be far better to make such vehicles fully autonomous, managing traffic without any conventional signs or signals. The key in achieving this goal is V2V and vehicle-to-infrastructure communications. This matters because today’s self-driving cars are often unable to negotiate their way into and out of busy intersections. This is one of the hardest technical problems, and it continues to challenge even industry leader Waymo (a subsidiary of Google’s parent company, Alphabet). In our simulations and field trials, we have found that autonomous vehicles equipped with VTL can manage intersections without traffic lights or signs. Not needing to identify such objects greatly simplifies the computer-vision algorithms that today’s experimental self-driving cars rely on as well as the computational hardware that runs those algorithms. These elements, together with the sensors (especially lidar), constitute the single costliest part of the package. Because VTL has a largely modular software architecture, it would be easy to integrate it into the rest of an autonomous car’s software. Furthermore, VTL can solve most, if not all, of the hard problems related to computer vision—say, when the sun shines straight into a camera, or when rain, snow, sandstorms, or a curving road obscure the view. To be clear, VTL is not really competing with the technology of self-driving cars; it is complementing it. And that alone would help to speed up the robocar rollout. Well before then, we hope to have our system up and running for human-driven cars. Just this past July we staged our first public demonstration, in Riyadh, Saudi Arabia, in heat topping 43 °C (100 °F), with devices installed in the test vehicles. Representatives from government, academia, and corporations—including Uber—boarded a Mercedes-Benz bus and drove through the campus of the King Abdulaziz City for Science and Technology, crossing three intersections, two of which had no traffic lights. The bus, together with a GMC truck, Hyundai SUV, and a Citroën car, engaged the intersections in every possible way, and the VTL system worked every time. When one driver deliberately disobeyed the virtual red light and attempted to cross, our safety feature kicked in right away, setting off a flashing red light for all four approaches, heading off an accident. I hope and believe that this was a turning point in transportation. Traffic lights have had their day. Indeed, they lasted over a century. Now it’s time to move on. *This article appears in the October 2018 print issue as “Red Light Green Light—No Light.”* ## About the Author Ozan Tonguz is a professor at electrical and computer engineering at Carnegie Mellon University, in Pittsburgh.
true
true
true
A Carnegie Mellon startup aims to manage traffic at intersections by harnessing the radios in tomorrow’s cars
2024-10-12 00:00:00
2018-09-25 00:00:00
https://spectrum.ieee.or…%2C165%2C0%2C165
article
ieee.org
IEEE Spectrum
null
null
1,915,240
http://www.engadget.com/2010/11/17/windows-phone-7s-microsd-mess-the-full-story-and-how-nokia-ca/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
22,546,694
https://github.com/mfaisalkhatri/OkHttpRestAssuredExamples
GitHub - mfaisalkhatri/OkHttpRestAssuredExamples: API Testing using Rest-Assured and OkHttp.
Mfaisalkhatri
This project is the outcome of my self-learning of the API test automation frameworks Rest-Assured and OkHttp. I had heard a lot about Rest-Assured and OkHttp and how they make QA engineers' lives easier by helping them run all the tedious API tests in an efficient way. Hence, I started learning these frameworks and have documented all my learnings in this repository.

Check out my blog API Testing using RestAssured and OkHttp where I talk about these frameworks in detail and which one to choose for testing your APIs. To get a better understanding of API testing, check What is API Testing?

- This repo contains example code for API tests using Rest-Assured and OkHttp.
- Hamcrest matchers and TestNG asserts are used for assertions.
- TestNG listeners are used to capture events in the logs.
- Log4j is used to capture logs.
- Lombok is used to automatically generate getters and setters for POST request bodies.
- The REST APIs on https://reqres.in/ are used for testing.
- For any queries, write to me at [email protected] or ping me on the following social media sites:
  - Twitter: mfaisal_khatri
  - LinkedIn: Mohammad Faisal Khatri
- Contact me for 1:1 training related to test automation.
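The tests in this repository are written in Java with Rest-Assured, OkHttp, and TestNG. Purely as a language-agnostic illustration of the kind of check they perform, here is a rough Python sketch against the same https://reqres.in/ API; it is not code from this repo, and it assumes the public endpoint is reachable and still returns its documented JSON shape.

```python
# Rough illustration (not from this repo) of the kind of API check the Java tests perform,
# using Python's requests library against the same public https://reqres.in/ API.
# Assumes the endpoint is reachable and still returns its documented JSON shape.
import requests

def test_get_single_user():
    response = requests.get("https://reqres.in/api/users/2", timeout=10)
    assert response.status_code == 200   # the request should succeed
    body = response.json()
    assert "data" in body                # documented response wraps the user in "data"
    assert body["data"]["id"] == 2       # the user we asked for is the one returned

if __name__ == "__main__":
    test_get_single_user()
    print("GET /api/users/2 check passed")
```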
true
true
true
API Testing using Rest-Assured and OkHttp. . Contribute to mfaisalkhatri/OkHttpRestAssuredExamples development by creating an account on GitHub.
2024-10-12 00:00:00
2020-03-07 00:00:00
https://opengraph.githubassets.com/0b2d433f4e437002da1b8bf0dc95b68277368f42232452a995118a13147fa97d/mfaisalkhatri/OkHttpRestAssuredExamples
object
github.com
GitHub
null
null
16,179,511
http://coffeealternatives.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
37,663,145
https://societysbackend.com/p/a-privacy-review-tiktok
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
21,416,722
https://www.angstrom.life/goals/
angstrom
null
If I stopped you on the street. And asked you "what are your top three life goals?" Would you be able to answer without hesitation? Would you be able to confidently say why those goals are important to you and why failure to achieve them is not acceptable to you? If your answer is "no". Are you sure that you are taking your goals seriously? You will not achieve your goals if they are not intensely emotional for you. In this guide and interactive tool, I am going to help you "feel" your goals. You will be uncomfortable doing it. But you will not regret it. There are two things that we need to untangle first. They are slightly different but important to be clear on. The reason life goals in general are important is actually a more difficult question to answer. But the simple answer is: happiness. The journey of life is more enjoyable if you know the destination. It is more exciting if the struggle is not blind. It puts the challenges into perspective and hopefully helps you avoid going down dead-end alleys that waste your precious time. Productivity is irrelevant if you are working on the wrong things. You will just get more stuff done that is not important to you in the long run. Properly formatted written goals on the other-hand is a longer story. In my opinion there are three reasons why they are so important. The primary reason that life goals are important is that they save time and reduce anxiety. The importance of decision fatigue is often overlooked when it comes to goals. Success or failure is determined by our micro-decisions. Those seemingly unimportant choices that we make hundreds of times per day. We are human. We are programmed to take the easy option. To take the path of least resistance. It is only the clarity that we have about what we want that gives us the motivation to select the harder choice. Every day we have too many decisions to make. Each decision drains our energy. If your goals are not top-of-mind these decisions are more difficult. Preventing us from doing anything. If your goals are clear, decision fatigue is reduced and your choices become easier. Easier, not easy :( The second reason is measurement. It sounds weak. It sounds like the kind of reason that people say without really thinking about it. But it is deceptively important. If you have not written down your goals in an emotionally evocative way you will never have the motivation to act on them daily. Measuring your progress and the time that you spend on your goals is a constant reminder of the promises that you made to yourself. Many of the goal types that you may have heard of have several different names, which adds to the confusion. We will start with the broad buckets that are used and then get into more practical areas of your life where it would be good to have some goals set. Towards the end of the chapter, I will give my personal recommendations. Basically, ignore most types of goals. While reading, keep in mind that it is what you choose, not what you don't choose that makes you happy in life. There are five major types of goals to consider when setting goals. To explain the differences between these types I'll use the example of running. We will see how running goals vary when viewed through each of the five different goal-type lenses. An outcome goal is a goal with a definite end result. An example might be I will complete the New York marathon in 2021. With this type of goal, it is very clear if you have achieved it or not. Outcome goals do not detail how you will achieve the goal. 
Process goals are similar to setting a habit. They are ongoing behaviours that you will stick to. For example, I will become a runner and I will train 5 days a week. It can be very useful to set process goals in order to aid with outcome-based goals. Process goals are focus on how you will do things. A performance goal is a personal standard that you set for your activities. For example, I will complete the New York marathon in 2021 with a time less than 4 hours. The difference between performance and outcome goals is how well you will achieve the outcome. A topic goal is a goal where you want to focus on an area of your life. For example, I want running to be an important part of my life. This might include, joining clubs, blogging about running, learning all about the benefits and mechanics of running properly. A time-based goal is a personal goal that makes the deadline the most important element of the goal. For example, six months from now I will be the best runner that I can be. You do not have a specific outcome or performance level in mind but you are going to do everything that you can do in the next six months to improve. It is not an accident that health is the first category in this list. If you do not take responsibility and manage your health then all other goals fall by the way-side. If you are not healthy and vibrant you will not have the energy or motivation to work on anything else. If you've ever had a toothache you know that everything else becomes unimportant. It is all-consuming and it is the only thing that you can focus on. I recommend that people have between 3 and 5 active life goals. The only category that you should definitely always have a goal in is health. Health goals should encompass fitness / sleep / nutrition and any persistent ailments that you have. Dedicate time to these things every week. Educate yourself on how the body works and learn skills. If you learn how to use a kettlebell (or similar) really well, you can build a strong and resilient body that minimizes your downtime and increases your chances of achieving all of your other goals. If you ignore everything else in this article, please don't ignore your health. Getting to a stable / comfortable financial position is also one of those things that is a gateway to more meaningful goals. It is hard to work on your relationships or romantic goals if the bills are piling up. Aiming to be rich can be the goal but I would suggest that it should not be a goal in this category. Studies have shown that the amount of happiness that is derived from your income diminishes drastically beyond around $75,000 USD for an average American. Obviously, you need to adjust that number for your country and its' cost of living. And that is not saying that your happiness won't improve, it will just improve less and less for every thousand additional dollars that you earn. Knowing this fact is useful but truly understanding it can be powerful. It is difficult for most to internalize and believe it to be true. But the sooner you do, the sooner you can start to focus on other more meaningful aspects of life. Don't let your financial goals dominate your focus. Setting relationship goals and actively working on them is often overlooked. Even though it is possibly the easiest way to dramatically improve your life and your overall happiness. Having a strong support network of loved ones, friends and family can lead to a very happy and fulfilled life with the standard trappings of "success". 
Below I discuss the three most important relationship types. When you are constructing a goal in the relationship category you should probably address each of these three types. Even if you are married you should consider addressing your spouse separately. That one-on-one relationship can be a linchpin in your overall happiness. In an ideal world, how would your interactions be with this person every day? Even on tough days? How would you treat each other even after years of being together? If you're single. What kind of person are you looking for? Don't make a huge long list of their attributes. Focus on how you would interact with them. Their personal attributes are much less important than the interplay between you both. Having a long list of dealbreaker attributes is not a good idea for a relationship that you want to build to last. This one is tricky in a lot of cases. Some of us are lucky and have loving family members that are easy to get on with. Others, not so lucky. The fact that we can't choose our family (other than spouses) means that we need to be creative and flexible with our goals. You may need to have specific goals for specific people. Or, a better way might just be to set a goal about who you are going to be. Someone who is always friendly, loving and supportive regardless of the situation. Easier said than done I know. But if you can become that person. Over time strained relationships with your family members can heal and become positive and add joy to your life. You only have control of your half of the relationship. Do not stress about that which you cannot control. Consciously decide how you want to act put regular large deposits into the bank of trust and hope that you eventually reap dividends in the relationship. If you don't, at least you can sleep at night knowing that you did your best. Research has shown that those who have close friends (emotionally and geographically) live longer than those who don't. FB / Insta / twitter friends don't count, unfortunately. In fact, not having a close network of friends can be as bad for your health as smoking 15 cigarettes per day! Developing new friendships gets harder and more daunting the older we get. But it is far from impossible and you should be open to making new friends every time you meet someone new. But realistically, nurturing the friends that you already have is a more effective goal for you to set. Your goal should be to meet in person as frequently as you can and make memories with different activities once per year. A weekend away with your best friends every year can be a fantastic bonding experience and something that everyone can look forward to. Be fun. It's fun to be fun. Education has changed a lot over the past 10 years. It is no longer limited to high-school or college degrees. The internet has put some of the best educators in the world at all of our fingertips in affordable ways. It is becoming more and more important that we adopt the mentality of the lifelong learner. Work is changing too quickly not to. Most of us will have several different careers in our lifetimes. If we are to succeed, financially and otherwise, we need to constantly learn new skills. If framed in the right light, having educational goals will help us to achieve our other goals. It can also be epically fun. Learning how to code or dance, or a new language can be engrossing hobbies. Selecting the right educational goals can be complete life-changers. Spend the time and think carefully about them. 
Personal development or self-improvement is a very general category that can be a catch-all that overlaps with many other categories. That being said, many of my favourite topics fall into it. The most popular topics for this type of goal, include: improving your productivity/time management (which I am passionate about), becoming more a more confident/positive person, improving your intelligence quotient or emotional intelligence. Shaking bad habits or conquering your fears. See, I told you it was broad :) The trick with this type of goal is to not have too many. Stop multitasking. Multitasking is mostly a bad idea. Deep work and focus on one of these goals at a time. These types of goals often have a finish line, so can give you a great sense of satisfaction. Use relatively short "finish-line" goals to build your momentum towards other goals. Success breeds enthusiasm. You need to give yourself some quick-wins. What would you do if you had enough money and you didn't care about what people thought? Not worrying about what people think is very important here. Forget about thoughts like: I went to university and studied x, so I should work in x. Thinking about your career though that type of lens allows you to determine if you are on the right ladder or if you need to jump to another. If you're confident that you are on the right ladder. Then you should set time-bound progression goals. If you regularly think about this type of goal it will affect the hundreds of micro-decisions that you make during a standard work-day. Having a clear career goal can be very useful when deciding on other types of goals discussed in this chapter. For example, your short to medium term educational goals can be designed to support your career goal by upskilling yourself for the position that you eventually want to get to. Thinking about your own psychological health is often overlooked. Too many of us (me included) avoid this topic. We put too much stress on ourselves to become successful. This can have the opposite effect. We need regular pit-stops, change the tires, wipe the windshield etc. Without taking the time to take care of ourselves mentally we slow down. This generally happens in an imperceptible way. Everyone should have regular breaks or activities to de-stress ourselves. Too often we feel that these types of pit-stops are frivolous luxuries. Try not to view them in that light. Without fresh tyres, you are much more likely to get off the track when you come to an inevitable tight corner. Your lifestyle goals can be very linked to your financial situation. But if you can try and have some that don't require lottery level winnings that you can do right now. Having your dream house or three foreign holidays per year are not bad long-term goals. But you need to live the journey. Have some lifestyle goals that are attainable without major changes to your finances. I've broken lifestyle goals into two sub-types. Experiential goals and There's a common saying "collect memories, not things". I am a huge fan of this philosophy to life. Beautiful moments/experiences enrich the soul more than the newest gadget, in my opinion. Designing and planning experiences for yourself and your family create those magical stories that are told for years to come. I particularly like designing trips away with friends to cement relationships. We only live once. It's a dull ride if you don't do your best to enjoy it. Unfortunately, when you become an adult, nobody else will help you prioritize making your life fun. 
It's all on you. Hobbies / friends are important! Don't continuously put off enjoying life until you hit a specific career goal or dollar amount in your bank account. That is the best way to have miserable years if not decades that you cannot get back. Trust me on this. If you are like me and find it hard to prioritize this type of goal, then look for multi-purpose leisure activities. Choose leisure activities that are fun but that could potentially make you healthier / wealthier or improve your relationships. Two birds with one stone. We all want to enjoy our retirement. We should start planning for it early. Earlier than most of us do. If we do start planning early, the only aspect that we generally think about is financial. Don't get me wrong, that is extremely important. Downgrading our lifestyles when we retire is not an ideal scenario. But if you think about what your retirement goals are in detail you may come up with really interesting things to do now to prepare. Maybe you should start learning another language so that you can retire in another country. One with better weather! Or maybe you'll take up a hobby now that you think would be a great way to spend your free time. Golf is the first one that comes to mind but the options are limitless here. The options are only limited by your imagination. But after reading lots of articles on the web I've come to the conclusion that most people confuse goal categories with the techniques available to actually come up with those goals. Here I am going to discuss the most common methodologies. Regardless of what technique you use, remember that you do not have goals if they are not written down. Also, keep in mind to set goals that motivate you. If your goals do not inspire action then they are not useful to you. We're going to discuss 5 frameworks here. With all of these frameworks, it has been shown that you should frame the goal positively rather than negatively. "I will do..." is much more powerful than "I will not do...". Of the techniques listed below, I personally advocate for the obituary method and I designed the interactive tool accompanying this guide using that method. This technique is simple. You go through the list of categories in chapter 2 and you rack your brain until you can think of one goal for each category. It is a shotgun approach. There are many flaws with this concept even though it seems to be quite popular online. The most obvious flaw is that you will have way too many goals. You will also not have any clear hierarchy of importance. Having this many goals means that you will probably forget all of them quickly and none of them will ever be top-of-mind. There is also no strong emotions in the goal-setting process. It is more of a chore than a voyage of self-discovery. The value first approach is better but has prerequisites. You need to be clear on what your values are first, which is not necessarily self-evident. But once you have that list (it will be far fewer than the goals categories), you have a lens to look at your life through. You can ask yourself, what does someone who values x do? How do they interact with people? For example if "saving the environment" is one of your values, you might set a goal to change your career into something that is more aligned with this core belief. Similar to the values first approach the purpose first technique requires prerequisites. 
But if you already know what your purpose in life is, it can be extremely easy to set effective goals by examining aspects of your life through the lens of that purpose. If you are lucky enough to have a clear purpose in your life, like Elon Musk wanting to make humans a multi-planetary species, goals come easily: all of his goals, long-term and short-term, begin to write themselves. And because he already has his "why", he has the emotion and drive already built in. If you haven't seen it, you should really watch Simon Sinek's TED talk called Start With Why. He pitches his method as a tool for marketing communication, but it can be adapted into a powerful personal goal-setting technique. In that talk, think of the "what" as your goals. And every time he says "People don't buy what you do, they buy why you do it", think of yourself as the "buyer" of your own goals. If you cannot convince yourself to "buy" your own goals, you have not come up with the right framing to motivate yourself to act.

I am sure that you have heard of SMART goals. SMART stands for Specific, Measurable, Achievable, Relevant, and Timely. SMART goals describe the anatomy of a well-formed goal: the idea is that you need to be able to clearly articulate each of the five elements before you can consider a goal to be well written. The framework was initially designed for workplace goals, but it is also effective for many goals in your personal life. Obviously, there are certain types of personal goals that can be difficult to measure. This is especially true for relationship or spiritual goals. That being said, it is worth trying to write down all five elements for each of your personal goals if you can.

The obituary method is what I decided to use for the interactive tool that comes with this guide. The reason is that it gives you a great frame of reference to think about what goals you want, not just how to write the goals (like SMART goals). If you are new to writing goals, then it is the best starting point. In chapter 7, I explain how this method works in a lot of detail. But you don't need to read that chapter or understand the technique to use it. Just click on "Launch the tool" in the menu (table of contents on mobile) to get started.

For these examples we will not include "why" someone should do them or "what failure to achieve the goal would look like." These things are too personal to speculate on. But if you add any of them to your own goals list, make sure that you do write down your why and what failure would look like.

Quotes are a fantastic way to see that our struggles are not unique. People have been struggling with achieving their goals for millennia. Reading quotes from famous people from different times in history shows us how little the human condition has changed over time. I find this to be a very comforting fact. Here are some of my favourite quotes about life goals. For each quote, I will give a little commentary on my thoughts relating to it. If you would like to read other quotes that I like, you can find them here.

I wish this was true for all goals, but it is for more of them than you might think. Do not underestimate the importance of formatting your goals. You need to become clear on why you want something as well as the consequences of failing. Without knowing these things, you will not have the resolve to do what is necessary.

If you have ever done a Tony Robbins programme, you know how he emphasizes visualizations. This quote encapsulates that. If you can mentally see the end of the goal, you can boost your motivation to get there.

The older I get, the more ambitious my goals are becoming.
That does not seem to be the case with my friends, though. Year after year they become more resigned to the status quo. Don't be that person. Live life and learn like you will live forever.

I don't like using unattributed quotes, but this one is too good. It expands on the quote from C.S. Lewis. Don't let your goals die on the vine of mundanity.

I like Bruce. I like this quote because it reinforces the idea that the journey can be more important than the destination. Don't keep your head down and forget to smell the roses. You might just miss out on life.

This could be an anthem for introverts. I count myself as one of those. I think there is a balance to be struck here. Relationships are a huge part of your overall happiness, but they can sometimes be tragically taken away. Having other passions is important too.

This could have been stated by a modern titan of the automotive world too. If I credited this to Elon Musk, most people would assume that he said it, knowing what a prolific achiever he is.

Brilliant quote and sage advice. 1,500 years does not lessen its applicability to our modern lives.

This is one of the most frequently used quotes on this list, even if the author isn't as famous as his illustrious company. If you cannot take a baby step towards your goal, you have not broken it down into a good enough plan.

There is generally an inverse relationship between our age and the ambition of our goals. The older we get, the less ambitious we become, even though our capacity to actually achieve increases with age.

Another classic quote. This quote is just as valid for 'your world' rather than the entire world. Changing your world is absolutely within your power, if you believe that it is.

Milestones and deadlines can be overlooked when talking about life goals. To really achieve them, you need to break them down into milestones that do have firm deadlines. If you don't, you will allow yourself to keep pushing action to tomorrow.

Zig is getting a second quote. It's too good not to include. You will earn a huge amount of personal respect by consistently working on a goal. That self-respect will help you in every aspect of your life. It is like compounding interest.

Finally, a word of caution. Having huge goals without the systems in place to move towards them is dangerous. If you commit to a goal, you need to make progress, otherwise you will flounder in a negative spiral. Failure can become part of your identity, which can be toxic.
This quote cannot be overemphasised enough when it comes to sticking with your life goals. You need to put systems in place. Simple systems that constantly remind you of what you need to do. And why you need / want to do them. Our memories are fickle. Motivation is fickle. Nobody understands this more than religions. They have instituted regular weekly reminders and special days where you hear the same messages over and over again. We should all learn from their wisdom and set up a regular schedule whereby we remind ourselves of what we want and why we want it. Sunday is my review day. I re-read all of my goals and what I wrote about the goals when I decided that they were important to me. Without this regular reinforcement, your goals will fall by the wayside. You need to assess your progress and decide what needs to be done in the upcoming week. I know this all sounds like "work". It is. It should be. Self-respect and achieving your goals is work. 15 minutes per week to reflect on your progress on the most important desires of your life is not something that you will regret doing. Many productivity experts speak of "keystone habits". Habits that have an impact on all other habits. Most experts list exercise or waking up early as keystone habits but in my humble opinion, regular short reviews of where you are and where you want to be is the most effective keystone habit. Weekly reviews can easily be done in 15 minutes. Monthly in less than 10 minutes by just reading your weekly reviews. Quarterly and yearly reviews are just as quick by reading the levels below them. This is a very commonly asked question. Unfortunately, the answer is: it depends. How old you are, how much free time you have and your current life situation all play a role in that decision. As a rule though. It should be in the range of three to five. You need to be able to remember them and keep them top-of-mind. You should not need to refer to a notebook or app to be able to accurately recite them to yourself. For each goal, you should set up a few milestones. If it is a huge goal, then you may need a lot of milestones but again, three to five is a good number if possible. Milestones can be considered to be sub-goals but they can more easily written using the SMART goals format even if the main goal is difficult to write in that format. Your milestones should be outcome and time-based milestones. I will achieve x by y. Setting the outcome is the easy part. Estimating how long it will take to achieve it is difficult. Don't beat yourself up if you don't meet the deadline. Just reassess your progress on the deadline and try and figure out what you did wrong or what circumstances got in your way. Then set another deadline. The more times you do this the better you will become at setting realistic deadlines. If you do achieve a milestone, celebrate! It is an awesome achievement and you are instantly in the top 5% of people. Most people don't set goals. Fewer people set milestones. And much fewer people do the work to actually achieve something they set out to do. Take a bow. Treat yourself to something indulgent. You need to celebrate your wins in order to give yourself the motivation to move on to the next milestone. Do not underestimate this step. It is crucial in forming the habit of achievement. In chapter 4 I gave a list of goals examples that you can use for inspiration. But I recommend that you use the interactive tool in order to set your own. The goal-setting process itself is extremely important itself. 
The reflection on your life and who you are as a person is what makes it powerful. There is work involved in doing that reflection but I hope that the tool and obituary framework will minimize the work by getting you into the correct headspace quickly. In the next chapter, we will discuss the interactive tool in much more detail. If you do take the time to complete it, you will set three goals: (1) a financial goal, (2) a relationship goal and (3) a health goal. I would not recommend starting on this journey if you are not going to take it seriously or you do not have the time at the moment. You need between 20 minutes and an hour to properly complete this guide. The more seriously you take it, the more you will get out of it and the longer it will take. If you really think about your answers and visualize your future at each step, you will walk away with something that could be profoundly beneficial to you. The quickest way to get a sense for the guide is to click on the "Watch a preview" button above and watch the 45 second video. But if you would like to keep reading, I will explain in detail how everything works and the motivation/research behind the tool. Here are the main steps that you will go through. Click on a step to read why it is important.

Step 1: You will describe who you want to be, using your obituary as a framework to express yourself. I have included some suggested answers at each step but you can and should write your own answers where appropriate.

Step 2: You will create life goals for different aspects of life, based on who you decided that you want to be in step 1. Your obituary will guide you when deciding on your goals by setting your frame of reference for what is important in life.

Step 3: For each goal, you will think about the reason that you want it. How would it affect your life if you achieved that goal?

Step 4: After thinking about why you want to achieve your goals, you will visualize the opposite, by describing how your life would look if you fail.

Step 5: You will then set one milestone, twelve months from today, which will be a stepping stone towards achieving your goal.

To discover your life purpose and goals in this app, the first thing that you will do is write your obituary. It might sound a bit morbid but it is the best starting point and I've made it as easy as possible for you. Imagine that you are an observer standing behind everyone at your funeral. What would you want people to say about you? Who do you want to be there? How did you impact their lives? And how do you want them to remember you? If you can answer these questions honestly and with clarity, you know how you should live today. You know how you should treat people. Considering how people perceive you can be toxic, especially in our Instagram-able world. But, if done in a balanced way, it can inspire you to live your life the right way. The most likely reason that you have not achieved your goals in the past is that you did not figure out explicitly WHY you wanted those goals. Goals are only powerful if you have articulated a meaningful reason that you want them. Without a compelling reason, goals are impotent. Do not underestimate the power of why. This is another counter-intuitive step in life planning and goal setting. You should be crystal-clear about what your life would look like if you do not achieve your goals. Looking at your goals from this perspective reinforces the reasons why you do want to achieve your goals and can be a tremendous motivator when times get hard.
For that reason, when setting goals and milestones using this guide, I will ask you to visualize failure. The clearer you are about your emotions and how you would feel if you failed, the more powerful your motivation to succeed will be. We will only set one milestone for each goal in this guide, but I do encourage you to set more. You need to set points in time where you can check your progress to encourage action today. Milestones are the most effective way of doing this. They do need to be realistic and measurable. As mentioned in a previous chapter, in the business world "SMART goals" are very common. Just to remind you, the acronym stands for: Specific, Measurable, Achievable, Relevant, and Timely. But for personal life goals, I believe that you should have "SMART milestones" instead of goals because your time-frame is significantly different. The primary angstrom.life application is built around the "life-plan hierarchy" that you can see above. The "life-plan hierarchy" is my working title. It's not as catchy as Maslow's hierarchy of needs but you get the idea :) The image below shows the steps of that hierarchy that we will use in this free guide today. Define who you want to be. Set goals based on that and create milestones to help you achieve the goals. The full process for achieving your goals in life should include breaking your milestones into projects and your projects into tasks that you take action on today! If you already know your goals in life, you are way ahead of most people. But, if they are not written down and reviewed, they might as well not exist. My advice to you would be to start getting granular. Set 1-year, 3-year and 5-year milestones. Be detailed, use visuals if you can. Describe how success would feel at each milestone and also how you would feel if you failed to achieve the milestones. Then move down the "life-plan hierarchy" and create all the projects that you will need to complete to hit your 1-year milestone. Once you have all the projects, move down the hierarchy again and create all the bite-sized tasks needed to complete each project. As the joke goes, "How do you eat an elephant? One bite at a time". Why do you need life goals at all? The answer is simple. Without having a dartboard you are just throwing darts at a wall. You can't be happy if you don't know if you hit a triple-twenty or a bullseye. You can't feel a sense of accomplishment or improve without the board telling you how close or far away you are. Life goals are our dartboard. We measure our progress against it across different dimensions of our life. If we don't have these goals and the why behind them, we will measure our life against other people and how they are doing (or how pretty their Instagram photos are). That is why I feel this guide is important. For the last year, I have been building an app that helps you achieve your goals. It is more like a life design app or life planner than just an app about your goals. One feature that was sorely missing from the app was a way for users to find their life goals in the first place. I am regularly asked "what life goals should I have?". I started researching how to help my customers figure out how they can set their goals. I realized that there were no step-by-step guides out there, so I decided to try and build one myself. I came to the conclusion as I built the feature that it could do people a lot of good. I decided that I would not only give it to my customers but would also make this feature free to anyone who wanted to use it.
I will continue to develop this free app as I get feedback from people like you. So please do email me with any thoughts that you have, good or bad. I have thick skin; unlike Tom Cruise in A Few Good Men, I can "handle the truth" :). My email address is [email protected]. All too often in life, we start by thinking about projects that we need to complete. We then take out a piece of paper or one of the thousands of project management apps and list all the tasks that we need to do to finish the project. In my opinion, this is the wrong way to think about things. You should start at the top of the pyramid to the right and work your way down from there, not start halfway up and only head down. Lots of people inspired this guide! I read as much as I could on the subject. The most important influences that I need to mention are: Tony Robbins, who I think has some genius ideas and whose achievements I respect a lot; Gabriele Oettingen and some of her research papers; and Stephen Covey's books and his extremely famous saying "Begin with the end in mind". I also have to give credit to Angstrom.life users for asking deep and intelligent questions that forced me to really think about this problem. Hopefully, it lives up to their expectations and yours. If you feel that there are people you know who would benefit from seeing this guide, please do share it with them.
true
true
true
This step-by-step guide is the quickest and easiest way for you to discover your life goals. It's free and anonymous. No login or registration!
2024-10-12 00:00:00
2019-01-01 00:00:00
https://www.angstrom.lif…rchy.c443c02.png
website
angstrom.life
Angstrom
null
null
33,616,825
http://www.rntz.net/files/thesis.pdf
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
8,219,257
https://www.amazon.com/gp/drive/app-download
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
14,677,997
http://www.scmp.com/news/hong-kong/politics/article/2100779/sino-british-joint-declaration-hong-kong-no-longer-has-any
Joint declaration of 1984 ‘no longer has realistic meaning’, China says
Joyce Ng
# Sino-British Joint Declaration on Hong Kong ‘no longer has any realistic meaning’, Chinese Foreign Ministry says

Ministry spokesman issues retort to foreign countries’ statements on the political condition of the city, as it marks 20 years since the handover

The Chinese Foreign Ministry has declared the Sino-British Joint Declaration, which laid the groundwork for Hong Kong’s handover, a “historical document that no longer has any realistic meaning”, after Britain and the United States spoke of the binding effect of the 1984 treaty on China and the city. “Now that Hong Kong has returned to the motherland for 20 years, the Sino-British Joint Declaration, as a historical document, no longer has any realistic meaning,” ministry spokesman Lu was quoted by Xinhua as saying at a press conference. “It also does not have any binding power on how the Chinese central government administers Hong Kong. Britain has no sovereignty, no governing power and no supervising power over Hong Kong. I hope relevant parties will take note of this reality.”
true
true
true
Ministry spokesman issues retort to foreign countries’ statements on the political condition of the city, as it marks 20 years since the handover
2024-10-12 00:00:00
2017-06-30 00:00:00
https://cdn.i-scmp.com/s…My6&v=1498842083
article
scmp.com
South China Morning Post
null
null
1,568,656
http://www.businessinsider.com/management-lessons-i-learned-working-at-apple-2010-7#a-tech-company-should-be-run-by-engineers-not-managers-1
8 Management Lessons I Learned Working At Apple
Bianca Male
true
true
true
null
2024-10-12 00:00:00
2010-08-02 00:00:00
https://www.businessinsi…al.png?v=2023-11
article
businessinsider.com
Insider
null
null
6,365,561
http://www.afr.com/p/technology/facebook_beats_google_in_search_NIoR1uwW3aSVIOEEXRkK5O
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
20,782,847
https://medium.com/futuresin/huawei-is-facing-yet-another-roadblock-this-time-in-australia-4cfe4dd9dd17
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,020,964
https://www.youtube.com/watch?v=fZ1R9RliM1w
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
39,333,372
https://github.com/expressjs/discussions/issues/160
Express Forward · Issue #160 · expressjs/discussions
Expressjs
# Express Forward #160

## Comments

👏 thank you so much @wesleytodd for this! | I've been MIA for quite some time, but I'd love to see this project pick up some steam again. I will try to attend a TC meeting if I am able. LMK if there are candidate proposed times. My schedule is often full during PT business hours, so 5pm-9pm PT is likely to work better for me, but I will try to make any time that is chosen. | I created an issue to get a meeting organized: #161 | As a bit of an outsider (albeit one who's been using express for over a decade), the only part of this I find of real interest is how you plan on changing the organization and development processes to allow for regular releases. I say this because Express has needed a new major release for at least 7 years. Until these issues are addressed I question how much value there is in discussing a v5 release, let alone v6 or v7. If I had to guess at the root cause, it's that with a community as large as Express', dropping a new major release will involve a significant amount of follow-on work: issues to investigate, PRs to review, middleware projects to help upgrade, etc. But none of the current maintainers have time for that. That makes for a certain lack of interest in actually clicking the "Publish" button. Anyhow, that's my high-level take on this. With that in mind, some suggestions ... 💡 I like the idea of onboarding new team captains and TC members. I'd suggest making this the primary focus for the foreseeable future. This project needs new, fresh faces to help carry it forward. 💡 I like the expectation that team captains and TC members be "active". Do not underestimate how much passive team members can drag a project down simply by doing nothing. I'd make a point of starting the TC meetings with an informal and supportive poll to get a sense of everyone's level of availability and involvement. 😱 Awkward aside: The existing TC has not met for 3+ years. It has allowed v5 to go unpublished for 8-9 years. Is there a definition of "active" that is consistent with this? If not, what does that imply...? 💡 Does the TC have a chairperson? If not, it should. This new "push" is a non-trivial effort. It will require someone designated with spearheading it, and to ensure the new TC and process(es) that get put in place stick. Whoever steps into this role should not do so lightly. It will be a significant amount of work. 💡 Use the v5 release to vet the new team structure, especially when it comes to release and support resources. 💡 I would also look at adding support for sponsorships. Recruiting maintainers is one of the hardest parts of running an open source project. Providing a means of compensation will help with that. My apologies if this ruffles anyone's feathers. If I'm misinformed or ill-informed, please do let me know. It'd be great to see this project move forward and continue to evolve. | @broofa you should consider joining ❤️ 😃 | That is explicitly why the "immediate needs" are what they are!
Getting the right structures in place to operate the project in a more healthy way has nearly always been my goal since I started participating in the project, and I think the support in this issue shows that maybe we are aligned on that goal. We have had no lack of volunteers over the years. What we didn't have was the structure in place to foster them and get them to be strong active contributors. This is what I hope to fix by opening this issue and calling for these things to be addressed. The reason for even including the concrete list of steps for 5/6/7 was to put some clarity behind the fact that that work is required. I don't want to get too in the weeds here, but personally I stepped back because lack of progress caused me to burn out. If we cannot get the governance in place with a truly active group in the near future then I will personally be stepping down from the TC. We have verbiage for moving members from active to inactive in the charter, but it is not enough. This is part of what I meant we needed to address under the "reform the TC with 5 active members". I tend to agree we should have this, but we would likely need to amend the charter to make that happen. I am on board if we want to add that to the agenda. I have opened issues for this in the past in this repo. I agree we should do this. Let's figure out how to slot those into the plan. Not at all!!! The whole point of this was to draw out folks like yourself who had good ideas on how to unstick the project. Thanks for taking the time to write this all up and please think about coming to our meeting to help us achieve these goals. | Of course. If nothing else, it's a chance to put some faces and voices to names I've interacted with numerous times over the years. (No promises about getting involved, though! 😉 ) | I agree with what @wesleytodd said above. We probably need to have a complete, cold handover to others and hope they can get up and running. I just no longer have the time or energy to carry such a large project, just code-wise, let alone anything else. As I mentioned to @wesleytodd, for the past couple of years the endless flow of false vulnerability reports and threats of filing CVEs, and trying to argue with people who many times have never even used JavaScript, just sucked away all my time. If I didn't tend to it, probably every module in express would be security blacklisted for no reason. I am honestly kinda done with all this nonsense Express seems to attract, and of course we now have the endless SPAM PRs on the main repo I cannot get to stop. It really would be awesome for some folks to step in and figure this all out 🙏 | Other than this part, I agree. I think we need to do more than hope, and I am personally volunteering to help make sure we do more than hope. I have time to help, just not the time or will to do it alone or with a group too small to achieve the goals. Luckily I don't think we will lack volunteers if we have the right setup for them to succeed. | Haha, fair. I pretty much only have at best 1 hour a day to work on any OSS anymore, so hopefully that puts into perspective how constrained I have been. As I feel @wesleytodd hinted at, I think the best thing we can do for the project is pretty much an accounting of all the stuff, get everything moved to the foundation accounts and stuff, and I can disappear into the night 😃 because honestly I probably need like a year break from this OSS stuff, like a sabbatical.
Edit: and I don't want my very limited availability to stop whatever momentum comes from all this! | Burnout is REAL, and OSS can create an environment where it is easy to hit. I think we can all say we really, really appreciate all the work you have done to keep things afloat. And now hopefully we can get a group of folks who can help take that off your shoulders (or help you find a better future balance with the project after you take that well deserved break)! | Hey all! Just wanted to throw my hat in the ring here: would love to help contribute where I can. Also want to look at sponsorship, along with how our team could commit cycles on specific efforts! I'm going to bring this up internally with our team tomorrow, but personally I'd love to get started on the CI/CD front. I haven't dug into the repo yet to see what's in place, but I'm a huge fan of Commit -> GH Actions -> Release pipelines as a starting point – and given some of the items above it sounds like that could be of use? I'm not sure I could commit to being a captain or anything, but I'd love to be a workhorse if someone wants to point me in the right direction. Docs, tests, builds, whatever :) | I'd also love to help out; I don't have time to write code, but I'm happy to join the TC or be a repo captain, or anything similar that would help. | Thanks @dwelch2344 and @ljharb! We absolutely need folks in all sorts of capacities to deliver on all the things on the list above, so everything from setting up and maintaining better shared CI/CD workflows for the many repos to helping provide technical leadership and direction for the community will be very welcome! If you haven't yet, check out #161 and add your intent to attend (with your TZ) so we can find a time which works for most people. After that meeting we will likely have to revive a few issues and reach alignment on the way forward before we start pointing folks in a direction for work, but hopefully that shouldn't take long once we get started. | Hey @wesleytodd, thanks for putting this together. I just had the time to take a thorough read and it does look good. I'm happy to contribute in any capacity I can, as we at the Sails Core team 💜 Express. | Throwing my hat in for anything that is needed. I'm open to contributing with active development, DevOps, or simply repo and docs maintenance. There's also a chance I can bring in sponsorship, but I would need to know more about how a sponsor can help before presenting this internally. I would love to see this project moving forward :) | I would love to contribute as well, with a focus on documentation. I can also contribute to the code, so put me to work :) | How does one join said committee? | We have some docs in the express repo about this. One of the things we are going to do in tonight's meeting is refine some of the language and try to make it more clear for the future. I doubt there will be a lot changed in the general process though, so this is the gist:
Where can I volunteer? | @gerardolima We will be spinning up the triage group again soon, which will be the best way to get started. See the docs in the repo for that. Also see my above comments about getting involved. | I would like to contribute 💪🏻💪🏻 | Ok, closing this issue! REALLY want to thank everyone for the great work so far getting this stuff off the ground. Excited to see what we can do with this! | Regarding 'immediate needs': giving commit access and publish rights (in case I understood this correctly) is a dangerous decision, as it could be used for pushing malicious changes (like the xz nightmare). | Thanks for bringing your concerns to our attention, @Zorono! 😊 The immediate needs served as the starting point for this initiative. We wanted to ensure that our focus from the beginning was on re-enabling the TC team and repo captains to properly operate with the expected autonomy (as any project at this scale). Currently, Express is undergoing a security audit with OSTIF (see: issue tracker), as other key libraries in the ecosystem have recently done (see: Audits completed). Additionally, we've established a Security WG to handle all security-related matters (see: current responsibilities). As a direct result of this effort, we recently addressed an open redirect vulnerability (more info) and plan to continue improving the project over time (see: current initiatives). If you're interested in helping the project and contributing, you're more than welcome to participate in the Security WG discussions and initiatives. | Instead of #374 but non-breaking. The Express 5 beta 3 is fully operational and tested by the community, but its release is delayed. The current issue is that it's also outdated compared to v4.18.2, having nested dependencies of older versions. Latest tracking issue: ``` expressjs/discussions#233 ``` Previous one: ``` expressjs/express#5205 ``` Latest proposal: ``` expressjs/discussions#160 ``` Probably it will be ready in March 🤞🏽 (no, it's not) ``` expressjs/express#5111 (comment) ``` wesleytodd commented: Hey Everyone! I hope this issue finds you doing well. It has been a few years since I last posted in the project, but it has been on my mind for a while that I wanted to do this. Thanks to some great help from @sheplu and @UlisesGascon (triage team members and folks active in the larger Node.js ecosystem) we wanted to put forward a plan for the future of the Express project. As we all know, the project has been in more of a maintenance mode for a long time now. Since the 5.x branch has not shipped despite a concrete plan to do so in 2020, we believe that, in order to prevent the ecosystem from having to deal with more drastic measures, we should make an effort to revive the project, starting with a renewed look at the governance to help bring new contributors into the project. Ideally this plan is uncontroversial and can be quickly acted upon. To do that, we thought that we should schedule a TC meeting with at least the folks last listed as active TC members (@dougwilson @LinusU @blakeembrey @crandmck @wesleytodd) and interested community members to discuss and commit to a direction. So below is the plan we worked out to get us back onto a healthy track; after kicking off this discussion here I would like to open an issue to schedule a TC meeting for some time in the next few weeks. ## Plan The plan comes in phases and focuses on direct tactical steps as opposed to strategic goals.
Despite this, most of the tactical goals are backed up by larger strategic goals which we are omitting for brevity. Additionally, there are some to-dos and areas of ambiguity here. Ideally these would be filled in by collaborators as we move along and should not be blockers to agreeing on the general direction and goals. ## Immediate Needs These are things that we propose be done immediately upon approval of this plan. ## Express 5.0 ## Express 6.0 `@express` scope on npm for sub packages ## Express 7.0 All of this is up for discussion; the goal here is more about rallying people together to help progress happen, but we thought coming with a concrete list of items would be more productive, so please feel free to discuss individual points. If we need to we can break the discussion up, but use this issue as a hub. And ideally the first order of business is getting the TC meeting scheduled and Express 5 out the door, so let's not let perfect be the enemy of the good here and commit to starting and keeping the progress flowing.
true
true
true
Hey Everyone! I hope this issue finds you doing well, it has been a few years since I last posted in the project but it has been on my mind for a while that I wanted to do this. Thanks to some grea...
2024-10-12 00:00:00
2024-01-29 00:00:00
https://opengraph.githubassets.com/83709b1c5eff9ecd9122a972fc36d31097530431de65c15b8a218ece0b82c07f/expressjs/discussions/issues/160
object
github.com
GitHub
null
null
19,710,556
https://medium.com/@tombert/rip-joe-armstrong-b6252ff93654
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
20,727,103
http://rephial.org/release/4.2.0
null
null
## Angband Releases: 4.2.0 ### Downloads - Source code - Windows - macOS (10.9+) ### Background 4.2.0 was slated to make big changes to two important parts of the game: classes, and monsters. While each of these was big enough to deserve a minor version increase in its own right, doing both together so they could be balanced against each other was preferred. As with 4.1.0, there was much community discussion about the changes. Version 4.2.0 was released on Saturday the 17th of August 2019. ### Summary of Changes #### Classes Two new realms of magic have been added, nature and shadow, to the existing divine and arcane realms. - Nature realm relies on harnessing the power of the earth, and is opposite to the arcane realm, which uses craft and cunning. - Divine realm uses harmony and light, and is opposite to the shadow realm which aligns with darkness, power and sacrifice. - Rangers now use nature magic, like the new Druid class - Necromancers are the major shadow caster class - Blackguards are fighters enhanced by shadow magic - Pure casters (mages, priests, druids, necromancers) use five books (down from nine) - Hybrid casters (rogues, paladins, rangers, blackguards) use two or three books - Classes no longer have different rates of gaining experience #### Monsters Each base monster type has had its game niche reconsidered, and there has been a general push to use Tolkien-inspired monsters where appropriate. Some of the major changes include: - Ainur - most of these now align with one of the great Valar, and have corresponding powers - Dragons - these have mostly been made stronger and deeper - Humanoids - dark elves have been replaced with dwarves, gnomes with Drúedain - Hydras - each hydra type is like the previous type with a new head and attack added - People - the association of some of these with player classes has been expanded to include the new classes - Snakes - some of these are now dangerous - Spiders - more dangerous and deeper, with unique abilities - Trees/Ents - these are new, tough to kill enemies - Wights/Wraiths - wights are all now shallower than wraiths, and the Ringwraiths are much deeper #### Game Mechanics In the course of the class and monster changes, many new mechanics have been introduced, with some of the major ones being: - Player and monster shapechanges - Variable light and darkness are created by monsters, player gear and player spells, and the player has a meter to measure the light level on their square - Mages can drain mana from magic devices - Rogues can steal from monsters - Temporary brands and slays (Priests, Paladins, Blackguards) - Necromancers can see in the dark, and can cast less well in light - Healing of hitpoints every turn for a time (Druids) - Control of a monster, including making it attack other monsters (Necromancers) - Shield bashes (Warriors, Paladins, Blackguards) - Decoys which monsters will attack instead of the player (Rangers) - Blackguards can go into a state of bloodlust which improves their combat at a price - Monsters can whip or spit on the player from nearby - Monster groups act more as a group, with leaders and sometimes bodyguards - Innate spells (e.g. breaths) and cast spells (e.g. 
bolts) have separate frequencies - Deeper in the dungeon shallow monsters are less likely to appear - Monsters can teleport to the player - Spiders can weave webs - Monsters can have innate darkness around them - Traps added that trigger on leaving their square - Ringwraiths can inflict the Black Breath - Monster spells can have different levels depending on monster spell power - The player now has a hunger meter, and food is more important - As with classes, all races gain experience at the same rate, except for humans who are faster, and high-elves who are slower - The Temple has been replaced with a Bookseller who sells books of all realms #### Code Improvements There have also been some improvements to the codebase, both visible and invisible to the player. The main ones are: - Addition of an SDL2 front end - Change to mostly using a single grid rather than x and y coordinates in much of the code - More readability of datafiles - Some font improvements - Some new tiles - A new online (rather than in-game) help system - A new experimental (and so far unbalanced) birth option for percentage damage - An option to start a new game after dying instead of closing the game and re-opening - Many fixes to bugs and memory leaks
true
true
true
null
2024-10-12 00:00:00
2019-08-17 00:00:00
null
null
null
Angband Releases: 4.2.0
null
null
24,363,074
https://jatan.space/exploring-moon-mountains/
Exploring the marvel that are mountains on the Moon
Jatan Mehta
# Exploring the marvel that are mountains on the Moon The near-instantly formed lunar mountains offer a peek into the Moon's interior, and improve our understanding of the solar system. Unlike the millions of years it takes for most mountains on Earth to form, lunar mountains crop out near-instantly, geologically speaking. Earth’s mountains primarily form when two colliding plates of the Earth’s crust lift up volumes of rock, slowly creating an elevated landform. Over millions of years, wind, water and gravity erode these uplifted sections, wearing their surface to make the mountains we are familiar with today. But the Moon has no plate tectonics, atmosphere or running water. How then does it boast mountains several kilometers tall? For instance, Zeeman Mons on the Moon’s farside peaks as high as Mount Everest. The answer lies in the one of the Moon’s most apparent features—craters. Most lunar craters are small and bowl-shaped, formed when asteroids and comets impacted its surface. This shape persists for crater sizes up to about 20 kilometers but larger craters display variety. Big asteroids and comets with high velocities can impart a tremendous force on the Moon’s surface. They will not just make a crater but compress the surface in and around the impact point enough to melt the crust. When the melted crust can’t be compressed any further, it bounces back and forms a central mountain upon cooling. This process is visualized in the GIF below. Most mountains on the Moon are formed by this highly energetic process that takes a geologically negligible passage of time. The kilometer-plus high central peaks of the city-sized Aristarchus crater and 86-kilometer-wide Tycho crater stand tall as fine examples. Aristarchus crater was one of the candidate landing sites for the now-cancelled Apollo 17+ missions. Visiting Aristarchus or Tycho in a future mission will allow us to study the exposed Moon’s interior by the virtue of their central mountains. For even larger craters or higher velocity impacts, the mechanics work such that the newly formed central peak splits into two before it can solidify. The 93-kilometer-wide Copernicus and 77-kilometer-wide King craters respectively host two distinct peaks, each towering more than six kilometers! Apart from allowing scientists to study the Moon’s interior, visiting such places would be key to understanding exactly how impacts cratering takes place, not just on the Moon but across the solar system. ## Put a ring on it For even larger craters, the twin peaks widen to form a ring of mountains, like a liquid drop causing a ripple on still water. The 312-kilometer-wide Schrödinger crater on the Moon’s farside is a well preserved example, despite being almost four billion years old. A mission to Schrödinger, such as the one NASA announced for 2024, can help solve fundamental mysteries about the Moon’s evolution, like if the Moon indeed was once fully covered in an ocean of magma—a hypothesis tied to its origin. Further, Schrödinger lies within the Moon’s largest impact crater, the 2,500-kilometer-wide South Pole-Aitken basin. The impact that created the basin excavated material from deep into the lunar crust, and perhaps even the mantle. Since Schrödinger formed later, its impact could’ve penetrated deeper and uplifted even more material, offering insights into the lunar interior. The Chicxulub crater on Earth, linked to the extinction of dinosaurs, is also thought to have formed as a ringed crater but has now worn down by Earth’s active weathering. 
As such, visiting the Schrödinger crater is our chance to better understand Chicxulub. For craters larger than 500 kilometers, you get not one but multiple mountain rings. The 930-kilometer-wide ancient Orientale crater on the Moon’s farside boasts three mountain rings, most of which is preserved. Missions to both Schrödinger and Orientale can tell us exactly when large asteroids and comets excessively bombard bodies in the solar system. This period of blistering impacts is particularly important as Earth is thought to have gotten its water, and possibly life-critical organics, from asteroids and/or comets during this time. For some ancient craters, like Imbrium on Moon’s nearside, only parts of its outermost mountain ring are visible today. The basin’s interior has been drowned in lava, which you now see as dark regions on the Moon after the lava solidified. The prominent, arc-shaped mountain range of Montes Apenninus on Imbrium’s southeast border stretches 600 kilometers long. Multi-ring impact basins exist on many other worlds in the solar system, like Caloris on Mercury, an unnamed basin on Jupiter’s moon Ganymede, Evander on Saturn’s moon Dione, and more. Jupiter’s moon Callisto boasts the largest multi-ring basin in the solar system, called Valhalla, spanning 3800 kilometers wide. The ubiquity of mountains formed by impacts across the solar system and their consistent patterns indicate common geological mechanisms at play. The Moon being so close to us presents an opportunity to study these fundamental processes in planetary science in great, testable detail. ## Exploring the mountains Lunar orbiters use remote sensing techniques to understand the composition of the lunar mountains. But to better understand their composition, structure and origin, surface missions are needed, especially sample return ones so as to determine precise ages. To that end, NASA had selected several of the above mentioned places as candidate landing sites for the now cancelled Constellation program to return humans to the Moon. However, sending landing and roving missions to lunar mountains is a bit of an engineering hurdle. Most surface missions thus far have landed in the dark lunar plains—vast, solidified lava regions that provide a relatively uniform surface for spacecraft to land on. The rocky nature of the mountainous regions make it more difficult to safely touch down on. This may change with NASA’s upcoming Artemis missions. The Artemis program aims to explore the Moon’s poles in this decade, both robotically and with humans. The precision landing technologies required to touch down safely on the challenging polar terrain also enables missions to the lunar mountains. Mountains on the Moon are a marvel that give us a peak (pun intended) into the lunar interior, help discern the chain of events in the solar system’s evolution, and improve our understanding of the physical processes that shape airless worlds everywhere. *Originally published in 2020, updated in 2021 to include context from NASA’s Artemis Moon exploration program. Republished by The Wire Science.* **→ Browse the Blog | About | Donate ♡**
true
true
true
The near-instantly formed lunar mountains offer a peek into the Moon's interior, and improve our understanding of the solar system.
2024-10-12 00:00:00
2021-06-25 00:00:00
https://jatan.space/cont…moon-cover-2.jpg
article
jatan.space
Jatan’s Space
null
null
1,198,374
http://carlodaffara.conecta.it/?p=393
carlodaffara.conecta.it
null
## Private/public cloud costs: a summary table

Posted by cdaffara in divertissements on August 29th, 2012

I had the great pleasure to converse on twitter with the exceptional Simon Wardley, a longtime expert and researcher on company innovation, evolution and… cloud computing. Within the limit of 140 characters, it was quite difficult to convey any sensible concept in such a short space. One of the interesting things is that it is difficult to provide a sensible comparison between private and public clouds in the absence of real numbers. So, since we just finished a research report for a major UK public authority, I will just add my own 2 eurocents and present a summary table of some examples of private, public and dedicated cluster costs:

| System | $/Core-hour |
| --- | --- |
| Hopper [19] | $0.018 |
| Montero-Llorente [31] | $0.04 |
| Magellan (overall) [19] | $0.04 |
| Class 1 server/workstation [7] | $0.046 |
| Cornell RedCloud [53] | $0.058 |
| Our estimate | $0.06 |
| Amazon cc1.4xl, resv. instance | $0.062 |
| Amazon cc1.4xl | $0.093 |
| CINN [7] | $0.1 |

This is of course just a snippet of more than 40 pages; cost includes management and amortization over 3 years for hardware, 5 years for infrastructure. Our own estimate is for a self-managed, self-assembled system with no best practices, while Magellan is a realistic estimate of a median cost for a well-managed and well-procured infrastructure. Hopper is a custom cluster built out of commodity hardware and can be considered the best approachable price point for 2011 in terms of cost/core for a private cloud as well. In the paper (which I hope will be published soon) there will be additional details on the actual model, the estimates and the sources for the data. Hope it may be useful for someone.

## The economic value of Open Source software

Posted by cdaffara in OSS business models, OSS data on July 23rd, 2012

(this is a repost of the original article with some corrections. Includes the Oxford TransferSummit 2011 presentation based on this data) What is the real value that Open Source has brought to the economy? This is not a peregrine question. Since most of the current evaluation methods are based on assessing "sales", that is direct monetization of OSS, we are currently missing from this view the large, mostly under-reported and underestimated aspect of open source use that is not "sold", but for example is directly introduced through an internal work force, or in services, or embedded inside an infrastructure. Summary: OSS provides cost reductions and increases in efficiency of **at least 116B€**, 31% of the software and services market. Getting this data is, however, not easy. There is an easy approach, called the "substitution principle", that basically tries to measure how much a collection of hard-to-measure assets is valued by counting the sum of the money necessary to substitute them; for example, counting the value of all the Apache web servers by adding the cost of changing them all with an average, marketed substitute. This approach *does not work*, for two reasons: first of all it introduces a whole world of uncertainty, given the fact that software is never perfectly exchangeable with an alternative. The second is related to the fact that users may be unwilling to pay for an alternative, so from that point of view the real value is much lower.
This is, by the way, the (erroneous) way that the RIAA and other rights organizations measure piracy losses: by counting how many times the copy of a film is downloaded, and assuming that all the people that downloaded it would have paid for a full cinema ticket if piracy did not exist. It is obviously wrong – and would be equally wrong if we applied the same principle. Another approach is to measure the revenues of companies that are adopting an OSS-based business model, something that we have extensively studied in the context of the FLOSSMETRICS project. The problem with this approach is that it totally ignores the work that is performed without a monetary compensation, and it under-reports the software that is distributed widely from a single source (for example, the open source code that is embedded in phones or routers). A purely monetary measurement also ignores inherent improvements in value that can derive from an improved technology. Let's make an example: let's imagine that in the world, all television sets are black and white only, and only recently a new and improved television set can provide color. The new TV set costs quite a lot more than the old B&W ones, so if we imagine that all the current TV viewers want to move to color, the TV set provider can obtain a total amount of money that is the product of the cost of the new TV set multiplied by the number of viewers. The company is happy. Now, let's imagine that a magic signal allows the old TV sets to show color images. The company that produces the color TV sets is clearly unhappy, since its value dropped instantly to zero, but on the other hand all the people with B&W TV sets are happy, even if there is no monetary transaction; the *user value* increased substantially. We need to capture this value as well, since a substantial amount of this economic value is hidden in the user balance sheets. This means that we need to find a different, alternative way to measure OSS value: enter macroeconomics! We can start from the overall economic value of IT in general. There is one thing that we know for sure: the total economic value of a country or a region like Europe – 12.3T€ (trillions of euros). We also know the average IT expenditure of companies and Public Administrations, which is 4% (source: Gartner IT key metrics data, EU eBusiness-Watch) with wide variations (small companies: around 7%, going up with size up to the average for Fortune 500: 3%). This spending includes services, employees, hardware, software – everything. This means that the overall IT spending in Europe is approximately 492B€, of which 24% is hardware (source: Assinform, Gartner, IDC) – which means that the software and services market is valued at 374B€. (Estimates from Forrester are in the same range, so we are at least consistent with the big analyst firms) Still with me? Good! Now, the next step is estimating the savings that are directly imputable to open source. We have two sources: an internal source (code replaced by OSS) and an external one (savings reported by IT personnel through use of OSS). Let's start with savings from OSS adoption, which can be estimated (using data from Infoworld and our data from COSPA) at 15% for "light" adopters (less than 25 OSS products used) to 29% for "heavy" adopters (more than 25 OSS products), up to the 75% of specific cases (reported by Gartner for maintenance and licensing).
Taking into account the share of use of OSS in general and the variation in use of OSS among different sizes, we can estimate that the savings directly introduced by OSS amount to 41B€ – those *do not appear anywhere but in the adopters' balance sheets*, that is in a reduction of IT expenses, or a better result for the same IT expenditure (think about the TV set example outlined before). And now, software development. It may sound strange, but only a small part of software is ever developed for the market – what is called "shrinkwrapped". The majority of software is developed (internally or through external companies) for a specific internal need, and is never turned into an external product. In fact, when we consider the "service" part of the non-hardware IT market, we discover that nearly half of that value is actually sponsored software development, and the remaining 35% is non-software services (support, training, ancillary activities). This means that in Europe, 244B€ are software spending in one form or the other (for example, employee wages). What can we say about this software? We know that a part of it is Open Source, because the majority of developers (69%, according to Evans Data) is using open source components within their code. We also know, thanks to Veracode, that "sampling … find that between 30 and 70% of code submitted as Internally Developed is identifiably from third-parties, most often in the form of Open Source components and Commercial shared libraries and components". In our own database, we found out that the role of commercial shared libraries is hugely dependent on application type and vertical sector, and it falls consistently between 15% and 30% of the code not developed from scratch. Using a very conservative model, we can thus estimate that 35% of the code that is developed overall is based on Open Source, and this means that there is both a saving (software that is reused without having to redevelop it) and a cost, introduced by the need for adaptation and the "volatility cost" – that is, the risk introduced by using something developed outside. Thankfully, we already have quite a lot of information about these costs, thanks to the effort of the software engineering community; some details can be found here for those that really, really want to be put to sleep. Applying the software engineering costs detailed in my previous article (volatility, increased cost for code re-factoring, glue code development) we can estimate that the savings introduced by OSS are, in a very conservative way, 31% of the software-related part of the IT ecosystem, that is 75B€. The real value is higher, mainly because reused OSS code tends to be of higher quality when compared with equivalent proprietary code (data and academic references available here) but I will leave this kind of evaluation for a future article. We can however say, with quite a good level of certainty, that **the lower bound of savings that OSS brings to the European economy is at least 116B€** – the majority of which does not appear in the "market" and only in a minimal part in the balance sheets of OSS companies (consider that only Red Hat is now approaching 1B$ in revenues).
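To make the chain of estimates above easier to check, here is a rough back-of-the-envelope sketch of the same arithmetic. It is purely an illustration, not part of the original study: the figures and percentages are the ones quoted in the article, the 41B€ adoption-savings figure is taken as given (the underlying weighting data is not reproduced here), and the variable names and rounding are mine.

```python
# Back-of-the-envelope check of the estimates quoted above.
# All figures are in billions of euros; percentages are the ones cited in the article.

eu_economy = 12_300        # ~12.3T€: total economic value of Europe
it_share = 0.04            # average IT expenditure (~4%)
hardware_share = 0.24      # hardware portion of IT spending

it_spending = eu_economy * it_share                         # ~492 B€
software_and_services = it_spending * (1 - hardware_share)  # ~374 B€

# Savings reported by OSS adopters (15%-29% depending on intensity of use),
# weighted by the estimated share of OSS use; the article's figure is ~41 B€.
adoption_savings = 41.0

# ~65% of the non-hardware market is software in some form (the remaining
# 35% is non-software services such as support and training).
software_spending = software_and_services * 0.65            # ~244 B€

# ~35% of developed code reuses OSS; after adaptation and "volatility" costs
# the article lands on a net saving of ~31% of software spending.
reuse_savings = software_spending * 0.31                    # ~75 B€

total = adoption_savings + reuse_savings                    # ~116 B€
print(f"IT spending:              {it_spending:6.0f} B€")
print(f"Software and services:    {software_and_services:6.0f} B€")
print(f"Software spending:        {software_spending:6.0f} B€")
print(f"OSS savings, lower bound: {total:6.0f} B€")
```

Running the sketch reproduces the intermediate figures (≈492B€ of IT spending, ≈374B€ of software and services, ≈244B€ of software spending) and the ≈116B€ lower bound quoted above.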
It is savings and increased efficiency of companies and Administrations that use OSS, something that was already discovered: "Finally, comparing the individual data on firms with turnover of less than 500,000 euros with the variable on size classes of customers (by number of employees), one can hypothesize a correlation between the use of software Open Source and the ability to attract customers of relatively larger scale. At the same turnover, in other words, companies "Open Source only" seem to have more chances to obtain work orders from companies with more than 50 employees (ie medium – large compared to our universe of reference)." (source: Venice study on Open Source) or the fact that the revenue-per-employee ratio is higher in companies that adopt open source software (on average by industry, OSS-using companies have a revenue-per-employee that is 221% of the non-OSS controls). It is also important to recognize that this is only a measure of the *direct* impact of OSS. The reality is that software has a substantial impact on revenues; an example I found is Siemens, which with 70B€ in revenues spends 5% on software that drives *50% of its revenues*. A similar impact can be expected of the savings introduced by OSS – something that we will talk about in a future post.

## A new EveryDesk is out!

We were particularly happy about our work on EveryDesk – a portable, fully working live Linux installation on a USB disk. But we found out that more and more people were looking for more space, a more modern environment, and in general to refresh things. We have been busy with our other pet project – CloudWeavers, a private cloud toolkit, and we redesigned EveryDesk to be the ideal client environment for companies and administrations that are moving totally or partially to a private or public cloud. We took several ideas from ChromeOS, but frankly speaking the hardware support was extremely limited, and even with exceptional ports like Hexxeh's "Lime" the user experience is still less than optimal. We have basically redesigned everything – the base operating system is now derived from OpenSuse (mainly thanks to the excellent package management tool, which drastically increases the probability that the system will continue to work after an update – a welcome change from Ubuntu), we integrate Gnome 3, the latest Firefox and Chromium on a BTRFS install that supports compression and error concealment, so it works properly even on low-cost USB devices. On an 8Gb USB key, you get 4Gb free, and all the apps at your disposal, ready to go. The only major change in hardware support is the fact that EveryDesk is now a 64-bit only operating system, but we believe that despite the limitation it can still be useful at large. It integrates some components that are maybe less interesting for individual use – for example the XtreemFS file system, which can be used to turn individual PCs into scale-out storage servers in a totally transparent way, and with great performance, or many virtualization enhancements. On the user side, we already installed some of our favorite additions among fonts, software, and tools; Firefox uses by default the exceptional Pdf.js embedded viewer, which uses no separate plugins and is faster than Adobe Acrobat, and there is the usual assortment of media codecs and ancillary little things. We love every moment that we work on this project, and I would like to thank the many people that helped us, sent criticisms and praises.
One wrote “I can’t believe how well it works, without time lags I normally associate with running on a CD or a thumb” and I can’t thank our users enough – they are our real value. As usual, you can download EveryDesk from Sourceforge.
true
true
true
Open source software-based business models research
2024-10-12 00:00:00
2012-03-27 00:00:00
null
null
null
carlodaffara.conecta.it
null
null
19,475,334
https://interestingengineering.com/boeing-receives-first-public-order-cancellation-request-for-737-max-8
Boeing Receives First Public Order Cancellation Request for 737 MAX 8
John Loeffler
US aircraft manufacturer Boeing has received its first public order cancellation request from an airline for new 737 MAX 8s as the aircraft model remains grounded throughout much of the world.

## PT Garuda Indonesia Requests Cancellation of Outstanding 737 MAX 8 Orders

PT Garuda Indonesia (PTGI), the country's aviation flag carrier, publicly acknowledged this week that the airline sent Boeing a letter last week asking them to cancel the airline's open orders for the company's 737 MAX 8 aircraft. "Passengers always ask what type of plane they will fly as they have lost trust and confidence in the Max 8 jet," Ikhsan Rosan, a spokesperson for the airline, told The Associated Press. "This would harm our business." In 2014, PTGI ordered 50 737 MAX 8 aircraft, with one having been delivered already. The request from PTGI is the first public order cancellation request that has been reported following the international grounding of the aircraft last week. The 49 outstanding deliveries of the aircraft from Boeing are estimated to be valued at $4.9 billion in total, a massive potential loss for the US-based aircraft manufacturer.

## More Cancellation Requests Expected if the Plane's Grounding Drags On

After countries around the world started grounding the plane within their borders and denying overflights of their airspace, Boeing announced that it would put any upcoming deliveries of the 737 MAX 8 on hold, though it would continue to build new planes as normal. The plane was effectively grounded around the world last week after US President Donald Trump ordered the 737 MAX 8s out of service until the similarities between the two crashes of the aircraft were fully investigated. Currently, more than 5,000 737 MAX 8s have been ordered from the company. With each plane costing around $100 million, the total value of the outstanding orders for the 737 MAX 8 is likely north of half a trillion dollars over the next 5-10 years. The cancellation of 49 planes doesn't in any way directly threaten this revenue stream, but it remains to be seen if this is an isolated concern or the beginning of a larger trend of cancellations from other airlines that have no use for airplanes that cannot fly. "We think other cancellations may follow as global customers remain spooked after two crashes with seemingly similar causes," said Jim Corridore, an airline analyst with CFRA Research. He added that if Boeing can push out the software update to fix the autopilot's anti-stall function that appears linked to the two crashes, it would go a long way to restoring airlines' confidence in the aircraft they have on order with the company.
true
true
true
PT Garuda Indonesia announced this week that they sent a letter to Boeing requesting that the aircraft maker cancel their outstanding orders for 737 MAX 8 aircraft.
2024-10-12 00:00:00
2019-03-24 00:00:00
https://images.interesti…Boeing_max_8.jpg
article
interestingengineering.com
Interesting Engineering
null
null
8,785,094
https://nplusonemag.com/online-only/online-only/the-folly-of-mars/
The Folly of Mars
Ken Kalfus
This week the planet Mars will be low in the western sky for an hour or two after sunset. It’s visible only as a pale, ruddy light, hardly distinguishable from the few stars that start blinking on in the night sky as the dusk deepens. You may notice, however, that its tiny flame burns steadily, more steadily than that of any impossibly distant, impossible-to-touch star. Once you find it, you may not want to look away. For more than a century now, the fourth planet from the sun has drawn intense interest from those of us on the third. We viewed it, first, as a place where life and intelligence might flourish. The mistaken identification of artificial water channels on its surface in the late 19th century seemed to prove that they did. More recently, terrestrials have gazed at the arid, cratered, wind-swept landscape and seen a world worth traveling to. With increasingly intense longing, we’ve now begun to think of it as a newfound land that men and women can settle and colonize. It’s the only planet in the solar system—rocky, almost temperate, and relatively close—where something like that can be conceived of as remotely plausible. Since the last moonwalk, in 1972, Mars has drawn the fitful attention of American presidents and blue-ribbon commissions. As the Apollo program was winding down, Richard Nixon declared, “We will eventually send men to explore the planet Mars.” During the Reagan Administration, the National Commission on Space, chartered by Congress, proposed actual dates: a return to the moon by 2005 and a landing on Mars by 2015. President George H. W. Bush declared “a new age of exploration with not only a goal but also a timetable: I believe that before Apollo celebrates the fiftieth anniversary of its landing on the Moon, the American flag should be planted on Mars.” That would have been in 2019. George W. Bush came up with yet another set of dates. He renewed the call to extend “human presence across the solar system, starting with a human return to the Moon by the year 2020, in preparation for human exploration of Mars and other destinations.” In 2010, Barack Obama rejected Bush’s plan to go back to the moon, canceling the Constellation program and the heavy-lift Ares rockets that would have taken astronauts there. In his reboot of the human spaceflight program, Obama said, “we will start by sending astronauts to an asteroid for the first time in history,” by 2025. He added that “by the mid-2030s, I believe we can send humans to orbit Mars and return them safely to Earth. And a landing on Mars will follow. And I expect to be around to see it.” A half-century after the conclusion of the Apollo mission, we have entered a new age of space fantasy—one with Mars as its ruling hallucination. Once again stirring goals have been set, determined timetables have been laid down, and artist’s renderings of futuristic spacecraft have been issued. The latest NASA Authorization Act projects Mars as the destination for its human spaceflight program. Last month’s successful test flight of the Orion space vehicle was called by NASA Administrator Charles Bolden “another extraordinary milestone toward a human journey to Mars.” The space agency’s officials regularly justify the development of new rockets, like the Space Launch System, as crucial to an eventual Mars mission. But human beings won’t be going to Mars anytime soon, if ever. 
In June, a congressionally commissioned report by the National Research Council, an arm of the National Academy of Sciences and the National Academy of Engineering, punctured any hope that with its current and anticipated level of funding NASA will get human beings anywhere within the vicinity of the red planet. To continue on a course for Mars without a sustained increase in the budget, the report said, “is to invite failure, disillusionment, and the loss of the longstanding international perception that human spaceflight is something the United States does best.” The new report warns against making dates with Mars we cannot keep. It endorses a human mission to the red planet, but only mildly and without setting a firm timetable. Its “pathways” approach comprises intermediate missions, such as a return to the moon or a visit to an asteroid. No intermediate mission would be embarked upon without a budgetary commitment to complete it; each step would lead to the next. Each could conclude the human exploration of space if future Congresses and presidential administrations decide the technical and budgetary challenges for a flight to Mars are too steep. The technical and budgetary challenges are very steep. A reader contemplating them may reasonably wonder if it’s worth sending people to Mars at all. The panel’s report, *Pathways to Exploration: Rationales and Approaches to a U.S. Program of Human Space Exploration*, reminds us why human travel to Mars will be a difficult proposition, several orders of magnitude more difficult than the Apollo missions, and much more expensive. Mars in its typical approach to Earth is about two hundred times farther than the moon. The shortest round-trip journey would take about nine hundred days. Mars’ gravitational pull, about one third as strong as the Earth’s, is twice that of the Moon, making the descent to its surface a lot more complicated than the graceful hovering and settling down performed by Apollo’s spindly lunar “bug.” And then astronauts would need a correspondingly more powerful fuel system to lift them off the planet for the flight home. The report identifies two show-stopping challenges whose solutions so far are either “unknown or unattainable with current technology”: the high-radiation space environment through which the astronauts will have to travel and the design of the Mars landing craft. It also names six critical areas where “no relevant systems exist or have existed at the appropriate scale,” including a life-support system capable of functioning for years without regular supplies from Earth. Much of the equipment the astronauts will need for their survival will have to be robotically pre-positioned at the landing site, which itself presents challenges in precision landing and remote operation that are unprecedented. Some of that robotic equipment may be tasked to produce rocket fuel and other consumables from the martian atmosphere and the raw materials it finds on site—if they’re there. The committee doesn’t put a definite price tag on the Mars mission. The final cost would depend on which pathways we chose, how quickly we wanted to get there, and what solutions we found for the technical challenges—but the ballpark figure is in the hundreds of billions. Citing polls that reflect the public’s lukewarm interest in the space program, the panel doubts that this could be raised in the current or anticipated political environment. 
When we think of space, we’re inspired mostly by our deepest hopes for companionship, which would be realized by contact with an alien intelligence, and by our deepest fears about the future of humanity. Now that the Earth’s inevitable uninhabitability has become a staple of popular culture, we seem to have acquired the expectation that our species will save itself by moving to other planets. *Pathways to Exploration* notes the argument that the space program should emphasize colonization and the ultimate expansion of our species throughout the solar system, so that, eventually, the species may develop the capability to “outlive [the] human presence on Earth.” Christopher Nolan’s new blockbuster *Interstellar* depends on this premise too, with the tagline: “Mankind Was Born On Earth. It Was Never Meant To Die Here.” The romance of outer space as refuge is amplified in Apollo 11 astronaut Buzz Aldrin’s 2013 manifesto *Mission to Mars: My Vision for Space Exploration*, in which he appeals for a new national Apollo-like effort to make Mars “our future second home.” The book—bearing endorsements by Neil deGrasse Tyson and Stephen Hawking—is part self-aggrandizing memoir, part crazy self-promotion, and part windy speculation about the technologies that would make travel to Mars possible. Aldrin appropriates an established concept, the orbital cycler (a spaceship that would remain in permanent orbit between two planets, ferrying settlers), and brands it the “Aldrin cycler.” He also reaches for the kind of empty rhetoric that the panel tries to avoid when it examines the space program’s motivations: he asks, “What does human spaceflight do for America? First of all, it reminds the American people that nothing is impossible if free people work together to accomplish great things.” Aldrin calls for the landing of astronauts on the planet by 2035, a date chosen not out of any consideration of the technological hurdles that would have to be overcome or the funds that would need to be raised: rather, what is significant about 2035 is that it’s “66 years after Neil Armstrong and I flew the quarter-million miles through the blackness of space to touch down onto Tranquillity Base [in 1969]. . . . 66 years after the Wright brothers’ first flight [in 1903].” He would like to have the program announced in 2019, because it will be fifty years after the Apollo landing. Space enthusiasts cherish a kabbalistic belief in the importance of anniversaries and in the magical power of certain dates to inspire the nation or the world. Similarly, the glow of warm feeling conjured by memories of the first moonwalk—and the supposedly forward-thinking era that produced it—motivates contemporary encomia to a potential Mars mission. Aldrin declares that “Apollo 11 symbolized the ability of the nation to conceive a truly pathbreaking idea, prioritize it, create the technology to advance the idea and then ride it to completion. Apollo is a case where we got it right. If we are to resurrect the profound feeling of participation that accompanied Apollo, we will need a Kennedy-like commitment to human exploration.” The nostalgia for the Apollo-era NASA—which at its peak was allowed to consume nearly 4 percent of the federal budget—ignores the reaction against the space program that it engendered. The “profound feeling of participation” in the space program was not as national as Aldrin recalls. 
The counterculture of the 1960s that became the dominant culture of the 1970s was largely distinguished by its distrust of science and technology and its indifference to the space program. Many space enthusiasts have long since given up on the idea that NASA will ever regain its leadership in human spaceflight, and they hate talk of fiscal year budgets, intermediate paths, and long test flight programs. They just want to get to Mars, and they think they can do it independently, looking to find fresh thinking and mission-saving economies in private enterprise. Many proponents contend that any Mars trip should be one-way, for the purpose of establishing a permanent colony, eliminating the need for ascent and return vehicles. The Mars astronauts would have to live out the rest of their lives on the red planet, within a strictly controlled artificial environment. Dying on a distant planet holds more appeal than you may think. One such project, the Netherlands-based non-profit Mars One, has drawn credulous press with its promise to land people on Mars in the next decade: Mars One has produced a drawing of its landing craft, but it hasn’t identified the vehicle that’s supposed to transport the settlers. This hasn’t stopped the company from beginning its astronaut selection process. Seventy-eight thousand people have applied for the one-way trip, and the company raised more than $300,000 in its initial crowdfunding campaign. Mars One implausibly claims it can put the first men and women on Mars for $6 billion, a fraction of what it cost to send astronauts to the moon. It says it will raise the rest of the money through the sale of television rights to the *Survivor*-style selection process and the flight to follow. Elon Musk, the inventor of PayPal and the head of Tesla motors, has already put his money into a Mars mission that he says can be launched by the end of the 2020s. Other billionaires, like Dennis Tito and Richard Branson, are pursuing their own Mars projects. Aldrin himself has called for public access to space, including Mars, to be financed by lottery, with the prizes being seats on the next generation of space vehicles. But whether they’re selected by lottery or a reality TV show or, as in Musk’s case, they pay their own way ($500,000, he says), astronauts will still face serious radiation hazards, will still need to breathe and be fed, and will still have to land a multi-ton spacecraft onto a hard, cratered, rock-strewn landscape. The urge to explore may be deeply engrained in human psychology, but space travel is a dream generated primarily by 20th-century science fiction and given form and durability, remarkably, by a single story-cycle within the genre. The panel observes, “Many space scientists, space enthusiasts, and the general public frequently cite aspects of the Star Trek franchise as their aspirational vision for the kind of future that should be pursued through human spaceflight.” (A footnote helpfully explains that Star Trek is “the well-known TV series in which humans explore on a galactic scale.”) *Star Trek *and *Star Wars*, like most contemporary science fiction set in space, depend on travel in ship-like vessels, with captains and first mates, ports of call in strange places, ship-to-ship encounters far from the sight of land, and travel times measurable in hours or days. 
The governing metaphor, in our thinking about outer space, is that it extends across the universe like a terrestrial ocean, dangerous but traversable, islanded with many inhabited or inhabitable planets—a notion that disregards realities about vast interplanetary distances in favor of pleasing, elevating, self-mythologizing fantasies. The powerful, long-lived maritime analogy leads us further astray when it compares astronauts to the great European navigators who sailed to the Americas in the 15th and 16th centuries and to the colonists who followed them. In 1907, a writer of popular science, Garrett P. Serviss, was among the first to employ this analogy, in his novel about an “inter-atomic”-powered journey to Venus, *A Columbus of Space*. President Kennedy, noting that “this country was conquered by those who moved forward—and so will space,” invoked the Plymouth Bay Colony. In declaring his own space initiative in 1989, George H. W. Bush also recalled Columbus and then the settlers blazing the Oregon Trail. In fact, North America may have been new to the Europeans, but human beings had been living there for thousands of years, and the land provided every necessity of life. To consider pre-Columbian America equivalent to the Moon and to the uninhabited planets is to deny the precedence of its indigenous people and the wealth of the natural world in which they lived. And while Mars enthusiasts project that someday they’ll be able to turn the raw materials of the red planet, including vast quantities of water that haven’t yet been located, into vital resources, the technology for doing so hasn’t been developed, and the means of transporting it there hasn’t been either. Mars is a desolate, radiation-seared, frigid, dim (receiving less than half the sunlight received by the Earth), probably lifeless world. Some of recent history’s greatest thinkers, even when they don’t evoke the European settlement of the Americas, share this sloppy, unrealistic vision of human beings living off the planet. Carl Sagan, writing expansively in *A Pale Blue Dot: A Vision of the Human Future in Space*, declared in 1994 that, “Every surviving civilization is obliged to become spacefaring—not because of exploratory or romantic zeal, but for the most practical reason imaginable: staying alive. . . . The more of us beyond the Earth, the greater the diversity of worlds we inhabit . . . the safer the human species will be.” Stephen Hawking, speaking in 2013, added, “We must continue to go into space for humanity. . . . We won’t survive another 1,000 years without escaping our fragile planet.” But it’s not the planet Earth that’s fragile—it’s the human organism that’s extraordinarily delicate and needy, unable to survive beyond very narrow physical limits, the conditions for which exist naturally nowhere else in our solar system. To keep even a few people alive in space or on another planet requires from those left behind the expenditure of enormous resources. It’s not necessarily backwards-thinking, or anti-technology, or anti-exploration, to wonder if those resources could be better employed. The fantasy of a future new life for the species allows us to shrug off climate change and other global challenges with the thought that if we fail to make this planet livable for the billions of people who inhabit it, another is promised for us somewhere else. 
A belief in life extending beyond the Day of Judgment, after this compromised, faithless, corrupt, and undeserving world has been destroyed, is another deeply engrained theme in human history. It has inspired not a few religions over the course of civilization, often distracting us from our earthly cares and griefs. Millennialist or afterlife doctrines bind us to the leaders who promote them and elicit tremendous sacrifices on their behalf. The promise of rebirth in a fresh landscape is also held out by extraterrestrial settlement. As with other kinds of transcendence, the actual mechanism of conveyance to the better world poses a daunting technical problem. Meanwhile, outer space remains a mysterious realm of compelling questions. Their answers may not have practical benefits, but they continue to enrich and enliven the human experience, inviting us to contemplate our place in the cosmos. Beyond the search for alien life, we’d like to know what the universe is made of, especially that pesky dark matter stuff that makes up 85 percent of its mass. We want to determine the large-scale structure of the universe, its history, its future, and its mind-bending variety of natural objects, including the thousands of planets that have been discovered orbiting other stars in just the last twenty years. Fortunately, marvelously, devices built by men and women and under the control of men and women continue to orbit the Earth and fan out across the solar system. Last week’s report that the Curiosity rover has discovered what could be microbially-produced methane reminds us that Mars is an alluring planet—whose secrets can be disclosed by robotic instruments. The New Horizons spacecraft, launched in 2006, will reach Pluto next July, achieve the first reconnaissance of the dwarf planet, and fly on to other icy Kuiper Belt Objects in the 2020s and 2030s. After traveling beyond Mars in 2012, NASA’s Juno probe returned last year to the vicinity of Earth for a gravity assist, and it’s now on its accelerated way to enter orbit around Jupiter on July 4, 2016. We live in a golden age of exploration, conducted by heroic navigators who sit at computer consoles, pursuing 21st century virtues like intellect, ingenuity and teamwork. NASA’s effort to develop a crewed Mars mission, without the reasonable expectation of long-term funding, compromises the effort to explore the rest of the solar system, as well as research specifically designed to look for life. It makes more sense now to follow up the discoveries made by the Kepler space telescope by using advanced telescopes to search for life on the newly found planets around other stars. In our solar system, if there’s alien life, it may be found in the ocean beneath the surface of Jupiter’s moon Europa, or within the water vapor geysers erupting from Saturn’s moon Enceladus. We should send new robotic spacecraft there. The dogma about sending people to Mars will only impede the discovery of extraterrestrial life in our time. Looking beyond our time, and still thinking boldly about going where no human-built machine has gone before, it may not be too early to start talking about launching a robotic mission to the nearest star system: the three stars located four light years away, including the bright double-star system of Alpha Centauri. You can easily see Alpha Centauri tonight, if you’re reading this in the southern hemisphere. It’s the third brightest star in the night sky, outshone only by Sirius and Canopus. 
One of the double stars is the same color and type as our sun. We’ve found tentative evidence of at least one planet in orbit around it. Costly and technologically intimidating, the flight would take many generations or even centuries, while history wrought its changes on our nations and our politics, our science, our culture and our mores. We would have to wait patiently for the signal of its arrival and for its first discoveries. But this project can be embarked upon in this century. Rather than indulging in fantasies about settling other planets, we would be expressing the commitment to get our act together here on Earth, our only home in a vast, inhospitable universe.
true
true
true
A half-century after the conclusion of the Apollo mission, we have entered a new age of space fantasy—one with Mars as its ruling hallucination. Once again stirring goals have been set, determined timetables have been laid down, and artist's renderings of futuristic spacecraft have been issued.
2024-10-12 00:00:00
2014-12-22 00:00:00
https://www.nplusonemag.…3_o-1280x853.jpg
article
nplusonemag.com
N+1
null
null
34,971,425
https://blog.twitter.com/engineering/en_us/topics/infrastructure/2023/how-we-scaled-reads-on-the-twitter-users-database
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,438,206
http://rvikmanis.github.io/pure-hn/
Hacker News
null
Hacker News
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
8,557,507
http://bl.ocks.org/ZJONSSON/raw/1706849/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
37,603,084
https://nautil.us/the-physics-of-crowds-388020/
The Physics of Crowds
Sidney Perkowitz
In October 2022, 159 people died as crowd pressure caused a human pile-up in the Itaewon district of Seoul, South Korea. The victims were among an estimated 100,000 people celebrating Halloween in an area known for its active nightlife and narrow streets. With a tightly packed crowd, it would have taken only a single incident, such as a person stumbling, to initiate instability. The resulting wave of pressure quickly spread in a sloping alley some 12 feet wide, and people were crushed by the unrelenting force of others in the crowd. This catastrophe was a national tragedy in South Korea, but it is not the largest such event. Documented fatal crushes stretch back through history. In 1896, a rush during a festival celebrating the coronation of Russian emperor Nicholas II killed more than 1,280 people. And in 2015, a crowd crush among 2 million Muslims attending the annual *hajj* to Mecca in Saudi Arabia left more than 2,000 people dead. In these cases, people panicked as they were squeezed by the crowd and could not escape. Though rare, these events are devastating and seem like they should be preventable—especially today. When a disruption is coupled with high density, the results can quickly turn deadly. The complex field of crowd science has long been working to understand how throngs can turn dangerous. It has borrowed from psychology and epidemiology, and now is also incorporating complex systems theory, physics, and physiology, combined with plentiful empirical data coupled with computer modeling. Scientists have even started turning their eyes toward the dangerous dynamics of virtual crowds. We all have something to gain from this gathering field of science, because whether at a sports match, show, busy market, rush-hour metro, or in an online community, most of us find ourselves, at times, in a crowd. In physics, the closer two electrons get, the more strongly they repel each other—a phenomenon known as a repulsive force. With this force in play, they can never collide. Similarly, in most circumstances, people operate with a “social force,” avoiding collision with one another, for example, on a busy sidewalk. Unlike electrons, people also react intelligently to their neighbors’ actions, and these reactions can result in collective crowd behavior. A team of United States-based researchers studied videos of moving crowds and confirmed, in a 2014 paper, perhaps unsurprisingly, that people alter their paths to avoid collisions over a range of walking speeds.1 But that data led to a new empirical and universal law of repulsion between two people, which stated that people altered their paths based not on distance—like electrons—but instead on time until collision. When this law was applied to computer-simulated crowds, it predicted actual behavior. For instance, crowd members heading toward different exits in a stadium minimize disruptive interactions by spontaneously forming themselves into unidirectional lanes. These lanes are an example of an emergent property, when a group of interacting entities shows behavior that its separate components do not exhibit. In 2021, a team of researchers based in Japan confirmed that the anticipation of future outcomes is important for how a crowd organizes itself.2 The researchers set in motion two crowds of 27 members each to meet while walking in opposite directions. In repeated trials, each group neatly formed itself into unidirectional lanes. 
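That time-to-collision law is easy to state concretely. The sketch below is a minimal, illustrative Python reading of the idea, not the researchers' code: it projects two walkers' current velocities forward to find when they would first touch, and uses the roughly inverse-square dependence on that time reported in the 2014 study as a stand-in for the anticipatory "social force." The positions, speeds, and body radius are made-up values.

```python
import numpy as np

def time_to_collision(p1, v1, p2, v2, body_radius=0.4):
    """Time until two walkers (modeled as disks of body_radius meters) would
    first touch if both kept their current velocities; np.inf if they never do."""
    dp, dv = p2 - p1, v2 - v1
    a = dv @ dv                      # |relative velocity|^2
    b = dp @ dv
    c = dp @ dp - (2 * body_radius) ** 2
    if a == 0:                       # moving in lockstep: the gap never changes
        return np.inf
    disc = b * b - a * c
    if disc < 0:                     # paths never bring them within touching range
        return np.inf
    tau = (-b - np.sqrt(disc)) / a   # earliest root of |dp + dv*t| = 2*body_radius
    return tau if tau > 0 else np.inf

# Two hypothetical pedestrians heading toward each other in a corridor.
tau = time_to_collision(np.array([0.0, 0.0]), np.array([1.3, 0.0]),
                        np.array([5.0, 0.3]), np.array([-1.3, 0.0]))

# The cited study reports that the effective "social force" depends on this
# projected time, falling off roughly as 1/tau^2, rather than on distance.
anticipatory_cost = 1.0 / tau**2 if np.isfinite(tau) else 0.0
print(f"time to collision: {tau:.2f} s, anticipatory cost: {anticipatory_cost:.2f}")
```

It is this kind of anticipation, with each walker reacting to projected collisions rather than to raw distance, that lets crowds sort themselves into lanes.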
That intelligent organization changed, however, when just three members of one group were given mobile phones and told to type on them to solve math problems while walking. The distraction disrupted the formation of lanes for both crowds, with higher probabilities of colliding. This shows how even minor disturbances can alter crowd behavior and underlines the tight link between individual behavior and the physical actions of a crowd. And when a disruption is coupled with high density, the results can quickly turn deadly. Keith Still, a crowd scientist at the University of Suffolk in the United Kingdom, defines his work as the study of crowd densities above about one person per square meter (11 square feet). Below that value, pedestrians can freely move to avoid collisions or adjust to changing conditions. At two people per square meter, walking speeds are reduced. At four people per square meter, involuntary contact occurs; at six to seven people per square meter (the equivalent of 1,600 to 1,800 people crammed onto a tennis court), the close contact makes motion difficult; and at 10 people per square meter, movement is virtually impossible. In a playful video clip, Still illustrates how just six men—held within a rope loop of one square meter—overlap and lurch forward only awkwardly, even as they calmly try to synchronize their steps. At these critically high densities, an uncoordinated crowd in motion can hardly proceed and begins piling up; in a static crowd, one person in distress can cause a crush. Knowing what level of crowd density is dangerous helps authorities handle large-scale public events. Still’s work was employed, for example, to assess crowd safety in planning for the U.K.’s royal wedding of Prince William and Kate Middleton in 2011. In real-time, public safety organizations can also now use this sort of information to monitor crowds with video cameras backed up by computer or AI analysis to identify areas of developing concern. And researchers can gather video, along with digital data from smartphones and wifi networks, to better understand crowd dynamics post hoc—even when things go well. The study of how crowds move has long borrowed from physical theory, in particular from fluid dynamics, viewing a dense mass of people as a continuous medium. The behavior of a continuous fluid medium—a liquid or gas—depends on the properties and interactions of the individual particles that make it up. In water, the basic particles are H2O molecules; in a crowd, the “particles” are people, whose interactions determine overall crowd behavior. Contagion can generate a common emotional state such as fear or anger that grips a crowd. But when the density becomes very high and individual motion is highly restricted, a crowd can behave more like so-called “soft matter.” This is any “squishable” material, such as butter or Silly Putty, that can be easily deformed and displays both solid and liquid characteristics. Soft matter research is a developing area in physics and is providing a new physical tool to study crowds in the dangerous realm of extremely high densities. Knowing that an ultra-dense crowd does not flow freely like water but moves more like kneaded clay helps explain the special conditions that individual crowd members would experience. That could well have been the case in the Seoul tragedy. 
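Still's bands are concrete enough to restate as a quick calculation. The snippet below simply encodes the density thresholds quoted above and uses the area of a doubles tennis court (roughly 261 square meters) to reproduce the 1,600-to-1,800-person comparison; it is an illustration, not a tool published by crowd scientists.

```python
TENNIS_COURT_M2 = 23.77 * 10.97  # doubles court, roughly 261 square meters

def crowd_condition(people_per_m2):
    """Rough reading of the density bands quoted above (people per square meter)."""
    if people_per_m2 < 1:
        return "pedestrians can freely move and adjust to changing conditions"
    if people_per_m2 < 2:
        return "crowd-science territory begins; movement still largely unimpeded"
    if people_per_m2 < 4:
        return "walking speeds are reduced"
    if people_per_m2 < 6:
        return "involuntary contact between neighbors"
    if people_per_m2 < 10:
        return "close contact makes motion difficult; crush risk if anyone falls"
    return "movement is virtually impossible"

for density in (0.5, 2, 4, 6.5, 10):
    print(f"{density:>4} people/m2 (~{density * TENNIS_COURT_M2:,.0f} on a tennis court): "
          f"{crowd_condition(density)}")
```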
We do not know the precise density the crowd reached before its internal pressure became lethal, but the devastation seen in the aftermath shows that the density was far into the danger zone defined by Still. This would have utterly precluded any free motion or “social force” avoidance maneuvers based on physical repulsive forces and fluid dynamics. Instead, the crowd likely behaved according to the physical laws of soft matter and developed strong internal forces that tragically crushed individuals caught up within its pressure points. With more data, the conditions that produced those forces could have been better understood to help in future crowd control. But crowd behavior isn’t just about physics. There are myriad psychological phenomena at work—especially when things start to go wrong. A flow of sentiment becomes a powerful force in dictating a crowd’s behavior, as Gustave Le Bon proposed in 1896 in *The Crowd: A Study of the Popular Mind*. Le Bon introduced the idea that emotional states (and cognitive ones) can be infectious, giving rise to the concept of emotional contagion. In this process, a person takes in an emotion expressed by someone else, then expresses that feeling, which affects others. This soon generates a common emotional state such as fear or anger that grips a crowd. In 2015, Funda Durupinar at Bilkent University in Turkey and colleagues introduced a useful model for this transference in a crowd. They took an epidemiological approach where, like the germs from an infectious disease, people received doses of emotion from those already infected, and became themselves infected if the total dose exceeded a certain threshold (as determined by typical psychological profiles).3 Building on that work, an international team of researchers, in 2021, considered the psychological, physical, and physiological factors involved as people panic when they become aware of a threat, then try to move away from it.4 The researchers used known theory and data to simulate on a computer the relevant characteristics of the members of a crowd fleeing the scene of a threat. The first step was to determine the bodily effort as each person ran, based on typical body weights and speeds. From the physiological energy expended, the researchers calculated the heart rate, which measures the degree of fear and the continuing emotional contagion triggered by the person running away. Combining all these effects, the researchers were able to calculate the trajectory and position for each person, spreading more panic as they fled. As this computer simulation unfolded, it gave realistic results. A virtual person near the danger zone soon panics and tries to get away as quickly as they can. As emotional contagion spreads, those farther away also become fearful and try to escape. The researchers then compared the simulation to videos of experiments where real people reacted to simulated dangers, and to videos of real crowds responding to real emergencies: a suspected bomb in the Shanghai subway system and a shooting outside the British Parliament. In all cases, the model results were reasonably similar to the observed behaviors, more so than other models that did not include all three factors of the psychological, physical, and physiological, underscoring the complex, interdisciplinary, and ever-evolving nature of crowd science. Results like these are beginning to elucidate how physical crowds operate in daily life, or under panic conditions.
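The dose-and-threshold mechanism behind that epidemiological approach can be sketched in a few lines. The toy loop below illustrates the general idea described above; it is not the Durupinar model or the later three-factor simulation, and the dose size, threshold, and contact pattern are invented for the example.

```python
import random

random.seed(1)

N_PEOPLE, THRESHOLD, DOSE = 50, 1.0, 0.25
accumulated = [0.0] * N_PEOPLE      # emotional "dose" each person has absorbed
panicked = [False] * N_PEOPLE
panicked[0] = True                  # one person near the threat panics first

for step in range(30):
    for i in range(N_PEOPLE):
        if panicked[i]:
            continue
        # Each calm person samples a few random neighbors; every panicked
        # neighbor transmits a small dose, like exposure to an infection.
        neighbors = random.sample(range(N_PEOPLE), 4)
        accumulated[i] += DOSE * sum(panicked[j] for j in neighbors)
        if accumulated[i] >= THRESHOLD:
            panicked[i] = True      # dose exceeded the threshold: emotion "caught"
    print(f"step {step:2d}: {sum(panicked):2d}/{N_PEOPLE} panicked")
    if all(panicked):
        break
```

Even this crude version shows the qualitative pattern the researchers describe: little happens at first, then panic sweeps through the group once enough neighbors are "infected."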
But today, many interactions occur not in real-world crowds but in virtual ones. Although the dynamics are different than in a physical crowd, scientists are still finding lessons from physics to try to better understand the risks to individuals from these virtual mobs. It turns out that masses of physically separated people have displayed their power for centuries, as Charles Mackay wrote in his 1852 *Extraordinary Popular Delusions and the Madness of Crowds*. He detailed how dispersed throngs of people can get swept toward a common idea, such as the speculative tulip mania in the 1630s, or toward a common action, as in the Crusades of the Middle Ages. Now, online connections enable the rapid transfer of feelings, opinions, and information—whether true or false—among the members of virtual crowds, vastly accelerating crowd dynamics in the ether and sometimes producing very real effects in real life. Virtual crowds may draw individuals toward a black hole of extreme political and social opinions. Studies show that the rhetoric of these groups can boil over from online discourse to encourage real-world violence, and the interconnections they provide can support the organization of group actions like the deadly riot at the U.S. Capitol on Jan. 6, 2021. The risk that online-fomented events like these will continue to threaten people and political stability in the real world make it important to discover how online crowds perpetuate extreme opinions that may have outsize impacts. One early analysis shows the central importance of connectivity in encouraging the growth of extremism within virtual crowds. The work was published in 2015 by researchers in the U.S., Brazil, and Israel.5 Assessing public opinion across the globe about religion, politics, climate change, abortion, and other hot-button issues, they noted a decrease in moderate voices with “the rising of extreme opinions … the opinion or attitude of an initially small group could become the rule.” To study this premise, the researchers examined many hundreds of recent surveys in various countries asking about these divisive issues. The survey responses were sorted either as extremely in favor or against an opinion—or moderately in favor or against it. Tabulating the proportion of people with extreme views, the researchers found a surprise. In many of the surveys, across a range of issues, the fraction of respondents with extreme views was in a small minority and proportional to the total number of responders. But for most surveys where the fraction of extreme views exceeded 20 percent of respondents, the number of extremists was found to be up to five times larger than a linear relationship would predict. This nonlinear behavior means that above 20 percent extremism, extremists may disproportionately influence groups—and in unexpected ways. Why would a relatively small nucleus of extremists lead to intensely extreme groups? The answer comes from statistical physics, where the onset of nonlinear behavior is a sign of interactions among separate units that produce a major change; for instance, when H2O molecules become correlated at zero degrees Celsius, causing liquid water to turn into solid ice. Similarly, the researchers found that to successfully reproduce their empirical observation of a 20 percent tipping point for extremism, they had to use a mathematical model that included interactions among the survey respondents. 
The researchers characterized these interactions as occurring across social networks where “new opinions can take form and existing ones can be either strengthened or weakened”—exactly what online virtual crowds efficiently facilitate. This field, fueled by increasing urgency, is gaining speed. In 2023, physicist Pedro Manrique and colleagues at George Washington University published a new general theory describing the dynamics of online crowds that spread hate or extreme views.6 In their work, they explain, “Society is struggling with online anti-X hate and extremism, where ‘X’ can nowadays be any topic, e.g., religion, race, ethnicity.” Such communities, up to millions strong, spread harmful content on platforms like Facebook and its Russian counterpart, VK. The groups grow quickly and seemingly out of nowhere, note the researchers, as individuals and similar groups fuse with them within a platform and across different platforms. Then the supermassive anti-X groups also often abruptly fission and die off when platform moderators notice them and shut them down. Most of us find ourselves, at times, in a crowd. The researchers analyzed this fusion-fission behavior, which resembles bubbles forming and disappearing in boiling water, using nonlinear fluid dynamics and statistical physics. The theory predicted that as these anti-X bubbles rapidly expand or contract within the social media environment, they resemble the known phenomena of shockwaves, fast-moving disturbances in a fluid that induce extremely sharp changes in its properties. The theoretical shockwave predictions agreed remarkably well with data about real anti-X groups gathered since 2014: steep membership growth curves vs. time for anti-U.S. jihadi communities on VK, and for Facebook groups hostile to the U.S. government such as those connected to the January 6 U.S. Capitol riot. These particular growth curves are characteristic of anti-X groups and differentiate them from online communities without similar agendas. The shockwave theory has been hailed for giving deeper understanding of the widespread online dissemination of misinformation as well as hate, and for providing clues as to how these dangerous developments can be slowed or stopped. Since early writing about crowds in the 19th century, crowd science has evolved to where it can now make quantitative predictions about both actual and virtual crowds. But the essential step to preventing future disasters is to continue pulling from a vast array of scientific fields to better understand the dynamics that portend danger. And in addition to saving the lives of the unlucky few, the field can be a model for dealing with other complex problems facing all of humanity, such as climate change, future pandemics, and artificial intelligence. With any luck, it could help us overcome at least some of the madness of human crowds. *Lead image: Varavin88 / Shutterstock* **References** 1. Karamouzas, I., Skinner, B., & Guy, S.J. Universal power law governing pedestrian interactions. *Physical Review Letters* **113**, 238701 (2014). 2. Murakami, H., Feliciani, C., Nishiyama, Y., & Nishinari, K. Mutual anticipation can contribute to self-organization in human crowds. *Science Advances* **7** (2021). 3. Durupinar, F., Güdükbay, U., Aman, A., & Badler, N.I. Psychological parameters for crowd simulation: From audiences to mobs. *IEEE Transactions on Visualization and Computer Graphics* **22**, 2145-2159 (2016). 4. 
Xu, M., *et al.* Emotion-based crowd simulation model based on physical strength consumption for emergency scenarios. *IEEE Transactions on Intelligent Transportation Systems* **22**, 6977-6991 (2021). 5. Ramos, M., *et al.* How does public opinion become extreme? *Scientific Reports* **5**, 10032 (2015). 6. Manrique, P.D., *et al.* Shockwavelike behavior across social media. *Physical Review Letters* **130**, 237401 (2023).
true
true
true
Why dangerous crowds behave the way they do.
2024-10-12 00:00:00
2023-09-19 00:00:00
https://assets.nautil.us…&ixlib=php-3.3.1
article
nautil.us
Nautilus
null
null
15,701,485
https://www.codementor.io/blog/ongoing-website-costs-31ep03t7vn
How Much Does a Website Cost in 2019? A Breakdown of Website Maintenance Costs
Justina H
In the ye olde days of the early 2000s, you were tech savvy if you had a website for your business. Fast forward to 2018 — if you operate any kind of business at all, a website is a must-have (and possibly even an app). While you may think that finding someone to build your website is the most expensive part of launching your product, there are a host of other costs involved in making the website a reality in the first place. Like our previous post on how much an app costs, we're going to help break down the ongoing costs of maintaining a website. To help us with this, we’re going to bring back SafePark, our made-up app from the post, How to Write a Product Requirements Document. **tl;dr**: SafePark is a freemium app that helps users find public and street parking close to them by pulling information from Google Maps, city parking, and other users. Using SafePark, we’re going to walk you through the ongoing expenses of maintaining a website. The biggest components of your website will be your **domain name** and your **hosting service**, mainly because you need these two for your website to function. When it comes to hosting services, you might consider opting for a cheap VPS Bitcoin hosting provider, as it offers cost-effective solutions for website owners looking to accept Bitcoin payments. We’ll also look at peripheral costs, such as **API licensing costs**, **code repositories**, and **payment processing**, to give you a more concrete idea of what to consider as part of the cost of having a website. **Note**: The information below is meant to illustrate, using concrete numbers, the various costs of maintaining a website. Your actual website costs will vary depending on your website's functions and hardware needs. The information below is an example, not a guideline or an end-all-be-all of website costs. ## How Much Does It Cost to Host a Website? One of the most expensive components of hosting a website and app can be the cost of the server. If your website doesn’t require a ton of disk space, services like WordPress, GoDaddy, and HostGator can provide hosting at rates from free to twenty dollars a month, depending on your storage needs. While it would be significantly cheaper to use a hosting service like GoDaddy, SafePark is both a mobile app and a website. Mobile apps often require more advanced infrastructure to support them and more disk space, depending on the features they provide. ### How to determine SafePark’s hosting needs Because SafePark makes multiple API calls to fulfill user requests, it requires a fair amount of storage, memory, and speed, in addition to a database and load balancer. For the purposes of this comparison, we’ve limited SafePark’s reach to New York, San Francisco, and Los Angeles to simplify our hardware requirement estimates. In order to choose a hosting server, we need to figure out what kind of hardware we need. For that, we need to estimate how many people could be using SafePark at a given time to determine how much space we require. If we had 10,000 users from each of these three cities, for both the SafePark app and website, each roughly using the app four times a day on average, we would have **1,200,000 API requests daily**. Concentrated into roughly ten active hours, that works out to about 120,000 requests per hour, or roughly **33 requests per second**. To handle that volume of requests from users and to pull information from APIs, SafePark requires at least four servers, two caches, one database, and one load balancer to keep the app and website stable for users.
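To make that sizing arithmetic easy to rerun with your own numbers, here is a small worked example. The per-session API call count and the ten-active-hour day are assumptions chosen to reconcile the figures above; everything else follows from the stated user counts.

```python
# Assumptions (not all from the article): 10,000 users in each of three cities,
# four sessions per user per day, about ten API calls per session (implied by
# the 1,200,000-requests-per-day figure), and traffic concentrated in roughly
# ten active hours.
users          = 10_000 * 3
sessions_day   = users * 4
calls_per_sess = 10
active_hours   = 10

requests_per_day  = sessions_day * calls_per_sess          # 1,200,000
requests_per_hour = requests_per_day / active_hours        # 120,000
requests_per_sec  = requests_per_hour / 3600               # ~33

print(f"{requests_per_day:,.0f} requests/day "
      f"-> {requests_per_hour:,.0f}/hour -> {requests_per_sec:.0f}/second")
```

Real traffic is rarely spread evenly, so a real capacity plan would also multiply by a peak factor before settling on server counts.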
### What are SafePark’s hosting costs? With the above requirements in mind, we looked at these six hosting services and priced them according to roughly equivalently sized hardware requirements across the board. ### How to choose the right hosting service In terms of hosting services, Amazon Web Services, Google Cloud Platform, and Microsoft Azure are the frontrunners. That being said, in terms of pricing, Heroku is a little different from these three because Heroku offers Platform as a Service rather than Infrastructure as a Service. What this means is that Heroku allows you to deploy your application without worrying about infrastructural hardware, like load balancers, servers, etc. — you can push your code and have it go live. Convenience, of course, makes it more expensive. So, depending on your needs and know-how, your hosting costs may vary quite significantly. ## How Much Does a Domain Name Cost? Domain names are the least expensive part of building and maintaining a website. That being said, getting a particular domain name can be challenging. Sometimes you might have to settle for altering your ideal domain name or pay premium prices for a .com, .me, .io, or an alternate ending if the name you want has already been taken. In our case, *safepark.com* as a domain name was unavailable, so we had to use safeparkspace instead. However, the good news is that there are many different combinations for domain names depending on your budget. Also, websites sometimes have sales for the first year or first few years for particular extensions. safeparkspaces.me is only $1.99 per year on 1&1, for example. For this post, we surveyed six domain name providers and priced the more commonly used extensions of .com and .org, and added a .me for those looking for a unique extension. ## How Much Does an API Cost? SafePark will primarily pull from two APIs: Google Maps and Streetline API. ### Streetline API Streetline is a company that has been working with cities around the United States to provide parking API information. They’ve worked with cities such as New York, Los Angeles, Reno, and Indianapolis to provide parking information to users. Those interested in developing parking-related apps can contact Streetline for an API key. ### Google Maps API The other API that SafePark will pull from is Google Maps. Google Maps will allow SafePark users to find a parking spot close to them, with street map view for convenience. While Google does offer premium plans, based on credits, SafePark is able to operate using Google’s standard plan, which offers unlimited requests to iOS and Android systems, meaning that SafePark can serve its mobile app customers at no cost. However, SafePark’s web customers may cost SafePark money every month, depending on how many of them use Google Maps’ web API to find a parking spot before they leave home. #### Google Maps API Costs Under Google’s standard plan, SafePark would be able to pull 25,000 map loads daily from the web API for free. However, if there are more than 25,000 map loads, SafePark would be charged at $0.50/1000 loads, capped at 100,000 loads. The same $0.50/1000 loads overage charge is used for API requests, such as geolocation, directions, and distance matrices, but the free usage cap is set significantly lower, at 2,500 requests. Anything over 2,500 requests would be charged by the overage, per day, as shown below.
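The per-day overage arithmetic works out as follows. This is only an illustrative sketch using the standard-plan rates quoted above, with hypothetical daily volumes; it is not an official Google pricing calculator.

```python
def daily_overage_cost(loads, free_per_day, rate_per_1000=0.50):
    """Cost of one day's usage under the quoted standard-plan rates."""
    over = max(0, loads - free_per_day)
    return over / 1000 * rate_per_1000

# Hypothetical daily volumes for the website (mobile SDK usage is free under
# the plan described above). Note the plan caps paid map-load overage at
# 100,000 loads/day; volumes beyond that are not modeled here.
map_loads    = 40_000   # free tier: 25,000/day
api_requests = 10_000   # geolocation/directions/etc., free tier: 2,500/day

daily = daily_overage_cost(map_loads, 25_000) + daily_overage_cost(api_requests, 2_500)
print(f"${daily:.2f}/day, roughly ${daily * 30:.2f}/month")
```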
That being said, it is highly unlikely that there will be 40,000 pulls on geolocation, directions, distance, and street images on SafePark’s website daily. In fact, most users will probably stick to the mobile version. However, to make it easier to visualize the possible costs, we broke down the monthly and annual costs of Google’s API for SafePark based on best use case, normal use case, and worst use case scenarios, listed below. *Note: This section was primarily focused on what types of APIs SafePark needs to operate. Unless you’re in the parking app development business, you’ll have to do some homework and research which APIs you might need for your enterprise. You might want to see which APIs your competitors are using as a starting point for API ideas — or as a springboard to see if you can improve on their services with your own product.* ## How Much Does A Code Repository Cost? You might ask, “Why do I need to pay for a code repository?” Well, the good news is that there are plenty of free repository options out there, so unless you’re storing a lot of code, chances are you don’t have to pay for a code repository (repo). The question then becomes: why do you need a repo, or rather, why should you have one? The answer is that if you ever need to back up, scale up, or roll back changes, a repository is a great safety net. With a repo, you can set up a history, set up a continuous build, onboard new team members, and deploy fixes. Often, if you have a small team, you can get code repo services for a limited number of users or amount of space for free. ### Costs of Different Code Repos SafePark, for example, has 10 employees, five of whom are developers. For SafePark, three code repositories can provide it with free services while other services cost less than $500 a year for peace of mind. ### How to Choose the Right Repo The main differences among code repositories are the version control systems they support and the auxiliary services they provide, in addition to keeping a record of your code changes. GitHub, for example, only hosts projects that use the Git version control system (VCS) while Bitbucket supports Mercurial VCS. GitHub integrates with JIRA, Crucible, Jenkins, and Bamboo while Bitbucket integrates with Asana, Zendesk, AWS, Microsoft Azure, Google Cloud, and Heroku. Before choosing a repo, take a minute to consider what version control system you want to use, what features you need, and how many users will be contributing to your code repository. ## What About My Payment Processing Costs? Although SafePark is a freemium app, it also has a website version for those who enjoy planning their outings at home, on a larger screen. Users can upgrade to premium from either the mobile app or the website’s payment portal. If the premium version of SafePark is $5.00, and 15,000 of the 30,000 users in New York, San Francisco, and Los Angeles upgrade, SafePark would be able to recoup its App Store fee and make $52,401 (that is $75,000 in upgrades, minus the 30 percent store cut and the $99 annual Apple developer fee). With Android, SafePark would make $52,475, since Google takes the same 30 percent cut but charges only a one-time $25 developer fee. When users upgrade to premium on the website, using Stripe or Braintree, if the same 15,000 users all paid $5.00 for the app, SafePark rakes in $68,325 after processing fees of 2.9 percent plus $0.30 per transaction. ## Website Cost Infographic ## Conclusion As you can see, building and maintaining a website means incurring a lot of recurring costs, from $2,500 to $30,000, depending on your hardware combination and the number of incidental charges you rack up along the way. While hiring a great developer is important, remember that it won’t be your most expensive ongoing cost.
Now that you’ve written your PRD and are getting ready to hire a developer, hopefully this post has given you an idea of just how much you’ll need in your war chest to keep your company afloat. That being said, keep in mind that you have a lot of choices. Because there are so many combinations available in terms of hardware and services, you want to find the options that best suit you and your needs. If you’re a non-technical founder, don’t be afraid to ask your technical advisor or potential freelance developer candidate for advice as to what tech stack or hardware will work best for you. While this may seem quite daunting, with a little research and good advice, you too can be off to a good start.
true
true
true
Read about the ongoing costs of maintaining a website, from hosting to domain names, to APIs and repos, to better gauge how much you need to succeed.
2024-10-12 00:00:00
2017-11-13 00:00:00
https://cdn.filestackcontent.com/HXy49oLTd2A8YsiRvFKh
article
codementor.io
Codementor
null
null
7,536,147
http://vimeo.com/rogame/quincy
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,612,356
https://test-flight.cundd.net/
Test Flight
null
An inline method and doc comment test runner Test Flight allows you to define unit tests inside your classes. There are two different types of tests available: `Inline methods` , simple PHP methods annotated with @test`Doc Comment` , code examples in the method documentation wrapped into`<code>...</code>` `Documentation` , code snippets in Markdown documentation files wrapped in ```php ... ``` This project is inspired by - Rust's `rustdoc` which allows to run code examples defined in doc comments - and the unit testing in D and Rust, where tests can be part of the actual implementation The current implementation is in a very early stage. PHP 7 is required. # Goal Goal Test Flight is *not* designed to replace popular test suites like PHPUnit or Codeception. It helps getting started with unit testing in small projects, removes boilerplate code and encourages to add usage examples to the documentation (and helps to avoid errors in those). # Criticism Criticism In contrast to compiled languages where test methods can be removed during compilation, test methods for Test Flight will remain in deployed code. But due to Reflection test methods don't have to be public. # Installation Installation `composer.phar require cundd/test-flight` # Create Tests Create Tests ## Create DocComment tests Provide a DocComment for your method and add a code example within `<code>...</code>` : ``` class Me { private $name = ''; function __construct($name) { $this->name = $name; } /** * The examples can be single line * * <code>test_flight_assert((new Me('Daniel'))->getName() === 'Daniel')</code> * * @return string */ public function getName() { return $this->name; } /** * ... and multi line * <code> * $instance = new Me('Daniel'); * test_flight_assert('Leinad' === $instance->getNameReversed()); * </code> * * @return string */ public function getNameReversed() { return ucfirst(strtolower(strrev($this->name))); } } ``` ## Create documentation tests Every code snippet inside a Markdown file that is wrapped into a PHP code block is a documentation test: ``` ```php assert(true); ``` ``` Take a look at this file as an example. ## Creating inline method tests ``` class MyClass { public function getName() { return 'Daniel'; } /** * @test */ protected function makeSureSomethingIsTrue() { // Non-static methods will be called on an instance of the class test_flight_assert('Daniel' === $this->getName()); } } ``` Method tests can also be static. This may be required if the creation of an instance requires arguments. ``` class ComplexClass { private $name = ''; function __construct($name) { $this->name = $name; } public function getName() { return $this->name; } /** * @test */ protected static function makeSureSomethingIsTrue() { $instance = new ComplexClass('Daniel'); test_flight_assert('Daniel' === $instance->getName()); } } ``` # Run Tests Run Tests `vendor/bin/test-flight path/to/source` You can also specify the type of tests to run: ``` # Run DocComment tests vendor/bin/test-flight path/to/source --type doccomment # Run documentation tests vendor/bin/test-flight path/to/source --type documentation # Run method tests vendor/bin/test-flight path/to/source --type method ``` More verbose output can be triggered with `-v` : `vendor/bin/test-flight path/to/source -v` Add a custom bootstrap file to be included before the tests are run: `vendor/bin/test-flight --bootstrap path/to/bootstrap.php path/to/source` # Configuration Configuration Test-Flight can be configured through command line arguments and JSON files. 
The following examples showcase the configuration options for `bootstrap` : ### Specify the bootstrap file as command line argument `vendor/bin/test-flight --bootstrap path/to/bootstrap.php path/to/source` ### Create a JSON file and pass it as argument ``` { "path": "../../src/", "bootstrap": "test-bootstrap.php" } ``` `vendor/bin/test-flight --configuration path/to/configuration.json` ### Create `.test-flight.json` in the current directory ``` { "path": "src/", "bootstrap": "tests/resources/test-bootstrap.php" } ``` ``` ls .test-flight.json; vendor/bin/test-flight ``` # Assertions Assertions Currently only a few assertions are built in. They always come in two flavours, a static method and a function (wrapping the method). ### Test if the assertion is true `test_flight_assert($assertion, [string $message])` `\Cundd\TestFlight\Assert::assert($assertion, [string $message])` ### Test if the callback throws an exception `test_flight_throws(callable $callback, [string $expectedException], [string $message])` `\Cundd\TestFlight\Assert::throws(callable $callback, [string $expectedException], [string $message])` ### Test if the actual value matches the expected `test_flight_assert_same($expected, $actual, [string $message])` `\Cundd\TestFlight\Assert::assertSame($expected, $actual, [string $message])` ### Test if the value is truthy `test_flight_assert_true($actual, [string $message])` `\Cundd\TestFlight\Assert::assertTrue($actual, [string $message])` ### Test if the value is falsy `test_flight_assert_false($actual, [string $message])` `\Cundd\TestFlight\Assert::assertFalse($actual, [string $message])` ### Test if the given object is an instance of the given class `test_flight_assert_instance_of($actual, string $className, [string $message])` `\Cundd\TestFlight\Assert::assertInstanceOf($actual, string $className, [string $message])` ### Test if the given value is an instance of the given type `test_flight_assert_type($actual, string $type, [string $message])` `\Cundd\TestFlight\Assert::assertTypeOf($actual, string $type, [string $message])`
true
true
true
null
2024-10-12 00:00:00
2018-01-01 00:00:00
null
null
null
null
null
null
8,231,661
https://service.goodcharacters.com/blog/blog.php?id=227
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,741,045
https://blog.coinbase.com/how-to-get-hired-at-coinbase-48fc3c18f119?gi=d517c7378837
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,851,609
https://dev.to/ben/celebrate-the-small-fixes
Celebrate the Small Fixes
Ben Halpern
A software team basically has two responsibilities: build features and fix their broken-ass software. A lot of the time, business needs pressure the former to take precedence over the latter, and we live with that. It's called technical debt, and it's a tool like any other to get software shipped. Logically, we all know that the refactoring and edge-case cleanup eventually needs to happen. But eventually can turn into never, and the software rots and becomes chaotic. The behaviors that lead us down the path of crappy, broken software are not usually "logical"; they are human, emotional behaviors. Perhaps we feel too ashamed to bring up fixes to the software we wrote, or too polite to tell others. Possibly the business has stressed growth and features to the point where we just don't know where refactoring fits into the schedule. Maybe the people in charge don't even understand the *concept* of refactoring or technical debt, or don't want to pay for additional work on software that is "done". There are good situations, but every situation has at least a hint of these sorts of behaviors. You may try endlessly to come up with systems to solve the issues: Trello board after Trello board assigned to the issues, ticket points, sprint cycles, on and on and on. While process is part of it, the big barrier is emotional. Refactoring code doesn't "feel" like progress. What we need are habits, personal and organizational, that reinforce the things we logically know need to get done. Celebrate refactorings and small fixes. Celebrate them a bunch. Cheer on pull requests that tackle that code nobody wanted to touch. At your all-hands meetings, take a moment to honor someone who deleted 100 lines of dead code, or wrote a critical feature test nobody wanted to write because it required mocking a really fickle endpoint. You need process, but process can't make people want to do it. If doing it *feels great*, people will do it. Most of the time, waiting for code to be perfect is a terrible habit for teams and individuals. Endlessly bikeshedding over details in the name of perfect software is an anti-pattern we want to avoid here. It is possible that applauding the fixes could take attention off of the much-needed shipping, but I would say it's the opposite. If developers know that cleaning things up in the near future is a possibility, shipping good—not perfect—code also becomes easier. Applaud the behavior that hardens your code and makes building new features a sustainable effort. ## Top comments (2) After a stressful release I like to run static analysis, linting tools and documentation analyzers. I find it calming to see the warnings get whittled down and there's little risk to functionality - well, the way I do it. It does tend to mess with source history pretty bad though. Loved this sentence
true
true
true
Applaud the behavior that hardens your code and makes building new features a sustainable effort.
2024-10-12 00:00:00
2017-03-10 00:00:00
https://media.dev.to/dyn…om%2FLZ0T7wS.png
article
dev.to
DEV Community
null
null
36,211,813
https://dl.acm.org/doi/abs/10.1145/3472716.3472864
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,356,332
https://www.anaconda.com/blog/a-faster-conda-for-a-growing-community
Anaconda | A Faster Solver for Conda: Libmamba
The Conda Team
conda 22.11 update: The libmamba solver’s experimental flag has been removed. To use the new solver, update conda in your base environment: conda update -n base conda To install and set the new solver, run the following commands: conda install -n base conda-libmamba-solver conda config --set solver libmamba Learn more here. ## Introducing conda-libmamba-solver The conda team is pleased to announce the availability of ‘libmamba’ as a new, much faster dependency solver for conda! Three different companies worked to make this release possible: **QuantStack**, developing mamba and libmamba; **Quansight**, integrating libmamba into conda; and **Anaconda**, developing conda and managing the overall effort. Read on to get to know the core contributors of this project, why we brought mamba’s capabilities into conda, and how you can start speeding up your workflows today. ## Core Contributors - **Wolf Vollprecht | QuantStack***Wolf is the CTO of QuantStack, the open-source company at the heart of Mamba and Jupyter. He is a core member of the conda-forge community and helps maintain hundreds of robotics packages as part of the RoboStack project.* - **Jaime Rodríguez-Guerra | Quansight***Jaime holds a PhD in Biotechnology and believes that packaging is one of the pillars for reproducible research. He became a conda enthusiast while working on molecular modeling frameworks and machine learning pipelines for drug design.* - **Filipe Laíns | Quansight***Filipe is a member of the Python Packaging Authority, the author of the pypa/build tool, and a maintainer of the Arch Linux distribution. He is working on improving the Python packaging ecosystem and bridging the gap between Python packaging downstreams.* - **Tania Allard | Quansight***Tania holds a PhD in computational modeling and is a well-known and prolific PyData community member. Tania is a fellow and director of the PSF (Python Software Foundation), and has been involved as a conference organizer (JupyterCon, SciPy, PyJamas, PyCon UK, PyCon LatAm, JuliaCon, and more), as a community builder (PyLadies, NumFOCUS, RForwards), as a contributor to Matplotlib and Jupyter, and as a regular speaker and mentor.* - **Jannis Leidel | Anaconda***Jannis is a Sr. Software Engineer on the conda team at Anaconda and previously co-founded the Python Packaging Authority, the volunteer group that has maintained fundamental package management software for the Python programming language for over a decade. He currently serves on the board of directors of the Python Software Foundation and believes that fostering the conda project is essential to help the growing community of data practitioners.* ## Why Did We Bring the Mamba Solver to Conda? The **mamba project** is a fast, alternative conda client that has seen widespread adoption because of its implementation of the libsolv solver for conda metadata. In bringing the mamba solver to conda, this project had the following goals: - Improve conda’s resolving speeds by 50-80%* - Maximize backwards compatibility so as to not break any current functionality - Build the plugin infrastructure for others to create custom solvers - Strengthen our efforts to serve community needs *based on the integration test suite performed on Linux, MacOS, and Windows The speed at which conda resolves environment package dependencies is a critical factor to the user experience and usefulness of conda. Users familiar with conda know that conda has its upsides and downsides. 
On one hand, it singlehandedly solves cross-platform and package dependencies; on the other hand, this dependency resolution process can be slow, especially as the conda ecosystem has grown rapidly and package dependencies have become ever more complex with vastly more possible dependency combinations to consider. While mamba is advertised as a faster drop-in solver for conda, there were some differences in functionality, especially in edge cases that conda had accrued over the years. Some issues that arose in integration testing include pip interoperability, compatibility with conda’s package and platform test suite, and flag and update configuration resolution. That’s why Anaconda, in collaboration with Quansight, worked to integrate mamba’s libsolv repository parser and solver into conda, while re-using as much from conda as possible to bridge any functional differences that were observed in using the mamba solver. With backwards compatibility as a priority, the goal of this libmamba integrations release is to significantly decrease time to resolve conda packages without changing existing conda workflows. ## How to Enable libmamba This experimental release of libmamba is our proof of concept for implementing a new solver. To use libmamba, install it in your conda base environment and then specify it as the solver when installing other packages: - Please make sure to update to at least conda 4.12.0: conda update -n base conda - Install the conda libmamba solver plugin: conda install -n base conda-libmamba-solver To experiment using libmamba on an ad-hoc basis or without setting it as your default solver, you may run the following on the command line: conda create -n demo --experimental-solver=libmamba --dry-run install <some package> As an experimental release, anonymized automatic debug logging (stored as local log files on your computer) has been implemented to address any issues that may arise using the regular issue management system. The current release will be adopted as the default conda solver once we are happy with the stability and feedback received by the community, so adoption is highly encouraged and necessary for continual improvements! Any feedback that is provided will greatly expedite the adoption of libmamba as conda’s default solver. You may submit any issues at this **Github link**. Please take the steps detailed above to activate the libmamba solver and experiment with your existing workflows. The Anaconda and Quansight teams will be closely monitoring these reports to continue to improve functionality of the solver. ## Looking Forward Quansight and Anaconda are hosting a three-part webinar series covering the conda solver logic, integration and testing process, material changes that conda users will experience, and how to enable libmamba. Click **this link** to watch the debut webinar on Anaconda Nucleus or register to join Quansight and Anaconda in **the next webinar**, live on March 23rd at 2PM ET/11 AM PT.
true
true
true
conda 22.11 update: The libmamba solver's experimental flag has been removed. To use the new solver, update conda in your base environment: conda update -n base conda To install and set the new solver, run the following commands: conda install -n base conda-libmamba-solverconda config --set…
2024-10-12 00:00:00
2024-08-07 00:00:00
https://www.anaconda.com…k-1273484747.png
article
anaconda.com
Anaconda
null
null
38,820,617
https://www.wsj.com/articles/francis-collins-covid-lockdowns-braver-angels-anthony-fauci-great-barrington-declaration-f08a4fcf
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,682,924
http://2factor.in/?sd=11
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
28,854,598
https://www.cobaltrobotics.com/premortems-and-postmortems/
Premortems Will Keep Your Code Alive - Cobalt AI
Ria Van Hoef
# Premortems and Postmortems *Two crucial tools for writing code when the stakes are high, like managing our fleet of over 100 robots at Cobalt Robotics. This is part 2 of a 2 part series on High Stakes Code. See Part 1. – Erik Schluntz, Cofounder & CTO* ### Premortems Premortems are remarkably quick, and have saved my team from making bad mistakes many times. What’s a premortem? Before deploying new code, ask yourself and at least one other person “If this is going to cause a major issue, what would it be?” Then, ask yourselves what you could do to protect yourself better. That could include doing additional testing or adding safeguards before deploying the code. No one intentionally releases code that they think is going to cause major problems, so the author of code about to go out will think that it’s safe. The benefit of the premortem is to stretch your mind outside of the mental box you’ve created where your code is safe, and then look around from that perspective. We’ve found premortems particularly good at discovering “unknown unknowns” and issues that occur when moving from a test environment to a production environment. Will that database upgrade be just as quick when the database is under load? Will that new safety feature have disruptive false positives? Is every robot starting from the same state for this update? Part of the magic of a premortem is it gets the team to discuss worst case scenarios, despite any over-optimism or the planning fallacy about the feature itself. It creates a safe space to bring up issues, without being seen as negative, obstructionist, or not committed to the goal. *Fig 1. Vizzini never did premortems and look what happened to him* ### Postmortems Postmortems are the sadder, but even more useful, version of premortems. The idea here is that when you do have a failure, you want to learn absolutely as much from it as you can. Think about how much you’ll need to learn about your situation to truly write a failure proof system – if you learn twice as much from each failure, then you’ll need half as many failures to get there! Huge amounts have been written about good postmortems, most notably Google’s SRE book. I won’t go through the details of how to write a postmortem, but instead focus on what makes a good postmortem. #### BLAMELESS Humans are going to make mistakes. At the end of the day, what matters is setting up a process to catch and prevent those errors, not which human made the mistake. This takes a strong culture to do well, but there are little tactical things you can do that help. For instance, never name names in postmortems. Instead of “at 1am **Erik** rebooted the database” write “at 1am **an engineer** rebooted the database”. If you don’t make your postmortems blameless, people will hide details and try to cover up their mistakes in the future. If people cover up mistakes, your team isn’t going to learn from them and someone is going to make the same mistake again. #### ACTION ITEMS Action items (AIs) are the silver lining of your incident—and the entire point of the postmortem—so make sure you mine as many valuable ones out of the wreckage of your code while they’re fresh. Ask “why” 5 times for any problem to go down into not just the immediate problem, but how it came to exist in the root cause. The deeper you mine, the more valuable the action items become. For instance, the top level AI might be “fix this bug on line 237”, but 4 levels down the AI might be “we could have caught this by adding a stress test to our CI”. 
Any one “problem” can help you uncover a number of valuable action items for your team to work on.

| Level | “Why?” to row above | Action Item |
| --- | --- | --- |
| 1 | System ran out of memory | Add memory monitor |
| 2 | Bug on line 237 | Fix memory leak bug |
| 3 | Easy to make this kind of bug | Refactor library |
| 4 | Difficult to review for this kind of bug | Add stress tests to CI |
| 5 | Little known topic | Lunch ‘n Learn on memory |

*Fig 2. Example of “5 Whys” digging into the root cause of a problem*

#### LOOK BROADLY Don’t just brainstorm around the specific bug that hurt you – look more broadly for “things that went poorly” and “things that went well”. Even if something wasn’t directly responsible for the incident, such as “it took us too long to find the logs for this issue” you can still find valuable action items there. Noting “things that went well” can help you identify strengths that you can double down on as a team. *Fig 3. Engineers mining for action items in the aftermath of an incident* ### Premortems and Postmortems at Cobalt Robotics At Cobalt, we build autonomous indoor security guard robots that patrol through office buildings and warehouses looking for anything out of the ordinary. We write code that controls a 120lb robot navigating around people – basically an indoor self driving car, and our customers are relying on us to keep their most sensitive areas secure and protected. We’re committed to keeping a great engineering culture, moving fast, and NOT breaking things. To do that we need great engineers like you! *Fig 4. A Cobalt Robot patrolling an office space*
true
true
true
Premortems and Postmortems Two crucial tools for writing code when the stakes are high, like managing our fleet of over 100 robots at Cobalt Robotics. This is part 2 of a 2 part series on High Stakes Code. See Part 1. - Erik Schluntz, Cofounder & CTO Premortems Premortems are remarkably quick, and have saved
2024-10-12 00:00:00
2021-10-12 00:00:00
https://www.cobaltai.com…2023/09/logo.svg
article
cobaltai.com
Cobalt AI - AI-powered, human-verified security solutions
null
null
18,724,063
http://nautil.us/issue/67/reboot/why-we-love-dinosaurs
Why We Love Dinosaurs
Boria Sax
People have always known of dinosaurs, though they have called them by many names. Old legends that place Western dragons in caves or beneath the earth may have originated with fossils. The plumed serpent, prominent in mythologies of Mexico and Latin America, is often a creator of life. The Rainbow Serpent of Aboriginal tales was present at the beginning of time, and helped prepare the landscapes for human beings and other animals. The Asian dragon, which combines features of many animals, symbolizes primordial energy and is the bringer of rain. These figures resemble our reconstructions of dinosaurs in appearance, and accounts place them in worlds that existed before humankind. The major reason for this similarity might be that human imagination works in much the same way as evolution. Both constantly recycle familiar forms such as wings, claws, crests, fangs, and scales, which may repeatedly vanish and then reappear through convergence. The figure of *Tyrannosaurus rex* suggests a kangaroo, while pterosaurs resemble bats, but the similarities are not due to common ancestry. Children, who are just learning the basic expectations of their society, are in ways outside of culture. Their attraction to dinosaurs suggests that the giant creatures appeal to something innate, or at least very elemental, in the human psyche. One highly speculative explanation is that this is a genetic legacy, going back to the days when early humans faced gigantic, prehistoric lizards such as the megalania or perhaps even the days when our remote mammalian ancestors had to contend with dinosaurs themselves. A simpler explanation is that images of dinosaurs convey the excitement of danger while posing no actual threat. It could also be that dinosaurs, from a child’s point of view, seem like grown-ups, since they are both very old and very big. By inspiring fantasy, dinosaurs alleviate a child’s feelings of helplessness. Gail Melson has described this vividly: A slight, shy, 8-year-old boy I know hurries home after school each day to go back to the age when dinosaurs roamed the Earth. A walking encyclopedia of dinosaur lore, he never tires of playing out battles between brontosaurus and tyrannosaurus, using his six-inch high replicas. Unlike the power of adults or bigger, more assertive peers, dinosaur power is, literally, under his thumb. And why do most children leave that fascination behind well before reaching adulthood? Adults often feel almost as powerless as children. They find relief in such activities as blasting aliens in video games, as well as other pastimes that are far less innocuous, but seldom in playing with dinosaurs. But maybe grown-ups don’t really get over the dinosaur phase? It could be that they simply relive it vicariously through children. We have traditionally thought of dinosaurs as tragic, since (with the exception, we now know, of avians) they became extinct, yet were once enormously large and powerful. This corresponds to the combination of ability to dominate and extreme vulnerability, which are both essential aspects of the way we think of humankind. At any rate, that little boy is very far from being alone. At the American Museum of Natural History in New York, which I visit regularly, there is a shop in which almost an entire floor, one-third of the space, is devoted to dinosaur paraphernalia, and most of the items there have no more than a very oblique connection with science. There are shelves upon shelves of plush dinosaur toys, many of which are enormous. 
There are many picture books about dinosaurs, for children that are just learning to read, mechanical dinosaurs, and countless accessories sporting pictures of dinosaurs. Human imagination works in much the same way as evolution. Both constantly recycle familiar forms such as wings, claws, crests, fangs, and scales. That 8-year-old described by Melson could, in many ways, have been me, though paleontology was not nearly so heavily commercialized in my childhood as it is today. Dinosaurs were, like presidents and teachers, accorded a lot of dignity. But in Chicago’s Field Museum the reconstructed skeleton of an apatosaurus stood beneath the dome of a great hall. A huge bone was placed on a small pedestal in front of the skeleton, which people were invited to touch. When I did so, the bone seemed very hard and cold, almost metallic, but that only accented the metabolic warmth of the creature it had once helped to support. I was always a bit of a loner, as well as a romantic. Looking back, I suppose the world of dinosaurs was a sort of refuge for me, mostly from adults who thought they understood me yet never could. There is something comforting for people of all ages about the way at least some kids in every generation go through a “dinosaur phase,” despite all the changes that society has experienced in the last century and a half. Dinosaurs appeal to a Victorian sort of “childhood wonder,” as well as reassuring us that our childhood experiences are part of an eternal condition. The phenomenon is especially remarkable because it so often seems to first emerge spontaneously in children, with very little adult encouragement. Yet perhaps dinosaurs, after all, are no more immortal than human beings. The ways we imagine them, at least, have been subject to constant change since their initial discovery in the early 19th century. Maybe, after my childhood encounter with dinosaur bones, every subsequent experience of them could not be without a trace of disappointment. For me, as a child, it was the gateway to a world that would be without social pressures and demands. “To be a dinosaur,” a phrase that I used in a late adolescent poem, meant simply to be myself. It turns out that dinosaurs, or at least their bones, have been, since their discovery, deeply implicated in the worlds of commerce and power politics. But my childhood experiences suggest to me that, if all the hype could be finally stripped away, something wonderful might remain. As Tom Rea has observed, since the early 20th century people have thought of museums of natural history as “temples to science, with the dinosaur exhibit as their central shrine.” Museums, especially those of that era, were modeled on old temples or churches, with their high ceilings, domes, and elaborate reliefs. They were, like churches, guardians of esoteric knowledge. This resemblance of museums to cathedrals is not simply a matter of accidental associations. It reflects the ideas of natural theology, which was a driving force behind early science and, though challenged by evolutionary theory, remains very influential today. This holds that the order of the natural world is proof of a conscious plan and, thereby, the existence of God. To study this order is to reveal part of the divine plan, an activity that should inspire reverence and awe. Religion linked the scientific communities with a larger public. 
In the words of Martin Rudwick: The popularization of science was formerly treated as a wholly one-way process … by which scientific pundits translated … their esoteric findings into more accessible language, with inevitable loss or distortion of content on the way. More recently, however, the process has come to be seen as being initiated as much from the ‘popular’ end as from the scientific. For one thing, science is dependent on sources of funding, which are heavily influenced by public perception. This, in turn, does much to determine the direction of research. Popularizations are also instrumental in recruiting young people to scientific vocations. In addition, scientists, whether they are conscious of it or not, cannot help but be influenced by the constant proliferation of images relating to their field in the popular media. In their capacities as employees of museums, companies, and even universities, many scientists must constantly engage with the public, as representatives of their profession. Furthermore, the way that scientists communicate with one another is now, inevitably, in large part through the popular media. Though professional journals continue to be important, they have always been slow and cumbersome. New discoveries are likely to be reported substantially before they can be written up in a formal way and subjected to peer review. The knowledge of a layperson may not compare in depth with that of a professional paleontologist, but it can be almost as up to date. Accordingly, we will be better able to understand the significance of dinosaurs to the contemporary world if we do not think of science as monolithic, much less as a “realm apart.” It would be more accurate to regard “science” as a vast area of human endeavor, requiring not only researchers but philosophers, web designers, artists, teachers, journalists, museum professionals, and so on. This contradicts the romantic image of the lone researcher engaged in a personal struggle for the truth, which will ultimately triumph over ignorance and superstition; an impression that is anachronistic at best. Today, most scientific papers have at least three authors, often many more. The connection to popular culture also limits the claim of science to objective truth, since it is intimately dependent on so many intangible, subjective, psychological, and otherwise contingent factors. Discoveries in physics are now almost impossible to visualize, even for researchers, but those in paleontology are easily translated, with just a little imagination, into very colorful images. Within a very short time of their discovery in the late 18th and early 19th centuries, people had an emotional relationship with dinosaurs that was as complex, ambivalent, multifaceted, and in some ways intimate as our bond with just about any living animal, including the dog and the cat. It was a relationship largely mediated by fantasy, like the relationship of the public to celebrities, yet no less authentic on that account. Dinosaurs have been featured in exhibitions, theme parks, novels, toys, movies, comics, logos, and all the other paraphernalia of popular culture. Most popular representations of dinosaurs ignore even the limits imposed by paleontology. More overtly scientific activities are also pervaded by showmanship, though here it takes subtler forms. Early discoverers of dinosaurs such as Gideon Mantell greatly exaggerated their size, appealing to the public’s taste for both grandeur and novelty. 
In the late 19th and early 20th centuries the search for huge bones became an arena of competition, for not only explorers but the industrialists and governments that backed them, and was, essentially, a form of trophy hunting. Since, even with highly sophisticated tools, it is possible to infer only so much information from bones and related objects, those who wish to reconstruct the appearance and habits of dinosaurs have plenty of scope for imagination. Most popular representations of dinosaurs ignore even the limits imposed by paleontology, while often incorporating a few recent discoveries in order to appear up to date. Our images of dinosaurs owe at least as much to the dragons and demons of medieval art—which, in turn, go back to archaic deities—as they do to fossils. These serpents were often associated with anachronistic beliefs or remote times, so dragon-slayers such as St George or Beowulf, like paleontologists of today, came across as promoters of modernity. We are not descended from dinosaurs, and our ancestors did not interact with them, outside of comic books and B-movies. But, precisely for those reasons, it is easier to consider their world as a mirror of the human condition. The fact that dinosaurs became extinct has made their story resonate with the apocalyptic traditions of the Zoroastrian, Judaic, Christian, and Islamic religions. Their size and power suggest empires and battles on an epic scale, perhaps even a sort of Armageddon. Even the current view that some dinosaurs survived to become birds suggests a sort of angelic elect that will be saved. But our apocalyptic fears have been secularized, and the meaning of dinosaurs has changed with them. In the late 19th and early 20th centuries, dinosaurs were often used to represent big business, though their eventual demise could seem like a proletarian revolution. Later, their apocalyptic associations might be used to express terror of a nuclear holocaust or of ecological collapse. In addition to the elemental appeal of great size and antiquity, the reason for the popularity of dinosaurs is that their symbolism is flexible enough to accommodate a vast range of meanings. They have been used to comment on human violence, innocence, wealth, industrialization, failure, modernity, tragedy, extinction, and far more. But none of these things really has much to do with dinosaurs in the end. We are simply imposing our own meanings on their endlessly mysterious lives. I will not preach against this, for exploiting other creatures as symbols is simply what human beings do, and I am no more exempt than anyone else. But, when we speak of dinosaurs essentially as cultural artifacts, we should remember, from time to time, that they were once, and still are, vastly more. *Boria Sax is an American author and lecturer and a teacher at Mercy College. He is the author of* Imaginary Animals, the Wondrous and the Human *and* Animals in the Third Reich: Pets, Scapegoats, and the Holocaust, *among other books.* * Reprinted with permission from* Dinomania: Why We Love, Fear and Are Utterly Enchanted by Dinosaurs *by Boria Sax, published by Reaktion Books Ltd. Copyright © 2018 by Boria Sax. All rights reserved.* *Lead image: ra2studio / Shutterstock*
true
true
true
If museums of natural history are temples to science, dinosaurs are their shrines.
2024-10-12 00:00:00
2018-12-14 00:00:00
https://assets.nautil.us…&ixlib=php-3.3.1
article
nautil.us
Nautilus
null
null
11,423,507
http://motherboard.vice.com/read/could-direct-digital-democracy-and-a-new-branch-of-government-improve-the-us
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,834,896
http://mashable.com/2012/04/12/instagram-worth-1-billion/
Why Instagram Was Worth $1 Billion to Facebook
Emily Price
That was a casual statement made last Thursday by James Pearce, head of mobile developer relations for Facebook to a small group of writers (including myself) during a lunch at Facebook HQ. I thought Pearce's statement was intriguing, but it got a whole lot more interesting Monday morning when Facebook dropped $1 billion to buy its biggest competitor in the mobile space: Instagram. Why Instagram? Instagram was beating Facebook at its own game, and the social network needed to stop it before it was able to do more. The photo-sharing app is essentially everything Facebook wants to be on your mobile phone. Facebook wants people using its mobile app to share photos of what they're doing with friends and to share their location – something Instagram users have no problem doing. "For years, we've focused on building the best experience for sharing photos with your friends and family," Zuckerberg said in the announcement Monday. "Now, we'll be able to work even more closely with the Instagram team to also offer the best experiences for sharing beautiful mobile photos with people based on your interests." Instagram was already huge. With the launch of its Android app last week it was poised to get even bigger, fast. While the company was valued at $500 million just before the Facebook buyout, it could certainly have grown to a $1 billion valuation on its own. Adding Android support also puts the photo-sharing app in the hands of much of the smartphone-carrying population. Add a desktop app into the mix – there are a few unofficial ones available already – and you've got a fully-fledged social network on your hands. "This is an important milestone for Facebook because it's the first time we've ever acquired a product and company with so many users," Zuckerberg said. "We don't plan on doing many more of these, if any at all. "But providing the best photo sharing experience is one reason why so many people love Facebook and we knew it would be worth bringing these two companies together." Facebook wasn't just buying an app. They were buying out the competition. The bigger picture To Facebook, being part of the mobile game means being one of the apps you're using on your phone most regularly. So while Instagram the app is a huge win for the company, the bigger win for Facebook is the staff of the company who built it. In the Instagram announcement, Zuckerberg noted Facebook's plans to "try to learn from Instagram's experience to build similar features into our other products" and that the company was "looking forward to working with the Instagram team, and to all of the great new experiences we're going to be able to build together." One thing Facebook is trying to build is a support system for web-based app developers. If you can run an app in your browser, then you can eliminate the App Store or Google Play store entirely. I could purchase the same version of Angry Birds to run on my iPhone as my friend using a Windows Phone. Developers could make one version of an app rather than one for each platform, lowering their cost and making it easier to get it in the hands of consumers. Facebook sends 60 million people to third-party apps every month. While the social network has certainly gotten a substantial number of users to use its iPhone and Android apps, the majority of people visiting Facebook on their mobile phones today are doing so through the mobile web. The company is throwing a lot of weight behind Ringmark, a mobile browser test suite.
The open-source project is attempting to set standards for how mobile apps access hardware and run on your phone, and give developers an idea of what mobile browsers their apps might run in. Facebook already has a pretty sizable number of backers for Ringmark, including huge browser manufacturers, handset makers, and app developers. With platform launch partners like Mozilla, Nokia, AT&T, Adobe, Netflix, Microsoft and Zynga, Facebook has all the right players in place to make it happen. And with Instagram, now it also has a popular app with a ton of users to bring over to the mobile web as well. "Millions of people around the world love the Instagram app and the brand associated with it, and our goal is to help spread this app and brand to even more people," Zuckerberg said in the announcement. Those additional people could potentially be those accessing the app via the mobile web. What it all means Integrated into Facebook, all those web-based apps could equal a ton of cash for the company – especially if it's the one powering the HTML5 web store where all those apps will be sold. Interesting to note: Neither Google nor Apple has agreed to partner with Facebook on the project. Pearce said he couldn't say why the two companies had declined to be part of the group. Both companies, however, do have huge app stores – huge sources of revenue for them – that could be threatened by the success of a rival web-based app store. Microsoft, currently trailing behind both companies in sales, is a part of the group. Developers currently shy away from creating apps specifically for Windows Phone. Web-based apps could come to Windows Phone at the same time they arrive on iOS. There's no denying that Facebook's purchase of Instagram is part of a bigger picture. The company has eliminated its competition while gaining a team that has proven it can create a popular social app out of nothing. It will be interesting to see over the next weeks, months, and years how that purchase plays out and what Facebook decides to do with the app, as well as the mobile web.
true
true
true
Why Instagram Was Worth $1 Billion to Facebook
2024-10-12 00:00:00
2012-04-12 00:00:00
https://helios-i.mashabl….v1647020839.jpg
article
mashable.com
Mashable
null
null
25,305,105
https://www.cnbc.com/2020/06/15/harvard-yale-researcher-future-success-is-not-a-specific-skill-its-a-type-of-thinking.html
Harvard lecturer: 'No specific skill will get you ahead in the future'—but this 'way of thinking' will
Vikram Mansharamani; Contributor
Many of us have been told that deep expertise will lead to enhanced credibility, rapid job advancement, and escalating incomes. The alternative of being broad-minded is usually dismissed as dabbling without really adding value. But the future may be very different: Breadth of perspective and the ability to connect the proverbial dots (the domain of generalists) is likely to be as important as depth of expertise and the ability to generate dots (the domain of specialists). The rapid advancement of technology, combined with increased uncertainty, is making the most important career logic of the past counterproductive going forward. The world, to put it bluntly, has changed, but our philosophy around skills development has not. Today's dynamic complexity demands an ability to thrive in ambiguous and poorly defined situations, a context that generates anxiety for most, because it has always felt safer to generalize. Just think about some of the buzzwords that characterized the business advice over the past 40 to 50 years: Core competence, unique skills, deep expertise. For as far back as many of us can remember, the key to success was developing a specialization that allowed us to climb the professional ladder. It wasn't enough to be a doctor, one had to specialize further, perhaps in cardiology. But then it wasn't enough to be a cardiologist, one had to specialize further, perhaps as a cardiac surgeon. And it wasn't just medicine, it was in almost all professions. The message was clear: Focus on developing an expertise and you'll rise through the ranks and earn more money. The approach worked. Many of today's leaders ascended by specializing. ## The future belongs to generalists But as the typical mutual fund disclaimer so famously states, past performance is no guarantee of future results. It's time to rethink our love affair with depth. The pendulum between depth and breadth has swung too far in favor of depth. There's an oft-quoted saying that "to a man with a hammer, everything looks like nails." But what if that man had a hammer, a screwdriver, and a wrench? Might he or she look to see if the flat top had a narrow slit, suggesting the use of a screwdriver? Or perhaps consider the shape of the flat top. Circle? Hexagon? Could a wrench be a more effective tool? And finally, the mere addition of these tools can encourage a better understanding of a problem. This is not to suggest that deep expertise is useless. *Au contraire*. Carrying a hammer is not a problem. It's just that our world is changing so rapidly that those with more tools in their possession will better navigate the uncertainty. To make it in today's world, it's important to be agile and flexible. ## What it means to be a generalist How does one do this? To begin, it's important to zoom out and pay more attention to the context in which you're making decisions. Read the whole paper, not just the section about your industry. Is your primary focus oil and gas? Study the dynamics affecting the retail sector. Are you a finance professional? Why not read a book on marketing? Think bigger and wider than you've traditionally done. Another strategy is to think about how seemingly unrelated developments may impact each other, something that systems thinkers do naturally. Study the interconnections across industries and imagine how changes in one domain can disrupt operations in another one. Because generalists have a set of tools to draw from, they are able to dynamically adjust their course of action as a situation evolves. 
Just think of how rapidly the world changed with the development of the Internet and wireless data technologies. Jeff Bezos was not a retail specialist who took on his competitors and won. He was a relative newcomer to retail but was able to adapt rapidly to seize a gigantic opportunity. ## Career success for generalists Many forward-looking companies look for multi-functional experience when hiring. This is essential for large organizations like Google, for example, where employees jump from team to team and from role to role. In fact, Lisa Stern Hayes, one of Google's top recruiters, said in a podcast that the company values problem-solvers who have a "general cognitive ability" over role-related knowledge. "Think about how quickly Google evolves," she said. "If you just hire someone to do one specific job, but then our company needs change, we need to be rest assured that the person is going to find something else to do at Google. That comes back to hiring smart generalists." If you're relatively new to the workforce, my advice is to manage your career around obtaining a diversity of geographic and functional experiences. The analytical capabilities you develop (e.g. basic statistical skills and critical reasoning) in the process will fare well when competing against those who are more focused on domain-specific skill. The one certainty about the future is that it will be uncertain. The rapid advancement of artificial intelligence and technological innovation have commoditized information. The skill of generating dots is losing value. The key skill of the future is, well, not quite a skill; it's an approach, a philosophy, and way of thinking — and it's critical you adopt it as soon as you're able. *Vikram Mansharamani**, PhD, is a Lecturer at Harvard University and author of the new book **"THINK FOR YOURSELF: Restoring Common Sense in an Age of Experts and Artificial Intelligence"** (HBR Press, 2020). Follow him on twitter **@mansharamani**.* **Don't miss:**
true
true
true
To make it in today's world of rapid changes and uncertainties, successful business leaders like Jeff Bezos prove it's better to be a generalist, rather than a specialist.
2024-10-12 00:00:00
2020-06-15 00:00:00
https://image.cnbcfm.com…40&w=1920&h=1080
article
cnbc.com
CNBC
null
null
36,294,047
http://blog.itdxer.com/2023/06/04/gradient-boosting-as-a-blind-gradient-descent.html
Gradient boosting as a blind gradient descent
null
# Gradient boosting as a blind gradient descent ## Introduction In the past decade, gradient-based methods enormously impacted the machine learning field. I have a hard time imagining the most impressive achievements in the field being possible without them. Even if you’re using some application powered by machine learning, there is a high chance that a gradient-based method is being used somewhere under the hood. Certain methods are not hiding that, and the relation to the gradient-based method can be clearly spotted just from the name alone. In this article, I want to focus on two methods: Gradient Boosting and Gradient Descent. Their apparent relation to the technique is not the only similarity that they share. There exists an interesting parallel between these two methods. Hopefully, by the end of this article, I will be able to convince you that gradient boosting can be seen as a “blind” gradient descent. To better understand what I mean by that, let’s first look at the figure below. The figure shows how two methods are trying to solve the same type of problem. The gradient descent tries to find a minimum of the function \(x^2\) (shown in blue, top-left) by iteratively updating the initial guess \(x_0\). The algorithm goes through 3 steps until we stop it at \(x_3\). Below the graph (bottom-left), we can see how value \(x\) changes after each gradient descent step. Gradient boosting’s side, on the other hand, doesn’t have a graph even though the parameter \(x\) is also being updated iteratively (bottom-right). The gradient boosting pursues the same goal, except it doesn’t have a function for which we want to find a minimum, and yet the steps it takes can get you closer to it. That’s what I mean by “blind” - gradient boosting optimizes the function it doesn’t see. It might sound strange at first since, at the very least, it’s not clear what problem gradient boosting is optimising for! Unlike gradient descent, the gradient boosting method must be “trained” to perform a particular task. Another way to say it is that it has to experience many actual gradient descent optimizations from a specific set of “similar” problems to be able to solve new problems in the future when the function might not be available. You can think of it in terms of the following analogy. Let’s say you have a robot that you can control remotely on Mars. Suppose you frequently take your robot for a drive through similar routes near the robot’s base. In that case, it should be possible for you to return the robot to the base even if it loses its ability to send you sensory information like video (although you might have audio). On the other hand, the task would be almost impossible for somebody without robot-driving experience in that particular area of Mars. The gradient boosting is quite similar to the robot driver. First, it learns from many similar gradient descent tasks and later performs optimisations when no information about the function is available. We will base our main discussion on the theoretical part of the XGBoost paper, but first, I believe reviewing a couple of mathematical concepts might be important. We start our discussion with one-dimensional optimisation problems. Next, we will cover a specific family of functions, namely sum of squared errors (SSE). These discussions would help us to build the mathematical intuition needed for the main topic. And at the end, we will spend time understanding how the XGBoost model works and conclude the article with the main topic.
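To make the left-hand side of that figure concrete, here is a minimal sketch (not from the original post) of the three gradient descent steps on \(f(x) = x^2\); the starting guess and the step size below are illustrative assumptions.

```python
# Minimal sketch of the gradient descent steps described above:
# minimising f(x) = x**2 by repeatedly stepping against the derivative.
# The starting point x_0 and the step size alpha are assumed values.

def f_prime(x):
    """Derivative of f(x) = x**2."""
    return 2.0 * x

x = 3.0        # assumed initial guess x_0
alpha = 0.4    # assumed step size

for t in range(1, 4):
    x = x - alpha * f_prime(x)   # x_t = x_{t-1} - alpha * f'(x_{t-1})
    print(f"x_{t} = {x:.4f}")
```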
## One-dimensional optimization ### Gradient descent Gradient descent is a method for finding a local minimum of the function by iteratively updating our initial guess of where the minimum could be. Typically we have a function \(\,f(x)\) and a starting guess which we call \(x_0\) (it doesn’t have to be a good guess). Furthermore, the function \(\,f\) must be differentiable; otherwise, we might be unable to compute the gradient at every point of the function, and the method won’t work. And since we’re focusing on one-dimensional optimisation we will use the term “derivative” instead of “gradient” in the following discussions since it’s a bit more precise. We start the optimization process by evaluating the derivative at the initial point \(\,x_0\), which we write as \(\,f’(x_0)\). The derivative points to the direction in which the function increases. Since we are interested in the minimum, it will be enough to focus on the opposite direction (i.e., negative of the derivative). Next, we take one step in that direction and see where we land. We keep repeating the process until we converge to a fixed point, or we get tired of looking for the minimum, which might not even exist. Mathematically it can be represented with a straightforward equation \[\begin{align} x_{t+1} &= x_t - \alpha f'(x_t) \\ &= x_t - \alpha g_t \end{align}\]where \(\,x_t\) represents our improved guess after \(t\) steps, and \(\alpha\) is a positive constant that controls the size of the steps we take after each iteration. In addition, the notation of the derivative has changed since we replaced it with the \(g_t = f’(x_t)\) (\(g\) stands for “gradient”), which implicitly assumes that the derivative is evaluated at a point which was obtained at step \(t\). Change may look unnecessary, but it will become more useful in the later sections. Two examples of the gradient descent optimisation process are in the image below. Both examples use the same function \(\,f(x)\), and both optimisations start at the same point \(x_0\). They only differ by the parameter \(\alpha\), which controls the step size. These examples highlight how sensitive our optimisation can be to the choice of \(\alpha\). In fact, for this example, if you pick any \(\alpha > 1\), the optimization won’t even converge to the minimum. On the other hand, if the \(\alpha\) is too small, it will take us forever to get to the minimum. The natural question to ask is whether we can avoid setting \(\alpha\) ourselves. Many algorithms can help us address the problem, and in the next section, we will focus on one of these algorithms. ### Newton’s method For many people, Newton’s method is introduced as a method that allows one to find an approximation of the root of a function \(\,f\). Instead, we will focus on the roots of the derivative function \(\,f’\). Roots of the derivative are also known as stationary points, and the local minimum is one of them. As with the gradient descent, the method is iterative, and with each iteration, we try to improve our guess about the minimum of the function. We can describe this family of iterative solutions with a simple equation \(\,x_{t+1} = x_t + \Delta_t\), where for the gradient descent \(\Delta_t=-\alpha f’(x_t)\). In addition, we can take our generalisation one step further. For example, we can view gradient descent as a method that locally approximates the function at point \(x_t\) with a line and follows the direction of the descent. 
We can also think of it as a Taylor series of function \(f\), trimmed after the first two terms \[f(x_t + \Delta_t) \approx f(x_t) + f'(x_t)\Delta_t\]Notice that by plugging \(\Delta_t\) from the gradient descent equation, we can show that value of this locally approximated function decreases. \[f(x_t -\alpha f'(x_t)) \approx f(x_t) - \alpha f'(x_t)^2 \le f(x_t)\]Please note the equation above doesn’t imply that \(\,f(x_{t+1}) \le f(x_t)\), since the approximation sign is non-transitive. Newton’s method is very similar to the gradient descent from this perspective. The only difference is that we consider the first three terms of the Taylor series rather than two. \[f(x_t + \Delta_t) \approx f(x_t) + f'(x_t)\Delta_t + \frac{1}{2}f''(x_t)\Delta_t^2\]We can compute the derivative with respect to the direction \(\Delta_t\) and find that it’s equal to zero when \[\Delta_t = -\frac{f'(x_t)}{f''(x_t)} = -\frac{g_t}{h_t}\]As in the previous section, the first and second-order derivatives were replaced with \(g\) and \(h\) variables (where \(h\) stands for “Hessian”). And with a known \(\Delta_t\) we arrived at the following iterative formula which people call “Newton’s method” \[x_{t+1} = x_t - \frac{g_t}{h_t}\]Here is what one step of the method might look like Now let’s go back and look again at the new update rule \[x_{t+1} = x_t - \frac{g_t}{h_t}\]Notice that our new update rule looks similar to the gradient descent update if we set \(\alpha = 1/h_t\). Except now, \(\alpha\) is no longer a constant and depends on time \(t\), which means that the step size can “adapt” to the changes over time. You might remember from the calculus class that the first- order derivative tells us how quickly the output of the function \(\,f\) changes at some point \(x_t\) if we start making infinitesimal changes to the input. Likewise, the second-order derivative tells us the same information, only now, about the derivative function \(\,f’\). In other words, it tells us how quickly a first-order derivative increases or decreases if we start making infinitesimally small changes to the point \(x_t\). To gain a better understanding, let’s consider two cases. First case: the second-order derivative is small at \(x_t\). It tells us that if we start moving to a new position \(x_{t+1}\), which is close to \(x_t\), then we might expect that a derivative \(g_{t+1}\) at the new position is very similar to the derivative \(g_t\) at last position \(x_t\). With gradient descent, our change between two points must be \[x_{t+1} - x_t = -\alpha g_t\]We can take another step and measure the difference between two positions \(x_{t+1}\) and \(x_{t+2}\) \[x_{t+2} - x_{t+1} = -\alpha g_{t+1}\]But previously, we said that since the second-order derivative is small, then \(g_t \approx g_{t+1}\) and we can find that \[\begin{align} x_{t+1} - x_t = -\alpha g_t &\approx -\alpha g_{t+1} = x_{t+2} - x_{t+1} \\ x_{t+1} - x_t &\approx x_{t+2} - x_{t+1} \end{align}\]The obvious implication is that we don’t need to take two steps. Instead, we can take one step but double our step size. Mathematically speaking, we get \[-2 \alpha g_t = 2(x_{t+1} - x_t) \approx x_{t+2} - x_{t}\]Remember that we started with the assumption that the second order derivative is small, and by following the gradient descent algorithm, we were able to derive that we can increase our step size and take one large step rather than a couple of small steps. Therefore, we conclude that if a second derivative is small, the step size must be large. 
Consider the second case: the second-order derivative is large at \(x_t\). In this case, we have quite the opposite expectation. We might expect that \(g_{t+1}\) might be very different from \(g_{t}\) even if \(x_{t+1} - x_{t}\) is small. In that case, we need to be more careful and take a smaller step in the direction towards which the first derivative points. Following this logic, we can conclude that we must select a small step size \(\alpha\) if the second-order derivative is large. I hope that at this point, it becomes clearer why using \(\alpha=1/h_t\) might be a good idea. For example, if \(h_t\) is very large, then \(\alpha\) must be very close to zero, just like in our previous example. On the contrary, if the \(h_t\) is small, it’s better to have \(\alpha\), which is very large. Consider the two graphs below The image shows two quadratic functions \(f_1(x)=x^2\) and \(f_2(x)=0.2x^2\). In addition, each graph highlights two points as well as their tangents. You can see that a graph on the right has a smaller change in first-order derivative (i.e., the angle between the tangents is smaller). The method looks like a noticeable improvement to the gradient descent, but there is one big problem. Some of you might have noticed it already, but for the others, let me ask you this, what will happen if \(h_t \le 0\)? The problem is apparent when \(h_t = 0\), but less evident with \(h_t \lt 0\). Remember that previously, we said that \(-g_t\) points to the direction along which function decreases, so the negative sign of the \(h_t\) will negate the effect and send us in the opposite direction. In general, the algorithm will search for the stationary point, which might not be a minimum, so it will be essential to add a restriction to ensure every stationary point is a minimum. Specifically, we will add a constraint that function \(f\) is strictly convex. Convexity of the function will ensure that \(h_t \gt 0\), which would help us to eliminate the problems mentioned above. ## Sum of squared error (SSE) Before we finally get into the details of the XGBoost paper, it will be important to cover one last topic. Specifically, we will be interested in the special families of the functions, which are quite often called sum of squared errors (SSE). ### Regular SSE Let’s imagine playing a simple game where we have \(N\) cards with numbers \(y_i\) on them, where \(i \in \{1, 2, …, N\}\). One card is being pulled from the deck randomly, and before we see it, we need to make a guess \(x\) of what the number on the card could be. The magnitude of the mistake can be judged by taking the square of the difference between the observed number \(y_i\) and our guess \(x\). Since we don’t know in advance which card will be pulled, we need to make a guess which is optimal for any situation. Mathematically speaking, we have the following problem \[\begin{align} \mathbf{\min_x} \, \sum_{i=1}^{N} (x - y_i)^2 \end{align}\]Notice that our function is quadratic in \(x\), which means we can follow the same optimisation steps as in Newton’s method section when we looked into the first three terms of the Taylor series, except there, we were looking at the minimum with respect to \(\Delta_t\) instead of \(x\). If we repeat the same steps from the previous section, we will get the following result \[x^* = \frac{1}{N} \sum_{i=1}^{N} y_i\]The solution implies that if we want to minimise SSE and we’re allowed to make only one guess, then the average of the observations is the best that we can hope for. 
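As a quick numeric check of the result above (a sketch added here, not part of the original article), the snippet below compares the SSE at the average of some made-up card values against nearby guesses; the card values themselves are purely illustrative.

```python
# Numeric check that the plain average minimises the sum of squared errors (SSE).
# The card values are made up for the example.

def sse(guess, values):
    return sum((guess - y) ** 2 for y in values)

cards = [1.0, 4.0, 7.0, 10.0]
best_guess = sum(cards) / len(cards)   # x* = (1/N) * sum(y_i)

for candidate in (best_guess - 0.5, best_guess, best_guess + 0.5):
    print(f"guess = {candidate:.2f}, SSE = {sse(candidate, cards):.2f}")
```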
### Weighted SSE We can take the problem further and consider a weighted SSE. In the context of the number-guessing game, certain cards are more likely to be pulled than others. For example, if the relative frequency of each card is \(w_i\), then we can modify the previous optimisation in the following way \[\begin{align} \mathbf{\min_x} \, \sum_{i=1}^{N} w_i (x - y_i)^2 \end{align}\]We can solve the new problem in the same way as before, and we will get a similar solution \[\begin{align} x^{**} &= \frac{\sum_{i=1}^{N} w_i y_i}{\sum_{i=1}^{N} w_i} \\ &= \sum_{i=1}^{N} \left(\frac{w_i}{\sum_{j=1}^{N} w_j}\right) y_i \\ \end{align}\]Notice that if we set \(w_i=1\), then we obtain our previous solution \(x^*\). The solution shows that for the new problem, we must consider the weighted average of the observations \(y_i\), which is a very intuitive extension of the previous solution. ### Regularised SSE Let’s assume that in addition to the weighted loss, we want to put an additional constraint on the value \(x\). For example, let’s say that in addition to making a guess \(x\), we would also have to pay an equal amount of money for that guess. So if we guess \(x=3\), we would have to pay three units of currency for it. This rule change would force us to make our guess closer to 0. We can enforce this behaviour with a soft constraint to our previous objective, and in the later sections, we will refer to it as “regularisation” \[\begin{align} \mathbf{\min_x} \, \sum_{i=1}^{N} w_i (x - y_i)^2 + \lambda x^2 \end{align}\]where \(\lambda\) is a positive constant that controls the strength of the regularisation (e.g., the larger the value, the stronger the regularisation effect will be). Now notice that we can think of the regularisation term as a new data point with index \((N+1)\) for which \(y_{N+1}=0\), \(w_{N+1}=\lambda\). We can apply solution \(x^{**}\) for the regularised weighted SSE as well, and we get \[\begin{align} x^{***} &= \sum_{i=1}^{N} \left(\frac{w_i}{\lambda + \sum_{j=1}^{N} w_j}\right) y_i \end{align}\]The key takeaway from this section is that whenever we encounter a squared error function in the following discussions, we always need to remember that some form of averaging will be part of the solution. And now, we finally have all the basic knowledge needed to understand the Gradient Boosting Decision Trees. ## Gradient Boosting Decision Trees Gradient Boosting Decision Trees (GBDT) is an algorithm that uses examples of inputs \(x_i \in \mathbb{R}^m\) and respective outputs \(y_i \in \mathbb{R}\) to learn a general mapping from \(x_i\) to \(y_i\) (i.e. \(y_i = F(x_i)\)). For example, we want to know the price \(y_i\) of a used item, given some information \(x_i\) about the item (type of a product, number of years in use, etc.). For typical real-world problems, a perfect mapping from \(x_i\) to \(y_i\) might not exist, in which case we want to get an approximation \(\widehat y_i = F(x_i)\), where \(\widehat y_i\) is “close” to \(y_i\). Of course, the interpretation of the word “close” would depend on the task, but in general, we will have some loss function which would take \(\widehat y_i\) and \(y_i\) as input and would tell us how bad the guess \(\widehat y_i\) (a.k.a. prediction) is compared to the actual value \(y_i\) (just like SSE in the number-guessing game). As with any other machine learning algorithm, GBDT restricts the space of choices of the function \(F\) to a specific family of functions. 
It constructs \(F\) from a sequence of \(K\) additive trees, which have the following form \[\widehat y_i = F(x_i) = \sum_{k=1}^{K} f_k(x_i)\] where \(\,f_k\) is the \(k\)-th decision tree. Each tree also has a special functional form which we will discuss in the next section.

### Decision Trees

A decision tree \(\,f_k\) partitions the input space \(X\) into a finite number of \(T_k\) regions. All samples within one particular region \(r\) will have the same output from the function \(f_k\). \[w_{kr} = f_k(x_i)\] where \(w_{kr} \in \{w_{k1}, w_{k2}, …, w_{kT_k}\}\). It’s important to emphasise that for our problem, the \(w_{kr}\) are one-dimensional quantities, although they don’t have to be in general. In addition, the partition boundaries are always axis-aligned: perpendicular to one dimension and parallel to the others. It’s rather easy to visualise these functions. For example, suppose \(x_i\) is a two-dimensional vector, and we constructed a tree \(\,f_k\) which partitions the space into \(T_k=4\) regions. Here is what such a space partition might look like.

From the definition and example above, it should be rather obvious what the function does. Specifically, it checks which of the partitions the point \(x_i\) falls into and returns the number \(w_{kr}\) associated with that partition. So, for example, in the image above, all points in the largest square at the top right will have an identical output, namely \(w_{k3}\). We assumed that the tree’s structure is known, but in practice, we also have to construct it from data. The XGBoost paper describes how the tree can be built, and to avoid having too many topics discussed in this article, I won’t be going into details. For now, it will be enough to have an intuition of the decision tree, but later in the article, we will have to return to this topic.

### Loss function

Space partitioning and predictions \(w_{kr}\) make sense only within the context of a particular problem. That’s why we must define our objective by introducing a loss function. XGBoost uses the following loss function \(\mathcal{L}\) \[\mathcal{L}(\phi) = \sum_{i=1}^{N} l(y_i, \widehat y_i) + \sum_{k=1}^{K} \Omega(f_k) \\ \Omega(f_k) = \gamma T_k + \frac{1}{2} \lambda \sum_{r=1}^{T_k} w_{kr}^2\] The function \(\,\mathcal{L}\) is more straightforward than it looks. We can break it down into simpler functions and understand them separately.

The function \(l\) measures how bad an approximation \(\widehat y_i\) is compared to \(y_i\). For example, it can be a squared error \(l(y_i, \widehat y_i) = (y_i - \widehat y_i)^2\), just like the one we saw in the SSE section. The function \(l\) must be strictly convex with respect to \(\widehat y_i\). If you recall, the strict convexity of the function \(l\) guarantees that the second-order derivative is positive, which will be essential for the final solution. The sum over all individual losses \(l(y_i, \widehat y_i)\) measures the loss on all available data, and we want to minimise it as much as possible.

The second part is a regularisation function \(\Omega(f_k)\) associated with each tree \(\,f_k\). The first part of the regularisation, \(\gamma T_k\), ensures that the model doesn’t just memorise our examples (e.g., we want to avoid having \(T_k=N\)). Memorisation would make it difficult to generalise the function to unseen inputs \(\,x'\). Therefore we need to penalise the number of space partitions in order to avoid having too many. That’s exactly what \(\gamma T_k\) does, and we can control the penalty with a hyperparameter \(\gamma\).
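To make these two ingredients a bit more tangible, here is a small sketch of my own (the representation and helper names are made up for illustration, not taken from XGBoost): a single tree stored as a list of axis-aligned regions with one output \(w_{kr}\) per region, together with its penalty \(\Omega(f_k) = \gamma T_k + \frac{1}{2}\lambda\sum_{r} w_{kr}^2\).

```python
import numpy as np

# A toy "tree" over a two-dimensional input space: T_k = 4 axis-aligned regions,
# one constant output per region. Each region is (x1_min, x1_max, x2_min, x2_max),
# and leaf_values[r] plays the role of w_kr.
regions = [
    (-np.inf, 0.0, -np.inf, 0.0),
    (-np.inf, 0.0, 0.0, np.inf),
    (0.0, np.inf, -np.inf, 0.0),
    (0.0, np.inf, 0.0, np.inf),
]
leaf_values = np.array([-1.5, 0.2, 0.7, 2.0])

def tree_predict(x):
    """Return the w_kr of the region that the point x falls into."""
    for r, (a, b, c, d) in enumerate(regions):
        if a <= x[0] < b and c <= x[1] < d:
            return leaf_values[r]
    raise ValueError("the regions must cover the whole input space")

def omega(leaf_values, gamma=1.0, lam=1.0):
    """Omega(f_k) = gamma * T_k + 0.5 * lambda * sum_r w_kr^2"""
    return gamma * len(leaf_values) + 0.5 * lam * np.sum(leaf_values ** 2)

print(tree_predict([2.0, 3.0]))  # 2.0 -- every point in the top-right region gets this output
print(omega(leaf_values))        # 4*1.0 + 0.5*1.0*(1.5^2 + 0.2^2 + 0.7^2 + 2.0^2) = 7.39
```

Every choice above (the split locations, the leaf values, \(\gamma=\lambda=1\)) is arbitrary; the point is only that a tree is a lookup table over regions, and that \(\Omega\) can be computed from the region count and the leaf values alone.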
The larger the \(\gamma\), the fewer partitions we would want to have. This regularisation allows us to introduce a trade-off between the complexity of the tree and its performance on the target task. If we introduce one additional partition to the tree, the regularisation part of the loss increases exactly by \(\gamma\) (we use \(T_k+1\) instead of \(T_k\)). In this case, for the partition to be beneficial, the overall loss \(\sum_{i=1}^{N} l(y_i, \widehat y_i)\) needs to decrease by more than \(\gamma\); otherwise the additional partition is not worth having.

We can see that the last regularisation term, \(\lambda \sum_{r=1}^{T_k} w_{kr}^2\), also depends on the number of partitions \(T_k\): the more partitions we have, the stronger the effect of the regularisation will be. Unlike the previous \(\gamma T_k\) term, the main purpose of this regularisation term is not to restrict the number of partitions but rather to introduce a strong bias of the output of a partition towards zero when there are very few samples within the partition. Basically, the fewer samples we have within a partition, the more we will want to bias its output toward zero. In the later sections, we will see how to derive and interpret the optimal \(w_{kr}\), and we will see that it aggregates some “information” within the partition. Since statistics computed from a small sample are highly unreliable, it is less risky to return numbers close to zero, which is the main reason for having this term in the regularisation function.

### Symmetry breaking

Remember that the trees are combined additively, which means that the order in which the individual predictions are added is unimportant. In addition, individual tree predictions \(w_{kr}\) can sometimes be adjusted such that the overall prediction \(\widehat y_i\) is not affected (e.g., use \(w_{kr}+1\) and \(w_{(k+1)r}-1\) for all \(r\)). These types of symmetries typically create a problem: when many different configurations give exactly the same result, the optimisation becomes harder and the selected solution might end up suboptimal. We can solve the problem by breaking the symmetry between solutions.

The approach described in the paper follows sequential learning, meaning trees are trained one after another, so we no longer have the previously mentioned problems. First of all, we can make a recursive definition of \(\widehat y_i\) \[\widehat y_i^{(k)} = \widehat y_i^{(k-1)} + f_{k}(x_i)\] With the sequential learning approach, we need to consider \(K\) loss functions, one for each tree in the sequence. We get the following objective for the \(k\)-th tree \[\begin{align} \mathcal{L}^{(k)} &= \sum_{i=1}^{N} l\left(y_i, \widehat y_i^{(k)}\right) + \Omega(f_k) \\ &= \sum_{i=1}^{N} l\left(y_i, \widehat y_i^{(k-1)} + f_{k}(x_i)\right) + \Omega(f_k) \\ \end{align}\] where \(\widehat y_i^{(k-1)}\) is known and \(\widehat y_i^{(0)}=0\) (although the initial value could be any real number). Each new tree is added as an adjustment to the sum of the previously learned \(k-1\) trees. Specifically, our goal is to correct \(\widehat y_i^{(k-1)}\) with a new tree \(f_{k}\) so that the overall loss \(\mathcal{L}^{(k)}\) gets smaller.

### Approximation of the loss function

The final solution in the XGBoost paper requires us to simplify the loss \(\mathcal{L}^{(k)}\). We want to apply the same trick we did in the “Newton’s method” section.
Specifically, we make a Taylor series expansion of each function \(l\) around a fixed point \(c\) and trim it after three terms \[l(y_i, x) \approx l(y_i, c) + g_i (x - c) + \frac{1}{2} h_i (x - c)^2\] where \(g_i\) and \(h_i\) are the first and second-order derivatives of \(l\) with respect to the second argument, evaluated at \(c\). If we set \(c = \widehat y_i^{(k-1)}\), \(x = \widehat y_i^{(k-1)} + f_k(x_i)\) and plug them into the loss \(\mathcal{L}^{(k)}\), we will get (after dropping the terms \(l(y_i, \widehat y_i^{(k-1)})\), which do not depend on \(f_k\)) \[\mathcal{L}^{(k)} \approx \sum_{i=1}^{N} \left[ g_i f_k(x_i) + \frac{1}{2} h_i f_k(x_i)^2 \right] + \Omega(f_k)\] Recall that \(\,f_k(x_i)=w_{kr}\) is a one-dimensional variable, which means that if we assume that the structure of the tree is known (i.e., we know precisely how it partitions the space), then all we need to do is to find a \(w_{kr}\) for each partition such that the loss is minimised. Again, that’s highly similar to Newton’s method, and we can apply the same solution, but there is another way to solve the problem. We can rearrange the terms by completing the square and find that our approximate loss function is a regularised and weighted SSE \[\mathcal{L}^{(k)} \approx \sum_{i=1}^{N} \frac{1}{2} h_i \left( f_k(x_i) + \frac{g_i}{h_i} \right)^2 + \Omega(f_k) + C\] where the term \(C\) is independent of the \(\,f_{k}(x_i)\) terms (i.e. \(C=-1/2\sum_{i=1}^{N}g_i^2/h_i\)).

### Optimal \(w_{kr}\)

Although \(\mathcal{L}^{(k)}\) is a regularised and weighted SSE, we cannot quite apply the same solution as the one we derived in the corresponding section previously. Remember that our tree \(\,f_k\) partitions the space into multiple regions, and each region \(r\) has its own prediction \(w_{kr}\). It means that if the tree makes \(T_k\) partitions, then we can have \(T_k\) unique predictions, but in our discussion of the SSE, we assumed that only one prediction is possible. So to use the result, we can focus on the losses within each partition. We can rewrite our approximate loss in the following way \[\begin{align} \mathcal{L}^{(k)} &\approx \sum_{t=1}^{T_k} \left[ \sum_{i \in I_t} \frac{1}{2} h_i \left( w_{kt} + \frac{g_i}{h_i} \right)^2 + \frac{1}{2} \lambda w_{kt}^2 \right] + \gamma T_k + C \\ &= \sum_{t=1}^{T_k} \mathcal{L}^{(k)}_t + \gamma T_k + C \end{align}\] where \(I_t\) is the set of all samples \(x_i\) which are inside of the partition \(t\). Each sample inside of \(I_t\) must have an identical prediction \(w_{kt}\), which means that the loss of the partition \(t\) is exactly a weighted and regularised sum of squared errors for which we’re allowed to make only one constant prediction, namely \(w_{kt}\). And because each partition can make any prediction independently of the predictions in other partitions, we can optimise each \(\mathcal{L}^{(k)}_t\) independently, which in turn means that we can apply the result from the SSE section here. If we say that \(h_i\) is a weight and \(-g_i/h_i\) is the target (the common factor of \(1/2\) in the weights and the regulariser cancels), and we optimise with respect to \(w_{kt}\), then we get our solution \[\begin{align} w_{kt}^{***} &= \sum_{i \in I_t}\left(\frac{h_i}{\lambda + \sum_{j \in I_t} h_j}\right)\left(-\frac{g_i}{h_i}\right) \\ &= \frac{-\sum_{i \in I_t} h_i \frac{g_i}{h_i}}{\lambda + \sum_{i \in I_t} h_i} \\ &= \frac{-\sum_{i \in I_t} g_i}{\lambda + \sum_{i \in I_t} h_i} \end{align}\]

## Relation to Newton’s method

Before we tie everything together, let’s briefly recap what we’ve done so far.

- We started with the loss function \(\mathcal{L}\), which measured how close the output of the GBDT, \(\widehat y_i\), is to the expected output \(y_i\) across all of the available examples.
- We created \(K\) partial losses \(\mathcal{L}^{(k)}\), one associated with each tree.
- The optimisation is done sequentially, so that \(\mathcal{L}^{(k-1)}\) has to be optimised before we optimise \(\mathcal{L}^{(k)}\).
- The \(\mathcal{L}^{(k)}\) loss was approximated with a quadratic function.
In addition, we split the tree losses \(\mathcal{L}^{(k)}\) into losses associated with each partition \(t\) and named them \(\mathcal{L}^{(k)}_t\).
- We discovered that \(\mathcal{L}^{(k)}_t\) can be approximated as a regularised and weighted SSE, which is the same type of function that we’ve encountered before with the number-guessing game.

Recall that in the number-guessing game, we had to make one prediction before the number was revealed. Another way to say it is that we wanted to make one guess that would be as close as possible to all available numbers. With the partition loss \(\mathcal{L}^{(k)}_t\), the role of those numbers is played by the quantities \(-g_i/h_i\) instead. The new quantity should look familiar, since it’s exactly the formula for the update step which we encountered in the section where we discussed Newton’s method \[x_{t+1} = x_t - \frac{g_t}{h_t}\] A single partition of the \(k\)-th tree will contain \(|I_t|\) points \(x_i\) (the quantities \(x_i\) and \(x_t\) are not related). For each point \(x_i\), we have a prediction \(\widehat y_i^{(k-1)}\) from the previous \((k-1)\) trees and a loss associated with the prediction, namely \(l(y_i, \widehat y_i^{(k-1)})\). If we wanted to optimise the function \(l\) with respect to our prediction using Newton’s method, the first adjustment step for \(\widehat y_i^{(k-1)}\) would have been precisely \(-g_i/h_i\). For real-world problems, it’s likely that \(-g_i/h_i\) will be different for different points within the partition. Since we have to make only one prediction per partition that optimises the weighted SSE, it’s natural that we predict the average Newton step over the data points within the partition. So instead of optimising each function \(l\) individually, we collect multiple functions which are associated with “similar” points \(x_i\) into one partition group and use one average step for all of them.

But why do we need to use a weighted average? And why is the weight equal to the second-order derivative of the function \(l\)? At first glance, this looks a bit arbitrary. Recall from our discussion about Newton’s method that the second-order derivative \(h_i\) tells us how predictable the change in the first-order derivative will be if we take one gradient descent step. So when the second derivative is large, we wanted to take smaller steps, since the direction of the gradient descent steps is more likely to change. Another way to say it is that we need to be more cautious when the curvature is large, because giant update steps can significantly and unpredictably affect the optimisation. So if some data points have large second-order derivatives, we want these “sensitive” points to have a more significant effect on the shared step: a smaller, more cautious step costs little for the less sensitive points, whereas a step that is optimal only for the insensitive points could do real damage to the sensitive ones.

Surprisingly, \(\mathcal{L}^{(k)}_t\) is not the only place where we encounter a connection with Newton’s method. In fact, we had to do quite a bit of extra work to get to the solution, even if it helped us deepen our understanding of GBDT. Let’s look again at the solution \(w_{kt}^{***}\), which we derived from the approximated loss.
\[w_{kt}^{***} = \frac{-\sum_{i \in I_t} g_i}{\lambda + \sum_{i \in I_t} h_i}\] Notice that this is precisely the update step of Newton’s method, taken from the starting point \(w_{kt}=0\), for the original partition loss (\(\mathcal{L}^{(k)}_t\) without the Taylor series expansion) \[\sum_{i \in I_t} l\left(y_i, \widehat y_i^{(k-1)} + w_{kt}\right) + \frac{1}{2} \lambda w_{kt}^2 + \gamma\] And we didn’t have to take the approximation ourselves, since it’s already part of Newton’s method. So all we had to do was take the first and the second derivative of the loss, which is easy since the derivative of a sum with a finite number of terms is always the sum of the derivatives.

## The “Blind” Gradient Descent

At this point, it’s important to return to the discussion of how the partitions of the tree are constructed. The process is done recursively by dividing partitions in the following way

- We start with one partition, which includes all \(N\) points.
- Next, we take all possible splits on each dimension of the points \(x_i\) and see which split into two new partitions minimises the overall loss the most. If such a split exists, we add the new partitions (thanks to the regularisation term of the loss, keeping the partition unsplit can be better than any candidate split).
- Repeat step 2 for each available partition until we either cannot minimise the loss further or meet some other stopping criterion.

Since the tree construction minimises the loss, we will ideally end up with a tree whose partitions not only group points \(x_i\) based on how close they are to each other, but are also incentivised to group points that have similar \(-g_i/h_i\) quantities (since making a single prediction for the entire group will be much easier). Let’s also consider what this tells us about the entire GBDT learning process.

First, we start with no trees, and every input gets a constant prediction, for example, \(\widehat y_i^{(0)}=0\). Next, we want to learn the first tree, and to do that, we compute \(N\) loss functions with respect to the constant prediction \(\widehat y_i^{(0)}\), namely \(l(y_i, \widehat y_i^{(0)})\). In addition, we derive what the Newton’s method update step would be for each loss separately. Instead of taking \(N\) individual updates, we group the inputs \(x_i\) into a fixed number of partitions and replace their Newton update steps with the weighted average step within the partition. It’s important to stress that a tree will try its best to correlate similarity among the inputs \(x_i\) with similarity among the update steps \(-g_i/h_i\). We can finally see how the first tree learns to do the first step of Newton’s method on various similar one-dimensional problems. It’s reasonable that the method wants to come up with a step that will be a good compromise for all inputs within the partition and, hopefully, will also be a good step for a previously unseen input \(\,x'\) which falls within the partition.

Next, with the help of the first learned tree, we can make a new prediction \(\widehat y_i^{(1)}\) for each point \(x_i\). Now the prediction is no longer constant, but the number of distinct values will be bounded by the number of partitions of the first tree. The new and refined predictions \(\widehat y_i^{(1)}\) can be thought of as new positions of each one-dimensional loss function \(l(y_i, \widehat y_i^{(1)})\) in an optimisation process which has to be continuously refined. So in the next iteration, we repeat the same steps from the previous paragraph to derive a second tree as well as new predictions \(\widehat y_i^{(2)}\).
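The iteration we just walked through is compact enough to sketch in code. Below is my own minimal illustration (made-up helper names, the squared-error loss, and a fixed, hand-chosen partitioning instead of a learned tree structure), showing how one round computes \(g_i\) and \(h_i\) at the current predictions and then applies \(w_{kt}^{***} = -\sum_{i \in I_t} g_i \,/\, (\lambda + \sum_{i \in I_t} h_i)\) inside each partition.

```python
import numpy as np

def boosting_round(y, y_hat, partition_ids, lam=1.0):
    """One 'blind' Newton step with the squared-error loss l(y, p) = (y - p)^2.

    partition_ids[i] says which partition (leaf) sample i falls into; in a real GBDT
    the partitioning itself would also be learned from (g, h), which we skip here.
    """
    g = 2.0 * (y_hat - y)         # first derivatives at the current predictions
    h = 2.0 * np.ones_like(y)     # second derivatives (constant for the squared error)

    leaf_values = {}
    for t in np.unique(partition_ids):
        idx = partition_ids == t
        leaf_values[t] = -g[idx].sum() / (lam + h[idx].sum())   # w_kt***

    # apply the averaged Newton step of each partition to its samples
    return y_hat + np.array([leaf_values[t] for t in partition_ids])

y = np.array([1.0, 1.2, 0.8, 5.0, 5.5])
y_hat = np.zeros_like(y)                    # \hat y^{(0)} = 0
partition_ids = np.array([0, 0, 0, 1, 1])   # pretend the tree split off the last two points

y_hat = boosting_round(y, y_hat, partition_ids)
print(y_hat)  # first three move towards ~1, last two towards ~5 (shrunk slightly by lambda)
```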
The process continues until all \(K\) trees are created. We can say that the \(k\)-th tree learns to perform the \(k\)-th optimisation step of Newton’s method. Newton’s optimisation can be recursively unrolled into the following sum \[x_{t} = x_0 -\frac{g_0}{h_0} -\frac{g_1}{h_1} - ... - \frac{g_{t-1}}{h_{t-1}}\] And the prediction from GBDT can be unrolled into a similar sum \[\widehat y_i^{(K)} = \widehat y_i^{(0)} + f_1(x_i) + f_2(x_i) + ... + f_K(x_i)\] Each \(f_k\) comes up with a reasonable suggestion for the \(k\)-th update of the optimisation based on its previous experience with similar optimisation problems.

Let’s finish our discussion with a final visualisation, which shows how the whole GBDT process works on a toy problem. The graph below visualises a two-dimensional dataset in which each point is classified into one of two groups: blue and green. Next, we encode each class with binary labels (zero for blue and one for green); then we can use, for example, the logloss in order to have an optimisation function for each data point \[l(y, \widehat y) = -y\ln(\widehat y) - (1-y)\ln(1-\widehat y)\] where \(y \in \{0, 1\}\) and \(\widehat y \in (0, 1)\). We can use the GBDT algorithm with \(\lambda=1\) to learn how to perform the first three optimisation steps.

The visualisation below shows three optimisation iterations of the “blind” gradient descent learned by three additive decision trees. As we know, each tree partitions our input space, which can be seen on the top-left graph. In addition, we can see all data points from the previous graph and one unknown data point (coloured in red). For the unknown data point, we assume that \(y\) is unknown, and therefore we cannot construct \(l(y, \widehat y)\). The right graph shows the unknown loss, our initial guess (red point), and the optimisation step selected by the GBDT from the \(k\)-th tree. Returning to the scatter plot on the left, you can also notice that the partition within which the unknown data point falls is highlighted. We randomly picked twelve data points with known loss functions from the highlighted partition and visualised them in one row below the previous two graphs. Each image shows the loss function, the initial prediction, the optimisation step selected by GBDT, and the optimal step according to Newton’s method.

There is more information “hidden” in these graphs, which I intentionally don’t want to reveal. Instead, I hope you will be able to use the graphs to test your intuition about the GBDT and discover these “hidden” details on your own.

- What is the correct class for the unknown sample? (blue or green)
- Can you find training samples on the graphs which gradient boosting cannot assign to the correct class?
- Can you find cases where the tree partitioning is the most sensitive to outliers?
- Look at the partition of the first tree. Can you say why the vertical lines of the partitions are placed where they are? Can you get better partitions by shifting them more to the left/right?
- Why are all 12 samples so similar in the first iteration? How would you compare them to the second and third iterations? Can you explain why they become more diverse?
- Look at the partitions of the third tree. Can you explain why the tree doesn’t try to separate blue from green points better?
- Look at the partition of the second tree. Can you explain why the partition at the top-right corner doesn’t include only green points?
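If you would like to poke at a similar toy problem yourself, below is a deliberately minimal GBDT sketch of my own (it is not the article’s code and not XGBoost): it uses the squared-error loss, borrows scikit-learn’s regression tree purely to propose the partitions, and then overwrites each leaf with the regularised Newton value \(-\sum_{i \in I_t} g_i / (\lambda + \sum_{i \in I_t} h_i)\) derived above. The exact split search with the \(\gamma\) penalty, shrinkage, and the other refinements from the paper are intentionally left out.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbdt(X, y, K=5, lam=1.0, max_depth=2):
    """Minimal Newton-style GBDT with the squared-error loss l(y, p) = (y - p)^2."""
    trees, leaf_tables = [], []
    y_hat = np.zeros_like(y, dtype=float)    # \hat y^{(0)} = 0

    for _ in range(K):
        g = 2.0 * (y_hat - y)                # first derivatives at the current predictions
        h = 2.0 * np.ones_like(y)            # second derivatives

        # An off-the-shelf regression tree proposes the partitioning: fitting it to the
        # targets -g/h with sample_weight=h mimics the weighted SSE view from above.
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, -g / h, sample_weight=h)

        # Overwrite each leaf with the regularised Newton value -sum(g) / (lam + sum(h)).
        leaf_ids = tree.apply(X)
        table = {leaf: -g[leaf_ids == leaf].sum() / (lam + h[leaf_ids == leaf].sum())
                 for leaf in np.unique(leaf_ids)}

        y_hat = y_hat + np.array([table[leaf] for leaf in leaf_ids])
        trees.append(tree)
        leaf_tables.append(table)

    return trees, leaf_tables

def predict(trees, leaf_tables, X):
    pred = np.zeros(len(X))
    for tree, table in zip(trees, leaf_tables):
        pred += np.array([table[leaf] for leaf in tree.apply(X)])
    return pred

# A tiny smoke test on synthetic data: the training error shrinks as trees are added.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

trees, tables = fit_gbdt(X, y, K=5)
print(np.mean((predict(trees, tables, X) - y) ** 2))
```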
true
true
true
null
2024-10-12 00:00:00
2023-06-04 00:00:00
null
null
null
null
null
null
24,369,371
https://www.cnn.com
Breaking News, Latest News and Videos | CNN
null
Toronto police open hate crime investigation after Jewish girls’ school hit with gunfire in 2nd incident this year

Fugitive father is on the run with 3 children in one of the world’s wildest regions and has evaded police for years
true
true
true
View the latest news and breaking news today for U.S., world, weather, entertainment, politics and health at CNN.com.
2024-10-12 00:00:00
2024-10-12 00:00:00
null
website
cnn.com
CNN
null
null
38,691,145
https://www.economist.com/graphic-detail/2023/12/14/which-city-is-the-cheapest-in-the-world
Which city is the cheapest in the world?
null
Graphic detail | Back again # Which city is the cheapest in the world? ## The cost of living there is a little over a tenth of what it is in New York
true
true
true
The cost of living there is a little over a tenth of what it is in New York
2024-10-12 00:00:00
2023-12-14 00:00:00
https://www.economist.co…31216_WOT965.png
Article
economist.com
The Economist
null
null
10,328,970
http://www.wsj.com/articles/apple-acquires-artificial-intelligence-startup-vocaliq-1443815801
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
29,767,089
https://www.economist.com/business/2017/03/30/ameerpet-indias-unofficial-it-training-hub
Ameerpet, India’s unofficial IT training hub
null
# Ameerpet, India’s unofficial IT training hub ## The Hyderabad neighbourhood’s IT courses cost less than $400 for six months UNIVERSITY campuses can take a while to get going in the mornings, as students recover from extra-curricular antics. Contrast that with Ameerpet, a squeezed neighbourhood of Hyderabad that has become India’s unofficial cramming-college capital. By 7.30am the place is already buzzing as 500-odd training institutes cater to over 100,000 students looking to improve their IT skills. If there are ivory towers here, they are obscured by a forest of fluorescent billboards promising skills ranging from debugging Oracle servers to expertise in Java coding to handling Microsoft’s cloud. This article appeared in the Business section of the print edition under the headline “Cramville”
true
true
true
The Hyderabad neighbourhood’s IT courses cost less than $400 for six months
2024-10-12 00:00:00
2017-03-30 00:00:00
https://www.economist.co…401_WBP002_0.jpg
Article
economist.com
The Economist
null
null
30,094,274
https://www.vox.com/the-goods/22895463/peloton-stock-price-bike-cost-production
Peloton’s big whoops
Emily Stewart
Peloton is in a pickle. # Peloton’s big whoops The pandemic was the best thing that happened to Peloton until it was the worst. In 2020, the at-home digital exercise company could not get its products out the door fast enough. It surpassed $1 billion in quarterly sales during the last three months of the year, and leadership fretted its profits could take a hit as it invested to try to keep pace with pandemic-induced Pelotonpalooza. People waited weeks and months for their bikes and treadmills to be delivered. Alongside companies such as Zoom and Clorox, Peloton’s stock was a solid stay-at-home bet on Wall Street. But nothing lasts forever. Fast-forward to 2022, and Peloton finds itself in quite a different spot. The good news: It has caught up on the supply side and finally has bikes and treadmills readily available. The bad news: The demand it’s trying to catch up to is no longer there. CNBC’s Lauren Thomas reports that Peloton is going to hit pause on production for a while. Emily Stewart’s column exposes the ways we’re all being squeezed under capitalism. Sign up here. “Peloton has been plagued with a supply-demand mismatch since the pandemic started,” said Simeon Siegel, an analyst at BMO Capital Markets who has been bearish on the company for quite some time. “The problem has basically inverted.” Over the course of the Covid-19 pandemic, many people have found out more about the supply chain than they ever imagined. American consumers are accustomed to things just showing up at their doorsteps or on the shelves when they want, no questions asked. Everyone knows about supply and demand theoretically, but everything we’re learning about them in practice now is, to quote the meme, against our will. The public health crisis has thrown many parts of the economy out of whack, and it’s made doing business more complicated across industry after industry. There have been plenty of well-documented kinks in the supply chain. (Who knew there’d be this surge of interest in shipping containers?) There’s been all sorts of weirdness in demand as well. (Remember when everyone wanted yeast?) “The problem is that when you start believing your own stories, you start making decisions accordingly” What makes doing business extra tricky in pandemic times is it’s hard for companies to know how long anything will last, and whether and how much to adjust. Peloton took the optimistic view; it thought its pandemic surge would last forever. Now the company and its investors are learning the hard way that this may not be the case. “Forecasting a business, at the end of the day, is tied to supply and demand as opposed to emotional content. Peloton has one of the most powerful and best marketing departments I’ve ever seen in an industry; their storytelling is unparalleled,” Siegel said. “The problem is that when you start believing your own stories, you start making decisions accordingly.” ## I love Peloton, but there is a limit on how many Peloton bikes I will buy, which is one I bought my Peloton bike in June 2020. Things were not good for me in the sense that I barely left my home, which is why they were so good for Peloton. I spent upward of $2,000 on it and waited two months for it to be delivered. Part of it came semi-broken, and I was terrified I’d have to wait another two months for it to be replaced and for me to finally live my stationary bike dreams. (I did not.) I really like the Peloton a lot. I use the bike itself or one of the other exercise classes available through my subscription most days. 
I also share my subscription with a coworker, meaning our combined activity makes me look very fit. I have my favorite instructors, and I talk about Peloton an embarrassing amount. I also am not going to buy another bike, nor am I going to buy the treadmill, both of which Peloton spent the 2021 holiday season advertising to me in my email inbox — often at a discount — heavily. Part of what has happened with Peloton, Siegel explained, is that the company misunderstood the demand surge that took place at the start of 2020. “The primary question surrounding Peloton was did the pandemic pull forward demand, or did it expand the audience side?” he said. “Based on all the data we had been seeing throughout the pandemic, it seemed like this was a pull forward. And the company, on the other hand, viewed this as an expansion and built accordingly.” In other words, people who would have bought Pelotons in 2021 or 2022 instead got them in 2020, but more people didn’t necessarily want them overall. Peloton had a lot of things break its way at the start of the pandemic — it had home equipment and content ready to go, for example, compared to competitors such as SoulCycle. And unlike basically all other gyms and fitness studios, it didn’t depend on in-person attendance. Still, those breaks weren’t really enough. The company has tried to bring more customers into the mix by playing around with the prices of its products, which are definitely high. It started cutting prices on its original bike, released a less expensive treadmill, and last year slashed the price of its bike even further. As the Wall Street Journal notes, it looks as though some of those price cuts worked for a while — sales of its products excluding treadmills jumped five times over after it lowered the cost of its bike in August 2021. Still, sales were down for that quarter, and cheaper products mean lower margins. Peloton is going to raise prices this year: As of January 31, it is charging $250 for the delivery and setup of some of its bikes and $350 for the delivery and setup of some of its treadmills. People who would have bought Pelotons in 2021 or 2022 instead got them in 2020 It’s been a rough several months for Peloton. Concerns about the safety of its treadmills caused it to recall some of its products last year. The last time it reported earnings in November, it cut its annual revenue forecast by up to $1 billion. More recently, it weathered the *And Just Like That *debacle (Mr. Big dies after a Peloton workout). The company recovered quickly, releasing an ad with actor Chris Noth — only to pull it after sexual assault allegations against the actor came out the same week. Now, it’s reportedly hired consultancy McKinsey to help it sort out its cost structure, which could entail halts on production and layoffs. Its market cap peaked at about $50 billion; on Friday, it was just under $10 billion. In an open letter to customers and employees on Thursday, Peloton CEO John Foley seemingly disputed the production halt, saying the company is “resetting our production levels for sustainable growth.” He did acknowledge tumultuous times, including that layoffs may be on the table. However, he declined to go into much detail, citing a “quiet period” before the firm’s next earnings report on February 8. What’s apparent is that Peloton’s leadership has been doing a bit of guessing its way through the pandemic, and some of its guesses have been wrong. 
Foley himself has recognized that the demand surge made the company “a little undisciplined” in its decisions. Securities and Exchange Commission filings show that executives and insiders at Peloton sold nearly $500 million worth of stock in 2021, before its stock price took a big hit, though most of those were prescheduled sales. That means some executives, including Foley, aren’t feeling Peloton’s stock price drop as much as they might have. Peloton did not respond to a request for comment for this story. ## Planning in a pandemic: Hard (but high-paid executives are supposed to be good at it) Peloton’s pickle is not Peloton’s alone. For a lot of businesses, it’s been really hard to figure out what’s a blip in the pandemic economy and what’s an enduring shift. You can look at the lumber industry as an example. One thing that contributed to the surge in lumber prices and lumber shortage last year was that when the pandemic hit, there was a huge jump in demand for lumber as people decided to build houses or take on home improvement projects. Producers were skeptical about whether the heightened demand would persist, so they were slow to ramp up production to try to catch up. In hindsight, some would probably have done some things differently. At the same time, imagine if back in March of 2020 Purell’s parent company had decided to build eight more factories. That would have been way too much. Getting the supply chain right and matching demand is always a key part of any business. Every company, like Peloton, has had to navigate grounds shifting underneath them during the pandemic. Arzum Akkas, an assistant professor of operations and technology management at Boston University’s Questrom School of Business, explained that the No. 1 rule in business forecasting is that forecasts are always wrong. There’s generally a range within a certain level of confidence decision-makers target. Covid-19 has introduced an X-factor that makes that range, and therefore the margin for error, wider for businesses. “They have to make the range bigger in terms of forecasts, and they have to consider bigger risks for supply and plan accordingly,” Akkas said. “They cannot plan their operations assuming that I’m going to get what I want when I want it.” While striking the right balance has been harder during the pandemic, it’s not impossible — or, at least, plenty of companies have managed to avoid the situation Peloton finds itself in. Akkas pointed to Walmart and Amazon as examples of businesses that have successfully managed pandemic choppiness in terms of operations. Peloton’s “operations are not strong. Whose muscles are strong? Walmart, they are masters in operations plans,” she said. “The companies that we don’t hear about on the news get it right. If Amazon got it wrong, we would hear about it.” So what’s next for Peloton? It’s hard to say. Perhaps it will manage to turn things around, cut costs, unfortunately via layoffs and store closures, and figure out how to get more people buying its products again. Maybe there’s a deal to be made or a prospective buyer out there, such as Apple. In a nutshell, the pandemic was the best thing to happen to Peloton until it was the worst Peloton’s fanbase is certainly strong. It has 6.2 million members, including digital-only subscribers (meaning people who use its app but didn’t buy any equipment). 
As of November, it had nearly 2.5 million connected fitness subscribers, meaning people who own one of its products and also pay to use its fitness content, like spin and running classes. Foley also noted that Peloton has a less than 1 percent churn rate, meaning people stay once they’re signed up. Still, it’s hard not to look at boutique fitness trends that came before Peloton, including those in the spin space like SoulCycle and FlyWheel, and wonder whether Peloton won’t face the same fate. Apart from its missteps, Peloton might also just be a victim of timing — timing that was first very advantageous and now not so much. In a nutshell, the pandemic was the best thing to happen to Peloton until it was the worst, Siegel said. “The company misread the demand cues, engaged in very heavy spending, and the pandemic helped whittle away at Peloton cash while, ironically or not, spotlighting how great connected fitness was and helping their competitors fundraise,” he said. Perhaps had Peloton not been so poised to meet the moment two years ago, or had the pandemic not happened, it might be in a better spot today. “I would be talking about a very strong growth company that had drastically less brand awareness but a drastically better revenue arc that was coming into its own and with a competitive set that was drastically less capitalized.” Peloton hasn’t imploded, so if you’re a member, don’t panic. Also, your favorite instructors like Cody Rigsby and Robin Arzon are going to be fine. Many of the people who were there early on made a lot of money off Peloton’s pre-pandemic IPO as the stock surged, assuming they cashed out on at least some of it. Plus, the trainers have gained fame in their own right. During workouts, one of Rigsby’s common refrains is, “Get your life together.” It’s advice he should maybe give to his employer. ## Most Popular - The one horrifying story from the new Menendez brothers doc that explains their whole caseMember Exclusive - Take a mental break with the newest Vox crossword - AI companies are trying to build god. Shouldn’t they get our permission first? - The resurgence of the r-wordMember Exclusive - Sign up for Vox’s daily newsletter
true
true
true
The pandemic was the best thing that happened to Peloton until it was the worst.
2024-10-12 00:00:00
2022-01-24 00:00:00
https://platform.vox.com…031413613&w=1200
article
vox.com
Vox
null
null
33,282,498
https://www.washingtonpost.com/technology/interactive/2022/tiktok-popularity/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
27,931,242
https://pedestrianobservations.com/2021/07/23/the-leakage-problem/
The Leakage Problem
Alon Levy
# The Leakage Problem I’ve spent more than ten years talking about the cost of construction of physical infrastructure, starting with subways and then branching on to other things, most. And yet there’s a problem of comparable size when discussing infrastructure waste, which, lacking any better term for it, I am going to call *leakage*. The definition of leakage is any project that is bundled into an infrastructure package that is not useful to the project under discussion and is not costed together with it. A package, in turn, is any program that considers multiple projects together, such as a stimulus bill, a regular transport investment budget, or a referendum. The motivation for the term leakage is that money deeded to megaprojects leaks to unrelated or semi-related priorities. This often occurs for political reasons but apolitical examples exist as well. Before going over some examples, I want to clarify that the distinction between leakage and high costs is not ironclad. Sometimes, high costs come from bundled projects that are costed together with the project at hand; in the US they’re called *betterments*, for example the $100 million 3 km bike lane called the Somerville Community Path for the first, aborted iteration of the Green Line Extension in Boston. This blur is endemic to general improvement projects, such as rail electrification, and also to Northeast Corridor high-speed rail plans, but elsewhere, the distinction is clearer. Finally, while normally I focus on construction costs for public transport, leakage is a big problem in the United States for highway investment, for political reasons. As I will explain below, I believe that nearly all highway investment in the US is waste thanks to leakage, even ignoring the elevated costs of urban road tunnels. **State of good repair** A month ago, I uploaded a video about the state of good repair grift in the United States. The grift is that SOGR is maintenance spending funded out of other people’s money – namely, a multiyear capital budget – and therefore the agency can spend it with little public oversight. The construction of an expansion may be overly expensive, but at the end of the day, the line opens and the public can verify that it works, even for a legendarily delayed project like Second Avenue Subway, the Berlin-Brandenburg Airport, or the soon-to-open Tel Aviv Subway. It’s a crude mechanism, since the public can’t verify safety or efficiency, but it’s impossible to fake: if nothing opens, it embarrasses all involved publicly, as is the case for California High-Speed Rail. No such mechanism exists for maintenance, and therefore, incompetent agencies have free reins to spend money with nothing to show for it. I recently gave an example of unusually high track renewal costs in Connecticut. The connection with leakage is that capital plans include renewal and long-term repairs and not just expansion. Thus, SOGR is leakage, and when its costs go out of control, they displace funding that could be used for expansion. The NEC Commission proposal for high-speed rail on the Northeast Corridor calls for a budget of $117 billion in 2020 dollars, but there is extensive leakage to SOGR in the New York area, especially the aforementioned Connecticut plan, and thus for such a high budget the target average speed is about 140 km/h, in line with the upgraded legacy trains that high-speed lines in Europe replace. Regionally, too, the monetary bonfire that is SOGR sucks the oxygen out of the room. 
The vast majority of the funds for MTA capital plans in New York is either normal replacement or SOGR, a neverending program whose backlog never shrinks despite billions of dollars in annual funding. The MTA wants to spend $50 billion in the next 5 years on capital improvements; visible expansion, such as Second Avenue Subway phase 2, moving block signaling on more lines, and wheelchair accessibility upgrades at a few stations, consists of only a few billion dollars of this package. This is not purely an American issue. Germany’s federal plan for transport investment calls for 269.6 billion euros in project capital funding from 2016 to 2030, including a small proportion for projects planned now to be completed after 2031; as detailed on page 14, about half of the funds for both road and rail are to go to maintenance and renewal and only 40% to expansion. But 40% for expansion is still substantially less leakage than seen in American plans like that for New York. **Betterments and other irrelevant projects** Betterments straddle the boundary between high costs and leakage. They can be bundled with the cost of a project, as is the case for the Somerville Community Path for original GLX (but not the current version, from which it was dropped). Or they can be costed separately. The ideal project breakdown will have an explicit itemization letting us tell how much money leaked to betterments; for example, for the first Nice tramway line, the answer is about 30%, going to streetscaping and other such improvements. Betterments fall into several categories. Some are pure NIMBYism – a selfish community demands something as a precondition of not publicly opposing the project, and the state caves instead of fighting back. In Israel, Haifa demanded that the state pay for trenching portions of the railroad through the southern part of the city as part of the national rail electrification project, making specious claims about the at-grade railway separating the city from the beach and even saying that high-voltage electrification causes cancer. In Toronto, the electrification project for the RER ran into a similar problem: while rail electrification reduces noise emissions, some suburbs still demanded noise walls, and the province caved to the tune of $1 billion. Such extortion is surplus extraction – Israel and Toronto are both late to electrification, and thus those projects have very high benefit ratios over base costs, encouraging squeaky wheel behavior, raising costs to match benefits. Keeping the surplus with the state is crucial for enabling further expansion, and requires a combination of the political courage to say no and mechanisms to defer commitment until design is more advanced, in order to disempower local communities and empower planners. Other betterments have a logical reason to be there, such as the streetscape and drainage improvements for the Nice tramway, or to some extent the Somerville Community Path. The problem with them is that chaining them to a megaproject funded by other people’s money means that they have no sense of cost control. A municipality that has to build a bike path out of its own money will never spend $100 million on 3 km; and yet that was the projected cost in Somerville, where the budget was treated as acceptable because it was second-order by broader GLX standards. **Bad expansion projects** Sometimes, infrastructure packages include bad with good projects. The bad projects are then leakage. 
This is usually the politically hardest nut to crack, because usually this happens in an environment of explicit political negotiation between actors each wanting something for their own narrow interest. For example, this can be a regional negotiation between urban and non-urban interests. The urban interests want a high-value urban rail line; the rest want a low-value investment, which could be some low-ridership regional rail or a road project. Germany’s underinvestment in high-speed rail essentially comes from this kind of leakage: people who have a non-urban identity or who feel that people with such identity are inherently more morally deserving of subsidy than Berlin or Munich oppose an intercity high-speed rail network, feeling that trains averaging 120-150 km/h are good enough on specious polycentricity grounds. Such negotiation can even turn violent – the Gilets Jaunes riots were mostly white supremacist, but they were white supremacists with a strong anti-urban identity who felt like the diesel taxes were too urban-focused. In some cases, like that of a riot, there is an easy solution, but when it goes to referendum, it is harder. Southern California in particular has an extreme problem of leakage in referendums, with no short- or medium-term solution but to fund some bad with the good. California’s New Right passed Prop 13, which among other things requires a 2/3 supermajority for tax hikes. To get around it, the state has to promise somthing explicit to every interest group. This is especially acute in Southern California, where “we’re liberal Democrats, we’re doing this” messaging can get 50-60% but not 67% as in the more left-wing San Francisco area and therefore regional ballot measures for increasing sales taxes for transit have to make explicit promises. The explicit promises for weak projects, which can be low-ridership suburban light rail extensions, bond money for bus operations, road expansion, or road maintenance, damage the system twice. First, they’re weak on a pure benefit-cost ratio. And second, they commit the county too early to specific projects. Early commitment leads to cost overruns, as the ability of nefarious actors (not just communities but also contractors, political power brokers, planners, etc.) to demand extra scope is high, and the prior political commitment makes it too embarrassing to walk away from an overly bloated project. For an example of early commitment (though not of leakage), witness California High-Speed Rail: even now the state pretends it is not canceling the project, and is trying to pitch it as Bakersfield-Merced high-speed rail instead, to avoid the embarrassment. **The issue of roads** I focus on what I am interested in, which is public transport, but the leakage problem is also extensive for roads. In the United States, road money is disbursed to the tune of several tens of billions of dollars per year in the regular process, even without any stimulus funding. It’s such an important part of the mythos of public works that it has to be spread evenly across the states, so that politicians from a bygone era of non-ideological pork money can say they’ve brought in spending to their local districts. I believe there’s even a rule requiring at least 92% of the fuel tax money generated in each state to be spent within the state. The result is that road money is wasted on low-growth regions. From my perspective, all road money is bad. 
But let’s put ourselves for a moment in the mindset of a Texan or Bavarian booster: roads are good, climate change is exaggerated, deficits are immoral (German version) or taxes are (Texan version), the measure of a nation’s wealth is how big its SUVs are. In this mindset, road money should be spent prudently in high-growth regions, like the metropolitan areas of the American Sunbelt or the biggest German cities. It definitely should not be spent in declining regions like the Rust Belt, where due to continued road investment and population decline, there is no longer traffic congestion. And yet, road money is spent in those no-congestion regions. Politicians get to brag about saving a few seconds’ worth of congestion with three-figure million dollar interchanges and bypasses in small Rust Belt towns, complete with political rhetoric about the moral superiority of regions whose best days lay a hundred years ago to regions whose best days lie ahead. **Leakage and consensus** It is easy to get trapped in a consensus in which every region and every interest group gets something. This makes leakage easier: an infrastructure package will then have something for everyone, regardless of any benefit-cost analysis. Once the budget rather than the outcome becomes the main selling point, black holes like SOGR are easy to include. It’s critical to resist this trend and fight to oppose leakage. Expansion should go to expansion, where investment is needed, and not where it isn’t. Failure to do so leads to hundreds of billions in investment money most of which is wasted independently for the construction cost problem. I’m not sure you are right about roads. Rural America having good roads means you could link up things like the national parks with full cama style overnight buses and that would probably be more plausible than improving rail infrastructure to do 100mph By full cama I mean the top class as per https://www.omnilineas.com/argentina/tips/categories/ The main links were completed generations ago – already in the 1950s, spending was a mix of speed and capacity, and since then it’s been almost entirely capacity. Today, even mixed projects like I-69 are pretty rare, and most spending is adding lanes, upgrading interchanges, and other projects that are sold as ways to reduce traffic congestion where in most of the areas where the money is spent there isn’t any. I’m more thinking the secondary routes which have passing lanes and are well built in America to allow a 65mph average speed vs 40mph on secondary routes in Britain. And if there are medium sized leisure destinations I’m sure those roads could support a good bus service – especially a sleeper service that is difficult to self-drive. I agree massive junctions is a waste for towns and cities with 10 minute traffic jams from 8-9am only. “In this mindset, road money should be spent prudently in high-growth regions, like the metropolitan areas of the American Sunbelt or the biggest German cities. It definitely should not be spent in declining regions like the Rust Belt, where due to continued road investment and population decline, there is no longer traffic congestion.” It actually makes a lot of sense to spend money where growth is happening. The issue is further upstream – the policies that determine where growth is happening. Specifically, in the US, land use policy prohibits large coastal cities from growing in population despite the massive demand (as measured in housing prices) for such growth. 
Remove the land use restrictions and the massive growth will be on the coasts rather than in the Sun Belt. And to the extent there is an honest demand for “prudent spending in high-growth regions”, that demand will focus on coastal cities and to a large extent urban transit. (As usual, about 90% of transit policy is just derivative of land use policy) Sure, but Texas, Florida, and Georgia will continue growing no matter what – these are not poor areas. Zoning-constrained coastal cities are barely expanding freeways – LA occasionally adds lanes at high costs, but new freeways are prohibitively expensive. I don’t think that’s true. First of all, I have seen statistics that Texas at least is a poor area, the workers it is attracting are low-income on average. Second of all, US population increase is dropping like a stone, and if the coastal cities grow substantially, there will be no remaining increase to be distributed among the Sun Belt cities. Not everyone wants to live on the coast. The sun belt can grow at the same time as the coasts. Even if the coasts fixed their zoning so enough things actually got built, that wouldn’t destroy the value of the sun belt for some people. I am highly confident that the potential demand in the major coastal metropolises is enough to absorb the entire country’s population growth. That might mean those metropolises together increase from 30% to 40% of the US population. “Not everyone wants to live on the coast”, sure, but I’m pretty sure 40% do, as long as a large share of jobs and cultural attractions are on the coasts, which they are and will be. What if the coasts are under water? They won’t be. Look at the actual topographic lines. There are more than 3,000 counties or equivalents in the U.S. Half of the population lives in 143 counties. https://www.census.gov/library/stories/2017/10/big-and-small-counties.html Houston, Dallas, and Austin all have higher metropolitan per capita incomes than the US average. They’re not muchricher than the US average, especially (for Houston) this side of the oil price collapse of 2014-5 – beforehand Houston was richer than Chicago – but they are richer. They’re especially richer if you only look at labor income, which exclusion mostly screws dedicated retirement regions like Florida, but even on all-source income, they’re richer. They’re not Vegas or Phoenix, whose growth is genuinely just a push factor out of California.Florida is now the third most populous US state (surpassing New York). Is Florida (no state income tax) getting a “push factor” from people leaving (high tax) New York, New Jersey, Connecticut? P.S. Florida is the “most coastal” of the 48 contiguous states. It just splits it between the Atlantic Ocean and the Gulf Of Mexico. Alaska is in a class by itself. Florida is the state least prepared for climate change Climate change predictions say that Texas, Florida, and to a lesser extent Georgia are gonna have to lose population — no real choice, Texas lacks the water and Florida lacks the landmass once it floods — and it’s going to move to the Great Lakes region. That is, the Rust Belt. Expect it. I mean, Phoenix will probably depopulate first, but if you’re long-term planning, you know what’s coming. Water is not an issue – solar/wind will power desalination if necessary (this is much cheaper than moving the people). I’m not sure why you think Georgia will suffer, little of the population lives near the coast. Similarly Phoenix, they will just keep their air conditioning on. 
Florida will have real issues though. @Eric2: “Water is not an issue – solar/wind will power desalination ” Seriously? Quite apart from the energy implications of large-scale desalination (which would ratchet up carbon pollution even more), desalination has to happen on the coasts. Perhaps this is not so much an issue for Florida but it would be a big thing for Texas. It would also ramp up the amount of energy used, for all that pumping uphill and over long distances. That would then require an entire new long-distance water infrastructure to be built. The surprising thing about the US is how reliant upon groundwater* so much of it is. That means that there is no existing mains water infrastructure for large parts of the country. The water issue in the US is absolutely huge and it boggles the mind, yet very little is being done about it. Denial and NIMTOO is the current response. Unless you’re talking about desalination (ie. reverse osmosis) as a means of recycling waste water in cities. But Texas and Florida won’t consider any limits on consumption, say like CA, so what chance for asking them to use recycled water from their sewers? In fact LA puts its purified waste water back into the aquifer to help recharge it, or at least slow its depletion. It is also another (big) step in purification in that it will be decades or centuries before that water re-emerges. (BTW, this is another aspect in which CA is progressive.) *The Ogallala and Floridan aquifers in Texas and Florida are being depleted at approximately ten times their natural rates of replenishment. It’s the very definition of unsustainability. By resorting to massive and ever-deeper pumping projects the crunch may be forestalled for another few decades but that is part of the problem, in that it leads to denial and failure to act today. Incidentally, while climate change gets blamed for the Florida problem of saltwater intrusion into its aquifers (and sometimes bubbling up into the urban areas), actually the main cause is overdrawing on the aquifer. In a low land like Florida it means that at some point it causes saltwater (from ocean or marshes) to flow into the fresh, ruining the entire aquifer. @Eric2 I forgot to mention that the collapse of that Miami apartment block is probably related to this same issue. The entire substrata of Miami (and most of Florida) is limestone which has pros and cons. It is the reason Florida has a gigantic freshwater aquifer to draw upon. Without it Florida would not be developed. But then overdrawing on it creates the saltwater intrusion problem and the sinkhole problem. Oh, and why Miami is sinking (much more than climate change is causing the ocean to “rise”; watch out for when climate change really kicks in). The news reports about the building collapse are all about deaths and then about building standards and another huge American problem, lack of maintenance. However I still believe they’ll discover that geology was the ultimate cause of collapse. Unless there is a big coverup because the implications are both huge and unthinkable, with no quick fixes. Here’s another story about water shortages. As they explain, the groundwater is not deep because of the geology and so there is no solution from digging deeper, and this too is why it is running dry (it simply is not a big store like many aquifers; it runs off into the ocean). But it is curious. 
I thought that part of Northern California coast was quite wet–and when I check it out, it is: Fort Bragg (closest big town to Mendocino, and where they currently truck water from) gets 1100mm and 111 days of rain. For comparison, most of Southern CA gets 300-400mm and ≈35d rain; Seattle gets 948mm and Vancouver BC gets 1460mm and 160d rain. But it’s not even mentioned in the article! So a solution is right there (unless Mendocino is in some kind of weird rain shadow?) The thing is that >40% of household water use is toilet + laundry and this can be served by untreated rainwater. Perhaps it is not as big a solution for a restaurant but still. Here (the driest continent) it has been determined that to cope with recurring droughts household water tanks should be >30,000L, ideally >50,000L, but these things are relatively cheap (these days the replumbing of wc and laundry will probably cost more!). Is this another case of the American mind being incapable of thinking beyond existing experience? Say what you will about O’Toole but when it comes to the urban growth boundaries he is comically right. UGB are even stopping transit oriented development in places like Portland ironically(look on google satellite how much farm land is close to downtown Portland) but the urbanists will never get rid of them because its part of their religion. You mention about money being ‘wasted on low-growth regions’. What do you make of the argument that by maximising ROI you end up repeatedly boosting the same place and thereby creating or entrenching low-growth regions? Say (vastly oversimplified) a new metro line boosts productivity by 5%. If you build it in city A (avg salary $50000) you boost the economy by $2500/head. If you build it in city B (avg salary $100000) you boost the economy by $5000/head. If you are looking to maximise ROI you will always build it in city B. Now city B is even richer, and hence even more deserving of ROI-maximising investment in the future. This is not a theoretical question – it has been identified in the UK where salaries are higher in London and so this effect has emerged over the last few decades of transport funding decisions assessed primarily by benefit-cost ratio. The effect has even caused changes to investment appraisal methods. This argument seems to take cities as the moral unit of evaluation. If you take human individuals as the moral unit of evaluation, and if there are human individuals whose choice to locate in City A or City B is determined by the investment in housing and infrastructure in the cities, then I think the question disappears. If some cities can absorb investments in ways that expand more opportunities to more people, and others can’t, then it seems in many ways akin to the issue where some cities need expensive mitigation of flooding or drought conditions in order to accommodate the same number of new people as other cities in more favorable locations, so that we should be focusing development in the places where people can better be accommodated rather than splitting it equally between a good place for people and a less good (or actively bad) place for people. Most people don’t think like that though. Most, if not all, people have something (support networks, families, non-work commitments, language) tying them to a particular place, so they do not simply choose to locate in city A or city B based on financial reward. Humans do not work how you suggest (it’d be much simpler if we did!). 
And repeatedly boosting one area of a polity at the expense of others causes negative externalities. You get poor opportunities in peripheral areas, often to the point where they end up shrinking with all the crap that causes – Detroit is the obvious example of a city crippled by shrinking pop. You also get huge housing costs in the boosted core (which of course makes moving from core to periphery much harder). There are other ways this sort of bad outcome can be wrought apart from with infrastructure; the euro has done the same in making Germany overly competitive at the expense of e.g. Greece. Or you get the young and able leaving and those left behind voting for fascists Detroit was harmed by its suburbs, not due to “ROI” investment, and using the fact the British has one of the world’s most destructive housing policies doesn’t seem like a good excuse either. Japan has basically allowed anyone to move to Tokyo and Osaka, such that they remain stagnant or weak growing despite a shrinking country. I didn’t say Detroit was harmed by ROI investment, I said it’s a good example of some of the problems a shrinking city can cause. And the high-rents-in-the-boosted-core phenomenon is hardly unique to the UK (I don’t think I even implied it was). Japan also – quite famously! – does low-ROI investment in declining rural areas. Detroit isn’t on the periphery. It’s at the center of the country’s 14th largest metro area. Can you name the fourteenth largest metro area in any other country? Would you recognize the name of the town at its center in most places? Easily, the same way I checked that Detroit has apparently moved down to 14th in the 2020 estimates, with Wikipedia. 14th in the U,K. is Edinburgh, 14th in Canada is Oshawa, Matsuyama in Japan, Montpelier in France. 14th in Austrtalia is a wide place in the road in Queensland. It’s Metropolitan Statistical Area is the same size as Hamburg’s EMR and it’s Combined Statistical Area is the same size as Stuttgart or Munich. Give Canada a whirl https://en.wikipedia.org/wiki/List_of_census_metropolitan_areas_and_agglomerations_in_Canada And to be fair Edinburgh and Montpellier are places I think lots of people will have heard of. The US doesn’t do benefit-cost analyses and spends road money disproportionately in low-growth regions – Texas is a massive net donor of fuel taxes (I’m told on Twitter it’s the only state where the 92% limit is binding). And it’s not even low-growth as in zoning-constrained coastal cities, which don’t build new roads, but depopulating Rust Belt regions. And yet the same effect is obtained there. And even with this tax subsidy, Americans are certain that the Rust Belt didn’t fail but was failed by outside forces, like globalism, high finance, and other stand-ins that a few generations ago would just have been called the Jews. Today it’s not PC to say that so the populist grifters say the same with the word Jews stripped away. In Upstate New York most people believe they’re tax-subsidizing the city when the opposite is the case. The Ruhr had and has many of the same problems the old industrial regions of Silesia, northern England or the US have and had. But the Ruhr is doing better and its reputation now even includes stuff like “culture” and “greenery”. What’s different in the Ruhr? Local policies? Federal policies? @Herbert I’d say both local and federal policies worked quite well together. 
In transport they do a much better job than the North of England does integrating the centres together with trains and using light-rail within the cities. Cologne Stadtbahn kills the Manchester Metrolink in ridership/km. And well, the Midwest only has one regional rail line worthy of the name. That can’t solve all your problems but it helps. If you look at the macro-economic story of, say, the UK vs Germany, the story is that postwar Germany had an open economy with new co-ordination institutions like industry-wide unions, whereas the UK focused on preserving the 19th century industrial structure of outer Britain until Thatcher, when she put them out of their misery (90% of the jobs in the staples were gone by 1970). Germany negotiated a shift to niche-industrial products with high-value complementary services; for instance, many historic German textile firms from the Ruhr survive, often as service sector firms in textiles. First world economies preserve industrial sectors with services, in which (outside finance) Germany actually outperforms the UK. Good passenger rail integrates regional labour markets. Investing in downtowns attracts service sector professionals. That said the Ruhr is still in relative decline; the one thing they got wrong was universities. The Midwest has toxic racial politics, crappy local government design etc; only Minnesota is doing well. But it was New England levels of white until the 2000s and had its big city, its university town and its state capital in the same urban area. If Chicago were not governed so badly it would be fine and could maybe drag Wisconsin and Indiana with it. Ohio is screwed. Just because Bochum university is a crime against architecture doesn’t mean it’s a bad university. Also iirc the Stadtbahn tunnels were planned in part to give former miners new jobs digging them… It didn’t work and now their maintenance is a drain on municipal budgets Having an underpowered or overpowered university system is a product of two things, quality and per capita scale, not just in the absolute but in relative terms within a country. The Ruhr is at a disadvantage per capita versus Hesse/Baden-Wurttemberg for sure and probably Berlin too. As for tunnel problems: writing from an English perspective, that’s the kind of problem we’d love to have. I’ve just been comparing lightrail/tram systems and while UK lightrail doesn’t suck at American levels, it’s still bad by Central European standards. @Alon Levy, do you have a better phrase for Denmark/Austria/Switzerland etc than “German periphery”? I know you haven’t used that pejoratively since you’ve been clear that these countries have made the best use of German innovations in public transport. I used Central Europe, but I don’t think that’s very good either. Nobody can agree where Central Europe is, but most agree it somehow contains Germany Ohio’s problem is that it’s still dominated by rural interests, and the demographics aren’t improving. If the cities of Ohio (Cleveland, Cincy, Columbus, Dayton, Toledo, Akron/Canton, and Youngstown) had sufficient demographic size and political clout, it would have a chance of making sane decisions, but they’re just slightly less in population than the delusional rural-identity people (mostly racists). You should see what people in Illinois south of Chicago think about the city. They’re all convinced they’d be better off as another state. Said state of course would be in heavy competition with Mississippi for the poorest in America.
On the other hand, USA will be better off if Illinois splits into two states, as Chicago will no longer get dragged down by the burden. Ditto for NYC and upstate New York. State boundaries in 1800 already didn’t quite make sense (because they were colonial vestiges) but coming into the 21st century, the old state boundaries have become so stupid and they are hampering America’s growth. Or do like the Chinese: any megacity becomes its own province (state). This could solve permanently the Dem’s senate difficulties. Or does it? No, that would hurt Democrats. Illinois now elects two D senators, if split then it would elect two Ds and two Rs. Similarly with NY, CA, etc. I think in general most state splits that aren’t crazy gerrymanders would help Republicans, because Republicans are the majority in most US territory. Independently this would also have the bad effect of making Senate races noncompetitive nearly everywhere. @Eric2 Hold on. While it may net to zero in some states, of which Illinois is probably one example, that is probably not the case in CA where the LA and Bay Area would result in at least a 2-seat, maybe 3-seat gain. Likewise, Texas is approaching the cusp of going purple and it is mostly happening in their big cities, so any gain in a city is a net gain. Likewise FA and probably GA where the current Dem senators are in a very precarious situation. Not to mention completely new Dem seats in Washington-DC. Would NYC and Boston result in net zero for those states? The US continues to urbanise and so even some of the current situations of big red cities will shift purple then blue over time. Someone has probably already modelled this …. As it happens, I crayoned state boundary redrawing last year! I drew this without looking at who voted what; this applet says Biden wins 15/30 states on this map, the same share he actually won. You can do small tweaks in either direction, for example if you move West Texas back to Texas then the resulting Arizona-NM state is blue, and if you donate enough of Minnesota to Colorado and Illinois it flips back to blue, whereas if you do some Appalachian tweaks you can make the Piedmont state red and with enough expansion you can flip Michigan and Upstate NY to red. @Alon re electoral boundaries Right. Interesting but merging states or even subregions is a non-starter. Years ago I read something similar but specific to the state most obviously crying out to be divided: California. While NoCal and SoCal is the simplest it has obvious political difficulties but the alternative–I think of between 3 and 5 so as to give something to conservatives, but then that isn’t realistic either. In its own way it is a form of gerrymandering though equi-population electorates in CA would always favour Dems. My idea is a bit different in having it city-based. It is often said that city government is the only one that is at all functional in the US. Hence the Bloomberg association of city mayors and a world version too. It seems to have overcome the usual petty partisan divisions and the participants reported satisfaction. It is suggestive that giving them more power over national affairs might be more productive than current arrangements. And, inevitably the world is urbanising such that we”ll all be city-slickers before long. Maybe we’ll eventually revert closer to the feudal arrangement where all those outside the city walls will have to be associated with their nearest city to have any political input ie. protection from their overlord! 
Here it is: I don’t think city governance in the US is especially functional. The only place I can concretely point out to you where city > state is New York City, but only because the governor happens to be Cuomo; de Blasio is bad and Adams is a corrupt machine politician who is so traumatized by almost losing the election he wants to change the electoral system. In the US about the only places where government even approximates something that works are ones where a) there’s interpartisan competition and b) the Democrats are currently winning (Congress, Virginia, Minnesota, etc.) – without such interpartisan competition politics becomes just a contest of who can dole out jobs to supporters. In France, of note, the lack of local empowerment has been a big benefit to intercity transport and housing, and some of the worst abuses are local, not national. Mandatory pork at schools is not state policy – the food halls are at the commune level, so from time to time a racist mayor makes Jewish and Muslim children eat pork and gets told to stop by state courts. So even if you have to have subnational government, it should be at high enough a level that there’s meaningful ideological politics, more regional than local. (P.S. speaking of the article you link to, the reference to John Yoo as “UC Berkeley law professor” rather than the more significant “Bush torture lawyer” is just bad.) @Alon We’re just talking about different quanta of dysfunctionality. The point of that article about dividing CA into 6 states (!) was that it was in response to its perceived total dysfunction and ungovernability; and here it is going into another Recall election. This, the richest and most progressive state, not just in the USA but world. We’ve seen federal politics in a total mess, at least since GWB if not Reagan, maybe Nixon. And still Biden will do nothing about the absurd and anti-democratic filibuster, and predictions are that the usual gridlock will resume after next midterms (!?). Obviously NYC is hardly a posterchild for great government but it looked pretty good in US terms during Bloomberg’s reign. One can also look at very successful city-states like Singapore and formerly HK. The argument has been that the unit of government that is closest to the business end of governing, ie. service delivery, has to work the best because feedback on dysfunction is pretty much instantaneous. Then scale delivers a pool of competent people to run the joint. I wouldn’t discuss France in this context because, as you know, I think it is one of the better run nations, and at all levels. It has actually managed cohesive governance across nation-province-town-city quite well. (I’ve wondered whether France’s peculiar dual mandates may be partly responsible. National pollies who are also mayors etc.) The Anglosphere can’t run any halfway competent rail system (the UK has been living off legacy for >50y) but even without takt or EMU-TGVs (les pauvres!) France still runs a quite good city and inter-city and international rail system:-) Nice crayon, but keep in mind these 30 states have drastically different populations. For example the state centered on St Louis appears to have population ~4 million while the state centered on LA has population ~24 million. So if this is an attempt at making the Senate fairer, it’s barely an improvement over the current situation. 
Also, the population distribution is changing all the time, which means that tweaks that work now will stop working in a decade or two (similar to congressional gerrymanders, which are redone with every census). Now if the point is to unite economic areas so that they can manage their affairs more efficiently, this crayon would do a good job. But I doubt the gains are enough to justify the disruption involved. City states always have the freeloader issue. Well, sub national city states anyway. The main reason Brandenburg has a tax base at all are people living just across the state line from Berlin and paying taxes in Brandenburg. It’s even worse for Bremen, but then Bremerhaven is a bit of a rust belt Where on earth did you get the idea that California is the most progressive state in the world? Is it more progressive than the Netherlands? Catalonia? Suriname? It’s about as progressive on climate as Bavaria. On some other issues, like access to education and transportation in general, California is to the right of Bavaria. Re Progessiveness of CA. I knew some would object to my throwaway line. But I think many lose sight of the historic influence of California. For example where they lead on vehicle emissions the rest of the US inevitably follows (even with the name “CAFE standards”), and so the world. It’s true on many enviro concerns. On education, Alon is talking about the post-Reagan era when their inability to raise taxes has put horrible financial pressure on what was once the best university system, not only in the US, but arguably the world. IMO, the three-tier UC system is an excellent model and for other jurisdictions more willing to fund such a system. Not to mention it being the epicentre of many cultural (’60s youthquake, protest movements, music, arts) and scientific (biotech, biomed etc) and hi-tech (Silicon Valley, HP, Apple, Google, FB etc). Yes, it may be a mixed bag what it does with all this creation but there’s not another place that wouldn’t wish to have a fraction of it. CAFE standards are the progressivity of 40 years ago. In the 21st century, California’s environmental policy is horrific – so NIMBY people on net move from temperate coastal areas to inland areas with heavy AC and car use, unable to effect change toward public transit, not good on energy for how much solar incidence it has. The UC system is still good, the problem is that it charges thousands of dollars per semester. And the primary and secondary education system is, because of the New Right (not exactly Reagan, but Prop 13, which is ideologically the same), bad. @Alon As I have come to realise with some of our differences of opinion, they are based on generational experiences and perceptions. On this one I probably concede, in that undeniably CA has gone backwards on many things. Most notably and sadly education, because the money thing increasingly negates the original spirit of it: excellent education opportunities for all, and on a merit basis. I seem to recall that CAFE standards were still being contested during the Trump era, ie. the latest update on them by CA; and CA was the lead state in contesting that rollback in the courts (presumably since reversed by Biden?). I haven’t checked recently but CA was always way ahead of all other states on electricity consumption per cap. Remarkably, and unlike almost anywhere else has not increased its per capita consumption over decades. In a recent post you wrote that Texas is the biggest net contributor to federal fuel taxes. 
Of course that reflects that they drive more, a lot more and probably with less efficient and/or bigger vehicles, than anyone else (probably on the planet). A lot of those will be commuters in its ridiculously sprawled and car dependent cities. I also read that Texas produces more electricity than all of the UK (which is about 3x its size) as if that is a good thing instead of just massively inefficient and wasteful. IIRC Dallas uses more than 3x electricity per person than NYC (here it is: Dallas 16,116 kWh, NYC 4,696 kWh per household). Of course its grid is standalone so it can’t share with neighbouring states! The paradox is that it also produces lots of wind power but then just wastes it. Texas uses 483TWh annually compared to CA’s 202TWh despite the much smaller population. Per capita TX uses 3.3 times more than CA. CA’s per capita use is the same as the EU’s median use. Pretty good for the largest state in the U.S.A. Dare one also mention it is where the US’s hopes for EV became serious … I’m not sure why you’re bringing anti-semitism into this: I think the people who live in the rust belt are all too aware that it was the heads of the auto companies and other manufacturers who consciously de-industrialised the region by outsourcing production to lower-wage countries (not to mention it’s perfectly possible to be against finance capital and neoliberal globalisation without being latently anti-semitic). If there’s any racism around this it comes out as anti-Mexican sentiment, not anti-Jewish. But the bigger issue is, what do you do with a region that has fallen victim to economic decline? Do you just starve it of all funding and create a vicious circle because you can supposedly get a higher rate of return on investing in growth regions, with the result that tens of millions of people either have to leave their homes or suffer unending economic depression? Or do you try to invest in such a way that the depressed areas perform an economic turnaround? By the way, you’re living in a city where the latter alternative is precisely what happened. Berlin in 1990 was far poorer than the likes of Frankfurt and Munich, its economy was stagnant and its population was shrinking. But the concerted public investment program in transport links and other infrastructure has helped to resuscitate its fortunes, to the point that it’s now a magnet for much of Europe’s youth. Sure, this was mainly motivated by an imperial vanity project of restoring the old capital to its former glory (and contrasts with the asset stripping carried out on the rest of the former GDR), but there’s no reason the same thing can’t be done for Detroit, Cleveland, et al. Or should the German government have consigned Berlin to a death spiral while pumping money into the western boom cities? So, the situation in Berlin is unique, for a couple of different reasons: 1. The city inherited the infrastructure of a dominant capital that, before the war, was among the world’s top 10 cities. As soon as the Wall fell, Wessis swarmed Mitte, Prenzlauer Berg, and Friedrichshain, for their proximity. This isn’t true of any post-industrial American city, unless you count Philadelphia. Detroit notably lacks this infrastructure, because it grew too late (1910s-40s, not 1850s-1900s) and was one of the origins of job sprawl, so other than abundant substandard housing it doesn’t have Berlin’s production amenities. 2. 
West Germany kept its capital in abeyance, distributing institutions between different cities expecting reunification, rather than just consolidating everything in Frankfurt, so Berlin was a natural place for the capital and associated infrastructure (new development where the Wall had been, Hauptbahnhof, BER, etc.). 3. West Berlin was not poor. The economic statistics are really a story of reintegration of the East, which happened everywhere in East Germany – but this also includes depopulating parts, like Saxony-Anhalt, that do not feel like they’re growing even if incomes are up by a lot. Wages were raised to compete with Western employers, which is a good thing and is also not what the local elites who are paying these wages want when they demand state investment in infrastructure to be paid to contractors they own. 4. Berlin has very high levels of Anglophony, way better than Munich and Hamburg, maybe better than Frankfurt. This means that pan-European industries have an easier time locating here, hence the tech startups that pretend they’re in San Francisco. For the same reason, finance and corporate branches relocating from London after Brexit are converging on Amsterdam as their new location, and not on Paris or Frankfurt. Alon: “Berlin has very high levels of Anglophony … For the same reason, finance and corporate branches relocating from London after Brexit are converging on Amsterdam as their new location, and not on Paris or Frankfurt.” The logic is good but the details, not quite: Dublin kind of proves the Anglophony point though doubtless other issues played a role, tax for one. But I’d say Paris is reaping the benefit of an easy 2h12m Paris to London Eurostar, as well as language not being a realworld problem for these kinds of workers, most of whom would be quite familiar and comfortable with Paris. And actually, despite the francophilia trope, more comfortable than Frankfurt or Berlin. In fact, I recall I predicted exactly this on your blog back in pre-Brexit days. It also happens that this effect is germane to your article, in that Brexit is driving these fintech functions and jobs to many centres across the EU, which is good compared to being concentrated in a single place. Of course with my biases it wouldn’t trouble me if The City went all Detroit but nah that’s not going to happen either. [And yeah, your points are all good and I’m just picking at one minor nit.] In some ways reunification was as much a hit to Berlin as it was a boon. Unlike Paris or London or even Madrid or Rome, Berlin doesn’t automatically get federal money shoveled in for no particular reason other than prestige. But during partition, the East would concentrate its resources in east Berlin (the only subway built by east Germany was an U5 extension) and despite West Berlin not ever being technically 100% part of west Germany (Berlin only sent non voting delegates to the Bundestag) there were a lot of west Berlin specific subsidies, from air travel to “Berlin made” industry. With reunification that suddenly dried up and instead of waxing poetical about the “city on the frontlines” Bavarian or Rhineland conservatives would now decry the moral decrepitude of the “atheist” or “lazy” or “drug addicted” cosmopolitan capital. And then the CDU caused a banking crash that emptied the municipal budget… You haven’t lived thru the Wowereit/Sarrazin years, but austerity in Berlin during that era was as severe as it was dumb. 
Berlin’s civil service is still licking its wounds from that hit Re: Alon’s comment, there’s so much wrong to this it’s hard to know where to begin. Even West Berlin was markedly poorer than the main cities of West Germany. It had no industry and virtually no commerce, and its economy was being propped up by the FRG essentially for propaganda purposes (as Herbert outlines). And Berlin is nowhere near as anglophone as Frankfurt (and probably not Munich or Hamburg either). There are some anglophone ghettos in Neukölln and Prenzlauer Berg, but there are also huge swathes of the outer east where you’d be hard pressed to find anyone who speaks more than a basic level of English. You can’t say the same about any of the major West German cities. Also none of this is any reason why the policy of regenerating Berlin through infrastructure investment couldn’t be replicated in rust belt areas. Detroit, for instance, actually has pretty good bones: it has an expansive freight rail system that could be re-purposed as commuter rail and wide boulevards that can have transport lanes (BRT or streetcar) inserted into them. And the depopulated inner areas have a lot less NIMBY resistance to being transformed into medium-density neighbourhoods than comparable low-density suburbs in more prosperous metro areas do. Indeed, there’s already an artistic/cultural gentrification vanguard moving to Detroit much as they did to Berlin and NY in the 70s and 80s. When you say things like “Berlin is probably not as Anglophone as Munich,” are you saying this out of knowledge? Because you don’t live in Germany, and your takes on Germany look extremely filtered through a few layers of badness (e.g. you said a few months ago that Kurdistan protests are illegal here; they’re not and I see them at Alexanderplatz all the time, with Kurdistan flags). The Neukölln-as-a-bubble bit is something that looks taken directly out of CityLab; Germans stereotype Neukölln as a poor neighborhood full of Middle Easterners and criminals (the average Aryan thinks these are the same), as in the depiction in 4 Blocks, or Bild’s demagoguing about the corona outbreak there last summer. There’s also a parallel stereotype of Neukölln as a gentrifying neighborhood, but in the German discourse it’s a secondary stereotype, it’s only in English that people act like Neukölln is the new Charlottenburg rather than the new Seine-Saint-Denis. And no, Marzahn is not somehow real Berlin. Those supposed vast swaths are a significant but small minority of the city’s population, maybe a quarter once you strip the inner parts of Lichtenberg and Pankow and the neighborhoods inside the Ring. We’re not Stockholm or anything like that but we’re not Paris either and I say this having lived in all three cities and both noticing how Anglophone people are and what other people say about how Anglophone people are (and “Berlin > Munich, Hamburg” is an unrecognizable take here, and yes, I’ve asked repeatedly). Let’s say you are a person choosing where to live. If you live in Detroit and work in the suburbs, you will pay a 1.2% Detroit City Income Tax. If you move to Livonia, you won’t pay any city income tax. If you are choosing where to put your business, Detroit has a 2% corporate tax, Livonia has none. If you are going to take a job at a supermarket, taking the job at a Detroit supermarket will cost you 1.2% versus taking the same job at the Livonia supermarket (P.S.
living and working in Detroit will cost you 2.4%)(P.P.S. Detroit isn’t the only city in Michigan with a city income tax, but it is the highest of them.). Detroit’s freight rail system, bar like 1 maybe 2 lines, is terrible for commuter rail. It’s both significantly sparser than older cities (cf. Chicago and Cleveland), but also has no one living around it because the auto industry smartly located all their factories around it. Detroit’s development in Alon parlance follows roads, not rails. It is not Chicago. There is certainly a way to fix Detroit with some mass transit investments, tax reform, and highway removal, but it’s going to be slow, and those investments will only pay dividends when some other industry starts generating growth in Detroit again. The northernmost part of Neukölln (nicknamed “Kreuzkölln” due to the proximity of Xberg) is indeed gentrifying and Berliners by and large know that. But the reputation of Neukölln as a whole is still the one put in the media by the former district mayors Buschkowsky and Giffey who, despite being SPD, like to present their tenure as “law & order” and “tough on crime” and stuff. Now as for how Anglophone Munich is… That depends to some extent on what you define as “Munich”. Munich city boundaries are relatively narrow, which means both the tram and the subway cross the city boundaries (but so do the Hamburg subway and the Nuremberg subway [to Fürth, which forms a contiguous urban area and was almost annexed in the 1920s]). Munich has reinvented itself after the war as an automotive (BMW) and research hub, but especially the research stuff is often outside Munich proper in places like Garching, Oberschleißheim or (in aviation) Oberpfaffenhofen, the airport or even the military airfield at Manching. So if you go to e.g. Hasenbergl, you’ll have different rates of English speakers than in Garching or the “Schickeria” Re: Alon, thanks for the assumption (and the attendant straw man arguments) but I actually do live in Berlin, can speak German, and have direct knowledge of the city. I also have lived in Frankfurt and commute there frequently for work (or at least did so pre-Corona), and I can definitively say it is more anglophone than Berlin. I said “probably” for Munich and Hamburg because I am less familiar with them and don’t have stats at hand. I didn’t claim that the outer eastern districts are “real Berlin” (whatever that’s supposed to mean), I said that there are large parts of the former East Berlin where the level of English-language ability is very low, and this is not the case in the major western cities. This is pretty indisputable. I also said there are anglophone ghettos IN Neukölln (to be more specific, certain areas bordering Kreuzberg), not that the whole Bezirk is like that. Again this is pretty indisputable, but the Bezirk as a whole is diverse, since it stretches all the way to the border with Brandenburg. Nobody thinks it is the new Charlottenburg, God knows where you got that idea from (Charlottenburg is a distinctly musty, conservative middle class area). For Germans it’s understood as a case of Kreuzberg gentrification expanding outwards, for Americans it’s probably seen as a twin of Bushwick. Finally, I didn’t say that pro-Kurdistan protests are illegal in Germany, but that publicly displaying symbols from the PKK is, since it is a proscribed group in Germany. Again, this is indisputable and amply documented (see this TAZ article if you’re interested: https://taz.de/Kurdische-Symbole-in-Deutschland/!5629632/).
It’s obviously not universally enforced, rather it is deliberately designed to be something the cops at their discretion can use if they want to target specific people. But the law is on the books and there are multiple recent cases of activists being charged under the statute. @Nilo, the freight lines would still be useful as regional lines to places like Pontiac and Ann Arbor, and further afield Flint, Lansing and even Toledo. There are some stillborn plans about doing this. Also plenty of the brownfields areas are ripe for TOD conversion, since a lot of its is now defunct. But I agree that the main roads have more potential for mass transit within Detroit proper. [I deleted my own comment and am reposting here to fix the threading, nothing nefarious is happening.] @Herbert Kreuzberg is undergoing white flight as well (as is Prenzlauer Berg), but yes, the Hermannplatz area is falsely believed locally to be gentrifying, whereas Neukölln writ large is correctly believed to be a racially diverse neighborhood and incorrectly believed to be dangerous (how I wish 4 Blocks were about Amara…). Something I recently read is that Munich, too, has net emigration of German citizens (link) but is gaining more than enough foreigners to offset. Some people just don’t like living next to foreigners. I suspect this is where the “people left behind in rural Saxony become fascist” issue comes from – they were always racist, and that’s why they’re not going to move to urban job opportunities, whereas more open people from those same areas move away and the ones who come to Berlin tell me how happy they are they got out. @df1982 [For some reason I thought you lived in Australia, like Michael James? Sorry…] Yeah, so a specific symbol is proscribed, which is about as connected with the “Germany suppresses free speech” line as the prohibition on specific Nazi symbols is with general censorship (which, were it real, would close FAZ, Die Welt, Bild, and other racist papers). The white flight in Neukölln isn’t just the Bezirk level. A lot of the links are rotting due to ongoing site migration, but here are 2019 numbers and here are 2018 ones. Neukölln the Ortsteil is losing Germans without migration background and gaining people with migration background; ditto other supposedly-gentrifying areas like Kreuzberg and P-berg. The highest-migration-background-% Ortsteilen are Gesundbrunnen, Tiergarten, Neukölln, Wedding, and Kreuzberg, I think? My impression judging by names on buildings is that immigrants of different social classes live in the same buildings – I don’t remember ever seeing a building with lots of Turkish and Arab names and no other names or no other non-German names (and yes, I look), and all-gentrifier buildings, like the one I currently live in, look rare. (And I don’t remember obvious all-German buildings in Neukölln either.) The Berlin vs. Munich Anglophony is something I’ve heard from multiple people who’ve lived in both cities. @Michael Re Anglophony and work, I saw this FT article about Amsterdam while going on a Twitter rabbit hole looking for that US state crayon map that I linked. @Aloon: “Re Anglophony and work, I saw this FT article about Amsterdam… ” Yeah, that was about one particular class of stock trading, one that has very low returns because it is largely automated and relies on scale (see below). It fits in with the point made by my AFR article on specialisation of different cities in the EU. You extrapolated from it too much. 
(The AFR–Australian Financial Review–is the Oz version of FT; in fact that article may have been reprinted from the FT, which they do quite a bit on European matters.) That’s OK, I’ve led a peripatetic life, but in general it’s best to avoid dismissing someone’s views on the basis of where you assume they live. I would be careful with the “people with migration background” statistic, since it can cover two very different groups: people from high-income countries moving to Berlin for professional/lifestyle reasons, and those from lower-income regions (Africa, Turkey, the Middle East, predominantly) migrating out of existential necessity. Prenzlauer Berg is definitely the former (there are a lot of non-Germans, but it’s still overwhelmingly white), while Neukölln was traditionally the latter, but the “Kreuzkölln” areas are becoming more and more the former. Hah, I inadvertently turned you into a Dutchman with Aloon! Or is it A Loon? If someone who studied at LMU or TU Munich moves to Garching to have a job in research after graduating, that counts as “having moved out of Munich”. The combination of world class r&d in Munich suburbs (including a living Nobel laureate working at Garching) and the municipal boundary not including them leads to some of those oddities. If you’re differentiating between “good” and “productive” industrial capital and “bad” or “exploitative” financial capital, you’re already halfway on the path to antisemitism. From what I can tell the money isn’t even going to proper Rust Belt cities like Cleveland or Toledo or Rochester, it’s going to the rural Midwest or Appalachia. (Appalachia was never Rust Belt.) This is just another equity-efficiency tradeoff, one of many in economics. But telling politicians to “Just Say No to Leakage” is a technical solution to a political problem. There is little political incentive to say no and every reason to say yes. I can’t speak to the other examples but for Ontario, it’s not as if Doug Ford wants excellent public transit but is held back by suburban NIMBYs: he is in fact one of the suburban NIMBYs and therefore usually rules in their favour. The leakage is a feature, not a bug. You’re essentially saying that the problem is how scarce resources are allocated, which is fundamentally an issue of politics and not something to be solved by a city planner or such. It’s not exactly resource allocation, since leakage is a negative-sum game. In the case of the Toronto RER, the electrification project will reduce noise, because electric trains are less noisy than diesel trains even when they run faster. But the benefit-cost ratio for the project was so high that instead of fighting it, Metrolinx caved early and agreed to the noise walls. So it’s perhaps better to say that surplus wasn’t extracted but destroyed. This is also what we’re seeing with Second Avenue Subway. The various schemes used to limit neighborhood impact at high cost didn’t actually reduce negative impact. The decision to mine the stations instead of building them cut-and-cover meant less neighborhood impact but over a longer period of time, about 5 years instead of 1.5-2. Value was destroyed, but the mere perception of reduced impact led to the more expensive choice. So in that sense, I think it’s valuable to let agencies know that this dynamic exists and they should be careful to avoid it.
In the case of very high ROI projects like late electrification or Northeast Corridor HSR, the best choice is probably to intentionally overexpand to a lower ROI, and yet this has a lot of potential for leakage, i.e. for including extra projects that aren’t actually good (like the 1.5-orders-of-magnitude-too-expensive SOGR program in the US) rather than things that are justifiable on their own. I think the French combination of tram construction and city beautification is a good idea overall. You’re changing your city already, so why not use the opportunity to do several things at once? Yes, and when it’s something relatively small, it’s fine. The problem is when you have a city – let’s call it “San Francisco” – building bike paths for the cost of a subway, excusing everything on “we need to rebuild the street” grounds. I guess you are referring to the ≈$300m bikeway+pedestrian path proposed for the (West) Bay Bridge but it’s really the cost of one car lane on that bridge. It would create a kind of symmetry–the world’s most expensive bikepath to accompany the most expensive bridge per mile (east span). In fact they are overengineering the bikepath so as to be able to take a maintenance vehicle on the premise that it will prevent closing a traffic lane when maintenance access is required. Again, it’s all about prioritising cars. If one was paranoid one might imagine a near-future conversion of that ‘bikepath’ to permanent vehicle use–or maybe one bus lane instead of taking two car lanes from the bridge. It has even been suggested for the, admittedly underused, San Rafael bridge bikepath. One wonders what a lightweight bike/ped path (not suitable for any vehicle) would cost by comparison? And yes this comparison (below) hasn’t been updated to include that cost (which in any case hasn’t been built) but I reckon the cost comparisons are worthwhile posting anyway. I’ve converted all the figures to billions which IMO makes the comparison easier for the eye. He’s probably referring to this $300M/mile project for Market Street @Eric2: “He’s probably referring to this $300M/mile project for Market Street” Maybe. Though the $300m West Bay Bridge bike + pedestrian path is about 3 miles, and is a new bit of bridge hanging off the existing bridge (cannot be slung under due to shipping clearance requirements), and as I said the bridge supervisors want it to take full size vehicles (and probably very heavy maintenance vehicles bloated in the usual American tradition) so it is not quite as outrageous as first seems. That is, it’s not just laying down some green paint though that is probably what they should do (on an existing car lane): https://sf.streetsblog.org/2021/02/01/petition-for-a-quick-build-bike-lane-on-the-bay-bridge/ Petition for a Quick-Build Bike Lane on the Bay Bridge The Brooklyn Bridge already has a bike and ped path–and NYC intends to add more space for both. Advocates want the Bay Area to do the same on the Oakland Bay Bridge By Roger Rudick, Feb 1, 2021 https://sf.streetsblog.org/2018/11/20/editorial-motorists-be-thankful-about-cost-of-bay-bridge-bike-path/ Editorial: Motorists, Be Thankful about Cost of Bay Bridge Bike Path The high cost isn’t really about bikes, it’s about giving almost everything to cars By Roger Rudick, Nov 20, 2018 Alon I have gone and pointed out your style of analysis to my local representatives in the UK on bike lanes and they realise the value and are impressed. So I think change can happen. 
I mean moving from a cycle lane costing £250k/km to £50k/km obviously makes it much easier to justify construction. American cars are now the size of Sherman tanks. “maintenance spending funded out of other people’s money” Martians don’t pay Federal or state taxes. It’s not other people’s money. It’s just coming out of a different budget. Why shouldn’t people who pay a lot of taxes expect that some of it get spent on them? “declining regions like the Rust Belt, where due to continued road investment and population decline, there is no longer traffic congestion.” The Northeast and Midwest have toll roads which get little Federal money. Little state money either. If New Jersey Turnpike toll payers want to spend toll money adding lanes they can. Extend that logic, the Second Avenue Subway shouldn’t be built and a second set of Hudson River tunnels is silly. New York hasn’t really been called Rust Belt in 20+ years. When I talk about the Rust Belt, I talk about new interchanges in small towns in Ohio and Michigan, and not about very busy Northeastern or Chicago roads. And other people’s money is really about outside infusion. Massachusetts is of course a net tax donor, and within Massachusetts Somerville is a net tax donor, and yet the Community Path was not funded locally and therefore Somerville’s civic elites didn’t mind that it was horrendously expensive. In Franconia you can sometimes sell expensive projects with lots of state funding at least in part on “so that the money doesn’t end up in Munich”. If there’s one thing most Franconians agree on, it’s that they dislike Munich and see the Bavarian government as sucking money out of Franconia and putting it into Munich and adjacent areas. Just look at the number of top research facilities that Munich and surroundings have acquired in the last seventy years. The Rust Belt is the Northeast and Midwest. The mill now being The Lofts at the Mill luxury condos doesn’t make it less deindustrialized. Neither the DOT nor the toll roads do things for the fun of it. There has to be some sort of reason for a project. You may not like the reason but there is a reason. A lot of it is because there is this pesky thing called time. When it passes things wear out. Then they need to be replaced. Then there are those pesky pesky users who do things like move to suburbs. And then they need more capacity. And very very rarely that additional capacity is things like new railroad tunnels under the Hudson River. Would the real rust belt please stand up? The reason the DOT does things is because there is money. Often there are several different buckets of money, so one bucket really is starved for money while another has plenty for useless projects. Thus you get state DOTs overbuilding highways to nowhere, while in the same area county bridges are failing inspection but there is no money to replace them – both budgets are filled by the same gas tax, but the state DOT is getting a larger share of the money and the counties are not able to access any of it for work they need to do. Thus states will raise gas taxes because of a real problem in back county roads, and the DOT will announce another highway to nowhere funded by those taxes, while the counties will just barely have enough to repair a few of the worst bridges. If they didn’t have any money they wouldn’t be able to do anything. The employees would wander off quite quickly too. Which is it? “several different buckets of money” or “budgets are filled by the same gas tax”.
It’s a pity the gas tax doesn’t fill the budget of the Federal or state DOTs. Little if any of it filters down to county or municipal budgets. It’s why there are local roads maintained by the municipality, county roads maintained by the county, state highways maintained by the state and Federal highways maintained by the state with Federal contributions. Several different buckets of money all filled by the same tax. Thus one bucket has more than it objectively needs and instead of giving that money to a different bucket they find more projects for that bucket. In the meantime the other bucket doesn’t have enough money for the projects they need to do. In the case of my example, the State DOT only funds state roads (an interesting quirk, national roads in the US are state roads with federal funding), and not county roads: thus the county isn’t getting enough money to repair their small roads while the state is building large projects that are not needed. Each state does things differently, of course. Still the above seems to happen a lot in some variations. Keep in mind though that I’m not making any claim that the counties are doing a good job of controlling waste – I have no insight if they are getting enough money to repair their roads but spending it on expensive projects. “gas tax doesn’t fill the budget of the Federal or state DOTs” Toll roads don’t get any tax money. None of them do anything because they think it will be fun. Somebody somewhere took measurements, somebody else evaluated them and wrote down all the reasons the project should be done. Then there was a variety of reviews. It’s too bad you don’t like their reasons but there are reasons. Buffalo/Rochester/Syracuse are definitely still considered Rust Belt. I disagree that State of Good Repair is leakage, as you’ve defined leakage. Some of it might be incredibly ineffective and wasteful, but I hope you are not arguing that state of good repair and general maintenance is not important. Scrimping on maintenance is going to lead to less reliability and general shabbiness, and in the worst cases compromised safety. What would be more useful is looking at best practice for ‘standard’ life-cycle spending on various components, especially the nuts and bolts of the working systems: tracks, road beds, switches, wiring and power systems, garages, etc. I would agree that some renovations are less useful than others. Prettying up the subway stops might be easy to sell, but not a good use of cash. Hard to say though. Based on prior posts, I am pretty sure the argument is about what the specific maintenance is, not maintenance in general. I see a lot of the same stuff in the tech industry, where people write shit code, and then dump money on “site reliability engineers” and not fixing the underlying problems once and for all. So yes there are specific patronage / regulatory capture problems in the public sector, but I also think there is a tendency for productivity to sag in loose labor markets. NYC Subway in particular was in a practically derelict state by the 1970s; “state of good repair” means replacing everything but the tunnels. Now, is NYC managing to do this in a particularly bad way — replacing parts with archaic out-of-date parts rather than modernizing while they replace — spending too much to get too little — well, yes. But there was no getting around the rebuilding of most of it. Just the damaged walls and water infiltration alone were huge, and expensive-to-fix, problems.
And they’re not “sexy” problems with good-looking ribbon-cuttings, unfortunately — water problems never are. (Speaking of water problems, Daylight Tibbetts Creek!) New York didthat in the 1980s-90s. Since then, SOGR has been a black hole of ever-escalating flagging rules and no decrease in the backlog.The Wiesbaden tram vote is a perfect example of suburban vs urban voters. The urban core voted yes, outlying districts voted no, overall the project was rejected 2:1 “the Gilets Jaunes riots were mostly white supremacist, but they were white supremacists with a strong anti-urban identity who felt like the diesel taxes were too urban-focused.” I guess this blog is turning into a Twitter thread now. Hahahaha I mean, which part of this is incorrect? Most likely he’s referring to the “white supremacist” part – but you knew that. On a similar note, in an earlier comment you labeled the Frankfurter Allgemeine Zeitung (FAZ) a “racist paper”. Would you elaborate? It published an op-ed by an elder SPD politician who sits on the Berlin Holocaust memorial’s board who ranted about how ethnic minorities have a responsibility to accept majority rule and drew equivalence between AfD and immigration activism. And it’s impossible to read anything about immigration there without being bombarded with a litany of statistics about social problems. I see, thanks.
true
true
true
I’ve spent more than ten years talking about the cost of construction of physical infrastructure, starting with subways and then branching on to other things, most. And yet there’s a pr…
2024-10-12 00:00:00
2021-07-23 00:00:00
https://s0.wp.com/i/blank.jpg
article
pedestrianobservations.com
Pedestrian Observations
null
null
17,583,232
https://www.slate.com/articles/health_and_science/science/2014/08/koko_kanzi_and_ape_language_research_criticism_of_working_conditions_and.single.html
The Strange World of Koko, Kanzi, and the Decline of Ape Language Research
Jane C Hu
Last week, people around the world mourned the death of beloved actor and comedian Robin Williams. According to the Gorilla Foundation in Woodside, California, we were not the only primates mourning. A press release from the foundation announced that Koko the gorilla—the main subject of its research on ape language ability, capable in sign language and a celebrity in her own right—“was quiet and looked very thoughtful” when she heard about Williams’ death, and later became “somber” as the news sank in. Williams, described in the press release as one of Koko’s “closest friends,” spent an afternoon with the gorilla in 2001. The foundation released a video showing the two laughing and tickling one another. At one point, Koko lifts up Williams’ shirt to touch his bare chest. In another scene, Koko steals Williams’ glasses and wears them around her trailer. These clips resonated with people. In the days after Williams’ death, the video amassed more than 3 million views. Many viewers were charmed and touched to learn that a gorilla forged a bond with a celebrity in just an afternoon and, 13 years later, not only remembered him and understood the finality of his death, but grieved. The foundation hailed the relationship as a triumph over “interspecies boundaries,” and the story was covered in outlets from *BuzzFeed *to the *New York Post* to ** Slate**. The story is a prime example of selective interpretation, a critique that has plagued ape language research since its first experiments. Was Koko really mourning Robin Williams? How much are we projecting ourselves onto her and what are we reading into her behaviors? Animals perceive the emotions of the humans around them, and the anecdotes in the release could easily be evidence that Koko was responding to the sadness she sensed in her human caregivers. But conceding that the scientific jury is still out on whether gorillas are capable of sophisticated emotions doesn’t make headlines, and admitting the ambiguity inherent in interpreting a gorilla’s sign language doesn’t bring in millions of dollars in donations. So we get a story about Koko mourning Robin Williams: a nice, straightforward tale that warms the heart but leaves scientists and skeptics wondering how a gorilla’s emotions can be deduced so easily. Koko is perhaps the most famous product of an ambitious field of research, one that sought from the outset to examine whether apes and humans could communicate. In dozens of studies, scientists raised apes with humans and attempted to teach them language. Dedicated researchers brought apes like Koko into their homes or turned their labs into home-like environments where people and apes could play together and try, often awkwardly, to understand each other. The researchers made these apes the center of their lives. But the research didn’t deliver on its promise. No new studies have been launched in years, and the old ones are fizzling out. A behind-the-scenes look at what remains of this research today reveals a surprisingly dramatic world of lawsuits, mass resignations, and dysfunctional relationships between humans and apes. Employees at these famed research organizations have mostly kept quiet over the years, fearing retaliation from the organizations or lawsuits for violating nondisclosure agreements. But some are now willing to speak out, and their stories offer a troubling window onto the world of talking apes. * * * The first attempts to communicate with other primates began in the 1930s. 
Scientists knew that chimpanzees were our closest relatives and they wondered why chimps didn’t also have language.* Researchers theorized that culture could have something to do with it—perhaps if apes were raised like humans, they would pick up our language. So Indiana University psychologist Winthrop Kellogg adopted a 7½-month-old chimpanzee he named Gua. He raised Gua alongside his own human son, Donald, who was 10 months old when Gua arrived. *Time *magazine wrote that the experiment seemed like a “curious stunt”; others were critical of separating a baby chimp from her mother or rearing a child with a chimp. At one year of age, Gua could respond to verbal commands, but to her humans’ disappointment, she never learned to speak. The experiment was abandoned after nine months. In the following few decades, scientists discovered that anatomical differences prevent other primates from speaking like humans. Humans have more flexibility with our tongues, and our larynx, the organ that vibrates to make the sounds we recognize as language, is lower in our throats. Both of these adaptations allow us to produce the wide variety of sounds that comprise human languages. In a stroke of genius, researchers decided to try teaching apes an alternate, nonvocal way to communicate: sign language. Washoe, a chimpanzee, was the first research subject. Washoe was born in West Africa, then captured and brought to the United States. University of Nevada psychologists Allen and Beatrix Gardner adopted her in the 1960s. Like Gua, Washoe was raised as a child: She had her own toothbrush, books, and clothes, and the Gardners took her for rides in the family car. Over the course of her life, Washoe learned more than 250 signs, and she reportedly even coined novel words. One famous story has it that she signed “water bird” after seeing a swan. Skeptics remain unconvinced that this was evidence of spontaneous word creation, suggesting that perhaps Washoe merely signed what she saw: water and a bird. The next decade saw an explosion of human-reared ape language research, and the same cycle of claims and criticism. Scientists named chimps as if they were human children: Sarah, Lucy, Sherman, Austin. Another was named Nim Chimpsky, a playful dig at Noam Chomsky, the linguist known for his theory that language is innate and uniquely human. Scientists tried raising other ape species as well: Chantek, an orangutan; Matata, a bonobo; Koko, a gorilla. Koko, especially, was a sensational hit with the media. Originally loaned as a 1-year-old to Stanford graduate student Francine “Penny” Patterson for her dissertation work, Koko remained with Patterson after the dissertation was complete, and Patterson founded the Gorilla Foundation in 1976 to house Koko and another gorilla, Michael. * * * Of the many ape savants studied over the years, two stand out as the most celebrated: Koko the gorilla and Kanzi the bonobo. Both have been profiled repeatedly in the media for their intellect and communication skills. Koko’s résumé is more impressive than most humans’: She stars in a book called *Koko’s Kitten* written by Patterson and Gorilla Foundation co-director Ron Cohn, chronicling Koko’s relationship with a tail-less kitten that Koko named All Ball. The book, according to the Gorilla Foundation’s website, is “a classic of children’s literature,” and it was featured on *Reading Rainbow *in the 1980s*. *Koko has had her likeness turned into stuffed animals, and she was the guest of honor in two AOL chats, in 1998 and 2000. 
Koko has also had many celebrity supporters over the years: She’s met Leonardo DiCaprio and the late Mister Rogers. William Shatner says she grabbed at his genitals. Betty White is on her board of directors. Robin Williams tickled her in 2001. At the end of the day that Patterson and other colleagues told Koko about Williams’ death, the Gorilla Foundation announced, Koko sat “with her head bowed and her lip quivering.” Another media favorite was Kanzi, a bonobo whose brilliance was discovered by accident. Kanzi was born at Yerkes Primate Center in 1980. Kanzi’s mother was a female named Lorel, but a dominant female named Matata laid claim to Kanzi and unofficially adopted him. Matata was being trained to communicate by pointing to symbols on a keyboard called lexigrams that corresponded to English words. Much to the chagrin of her human researchers, Matata showed little interest in her studies, but one day in 1982 Kanzi spontaneously began expressing himself using the lexigram board. From then on, researchers turned their focus to him instead. Sue Savage-Rumbaugh, a researcher at Yerkes at the time, had worked with chimps and other bonobos, so she oversaw Kanzi’s training. He quickly built up a lexigram vocabulary of more than 400 symbols. He’s also been said to invent new words by combining symbols, refer to past and present events, and understand others’ points of view, all of which are skills usually attributed only to humans. These apes *are* able to communicate with humans, and this alone is a testament to primate cognition. But in the past few decades there has been a spirited debate about whether apes are using language in the same way humans do. One major difference between ape and human communication appears to be their motivation for communicating. Humans spontaneously communicate about the things around them: Adults make small talk with the grocery store clerk about the weather; a toddler points out a dog on the street to her parents; readers write comments about stories on ** Slate**. Unlike us, however, it seems that apes don’t care to chitchat. Psychologist Susan Goldin-Meadow points out that studies with Kanzi show that only 4 percent of his signs are commentary, meaning the other 96 percent are all functional signs, asking for food or toys. Similar skepticism about Koko emerged in the 1980s, when Herb Terrace, Nim Chimpsky’s former foster parent, published a fairly scathing critique of ape language research, leading to a back-and-forth with Patterson via passive-aggressive letters to the editor of the *New York Review of Books*. Among other criticisms, Terrace asserted that Koko’s signs were not spontaneous but instead elicited by Patterson asking her questions. Patterson defended her research methods, then signed off from the debate, saying her time would be “much better spent conversing with the gorillas.” Critics also allege that the abilities of apes like Koko and Kanzi are overstated by their loving caregivers. Readers with pets may recognize this temptation; we can’t help but attribute intelligence to creatures we know so well. (Or to attribute complex emotions, such as grief over the death of a beloved comic and actor.) I recently wrote an article on a study suggesting that dogs experience jealousy, and the top response from dog owners was: “Anyone with a dog already knows this.” It’s hard to resist reading into animals’ actions, and it turns out, animals read into our actions, too. 
They carefully watch us for cues about what we want so they can get our attention or treats. In a classic case, a horse named Clever Hans was thought to understand multiplication and how to tell time but was actually just relying on the unconscious facial expressions and movements of his owner to respond correctly.

Long-term studies with human-reared apes are designed to create bonds between apes and their caregivers so that the pair feels comfortable communicating. This closeness often means that the caregiver is the only person able to “translate” for the ape, and it’s difficult to disentangle how much interpretation goes into those translations. As a result, the scientific community is often wary of taking caregivers’ assertions at face value.

Some are straight-up skeptics. In a 2010 lecture, Stanford primatologist Robert Sapolsky alleged that Patterson had published “no data,” just “several heartwarming films” without “anything you could actually analyze.” According to a list of publications on the Gorilla Foundation’s website, this isn’t entirely accurate. They’ve published three papers in the past decade—the latest in 2010—but only one is about gorillas’ cognitive abilities. According to the paper, the data are observational, and come from “unpublished, internal-use video” created by Patterson and another Gorilla Foundation employee, as well as “unpublished lists of Koko’s sign lexicon” and a 1978 paper where Patterson describes Koko’s earliest signs. There is currently no data or video from the Gorilla Foundation available to outside scientists, which makes it difficult for others to evaluate the foundation’s claims. (The Gorilla Foundation says it has been focusing efforts on digitizing its data, and recently announced a project to make it available to researchers.)

In lieu of other data to evaluate, a transcript from Koko’s 1998 AOL chat, in which Koko signed something and Patterson translated for the audience, offers an interesting glimpse into how Patterson interprets Koko’s signs. An excerpt:

Question: What are the names of your kittens? (and dogs?)
LiveKOKO: foot
Patterson: Foot isn’t the name of your kitty. Koko, what’s the name of your cat?
LiveKOKO: no
Patterson: She just gave some vocalizations there… some soft puffing
[chat host]: I heard that soft puffing!
Patterson: Now shaking her head no.
Question: Do you like to chat with other people?
LiveKOKO: fine nipple
Patterson: Nipple rhymes with people, she doesn’t sign people per se, she was trying to do a ‘sounds like…’

Nipples, as we’ll see, come up a lot with Koko.

In his lecture, Sapolsky alleges that Patterson spontaneously corrects Koko’s signs: “She would ask, ‘Koko, what do you call this thing?’ and [Koko] would come up with a completely wrong sign, and Patterson would say, ‘Oh, stop kidding around!’ And then Patterson would show her the next one, and Koko would get it wrong, and Patterson would say, ‘Oh, you funny gorilla.’ ”

* * *

Criticisms of ape language studies wore down researchers, and projects fizzled out as the humans in charge lost interest in defending their research, being full-time ape parents, and securing ever-more elusive funding to continue the projects. Even as the research ended, though, the apes remained. Depending on apes’ species and gender, the average lifespan for wild apes is between 30 and 50 years, and they often live even longer in captivity.
In their post-research lives, these apes, like child stars that peaked early in life, were left to live out their days in less glamorous environments. Apes have been sent around to various private collections and zoos, and, if lucky, ended up in sanctuaries. Ape facilities must obtain licenses from the U.S. Department of Agriculture to showcase the animals (e.g. in a zoo) or to use them as research subjects. But these facilities are largely funded by private donations, and government agencies have little oversight of their day-to-day operations. Human-reared apes’ upbringing often made it hard to adjust to the “real world” of captivity, where their companions were other apes instead of doting researchers. It took great lengths to reintroduce Lucy to being a normal chimp. Chantek was separated from his caretaker, Lyn Miles, for 11 years while he lived in a cage at Yerkes, where he grew depressed and overweight. Nim Chimpsky was shipped off to live in a cage with other chimps at a medical center, and he was subjected to research before being sent to an animal sanctuary, where he was reportedly lonely and angry and “killed a poodle in a fit of rage.” And, like child stars, many of these apes die tragic, premature deaths. Gua, abandoned after her study and sent back to a lab, caught pneumonia and died at 3 years old. Nim died of a heart attack at the age of 26; Koko’s companion Michael died of a heart condition at the age of 27. Of the dozens of ape language projects, just two are still in operation today: the Gorilla Foundation, which houses Koko, now 43, and a male gorilla, Ndume; and the Great Ape Trust, home to Kanzi, 33, and several other bonobos. Both Koko’s and Kanzi’s fame has dwindled over the past decade. When Koko made headlines for her relationship with Robin Williams, the news left many online commenters remarking that it was amazing she’s still alive. Despite the criticisms of ape research, people jumped at the opportunity to participate in the lives of these famous apes. Beth Dalbey, former communications editor at the Great Ape Trust, said that she was initially charmed by the cleverness of Panbanisha (daughter of Kanzi’s adoptive mother, Matata) and that she wouldn’t have traded the experience for anything else. Former Gorilla Foundation caregiver John Safkow dropped everything when he was hired. “I left a 20-year career because this opportunity came up,” he said. “I thought, ‘This is the coolest job ever.’ ” But it turned out *not* to be the coolest job. According to former employees at the Gorilla Foundation—who signed nondisclosure agreements and, in some cases, wish to remain anonymous because they fear retaliation—the apes were poorly cared for, and the employees were subjected to bizarre forms of harassment. (Disclosure: My husband briefly worked at the Gorilla Foundation as a part-time, unpaid volunteer. He was not interviewed or consulted for this article and did not suggest sources for it or introduce me to people who became sources.) When I first emailed Dawn Forsythe, who writes a blog about the ape community called the Chimp Trainer’s Daughter, to request an interview, she warned me that sources might be reluctant to talk to me. “The human world of apes can get pretty nasty,” she said. Over the course of several months in 2012, nine of roughly a dozen caregivers and researchers at the Gorilla Foundation resigned, and many submitted letters of resignation explaining their decision to leave. 
Several employees also worked together to submit a letter of concern to the foundation’s board of directors. (The mass resignations and criticism remained internal and have not gotten media attention until now.) “It was a four-page document about our requirements as caregivers, and things we felt were unethical or immoral,” Safkow says. “Incidentally, all of the board members left after we did, too—Betty White is the only one who’s still there.” I contacted the Gorilla Foundation for an interview, and it requested that I send all questions in email. In response to a question about the letter voicing concerns of departing employees, the Gorilla Foundation emphasized to me that the letter was sent to the board of directors by a researcher, not a caregiver, “who had no first-hand knowledge or experience of anything” in the letter. Regardless of who *sent* the letter, however, it was composed based on the collective experience of all nine employees who resigned. The Gorilla Foundation told me it hired an animal welfare attorney to review the allegations, who found that they were “totally unsubstantiated.” Additionally, the Gorilla Foundation said these allegations “caused significant internal harm to our organization, which had a negative stress-inducing impact on gorillas Koko and Ndume.” * * * At the Gorilla Foundation, many employees’ letters of resignation focused heavily on the issue of the apes’ health. According to former caregivers, Koko was overweight. “All the caregivers would talk about Koko’s weight,” Sarah, a caregiver who resigned a few years ago, told me. (Sarah is a pseudonym; she did not want her real name published because she signed a nondisclosure agreement when she was hired at the Gorilla Foundation.) “We always tried to get her to exercise, but she would never go outside—she just wanted to sit in her little trailer and watch TV or sleep.” The Gorilla Foundation maintains that Koko is not overweight and that at her current weight of 270 pounds she “is, like her mother, a larger frame Gorilla” and within the healthy weight of a captive gorilla. (Wild female gorillas are 150-200 pounds.) Employees believe that Koko’s weight is the result of an unhealthy diet. In the wild, gorillas are natural foragers who eat mostly leaves, flowers, fruit, roots, and insects. Captive gorillas don’t forage, but zoos typically attempt to make their animals’ diets similar to those of their wild peers. Sarah was hired in 2011 as a “food preparation specialist” to arrange for Koko’s meals. Given what she knew about gorilla diets from her training as an anthropologist, she was surprised that she was expected to prepare gourmet meals. Soon after she started her job at the Gorilla Foundation, Sarah cooked a Thanksgiving meal for Koko that was also eaten by humans, which concerned her. The Gorilla Foundation says that it does “celebrate the holidays by providing special meals that feature some of the same foods that the caregivers enjoyed.” “Koko was extremely picky,” Sarah said, and she thinks this was because Koko was often fed delectable human treats, including processed meats. “She would always eat the meat first when she should have been eating plain—not seasoned or salted—vegetables and other greens.” Sarah says that the foods on the diet checklist at the Gorilla Foundation were reasonable and that caregivers tried to stick to those healthy meal plans, even carefully weighing the food to make sure it wasn’t too much. But then, she says, Patterson would visit with Koko and bring in treats. 
“She would go in with treats like chocolate or meats, and we had no control over it because she would feed it directly to her,” Sarah says. The Gorilla Foundation says that “Koko’s diet includes a wide variety of food and drinks” that “not only cover her nutritional needs, but enriches her life.” Beyond diet, the quality of gorillas’ veterinary care was a concern. “There were no scientific or veterinary staff to make changes,” says Safkow. “We felt that both Koko and Ndume were not receiving the medical care that was required.” Sarah reports that one veterinarian occasionally visited the site to check on the gorillas but that “it was kind of known he would just sign off on papers, the ones that the Gorilla Foundation needed to be able to have the proper paperwork.” The Gorilla Foundation says it currently has a primary vet as well as backups who visit several times a year. Safkow, Sarah, and other employees have corroborated that both gorillas were fed massive numbers of vitamins and supplements—Safkow estimated Koko received between 70 and 100 pills a day. (The Gorilla Foundation says she currently takes “between 5 to 15 types of nutritional supplements,” as part of a regimen that “many doctors and naturopaths recommend for preventive maintenance.”) Sarah confirms that as part of her job as a food prep specialist, she was responsible for buying these supplements with the discount she received at a grocery store where she worked part-time. “We had to bribe her with all these things she shouldn’t be eating to get her to take these pills,” said Safkow. The list included smoked turkey, pea soup (“very salty,” Safkow pointed out), nonalcoholic beer, and candies. “We tried chocolate once we had tried everything else,” he said. The Gorilla Foundation denied this, yet it also said that chocolate is good for gorillas’ health—that a cardiologist suggested the gorillas eat 85 percent cacao to ward off heart disease and that the supplements given to the gorillas are “natural” and “high in antioxidants, which are powerful boosters of health and longevity.” Research on antioxidant supplements in humans shows no such thing, however, and they may do more harm than good. In any case, it’s not clear how well research on antioxidants applies to gorillas. According to multiple former employees, these pills were recommended by Gabie Reiter, a woman who calls herself a “certified naturopath and medical intuitive,” who consulted with Patterson on the phone. Reiter’s website advertises, among other services, chakra alignment and removal of pollutants and toxins through telephone “power tune-ups.” “[Patterson] would be on the phone with [Reiter] almost daily, and Penny would use her for the medical and emotional needs for the gorillas,” Safkow says, adding that Reiter “would make adjustments to her homeopathic medication, all without any scientific or veterinary diagnoses recommending that treatment.” The caption for a 2005 photo on Koko’s website describes her as having the option of taking certain homeopathic cures when she asks for them. I contacted Reiter to ask about her work at the Gorilla Foundation. 
At first, she replied to me with a text message, suggesting she was familiar with the organization: “I’m going to talk to Penny Patterson and Ron Cohn first and will get back with you.” A day later, she followed up with an email saying, “After consulting with Penny Patterson, I won’t be available for an interview.” The Gorilla Foundation said that Reiter “uses a combination of kinesthesiological testing and experience” to select “natural” and homeopathic supplements and doses. The foundation maintained that the veterinarian approves all supplements. Employees also expressed concern about the treatment of Ndume, a male silverback gorilla who was brought to the Gorilla Foundation to impregnate Koko. (Michael, who died in 2000, was originally intended to be her mate, but they developed a sibling relationship rather than a mating one.) For more than two decades, it has been the Gorilla Foundation’s public goal for Koko to have a baby so that she can teach her child sign language. “Koko has been telling us for years that she wants to have a baby,” the Gorilla Foundation wrote to me in an email. Over the years, Koko has been photographed playing with her dolls as “practice for motherhood,” and the Gorilla Foundation says that Koko “chose” Ndume through video dating. He has been on long-term loan from the Cincinnati Zoo since 1991. After 23 years, the two have not mated, and former employees report that they spend all their time separated. Koko and Ndume “can only see each other through two sets of bars,” said Safkow. The Gorilla Foundation said that the gorillas are “strongly emotionally bonded,” “communicate constantly” through a mesh partition, and “care deeply about one another.” The foundation’s website says that while Koko is frustrated about her lack of baby, she is not giving up on “her dream.” Safkow also said he believed Patterson strongly favored Koko—after all, Koko has been her project for so many years—and would spend time talking and laughing with her in her trailer while Ndume cried. “Patterson does not spend any time with Ndume, except to walk by his window and give him a treat,” he said. “I feel that he’s the real victim.” Sarah agreed, saying, “He’s isolated and forgotten about there.” The Gorilla Foundation denied this. In 2012, several former employees told the apes’ issues blogger Forsythe that Ndume had not been receiving proper care for years, and Forsythe sent an email to the Animal and Plant Health Inspection Services branch of the USDA asking for confirmation that the gorillas were properly cared for. A month later, the USDA reported that certain aspects of Ndume’s care had been neglected, including the fact that he had not been TB tested in more than 20 years. (The USDA recommends gorillas be tested every year.) Caregivers at the Gorilla Foundation also felt that the leaders of the organization overstepped their bounds in controlling caregivers’ actions. Safkow described a set of closed-circuit cameras from which Patterson could monitor the goings-on of the foundation from her home. “She wanted us to wear phone headsets so she could call us directly,” said Safkow. “So while we’re sitting there with Koko, she’s watching us and calling us and micromanaging all of our interactions. It was insane.” The Gorilla Foundation said that this technology is used to “support gorilla safety, health, research, and care.” The micromanagement included encouraging employees to do things they didn’t feel comfortable with, all in the name of pleasing the gorillas. 
In the mid-2000s, the Gorilla Foundation was sued by two former employees for sexual harassment. Nancy Alperin and Kendra Keller alleged that Patterson pressured them to show their nipples to Koko. Patterson apparently thought this was for Koko’s benefit; it was alleged in the lawsuit that Patterson once said, “Koko, you see my nipples all the time. You are probably bored with my nipples. You need to see new nipples. I will turn my back so Kendra can show you her nipples.” The Foundation strongly denied the claims at the time but settled with Alperin and Keller. Safkow, who worked at the Gorilla Foundation several years after the Alperin and Keller suit was settled, said Koko remained intrigued by nipples. “It was just a given that you show your nipples to Koko,” he said. “Koko gets what Koko wants. We would even hold our nipples hostage from her until she took her pills.” This side of Koko is not presented to outsiders, he says. “It’s different when there’s a big donor. She wants to see their nipples, and points at her nipple and makes a grunting sound—but Penny would spin this to, ‘*Nipple* sounds like *people* and what she’s saying is she wants to see more people.’ ” (A similar “sounds like” dialogue appears in the AOL chat transcript from 1998.) Safkow also recalls an incident when he was pressured to show his nipples to Koko in the presence of several other Gorilla Foundation employees. “At the time, Penny claimed Koko was depressed. We had afternoon ‘porch parties’ where we dressed up for Koko and acted goofy to cheer her up. Koko came up to the mesh between us and asked to see my nipples. It was embarrassing since other people were around, so I told her, ‘Later.’ Then Penny put her hand on my shoulder and said, ‘Do you mean that? I’m just asking because Koko really needs all the support she can get right now.’ ” The Gorilla Foundation said that Koko may make these requests but that it does not ask caregivers to comply. “This seems to be a natural curiosity for a gorilla like Koko, and we don’t censor her communication, we just observe and record it,” the foundation wrote in an email. * * * At the Great Ape Trust, which houses Kanzi and several other bonobos, concerned employees took similar action. In September 2012, 12 caregivers and researchers wrote a letter to the board of directors raising concerns about the leadership and judgment of Sue Savage-Rumbaugh, the researcher who led Kanzi’s training. “The Great Ape Trust/Bonobo Hope Sanctuary is not a fit place for 7 bonobos,” they wrote, alleging that they had “observed and internally reported injuries to apes, unsafe working conditions, and unauthorized ape pregnancies.” (The letter was eventually posted on Forsythe’s site and is readable here.) The Great Ape Trust’s board conducted an investigation in response to employees’ allegations, in cooperation with the U.S. Department of Agriculture, which certifies primate facilities. Savage-Rumbaugh was placed on administrative leave from her position as senior scientist and executive director of bonobo research. Despite her official-sounding title, according to her attorneys, Savage-Rumbaugh was not even officially employed by the center at the time she was placed on administrative leave, and had not been since 2008. (In a 2012 interview, Savage-Rumbaugh mentioned that she was asked to accept an “emeritus” researcher designation after asserting that bonobos were making representative art.) 
According to the *Des Moines Register*, while the board of directors and USDA completed their investigation, Savage-Rumbaugh’s sister and niece were among the caregivers and volunteers responsible for the bonobos’ care. Savage-Rumbaugh was reinstated days after Panbanisha died of pneumonia in November 2012. After Savage-Rumbaugh’s return to the Great Ape Trust, criticism of the organization continued. In early 2013, it hosted events that allowed the general public to visit with bonobos directly. Former employees were perturbed that Teco, Kanzi’s baby son, attended public events, because this could have put him at risk for disease. Dalbey, who worked at the organization until 2011, said that would not have been allowed when she was an employee there. “You had to have a reason to be in the building, and the house standards were rigid,” she said. “There’s no telling whether those people had their TB tests and flu shots.” Al Setka, former communications director, agrees. “We were a different organization,” he says. “We didn’t allow public visitations; we were a scientific and educational organization.” According to Savage-Rumbaugh’s attorneys, she has been barred from access to the bonobos since spring 2013. The Great Ape Trust has recreated itself as the Ape Cognition and Conservation Initiative, and the new science directors of the organization think of it as a new beginning. “This allows us to start from scratch,” says Jared Taglialatela, appointed late last year as the organization’s new director of research. The old organization is “not who we are, and that’s not what we’re hoping to be.” Recently, however, the organization was under fire when it was announced that Kanzi—who, by some accounts, is overweight—would be judging a dessert-making contest at the Iowa State Fair. Taglialatela said he was “not exactly thrilled with the idea” of having Kanzi participate in the contest but that the board of directors “envisioned that this would be a great way to raise awareness” about bonobo conservation efforts. It was reported that Kanzi would receive desserts directly from the fair, but Taglialatela says the story was wrong: “All of Kanzi’s food is prepared on site with staff.” Regardless of the dietary issues, members of the ape community are still concerned about the exploitation of apes for human entertainment. Taglialatela says the ACCI has tried to make sure the state fair event was as educational as possible. “We’ve tried to make sure that the science and conservation education portion plays center stage in his involvement, and we are confident in seeing that realized,” he says. * * * When I asked former employees of the Gorilla Foundation and ACCI (the former Great Ape Trust) what they thought would be the best outcome for the apes still used in language and cognition research, they expressed a combination of desperation and optimism about the future. All the former employees I spoke with emphasized their love for their ape friends and their desire to see them healthy. From what Taglialatela told me about the ACCI’s new goals, it sounds like the organization is striving to be what Dalbey and Setka said the organization once was: a research, education, and conservation group. Taglialatela says that the ACCI’s “primary mission is scientific discovery” and that he’s hopeful that they’ll be able to win grants and funding to support their research. 
Previously, he said, it seemed that there was “not much in terms of publication” coming from the organization’s research program, but he said that he hoped to change this. Dalbey and Setka both were hopeful that the new organization would care for the bonobos properly, but they expressed reservations about more invasive research, such as anesthetizing animals for brain scans, which Taglialatela has done in his previous work with chimpanzees. Taglialatela says the research is still in its infancy, so it’s yet to be seen what will happen there. The ACCI has also been embroiled in a legal dispute with Savage-Rumbaugh and her nonprofit organization Bonobo Hope. A previous legal agreement determined that the Great Ape Trust and Bonobo Hope would share ownership of the bonobos, and a recent court motion declared that ACCI was bound to the same legal responsibilities. It’s unclear, at this point, whether Savage-Rumbaugh will be allowed access to the bonobos again. Taglialatela expressed disappointment that the messy legal process took the focus away from the bonobos, saying that “it is unfortunate that Dr. Savage-Rumbaugh and the [Bonobo Hope] members appear more concerned about her access and her self-interests rather than those of the bonobos.” Former Gorilla Foundation researchers were in agreement about what they saw as Ndume’s most promising future. Many of the people I spoke with suggested Ndume be put back in the care of the Cincinnati Zoo, which still legally owns him. “Ndume needs to be taken away, back to the Cincinnati Zoo or somewhere he’s able to be socialized into a troop and lead a normal life,” said Sarah. Blogger Forsythe created a petition asking the Cincinnati Zoo to reclaim Ndume, which more than 3,700 people signed. According to Forsythe, the zoo sent several private messages to individuals who posted on the zoo’s Facebook page about Ndume, messages in which the zoo insisted that Ndume is happy and receiving enrichment activities at the Gorilla Foundation. Forsythe sees this fight as a lost cause and has ended the petition. The Cincinnati Zoo has not released any public statements about Ndume and did not respond to requests for comment. Koko is a more difficult story, and though these former employees are concerned about Patterson’s care for Koko, they recognize that given the close bond between the two, separating them could be disastrous. “Koko needs Penny—there’s no way she could live without Penny,” Safkow says. “Koko’s somewhere in between a gorilla and a human, and there’s really no hope for her outside the Gorilla Foundation.” But Sarah suggests more oversight is needed to ensure the gorillas are healthy. “Koko definitely needs medical care, and she needs a trained vet to get her back on a normal diet,” she said. Former employees express anger and frustration that the Gorilla Foundation continues to solicit donations for projects that have gone on for decades without success. According to the Gorilla Foundation’s 2013 tax forms, it has collected nearly $8 million in the past five years. Former employees allege that people are misled about what their donations go to. Safkow thinks that the foundation needs to be upfront about the feasibility of its goals. In addition to Koko’s baby, the Gorilla Foundation has been raising money for the gorillas to retire in Maui. The foundation has leased land and hired a surveyor. But from the organization’s tax forms, it appears that no significant progress has been made on the Maui project since 2003. 
“Just tell people, ‘We need your money to take care of this aging gorilla.’ ” Safkow said. “She’s not having a baby, she’s not going to Maui, but she does need money to help her—she’s one of a kind.” Others are concerned that the Gorilla Foundation lists ape conservation efforts as one of its primary goals. In 2001, the organization contributed to the construction of a gorilla enclosure in Cameroon named after Michael. According to an ape conservation activist (who wishes to remain anonymous) and three former employees, little has been done since then. The group’s website reports that the Gorilla Foundation sent tens of thousands of copies of Patterson and Cohn’s book *Koko’s Kitten* to schoolchildren in Cameroon as part of what they call “empathy education.” Some former employees felt deceived by the way the gorilla research was presented to them when they were hired. Alex (a pseudonym) said he and others were “duped into this fantasy that the foundation is doing amazing work and that we would be a major contributor.” He also thinks that there should be “an apology statement issued to anyone who donated time and money to the foundation.” When the Gorilla Foundation responded to all these issues, it speculated that the allegations were “seemingly provided … from disgruntled former employees.” One wonders why so many employees were disgruntled in the first place. I sent a message to the only current caregiver I could find contact information for but received no response; remember, employees are made to sign nondisclosure agreements. It may be a coincidence, but a week after I first contacted the Gorilla Foundation for its comments on these allegations, it published a press release on its website announcing several ambitious new projects, including distributing even more copies of *Koko’s Kitten* in Cameroon, opening up data to scientists, and developing a Koko signing app, and the foundation reiterated its dedication to the “care and protection of Koko and Ndume.” Perhaps the foundation’s operations have changed as a result of the mass resignations in 2012—and many people I spoke with remain hopeful that the organization has taken or will take action to address their complaints. Koko and Kanzi are still beloved. Their language and cognitive skills are standard parts of intro social science courses. Their names are famous, and you can frequently see references to these apes’ stories in the media. “People just don’t want to hear anything negative,” says Safkow, describing the intrigue surrounding Koko. “You want to believe this fairy tale; it’s magical.” But like all fairy tales, the one about talking apes is partly make-believe. No matter how much we wish to project ourselves onto them, they are still apes—albeit very intelligent ones. They deserve our respect, and, at the very least, proper care. Our original plan for these apes—to study their capacity for language—has more or less been achieved, and it’s unclear how much more we *can* learn, as apes like Koko and Kanzi are reaching old age. Through these projects, we’ve learned about the ability of nonhuman apes to associate symbols or signs with objects in the world and to use this knowledge to communicate with humans. We’ve learned about the uniqueness of human language. But we may also have learned something about how strange, stubborn, and fanciful we can be. ***Correction, Aug. 21, 2014: **This article originally stated that chimpanzees are our closest ancestors. 
They are our closest relatives; our last common ancestor with chimps and bonobos lived more than 6 million years ago.
true
true
true
Last week, people around the world mourned the death of beloved actor and comedian Robin Williams. According to the Gorilla Foundation in Woodside,...
2024-10-12 00:00:00
2014-08-21 00:00:00
https://compote.slate.co…b.jpg?width=1560
article
slate.com
Slate
null
null
7,040,479
http://www.bbc.co.uk/news/world-us-canada-25691066
Half of US Congressional politicians are millionaires
null
# Half of US Congressional politicians are millionaires

**For the first time, half of the members of the US Congress are millionaires, according to a wealth analysis.**

At least 268 of the 534 politicians in the Senate and House of Representatives had a net worth of $1m (£606,821) or more in 2012.

Democrats were slightly wealthier than Republicans, found the data from the Center for Responsive Politics (CRP) at OpenSecrets.org.

It comes as politicians debate national jobless benefits and the minimum wage.

The median net worth for Washington politicians came in at $1m, according to the data. Democrats registered a median wealth of $1.04m while Republicans had $1m.

Senators ranked higher than members of the House in median wealth, with $2.7m versus $896,000 in the lower chamber.

The wealthiest member of Congress was Republican Congressman Darrell Issa. The Californian reported a net worth of up to $598m, earned largely through sales of a car antitheft system.

The poorest member of Congress was also a California Republican, Congressman David Valadao. He listed debts upwards of $12m, largely stemming from loans on a family dairy farm.

As Congress suffers under some of its lowest approval ratings in history, "there's been no change in our appetite to elect affluent politicians to represent our concerns in Washington", CRP director Sheila Krumholz told the Agence France-Presse news agency. Also necessary is wealth "to run financially viable campaigns", she added.
true
true
true
At least 268 of the 534 politicians in the Senate and House of Representatives have a net worth of $1m, the first time in US history, finds a study.
2024-10-12 00:00:00
2014-01-10 00:00:00
https://ichef.bbci.co.uk…2214314_issa.jpg
article
bbc.com
BBC News
null
null
17,958,135
https://www.chrisstucchio.com/blog/2018/the_price_of_privacy.html
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
26,532,041
https://www.bleepingcomputer.com/news/microsoft/microsoft-halts-rollout-of-windows-10-kb5001649-emergency-update/
Microsoft halts rollout of Windows 10 KB5001649 emergency update
Lawrence Abrams
*Update: The KB5001649 update is rolling out again. You can find more information here.*

Microsoft has paused the Windows 10 KB5001649 cumulative update rollout, likely due to installation issues and reported crashes. Microsoft is now offering the previously released KB5001567 emergency update instead.

The March 2021 updates have been a complete mess when printing, with update after update causing new issues to arise.

After the release of the March 2021 Windows 10 KB5000802 and KB5000808 cumulative updates, users began experiencing "APC_INDEX_MISMATCH for win32kfull.sys" BSOD crashes when printing. Other users experienced different issues, such as blank pages, black bars in printouts, and background graphics not printing.

To fix these issues, Microsoft released the emergency out-of-band update KB5001567 on March 15th. While this update fixed the bug causing Windows 10 to crash when printing, it did not resolve the other issues users experienced.

"The updates caused missing text and graphics during print jobs to local (usb) Zebra industrial label printers - various models. Once KB was uninstalled all printing normalized. Watch out for KB5000802, KB5000808, KB5000809 and KB5000822 depending on version of Windows 10," a user reported in one of our articles on these printing issues.

To fix the remaining issues, Microsoft released a second OOB emergency update on March 18th, known as KB5001649. This update was supposed to fix formatting issues, missing graphics elements, blank pages, or black bars on printed pages.

Microsoft released this update through Windows Update as an optional update, and it replaced the previously offered KB5001567 update, as shown below.

## Microsoft pauses the rollout of KB5001649 printing fix

Starting today, Microsoft appears to have paused the rollout of KB5001649, as it is no longer available on any of our devices that were previously offered the update. Instead, when checking Windows Update, the same devices are now offered the KB5001567 update released earlier this week.

As KB5001649 superseded KB5001567, the only explanation is that KB5001649 is no longer offered via Windows Update. Windows Latest and BornCity have also confirmed this pause of the KB5001649 update.

Microsoft has not provided any official reason for the pause, and the KB5001649 support bulletin has not been updated with any information. Furthermore, the update is still available via the Microsoft Update Catalog, just no longer via Windows Update.

The rollout has likely been paused due to the many reports [1, 2, 3, 4, 5] of users receiving a "We couldn't install this update, but you can try again (0x80070541)" error when attempting to install the update. Another possible reason is that people are still crashing when printing with this update installed.

"This didn't work for the instance I had today. I ran KB5001649 but when tried to print (Kyocera TaskAlfa) the system crashed. I am reverting back to disabling updates for the time being manually removing," a reader reported in one of our articles.

Unless you are experiencing printing issues, it may be wiser to hold off on these optional non-security updates until Microsoft can provide further information.

When we asked Microsoft why the update was pulled, they just shared that they are working on the fixes.

"We are working to fix the issues some customers may be experiencing when printing from some apps." - a Microsoft spokesperson.
*Update 3/21/21: Added statement from Microsoft.*
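For readers who want to check whether any of the updates mentioned in this story are present on a machine before deciding to hold off or roll back, here is a small hypothetical helper (my own sketch, not something from the article or from Microsoft). It assumes a Windows machine with PowerShell on the PATH; the KB numbers are the ones named above, and any removal would remain a manual decision.

```python
# Hypothetical helper (not from the article): list which of the March 2021
# printing-related updates are installed, by shelling out to PowerShell's
# Get-HotFix cmdlet. Run on Windows with PowerShell available on PATH.
import subprocess

PROBLEM_KBS = {"KB5000802", "KB5000808", "KB5000809", "KB5000822", "KB5001567", "KB5001649"}

def installed_hotfixes() -> set[str]:
    """Return the set of installed hotfix IDs reported by Get-HotFix."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Get-HotFix | Select-Object -ExpandProperty HotFixID"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in result.stdout.splitlines() if line.strip()}

if __name__ == "__main__":
    for kb in sorted(installed_hotfixes() & PROBLEM_KBS):
        # Removal, if you decide it is warranted, remains a manual step,
        # e.g. running: wusa.exe /uninstall /kb:5001649
        print(f"{kb} is installed")
```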
true
true
true
Microsoft has paused the Windows 10 KB5001649 cumulative update rollout, likely due to installation issues and reported crashes. Microsoft is now offering the previously released KB5001567 emergency update instead.
2024-10-12 00:00:00
2021-03-20 00:00:00
https://www.bleepstatic.…glass-broken.jpg
article
bleepingcomputer.com
BleepingComputer
null
null
25,135,822
https://blog.cloudflare.com/network-layer-ddos-attack-trends-for-q3-2020/
Network-layer DDoS attack trends for Q3 2020
Vivek Ganti
**DDoS attacks are surging** — both in frequency and sophistication. After doubling from Q1 to Q2, the total number of network layer attacks observed in Q3 doubled again — resulting in a 4x increase in number compared to the pre-COVID levels in the first quarter. Cloudflare also observed more attack vectors deployed than ever — in fact, while SYN, RST, and UDP floods continue to dominate the landscape, we saw an explosion in protocol specific attacks such as mDNS, Memcached, and Jenkins DoS attacks.

Here are other key network layer DDoS trends we observed in Q3:

- Majority of the attacks are under 500 Mbps and 1 Mpps — both still suffice to cause service disruptions
- We continue to see a majority of attacks be under 1 hr in duration
- Ransom-driven DDoS attacks (RDDoS) are on the rise as groups claiming to be Fancy Bear, Cozy Bear and the Lazarus Group extort organizations around the world. As of this writing, the ransom campaign is still ongoing. See a special note on this below.

### Number of attacks

The total number of L3/4 DDoS attacks we observe on our network continues to increase substantially, as indicated in the graph below. All in all, Q3 saw over 56% of all attacks this year — double that of Q2, and four times that of Q1. In addition, the number of attacks per month increased throughout the quarter.

While September witnessed the largest number of attacks overall, August saw the most large attacks (over 500 Mbps). Ninety-one percent of large attacks in Q3 took place in that month — while monthly distribution of other attack sizes was far more even.

While the total number of attacks between 200-300 Gbps decreased in September, we saw more global attacks on our network in Q3. This suggests the increase in the use of distributed botnets to launch attacks. In fact, in early July, Cloudflare witnessed one of the largest-ever attacks on our network — generated by Moobot, a Mirai-based botnet. The attack peaked at 654 Gbps and originated from 18,705 unique IP addresses, each believed to be a Moobot-infected IoT device. The attack campaign lasted nearly 10 days, but the customer was protected by Cloudflare, so they observed no downtime or service degradation.

## Attack size (bit rate and packet rate)

There are different ways of measuring a L3/4 DDoS attack’s size. One is the volume of traffic it delivers, measured as the bit rate (specifically, Gigabits-per-second). Another is the number of packets it delivers, measured as the packet rate (specifically, packets-per-second). Attacks with high bit rates attempt to saturate the Internet link, and attacks with high packet rates attempt to overwhelm the routers or other in-line hardware devices.

In Q3, most of the attacks we observed were smaller in size. In fact, over 87% of all attacks were under 1 Gbps. This represents a significant increase from Q2, when roughly 52% of attacks were that small. Note that, even ‘small’ attacks of under 500 Mbps are many times sufficient to create major disruptions for Internet properties that are not protected by a Cloud based DDoS protection service. Many organizations have uplinks provided by their ISPs that are far less than 1 Gbps. Assuming their public facing network interface also serves legitimate traffic, you can see how even these ‘small’ DDoS attacks can easily take down Internet properties.

This trend holds true for attack packet rates. In Q3, 47% of attacks were under 50k pps — compared to just 19% in Q2.
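To make the relationship between these two size measures concrete, here is a minimal back-of-the-envelope sketch (not part of the original post; the packet sizes are illustrative assumptions) of how a given packet rate translates into a bit rate:

```python
# Back-of-the-envelope conversion between packet rate and bit rate.
# The packet sizes used here are illustrative assumptions, not measured values.

def bit_rate_gbps(packets_per_second: float, packet_size_bytes: int) -> float:
    """Convert a packet rate and an average packet size into Gigabits per second."""
    return packets_per_second * packet_size_bytes * 8 / 1e9

# A 50k pps flood of minimum-size 64-byte packets is tiny in volume terms...
print(bit_rate_gbps(50_000, 64))     # ~0.026 Gbps (about 26 Mbps)

# ...but the same packet rate at full 1,500-byte frames already consumes
# more than half of a typical 1 Gbps uplink.
print(bit_rate_gbps(50_000, 1_500))  # ~0.6 Gbps
```

At minimum-size packets a 50k pps flood is negligible in volume, but at full-size frames the same packet rate already consumes more than half of a 1 Gbps uplink, which is why attacks that look small by one measure can still overwhelm an unprotected link or appliance.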
Smaller attacks can indicate that amateur attackers may be behind the attacks — using tools easily available to generate attacks on exposed IPs/networks. Alternatively, small attacks may serve as a smokescreen to distract security teams from other kinds of cyberattacks that might be taking place simultaneously.

### Attack duration

In terms of length, very short attacks were the most common attack type observed in Q3, accounting for nearly 88% of all attacks. This observation is in line with our prior reports — in general, Layer 3/4 DDoS attacks are getting shorter in duration.

Short burst attacks may attempt to cause damage without being detected by DDoS detection systems. DDoS services that rely on manual analysis and mitigation may prove to be useless against these types of attacks because they are over before the analyst even identifies the attack traffic.

Alternatively, short attacks may be used to probe the cyber defenses of the target. Load-testing tools and automated DDoS tools, which are widely available on the dark web, can generate short bursts of, say, a SYN flood, and then follow up with another short attack using an alternate attack vector. This allows attackers to understand the security posture of their targets before they decide to potentially launch larger attacks at larger rates and longer durations - which come at a cost.

In other cases, attackers generate small DDoS attacks as proof and warning to the target organization of the attacker’s ability to cause real damage later on. It’s often followed by a ransom note to the target organization, demanding payment so as to avoid suffering an attack that could more thoroughly cripple network infrastructure.

Whatever their motivation, DDoS attacks of any size or duration are not going away anytime soon. Even short DDoS attacks cause harm, and having an automated real-time defense mechanism in place is critical for any online business.

### Attack vectors

SYN floods constituted nearly 65% of all attacks observed in Q3, followed by RST floods and UDP floods in second and third places. This is relatively consistent with observations from previous quarters, highlighting the DDoS attack vector of choice by attackers.

While TCP based attacks like SYN and RST floods continue to be popular, UDP-protocol specific attacks such as mDNS, Memcached, and Jenkins are seeing an explosion compared to the prior quarter.

Multicast DNS (mDNS) is a UDP-based protocol that is used in local networks for service/device discovery. Vulnerable mDNS servers respond to unicast queries originating outside of the local network, which are ‘spoofed’ (altered) with the victim's source address. This results in amplification attacks. In Q3, we noticed an explosion of mDNS attacks — specifically, we saw a 2,680% increase compared to the previous quarter. This was followed by Memcached and Jenkins attacks.

Memcached is a Key Value database. Requests can be made over the UDP protocol with a spoofed source address of the target. The size of the Value stored in the requested Key will affect the amplification factor, resulting in a DDoS amplification attack.

Similarly, Jenkins, NTP, Ubiquity and other UDP based protocols have seen a dramatic increase over the quarter due to their stateless UDP nature. A vulnerability in an older version (Jenkins 2.218 and earlier) aided the launch of DDoS attacks. This vulnerability was fixed in Jenkins 2.219 by disabling UDP multicast/broadcast messages by default.
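As a rough illustration of the amplification arithmetic described above, the sketch below uses made-up request and response sizes (real factors vary widely by protocol and by what the abused server stores) and simply computes the factor as response size over request size:

```python
# Illustrative reflection/amplification arithmetic.
# The request and response sizes below are assumptions for illustration only;
# real amplification factors vary widely by protocol and by what the abused
# server returns (e.g. the size of the Value stored behind a Memcached Key).

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bytes delivered to the victim per byte of spoofed traffic the attacker sends."""
    return response_bytes / request_bytes

def reflected_gbps(attacker_gbps: float, factor: float) -> float:
    """Approximate traffic arriving at the spoofed victim."""
    return attacker_gbps * factor

# Example: a ~100-byte spoofed UDP query that elicits a ~10,000-byte response.
factor = amplification_factor(request_bytes=100, response_bytes=10_000)
print(factor)                       # 100.0, i.e. a 100x amplification factor

# An attacker able to emit only 0.5 Gbps of spoofed queries could then direct
# roughly 50 Gbps of response traffic at the victim.
print(reflected_gbps(0.5, factor))  # 50.0
```

The takeaway is only that a modest stream of spoofed queries can be turned into a much larger stream of responses aimed at the victim, which is what makes these stateless UDP services attractive reflectors.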
However, there are still many vulnerable and exposed devices running UDP-based services, and these are being harnessed to generate volumetric amplification attacks.

### Attack by country

Looking at country-based distribution, the United States observed the most number of L3/4 DDoS attacks, followed by Germany and Australia.

Note that when analyzing L3/4 DDoS attacks, we bucket the traffic by the Cloudflare edge data center locations where the traffic was ingested, and not by the location of the source IP. The reason is when attackers launch L3/4 attacks they can spoof the source IP address in order to obfuscate the attack source. If we were to derive the country based on a spoofed source IP, we would get a spoofed country. Cloudflare is able to overcome the challenges of spoofed IPs by displaying the attack data by the location of Cloudflare’s data center in which the attack was observed. We’re able to achieve geographical accuracy in our report because we have data centers in over 200 cities around the world.

### Africa
### Asia Pacific & Oceania
### Europe
### Middle East
### North America
### South America
### United States

### A note on recent ransom-driven DDoS attacks

Over the past months, Cloudflare has observed another disturbing trend — a rise in extortion and ransom-based DDoS (RDDoS) attacks targeting organizations around the world. While RDDoS threats do not always result in an actual attack, the cases seen in recent months show that attacker groups are willing to carry out the threat, launching large scale DDoS attacks that can overwhelm organizations that lack adequate protection. In some cases, the initial teaser attack may be sufficient to cause impact if not protected by a Cloud based DDoS protection service.

In a RDDoS attack, a malicious party threatens a person or organization with a cyberattack that could knock their networks, websites, or applications offline for a period of time, unless the person or organization pays a ransom. You can read more about RDDoS attacks here.

Entities claiming to be Fancy Bear, Cozy Bear, and Lazarus have been threatening to launch DDoS attacks against organizations’ websites and network infrastructure unless a ransom is paid before a given deadline. Additionally, an initial ‘teaser’ DDoS attack is usually launched as a form of demonstration before or parallel to the ransom email. The demonstration attack is typically a UDP reflection attack using a variety of protocols, lasting roughly 30 minutes in duration (or less).

What to do if you receive a threat:

- **Do not panic and we recommend you to not pay the ransom**: Paying the ransom only encourages bad actors, finances illegal activities — and there’s no guarantee that they won’t attack your network now or later.
- **Notify local law enforcement**: They will also likely request a copy of the ransom letter that you received.
- **Contact Cloudflare**: We can help ensure your website and network infrastructure are safeguarded from these ransom attacks.

### Cloudflare DDoS protection is different

On-prem hardware/cloud-scrubbing centers can't address the challenges of modern volumetric DDoS attacks. Appliances are easily overwhelmed by large DDoS attacks, Internet links quickly saturate, and rerouting traffic to cloud scrubbing centers introduces unacceptable latency penalties. Our cloud-native, always-on, automated DDoS protection approach solves problems that traditional cloud signaling approaches were originally created to address.
Cloudflare’s mission is to help build a better Internet, which grounds our DDoS approach and is why in 2017, we pioneered unmetered DDoS mitigation for all of our customers on all plans including the free plan. We are able to provide this level of protection because every server on our network can detect & block threats, enabling us to absorb attacks of any size/kind, with no latency impact. This architecture gives us unparalleled advantages compared to any other vendor.

- **51 Tbps of DDoS mitigation capacity and under 3 sec TTM**: Every data center in Cloudflare’s network detects and mitigates DDoS attacks. Once an attack is identified, Cloudflare’s local data center mitigation system (dosd) generates and applies a dynamically crafted rule with a real-time signature — and mitigates attacks in under 3 seconds globally on average. This 3-second Time To Mitigate (TTM) is one of the fastest in the industry. Firewall rules and “proactive”/static configurations take effect immediately.
- **Fast performance included**: Cloudflare is architected so that customers do not incur a latency penalty as a result of attacks. We deliver DDoS protection from every Cloudflare data center (instead of legacy scrubbing centers or on-premise hardware boxes), which allows us to mitigate attacks closest to the source. Cloudflare analyzes traffic out-of-path, ensuring that our DDoS mitigation solution doesn’t add any latency to legitimate traffic. The rule is applied at the most optimal place in the Linux stack for a cost efficient mitigation, ensuring no performance penalty.
- **Global Threat Intelligence**: Like an immune system, our network learns from/mitigates attacks against any customer to protect them all. With threat intelligence (TI), it automatically blocks attacks and is employed in customer facing features (Bot Fight mode, Firewall Rules & Security Level). Users create custom rules to mitigate attacks based on traffic attribute filters, threat & bot scores generated using ML models (protecting against bots/botnets/DDoS).

To learn more about Cloudflare’s DDoS solution contact us or get started.
http://www.adobe.com/devnet/dreamweaver/articles/dwmx_design_tips.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
751,952
http://www.ted.com/talks/mark_bittman_on_what_s_wrong_with_what_we_eat.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
35,334,696
https://twitter.com/mcxfrank/status/1640379247373197313
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
6,530,794
http://news.sciencemag.org/physics/2013/10/fusion-breakthrough-nif-uh-not-really-%E2%80%A6
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
# Stop Writing Dead Programs, Strange Loop 2022

Source: https://jackrusher.com/strange-loop-2022/
00:12.95 My talk today is *Stop Writing Dead Programs*. 00:14.57 This is sort of the thesis statement 00:16.01 for the talk, even though it's 40 years 00:18.52 old, this Seymour Papert quote saying 00:19.91 that we're still digging ourselves into 00:21.65 a kind of a pit by continuing to 00:23.68 preserve practices that have no rational 00:26.92 basis beyond being historical. A strong runner up was this quote, which captures the essence of what we should be trying to do when creating new languages: “The Self system attempts to integrate intellectual and non-intellectual aspects of programming to create an overall experience. The language semantics, user interface, and implementation each help create this integrated experience.” (source) 00:29.08 I will start with a somewhat personal 00:30.47 journey in technology. I'm going to ask 00:31.66 you for some feedback at some different 00:34.13 places, so first off – by applause – how 00:36.97 many of you know what this is? 00:40.43 Okay, okay, that's actually more 00:41.86 than I expected. Now, how many of you 00:44.33 actually used one of these? 00:47.63 Okay, so what I can say is that I am part 00:49.36 of the last generation of people who 00:51.04 were forced to use punch cards at school. 00:52.79 I still had to write Fortran programs 00:55.25 with punch cards, and this thing is a 00:56.93 card punch. It's like a keyboard, 00:58.85 except when you press the keys you're 01:00.22 actually making holes in a piece of 01:01.54 paper, and then you feed them 01:03.29 into this thing in the back, and the 01:05.57 pieces of paper look like this. So each 01:06.83 one of these vertical columns is 01:08.51 basically a byte, and you're stabbing 01:10.19 through the different bits of the 01:11.81 byte to indicate what letter it is. If 01:13.37 you look at the top left corner, 01:16.55 you see `Z(1) = Y + W(1)` . 01:19.19 This is one line of code – a 01:21.53 card is one line of code. Something to 01:23.87 notice about this card it's 80 columns 01:25.55 wide. We're going to come back to that 01:25.56 later. Some commenters were confused that we still used punched cards in the 80s, when display terminals already existed. This was in the context of a required class for engineering students to prepare them for the possibility that they would encounter punch cards in the wild. Most of us never did, beyond this one class. 01:27.46 This design dates from 01:29.45 1928. This is a Hollerith punch card, the 01:31.42 same one used forever. Now, what does a 01:34.24 program look like if you're programming 01:35.33 like this? It looks like this: it's a deck. 01:37.37 Now notice the rubber band. When you're 01:40.01 doing this, you live in terminal fear 01:41.74 that you will drop the deck of cards. It 01:44.39 is a terrible experience resorting 01:47.21 the cards. That long diagonal stripe 01:48.89 there is so that this person, who made 01:50.51 this particular deck, could put it back 01:51.83 together without having to look at every 01:53.33 single line in the process. And the words 01:55.24 written on the top of the deck are sort 01:57.59 of indicating where different 01:58.49 subroutines are located within this program. 02:00.95 Now, to give you a sense of how long 02:02.99 these programs can get, this picture 02:05.45 (forgive me, it's a low quality picture). 02:06.64 This is the actual reader I used and 02:08.63 that in the front there is an actual 02:10.01 program I wrote. 
The lower right hand 02:11.80 corner one, which was a Fortran program 02:13.85 to simulate rocket flight, because my my 02:16.72 particular school had a connection to 02:18.22 NASA and we did a lot of Rocket-y things. 02:19.67 Right, so can you imagine how long it 02:22.25 took me to punch all these and put them 02:23.51 in there, and what we would do is give 02:25.25 them to a system operator who would feed 02:26.75 them into a computer. In this case the 02:28.67 computer I personally used was this one. 02:30.53 This is a VAX-11/780. This machine cost 02:34.19 nearly a million dollars, and had 16 02:35.99 megabytes – that's megabytes – of RAM, ran at 02:38.69 5 megahertz – that's megahertz! This 02:41.44 thing in front of me here is thousands 02:43.36 of times more powerful than the machine 02:44.80 that I was using then – that the whole 02:46.13 campus was using to do these kinds of 02:47.57 things then – and what would the output 02:49.49 look like that came from sending this 02:51.35 enormous deck of cards in? Well, it would 02:53.15 come out on a line printer that looks like 02:54.77 this. And you wouldn't get it right 02:56.75 away. An operator would give it to you 02:58.72 later. Note the vintage haircuts, the 03:00.89 fellow in the middle there is the actual 03:02.57 operator who was handing me these 03:03.71 outputs, and he's the person who gave me 03:04.85 these photos of this equipment. 03:06.47 So this process, as you can imagine, was 03:08.57 hard, but it was hard in a dumb way. 03:11.33 Some things are hard because they have 03:13.07 to be, and I really support the idea of 03:14.69 overcoming challenges and doing hard 03:16.43 things, but this was hard 03:18.29 for reasons had nothing to do with the 03:19.79 actual problem you're trying to [solve]. 03:22.19 Like, something with a rocket and a simulation, 03:23.57 and you're thinking about not dropping 03:25.13 your punch card deck, and it's taking you 03:26.57 forever to find out what happened. So, it 03:29.14 really hinges on your ability to emulate 03:30.94 the computer in your head because the 03:33.11 computer's not going to help you in any 03:34.30 way. There's not an editor, there's 03:35.44 nothing, and that in turn hinges on 03:38.08 working memory, which is something that 03:39.47 is not very well distributed among 03:41.08 humans. There were a small number of 03:42.94 us for whom this whole thing came pretty 03:44.57 naturally, and we were treated as, like, 03:46.78 special people – as kind of high priests 03:48.58 with magical powers, and this is how we 03:50.69 came to think of ourselves, right, that 03:52.36 we're special [because] we can make it work. 03:53.86 But the truth is we were less priests 03:55.97 like this than we were monks like this – 03:57.41 hitting ourselves in the head. 03:59.93 Right, but the problem is – 04:02.86 as Peter Harkins mentions here – that 04:05.39 programmers have this tendency to, once 04:07.07 they master something hard (often 04:08.86 pointlessly hard), rather than then making 04:10.78 it easy they feel proud of themselves 04:12.58 for having done it and just perpetuate 04:14.33 the hard nonsense. And I'm going to argue 04:16.67 that a lot of what we still do today is 04:18.28 very much like what I was doing on that 04:19.78 old VAX. For one thing, there's a lot of 04:21.59 batch processing going on, and 04:23.74 what's wrong with batch processing? Hella 04:25.73 long feedback loops. 
It's no good, takes 04:27.77 you forever – it took me 45 minutes to 04:29.27 find out what a one card change would do 04:31.49 in the printout that I would get back, 04:32.99 because that was the loop. You're 04:34.79 thinking: well, it's not like that for us, 04:36.23 right, we're not poking holes in paper 04:37.67 cards – we have display terminals! But 04:40.31 how many of you guys have compile 04:42.05 cycles that can take 45 minutes? Famously, 04:44.87 the Go team wrote go because they 04:47.21 were so angry about waiting for an hour, 04:48.89 because they wanted to see what was 04:50.57 going to happen with some C++ 04:51.59 code they're running on some horrible 04:52.85 giant Google codebase. Maybe you want 04:54.89 to deploy your stuff and see if it works, 04:56.27 because we're all running web apps now. 04:57.29 So do you, like, stuff it in a 04:58.90 Docker container, and then ship it out to 05:00.65 the cloud and wait for a CI job? How long 05:02.57 does that that take? 05:04.15 Two hours for this guy! I mean why do we 05:06.77 tolerate this? This is crazy! Docker 05:08.51 shouldn't exist. It exists only because 05:09.95 everything else is so terribly 05:11.15 complicated that they added another 05:12.40 layer of complexity to make it work. It's 05:14.74 like they thought: if deployment is bad, 05:16.12 we should make development bad too. It's 05:18.59 just... it's not good. 05:21.53 So, what kind of things do we inherit 05:23.15 from this way of thinking about the 05:24.59 world? We get funny ideas that are built 05:26.62 into programming about time and state. 05:28.79 Ideas like, there should be a compile/run 05:31.07 cycle. This is a terrible idea, but it's 05:33.23 an ancient idea, that you're going to 05:34.49 compile the thing and you're getting an 05:35.68 artifact and you're going to run the 05:36.71 artifact over there and those two things 05:38.15 are completely different phases of your 05:39.71 process. There's going to be linear 05:41.81 execution – most programming languages 05:43.79 assume that there's only one thread and 05:45.35 you're going to run straight through 05:46.43 from the beginning to the end; that your 05:48.35 program is going to start up from a 05:49.67 blank State and then run to termination. 05:51.40 Now, how many programs that we actually 05:53.45 write do that? We'll revisit that in a 05:55.43 moment. This really only works if your 05:56.99 program is some kind of input/output 05:58.49 transformer. So there's no runtime 06:00.77 introspection, because runtime is 06:02.02 happening over there and your actual 06:03.40 work is happening over here, and you just 06:04.85 have to kind of guess from what happened, 06:05.93 how it might be related to your code, and 06:08.09 if there's a bug – well, sorry, failures 06:10.12 just halt your program. You get maybe a 06:11.68 core dump, or you get a log message 06:13.18 somewhere with a stack trace in it. Now, 06:15.23 what kind of programs do we really write? 06:16.90 Mostly long-lived servers. I've got 06:18.89 server processes with uptimes of a 06:20.81 thousand days. 06:21.89 They don't work the same way 06:23.62 `/usr/bin/sort` works. I don't want a 06:25.73 process that's optimized for writing 06:27.35 that. We also write GUI programs. GUI 06:29.51 programs are more intense than this, even. 
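To make that contrast concrete: a `/usr/bin/sort`-style program starts from a blank state, transforms its input into its output, and terminates, while the servers (and GUI programs) most of us actually write sit in a loop, holding state, as events arrive from several sources. A minimal sketch using plain Python and asyncio, with the handler and port as placeholder choices:

```python
import asyncio
import sys

def batch_filter():
    """Run-to-termination style: read everything, transform, print, exit."""
    print("".join(sorted(sys.stdin.readlines())), end="")

async def long_lived_server(host="127.0.0.1", port=8080):
    """Long-lived style: state persists while events are handled indefinitely."""
    state = {"requests": 0}

    async def handle(reader, writer):
        await reader.readline()
        state["requests"] += 1  # state survives across events
        writer.write(f"hello #{state['requests']}\n".encode())
        await writer.drain()
        writer.close()

    server = await asyncio.start_server(handle, host, port)
    async with server:
        await server.serve_forever()  # no planned termination
```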
06:31.30 So you've got all of these 06:32.68 different kinds of input coming into the 06:34.37 program, and it's maybe it's talking to 06:35.68 the keyboard, it's talking to the mouse, 06:36.77 it's talking to the network, if it's Zoom 06:38.57 it's talking to the camera, it's talking 06:40.37 to the microphone – it's crazy. So this 06:42.59 approach to programming just 06:44.15 doesn't work well for the things we 06:45.59 actually build. It also infected 06:48.17 programming language theory. So, if the 06:50.21 program is a static artifact, what does 06:51.59 that mean? It means we're mostly going to 06:53.27 concentrate on algebraics, so we're going 06:54.46 to talk about syntax and semantics and 06:55.79 very little else. 06:57.05 There's going to be no concern really 06:58.49 for *pragmatics* – and what I mean here by 06:59.80 pragmatics is what it's actually like to 07:01.55 interact with your programming 07:02.74 environment, and this leads to 07:04.37 mathematics envy and a real fixation on 07:06.46 theorem proving. 07:07.96 So, to give an example of what happens 07:09.95 when people actually concentrate on a 07:11.80 part of programming and make progress, 07:13.01 we're going to take a quick tour through 07:14.74 syntax and semantics. We're going to do a 07:17.02 simple transformation here. We've 07:18.40 got 1 through 4, we want it to be 07:19.67 2 through 5. We want it to be 07:21.17 relatively general. I've written some 07:22.90 example programs that do this in a 07:25.37 variety of programming languages. The 07:28.07 first one here is it is in ARM64 machine 07:32.39 language, because my laptop happens to 07:34.18 run this processor now. As you can 07:35.87 plainly see from this code, it starts off 07:38.57 Oh wait! Does everyone here understand 07:40.15 ARM64? Okay, all right, it's a little easier 07:42.46 if I do this, so you can see where the 07:43.79 instructions are within these different 07:45.11 words. This is a cool instruction set. 07:47.39 It's not like x86. [In x86], all the 07:49.18 instructions are different lengths. In 07:50.45 ARM64, they're all the same length because 07:51.77 it's RISC, but we'll do it in assembly 07:54.29 language – it'll be easier, right. So we 07:56.15 we'll start with this label here, add one, 07:57.83 and we've got the signature of what it 07:59.08 would be as a C program after that. 08:00.95 What am I actually doing when I write 08:02.33 this program? Well, the first thing I'm 08:03.58 doing is moving things from registers 08:05.51 onto the stack. Why am I doing this? I'm 08:08.27 doing this because the ABI says I have 08:09.77 to. No other reason. It's nothing to do 08:11.62 with my problem. And then I want to call 08:13.37 `malloc` because I have to allocate some 08:14.74 memory to return the new, you know, array, 08:16.90 with the new stuff in it. So what I have 08:18.52 to do... 08:19.30 I'm doing crazy things. Look down here, 08:21.89 you see the registers are all called 08:23.45 with X names? That's because 08:24.83 there's 64-bit registers at X, but I get 08:26.51 down here to set up for `malloc` and now 08:27.71 I'm using W names. Why? Well, I just have 08:29.51 to know that I have to do something 08:30.46 special if it's a 32-bit number, and 08:32.44 it'll mask off 32 of the bits and still 08:34.31 work great. Now I have to stuff things 08:37.07 in these registers. I have to multiply 08:38.57 one of the variables. Do I use a multiply 08:39.94 for that? 
No, I'm using it with a bit 08:41.38 shifting operation because that's what's 08:42.82 faster on this processor. And then I call 08:44.87 `malloc` , and I get back what I want. Great. 08:46.91 Now, I want a loop. This is what a loop 08:48.76 looks like. Notice we're on the second 08:49.91 page, and all I'm doing is incrementing 08:52.25 some numbers. So, I come through and 08:54.23 I do a comparison. Okay, is this register 08:55.91 that I put this value into zero? If 08:57.71 it's less/equal, then I jump to return. You 08:59.15 can't see return, it's on another page. 09:00.47 There's a *third page*. 09:02.09 So, I move zero into this other register 09:04.25 and I go through here and bang bang... I'm 09:06.23 I'm not going to bore you with the whole 09:07.67 thing. I'm bored just talking about it. 09:09.35 Imagine how I felt writing it! 09:11.15 And then at the end I have to do the 09:12.53 reverse of the things I did at the 09:13.79 beginning to set everything back into 09:15.59 the registers from the stack where I 09:16.85 saved them. Why? Because I have to have 09:18.23 the right return address to give this 09:19.61 thing back. I have to do this like a 09:20.99 voodoo incantation, because it's what the 09:22.55 processor wants. Nothing to do with the 09:24.41 problem I'm trying to solve. How can we 09:26.09 do it better? Hey look – it's C)! This is 09:28.25 exactly the same program. Many fewer 09:30.23 lines of code. However, it has a load of 09:32.50 problems that have nothing to do with 09:33.71 what I'm trying to accomplish as well. 09:34.91 For one, I have to pass two things. I have 09:37.31 to pass the length of the array separate 09:38.99 from the array. Why? Because there's no 09:41.38 sequence type in C. Great work guys! 😆 So 09:43.67 then, from there, I want to return this 09:45.23 value. This modified sequence. And what do 09:46.79 I have to do? Well, I had to do this in 09:48.41 assembly too, but this is crazy. I have to 09:49.85 allocate memory give it back and then 09:51.29 hope that the other guy is going to free 09:52.67 that memory later. This has nothing to do 09:54.88 with what I'm trying to accomplish. I 09:56.63 want to increment each of these numbers. 09:57.71 I do it with a `for` loop that counts from 09:59.81 one to the length of the array. Is 10:01.43 counting to the length of the array 10:02.38 relevant? Right, no. No, this is not 10:04.37 relevant. In fact, essentially one line 10:07.31 of code of this whole thing – the actual 10:08.99 increment – is the only thing that 10:10.49 actually matters. On the other hand, I can 10:13.00 complement C as a portable assembly 10:14.93 language because you see I don't have to 10:15.88 do the stack nonsense by hand, and 10:17.75 instead of telling it that it's four 10:19.85 bytes wide, I can actually use `sizeof` 10:21.76 to know that but that's about the 10:23.44 only way it's really an improvement. Now 10:25.19 let's look at Lisp. Note that Lisp is 10:26.75 about 10 years older than C. Here I have 10:29.32 a sequence abstraction. I have four 10:31.07 numbers and I can use a 10:32.57 higher order function to go over it and add 10:33.94 one to each of them. This is a tremendous 10:35.38 improvement by going back in time. 10:38.16 But we can do better. We can do better 10:38.87 than this notation. We can go to 10:41.09 Haskell. So, in Haskell what do we have? 10:43.07 This is really lovely. We have this thing 10:45.11 where we auto-curry the `(+ 1)` , and we 10:47.50 get a function that adds one. 
This is 10:49.25 getting pretty concise. Can anybody here 10:50.81 quickly name for me a language in which 10:52.37 this exact operation is even more 10:54.29 concise? I'll give you a moment. 10:56.03 I hear APL, and indeed APL! So here we 11:00.94 have [rank] polymorphism. I have 11:04.06 a single number – 11:06.41 a scalar – and I have a set of numbers. 11:07.67 Note that there's no stupid junk. I don't 11:09.35 have to put commas between everything. I 11:11.38 don't have to wrap anything in any 11:12.76 special [delimiters] or anything of this 11:14.09 nature. I just say add one to these 11:15.88 numbers, and I get what I was after. So if 11:17.44 we start from the assembly language and 11:20.38 we come to the APL, which is – you know – 11:21.65 again – you know – like eight years older 11:23.44 than C, we find that syntax and semantics 11:25.61 can take us a long way. 11:27.88 But there are other things that we care 11:30.05 about where no one has put in this much 11:32.03 effort. And one of those things is state 11:34.06 and time. Almost every programming 11:35.44 language doesn't do anything to help us 11:37.85 with managing state over time from 11:40.37 multiple sources. There are some notable 11:42.41 exceptions. I will talk about them now. So, 11:46.56 because Rich Hickey – he really cared about 11:47.20 concurrency – he included immutable data 11:47.21 structures. So now you don't have 11:49.06 constant banging on the same things and 11:50.87 crushing each other's data. This is very 11:53.32 helpful. What else? He's got `atom` s. These 11:54.71 are synchronized mutable boxes with 11:56.81 functional update semantics. Everybody 11:58.25 uses these. These are great. He has also a 11:59.87 full Software Transactional Memory 12:01.06 implementation that frankly nobody uses, 12:02.63 but it's still great. It just has a more 12:05.50 complicated API, and the lesson from this 12:07.06 probably is: if you want people to do the 12:08.81 right thing, you have to give them an API 12:10.73 simple enough that they really will. 12:12.29 Then on top of this, we have `core.async` . 12:13.49 Now, I have less nice things to 12:14.63 say about `core.async` . I like 12:16.31 Communicating Sequential Processes, the 12:18.94 way everybody else does, but this is 12:21.23 implemented as a macro and as a 12:22.31 consequence when it compiles your CSP 12:23.93 code you end up with something that you 12:25.31 can't really look into anymore. Like, you 12:27.76 can't ask a channel how many things are 12:30.41 in that channel. You can't really know 12:32.15 much about what's happening there. And I 12:33.94 would say that in the JVM, I agree with 12:35.26 what rich said the year before he 12:36.65 created `core.async` , which is that you 12:37.91 should just probably use the built-in 12:39.41 concurrent queues. 12:41.21 Now, in ClojureScript, of course, 12:43.31 these things were more useful because 12:44.87 everyone was trapped in callback hell. 12:46.43 We'll see what happens moving on, now 12:48.88 that we have `async` /`await` in JavaScript. 12:49.97 Moving on to another implementation 12:51.41 of CSP, Go. Go actually did something good 12:53.38 here, right, they – and I'm not going to say 12:55.43 much else that's great about Go – is The Go team includes several all-time great programmers. I respect them all. 
But I do feel that they had a chance to be more ambitious than they were with Go, which – with the weight of their reputations and the might of Google behind it – could have shifted the culture in a better direction. 12:59.62 they built a fantastic runtime for 13:01.91 this stuff. It's really lightweight, it 13:04.06 does a great job. The bad news is that Go 13:04.07 is a completely static language, so even 13:04.85 though you should be able to go in and 13:07.12 ask all of these questions during 13:09.29 runtime while you're developing from 13:11.26 within your editor, like a civilized 13:13.43 person, you can't. You end up with a 13:14.87 static artifact. Well, that's a bummer. 13:15.88 Okay. 13:17.26 And I would say, actually, before I 13:18.71 move on, that anytime you have this 13:20.56 kind of abstraction where you have a 13:22.31 bunch of threads running, when you have 13:22.32 processes doing things, you really want 13:23.21 `ps` and you really want `kill` . And, 13:24.82 unfortunately, neither Go nor Clojure can 13:26.44 provide these because their runtimes 13:27.41 don't believe in them. The JVM 13:28.49 runtime itself thinks that if you kill a 13:30.59 [thread] you're going to leak some 13:32.56 resources, and that the resources you 13:34.49 leak may include locks that you need to 13:35.87 free up some other threads that are 13:38.03 running elsewhere, so they've just 13:40.37 forbidden the whole thing. And in Go you 13:41.21 have to send it a message, open a 13:42.76 separate Channel, blah blah blah. 13:44.50 Erlang, on the other hand, gets almost 13:46.43 everything right in this area. In 13:47.56 this situation, they've implemented the 13:49.31 actor model, and they've done it in a way 13:50.69 where you have a live interactive 13:52.37 runtime, and because they're using shared 13:54.11 nothing for their state and supervision 13:55.55 trees, you can kill anything anytime and 13:57.05 your system will just keep running. This 13:58.55 is fantastic. This is great. Why doesn't 13:59.93 everything work like this? It also 14:02.03 comes with introspection tools, like 14:03.65 Observer, that should make anyone using 14:05.21 any other platform to build a 14:06.88 long-running server thing fairly jealous. 14:08.32 Now, when I say this, I'm not telling you 14:10.12 you should use Erlang. What I'm telling 14:11.32 you is whatever you use should be at 14:13.31 least as good as Erlang at doing this, 14:14.50 and if you're developing a new language – 14:16.19 for God's sake – please take notice. 14:18.23 I can talk now about something that 14:19.91 I worked on with my colleague Matt 14:21.76 Huebert. This is something that I 14:23.21 particularly like. This is a hack in The cells project was Matt's baby. He did almost all the coding. I worked with him as a mentor because I had already implemented a number of dataflow systems. 14:24.94 ClojureScript. We call it cells, and it 14:28.19 takes spreadsheet like dataflow and adds 14:30.35 it into ClojureScript. This resulted in a 14:31.61 paper that was delivered at the PX16 14:33.05 workshop at ECOOP in Rome in 2016. 14:35.09 You've got things like this, right. So, you 14:38.93 say: here's an interval, every 300 14:41.15 milliseconds give me another random 14:43.79 integer, and it does. And then you can 14:45.47 have another thing refer to that, in this 14:48.23 case `cons` ing them on, and now we build a 14:49.97 history of all the random integers that 14:51.29 have happened. What else can you do? 
Well 14:52.79 you can refer to that, and you can `(take 10)` 14:54.76 with normal Clojure semantics, and 14:56.21 then `map` that out as a bar chart. What do you 14:57.71 get? A nice graph. A graph that 14:58.97 moves in real time. Or we can move on to 15:00.53 this. We added sort of Bret Victor-style 15:02.03 scrubbers into it so that you could do 15:03.88 these kinds of things. I'll show you 15:05.93 instead of telling you, because it's 15:08.09 obvious if you look at it what's going 15:10.12 on here. We did this partially to 15:12.17 show people that you can just really 15:14.50 program with systems that have all those 15:15.71 features that Bret was demoing. 15:16.73 Source code's still out there – anybody 15:17.99 wants to do that, you can do that. We 15:20.32 moved on from that to maria.cloud, which Maria was a joint project of Matt, Dave Liepmann, and myself. We wanted a good teaching environment that requires no installfest for ClojureBridge. 15:21.47 takes all of that code we wrote for 15:23.32 cells and turns it into a notebook. We 15:25.85 actually did this for learners. Take a 15:27.41 look at this. This is a computational 15:29.38 notebook. It has the cells, it gives you 15:31.25 much better error messages than default 15:32.44 Clojure, and so on. We used this to teach. 15:34.06 It was a great experience, and currently – 15:36.41 this year – thanks to Clojurists Together, we 15:38.38 have some additional funding to bring it 15:41.15 up to date and keep it running. I 15:42.53 encourage everybody to check it out. The 15:44.09 last thing here on this list is the 15:46.43 propagators. The propagators come 15:48.23 from Sussman – this is Sussman's 15:49.49 project from around the same time that 15:50.93 actors were happening and Alan Kay was first 15:53.26 getting interested in Smalltalk. This 15:54.65 was a really fertile scene at MIT in the 15:56.21 early 70s. It was actually the project 15:58.61 he originally hired Richard Stallman, of 16:00.47 the GNU project, as a grad student, to 16:02.44 work on, and then later did some 16:03.94 additional work with Alexey Radul, which 16:05.56 expanded the whole thing. 16:07.67 I can't tell you all about it here. 16:09.47 There's just too much to say, but I can 16:11.09 tell you there was a fantastic talk at 16:13.12 the 2011 strange Loop called We Really 16:14.99 Don't Know How to Compute!, and I 16:16.67 recommend that you watch that when you 16:18.82 get out of Strange Loop. Just go home and 16:20.56 watch that talk. It's amazing. A side 16:22.00 thing is that the propagator model was 16:23.93 used by one of the grad students at MIT 16:25.18 at the time to make the very first 16:26.38 spreadsheet. VisiCalc was based on this 16:28.18 model. This is a really useful 16:30.11 abstraction that everyone should know 16:32.56 about. It's data flow based, it does truth 16:33.94 maintenance, and it keeps provenance of 16:35.21 where all of the conclusions the truth 16:37.55 maintenance system came from, which means 16:39.91 it's probably going to be very valuable 16:41.74 for explainable AI later. There are a number of other approaches I really like, but which I didn't have time to get into here. FrTime, from the Racket community, is great. In terms of formalisms for reasoning about this sort of thing, I really like the Π-calculus. 16:44.03 We'll move to another area where 16:46.37 there's been even less progress. 16:47.87 Now we're getting to the the absolute 16:49.31 nadir of progress here, [and] that's in 16:50.50 program representation. 
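The spreadsheet-style dataflow behind cells and the propagators can be sketched in a few lines. This is only a toy illustration of the core idea, that dependent values recompute when their inputs change; it is not the ClojureScript cells implementation, and it has none of the propagator model's truth maintenance or provenance tracking. The `Cell` and `FormulaCell` names are invented for the example.

```python
class Cell:
    """A mutable value that notifies dependent cells when it changes."""
    def __init__(self, value=None):
        self.value = value
        self.dependents = []

    def set(self, value):
        self.value = value
        for dep in self.dependents:
            dep.recompute()

class FormulaCell(Cell):
    """A cell whose value is a function of other cells, spreadsheet-style."""
    def __init__(self, fn, *inputs):
        super().__init__()
        self.fn, self.inputs = fn, inputs
        for cell in inputs:
            cell.dependents.append(self)
        self.recompute()

    def recompute(self):
        self.value = self.fn(*(c.value for c in self.inputs))
        for dep in self.dependents:
            dep.recompute()

a = Cell(1)
b = FormulaCell(lambda x: x + 1, a)
c = FormulaCell(lambda x: x * 10, b)  # depends on b, which depends on a
a.set(5)
print(b.value, c.value)  # 6 60 -- both recomputed, nothing re-run by hand
```

A real system adds glitch avoidance, deduplication of diamond-shaped dependencies, and, in the propagator case, a record of where each conclusion came from.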
Let's look at 16:52.55 that punch card again: 16:54.35 80 columns, there it is. 16:57.11 Now look at this. This is the output of a 16:58.43 teletype. Notice that it is fixed width 16:59.62 and approximately 80 columns. Notice that 17:00.94 the fonts are all fixed with. 17:02.38 This is the teletype in question. 17:04.85 This looks like it should be in a museum, 17:06.82 and it should be in a museum, and – in fact – 17:09.11 is in a museum. 17:11.87 We got these. So, this is the terminal in 17:13.54 which I did a lot of hacking on that VAX 17:16.42 that you saw earlier (when I wasn't 17:17.75 forced to use punch cards), and a lot of 17:19.37 that was in languages like VAX Pascal) – 17:22.54 yeah – but also Bliss, which was pretty 17:25.42 cool. So you'll notice that this is a 17:26.99 VT100 terminal. And all of you are 17:28.01 using machines today that have terminal 17:29.45 emulators that pretend to be this 17:32.39 terminal; that's why they have VT100 17:34.49 escape codes, because those escape codes 17:36.28 first shipped on this terminal. Now we'll 17:39.35 move on to another terminal. This is the 17:41.02 one that I used when I was doing all of 17:43.07 my early Unix hacking back in the 80s. 17:45.35 This is called an ADM-3A. Now, by 17:46.97 applause, how many of you use an editor 17:49.66 that has vi key bindings? Come on! Yeah, 17:51.52 all right, yeah. So then you might be 17:52.90 interested in the keyboard of the ADM-3A, 17:55.07 which was the one that Bill Joy had at 17:58.31 home to connect to school through a 18:00.11 modem while he was writing vi. So here it 18:03.52 is. Note the arrow keys on the *h-j-k-l*. 18:05.51 They are there because those are the 18:07.07 ASCII control codes to move the roller and the 18:08.69 printhead on the old teletype that you 18:10.61 saw a moment ago. So you'd hit control 18:12.89 plus those to control a teletype. We used to use CTRL-h to back up over printed characters to then type a sequence of dashes as a strikethrough on these old printers. We also used the same trick on display terminals to make fancy spinning cursors. 18:17.15 It happened to have the arrow keys, he 18:18.95 used them. Look where the *control* key is. 18:21.89 For all you Unix people, it's right next 18:23.93 to the *a*. To this day, on this 18:25.66 supercomputer here, I bind the *caps lock* 18:27.77 key to *control* because it makes my life 18:29.63 easier on the Unix machine that it is. 18:31.31 Look up there, where the *escape* key is, by 18:32.99 the *q*. That's why we use *escape* to get 18:35.15 into command mode in vi, because it was 18:37.31 easily accessible. Now scan across the 18:38.81 top row just right of the *0*. What's 18:41.45 that? The unshifted *** is the *:*. 18:43.37 That's why [it does] what it does in vi, 18:45.40 because it was right there. And now the 18:47.27 last one, for all the Unix people in the 18:49.31 audience, in the upper right hand corner 18:51.28 there's a button where when you hit 18:53.81 *control* and that button, it would clear 18:55.97 the screen and take the cursor to the 18:58.07 home position. If you did not hit *control*, 18:59.33 instead hit *shift*, you got the *~*. 19:00.52 Notice tilde is right under home. If 19:01.54 you're wondering why your home directory 19:02.93 is tilde whatever username, it's 19:04.31 because of this keyboard. 19:07.97 Now here is Terminal.app on my mega 19:09.71 supercomputer. Notice 80 Columns of fixed 19:11.21 width type. 
Notice that when I look at 19:13.19 the processes they have *ttys* – that stands 19:14.99 for teletype. 19:15.00 This machine is cosplaying as a PDP-11. 19:19.54 Now, whenever I get exercised about this, 19:23.45 and talk about it, somebody sends me this 19:25.78 blog post from Graydon Hoare. He's 19:27.77 talking [about how] he'll bet on text. He 19:29.57 makes good arguments. I love text. I use 19:35.73 text every day. Text is good! The thing 19:35.74 about it, though, is that the people who 19:42.40 send me this in support of text always 19:45.11 mean text *like this* – text like it came 19:46.54 out of a teletype – and never text like 19:48.04 Newton's Principia, never text *like this* 19:50.39 from Wolfgang Weingart. That is, these 19:52.43 people don't even know what text is 19:55.25 capable of! They're denying the 19:56.93 possibilities of the medium! 19:58.90 This is how I feel about that. I've 20:01.01 promised Alex I will not say anything 20:02.69 profane during this talk, so you will be 20:05.27 seeing this 💩 emoji again. 20:07.66 The reason I disagree with this position 20:08.87 is because the visual cortex exists, okay? 20:10.49 So this guy, this adorable little 20:12.65 fella, he branched off from our lineage 20:14.69 about 60 million years ago. Note the 20:16.66 little touchy fingers and the giant eyes, 20:18.47 just like we have. We've had a long time 20:21.28 with the visual cortex. It is very 20:23.09 powerful. It is like a GPU accelerated 20:25.90 supercomputer of the brain, whereas the 20:28.37 part that takes in the words is like a 20:30.11 very serial, slow, single-thread CPU, and I 20:32.63 will give you all a demonstration right 20:34.07 now. 20:36.65 Take a look at this teletype-compatible 20:38.87 text of this data and tell me if any 20:42.47 sort of pattern emerges. Do you see 20:44.57 anything interesting? 20:46.31 Here it is plotted X/Y. Your brain knew 20:48.47 this was a dinosaur before you knew that 20:50.27 your brain knew this was a dinosaur. This dataset is Alberto Cairo's Datasaurus. 20:50.28 That is how powerful the visual 20:51.11 cortex is, and there are loads of people 20:53.81 who have spent literally hundreds of 20:56.45 years getting very good at this. Data 20:58.66 visualization. If I gave you a table 20:59.99 talking about the troop strength of 21:03.35 Napoleon's March to and from Moscow, 21:05.09 you'd get kind of a picture. But if you 21:06.49 look at it like this, you know what kind 21:09.28 of tragedy it was. You can see right away. 21:11.39 This was 175 years ago, and we're still 21:12.77 doing paper tape. 21:14.57 Graphic designers – they know something. 21:16.31 They know a few things. For instance, they 21:18.52 know that these are all channels. These 21:20.45 different things: point, line, plane, 21:22.61 organization, asymmetry – that these things 21:24.11 are all channels that get directly to 21:25.73 our brain, and there is no need to eshew 21:28.54 these forms of representation when we're 21:31.78 talking about program representation. 21:33.71 I recommend everyone in this audience 21:35.69 who hasn't already done so, go just get 21:37.25 this 100 year old book from Kandinsky 21:38.69 and get a sense of what's possible. 21:40.61 Here's one of his students working on 21:42.59 some notation. Look how cool that is! Come 21:44.93 on! All right, so another thing with text 21:47.14 is that it's really bad at doing graphs 21:49.31 with cycles, and our world is full of 21:50.57 graphs with cycles. 
Here's a Clojure 21:52.13 notation idea of the the taxonomy of 21:54.35 animals, including us and that cute little 21:56.02 tarsier. And it works fine because 21:57.35 it's a tree, and trees are really good at 21:59.81 containment – they can do containment in a 22:02.45 single acyclic manner. Now this 22:04.07 sucks to write down as text. This is the 22:05.57 Krebs cycle. Hopefully, all of you learned 22:07.43 this at school. If not maybe read up on 22:09.52 it. 22:10.90 If you imagine trying to explain this 22:13.61 with paragraphs of text you would never 22:15.40 get anywhere. Our doctors would all fail. 22:17.75 We would all be dead. So instead, we draw 22:21.11 a picture. We should be able to draw 22:23.63 pictures when we're coding as well. 22:25.07 Here's the Periodic Table of the Elements. Look how 22:27.28 beautiful this is. This is 1976. We've 22:28.97 got all these channels working together to 22:30.59 tell us things about all these these 22:32.02 different elements, how these elements interact 22:33.59 with each other. 22:35.51 Another area that we've pretty much 22:36.71 ignored is *pragmatics*, and what I mean by 22:37.97 that – I'm borrowing it from linguistics 22:39.77 because we've borrowed syntax and 22:41.93 semantics from linguistics – pragmatics is 22:43.49 studying the relationship between a 22:45.35 language and the users of the language, 22:47.51 and I'm using it here to talk about 22:48.52 programming environments. 22:49.90 Specifically, I want to talk about 22:51.64 interactive programming, which is I think 22:53.14 the only kind of programming we should 22:55.01 really be doing. Some people call it live 22:56.57 coding, mainly in the art community, and 22:58.43 this is when you code with what Dan 22:59.63 Ingalls refers to as *liveness*. It is the 23:01.13 opposite of batch processing. Instead, 23:02.93 there is a programming environment, 23:04.73 and the environment and the program are 23:06.28 combined during development. So what does 23:07.61 this do for us? Well, there's no compile 23:09.52 and run cycle. You're compiling inside 23:11.93 your running program, so you no longer 23:13.31 have that feedback loop. It 23:16.37 doesn't start with a blank slate and run 23:19.07 to termination. Instead, all of your 23:20.57 program state is still there while 23:22.01 you're working on it. This means that you 23:24.04 can debug. You can add things to it. You 23:26.99 can find out what's going on, all while 23:29.14 your program is running. 23:31.07 Of course, there's runtime introspection 23:33.23 and failures don't halt the 23:34.61 program. They give you some kind of 23:36.16 option to maybe fix and continue what's 23:37.73 happening now. This combination of 23:39.16 attributes, I would say, is most of what 23:41.21 makes spreadsheets so productive. 23:43.49 And it gives you these incredibly short 23:44.87 feedback loops, of which we'll now have 23:46.90 some examples. If you're compiling some 23:48.95 code, say, in Common Lisp, you can compile 23:50.51 the code and disassemble it and see 23:52.37 exactly what you got. Now the program is 23:54.23 running. The program is alive right now, 23:56.09 and I'm asking questions of that runtime. 23:58.49 And I look at this and I say, okay, 23:59.81 36 bytes – that's too much – so I'll 24:01.66 go through and I'll add some some, you 24:03.47 know, optimizations to it, recompile, 24:06.23 16 bytes that's about as many 24:07.43 instructions as I want to spend on this. 
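For readers without a Common Lisp image handy, the shape of that ask-the-running-program workflow can be approximated in a stock Python REPL with the standard `dis` module: define a function in the live session, look at the bytecode it compiled to, redefine it, and look again. It is far shallower than SBCL's native-code disassembly, but the feedback loop is the same.

```python
import dis

def add_one(xs):
    return [x + 1 for x in xs]

dis.dis(add_one)  # inspect the bytecode of the function in the live session

# Redefine in the same session and inspect again: no separate compile/run
# step, because the running image is the thing being edited.
def add_one(xs):
    return list(map(lambda x: x + 1, xs))

dis.dis(add_one)
```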
24:10.78 so I know a bunch of you are probably 24:12.89 allergic to S-expressions. Here's Julia. 24:14.45 You can do exactly the same thing in 24:17.51 Julia. Look at this. You get the native 24:19.07 code back for the thing that you just 24:20.87 made, and you can change it while it's 24:22.43 running. A lesser form of livecoding is embodied in PHP. We could spend an hour discussing all the weird, inconsistent things about that language, but I'd argue that the short feedback loops it provides are why so much of the internet still runs on it today (Facebook, Wikipedia, Tumblr, Slack, Etsy, WordPress, &c). 24:24.59 Now what about types? This is where half 24:24.60 of you storm off in anger. So, 24:25.90 I'm going to show you this tweet, and I 24:28.13 wouldn't be quite this uncharitable, but 24:29.93 I broadly agree with this position. 24:31.19 It's a lot of fun like. I have 24:32.81 been programming for 45 years. I have 24:34.01 shipped OCaml. I have shipped Haskell. I 24:35.39 love Haskell, actually. I think it's great. 24:35.40 But I would say that over those many 24:37.66 decades, I have not really seen the 24:39.83 programs in these languages to have any 24:42.40 fewer defects than programs in any other 24:43.90 programming language that I use, modulo 24:46.01 the ones with really bad memory 24:48.77 allocation behavior. 24:51.28 And there has been considerable 24:52.97 empirical study of this question, and 24:55.31 there has been no evidence. It really I was going to do a little literature review here to show that development speed claims for dynamic languages and code quality/maintenance claims for static languages appear to have no empirical evidence, but Dan Luu has already done a great job of that, so I'll just link to his page on the topic: “[U]nder the specific set of circumstances described in the studies, any effect, if it exists at all, is small. [...] If the strongest statement you can make for your position is that there's no empirical evidence against the position, that's not much of a position.” 24:56.87 doesn't seem to matter. So if you like 24:59.14 programming in those languages, that's 25:01.90 great! I encourage you to do it! You 25:03.35 should program it whatever you enjoy, but 25:05.02 you shouldn't pretend that you have a 25:07.01 moral high ground because you've chosen 25:08.75 this particular language. And I would say 25:10.31 really that if what you care about is 25:12.16 systems that are highly fault tolerant, 25:14.45 you should be using something like 25:16.37 Erlang over something like Haskell 25:18.47 because the facilities Erlang provides 25:19.78 are more likely to give you working 25:21.04 programs. Imagine that you were about to take a transatlantic flight. If some engineers from the company that built the aircraft told you that they had not tested the engines, but had proven them correct by construction, would you board the plane? I most certainly would not. Real engineering involves testing the components of a system and using them within their tolerances, along with backup systems in case of failure. Erlang's supervision trees resemble what we would do for critical systems in other engineering disciplines. None of this is to say that static types are bad, or useless, or anything like that. The point is that they, like everything else, have limitations. If I'd had more time, I would have talked about how gradual typing (e.g. 
Typed Racket, TypeScript, &c) is likely an important part of future languages, because that approach allows you to defer your proofs until they can pay for themselves. 25:22.85 You can throw fruit at me – rotten fruit – 25:24.47 at me later. You can find me in the 25:26.45 hallway track to tell me how wrong I am. 25:28.43 So, I've said that, but I'll also 25:30.23 show you probably the most beautiful 25:32.51 piece of [code] that I've ever seen. 25:33.64 Like, the best source code in the world. 25:35.33 And that's McIlroy's *Power Serious*, which 25:37.19 happens to be written in Haskell. So, this 25:38.75 is a mutually recursive definition of 25:38.76 the series of sine and cosine in two 25:40.13 lines of code. I want to cry when I look 25:42.11 at this because of how beautiful it is. 25:43.25 But that has nothing to do with software 25:44.63 engineering. Do you understand what I'm 25:47.33 saying? There's a different question. The 25:48.89 beauty of the language is not always 25:50.87 what gets you to where you need to go. 25:52.13 I will make a an exception here for 25:53.87 model checkers, because 25:55.97 protocols are super hard! It's a good 25:57.52 idea to try to verify them I've used 25:59.02 Coq and Teapot [for example] for these kinds of 26:00.95 things in the past, and some systems do 26:02.14 have such a high cost of failure that it 26:03.64 makes sense to use them. If you're 26:05.14 doing some kind of, you know, horrible 26:07.01 cryptocurrency thing, where you're likely 26:08.75 to lose a billion dollars worth of 26:11.09 SomethingCoin™, then, yeah, you 26:14.39 maybe want to use some kind of verifier 26:15.83 to make sure you're not going to screw 26:17.33 it up. But, that said, space 26:18.71 probes written in Lisp and FORTH) 26:20.21 have been debugged while off world. Had I had more time, I would have done an entire series of slides on FORTH. It's a tiny language that combines interactive development, expressive metaprogramming, and tremendous machine sympathy. I've shipped embedded systems, bootloaders, and other close-to-the-metal software in FORTH. 26:22.25 If they had if they had proven their 26:23.81 programs correct by construction, In fact, they *did* prove their program correct by construction. But there was still human error! 26:25.07 shipped them into space, and then found out 26:26.63 their spec was wrong, they would have 26:28.43 just had some dead junk on Mars. But what 26:30.35 these guys had was the ability to fix 26:33.47 things while they are running on space 26:34.85 probes. I think that's actually more 26:35.93 valuable. Again, throw the rotten fruit 26:37.90 later. Meet me in the hallway track. 26:40.19 I would say overall that part of 26:42.40 this is because programming is actually 26:44.75 a design discipline. It — oh, we're losing 26:46.19 somebody – somebody's leaving now probably 26:47.51 out of anger about static types. This was an improvised joke about someone leaving to eat lunch or use the bathroom or something. I've since heard that that person felt embarassed and called out by the joke, so I'd like to leave an apology here. It was meant to be funny in context! 26:48.95 As a design discipline, you find that you 26:50.93 will figure out what you're building as 26:52.43 you build it. 
You don't actually 26:54.83 know when you start, even if you think 26:57.04 you do, so it's important that we build 26:58.90 buggy approximations on the way, and I 27:00.89 think it's not the best use of your time 27:02.93 to prove theorems about code that you're 27:04.31 going to throw away anyway. In addition, 27:07.43 the spec is always wrong! It doesn't 27:08.57 matter where you got it, or who said it, 27:12.76 the only complete spec for any 27:14.87 non-trivial system is the source code of 27:16.25 the system itself. We learn through 27:18.28 iteration, and when the spec's right, it's 27:20.02 still wrong! Because the software will 27:21.71 change tomorrow. All software is 27:24.23 continuous change. The spec today is not 27:25.73 the spec tomorrow. Which leads me to 27:27.23 say that overall, debuggability is in my 27:29.26 opinion more important than correctness 27:32.14 by construction. So let's talk about 27:33.76 debugging! 27:35.51 I would say that actually most 27:37.25 programming is debugging. What do we 27:38.63 spend our time doing these 27:41.02 days? Well, we're spending a lot of time 27:43.25 with other people's libraries. We're 27:44.87 dealing with API endpoints. We're dealing 27:46.97 with huge legacy code bases, and we're 27:48.71 spending all our time like this robot 27:50.93 detective, trying to find out what's 27:52.37 actually happening in the code. And we do 27:54.16 that with exploratory programming, 27:54.17 because it reduces the amount of 27:55.66 suffering involved. So, for example, in a 27:57.64 dead coding language, I will have to run 27:58.90 a separate debugger, load in the program, 28:00.35 and run it, set a break point, and get it 28:01.31 here. Now, if I've had a fault in 28:02.75 production, this is not actually so 28:04.49 helpful to me. Maybe I have a core dump, 28:06.35 and the core dump has some information 28:07.73 that I could use, but it doesn't show me 28:08.99 the state of things while it's running. 28:11.21 Now here's some Common Lisp. Look, I set 28:12.83 this variable. Look, I inspect this 28:14.51 variable on the bottom I see the value 28:16.49 of the variable. This is *valuable* to me. 28:18.16 I like this, and here we 28:20.33 have a way to look at a whole set of 28:22.49 nested data structures graphically. We 28:24.04 can actually see things – note in 28:25.85 particular the complex double float at 28:27.28 the bottom that shows you a geometric 28:28.37 interpretation. This object inspector is called Clouseau. You can see a video about it here. 28:30.23 This is amazing! This is also 1980s 28:31.49 technology. You should be ashamed if 28:33.89 you're using a programming language that 28:35.57 doesn't give you this at run time. 28:37.31 Speaking of programming languages that 28:39.64 *do* give you this at runtime, here is a 28:43.07 modern version in Clojure. Here's somebody 28:45.95 doing a Datalog query and getting back 28:47.93 some information and graphing it as they 28:49.31 go. I will say that Clojure is slightly 28:51.64 less good at this than Common Lisp, at 28:53.02 present, in part because the Common Lisp 28:53.03 Object System (CLOS) makes it particularly easy 28:54.16 to have good presentations for different 28:56.51 kinds of things, but at least it's in the 28:58.19 right direction. 
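A pale but serviceable analogue for readers outside the Lisp world: any live Python session lets you poke at the state the program is actually holding, though only as text rather than the graphical, clickable presentations described above. The `Order` class is made up for the example.

```python
from dataclasses import dataclass, field
from pprint import pp

@dataclass
class Order:
    id: int
    items: list = field(default_factory=list)

orders = {42: Order(42, items=[{"sku": "A-7", "qty": 3}, {"sku": "B-1", "qty": 1}])}

pp(orders)            # pretty-print the live nested structure
pp(vars(orders[42]))  # inspect one object's fields in place
```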
28:59.14 As we talk about this, one of the 29:00.52 things in these kinds of programming 29:02.33 languages, like Lisp, is that you have an 29:04.31 editor and you're evaluating forms – all the 29:06.16 Clojure parameters here are going to 29:08.14 know this right off – 29:10.19 you're evaluating forms and they're 29:12.04 being added to the runtime as you go. And 29:13.43 this is great. It's a fantastic way to 29:14.69 build up a program, but there's a real 29:16.49 problem with it, which is that if you 29:18.35 delete some of that code, the thing 29:19.90 that you just evaluated earlier is still 29:20.93 in the runtime. So it would be great if 29:23.14 there were a way that we could know what 29:24.71 is current rather than having, say, a text 29:26.81 file that grows gradually out of sync 29:28.49 with the running system. And that's 29:29.69 called Smalltalk, and has been around 29:30.88 since at least the 70s. So this is the 29:32.14 Smalltalk object browser. We're 29:34.85 looking at Dijkstra's algorithm, 29:36.47 specifically we're looking at 29:37.78 backtracking in the shortest path 29:39.11 algorithm, and if I change this I know I 29:41.26 changed it. I know what's happening if I 29:43.07 delete this method the method is gone. 29:44.63 It's no longer visible. So there is a 29:45.95 direct correspondence between what I'm 29:48.40 doing and what the system knows and 29:50.57 what I'm seeing in front of me, and 29:52.01 this is very powerful. And here we have 29:53.87 the Glamorous toolkit. This is 29:56.38 Tudor Gîrba and feenk's thing. They embrace this 29:57.83 philosophy completely. They have built an 29:58.90 enormous suite of visualizations that 29:59.99 allow you to find out things about your 30:01.85 program while it's running. We should all 30:04.85 take inspiration from this. This is an 30:06.35 ancient tradition, and they have kind of 30:07.90 taken this old thing of Smalltalkers 30:10.37 and Lispers building their own tools as 30:12.40 they go to understand their own codebases, 30:13.97 and they have sort of pushed it – 30:15.16 they've pushed the pedal all the way to 30:17.63 the floor, and they're rushing forward 30:19.90 into the future and we should follow 30:21.76 them. 30:23.81 Another thing that is very useful in 30:25.19 these situations is error handling. If 30:26.81 your error handling is 'the program stops', 30:29.02 then it's pretty hard to recover. 30:31.07 But in a Common Lisp program like this – 30:33.04 this is an incredibly stupid toy example – 30:34.85 but I have a version function. I have not 30:36.04 actually evaluated the function yet. I'm 30:37.54 going to try to call it. So, what's going 30:39.28 to happen, well, the CL people here know 30:40.73 what's going to happen, it's going to pop 30:41.81 up the condition handler. So this is 30:43.07 something that – programming in Clojure – 30:43.08 I actually really miss from Common Lisp. 30:43.90 It comes up, and I have options here. I 30:45.88 can type in the value of a specific 30:47.51 function, say 'hey call this one instead' 30:49.13 for the missing function. I can try again, 30:50.99 which – if I don't change anything – will 30:52.90 just give me the same condition handler. 30:54.64 Or, I can change the state of the running 30:57.04 image and then try again. So, for example, 30:58.54 if I go down and evaluate the function 31:00.16 so that it's now defined and hit retry, 31:01.90 it just works. This is pretty amazing. We 31:02.81 should all expect this from our 31:05.09 programming environments. 
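Most mainstream runtimes have nothing equivalent to the Common Lisp condition system, but the shape of "define the missing thing and retry without restarting" can be roughly approximated in a live Python session. This is a crude sketch, not a real restart mechanism; `run_with_retry` is an invented helper.

```python
def run_with_retry(thunk, env):
    """Keep retrying `thunk`, letting the user patch the live environment."""
    while True:
        try:
            return thunk()
        except NameError as err:
            print(f"Condition: {err}. Define the missing name, then we retry.")
            fix = input("fix> ")  # e.g.  version = lambda: "1.0.0"
            exec(fix, env)        # patch the running image and loop

env = {}
exec("def report(): return 'running ' + version()", env)
print(run_with_retry(lambda: env["report"](), env))  # 'running 1.0.0' after the fix above
```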
Again, when I 31:06.23 talk about Smalltalk and Lisp, people say 31:07.61 'well, I don't want to use Smalltalk or Lisp'. I'm 31:09.64 not telling you to use Smalltalk or 31:11.45 Lisp. I'm telling you that you should have 31:13.01 programming languages that are at least 31:14.99 as good as Smalltalk and Lisp. 31:16.37 Some people, when I show them all this 31:17.81 stuff – all this interactive stuff, they're, 31:19.61 like, 'Well, what if I just had a real fast 31:21.71 compiler, man? You know I can just 31:23.45 just change and hit a key and then the 31:25.19 things that –' Well, we're back to that 31:28.01 💩 again, because if you have a fast 31:29.81 compiler you still have all the problems 31:31.97 with the blank slate/run-to-termination 31:33.64 style. Data science workloads 31:35.09 take a long time to initialize. You might 31:36.23 have a big data load and you don't want 31:37.43 to have to do that every single time you 31:38.81 make a change to your code. And the data 31:41.21 science people know this! This is why R 31:42.76 is interactive. This is why we have 31:43.97 notebooks for Python and other languages, 31:45.71 because they know it's crazy to work 31:48.23 this other way. Also, GUI State – oh my word! 31:49.54 It can be incredibly tedious to click 31:50.87 your way back down to some sub-sub-menu 31:53.38 so that you can get to the part where 31:55.13 the problem is. You want to just keep it 31:56.57 right where it is and go in and see 31:58.78 what's happening behind the scenes, and 32:00.35 fix it while it's running. Someone came up to me after the talk and described a situation where he was working on a big, fancy commercial video game. He had to play the same section of the game for 30 minutes to get back to where the error occurred each time. 😱 32:01.78 Also, you should be able to attach to 32:03.35 long-running servers and debug them 32:05.45 while they're in production. This is 32:07.13 actually good! It's scary to people who 32:08.75 are easily frightened, but it is very 32:10.78 powerful. 32:13.19 I'll say after all of this about 32:16.37 interactive programming, about escaping 32:18.40 batch mode, that almost all programming 32:20.21 today is still batch mode. And how do we 32:21.52 feel about that? I kind of feel like Licklider 32:23.09 did. Licklider funded 32:24.88 almost all of the work that created the 32:26.26 world we live in today, and Engelbart 32:27.95 built half of it, and one of the things 32:29.69 that Licklider said that I found – I 32:31.07 just love the phrase – is 'getting into 32:33.23 position to think'. That is, all of the 32:34.97 ceremony that you have to go through to 32:36.59 get ready to do your work should go away, 32:36.60 and that was their whole mission in the 32:38.99 60s. 32:40.73 We almost got there, but then we have 32:42.40 languages like C++. 32:43.78 I could say a lot of mean things 32:46.31 about C++, but I used to work at the 32:47.57 same facility that Bjarne did, and I kind 32:49.66 of know him a little bit, so I'm not 32:51.23 going to do that. Instead, 32:52.97 I'm just going to quote Ken Thompson 32:56.99 This is a really funny situation, 32:59.21 because I worked [using] some of the early C++ 33:01.31 compilers because I was 33:03.28 excited about the idea of having decent 33:05.69 abstractions in a low-level language 33:08.14 that I could use [at work]. But I will say that it 33:08.15 was never great, and that it has gotten 33:09.40 worse over time, paradoxically by adding 33:11.51 good features to the language. 
But 33:14.26 if you keep adding every feature that 33:16.13 you possibly want, you end up 33:17.69 with a language that is not in any way 33:19.66 principled. There is no way to reason 33:20.99 about it. It has too much junk in it. And 33:22.97 if you'd like to see this happening in 33:26.45 real time to another language, I 33:31.43 recommend that you read what's going on 33:33.40 in TC39 with JavaScript, where they are 33:34.85 adding every possible feature and 33:36.52 muddying an already difficult language 33:38.81 further. In all fairness, TC39 is in a terrible position. They can't remove features from the language because there's such a large corpus already in the world. At the same time, the language has a bunch of ergonomic problems that they want to fix. I wish they had frozen a primitive version of JS and added a marker at the beginning of scripts to switch out which language is used, much in the way `#lang` does in Racket. 33:40.07 So, what about Go? Well, I admire the 33:42.76 runtime and the *goroutines*, the garbage 33:44.81 collector, but it's really another punch 33:47.21 card compatible compile/run language. It 33:49.25 also shares with C++ 33:50.87 the problem that it's not a great 33:52.19 library language, because if you want to 33:53.45 write a library in Go and then use it 33:54.88 from say a C program, or whatever, you 33:57.28 have to bring in the entire go runtime, 33:58.54 which is a couple [of megabytes]. not mostly what I 34:00.04 want. So what about Rust? Well, I mean it's 34:01.61 a nice thing that Rust is a good library 34:03.59 language. I like that about it. But it's 34:04.90 also a huge missed opportunity in terms 34:06.35 of interactive programming. They just 34:06.36 went straight for the punch cards again. 34:07.61 And it's a super super complicated 34:10.01 language, so it would be nice when 34:11.51 trying to figure out which of the 40 34:12.95 different memory allocation keywords 34:15.29 you're going to use to tell it how to do 34:16.60 its thing if you could explore that 34:17.99 interactively instead of going through a 34:19.55 compile/test cycle. And another way that 34:21.40 I feel about it – I have to quote Deech 34:23.38 here – which is that you know some people 34:25.07 hate stop the world GC, I really hate 34:27.23 stop the world type checkers. If 34:29.57 it's going to take me an hour to compile 34:30.95 my thing, I just want to give up. I'm 34:32.81 going to become a carpenter or something. 34:35.03 In this family of languages, 34:36.34 I'll say that Zig is more to my taste. I 34:37.60 actually like Zig more than I like 34:40.12 Rust. This will anger all of the 34:41.99 Rustaceans. I apologize, but it is true. 34:43.31 But, Zig people – for goodness sake – why is 34:45.10 there no interactive story there either? 34:46.66 You've got this nice little language 34:47.93 that has multi-stage compilation. It can 34:49.60 learn a lot from Lisp, and it just sort 34:52.43 of ignores all that and goes straight 34:53.57 to the 1970s or before. 34:55.43 So what do future directions that don't 34:57.34 suck look like? Well, I'll give you some 34:58.73 examples that try to use some 34:59.75 of the things I've talked about as 35:01.55 underexplored areas. So, this 35:02.75 is a structure editor for Racket, which 35:04.84 is a dialect of Scheme), and it was built 35:07.01 by a fellow called Andrew Blinn, and it's 35:08.51 still Racket underneath. 
That is, it's 35:10.01 still a lot of parentheses – it's still 35:11.99 S-expressions – but when you're editing it, 35:14.56 you have this completely different 35:15.89 feeling where you're modifying this 35:17.39 living structure and it's quite colorful 35:19.13 and beautiful – probably for some of you 35:20.56 garish – but I like it. 35:22.13 And I recommend having a peek at how 35:25.67 that works, and compare it to how you're 35:27.53 editing code now. Another example that I 35:29.27 think will be more accessible to this 35:32.21 audience is this one from Leif Anderson. 35:33.17 This is also Racket, and this is doing a 35:35.27 define using pattern matching for a red 35:37.25 black tree balancing algorithm. And it is 35:39.47 an ancient practice of many years to 35:42.29 document gnarly code like this with a 35:43.67 comment block over it, but you have a 35:46.13 couple of problems: (1) the comment block 35:47.51 is ugly and not completely obviously 35:49.01 meaning what it's supposed to mean; but 35:50.45 also (2) it can grow out of sync with the 35:52.43 code itself. So Leif has made this fine 35:53.81 thing that reads the code and produces 35:55.37 these diagrams, and you can switch the 35:57.82 diagram view on or off. So this is 35:59.45 what – if we want to talk about 36:01.43 self-documenting code, I would say 36:02.56 something like this that can actually 36:04.55 show you what the code does is better 36:07.49 than what most things do. 36:09.10 In the same vein, we've got this piece. 36:11.27 This is called Data Rabbit. Data 36:13.84 Rabbit is a crazy data visualization 36:15.71 thing written in Clojure. Each one of 36:17.08 these little blocks that are connected 36:18.29 by these tubes is actually a little 36:20.51 piece of Clojure code, and they can do 36:22.67 data visualization, they can do 36:24.53 refinement, they can do all of these nice 36:26.21 things. I'm not a huge, you know, box 36:28.19 and arrow programming language guy, but I 36:29.63 think that Ryan has done great work here 36:32.27 and that everybody should take a look at 36:33.13 it. 36:34.91 There's also Clerk. I'm a bit biased 36:36.05 here. This is something I work on. This is 36:38.03 something I've been working on for the 36:39.95 last year with the team at Nextjournal, 36:42.17 but I think it is actually very good, so 36:44.63 I'm going to tell you a little something 36:46.67 about it. 36:48.17 This is what it looks like 36:49.43 when you're working with Clerk. You've got 36:51.34 whatever editor you want on one side and 36:52.97 then you've got a view onto the contents 36:54.17 of the namespace you're working on off 36:55.49 to the side. This has some special 36:57.53 properties. It means, for one thing, that 36:59.08 you can put these notebooks into version 37:01.19 control. You can ship these notebooks. 37:02.51 These can be libraries that you use. You 37:02.52 don't have this separation between your 37:03.47 notebook code and your production code. 37:05.45 They can be the same thing, and it 37:06.82 encourages a kind of literate 37:07.84 programming approach where every comment 37:09.17 along the way – or every comment block 37:11.03 along the way – is interpreted as markdown, 37:11.93 with LaTeX and other features. 37:13.01 It's a very nice way to work. I 37:14.99 encourage the Clojure people here to 37:16.06 check it out. It is of no use to you if 37:18.10 you're not a Clojure person, because it's 37:20.15 very Clojure-specific.
And I'll show you 37:21.89 a couple of other screenshots here, like 37:23.56 this, where we're doing some data science and 37:26.08 you've got – that's my Emacs on the 37:27.89 right-hand side, and I'm able to do all 37:30.23 of the things, like pretty printing data 37:31.97 structures, and inspecting them, and then 37:34.19 sending things over and seeing them in 37:35.63 Clerk. It is a very cozy way to work. 37:37.67 There's also, for instance, this example 37:38.81 where in around six lines of code I do a 37:40.37 query for some bioinformatic information 37:42.29 that shows me 37:43.91 what drugs affect what genes that are 37:45.34 known to be correlated with what 37:47.39 diseases, so we can see what drugs 37:48.77 might be interesting targets for genetic 37:50.08 disorders of differing type. Twenty 37:51.29 years ago, if you would have told people 37:53.56 they'd be able to do a single query like 37:55.73 this and find these kinds of things out, 37:57.10 they would have looked at you like you had 37:58.79 two heads, but here it is and it's no 38:00.34 code at all. Or this, which is a port of 38:02.15 Sussman's Structure and Interpretation 38:03.71 of Classical Mechanics library into 38:05.27 Clojure that you can use inside of Clerk. This is very nice work by Sam Ritchie. In addition to porting the libraries, he's working on an open edition of Sussman's textbooks using Clojure. 38:07.73 And then [you can] do things with physics – 38:09.95 real things. This is emulating a chaotic 38:11.87 system, and you can actually – you can't 38:13.43 see on this – but you can actually grab 38:15.77 sliders and move them around and change 38:17.75 the state of the system in real time. 38:18.82 It'll show you what's happening. 38:20.15 Or this. Martin here in the front row 38:23.56 wrote this. This is an example of Rule 38:25.49 30, which is a cellular automaton, and he's 38:26.56 written a viewer for it, so instead of 38:27.89 looking at 1s and 0s, you can 38:28.91 actually see the thing he's working on. 38:29.87 And the amount of code this takes is 38:31.73 almost none. 38:34.55 This is a regular expression dictionary 38:36.10 that I wrote. This thing – one of the 38:38.87 nice things about Clerk is you have all 38:40.67 the groovy visualization [and] interactive 38:42.17 things that come from having a browser, 38:43.91 but you also have all the power of 38:45.41 Clojure running on the JVM on the other 38:46.60 side. So you can do things like talk to a 38:47.93 database on the file system, which is a 38:49.19 revelation compared to what you can 38:51.34 normally do with a browser. 38:53.15 With this kind of thing you 38:55.13 can do rapid application development. You 38:57.17 can do all kinds of things, and I will 38:58.60 add that Clerk actually improves on the 38:59.69 execution semantics that you normally 39:00.95 get with Emacs and Clojure. This is inside 39:02.32 baseball for the Clojure people, sorry 39:04.49 for everybody else, but that thing I was 39:06.23 talking about – about how you can add 39:08.08 things to the running image and then 39:09.29 delete the code and then they're not 39:11.15 there and you don't know it and maybe 39:12.65 you save your program and it doesn't work 39:14.03 the next time you start – Clerk will not 39:15.34 use things that you've removed from the 39:16.84 file. It actually reports that, so you get 39:18.65 errors when you have gotten your text 39:19.97 out of sync with your running image. 39:21.53 Now, obviously, I have a huge Lisp bias.
I 39:23.51 happen to love Lisp, but it's not just 39:25.49 Lisps. There are other people doing good 39:27.89 things. This is called Hazel. This is 39:29.99 from Cyrus Omar's team. You see those 39:31.25 little question marks after the function 39:32.87 there? This is an OCaml or Elm-like 39:34.01 language, and they do something called 39:35.81 *typed holes* where they're actually 39:37.25 running their type inference interactively 39:38.75 and using it for what is, in my 39:40.06 opinion, its strongest purpose, which is 39:41.27 improving user interface. So here, when 39:42.34 you go to put something into one of 39:44.75 these typed holes, it knows what type 39:46.67 it's going to be, and it's going to give 39:48.71 you hints, and it's going to help you do 39:50.39 it, and they've taken that to build this 39:52.25 nice student interface. If you're 39:54.71 going to teach students through design 39:55.73 recipes that involve type-based thinking, 39:56.87 then you should have a thing like this 39:58.31 that actually helps them in some way, and 40:00.41 the one they've made is very good. I 40:01.79 recommend reading the papers. [Cyrus] has 40:03.34 a student called David Moon who has made 40:04.84 this. This is called Tylr. I can't really 40:06.41 show you this in a good way without 40:07.97 [many videos]. So I recommend that you go to 40:10.19 David Moon's Twitter, and you scroll 40:11.27 through and you look at some of these 40:13.49 things. It's got a beautiful 40:15.23 structure editing component that 40:16.43 prevents you from screwing up your code 40:17.51 syntactically while you're working on it, 40:18.71 and gives you advice based on type 40:20.39 information. 40:22.13 Here, this is my absolute favorite 40:23.56 from Cyrus's group. This is also by 40:25.37 David Moon who did the structure editor 40:26.81 and Andrew Blinn who did the nice editor 40:28.73 for Scheme that we saw at the beginning 40:30.17 of this section. Here we have, again, an 40:32.27 OCaml or Elm-like language, but you can 40:34.13 put these little widgets in. 40:36.65 These are called *livelits*, with the 40:37.79 syntactical affordance here [that] they 40:39.71 begin with a dollar sign. 40:41.51 He's got some data here, and the data 40:42.58 is shown as a data frame. It's 40:43.97 actually a convenient, nice-to-edit thing, 40:45.41 and it's in-line with the source code. 40:47.56 This is a thing where you can have 40:49.13 more expressive source code by allowing 40:50.99 you to overlay different views onto the 40:51.00 source code. You can also see there's a 40:52.49 slider in there, and the slider is [live]. 40:54.23 [It] immediately computes. The rest of the 40:56.08 values are immediately recomputed when 40:57.41 the slider slides in a data flow kind of 40:59.03 way. This is a great project. I hope they 41:00.23 do more of it. Here's something a little 41:02.75 crazier. This is Enso. Enso is groovy 41:05.03 because it is a functional programming 41:07.06 language that has two representations. It 41:09.82 is projectional, so it is not just this 41:11.08 kind of lines between boxes thing. 41:13.55 It's lines between boxes, and then you 41:15.10 can flip it over and see the code that 41:18.23 corresponds to those things. You can edit 41:19.79 either side and it fixes both. 41:21.34 And now we'll go on to our last example 41:22.97 from this section, which is also the 41:24.65 craziest one. And that is Hest by Ivan Reese.
41:27.10 Here we're computing factorial, 41:28.37 but we're doing it with animation, so we 41:29.69 see these values flowing through the 41:31.60 system in this way and splitting based 41:33.10 on criteria that are 41:34.60 specified in the code, and we're working 41:36.53 up to a higher and higher factorial now. 41:38.45 I look at this, and I don't say 'yeah, 41:41.45 that's how I want to program; I 41:42.89 want to spend every day in this thing', 41:44.81 but what I've learned – if nothing else – 41:47.15 over the very long career that I've had, 41:49.67 is if you see something that looks 41:50.99 completely insane and a little bit like 41:52.31 outsider art, you're probably looking at 41:53.87 something that has good ideas. So, whether 41:55.60 or not we ever want to work like this, we 41:58.01 shouldn't ignore it. 41:59.21 This was my last example for today. I had to stop because I was already slightly over time, but there are a number of other systems that I would like to have mentioned: In this talk, I stayed away from artistic livecoding systems because many programmers can't see themselves in what artists are doing. However, I would be remiss not to show you these systems: 42:01.91 I have some thank yous to do. First, 42:04.31 I'd like to thank Alex for inviting me 42:07.25 to give this talk. I'd like to thank Nextjournal 42:08.69 for sponsoring my work, including 42:10.91 the writing of this talk. And I would 42:13.25 like to thank all of you for watching! 42:15.41 Thank you very much!
true
true
true
null
2024-10-12 00:00:00
2019-02-01 00:00:00
null
null
null
null
null
null
32,607,884
https://www.bbc.com/travel/article/20220825-kath-kuni-the-himalayas-ancient-earthquake-defying-design
The Himalayas' ancient earthquake-defying design
Tarang Mohnot
# The Himalayas' ancient earthquake-defying design **In a series of Himalayan towns known for severe earthquakes, locals still honour a millennia-old building style.** In 1905, a deadly earthquake rocked the landscape of Himachal Pradesh, an Indian state in the western Himalayas. Sturdy-looking concrete constructions toppled like houses of cards. The only surviving structures were in towns where the residents had used an ancient, traditional Himalayan building technique known as *kath kuni*. On a warm Tuesday afternoon, I was headed towards one of them: Naggar Castle, which was built more than 500 years ago as the seat of the region's powerful Kullu kings, and which remained standing, unscathed, after that calamity. Officers from the Geological Survey of India were amazed by the lack of seismic damage to the castle and other kath kuni homes in the earthquake's radius. "This, at first sight, appeared unnatural on account of the apparently rather top-heavy construction of the houses… until one came to realise the natural resisting power of their timber-bonded walls," they wrote. The castle is one of the most exquisite remaining examples of the building style, but kath kuni houses have been constructed in this region for thousands of years. The design is recognisable by its layered interlocking of deodar wood (a type of Himalayan cedar) with locally sourced stone, without the use of mortar. Naggar Castle is now a hotel and tourist attraction, but its rustic walls – flat-stacked grey stones alternating with earth-toned planks of wood – are proof that some things are timeless. As a design, kath kuni is ingenious. "Deodar wood and stone create a spectacular balance and composition together," said Rahul Bhushan, architect and founder at NORTH, a Naggar-based architecture and design studio working to preserve the building technique through construction projects, workshops, artist residencies and homestays. "Stone gives weight to the structure, resulting in a low centre of gravity, and wood holds the structure together, thanks to its flexibility." The technique is perfectly suited to the Himalayas, one of the most seismically active zones in the world. Doors and windows are built small and have heavy wooden frames to lessen the stress on the openings during an earthquake. Plus, the buildings have fewer of these openings to help transfer inertial forces to the ground. On top of it all, thick slate roofs hold the whole edifice firmly in place. The words "kath kuni" are derived from Sanskrit, translating to "wooden corner". "This describes the essence of the building style," said Tedhi Singh, one of the few remaining *mistris* (masons) in Chehni – the only village in Himachal Pradesh where the houses are all kath kuni, as opposed to other villages where newer concrete houses are more common. "Take a look at the corners of any kath kuni building and you'll clearly see beams of wood interlocked together. Gaps between these layers are packed with small stones, hay and rubble. This system of intricate interlocking makes kath kuni structures remarkably flexible, allowing walls to move and adjust in case of a seismic event." Singh added that kath kuni structures have double-layered walls that act as insulators, keeping the space warm in the frigid winter months and cool in the summers. Trenches in the ground and raised beds of stone blocks strengthen the superstructure, while keeping water and snow from seeping in. 
In addition to these quake-proof qualities, kath kuni architecture is also well-adapted to the region's agrarian and communitarian style of living. Generally, the ground floor is reserved for livestock. Upper storeys are used as living quarters since they're a lot warmer, thanks to the sunlight and the rising body heat of livestock from below. "I can't imagine living in a concrete structure… they simply don't fit our lifestyle," said Mohini, who lives with her husband and daughter in a century-old stone-and-wood structure in Chachogi, a tiny village near Naggar. "Kath kuni homes are designed in a way that lets us keep our cattle loose in the open space on the bottom storey and move them inside at the time of milking or during harsh weather conditions. They are also generally built in clusters, making it easy for us to share livestock and storage space." Over time, the building technique has been passed down through generations. However, the tradition is dying as clusters of flat-roofed concrete houses are taking precedence in many villages. Several locals are even concealing their concrete homes with stone tiles and wood-finish wallpapers – desperate attempts to preserve identity as raw materials for kath kuni have become more difficult and expensive to obtain. In 1864, the British Empire established the Forest Department in India, leading to a sudden transfer of forest ownership from the locals to the state. This spurred the rampant extraction and commercial use of deodar in present-day Himachal Pradesh. In an attempt to repair the relationship between forests and local forest-dwellers, the Indian government passed the Forest Rights Act in 2006, which entitles each Himachali family to just one tree every 10 years – hardly enough wood to build a house. "Opposed to kath kuni, concrete looks jarring to the eyes because it's not in sync with the landscape. But it's not like the locals don't want to build wooden houses – they simply lack access to the required resources," said Sonali Gupta, an anthropological archaeologist and the founding director of the Himalayan Institute of Cultural and Heritage Studies. As Himachal Pradesh's traditional dwellings became expensive and unfeasible, the concrete industry gathered steam. Bricks and cement presented locals with a cheaper and quicker way to build houses. "Kath kuni structures come with higher one-time costs, and people find it hard to shell out those amounts," said Bhushan. Along with the fall in the demand for kath kuni structures, there's been a steady decline in the number of mistris who specialise in the art, coupled with a growing belief that concrete structures are more durable. However, Himachal Pradesh has undergone scores of earthquakes of magnitude 4.0 and higher in the past 100 years, and during these seismic events, concrete houses proved liable to damage. Finally, aspects of kath kuni have also become somewhat irrelevant in the context of Himachal's evolving culture and values. "Kath kuni houses have really small doors," said Mohini. "In the old days, people bowed at the entrance, for this also meant bowing before the household deity in reverence. But today, one doesn't want to bend before anyone – not even God." Despite these challenges, local organisations are trying to find ways to promote and save traditional building methods. For example, NORTH works with its clients to design projects in the kath kuni style and collaborates with local artisans for the construction. 
They are also investigating whether alternative materials such as bamboo could replace wood to make the kath kuni style more sustainable in the long-term. In addition, Bhushan is experimenting with *dhajji dewari,* another old Himalayan building technique that uses timber frames and earth infill, and is a lot more cost- and time-effective than kath kuni. And since Himachal Pradesh is a tourism-heavy state, boutique accommodations such as Neeralaya and Firdaus bolster education and appreciation of local architecture by offering tourists the opportunity to stay in kath kuni-style homes, as well as experience regional cooking and activities such as fishing and forest bathing. Even with this revived focus on the old ways, mistri Tedhi Singh worries that once smooth roads connect Chehni to the world, cement will make its way to the village, requiring him to adopt modern-day techniques. "It's quite bittersweet," he said. "The thought of good roads is like a dream but working with bricks and cement just won't be the same." As for Mohini, she is confident that her daughter will live her life in the same house that two generations before her have called home. "I will teach her how to preserve this house and make her understand that such houses can't be made again... earthquakes will come and go, but the house will live on – take care of it." *Heritage Architecture is a BBC Travel series that explores the world's most interesting and unusual buildings that define a place through aesthetic beauty and inventive ways of adapting to local environments.*
true
true
true
In a series of Himalayan towns known for severe earthquakes, locals still honour a millennia-old building style.
2024-10-12 00:00:00
2022-08-26 00:00:00
https://ychef.files.bbci…351/p0cvqj1x.jpg
newsarticle
bbc.com
BBC
null
null
28,633,814
https://popular.info/p/apples-texas-problem
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,306,669
https://www.theguardian.com/environment/2018/feb/04/carbon-emissions-negative-emissions-technologies-capture-storage-bill-gates
How Bill Gates aims to clean up the planet
John Vidal
It’s nothing much to look at, but the tangle of pipes, pumps, tanks, reactors, chimneys and ducts on a messy industrial estate outside the logging town of Squamish in western Canada could just provide the fix to stop the world tipping into runaway climate change and substitute dwindling supplies of conventional fuel. It could also make Harvard superstar physicist David Keith, Microsoft co-founder Bill Gates and oil sands magnate Norman Murray Edwards more money than they could ever dream of. The idea is grandiose yet simple: decarbonise the global economy by extracting global-warming carbon dioxide (CO2) straight from the air, using arrays of giant fans and patented chemical whizzery; and then use the gas to make clean, carbon-neutral synthetic diesel and petrol to drive the world’s ships, planes and trucks. The hope is that the combination of direct air capture (DAC), water electrolysis and fuels synthesis used to produce liquid hydrocarbon fuels can be made to work at a global scale, for little more than it costs to extract and sell fossil fuel today. This would revolutionise the world’s transport industry, which emits nearly one-third of total climate-changing emissions. It would be the equivalent of mechanising photosynthesis. The individual technologies may not be new, but their combination at an industrial scale would be groundbreaking. Carbon Engineering, the company set up in 2009 by leading geoengineer Keith, with money from Gates and Murray, has constructed a prototype plant, installed large fans, and has been extracting around one tonne of pure CO2 every day for a year. At present it is released back into the air. But Carbon Engineering (CE) has just passed another milestone. Working with California energy company Greyrock, it has now begun directly synthesising a mixture of petrol and diesel, using only CO2 captured from the air and hydrogen split from water with clean electricity – a process they call Air to Fuels (A2F). “A2F is a potentially game-changing technology, which if successfully scaled up will allow us to harness cheap, intermittent renewable electricity to drive synthesis of liquid fuels that are compatible with modern infrastructure and engines,” says Geoff Holmes of CE. “This offers an alternative to biofuels and a complement to electric vehicles in the effort to displace fossil fuels from transportation.” Synthetic fuels have been made from CO2 and H2 before, on a small scale. “But,” Holmes adds, “we think our pilot plant is the first instance of Air to Fuels where all the equipment has large-scale industrial precedent, and thus gives real indication of commercial performance and viability, and leads directly to scale-up and deployment.” The next step is to raise the money, scale up and then commercialise the process using low-carbon electricity like solar PV (photovoltaics). Company publicity envisages massive walls of extractor fans sited outside cities and on non-agricultural land, supplying CO2 for fuel synthesis, and eventually for direct sequestration. “A2F is the future,” says Holmes, “because it needs 100 times less land and water than biofuels, and can be scaled up and sited anywhere. 
But for it to work, it will have to reduce costs to little more than it costs to extract oil today, and – even trickier – persuade countries to set a global carbon price.” Meanwhile, 4,500 miles away, in a large blue shed on a small industrial estate in the South Yorkshire coalfield outside Sheffield, the UK Carbon Capture and Storage Research Centre (UKCCSRC) is experimenting with other ways to produce negative emissions. The UKCCSRC is what remains of Britain’s official foray into carbon capture and storage (CCS), which David Cameron had backed strongly until 2015. £1bn was ringfenced for a competition between large companies to extract CO2 from coal and gas plants and then store it, possibly in old North Sea gas wells. But the plan unravelled as austerity bit, and the UK’s only running CCS pilot plant, at Ferrybridge power station, was abandoned. The Sheffield laboratory is funded by £2.7m of government money and run by Sheffield University. It is researching different fuels, temperatures, solvents and heating speeds to best capture the CO2 for the next generation of CCS plants, and is capturing 50 tonnes of CO2 a year. And because Britain is phasing out coal power stations, the focus is on achieving negative emissions by removing and storing CO2 emitted from biomass plants, which burn pulverised wood. As the wood has already absorbed carbon while it grows, it is more or less carbon-neutral when burned. If linked to a carbon capture plant, it theoretically removes carbon from the atmosphere. Known as Beccs (bioenergy with carbon capture and storage), this negative emissions technology is seen as vital if the UK is to meet its long-term climate target of an 80% cut in emissions at 1990 levels by 2050, according to UKCCSRC director Professor Jon Gibbins. The plan, he says, is to capture emissions from clusters of major industries, such as refineries and steelworks in places like Teesside, to reduce the costs of transporting and storing it underground. “Direct air capture is no substitute for using conventional CCS,” says Gibbins. “Cutting emissions from existing sources at the scale of millions of tonnes a year, to stop the CO2 getting into the air in the first place, is the first priority. “The best use for all negative emission technologies is to offset emissions that are happening now – paid for by the emitters, or by the fossil fuel suppliers. We need to get to net zero emissions before the sustainable CO2 emissions are used up. This is estimated at around 1,000bn tonnes, or around 20-30 years of global emissions based on current trends,” he says. “Having to go to net negative emissions is obviously unfair and might well prove an unfeasible burden for a future global society already burdened by climate change.” The challenge is daunting. Worldwide manmade emissions must be brought to “net zero” no later than 2090, says the UN’s climate body, the Intergovernmental Panel on Climate Change (IPCC). That means balancing the amount of carbon released by humans with an equivalent amount sequestered or offset, or buying enough carbon credits to make up the difference. But that will not be enough. To avoid runaway climate change, emissions must then become “net negative”, with more carbon being removed than emitted. Many countries, including the UK, assume that negative emissions will be deployed at a large scale. But only a handful of CCS and pilot negative-emission plants are running anywhere in the world, and debate still rages over which, if any, technologies should be employed. 
(A prize of $25m put up by Richard Branson in 2007 to challenge innovators to find a commercially viable way to remove at least 1bn tonnes of atmospheric CO2 a year for 10 years, and keep it out, has still not been claimed – possibly because the public is uncertain about geoengineering.) The Achilles heel of all negative emission technologies is cost. Government policy units assume that they will become economically viable, but the best hope of Carbon Engineering and other direct air extraction companies is to get the price down to $100 a tonne from the current $600. Even then, to remove just 1% of global emissions would cost around $400bn a year, and would need to be continued for ever. Storing the CO2 permanently would cost extra. Critics say that these technologies are unfeasible. Not using the fossil fuel and not producing the emissions in the first place would be much cleverer than having to find end-of-pipe solutions, say Professor Kevin Anderson, deputy director of the Tyndall Centre for Climate Change Research, and Glen Peters, research director at the Centre for International Climate Research (Cicero) in Norway. In a recent article in the journal *Science*, the two climate scientists said they were not opposed to research on negative emission technologies, but thought the world should proceed on the premise that they will not work at scale. Not to do so, they said, would be a “moral hazard par excellence”. Instead, governments are relying on these technologies to remove hundreds of millions of tonnes of carbon from the atmosphere. “It is breathtaking,” says Anderson. “By the middle of the century, many of the models assume as much removal of CO2 from the atmosphere by negative emission technologies as is absorbed naturally today by all of the world’s oceans and plants combined. They are not an insurance policy; they are a high-risk gamble with tomorrow’s generations, particularly those living in poor and climatically vulnerable communities, set to pay the price if our high-stakes bet fails to deliver as promised.” According to Anderson, “The beguiling appeal of relying on future negative emission technologies is that they delay the need for stringent and politically challenging policies today – they pass the buck for reducing carbon on to future generations. But if these Dr Strangelove technologies fail to deliver at the planetary scale envisaged, our own children will be forced to endure the consequences of rapidly rising temperatures and a highly unstable climate.” Kris Milkowski, business development manager at the UKCCSRC, says: “Negative emissions technology is unavoidable and here to stay. We are simply not moving [to cut emissions] fast enough. If we had an endless pile of money, we could potentially go totally renewable energy. But that transition cannot happen overnight. This, I fear, is the only large-scale solution.”
true
true
true
It’s a simple idea: strip CO2 from the air and use it to produce carbon-neutral fuel. But can it work on an industrial scale?
2024-10-12 00:00:00
2018-02-04 00:00:00
https://i.guim.co.uk/img…64cddddefc8a15e6
article
theguardian.com
The Guardian
null
null
15,365,941
https://arstechnica.com/information-technology/2017/09/an-alarming-number-of-macs-remain-vulnerable-to-stealthy-firmware-hacks/
An alarming number of patched Macs remain vulnerable to stealthy firmware hacks
Dan Goodin
An alarming number of Macs remain vulnerable to known exploits that completely undermine their security and are almost impossible to detect or fix even after receiving all security updates available from Apple, a comprehensive study released Friday has concluded. The exposure results from known vulnerabilities that remain in the Extensible Firmware Interface, or EFI, which is the software located on a computer motherboard that runs first when a Mac is turned on. EFI identifies what hardware components are available, starts those components up, and hands them over to the operating system. Over the past few years, Apple has released updates that patch a host of critical EFI vulnerabilities exploited by attacks known as Thunderstrike and Thunderstrike 2, as well as a recently disclosed CIA attack tool known as Sonic Screwdriver. An analysis by security firm Duo Security of more than 73,000 Macs shows that a surprising number remained vulnerable to such attacks even though they received OS updates that were supposed to patch the EFI firmware. On average, 4.2 percent of the Macs analyzed ran EFI versions that were different from what was prescribed by the hardware model and OS version. Forty-seven Mac models remained vulnerable to the original Thunderstrike, and 31 remained vulnerable to Thunderstrike 2. At least 16 models received no EFI updates at all. EFI updates for other models were inconsistently successful, with the 21.5-inch iMac released in late 2015 topping the list, with 43 percent of those sampled running the wrong version. ## Hard to detect, (almost) impossible to disinfect Attacks against EFI are considered especially potent because they give attackers control that starts with the very first instruction a Mac receives. What's more, the level of control attackers get far exceeds what they gain by exploiting vulnerabilities in the OS or the apps that run on it. That means an attacker who compromises a computer's EFI can bypass higher-level security controls, such as those built into the OS or, assuming one is running for extra protection, a virtual machine hypervisor. An EFI infection is also extremely hard to detect and even harder to remedy, as it can survive even after a hard drive is wiped or replaced and a clean version of the OS is installed.
true
true
true
At-risk EFI versions likely put Windows and Linux PCs at risk, too.
2024-10-12 00:00:00
2017-09-29 00:00:00
https://cdn.arstechnica.…9/macbookpro.jpg
article
arstechnica.com
Ars Technica
null
null
11,358,406
http://www.nytimes.com/2016/03/25/business/dealbook/starboard-value-plans-to-oust-yahoos-board.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
39,653,744
https://www.youtube.com/watch?v=DhCBCudKJTs
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
4,192,133
http://news.cnet.com/8301-1023_3-57465433-93/facebooks-e-mail-debacle-one-bug-fix-but-rollback-impossible/?part=rss&subj=news&tag=title
CNET: Product reviews, advice, how-tos and the latest news
Jon Reed
true
true
true
Get full-length product reviews, the latest news, tech coverage, daily deals, and category deep dives from CNET experts worldwide.
2024-10-12 00:00:00
2024-10-12 00:00:00
https://www.cnet.com/a/i…t=675&width=1200
website
cnet.com
CNET
null
null
24,149,641
https://nwn.blogs.com/nwn/2020/08/fortnite-metaverse-apple-1984-tim-sweeney.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,185,207
https://arstechnica.com/gadgets/2022/02/mysterious-port-less-laptop-is-7-mm-thin/
Mysterious port-less laptop is 7 mm thin
Scharon Harding
As laptops have become thinner and lighter, there have been sacrifices along the way. Often, those sacrifices come in the area of port selection, as ultra-portable laptops keep getting bolder about omitting things like USB-A ports, 3.5 mm jacks, and SD card readers for the sake of portability and style. Whether you think that's inconvenient or incredible, take a look at the Craob X laptop. It takes minimalism to a new level—it has zero integrated ports. Spotted by My Laptop Guide on Monday, the Craob X claims to be the "world's first port-less ultrabook." Craob's website provides limited information about the Craob X laptop and nothing about the company itself. There's no release date beyond a vague "coming soon" advertisement. Overall, there's very little detail about the Craob X, making us skeptical about if or when it will be available. In fact, we can't even find mention of the company existing beyond this website. Still, the Craob X presents an interesting idea for the future of ultraportable laptops. While even the trimmest ultraportable will typically offer at least a USB-C port, the Craob X's deck is empty. And we can see why—there's barely room to fit anything there. According to Craob's website, the 13.3-inch laptop is 0.28 inches (7 mm) thin and weighs 1.9 lbs (861.83 g). That'd be pretty impressive, even compared to other lightweight clamshells. Measuring 0.58 inch (14.8 mm) thick and weighing 2.64 lbs (1,197.48 g), the Dell XPS 13 is 107.14 percent thicker and 38.95 percent heavier than the Craob X's claimed measurements. You can presumably use Bluetooth to connect peripherals to the device, but the site does not address the laptop's Bluetooth capabilities.
true
true
true
Think USB-A ports are getting rare? Count your blessings.
2024-10-12 00:00:00
2022-02-01 00:00:00
https://cdn.arstechnica.…022/02/Craob.jpg
article
arstechnica.com
Ars Technica
null
null
5,342,147
http://zeptobars.ru/en/read/how-to-open-microchip-asic-what-inside
Zeptobars.ru
null
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
19,678,254
https://www.thoughtfulcode.com/why-use-php/
Why Use PHP in 2019? - Thoughtful Code
David
Let’s get it out of the way early: **PHP is a strange and ugly language. It’s not exceptionally fast. It’s not beautiful syntactically. It’s not formulated around a clear opinion about good software development practices.** And it’s still what I write a lot of software in. The obvious question is: why? **Why use PHP today?** There are lots of good reasons for it, above and beyond personal idiosyncratic preferences. Here’s the space to cover exactly that. Why is PHP my language of choice for web development? ## What is PHP? “What is a PHP?” Or, more commonly, “what’s PHP?” In short, PHP is a programming language made for the web, built up from the C programming language, and which uses idiosyncratic HTML-like tags (or sigils) to contain its code. The PHP programming language is mostly used server-side, which means that it runs on your web server software, which is customarily going to serve HTML to your visitors. PHP initially stood for “Personal Home Page.” Because that pretty thoroughly constrained the meaning and desirability of using the language for general use, the language now stands for “PHP: Hypertext Preprocessor”. This is what’s called a recursive acronym (a name that contains the name). Nerds love them. ## What can PHP do? What is PHP used for? Basically, anything that you want to do on a web server, you can do with PHP. Make a blog? Yep. Create a full-fledged software-as-a-service application? Absolutely. Write a little script to process some data in a few seconds? PHP is great for that. Write a complicated set of scripts that accidentally becomes a successful software business? PHP is used like that a lot. If you don’t trust me, the PHP website lists the following uses: - Server-side scripting - Command-line scripting - Writing desktop applications I’m not so sure I’d encourage the last bullet point, but it is possible. But the first two are common and good reasons to use PHP in 2019. This leads to one important and unavoidable fact… ## PHP is EVERYWHERE There are a lot of reasons to know and love PHP, probably the most potent and valid of which is this: it’s used and runs EVERYWHERE the web does. Your cheap little $3 per month hosting account *may* let you run a web application in Python or Ruby if you shop carefully. But it’ll definitely run PHP. This means that you can count on it wherever you are. And because it runs everywhere, and is easy to get started with, *a lot* of very popular software is written in PHP. **WordPress** is the example that’s both largest and most familiar to me, but tools like Joomla, Drupal, Magento, ExpressionEngine, vBulletin (yep, that’s still around), MediaWiki, and more are all running PHP on the server. And there are more PHP application frameworks than you can shake a stick at as well: Symfony, Zend, Laravel, Aura, CakePHP, Yii, and even the venerable CodeIgniter. Surely you can make a list of web frameworks of some length for almost any other language. And for the commonly used web languages like Python, Ruby, or Node/JavaScript you may even be able to amass a numerically competitive list. But the sheer volume of sites running PHP is immense. WordPress proudly boasts that it powers more than 30% of the internet. You don’t even need to trust that fact to realize that *a lot* of the internet must be using PHP if that fact is even conceivably true. ## PHP has some very good qualities ### Easy Dynamism is Baked into PHP PHP makes HTML rendering and programming easier than almost any other language. So it’s pretty simple to change HTML to PHP.
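To make that concrete, here is a minimal sketch of the kind of file this implies. It is not taken from the article: the filename and the greeting are invented for illustration, and any PHP-enabled web server should render it as-is.

```php
<!-- today.php (hypothetical name): a static HTML page with one dynamic island -->
<!DOCTYPE html>
<html>
  <body>
    <h1>
      <?php
        // Everything outside the PHP tags is sent to the browser verbatim.
        // date("l") returns the current day of the week, e.g. "Friday".
        echo "Hello! Today is " . date("l") . ".";
      ?>
    </h1>
  </body>
</html>
```

Drop a file like that into the document root of a PHP-enabled server and it simply works; the next paragraph spells out that workflow.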
Just change your `file.html` to `file.php`, add a bit of dynamism inside some `<?php` and `?>` tags. Most web servers will have already been configured to take care of the rest for you. It’s so simple that almost anyone can get started without a need for much more of an understanding of programming than this. Because PHP is so friendly to cut your teeth with, a lot of bad code is written in it. Once you realize that most bad PHP is written by novice programmers, most of the remaining fear and hatred of PHP you encounter in the world comes from one other essential problem: PHP has never had a grand design with a visionary idea about why it was the perfect language for your web server. Instead, it’s the result of wide collaboration through an open process which serves as a crucible through which any good idea — and a few bad ones — must pass. ### Object-Orientation with (Great) Package Management is now the Norm in PHP And bad ideas can make it through that crucible. The most popular example is that PHP 5.3 — separately widely regarded as the first modern version of PHP — introduced the `goto` statement, which is generally either scoffed at or thought an easy source for errors. Similar bad things in PHP have resulted from the process through which the language has grown: object-orientation was first implemented as a flawed and limited concept, the standard library is full of inconsistent names and parameter ordering, and (in an example that recently got a fair amount of attention) the `::` token is called by the interpreter by the inscrutable-to-English-speakers name `T_PAAMAYIM_NEKUDOTAYIM`. But today, OOP is fully-realized in PHP. Few languages have as much Java-like OOP practice as PHP. What’s more, unlike Java, PHP has a single and widely-loved package manager, called Composer. It is very good, and so the ease of pulling in other well-written and well-maintained libraries in PHP is nothing to be trifled with. ### PHP Has Gotten a Lot Faster But those things said, PHP is evolving in interesting ways. It’s growing toward being a pretty fully-featured Java-like (for better or worse) object-oriented language. And much like Java, it’s gaining easy abstractions for functional programming — arguably the current hotness. It’s also growing a pretty awesome set of tools — PHP loves Composer, and for good reason — and a commendable effort to make all of these large open source projects in PHP work a little bit better together. Oh, and we shouldn’t forget the current hotness: the speed gains PHP has made in the PHP 7 series of releases. This is widely regarded as having been initiated by the HHVM coming out of Facebook. For a short time, there was a risk that the speed of the HHVM would fracture the PHP community. But it didn’t. Instead PHP just got so much faster that people have mostly forgotten that the HHVM exists. ## PHP has a HUGE Community; it’s Learning-Friendly If you’re deciding what cool new technology to use, a thing I think often gets too little attention is what it’s like to come to terms with the language. What’s it like to learn PHP? PHP tutorials are common and often quite good. One downside of PHP’s popularity is that you can find some not-so-good training from people who don’t know enough about what using the tool is really like, or who were last following “best practices” that were current a decade ago. But on the whole, that’s rare enough that I don’t think it should discourage you. Finding out that a practice you’ve used for a few months isn’t great is rare, and not a huge problem.
## Common Comparisons to PHP A lot of people new to programming are looking for very general face-offs of languages. So, here they come. All the languages I’m going to write up here have the following traits in common with PHP: **They’re open-source**. What this means is that you can use the underlying language for free (no cost), and you are able to see and understand the underlying program if you wish to. **They’re often used for web development**. Pretty straightforward, these are languages used a lot for web development. Some are also widely used outside of that venue, but not all. **They’re high-level, loosely- and dynamically-typed.** This means that a variable can change types, and that you don’t have to declare, when you define a variable, what type of things (numbers vs strings vs objects, etc.) it’ll store. This is generally favored for web programming, but not universally. **Their communities are good-sized or better.** There are a lot of interesting languages that have the qualities listed above, but which don’t have a large community of practice. I’m leaving them aside here. ### Why use PHP? Why not JavaScript? Perhaps the most important language to compare PHP to is JavaScript. Modern development requires that every project uses at least a little bit of JavaScript for client-side development and interactivity. With Node, it has become relatively easy to use JS on the server as well. The idea of using the same language “isomorphically” on the client and the server is very appealing. Like PHP, JavaScript is an eclectic and sometimes ugly language with a lot of warts and “gotchas”. But JavaScript has gotten hugely fast in the last decade, so the case for it is real. Why pick PHP over JavaScript? Because you’ve got expertise in PHP or need to use PHP libraries. Otherwise, I think JavaScript may be a better choice. ### PHP vs Active Server Pages ((.Net) Core) Active Server Pages started life as a closed-source Microsoft language for web programming. It was pretty directly comparable to PHP, but ran in Windows Server environments. That was replaced by ASP.NET, which has now been replaced by ASP.NET Core. The latter two are now open-source languages, like PHP is. And I’ve personally never written a lick of ASP, in any variation. I would favor ASP if I used and loved Microsoft servers. Otherwise, give me PHP please. The size and scale of communities are not really comparable. ### Should I Use Ruby or PHP? Ruby, specifically Ruby on Rails, was *very* popular a decade ago. Ruby is still a much-loved language, which is much more elegant than PHP to my eyes. That said, its community is smaller. And I sense that Ruby has stopped being the “hot language” (being superseded in that role by JavaScript). Ruby’s beautiful, and there are enough people good at it that I’d not avoid it for that reason. But hiring people who are already familiar with it will remain harder than doing so in PHP. (Though I think the skill of an average Ruby developer is likely higher than the same for PHP.) ### Python vs PHP: Which is Better? The last language that makes sense in a one-to-one comparison with PHP is Python. Python is used in many more contexts than web development (it’s also very popular for statistics and data analytics). It’s also one of the more stable and well-designed languages around, in popular consensus. Python (like Ruby and JavaScript) is a little bit harder to get running on a web server than PHP. But it’s a prettier language, and more diversely used than PHP.
It’s one of the languages I’d think hard about favoring over PHP, all other things (access to external libraries, access to hiring expertise, etc.) being equal. ### PHP vs Go? Scala? Java? etc? I mentioned at the top that there were a lot of similarities with all prior direct language comparisons I made. But if we drop those constraints, there are **tons** of other languages we could compare. So to briefly touch on a few: **Java** is hugely popular, and very fast. It’s used for building Android apps, desktop apps, and web apps. It’s not dynamically typed though, which has better performance guarantees, but worse ease of web programming. **Go** is a newer and growing Google-backed language. It’s focused on web servers, but slightly lower-level (more like C) than PHP. It’s fast, but the community of practice is smaller. **Scala** is a popular JVM-running (Java-compatible) language that still seems to be growing in popularity. It’s more elegantly designed than PHP, but I can’t say much else about it with confidence. Again, that’s all I feel like I’ve got enough context to touch. But there are even more options you can consider. But that’s the end of the specific alternatives I’ll consider to PHP. ## Programming Language Choice is About Context Given those comparisons, why use PHP? I’ve enumerated some of the things that are bad about PHP, and I feel some of them regularly. Needle and haystack inconsistency in search functions bites me at least once a month when I’m quickly doing something without autocomplete. For a true and complete greenfield project, with no need to interoperate with any other system, or need to run in any arbitrary environment, with no existing expertise on the team, I’d think pretty deeply before I recommended PHP. Python really appeals to me in that scenario. But those are a lot of caveats. There are great reasons for even a fresh greenfield project to be written in PHP. Any modern web language can help you make good websites and apps. And every one of them has features that aren’t ideal. PHP, like any of them, requires you to truly understand its trade-offs for you in your project to determine whether or not it makes sense for you. But for me, personally, I continue to find that it’s possible to be — as Keith Adams put it in a great talk — “shockingly productive in PHP.” If you’ve got a team of Python programmers, write your application in Python. If you’ve got a cofounder who knows Java, write your SaaS in that. The personnel considerations of software should always outweigh the language ones. Thanks for the explanation. I decided to take up PHP as a new language this year. Based on what you have written, seems like it is quite easy to learn PHP, especially when I recently set up my own WordPress blog. Number 1 result on Google, and it boils down for me to: “It has a large community.” All the other reasons are just nonsensical rationalizations to the fact that thousands of developers keep using a language that’s inferior by design and execution. I think there’s a little more to the argument than that. But I wouldn’t argue that PHP (or JavaScript) are “more beautifully designed and executed” as languages than Ruby or Python. Partly because I don’t fully believe that, partly because it’s too big an argument to make. I touched a little more about why both PHP and JavaScript have succeeded, despite inelegant designs, in “All Programs Have a Surrounding Human Context”.
For me, what’s make our life much easier, it’s the best, ignore all the bad reputation about php, it’s sufficient, to the point, and fast development, that set we don’t need any thing else, just pick it and start enjoying but try to be professional I mean write good code, nice and clear structure, don’t use the flexibility of the language in crappy code. I have programmed before mostly using VB products from Microsoft, experience with C and C++, some Java, and Front End Development. I’ve never developed any Web sites before but would like to begin. Having PHP, Python, NodeJS, as possible options, which way would you recommend to create a site which must hold sensitive data, something like SalesForce, for example (not that big, I’m just referring to the type of data stored). Consider either language I’d have to learn from scratch pretty much. I don’t know enough about community, frameworks, etc to make a decision. Hi Oscar, Much as I love PHP, I’d tell anyone in you shoes to learn Node/JavaScript. If you’re building a wholly-custom application, I think JS is a better base than PHP, because you’ll be able to use the same language on the client (web browser) and web server. That doesn’t mean there aren’t complex differences between those two environments, but at least for you the underlying programming language will be the same in both places. Hope that helps, David The common refrain that “you can use the same language on the front-end and back-end,” is often a shallow evaluation of the particulars. All web developers need to learn JavaScript, but not all applications should be written with Node.js. The key to using Node.js for server-side development is to understand when and where a single threaded, non-blocking, asynchronous event loop run-time makes sense. Suddenly, the idea of being able to use the same language on the front and back ends is less important than finding the best solution for a particular use case. Very few people develop software starting off with raw Node.js, if they can even get it setup, and fault tolerant (with high availability). Again, it gets more complex than just “Uga, uga, same language.” Those who are starting off would be wise to learn the classical model object-orientation before taking on the prototypal model that JavaScript uses. Also, the efficiency gains that Node.js can provide are sometimes outweighed by the need to get going! Node types run to frameworks faster than PHP types because JavaScript is truly a strange animal of a programming language when you get to the guts of it. Even Douglas Crockford (before ES6+) wrote a book, “JavaScript: The Good Parts.” Nicholas Zakas’ books “The Principles of Object Orient JavaScript” and “Understanding ECMAScript 6: The Definitive Guide”, in addition to Flannagan’s “JavaScript: The Definitive Guide (5th and 6th), clearly demonstrate that praise for JavaScript is fan boy marketing, not reality. I came from C. C++, and Perl, so PHP was a God Send. 🙂 The bottom line is, PHP does not have native support for UNICODE, and this is the number one fault with the language, compared to Perl, JavaScript, Java, and others. Otherwise, PHP is a great language to learn to program in. You will appreciate more things (security, databases, web servers, operating systems) starting with PHP, than if you consume your life with JavaScript and user-agent related issues. In the end, Node.js is more than what the average person needs, or needs to worry about (npm anyone?). 
Advanced developers with deep understanding of use cases benefit most from Node.js. They can design applications that require less computing resources, but that advantage is trivial for everyday business applications and websites generally made by beginners. Node. js is unnecessary for most web applications, but it is an outstanding option to go to if your use case justifies it! Dear David, I don’t think there’s anything on the web quite like what you’ve published here I grew up on PHP and JavaScript always feeling like they were weird scripting languages, tho I think you said it… a) they’re not anymore with all the recent software produced around PHP b)software is about community Thank You You are enlightening 🙂 Hello, Pretty great post but there’s one thing I’d like to point out is that ASP.NET can run on Apache and Nginx through Mono although only partially, not all features are implemented and at this point it’s probably not recommended or worth the hassle; however, ASP.NET Core is fully and completely supported on Windows, Linux and macOS. https://docs.microsoft.com/en-us/dotnet/core/about DEAR DAVID. HEY, Thanks for posting this blog. This is very useful for me and I want to know some questions. is it important to learn PHP in 2019? and why I learn this how many benefits it gives? Is it still worth it? How can we persue common comparisons to PHP? Hi David: Nice write-up. I’ve been programming for over 50 years and now as a Professor at a local college I teach Application Development in 12 different languages. I use PHP for all my Web Application Development for both the college and for clients I’ve served over the last 20 years. I claim PHP is a Master Language because it can create code for the five primary domains of the net, namely content (HTML), presentation (CSS), behavior (Javascript), function (PHP), and persistence (MySQL). While some may say these Web Languages have their own purposes, which is true, but it is PHP that can deliver them to the Browser. In other words, as a PHP Developer, you have complete control of delivering these scripts as needed. Additionally, PHP has a huge assortment of problem-solving tools (i.e., text processing, image generation and alteration, URL data collection, bots, data analysis, AI, and many, many more) you can use to serve your clients. Lastly, the PHP community has an vast number of professionals and supporters who often lend their time to welcome and educate new-comers and assist them in developing code. If you’re new to Web Programming, then you will be hard-pressed to find a better Web Language than PHP. Cheers, Tedd I’ll take Net Core any day over PHP. It’s fast as hell (so is very good at scaling), it has a very good language (C#), has its own package library NuGet, it’s Open Source, it runs everywhere (Windows, Mac, Linux you name it), and the community is very large. And .NET is definitively not OpenSource lol. Thanks for sharing this. I’ve been trying to learn a little PHP. Great post for me and people who already use other languages like asp or asp.net and want to explore PHP. This is very helpful. Keep on the great work. I just want to say that there is no need to dive too deep in PHP if you don’t want to. A versatile coder can just pick what they need for a particular scenario. This is an amazing blog! Thank you. I too work for a PHP development company so I completely agree! If code really is about poetry as you claim, there are better choices than PHP. PHP is about quantity, not quality. 
It says so on their website too: “popular, pragmatic, flexible”. PHP solves trivial business problems at almost no cost. Of course, that too is a big quality if it is on the scale of what you need at some point in time. But once your application starts to become more serious, you can’t even compile it with Zend Guard, which only supports outdated PHP versions. You definitely need to move away from PHP and MySQL. You’ll wish you’d started with C++ and PostgreSQL right from the start of your project, instead of having to rewrite everything over again. Both PHP and MySQL have problems with time anyway. PHP reinvented that wheel again with DateTime, which still has too many annoying bugs as of today to be reliable. Next to that, MySQL can’t even store date/time fully with timezone information, leading to all kinds of problems once your initial assumptions (or installation details) change. To think that the inventor of PHP, Rasmus Lerdorf, once started PHP to demonstrate the ease of C… Great article. You misspelled comparison in the last paragraph. Just fixed. Thanks for pointing out the missing P 🙂 A very informative post – thank you for sharing, David. I am an absolute novice to programming with a business background. I am getting a website developed (the developer is using the Laravel framework) and would like to educate myself sufficiently in PHP so that I understand how the code works. Where do I start (a course, tutorial, etc.)? I would be grateful for your advice. Syed, for a situation like you’re describing, Laracasts is my most common recommendation. Cheers! You can host your web application written with ASP.NET Core on Linux servers too! ASP.NET Core has great tooling for free (Visual Studio Community), is backed by the C# language (C# 8 is becoming a really great language) and lots of other things, and .NET Core is the most loved framework based on the StackOverflow survey! Good article, but it could be a little deeper. I have been using PHP for the last 15 years, making so many projects with it, mostly small and medium size, but even one webshop with 30,000 products. Aaand it’s fast and cheap to run! You can get it running in Google App Engine for less than 10 USD/month, and if it gets more visitors, just put in an extra 10 USD and you are good to go. But when using all these fancy Node.js and other stacks, you will be in h*ll if your service gets too much traffic! I mean just 100 concurrent visitors. But what is sad is that the train of PHP has already gone. All the apps in my country are made with Node and React. So I need to learn them as well, and I recommend everybody to skip learning PHP for this reason. Absolutely right, PHP is a great platform to use, but we cannot totally forget about Node.js. I recently moved from Node.js to PHP and made an article to share my experience: https://hackernoon.com/nodejs-vs-php-which-is-better-for-your-web-development-he7oa24wp Kindly go through it and choose which platform to use. Hi David, I hope you are doing well. You have written a well-detailed article about the use of PHP in 2019. The main advantage of PHP is its wide developer community support. And above all, it speeds up the web development process by a considerable margin. I use Ruby, PHP and Go on a daily basis. No offense but, in my opinion, you failed to mention the actual advantages of PHP over other languages that you were comparing it to. 
I suspect that it’s because you don’t have enough experience with the other languages and you’re basing your opinion on Wikipedia-like theoretical information. Ruby is indeed more elegant but PHP has a lot of advantages over it. For example, type declarations in PHP allow failing early and help with better static code analysis, PHP’s IDE support is undeniably better and PHP is considerably faster than Ruby. In fact, I enjoy using PHP over Ruby just because of the aforementioned advantages. While I love Go and I think that it’s probably the best language among these three, in comparison to PHP it’s a lot harder to work with dynamic data. For example, I hate working with JSON in Go and I think that it’s one of the few poorly designed parts of this language. Is it low-level as you mentioned? Not really. In my opinion, Go is a higher-level language than PHP as its standard library is reacher and allows doing much more without tons of additional layers (e.g. frameworks). *enjoy more using PHP than Ruby First of all, thanks for sharing this overview. PHP was the first language I ever wrote code and I remember building great things with it – from simple websites to e-commerces in Magento and complex applications using Zend 2. Later I’ve decided to focus on front-end development only, and, the last time I touched PHP was aroung 2015, with Laravel. Almost at the end of 2019, I’m considering playing with it again and I want to take a look at Symphony or just play Laravel again. I believe it would be easier to get back on track even with the changes from version 5 to 7, because, as you said, PHP is learning-friendly. I appreciate you taking the time to consider different tools for the job and writing this out. Though, it’s a little concerning when you say, “The personnel considerations of software should always outweigh the languages ones”. Personnel can learn languages, and rather easily.. a language cannot change and has cases where one should be used over another. Why even write the article if you’re just going to shrug it off and say use what you want. I prefer PHP, since it is easy to learn and implement. Plus having lots of features. I love PHP . It’s so easy to develope page. My opinion is PHP will be used for long time. However , I realy need a feature from PHP. **Converter text language to native language and ran it on server. This will improve more security by hacking. This feature is very important for bigger project. I think it will unlock the language to growth up. Jokes …. Comparing the programming language based on community size is like comparing cars based on colors. Size of the community as well can indicate every code dolphin can become a programmer. I’m not surprised as PHP is actually a template system grown up to object oriented and partly controlled memory leak. PHP community finally discovered package management and dependency management (composer). Laravel is clone of Ruby on Rails, congrats PHP community, but at least you’ve learnt something. Projects like virtPHP, nice try, good direction, but quite late and lame. Command line scripting in PHP is like prototyping in assembler, you’re just not doing it, because it’s stupid. These days, learn Golang, TypeScript or Python even if you’re just a PHP dolphin trying to rewrite linux network stack in PHP because you just don’t understand how it works (I swear there are really such people, I’ve seen it in my eyes, hand to install that on servers ….. they did it because they wanted CDN). 
What I’m saying, 9 out of 10 PHP programmers are not programmers, just poorly educated chaos makers. PS: Agree with Shamt. “I would favor ASP if I used and loved Microsoft servers.” – .NET Core is cross platform so there is no need for you to use Microsoft servers David, thank you for your thoughts. I enjoyed what you had to say. Why use PHP in 2019? It works, it is designed with web development in mind, and it gives the layman a chance to build something (usually a website). PHP allows developers to focus on building, even if there are some technical deficiencies (native multi-threading and unicode support) and language conventions that tended to be more popular in the pasts (80s, 90s, Perl, object->member, $someVariable). The views of younger and modern computer scientists, software engineers, academics, hobbyists, deep corporate professionals, and others that disparage PHP might appreciate the linage of not C/C++/Java/C#, but not that of C/sh/csh/ksh/bash/Perl/PHP. It is easy to say one does not like something about PHP when they never had to do it in Perl, or never programmed in C. PHP is like the capstone of a pyramid of progress for a set of coding norms, but in a web development context (yes, you can do more with PHP). Python has the strongest argument (in features and modern coding conventions) against PHP for traditional web development, even though I despise some of its coding conventions and syntax (lack of ++,–), especially the class syntax. JavaScript zealots already hated PHP, so Node.js just gave them more ammunition to say “See, see, I hate PHP,” even when Node.js might not be the way to go for basic web applications (especially for someone building their own thing, or just starting out). Node.js might use JavaScript, but that does not mean you will be using JavaScript the same way that you do on the client-side (a point many tend to gloss over). Suddenly, everything is “real time” and in need of “non-blocking”, which is not true. Of course, if you take Node.js people literally, then every web application MUST be able to handle millions of requests per minute. Java is overkill for basic web applications. Beginning web developers should not be concerned about checked and unchecked exceptions (please). Why use PHP in 2019? It never stopped working and making people money in 2019, that’s why. If you have to build gigantic websites/applications, that’s your problem, not mine (take that Node.js people). Python, Java, and C# people, well, hey it’s what they are learning in school so it MUST be the solution for everything. Get paid. Let others have their purist debates. Save the other languages for their best use cases. PHP is the best language that aligns well with the history, progress, paradigm, practice and soul of server-side web development. If you never went through C, shell programming, and Perl, it is hard to appreciate PHP to the fullest. PHP puts most of the plumbing together for you and it is very accessible. That matters. That’s valuable. PHP has evolved and moved forwards, not backwards. It’s better today than ever. Use it to get ahead. Save Java and the lot for building software in other contexts. Use PHP to build, build, build, baby! There is no profit in claiming superiority if you have nothing to show for it. I really enjoyed your article. I was a developer for many years in VB & ASP (ASP Classic mostly). 
Nowadays, I’m depending on others to develop some mobile apps for me, and I find that they’re using E2C/I2M (Easy to Create / Impossible to Maintain) apps, and I’m frustrated by the attitude of “it can’t do that” limitations. I want a simple language that can use a SQL-style database, where *I* can code it and stop depending on the naysayers. My biggest fear is simple security (prevention from being hacked; no, I don’t have super-sensitive data). In the past, I developed a core set of features that answered MOST business problems… which almost always led to a series of simple tables with a LISTING page and a DETAIL page for each data table, with a few other small bells and whistles. Is PHP a good tool for simple data access applications like this? What databases are good to use? Thanks for considering. “Java is hugely popular, and VERY FAST.” I’m done. I learned PHP in 2011 in about 2 hours of coaching. Since then I’ve been writing internal and simple external-facing web apps in PHP (with either a MySQL or Oracle backend). Previously I’d built systems using Perl CGI. Having learned C and shell scripting, I found Perl easy to learn, and I found PHP easy to learn. I still use PHP because it is for me the quickest way to cobble together a working web site. I have also used JSP/Java and I have to say I find I am having to do a LOT more coding to get the same result as with PHP. I like the simplicity of the setup, developing on XAMPP (Windows) and deploying to Apache on Linux or Windows. I am getting concerned about being forced down the road of package management via Composer. I do avoid that. I also avoid frameworks like Laravel etc., as they just seem to dumb down the development process, whilst at the same time complicating it by requiring you to learn another set of commands. As my boss said, I’m lazy: too lazy to move from PHP. But people should let PHP and Ruby die in peace. It’s time to move on to JS or Python. PHP 8 with its just-in-time (JIT) compiler is worth watching.
true
true
true
Let’s get it out of the way early: PHP is a strange and ugly language. It’s not exceptionally fast. It’s not beautiful syntactically. It’s not formulated around a clear opinion about good software development practices. And it’s still what I write a lot of software in. The obvious question is: why? Why use PHP today? […]
2024-10-12 00:00:00
2018-06-12 00:00:00
https://www.thoughtfulco…aptop-screen.jpg
article
thoughtfulcode.com
Thoughtful Code
null
null
33,506,940
https://thenextweb.com/news/swiss-scientists-new-see-through-solar-panels-are-sweet-nectar-startups
Swiss scientists’ transparent solar cells will be sweet for startups
Ioanna Lykiardopoulou
As Europe’s transparent solar panel market swells, Swiss scientists have set a new efficiency record for the technology. This could lead the way to energy-generating windows that power up our homes and devices. Also known as Grätzel cells, dye-sensitised solar cells (DSCs) are a type of low-cost solar cell that use photosensitized dye to convert visible light into electricity. Previous versions of DSCs have been reliant on direct sunlight, but a team of researchers at the École Polytechnique Fédérale de Lausanne (EPFL) have found a way to make transparent photosensitizers that can absorb light across the entire visible light spectrum, including both direct and ambient. The researchers developed a way of improving the combination of two newly designed photosensitizer dye molecules. They did this by creating a technique in which a monolayer of a hydroxamic acid derivative is pre-adsorbed onto the surface of nanocrystalline mesoporous titanium dioxide. On top of the new photosensitizers being able to harvest light across the entire visible domain, the scientists have also increased the DSCs’ photovoltaic performance — which has been a weak point of the technology compared to traditional solar cells. To put that in numbers, the enhanced DSCs’ efficiency reaches above 15% in direct sunlight and up to 30% in ambient light. For reference, commercial solar panels have an average efficiency between 15% to 22%. In other words, if this technology can hit scale, we may soon see a transparent solar panel revolution in Europe. ## How are DSCs already being used? DSCs aren’t a new technology, but the advances from the École Polytechnique Fédérale de Lausanne could deliver a lifeline to sustainable buildings. Dye-sensitised solar cells are not only transparent, but can also be fabricated in multiple colors and for low cost. In fact, some are already being used in skylights, greenhouses, as well as glass facades. For example, think of the SwissTech Convention Center — a location that became the first public building to install the DSCs technology in 2012. While in 2017, the Copenhagen International School also used the same technology to inaugurate its building with 12,000 colored solar panels, which meet over half of the school’s annual energy needs. And as of 2021, the Netherlands-based company Physee is installing 15,000 of its ‘SmartWindows’ in office buildings across Europe. ## Why dye-sensitised solar cells could be a boon to European startups The scientists at the EPFL have improved this technology’s ability to work in low light conditions, something that’s vital in cloudier, colder countries. As the authors wrote, “Our findings pave the way for facile access to high performance DSCs and offer promising prospects for applications as power supply and battery replacement for low-power electronic devices that use ambient light as their energy source.” For European startups, this could be a game changer. While there’s an obvious benefit for installing transparent solar cells on buildings to help meet nations’ net-zero climate goals, the technology’s applicability goes beyond energy-generating windows and glass facades. DSCs take up far less space than traditional panels, which opens up their use for a wide number of items, whether that’s portable electronic devices (such as earphones and ereaders) or connected sensors that are part of the Internet of Things. When a scientific advance like this happens, it opens the door for bright minds to create something new and efficient. 
Let’s just hope Europe’s startups are ready to walk through. You can find the full research here.
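To put the quoted efficiencies in rough perspective, here is a back-of-the-envelope sketch that is not from the article: it assumes the standard 1,000 W/m² test irradiance for direct sunlight and a hypothetical one-square-metre pane, and simply multiplies area, irradiance and efficiency.

```python
# Illustrative only: compare a transparent DSC pane with a conventional panel
# under direct sun. The 1,000 W/m^2 figure is the standard test irradiance;
# the one-square-metre pane area is a made-up assumption for the example.
DIRECT_SUN_W_PER_M2 = 1000.0
PANE_AREA_M2 = 1.0

def peak_output_watts(efficiency: float) -> float:
    """Peak electrical output of the pane at a given conversion efficiency."""
    return DIRECT_SUN_W_PER_M2 * PANE_AREA_M2 * efficiency

print(f"Transparent DSC at 15% efficiency: ~{peak_output_watts(0.15):.0f} W")
print(f"Conventional panel at 20% efficiency: ~{peak_output_watts(0.20):.0f} W")
```

The gap to conventional panels in direct sun is now modest, and the DSCs keep converting under ambient light as well, which is the part that matters for windows and indoor devices.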
true
true
true
Swiss researchers have improved the efficiency of transparent solar cells, leading the way to electricity-generating windows and devices.
2024-10-12 00:00:00
2022-11-07 00:00:00
https://img-cdn.tnwcdn.com/image/tnw-blurple?filter_last=1&fit=1280%2C640&url=https%3A%2F%2Fcdn0.tnwcdn.com%2Fwp-content%2Fblogs.dir%2F1%2Ffiles%2F2022%2F11%2Fheader-image-transparent-solar-panels.jpg&signature=6295979d616639327356b4b609860b3c
newsarticle
thenextweb.com
TNW | Tech
null
null
13,002,124
http://cosmos.nautil.us/feature/21/will-et-drink-water
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
656,597
http://perspectives.mvdirona.com/2009/06/13/ErasureCodingAndColdStorage.aspx
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
27,265,196
https://www.youtube.com/watch?v=3XcYgltQCcA
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,876,171
http://www.fhi.ox.ac.uk/wp-content/uploads/is-doomsday-likely.pdf
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
3,988,059
http://scriptographer.org/forum/help/illu-cs6/#Post-4696
Illu CS6?
Ken Frederick
08.08.14, 15:24 It does not run on Illu CS6 (Mac) at this time? Is an update planned? Thank you :) I have written more about this topic here: The Future of Scriptographer is … Paper.js!. Future discussions should be carried on in the comments to that news post, and I will lock this thread here.
true
true
true
null
2024-10-12 00:00:00
2014-08-08 00:00:00
null
null
null
Scriptographer.org
null
null
20,729,625
https://github.com/cve-search/cve-search
GitHub - cve-search/cve-search: cve-search - a tool to perform local searches for known vulnerabilities
Cve-Search
cve-search is a tool to import CVE (Common Vulnerabilities and Exposures) and CPE (Common Platform Enumeration) into a MongoDB to facilitate search and processing of CVEs. The main objective of the software is to avoid doing direct and public lookups into the public CVE databases. Local lookups are usually faster and you can limit your sensitive queries via the Internet. cve-search includes a back-end to store vulnerabilities and related information, an intuitive web interface for search and managing vulnerabilities, a series of tools to query the system and a web API interface. cve-search is used by many organizations including the public CVE services of CIRCL. This document gives you basic information how to start with cve-search. For more information please refer to the documentation in the ** /doc** folder of this project. Check the documentation to get you started You can search the database using search.py. ``` usage: search.py [-h] [-q Q] [-p P [P ...]] [--only-if-vulnerable] [--strict_vendor_product] [--lax] [-f F] [-c C] [-o O] [-l] [-n] [-r] [-a] [-v V] [-s S] [-t T] [-i I] Search for vulnerabilities in the National Vulnerability DB. Data from http://nvd.nist.org. options: -h, --help show this help message and exit -p P [P ...] P = search one or more products, e.g. o:microsoft:windows_7 or o:cisco:ios:12.1 or o:microsoft:windows_7 o:cisco:ios:12.1. Add --only-if-vulnerable if only vulnerabilities that directly affect the product are wanted. --only-if-vulnerable With this option, "-p" will only return vulnerabilities directly assigned to the product. I.e. it will not consider "windows_7" if it is only mentioned as affected OS in an adobe:reader vulnerability. --strict_vendor_product With this option, a strict vendor product search is executed. The values in "-p" should be formatted as vendor:product, e.g. microsoft:windows_7 --lax Strict search for software version is disabled. Likely gives false positives for earlier versions that were not yet vulnerable. Note that version comparison for non-numeric values is done with simplifications. -f F F = free text search in vulnerability summary -c C search one or more CVE-ID -o O O = output format [csv|html|json|xml|cveid] -l sort in descending mode -n lookup complete cpe (Common Platform Enumeration) name for vulnerable configuration -r lookup ranking of vulnerable configuration -a Lookup CAPEC for related CWE weaknesses -v V vendor name to lookup in reference URLs -s S search in summary text -t T search in last n day (published) -T T search in last n day (modified) -i I Limit output to n elements (default: unlimited) -q [Q] Removed. Was used to search pip requirements file for CVEs. ``` Examples: ``` ./bin/search.py -p cisco:ios:12.4 ./bin/search.py -p cisco:ios:12.4 -o json ./bin/search.py -f nagios -n ./bin/search.py -p microsoft:windows_7 -o html ``` If you want to search all the WebEx vulnerabilities and only printing the official references from the supplier. `./bin/search.py -p webex: -o csv -v "cisco"` You can also dump the JSON for a specific CVE ID. `./bin/search.py -c CVE-2010-3333 -o json` Or dump the last 2 CVE entries in RSS or Atom format. `./bin/dump_last.py -f atom -l 2` Or you can use the webinterface. `./web/index.py` There is a ranking database allowing to rank software vulnerabilities based on their common platform enumeration name. The ranking can be done per organization or department within your organization or any meaningful name for you. 
As an example, you can add a partial CPE name like "sap:netweaver" which is very critical for your accounting department. `./sbin/db_ranking.py -c "sap:netweaver" -g "accounting" -r 3` and then you can lookup the ranking (-r option) for a specific CVE-ID: `./bin/search.py -c CVE-2012-4341 -r -n` As cve-search is based on a set of tools, it can be used and combined with standard Unix tools. If you ever wonder what are the top vendors using the term "unknown" for their vulnerabilities: ``` python3 bin/search_fulltext.py -q unknown -f \ | jq -c '. | .vulnerable_configuration[0]' \ | cut -f5 -d: | sort | uniq -c | sort -nr | head -10 1500 oracle 381 sun 372 hp 232 google 208 ibm 126 mozilla 103 microsoft 100 adobe 78 apple 68 linux ``` You can compare CVSS (Common Vulnerability Scoring System ) values of some products based on their CPE name. Like comparing oracle:java versus sun:jre and using R to make some statistics about their CVSS values: ``` python3 bin/search.py -p oracle:java -o json \ | jq -r '.cvss' | Rscript -e 'summary(as.numeric(read.table(file("stdin"))[,1]))' Min. 1st Qu. Median Mean 3rd Qu. Max. 1.800 5.350 9.300 7.832 10.000 10.000 ``` ``` python3 bin/search.py -p sun:jre -o json \ | jq -r '.cvss' | Rscript -e 'summary(as.numeric(read.table(file("stdin"))[,1]))' Min. 1st Qu. Median Mean 3rd Qu. Max. 0.000 5.000 7.500 7.333 10.000 10.000 ``` If you want to index all the CVEs from your current MongoDB collection: `./sbin/db_fulltext.py -l 0` and you query the fulltext index (to get a list of matching CVE-ID): `./bin/search_fulltext.py -q NFS -q Linux` or to query the fulltext index and output the JSON object for each CVE-ID: `./bin/search_fulltext.py -q NFS -q Linux -f` The fulltext indexer visualization is using the fulltext indexes to build a list of the most common keywords used in CVE. NLTK is required to generate the keywords with the most common English stopwords and lemmatize the output. NTLK for Python 3 exists but you need to use the alpha version of NLTK. `./bin/search_fulltext.py -g -s >cve.json` You can see a visualization on the demo site. The web interface is a minimal interface to see the last CVE entries and query a specific CVE. You'll need flask in order to run the website and Flask-PyMongo. To start the web interface: ``` cd ./web ./index.py ``` Then you can connect on `http://127.0.0.1:5000/` to browser the last CVE. The web interface includes a minimal JSON API to get CVE by ID, by vendor or product. A public version of the API is also accessible on cve.circl.lu. List the know vendors in JSON `curl "http://127.0.0.1:5000/api/browse/"` Dump the product of a specific vendor in JSON ``` curl "http://127.0.0.1:5000/api/browse/zyxel" { "product": [ "n300_netusb_nbg-419n", "n300_netusb_nbg-419n_firmware", "p-660h-61", "p-660h-63", "p-660h-67", "p-660h-d1", "p-660h-d3", "p-660h-t1", "p-660h-t3", "p-660hw", "p-660hw_d1", "p-660hw_d3", "p-660hw_t3" ], "vendor": "zyxel" } ``` Find the associated vulnerabilities to a vendor and a product. ``` curl "http://127.0.0.1:5000/api/search/zyxel/p-660hw" | jq . 
[ { "cwe": "CWE-352", "references": [ "http://www.exploit-db.com/exploits/33518", "http://secunia.com/advisories/58513", "http://packetstormsecurity.com/files/126812/Zyxel-P-660HW-T1-Cross-Site-Request-Forgery.html", "http://osvdb.org/show/osvdb/107449" ], "vulnerable_configuration": [ "cpe:/h:zyxel:p-660hw:_t1:v3" ], "Published": "2014-06-16T14:55:09.713-04:00", "id": "CVE-2014-4162", "Modified": "2014-07-17T01:07:29.683-04:00", "cvss": 6.8, "summary": "Multiple cross-site request forgery (CSRF) vulnerabilities in the Zyxel P-660HW-T1 (v3) wireless router allow remote attackers to hijack the authentication of administrators for requests that change the (1) wifi password or (2) SSID via a request to Forms/WLAN_General_1." }, { "cwe": "CWE-20", "references": [ "http://www.kb.cert.org/vuls/id/893726" ], "vulnerable_configuration": [ "cpe:/h:zyxel:p-660h-63:-", "cpe:/h:zyxel:p-660h-t1:-", "cpe:/h:zyxel:p-660h-d3:-", "cpe:/h:zyxel:p-660h-t3:v2", "cpe:/h:zyxel:p-660h-t1:v2", "cpe:/h:zyxel:p-660h-d1:-", "cpe:/h:zyxel:p-660h-67:-", "cpe:/h:zyxel:p-660h-61:-", "cpe:/h:zyxel:p-660hw_t3:v2", "cpe:/h:zyxel:p-660hw_t3:-", "cpe:/h:zyxel:p-660hw_d3:-", "cpe:/h:zyxel:p-660hw_d1:v2", "cpe:/h:zyxel:p-660hw_d1:-", "cpe:/h:zyxel:p-660hw:_t1:v2", "cpe:/h:zyxel:p-660hw:_t1:-" ], ``` - MISP modules cve-search to interact with MISP - MISP module cve-advanced to import complete CVE as MISP objects - cve-portal which is a CVE notification portal - cve-search-mt which is a set of management tools for CVE-Search - cve-scan which is a NMap CVE system scanner - Mercator which is an application that allow the mapping of an information system Official dockerized version of cve-search: There are some unofficial dockerized versions of cve-search (which are not maintained by us): You can find the changelog on GitHub Releases (legacy changelog). cve-search is free software released under the "GNU Affero General Public License v3.0" ``` Copyright (c) 2012 Wim Remes - https://github.com/wimremes/ Copyright (c) 2012-2024 Alexandre Dulaunoy - https://github.com/adulau/ Copyright (c) 2015-2019 Pieter-Jan Moreels - https://github.com/pidgeyl/ Copyright (c) 2020-2024 Paul Tikken - https://github.com/P-T-I ```
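As a small addition to the curl examples above, here is a minimal sketch of querying the same local web API from Python. It is not part of cve-search’s documented tooling; it assumes the Flask web interface described earlier is running on 127.0.0.1:5000 and that the third-party `requests` package is installed.

```python
# Minimal sketch: query a locally running cve-search web API for a vendor/product
# pair and print each CVE id, its CVSS score and a summary snippet, highest score first.
# Assumes the Flask web interface from ./web/index.py is listening on 127.0.0.1:5000.
import requests

BASE_URL = "http://127.0.0.1:5000/api"

def vulnerabilities(vendor: str, product: str) -> list[dict]:
    """Return the list of CVE entries for a vendor/product pair."""
    response = requests.get(f"{BASE_URL}/search/{vendor}/{product}", timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for entry in sorted(vulnerabilities("zyxel", "p-660hw"),
                        key=lambda e: e.get("cvss") or 0, reverse=True):
        print(f'{entry["id"]}  CVSS {entry.get("cvss", "n/a")}  {entry["summary"][:80]}')
```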
true
true
true
cve-search - a tool to perform local searches for known vulnerabilities - cve-search/cve-search
2024-10-12 00:00:00
2012-05-06 00:00:00
https://opengraph.githubassets.com/a05609015b10ff26788e69105d6437110c08a16bc0e676faf2819a945893885f/cve-search/cve-search
object
github.com
GitHub
null
null
39,527,272
https://spectrum.ieee.org/how-the-boeing-737-max-disaster-looks-to-a-software-developer
How the Boeing 737 Max Disaster Looks to a Software Developer
Gregory Travis
# How the Boeing 737 Max Disaster Looks to a Software Developer ## Design shortcuts meant to make a new plane seem like an old, familiar one are to blame **UPDATE: 3 Feb. 2024: **In the wake of a near-disastrous cabin blowout in an Alaska Airlines 737 Max 9 on 5 Jan.—which has propelled the embattled redesign of the once iconic 737 back into the headlines—Boeing, the plane’s designer, has been shaken anew. As the *Financial Times*reported on 31 Jan., the company’s chief executive Dave Calhoun wrote in a note to employees that outsourcing in the 737 Max design (discussed below) probably went “too far.” “We caused the problem, and we understand that,” Calhoun said. “Whatever conclusions are reached, Boeing is accountable for what happened. ... We simply must be better.” Which perhaps explains why the company even declined to report their financial outlook for the remainder of 2024. In response to the company’s near-crisis state, *The Los Angeles Times *reported on 30 Jan. that a former Boeing manager has publicly stated his aversion to ever flying on the redesigned 737. “I would absolutely not fly a Max airplance,” former Boeing senior manager Ed Pierson told the paper. “I’ve worked in the factory where they were built, and I saw the pressure employees were under to rush the planes out the door. I tried to get them to shut down before the first crash.” As Gregory Travis chronicles below from a 2019 perspective, a latter-day crisis of faith about the “Max” appears to be building even more in size and stature. On 1 Feb., Reuters reported that demand for older 737 jetliners has skyrocketed since 2024’s near-disaster. As Travis describes it, the original, classic craft with “smallish engines and relatively simple systems” is, even in its advanced age, proving very tough to top. —*IEEE Spectrum* *Original article from 18 Apr. 2019 follows: * *The views expressed here are solely those of the author and do not represent positions of *IEEE Spectrum* or the IEEE.* **I have been a pilot** for 30 years, a software developer for more than 40. I have written extensively about both aviation and software engineering. Now it’s time for me to write about both together. The Boeing 737 Max has been in the news because of two crashes, practically back to back and involving brand new airplanes. In an industry that relies more than anything on the appearance of total control, total safety, these two crashes pose as close to an existential risk as you can get. Though airliner passenger death rates have fallen over the decades, that achievement is no reason for complacency. The 737 first appeared in 1967, when I was 3 years old. Back then it was a smallish aircraft with smallish engines and relatively simple systems. Airlines ( especially Southwest) loved it because of its simplicity, reliability, and flexibility. Not to mention the fact that it could be flown by a two-person cockpit crew—as opposed to the three or four of previous airliners—which made it a significant cost saver. Over the years, market and technological forces pushed the 737 into ever-larger versions with increasing electronic and mechanical complexity. This is not, by any means, unique to the 737. Airliners constitute enormous capital investments both for the industries that make them and the customers who buy them, and they all go through a similar growth process. Most of those market and technical forces are on the side of economics, not safety. 
They work as allies to relentlessly drive down what the industry calls “ seat-mile costs“—the cost of flying a seat from one point to another. Much had to do with the engines themselves. The principle of Carnot efficiency dictates that the larger and hotter you can make any heat engine, the more efficient it becomes. That’s as true for jet engines as it is for chainsaw engines. It’s as simple as that. The most effective way to make an engine use less fuel per unit of power produced is to make it larger. That’s why the Lycoming O-360 engine in my Cessna has pistons the size of dinner plates. That’s why marine diesel engines stand three stories tall. And that’s why Boeing wanted to put the huge CFM International LEAP engine in its latest version of the 737. There was just one little problem: The original 737 had (by today’s standards) tiny little engines, which easily cleared the ground beneath the wings. As the 737 grew and was fitted with bigger engines, the clearance between the engines and the ground started to get a little…um, tight. By substituting a larger engine, Boeing changed the intrinsic aerodynamic nature of the 737 airliner.Norebbo.com Various hacks (as we would call them in the software industry) were developed. One of the most noticeable to the public was changing the shape of the engine intakes from circular to oval, the better to clear the ground. With the 737 Max, the situation became critical. The engines on the original 737 had a fan diameter (that of the intake blades on the engine) of just 100 centimeters (40 inches); those planned for the 737 Max have 176 cm. That’s a centerline difference of well over 30 cm (a foot), and you couldn’t “ovalize” the intake enough to hang the new engines beneath the wing without scraping the ground. The solution was to extend the engine up and well in front of the wing. However, doing so also meant that the centerline of the engine’s thrust changed. Now, when the pilots applied power to the engine, the aircraft would have a significant propensity to “pitch up,” or raise its nose. The angle of attack is the angle between the wings and the airflow over the wings. Think of sticking your hand out of a car window on the highway. If your hand is level, you have a low angle of attack; if your hand is pitched up, you have a high angle of attack. When the angle of attack is great enough, the wing enters what’s called an aerodynamic stall. You can feel the same thing with your hand out the window: As you rotate your hand, your arm wants to move up like a wing more and more until you stall your hand, at which point your arm wants to flop down on the car door. This propensity to pitch up with power application thereby increased the risk that the airplane could stall when the pilots “punched it” (as my son likes to say). It’s particularly likely to happen if the airplane is flying slowly. Worse still, because the engine nacelles were so far in front of the wing and so large, a power increase will cause them to actually produce lift, particularly at high angles of attack. So the nacelles make a bad problem worse. I’ll say it again: In the 737 Max, the engine nacelles themselves can, at high angles of attack, work as a wing and produce lift. And the lift they produce is well ahead of the wing’s center of lift, meaning the nacelles will cause the 737 Max at a high angle of attack to go to a *higher *angle of attack. This is aerodynamic malpractice of the worst kind. Pitch changes with power changes are common in aircraft. 
Even my little Cessna pitches up a bit when power is applied. Pilots train for this problem and are used to it. Nevertheless, there are limits to what safety regulators will allow and to what pilots will put up with. Pitch changes with increasing angle of attack, however, are quite another thing. An airplane approaching an aerodynamic stall cannot, under any circumstances, have a tendency to go further into the stall. This is called “dynamic instability,” and the only airplanes that exhibit that characteristic—fighter jets—are also fitted with ejection seats. Everyone in the aviation community wants an airplane that flies as simply and as naturally as possible. That means that conditions should not change markedly, there should be no significant roll, no significant pitch change, no nothing when the pilot is adding power, lowering the flaps, or extending the landing gear. The airframe, the hardware, should get it right the first time and not need a lot of added bells and whistles to fly predictably. This has been an aviation canon from the day the Wright brothers first flew at Kitty Hawk. Apparently the 737 Max pitched up a bit too much for comfort on power application as well as at already-high angles of attack. It violated that most ancient of aviation canons and probably violated the certification criteria of the U.S. Federal Aviation Administration. But instead of going back to the drawing board and getting the airframe hardware right (more on that below), Boeing relied on something called the “Maneuvering Characteristics Augmentation System,” or MCAS. Boeing’s solution to its hardware problem was software. I will leave a discussion of the corporatization of the aviation lexicon for another article, but let’s just say another term might be the “Cheap way to prevent a stall when the pilots punch it,” or CWTPASWTPPI, system. Hmm. Perhaps MCAS is better, after all. MCAS is certainly much less expensive than extensively modifying the airframe to accommodate the larger engines. Such an airframe modification would have meant things like longer landing gear (which might not then fit in the fuselage when retracted), more wing dihedral (upward bend), and so forth. All of those hardware changes would be horribly expensive. What’s worse, those changes could be extensive enough to require not only that the FAA recertify the 737 but that Boeing build an entirely new aircraft. Now we’re talking *real *money, both for the manufacturer as well as the manufacturer’s customers. That’s because *the *major selling point of the 737 Max is that it is just a 737, and any pilot who has flown other 737s can fly a 737 Max without expensive training, without recertification, without another type of rating. Airlines—Southwest is a prominent example—tend to go for one “standard” airplane. They want to have one airplane that all their pilots can fly because that makes both pilots and airplanes fungible, maximizing flexibility and minimizing costs. It all comes down to money, and in this case, MCAS was the way for both Boeing and its customers to keep the money flowing in the right direction. The necessity to insist that the 737 Max was no different in flying characteristics, no different in systems, from any other 737 was the key to the 737 Max’s fleet fungibility. That’s probably also the reason why the documentation about the MCAS system was kept on the down-low. 
Put in a change with too much visibility, particularly a change to the aircraft’s operating handbook or to pilot training, and someone—probably a pilot—would have piped up and said, “Hey. This doesn’t look like a 737 anymore.” And then the money would flow the wrong way. **As I explained, **you can do your own angle-of-attack experiments just by putting your hand out a car door window and rotating it. It turns out that sophisticated aircraft have what is essentially the mechanical equivalent of a hand out the window: the angle-of-attack sensor. You may have noticed this sensor when boarding a plane. There are usually two of them, one on either side of the plane, and usually just below the pilot’s windows. Don’t confuse them with the pitot tubes (we’ll get to those later). The angle-of-attack sensors look like wind vanes, whereas the pitot tubes look like, well, tubes. Angle-of-attack sensors look like wind vanes because that’s exactly what they are. They are mechanical hands designed to rotate in response to changes in that angle of attack. The pitot tubes measure how much the air is “pressing” against the airplane, whereas the angle-of-attack sensors measure what direction that air is coming from. Because they measure air pressure, the pitot tubes are used to determine the aircraft’s speed through the air. The angle-of-attack sensors measure the aircraft’s direction relative to that air. There are two sets of angle-of-attack sensors and two sets of pitot tubes, one set on either side of the fuselage. Normal usage is to have the set on the pilot’s side feed the instruments on the pilot’s side and the set on the copilot’s side feed the instruments on the copilot’s side. That gives a state of natural redundancy in instrumentation that can be easily cross-checked by either pilot. If the copilot thinks his airspeed indicator is acting up, he can look over to the pilot’s airspeed indicator and see if it agrees. If not, both pilot and copilot engage in a bit of triage to determine which instrument is profane and which is sacred. Long ago there was a joke that in the future planes would fly themselves, and the only thing in the cockpit would be a pilot and a dog. The pilot’s job was to make the passengers comfortable that someone was up front. The dog’s job was to bite the pilot if he tried to touch anything. On the 737, Boeing not only included the requisite redundancy in instrumentation and sensors, it also included redundant flight computers—one on the pilot’s side, the other on the copilot’s side. The flight computers do a lot of things, but their main job is to fly the plane when commanded to do so and to make sure the human pilots don’t do anything wrong when they’re flying it. The latter is called “envelope protection.” Let’s just call it what it is: the bitey dog. Let’s review what the MCAS does: It pushes the nose of the plane down when the system thinks the plane might exceed its angle-of-attack limits; it does so to avoid an aerodynamic stall. Boeing put MCAS into the 737 Max because the larger engines and their placement make a stall more likely in a 737 Max than in previous 737 models. When MCAS senses that the angle of attack is too high, it commands the aircraft’s trim system (the system that makes the plane go up or down) to lower the nose. It also does something else: Indirectly, via something Boeing calls the “Elevator Feel Computer,” it pushes the pilot’s control columns (the things the pilots pull or push on to raise or lower the aircraft’s nose) downward. 
In the 737 Max, like most modern airliners and most modern cars, everything is monitored by computer, if not directly controlled by computer. In many cases, there are no actual mechanical connections (cables, push tubes, hydraulic lines) between the pilot’s controls and the things on the wings, rudder, and so forth that actually make the plane move. And, even where there are mechanical connections, it’s up to the computer to determine if the pilots are engaged in good decision making (that’s the bitey dog again). But it’s also important that the pilots get physical feedback about what is going on. In the old days, when cables connected the pilot’s controls to the flying surfaces, you had to pull up, hard, if the airplane was trimmed to descend. You had to push, hard, if the airplane was trimmed to ascend. With computer oversight there is a loss of natural sense in the controls. In the 737 Max, there is no real “natural feel.” True, the 737 does employ redundant hydraulic systems, and those systems do link the pilot’s movement of the controls to the action of the ailerons and other parts of the airplane. But those hydraulic systems are powerful, and they do not give the pilot direct feedback from the aerodynamic forces that are acting on the ailerons. There is only an artificial feel, a feeling that the computer wants the pilots to feel. And sometimes, it doesn’t feel so great. When the flight computer trims the airplane to descend, because the MCAS system thinks it’s about to stall, a set of motors and jacks push the pilot’s control columns forward. It turns out that the Elevator Feel Computer can put a *lot* of force into that column—indeed, so much force that a human pilot can quickly become exhausted trying to pull the column back, trying to tell the computer that this really, really should not be happening. The antistall system depended crucially on sensors that are installed on each side of the airliner—but the system consulted only the sensor on one side.Norebbo.com Indeed, not letting the pilot regain control by pulling back on the column was an explicit design decision. Because if the pilots could pull up the nose when MCAS said it should go down, why have MCAS at all? MCAS is implemented in the flight management computer, even at times when the autopilot is turned off, when the pilots think they are flying the plane. In a fight between the flight management computer and human pilots over who is in charge, the computer will bite humans until they give up and (literally) die. Finally, there’s the need to keep the very existence of the MCAS system on the hush-hush lest someone say, “Hey, this isn’t your father’s 737,” and bank accounts start to suffer. The flight management computer is a computer. What that means is that it’s not full of aluminum bits, cables, fuel lines, or all the other accoutrements of aviation. It’s full of lines of code. And that’s where things get dangerous. **Those lines of code** were no doubt created by people at the direction of managers. Neither such coders nor their managers are as in touch with the particular culture and mores of the aviation world as much as the people who are down on the factory floor, riveting wings on, designing control yokes, and fitting landing gears. Those people have decades of institutional memory about what has worked in the past and what has not worked. Software people do not. In the 737 Max, only one of the flight management computers is active at a time—either the pilot’s computer or the copilot’s computer. 
And the active computer takes inputs *only* from the sensors on its own side of the aircraft. When the two computers disagree, the solution for the humans in the cockpit is to look across the control panel to see what the other instruments are saying and then sort it out. In the Boeing system, the flight management computer does not “look across” at the other instruments. It believes only the instruments on its side. It doesn’t go old-school. It’s modern. It’s software. This means that if a particular angle-of-attack sensor goes haywire—which happens all the time in a machine that alternates from one extreme environment to another, vibrating and shaking all the way—the flight management computer just believes it. It gets even worse. There are several other instruments that can be used to determine things like angle of attack, either directly or indirectly, such as the pitot tubes, the artificial horizons, etc. All of these things would be cross-checked by a human pilot to quickly diagnose a faulty angle-of-attack sensor. In a pinch, a human pilot could just look out the windshield to confirm visually and directly that, no, the aircraft is not pitched up dangerously. That’s the ultimate check and should go directly to the pilot’s ultimate sovereignty. Unfortunately, the current implementation of MCAS denies that sovereignty. It denies the pilots the ability to respond to what’s before their own eyes. Like someone with narcissistic personality disorder, MCAS gaslights the pilots. And it turns out badly for everyone. “Raise the nose, HAL.” “I’m sorry, Dave, I’m afraid I can’t do that.” In the MCAS system, the flight management computer is blind to any other evidence that it is wrong, including what the pilot sees with his own eyes and what he does when he desperately tries to pull back on the robotic control columns that are biting him, and his passengers, to death. In the old days, the FAA had armies of aviation engineers in its employ. Those FAA employees worked side by side with the airplane manufacturers to determine that an airplane was safe and could be certified as airworthy. As airplanes became more complex and the gulf between what the FAA could pay and what an aircraft manufacturer could pay grew larger, more and more of those engineers migrated from the public to the private sector. Soon the FAA had no in-house ability to determine if a particular airplane’s design and manufacture were safe. So the FAA said to the airplane manufacturers, “Why don’t you just have your people tell us if your designs are safe?” The airplane manufacturers said, “Sounds good to us.” The FAA said, “And say hi to Joe, we miss him.” Thus was born the concept of the “Designated Engineering Representative,” or DER. DERs are people in the employ of the airplane manufacturers, the engine manufacturers, and the software developers who certify to the FAA that it’s all good. Now this is not quite as sinister a conflict of interest as it sounds. It is in nobody’s interest that airplanes crash. The industry absolutely relies on the public trust, and every crash is an existential threat to the industry. No manufacturer is going to employ DERs that just pencil-whip the paperwork. On the other hand, though, after a long day and after the assurance of some software folks, they might just take their word that things will be okay. 
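To make the missing safeguard concrete, here is a purely illustrative sketch in Python. It is not Boeing’s code and not certified avionics logic; the disagreement threshold and the stall angle are invented for the example. It simply shows the “compare both vanes, and hand the airplane back to the humans when they disagree” behavior that the author later describes in his own Garmin-equipped Cessna.

```python
# Purely illustrative sketch of a sensor cross-check; not Boeing's MCAS code and
# not certified avionics logic. The thresholds below are hypothetical.

DISAGREEMENT_LIMIT_DEG = 3.0   # hypothetical: max tolerated left/right AoA disagreement
STALL_WARNING_AOA_DEG = 14.0   # hypothetical: angle of attack treated as "too high"

def nose_down_command(left_aoa_deg: float, right_aoa_deg: float) -> bool:
    """Decide whether automatic nose-down trim is even allowed to run.

    If the two angle-of-attack vanes disagree, do nothing automatic: flag the
    fault and leave the pilots in control, the way the author's Garmin autopilot
    simply drops off-line when its redundant attitude sources disagree.
    """
    if abs(left_aoa_deg - right_aoa_deg) > DISAGREEMENT_LIMIT_DEG:
        alert_crew("AOA DISAGREE: automatic trim inhibited")
        return False
    # Only with two agreeing sensors do we trust the reading at all.
    agreed_aoa = (left_aoa_deg + right_aoa_deg) / 2.0
    return agreed_aoa > STALL_WARNING_AOA_DEG

def alert_crew(message: str) -> None:
    # Stand-in for a cockpit annunciation; here it just prints.
    print(message)
```

With only a single angle-of-attack input, by contrast, the software has no way of knowing it is being fed garbage, which is exactly the failure the author turns to next.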
It is astounding that no one who wrote the MCAS software for the 737 Max seems even to have raised the possibility of using multiple inputs, including the opposite angle-of-attack sensor, in the computer’s determination of an impending stall. As a lifetime member of the software development fraternity, I don’t know what toxic combination of inexperience, hubris, or lack of cultural understanding led to this mistake. But I do know that it’s indicative of a much deeper problem. The people who wrote the code for the original MCAS system were obviously terribly far out of their league and did not know it. How can they implement a software fix, much less give us any comfort that the rest of the flight management software is reliable? **So Boeing produced** a dynamically unstable airframe, the 737 Max. That is big strike No. 1. Boeing then tried to mask the 737’s dynamic instability with a software system. Big strike No. 2. Finally, the software relied on systems known for their propensity to fail (angle-of-attack indicators) and did not appear to include even rudimentary provisions to cross-check the outputs of the angle-of-attack sensor against other sensors, or even the other angle-of-attack sensor. Big strike No. 3. None of the above should have passed muster. None of the above should have passed the “OK” pencil of the most junior engineering staff, much less a DER. That’s not a big strike. That’s a political, social, economic, and technical sin. It just so happens that, during the timeframe between the first 737 Max crash and the most recent 737 crash, I’d had the occasion to upgrade and install a brand-new digital autopilot in my own aircraft. I own a 1979 Cessna 172, the most common aircraft in history, at least by production numbers. Its original certification also predates that of the 737’s by about a decade (1955 versus 1967). My new autopilot consists of several very modern components, including redundant flight computers (dual Garmin G5s) and a sophisticated communication “bus” (a Controller Area Network bus) that lets all the various components talk to one another, irrespective of where they are located in my plane. A CAN bus derives from automotive “drive by wire” technology but is otherwise very similar in purpose and form to the various ARINC buses that connect the components in the 737 Max. My autopilot also includes electric pitch trim. That means it can make the same types of configuration changes to my 172 that the flight computers and MCAS system make to the 737 Max. During the installation, after the first 737 Max crash, I remember remarking to a friend that it was not lost on me that I was potentially adding a hazard similar to the one that brought down the Lion Air crash. Finally, my new autopilot also implements “envelope protection,” the envelope being the graph of the performance limitations of an aircraft. If my Cessna is *not* being flown by the autopilot, the system nonetheless constantly monitors the airplane to make sure that I am not about to stall it, roll it inverted, or a whole host of other things. Yes, it has its own “bitey dog” mode. As you can see, the similarities between my US $20,000 autopilot and the multimillion-dollar autopilot in every 737 are direct, tangible, and relevant. What, then, are the differences? For starters, the installation of my autopilot required paperwork in the form of a “Supplemental Type Certificate,” or STC. 
It means that the autopilot manufacturer and the FAA both agreed that my 1979 Cessna 172 with its (Garmin) autopilot was so significantly different from what the airplane was when it rolled off the assembly line that it was *no longer the same Cessna 172. *It was a different aircraft altogether. In addition to now carrying a new (supplemental) aircraft-type certificate (and certification), my 172 required a very large amount of new paperwork to be carried in the plane, in the form of revisions and addenda to the aircraft operating manual. As you can guess, most of those addenda revolved around the autopilot system. Of particular note in that documentation, which must be studied and understood by anyone who flies the plane, are various explanations of the autopilot system, including its command of the trim control system and its envelope protections. There are instructions on how to detect when the system malfunctions *and how to disable the system, immediately. *Disabling the system means pulling the autopilot circuit breaker; instructions on how to do that are strewn throughout the documentation, repeatedly. Every pilot who flies my plane becomes intimately aware that it is *not *the same as any other 172. This is a big difference between what pilots who want to fly my plane are told and what pilots stepping into a 737 Max are (or were) told. Another difference is between the autopilots in my system and that in the 737 Max. All of the CAN bus–interconnected components constantly do the kind of instrument cross-check that human pilots do and that, apparently, the MCAS system in the 737 Max does not. For example, the autopilot itself has a self-contained attitude platform that checks the attitude information coming from the G5 flight computers. If there is a disagreement, the system simply goes off-line and alerts the pilot that she is now flying manually. It doesn’t point the airplane’s nose at the ground, thinking it’s about to stall. Perhaps the biggest difference is in the amount of physical force it takes for the pilot to override the computers in the two planes. In my 172, there are still cables linking the controls to the flying surfaces. The computer has to press on the same things that I have to press on—and its strength is nowhere near as great as mine. So even if, say, the computer thought that my plane was about to stall when it wasn’t, I can easily overcome the computer. In my Cessna, humans still win a battle of the wills every time. That used to be a design philosophy of every Boeing aircraft, as well, and one they used against their archrival Airbus, which had a different philosophy. But it seems that with the 737 Max, Boeing has changed philosophies about human/machine interaction as quietly as they’ve changed their aircraft operating manuals. The 737 Max saga teaches us not only about the limits of technology and the risks of complexity, it teaches us about our real priorities. Today, safety doesn’t come first—money comes first, and safety’s only utility in that regard is in helping to keep the money coming. The problem is getting worse because our devices are increasingly dominated by something that’s all too easy to manipulate: software. Hardware defects, whether they are engines placed in the wrong place on a plane or O-rings that turn brittle when cold, are notoriously hard to fix. And by hard, I mean expensive. Software defects, on the other hand, are easy and cheap to fix. All you need to do is post an update and push out a patch. 
What’s more, we’ve trained consumers to consider this normal, whether it’s an update to my desktop operating systems or the patches that get posted automatically to my Tesla while I sleep. Back in the 1990s, I wrote an article comparing the relative complexity of the Pentium processors of that era, expressed as the number of transistors on the chip, to the complexity of the Windows operating system, expressed as the number of lines of code. I found that the complexity of the Pentium processors and the contemporaneous Windows operating system was roughly equal. That was the time when early Pentiums were affected by what was known as the FDIV bug. It affected only a tiny fraction of Pentium users. Windows was also affected by similar defects, also affecting only fractions of its users. But the effects on the companies were quite different. Where Windows addressed its small defects with periodic software updates, in 1994 Intel recalled the (slightly) defective processors. It cost the company $475 million—more than $800 million in today’s money. I believe the relative ease—not to mention the lack of tangible cost—of software updates has created a cultural laziness within the software engineering community. Moreover, because more and more of the hardware that we create is monitored and controlled by software, that cultural laziness is now creeping into hardware engineering—like building airliners. Less thought is now given to getting a design correct and simple up front because it’s so easy to fix what you didn’t get right later. Every time a software update gets pushed to my Tesla, to the Garmin flight computers in my Cessna, to my Nest thermostat, and to the TVs in my house, I’m reminded that none of those things were complete when they left the factory—because their builders realized they didn’t have to be complete. The job could be done at any time in the future with a software update. **Boeing is in the process** of rolling out a set of software updates to the 737 Max flight control system, including MCAS. I don’t know, but I suspect that those updates will center on two things: - Having the software “cross-check” system indicators, just as a human pilot would. Meaning, if one angle-of-attack indicator says the plane’s about to stall, but the other one says it’s not so, at least hold off judgment about pushing the nose down into the dirt and maybe let a pilot or two know you’re getting conflicting signals. - Backing off on the “shoot first, ask questions later” design philosophy—meaning, looking at multiple inputs. For the life of me, I do not know why those two basic aviation design considerations, bedrocks of a mind-set that has served the industry so well until now, were not part of the original MCAS design. And, when they were not, I do not know or understand what part of the DER process failed to catch the fundamental design defect. But I suspect that it all has to do with the same thing that brought us from Boeing’s initial desire to put larger engines on the 737 and to avoid having to internalize the cost of those larger engines—in other words, to do what every child is taught is impossible: get a free lunch. The emphasis on simplicity comes from the work of Charles Perrow, a sociologist at Yale University whose 1984 book, *Normal Accidents: Living With High-Risk Technologies*, tells it all in the very title. 
Perrow argues that system failure is a normal outcome in any system that is very complex and whose components are “tightly bound”—meaning that the behavior of one component immediately controls the behavior of another. Though such failures may seem to stem from one or another faulty part or practice, they must be seen as inherent in the system itself. They are “normal” failures. Nowhere is this problem more acutely felt than in systems designed to augment or improve safety. Every increment, every increase in complexity, ultimately leads to decreasing rates of return and, finally, to negative returns. Trying to patch and then repatch such a system in an attempt to make it safer can end up making it less safe. This is the root of the old engineering axiom “Keep it simple, stupid” (KISS) and its aviation-specific counterpart: “ Simplify, then add lightness.” The original FAA Eisenhower-era certification requirement was a testament to simplicity: Planes should not exhibit significant pitch changes with changes in engine power. That requirement was written when there was a direct connection between the controls in the pilot’s hands and the flying surfaces on the airplane. Because of that, the requirement—when written—rightly imposed a discipline of simplicity on the design of the airframe itself. Now software stands between man and machine, and no one seems to know exactly what is going on. Things have become too complex to understand. I cannot get the parallels between the 737 Max and the space shuttle Challenger out of my head. The Challenger accident, another textbook case study in normal failure, came about not because people didn’t follow the rules but because they did. In the Challenger case, the rules said that they had to have prelaunch conferences to ascertain flight readiness. It didn’t say that a significant input to those conferences couldn’t be the political considerations of delaying a launch. The inputs were weighed, the process was followed, and a majority consensus was to launch. And seven people died. In the 737 Max case, the rules were also followed. The rules said you couldn’t have a large pitch-up on power change and that an employee of the manufacturer, a DER, could sign off on whatever you came up with to prevent a pitch change on power change. The rules didn’t say that the DER couldn’t take the business considerations into the decision-making process. And 346 people are dead. It is likely that MCAS, originally added in the spirit of increasing safety, has now killed more people than it could have ever saved. It doesn’t need to be “fixed” with more complexity, more software. It needs to be removed altogether. An earlier version of this article was cited in *EE Times*. *Editor’s note: This story was updated on 21 April to clarify that the MCAS pushes the airliner’s nose by means of the “Elevator Feel Computer.”*
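The cross-check the author keeps returning to (compare both angle-of-attack vanes and refuse to act when they disagree) is simple enough to sketch. The snippet below is a hypothetical illustration only, with invented thresholds and function names; it is not MCAS, Garmin, or any certified flight-control code.

```python
# Hypothetical illustration of the sensor cross-check the article argues for.
# Thresholds and names are invented for the example; this is not MCAS or Garmin code.

AOA_DISAGREE_LIMIT_DEG = 5.0    # assumed allowable disagreement between the two vanes
AOA_STALL_THRESHOLD_DEG = 14.0  # assumed stall-warning angle of attack


def stall_protection_command(aoa_left_deg: float, aoa_right_deg: float) -> str:
    """Return a (made-up) flight-control decision from two angle-of-attack inputs."""
    # Cross-check: if the two sensors disagree, do not act -- hand control
    # back to the pilot and say why, instead of trusting a single input.
    if abs(aoa_left_deg - aoa_right_deg) > AOA_DISAGREE_LIMIT_DEG:
        return "DISENGAGE: AOA DISAGREE -- alert crew, take no trim action"

    aoa = (aoa_left_deg + aoa_right_deg) / 2.0
    if aoa > AOA_STALL_THRESHOLD_DEG:
        return "NOSE-DOWN TRIM: approaching stall (both sensors agree)"
    return "NO ACTION"


if __name__ == "__main__":
    print(stall_protection_command(15.2, 14.8))  # sensors agree, near stall
    print(stall_protection_command(74.5, 15.0))  # one vane failed high: disengage
```

The point of the sketch is the fail-safe default: on disagreement the system gives control back to the pilot rather than acting on a single, possibly failed, sensor.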
true
true
true
Design shortcuts meant to make a new plane seem like an old, familiar one are to blame
2024-10-12 00:00:00
2019-04-18 00:00:00
https://spectrum.ieee.or…%2C155%2C0%2C155
article
ieee.org
IEEE Spectrum
null
null
491,875
http://blogs.techrepublic.com.com/project-management/?p=364
owned and operated since 1995
null
Coming soon...
true
true
true
null
2024-10-12 00:00:00
2000-01-01 00:00:00
null
null
null
com.com
null
null
33,760,027
https://www.cnbc.com/2022/11/25/walmart-overtakes-amazon-in-shoppers-search-for-black-friday-bargains.html
Walmart overtakes Amazon in shoppers' search for Black Friday bargains
Annie Palmer
Walmart is top of mind for holiday shoppers who are hunting for Black Friday deals, according to new research. The big box retailer is dominating online searches for Black Friday discounts as of Friday morning, according to advertising technology company Captify, which tracks more than 1 billion searches a day from websites globally. Searches for Black Friday discounts on Walmart surged 386% year over year, leapfrogging rival retailer Amazon, which last year ranked first in Captify's survey of most searched retailers on Black Friday. This year, the world's largest e-commerce company ranked fourth, behind Target and Kohl's, respectively. Retailers are battling for shoppers' eyeballs and wallets at a time when the holiday shopping season is expected to be more subdued than in years past. Americans are expected to pull back on their holiday shopping this year as sky-high inflation squeezes their spending power. The National Retail Federation said it expects holiday sales during November and December to rise between 6% and 8% from last year, a decline when factoring in the effect of inflation. Online sales during the months of November and December are forecast to grow a meager 2.5% to $209.7 billion, compared with an 8.6% increase a year ago, according to Adobe Analytics. Early signs show the season may not be as gloomy as predicted. Online sales climbed 2.9% year over year to $5.29 billion on Thanksgiving Day, Adobe Analytics said. That's slightly higher than its estimates for growth during the overall holiday season. Black Friday is expected to pull in $9 billion in online sales, a 1% jump from the previous year, according to Adobe. Shopify merchants saw a solid start to the holiday period. Businesses who host their online stores on Shopify were raking in $1.52 million per minute on Thanksgiving Day, according to the company. *Select's editors are curating **Cyber Monday deals** from tech to home and kitchen and more.*
true
true
true
Retailers are battling for shoppers' eyeballs and wallets amid an unusual holiday shopping season clouded with sky-high inflation.
2024-10-12 00:00:00
2022-11-25 00:00:00
https://image.cnbcfm.com…37&w=1920&h=1080
article
cnbc.com
CNBC
null
null
34,696,062
https://dl.acm.org/doi/10.1145/3571198
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,143,974
https://blush.design
Blush: Illustrations for everyone
null
Blush is as amazing and creative as the artists they’ve chosen to represent. Dynamic, powerful, customizable illustrations all in the tool you do your design work in. Don’t forget fun as hell! Rogie, Designer Advocate at Figma I absolutely love the high-quality, quirky, and seemingly endless combinations offered on the Blush app. It’s so refreshing to see this blend of technology and traditional illustration that just makes sense on the web. Una Kravets, Web Developer Advocate at Google Every time I use Blush I smile. It has made designing downstates and onboarding screens so much fun. Not only are the illustrations amazing, but its simplicity allows me to customize illustrations with ease—it has literally been a lifesaver for me. As a designer on a small team Blush is a must-have! Edgar Chaparro, UX Designer at Zenput Blush is now my favourite Sketch Plugin, and saves me so much time when looking for beautiful illustrations to use in my projects. Marc Andrew, Designer at UI-UX Cheatsheet Blush instantly improves my designs. Hard to believe something so easy and so fast can add so much style but it’s a real game changer. Lex Roman, Growth Designer at Growth Designers After working through a lot of SVG-to-web issues with other design tools, I have a whole new appreciation for how seamless, effortless, and beautiful Blush itself is as a product and the illustrations within it. Caitlin Sowers, UX Designer at EDVO The level of detail and thought that has gone into these illustration systems is incredible. I love that Blush is as much about discovering great illustrations as it is about meeting the amazing artists who created them! Devin Mancuso, Design Strategist at Google We’re using Blush to find illustrations for digital products, to add some colour and personality. Experience is great so far. Site is lovely. Lush illustrations. Super-easy to add to Figma and play with. Noam Sohachevsky, Co-founder of SIDE Labs Customize incredible illustrations from artists around the world
true
true
true
Blush makes it easy to add free illustrations to your designs. Play with fully customizable graphics made by artists across the globe.
2024-10-12 00:00:00
2023-12-12 00:00:00
https://blush.design/og-image.png
website
blush.design
Blushdesignapp
null
null
21,683,416
https://interestingengineering.com/solar-powered-plant-in-kenya-gives-drinking-water-to-35000-people-a-day
A Solar Farm is Providing Drinking Water to 50,000 People a Day
Fabienne Lang
You likely don’t think twice as you reach for a glass and fill it with water from your kitchen tap. But unfortunately, not everyone has this luxury. Globally, one in three people doesn’t have access to safe drinking water. With the decreasing availability of high-quality freshwater, more and more communities are turning to desalination to produce drinkable water from brackish water and saltwater. But the desalination process can be expensive. The San Diego County Water Authority, for example, pays about $1,200 to $2,200 for an acre-foot (1,233,481 lt) of desalinated water, depending on the source. Luckily, an NGO called GivePower seems to have found an affordable solution that’s helping communities around the world. A solar solution GivePower’s desalination systems are powered entirely by solar energy and battery storage. Housed in 20-foot (6 m) shipping containers, they’re capable of transforming 18,492 gallons (70,000 lt) of seawater into drinking water daily. Their systems cost just over $500,000 and have a 20-year lifespan. In 2018, GivePower installed its first Solar Water Farm in Kiunga, on the Eastern Coast of Kenya, situated by the Indian Ocean. The region suffered extreme drought for many years, and the 3,500 inhabitants of Kiunga village didn’t have access to clean drinking water. Before their solar farm installation, the people of Kiunga sometimes had to travel up to one hour each way a day just to get enough drinking water. Because each and every drop of water is so precious to them, families and village members usually bathed and washed their clothes in salty water — something that is very harsh on the skin. The introduction of the Solar Water Farm changed all of that. As of 2019, the plant was producing enough drinking water for up to 50,000 people daily. And this was just the beginning. A cleaner future To date, GivePower has completed 2,650 solar power installations across 17 countries. In April 2020, GivePower deployed additional solar water farms in both Mombasa, Kenya and La Gonâve, Haiti. A few months later, in June 2020, GivePower’s Solar Water Farm Max went live in Likoni, Kenya. GivePower has also developed solar installations in underdeveloped areas of the U.S., including Standing Rock in 2019 — the largest solar installation in North Dakota. Drinking contaminated water can lead to debilitating waterborne illnesses and diseases such as cholera and dysentery. Moreover, it is a basic human right to have access to potable water. GivePower’s work is a huge step forward for solar energy usage.
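Taking the article’s own figures at face value (70,000 liters a day, roughly 50,000 people served, a build cost just over $500,000, and a 20-year lifespan), a quick back-of-the-envelope check is easy to run. The sketch below uses only the numbers quoted above and ignores operating and maintenance costs, which the article does not give.

```python
# Back-of-the-envelope check of the capacity and cost figures quoted in the article.
# Assumes constant daily output and ignores maintenance/operating costs (not given).

liters_per_day = 70_000        # reported desalination capacity
people_served = 50_000         # reported number of people supplied (as of 2019)
capital_cost_usd = 500_000     # reported system cost (approximate)
lifespan_years = 20            # reported design lifespan

liters_per_person = liters_per_day / people_served
lifetime_liters = liters_per_day * 365 * lifespan_years
capital_cost_per_1000_l = capital_cost_usd / lifetime_liters * 1000

print(f"{liters_per_person:.1f} L per person per day")              # ~1.4 L/day
print(f"~${capital_cost_per_1000_l:.2f} capital cost per 1,000 L")  # ~$0.98
```

For comparison, the San Diego figure quoted earlier ($1,200 to $2,200 per acre-foot, i.e. per 1,233,481 liters) works out to roughly $0.97 to $1.78 per 1,000 liters, though that is a delivered-water price rather than a capital-only estimate, so the two numbers are not strictly comparable.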
true
true
true
The Solar Water Farm brought potable water to a remote town in Papua New Guinea.
2024-10-12 00:00:00
2019-11-26 00:00:00
https://images.interesti…ater-cleaner.jpg
article
interestingengineering.com
Interesting Engineering
null
null
1,520
http://www.wikio.com/webinfo?id=13833869
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
4,399,060
http://pypi.python.org/pypi/blessings/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,138,054
https://www.surveymonkey.com/mp/aboutus/press/surveymonkey-acquires-fluidware/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
21,774,223
https://peerj.com/articles/8247/
Unprofessional peer reviews disproportionately harm underrepresented groups in STEM
Nyssa J Silbiger; Amber D Stubler
# Unprofessional peer reviews disproportionately harm underrepresented groups in STEM - Published - Accepted - Received - Academic Editor - Robert Toonen - Subject Areas - Ethical Issues, Science and Medical Education - Keywords - Peer review, Underrepresented minorities, STEM, Intersectionality - Copyright - © 2019 Silbiger and Stubler - Licence - This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ) and either DOI or URL of the article must be cited. - Cite this article - 2019. Unprofessional peer reviews disproportionately harm underrepresented groups in STEM. PeerJ 7:e8247 https://doi.org/10.7717/peerj.8247 ## Abstract ## Background Peer reviewed research is paramount to the advancement of science. Ideally, the peer review process is an unbiased, fair assessment of the scientific merit and credibility of a study; however, well-documented biases arise in all methods of peer review. Systemic biases have been shown to directly impact the outcomes of peer review, yet little is known about the downstream impacts of unprofessional reviewer comments that are shared with authors. ## Methods In an anonymous survey of international participants in science, technology, engineering, and mathematics (STEM) fields, we investigated the pervasiveness and author perceptions of long-term implications of receiving of unprofessional comments. Specifically, we assessed authors’ perceptions of scientific aptitude, productivity, and career trajectory after receiving an unprofessional peer review. ## Results We show that survey respondents across four intersecting categories of gender and race/ethnicity received unprofessional peer review comments equally. However, traditionally underrepresented groups in STEM fields were most likely to perceive negative impacts on scientific aptitude, productivity, and career advancement after receiving an unprofessional peer review. ## Discussion Studies show that a negative perception of aptitude leads to lowered self-confidence, short-term disruptions in success and productivity and delays in career advancement. Therefore, our results indicate that unprofessional reviews likely have and will continue to perpetuate the gap in STEM fields for traditionally underrepresented groups in the sciences. ## Introduction The peer review process is an essential step in protecting the quality and integrity of scientific publications, yet there are many issues that threaten the impartiality of peer review and undermine both the science and the scientists (Kaatz, Gutierrez & Carnes, 2014; Lee et al., 2013). A growing body of quantitative evidence shows violations of objectivity and bias in the peer review process for reasons based on author attributes (e.g., language, institutional affiliation, nationality, etc.), author identity (e.g., gender, sexuality) and reviewer perceptions of the field (e.g., territoriality within field, personal gripes with authors, scientific dogma, discontent/distrust of methodological advances) (Lee et al., 2013). 
The most influential demonstrations of systemic biases within the peer review system have relied on experimental manipulation of author identity or attributes (e.g., Goldberg’s, 1968 classic study “Joan” vs “John”; Goldberg, 1968; Wennerås & Wold, 1997) or analyses of journal-reported metrics such as number of papers submitted, acceptance rates, length of time spent in review and reviewer scores (Fox, Burns & Meyer, 2016; Fox & Paine, 2019; Helmer et al., 2017; Lerback & Hanson, 2017). These studies have focused largely on the inequality of outcomes resulting from inequities in the peer review process. While these studies have been invaluable for uncovering trends and patterns and increasing awareness of existing biases, they do not specifically assess the content of the reviews (Resnik, Gutierrez-Ford & Peddada, 2008), the downstream effects that unfair, biased and *ad hominem* comments may have on authors and how these reviewer comments may perpetuate representation gaps in science, technology, engineering, and mathematics (STEM) fields. In the traditional peer review process, the content, tone and thoroughness of a manuscript review is the sole responsibility of the reviewer (the identity of whom is often protected by anonymity), yet the contextualization and distribution of reviews to authors is performed by the assigned (handling) editor at the journal to which the paper was submitted. In this tiered system, journal editors are largely considered responsible for policing reviewer comments and are colloquially referred to as the “gatekeepers” of peer review. Both reviewers and editors are under considerable time pressures to move manuscripts through peer review, often lack compensation commensurate with time invested, experience heavy workloads and are subject to inherent biases of their own, which may translate into irrelevant and otherwise unprofessional comments being first written and then passed along to authors (Resnik & Elmore, 2016; Resnik, Gutierrez-Ford & Peddada, 2008). We surveyed STEM scientists that have submitted manuscripts to a peer-reviewed journal as first author to understand the impacts of receiving unprofessional peer review comments on the perception of scientific aptitude (confidence as a scientist), productivity (publications per year) and career advancement (ability to advance within the field). This study defined an unprofessional peer review comment as any statement that is unethical or irrelevant to the nature of the work; this includes comments that: (1) lack constructive criticism, (2) are directed at the author(s) rather than the nature or quality of the work, (3) use personal opinions of the author(s)/work rather than evidence-based criticism, or (4) are “mean-spirited” or cruel (e.g., of comments received by survey respondents that fit these criteria see Fig. 1). The above definition was provided as a guideline for survey respondents to separate frank, constructive and even harsh reviews from those that are blatantly inappropriate or irrelevant. Specifically, this study aimed to understand the content of the unprofessional peer reviews, the frequency at which they are received and the subsequent impacts on the recipient’s perception of their abilities. Given that psychological studies show that overly harsh criticisms can lead to diminished success (Baron, 1988), we tested for the effects of unprofessional peer review on the perception of scientific aptitude, productivity and career advancement of all respondents. 
Underrepresented groups in particular are vulnerable to stereotype threat, wherein negative societal stereotypes about performance abilities and aptitude are internalized and subsequently expressed (Leslie et al., 2015). Further, the combination of social categorizations may lead to amplified sources of oppression and stereotype threat (Crenshaw, 1991); therefore, it is necessary to assess the impacts of unprofessional peer review comments across intersectional gender and racial/ethnic groups. ## Materials and Methods ### Survey methods and administration The data for this study came from an anonymous survey of international members of the STEM community using Qualtrics survey software. Data were collected under institutional review board agreements and federalwide assurance at Occidental College (IRB00009103, FWA00005302) and California State University, Northridge (IRB00001788, FWA00001335). The survey was administered between 28 February 2019 and 10 May 2019 using the online-based platform in English and was open to anyone self-identifying as a member of the STEM community that published their scholarly work in a peer reviewed system. Participation in the survey was voluntary and no compensation or incentives were offered. All respondents had to certify that they were 18 years or older and read the informed consent statement; participants were able to exit the survey at any point without consequence. Participants were recruited broadly through social media platforms, direct posting on scientific list-serves and email invitations to colleagues, department chairs and organizations focused on diversity and inclusive practices in STEM fields (see Supplemental Files for distribution methods). Targeted emails were used to increase representation of respondents. Data on response rates from specific platforms were not collected. The survey required participants to provide basic demographic information including gender identity, level of education, career stage, country of residence, field of expertise and racial and/or ethnic identities (see Supplemental Files for specific survey questions). Throughout the entire survey, all response fields included “*I prefer not to say*” as an opt-out choice. Once demographic information was collected from participants, the study’s full definition of an unprofessional peer review was presented and respondents self-identified whether they had ever received a peer review comment as first author that fit this definition. Survey respondents answering “*no*” or “*I prefer not to say*” were automatically redirected to the end of the survey, as no additional questions were necessary. Respondents answering “*yes*” were asked a series of follow-up questions designed to determine the nature of the unprofessional comments, the total number of scholarly publications to date and the number of independent times an unprofessional review was experienced. The perceived impact of the unprofessional reviews on the scientific aptitude, productivity and career advancement of each respondent was assessed using the following questions: (1) To what degree did the unprofessional peer review(s) make you doubt your scientific aptitude? (1–5) 1 = not at all, 5 = I fully doubted my scientific abilities; (2) To what degree do you feel that receiving the unprofessional review(s) limited your overall productivity? Please rate this from 1 to 5. 
1 = not at all, 5 = greatly limited number of publications per year; (3) To what degree do you feel that receiving the unprofessional review(s) delayed career advancement? Please rate this from 1 to 5. 1 = not at all, 5 = greatly impacted/delayed career advancement. Finally, respondents were invited to provide direct quotes of unprofessional reviews received (although the respondents were told they could remove any personal or identifying information such as field of study, pronouns, etc.). Participants choosing to share peer review quotes were able to specify whether they gave permission for the quote to be shared/distributed. Explicit permission from each respondent was received to use and distribute all quotes displayed in Fig. 1. At the end of the survey, all respondents were required to certify that all the information provided was true and accurate to the best of their abilities and that the information was shared willingly. We recognize that there are limitations to our survey design. First, our survey was only administered in English. There also may have been non-response bias for individuals who did not experience negative comments during peer review even though any advertisement of this survey indicated that we sought input from anyone who has ever published a peer-reviewed study as first author. There could also be a temporal element, where authors who received comments more recently may have responded differently than those who received unprofessional reviews many years ago. Additionally, the order of questions was not randomized and participants were asked to complete demographic information before answering questions about their peer review experience, which may have primed respondents to select answers that were more in line with racial or gender-specific stereotypes (Steele & Ambady, 2006). In order to maintain the anonymity of our respondents, we did not ask for any bibliometric data from the authors. Given that our sample of respondents represented a diverse array of career stages, STEM fields, countries of residence and racial/ethnic identities, we do not believe that any of the above significantly limits the interpretation of our results. ### Data analysis We tested for the pervasiveness and downstream effects of unprofessional peer reviews on four intersecting gender and racial/ethnic groups: (1) women of color and non-binary people of color, (2) men of color, (3) white women and white non-binary people and (4) white men. Due to the small number of respondents identifying as non-binary (<1% of respondents), we statistically analyzed women and non-binary genders together in a category as marginalized genders in the sciences. However, refer to Table S1 for full breakdown of responses from each gender identity so that patterns may be assessed by readers without the constraints conferred by statistical assumptions and analyses. To protect the anonymity of individuals that may be personally identified based on the combination of their race/ethnicity and gender, respondents who identified as a race or ethnicity other than white, including those who checked multiple racial and/or ethnic categories were grouped together for the statistical analysis (Fig. S1). It is important to note that by grouping the respondents into four categories, the analysis captures only the broad patterns for intersectional groups and does not relay the unique experiences of each respondent, which should not be discounted. 
Survey respondents (*N* = 1,106) were given the opportunity to opt out of any question; therefore, the sample sizes were different for each statistical analysis. We tested for differences in the probability of receiving an unprofessional peer review across four intersectional groups (*N* = 1,077) using a Bayesian logistic regression (family = Bernoulli, link = logit). Of the 642 people who indicated that they received an unprofessional peer review, 617, 620 and 618 answered the questions regarding perceived impacts to their scientific aptitude, productivity and career advancement, respectively. We ran individual Bayesian ordinal logistic regressions (family = cumulative, link = logit) for each of the three questions to test for differences in probabilities of selecting a 1–5 across the four groups. All models were run using the BRMS package (Bürkner & Others, 2017) in R *v3.5.2* which uses the Hamiltonian Monte Carlo algorithm in STAN (Hoffman & Gelman, 2014; Stan Development Team, 2015). Each model was sampled from four chains, 4,000 iterations post-warmup, and half student *t* distributions for all priors. Model convergence was assessed using Gelman–Rubin diagnostics (for all parameters; Gelman & Rubin, 1992) and visual inspection of trace plots. Posterior predictive checks were visually inspected using the *pp_check()* function and all assumptions were met. Data are presented as medians and two-tailed 95% Bayesian credible intervals (BCI). ## Results We received 1,106 responses from people in 46 different countries across >14 STEM disciplines (Fig. 2). Overall, 58% of all the respondents (*N* = 642) indicated that they had received an unprofessional review, with 70% of those individuals reporting multiple instances (3.5 ± 5.8 reviews, mean ± SD, across all participants). There were no significant differences in the likelihood of receiving an unprofessional review among the intersectional groups (Fig. S2); however, there were clear and consistent differences in downstream effects between groups in perceived impacts on self-confidence, productivity and career trajectories after receiving an unprofessional review. White men were most likely to report no impact to their scientific aptitude (score of 1) after receiving an unprofessional peer review (P[1] = 0.40, 95% BCI [0.34–0.47], where P[score] denotes the probability of selecting a particular score), with a 5.7 times higher probability of selecting a 1 than a 5 (fully doubted their scientific aptitude; P[5] = 0.07, 95% BCI [0.05–0.09]). Notably, white men were 1.3, 2.0 and 1.7 times more likely to indicate no resultant doubt of their scientific aptitude than men of color (P[1] = 0.30, 95% BCI [0.20–0.41]), white women and white non-binary people (P[1] = 0.20, 95% BCI [0.16–0.23]) and women of color and non-binary people of color (P[1] = 0.23, 95% BCI [0.16–0.31]), respectively (Fig. 3A). Together, these results indicate that receiving unprofessional peer reviews had less of an overall impact on the scientific aptitude of white men relative to the remaining three groups. Similar patterns among intersectional groups emerged for reported impacts of unprofessional reviews on productivity (measured in number of publications per year). Specifically, women of color and non-binary people of color, white women and white non-binary people and men of color were most likely to select a 3 (moderate level of perceived negative impact on productivity), whereas white men were most likely to select a 1 (no perceived impact on their productivity; Fig.
3B). White men were also the least likely of all groups to indicate that receiving unprofessional reviews greatly limited their number of publications per year (P[5] = 0.06, 95% BCI [0.05–0.09]), which significantly differed from groups of women and non-binary people, but not men of color (Fig. 3B). Women of color and non-binary people of color had the most distinct pattern in reported negative impacts on career advancement (Fig. 3C). Women of color and non-binary people of color had a nearly equal probability of reporting each level of impact (1–5); whereas, men of color, white women and white non-binary people and white men had a decreasing probability of selecting scores indicative of a higher negative impact on career advancement (Fig. 3C). Specifically, women of color and non-binary people of color were the most likely to select that they had significant delays in career advancement as a result of receiving an unprofessional review (P[5] = 0.20, 95% BCI [0.13–0.28]). Women of color and non-binary people of color were also the least likely of the groups to report no impact on career advancement as a result of unprofessional reviews (P[1] = 0.22, 95% BCI [0.15–0.31]). ## Discussion Our data show that unprofessional peer reviews are pervasive in STEM disciplines, regardless of race/ethnicity or gender, with over half of participants reporting that they had received unprofessional comments. Our study did not assess peer review outcomes of participants, but it is possible that unprofessional reviews could impact acceptance rates across groups differently because reviewer perception of competence is implicitly linked to gender regardless of content (Goldberg, 1968; Kaatz, Gutierrez & Carnes, 2014; Wennerås & Wold, 1997). Previous studies have demonstrated clear differences in acceptance/rejection rates between genders (Murray et al., 2019; Symonds et al., 2006; Fox & Paine, 2019) and future studies should test if receiving an unprofessional peer review leads to different acceptance outcomes depending on gender and/or race/ethnicity. While there were no statistical differences in the number of unprofessional reviews received among the four intersectional groups in our study, there were clear and consistent differences in the downstream impacts that the unprofessional reviews had among groups. Overall, white men were the least likely to question their scientific aptitude, or report delays in productivity or career advancement than any other group after receiving an unprofessional review. Groups that reported the highest self-doubt after unprofessional comments also reported the highest delays in productivity. This finding corroborates studies showing destructive criticism leads to self-doubt (Baron, 1988) and vulnerability to stereotype threat (Leslie et al., 2015), which has quantifiable negative impacts on productivity (Kahn & Scott, 1997) and career advancement (Howe-Walsh & Turnbull, 2014). Conversely, high self-confidence is related to increased persistence after failure (Baumeister et al., 2003). Therefore, scientists with a higher evaluation of their own scientific aptitude after an unprofessional review may be less likely to have reduced productivity following a negative peer review experience. Women and non-binary people were the most likely to report significant delays in productivity after receiving unprofessional reviews. It is well known that publication rates in STEM fields differ between genders (Symonds et al., 2006; Bird, 2011). 
Men have 40% more publications than women on average, with women taking up to 2.5 times as long to achieve the same output rate as men in some fields (Symonds et al., 2006). While our study cannot confer causality leading to diminished productivity, the results show that unprofessional reviews reinforce bias that is already being encountered by underrepresented groups on a daily basis. Other well-studied mechanisms leading to reduced productivity for women include (but are not limited to) papers by women authors spend more time in review than papers by men (Hengel, 2017), men are significantly less likely to publish coauthored papers with women than with other men (Salerno et al., 2019), women receive less research funding than men in some countries (Witteman et al., 2019) and women spend more time doing service work than men at academic institutions (Guarino & Borden, 2017). Women are also underrepresented in the peer review process leading to substantial biases in peer review (Goldberg, 1968; Kaatz, Gutierrez & Carnes, 2014). For example, studies have shown that women are underrepresented as editors which leads to fewer refereed papers by women (Fox, Burns & Meyer, 2016; Lerback & Hanson, 2017; Helmer et al., 2017; Cho et al., 2014) and that authors of all genders are less likely to recommend women reviewers (Lerback & Hanson, 2017) which contributes to inequity in peer review outcomes (i.e., fewer first and last authored papers accepted by women) (Murray et al., 2019). However, some strategies, such as double-blind reviewing and open peer review, have been shown to alleviate gender inequity in publishing (Darling, 2015; Budden et al., 2008; Groves, 2010), although there are notable exceptions (Webb, O’Hara & Freckleton, 2008). The difficulty in publishing and lack of productivity may contribute to the high attrition rate of women in academia (Cameron, Gray & White, 2013). Assessment of intersectional groups has been generally overlooked in research on publication and peer review biases. Yet, traditionally underrepresented racial and ethnic groups experience substantial pressures and limitations to inclusion in STEM fields. Indeed, in our study there were significant differences, especially in perceived delays in career advancement, between white women and white non-binary people, and women of color and non-binary people of color (Fig. 3C). Had we focused on only gender or racial differences, the distinct experiences of women of color and non-binary people of color would have been obscured. Because both gender and racial biases lead to diminished recruitment and retention, as well as higher rates of attrition in the sciences (Xu, 2008; Alfred, Ray & Johnson, 2019), intersectionality cannot be ignored. Our results indicate that receiving unprofessional peer reviews is yet another barrier to equity in career trajectories for women of color and non-binary people of color, in addition to the quality of mentorship, intimidation and harassment, lack of representation and many others (Howe-Walsh & Turnbull, 2014; Zambrana et al., 2015). Our study indicates that unprofessionalism in reviewer comments is pervasive in STEM fields. Although we found clear patterns indicating that unprofessional peer reviewer comments had a stronger negative impact on underrepresented intersectional groups in STEM, all groups had at least some members reporting the highest level of impact in every category.
This unprofessional behavior often occurs under the cloak of anonymity and is being perpetuated by the scientific community upon its members. Comments like several received by participants in our study (see Fig. 1) have no place in the peer review process. Interestingly, less than 3% of our participants that received an unprofessional peer review stated that the review was from an open review journal, where the peer reviews and responses from authors are published with the final manuscript (Pulverer, 2010). While a recent laboratory study showed that open peer review practices led to higher cooperation and peer review accuracy (Leek, Taub & Pineda, 2011), less is known about how transparent review practices affect professionalism in peer review comments. Our data indicate that open reviews may help curtail unprofessional comments, but more research on this topic is needed. Individual scientists have the power and responsibility to address the occurrence of unprofessional peer reviews directly and enact immediate change. We therefore recommend the following: (1) Make peer review mentorship an active part of student and peer learning. For example, departments and scientific agencies should hold workshops on peer review ethics. (2) Follow previously published best practices in peer review (Huh & Sun, 2008; Kaatz, Gutierrez & Carnes, 2014). (3) Practice self-awareness and interrogate whether comments are constructive and impartial (additionally, set aside enough time to review thoroughly, assess relevance and re-read any comments). (4) Encourage journals that do not already have explicit guidelines for the review process to create a guide, as well as implement a process to reprimand or remove reviewers that are acting in an unprofessional manner. For example, the journal could contact the reviewer’s department chair or senior associate if they submit an unprofessional review. (5) Societies should add acceptable peer review practices to their code of conduct and a structure that reprimands or removes society members that submit unprofessional peer reviews. (6) Editors should be vigilant in preventing unprofessional reviews from reaching authors directly and follow published best practices (D’Andrea & O’Dwyer, 2017; Resnik & Elmore, 2016). (7) When in doubt use the “golden rule” (review others as you wish to be reviewed). ## Conclusions Our study shows that unprofessional peer reviews are pervasive and that they disproportionately harm underrepresented groups in STEM. Specifically, underrepresented groups were most likely to report direct negative impacts on their scientific aptitude, productivity and career advancement after receiving an unprofessional peer review. While it was beyond the scope of this study, future investigations should also focus on the effect of unprofessional peer reviews on first-generation scientists, English as a second language, career stage, peer review in grants, and other factors that could lead to differences in downstream effects. Unprofessional peer reviews have no place in the scientific process and individual scientists have the power and responsibility to enact immediate change. However, we recognize and applaud those reviewers and editors (and there are many!) that spend a significant amount of time and effort writing thoughtful, constructive, and detailed criticisms that are integral to moving science forward. ## Supplemental Information ### Survey distribution methods. Detailed information on survey distribution for this study.
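The ordinal models described in the Methods can be pictured with a small sketch. Under a cumulative-logit model, each group has a latent “impact” score, and a set of ordered cutpoints translates that score into probabilities of answering 1 through 5; a larger score shifts probability mass toward the higher (more impacted) answers. The Python below illustrates that mapping only, with made-up cutpoints and group effects rather than the authors’ brms/R fit or their data.

```python
import math

# Illustrative sketch (not the authors' code) of a cumulative-logit ordinal model:
# a latent "impact" score plus ordered cutpoints determines the probability of
# each 1-5 answer. The cutpoints and group effects below are invented numbers
# chosen only to show the mechanics.

def logistic(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def ordinal_probs(eta: float, cutpoints: list[float]) -> list[float]:
    """P(response = k) for k = 1..len(cutpoints)+1 under a cumulative-logit model."""
    cum = [logistic(c - eta) for c in cutpoints] + [1.0]          # P(Y <= k)
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

cutpoints = [-1.0, 0.0, 1.0, 2.0]   # assumed ordered thresholds between the 5 answers
for group, eta in {"group A": 0.0, "group B": 0.8}.items():  # hypothetical group effects
    probs = ordinal_probs(eta, cutpoints)
    print(group, [round(p, 2) for p in probs], "sum =", round(sum(probs), 2))
```

Fitting such a model in a package like brms amounts to estimating those cutpoints and group effects, with credible intervals, from the observed survey responses.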
true
true
true
Background Peer reviewed research is paramount to the advancement of science. Ideally, the peer review process is an unbiased, fair assessment of the scientific merit and credibility of a study; however, well-documented biases arise in all methods of peer review. Systemic biases have been shown to directly impact the outcomes of peer review, yet little is known about the downstream impacts of unprofessional reviewer comments that are shared with authors. Methods In an anonymous survey of international participants in science, technology, engineering, and mathematics (STEM) fields, we investigated the pervasiveness and author perceptions of long-term implications of receiving of unprofessional comments. Specifically, we assessed authors’ perceptions of scientific aptitude, productivity, and career trajectory after receiving an unprofessional peer review. Results We show that survey respondents across four intersecting categories of gender and race/ethnicity received unprofessional peer review comments equally. However, traditionally underrepresented groups in STEM fields were most likely to perceive negative impacts on scientific aptitude, productivity, and career advancement after receiving an unprofessional peer review. Discussion Studies show that a negative perception of aptitude leads to lowered self-confidence, short-term disruptions in success and productivity and delays in career advancement. Therefore, our results indicate that unprofessional reviews likely have and will continue to perpetuate the gap in STEM fields for traditionally underrepresented groups in the sciences.
2024-10-12 00:00:00
2019-12-12 00:00:00
https://dfzljdn9uc3pi.cl…7/1/fig-2-1x.jpg
article
peerj.com
PeerJ
null
null