| column | dtype | values |
|---|---|---|
| id | int64 | 3 – 41.8M |
| url | string | lengths 1 – 1.84k |
| title | string | lengths 1 – 9.99k |
| author | string | lengths 1 – 10k |
| markdown | string | lengths 1 – 4.36M |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | string | lengths 1 – 10k |
| filedate | string | 2 values |
| date | string | lengths 9 – 19 |
| image | string | lengths 1 – 10k |
| pagetype | string | 365 values |
| hostname | string | lengths 4 – 84 |
| sitename | string | lengths 1 – 1.6k |
| tags | string | 0 values |
| categories | string | 0 values |
- id: 18,964,692
- url: https://www.cbc.ca/news/technology/what-on-earth-newsletter-road-salt-environment-1.4982353
- downloaded: false, meta_extracted: false, parsed: false
- all other fields: null
- id: 9,929,197
- url: https://github.com/AnthonyDiGirolamo/todotxt-machine
- title: GitHub - AnthonyDiGirolamo/todotxt-machine: an interactive terminal based todo.txt file editor with an interface similar to mutt
- author: AnthonyDiGirolamo
todotxt-machine is an interactive terminal-based todo.txt file editor with an interface similar to mutt. It follows the todo.txt format and stores todo items in plain text.

- View your todos in a list with helpful syntax highlighting
- Archive completed todos
- Define your own colorschemes
- Tab completion of contexts and projects
- Filter contexts and projects
- Search for the todos you want with fuzzy matching
- Sort in ascending or descending order, or keep things unsorted
- Clickable UI elements

Requires Python 2.7 or Python 3.4 on Linux or Mac OS X.

todotxt-machine 1.1.8 and earlier drew its user interface using only raw terminal escape sequences. While this was very educational, it was difficult to extend with new features. Version 2 and up uses urwid to draw its interface and is much more easily extendable.

### Using pip

```
pip install todotxt-machine
```

### Running from source

Download or clone this repo and run the `todotxt-machine.py` script:

```
git clone https://github.com/AnthonyDiGirolamo/todotxt-machine.git
cd todotxt-machine
./todotxt-machine.py
```

### Command line options

```
todotxt-machine
Usage:
  todotxt-machine
  todotxt-machine TODOFILE [DONEFILE]
  todotxt-machine [--config FILE]
  todotxt-machine (-h | --help)
  todotxt-machine --version
  todotxt-machine --show-default-bindings

Options:
  -c FILE --config=FILE     Path to your todotxt-machine configuration file [default: ~/.todotxt-machinerc]
  -h --help                 Show this screen.
  --version                 Show version.
  --show-default-bindings   Show default keybindings in config parser format
                            Add this to your config file and edit to customize
```

### Config file

You can tell todotxt-machine to use the same todo.txt file whenever it starts up by adding a `file` entry to the `~/.todotxt-machinerc` file. If you want to archive completed tasks, you can specify a done.txt file using an `archive` entry. You can also set your preferred colorscheme or even define new themes. Here is a short example:

```
[settings]
file = ~/todo.txt
archive = ~/done.txt
auto-save = True
show-toolbar = False
show-filter-panel = False
enable-borders = False
enable-word-wrap = True
colorscheme = myawesometheme
```

### Colorschemes

todotxt-machine currently supports solarized and base16 colors. Pictured above are the following themes from left to right: `base16-light`, `base16-dark`, `solarized-light`, `solarized-dark`.

Here is a config file with a complete colorscheme definition:

```
[settings]
file = ~/todo.txt
colorscheme = myawesometheme

[colorscheme-myawesometheme]
plain=h250
selected=,h238
header=h250,h235
header_todo_count=h39,h235
header_todo_pending_count=h228,h235
header_todo_done_count=h156,h235
header_file=h48,h235
dialog_background=,h248
dialog_color=,h240
dialog_shadow=,h238
footer=h39,h235
search_match=h222,h235
completed=h59
context=h39
project=h214
creation_date=h135
due_date=h161
priority_a=h167
priority_b=h173
priority_c=h185
priority_d=h77
priority_e=h80
priority_f=h62
```

You can add colorschemes by adding sections with names that start with `colorscheme-`. Then under the `[settings]` section you can say which colorscheme you want to use. The format for a color definition is:

```
name=foreground,background
```

Foreground and background colors follow the 256 color formats defined by urwid. Here is an excerpt from that link:

> High colors may be specified by their index `h0`, ..., `h255` or with the shortcuts for the color cube `#000`, `#006`, `#008`, ..., `#fff` or gray scale entries `g0` (black from color cube), `g3`, `g7`, ... `g100` (white from color cube).

You can see all the colors defined here.

I recommend you leave the foreground out of the following definitions by adding a comma immediately after the `=`:

```
selected=,h238
dialog_background=,h248
dialog_color=,h240
dialog_shadow=,h238
```

If you want to use your terminal's default foreground and background color, use blank strings and keep the comma:

```
dialog_background=,
```

Let me know if you make any good colorschemes and I'll add them to the default collection.

### Key bindings

You can customize any key binding by adding a setting to the `[keys]` section of your config file `~/.todotxt-machinerc`. For a list of the default key bindings run:

```
todotxt-machine --show-default-bindings
```

You can easily append this to your config file by running:

```
todotxt-machine --show-default-bindings >> ~/.todotxt-machinerc
```

When you edit a key binding, the in-app help will reflect it. Hit `h` or `?` to view the help.

### Known issues

- On Mac OS hitting `ctrl-y` suspends the application. Run `stty dsusp undef` to fix.
- Mouse interaction doesn't seem to work properly in the Apple Terminal. I would recommend using iTerm2 or rxvt / xterm in XQuartz.
- With tmux the background color in todotxt-machine can sometimes be lost at the end of a line. If this is happening to you, set your `$TERM` variable to `screen` or `screen-256color`:

```
export TERM=screen-256color
```

### Planned features

- ~~User defined color themes~~
- ~~Manual reordering of todo items~~
- ~~Config file for setting colors and todo.txt file location~~
- ~~Support for archiving todos in done.txt~~
- ~~Custom keybindings~~
- Add vi readline keybindings. urwid doesn't support readline currently. The emacs style bindings currently available are emulated. See the log here.
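As mentioned above, todotxt-machine stores tasks in the plain-text todo.txt format. As a rough illustration of what a task line encodes (completion marker, `(A)` priority, `@context` and `+project` tags), here is a minimal, hypothetical parser sketch; it is not code from this project:

```python
import re

def parse_todo(line):
    """Pull the main todo.txt fields out of a single task line (simplified)."""
    task = {"priority": None, "contexts": [], "projects": [], "done": False}
    if line.startswith("x "):              # completed tasks start with "x "
        task["done"] = True
        line = line[2:]
    m = re.match(r"\(([A-Z])\) ", line)    # optional "(A) " priority prefix
    if m:
        task["priority"] = m.group(1)
        line = line[m.end():]
    task["contexts"] = re.findall(r"@(\S+)", line)   # @context tags
    task["projects"] = re.findall(r"\+(\S+)", line)  # +project tags
    task["text"] = line
    return task
```

Because the format is this simple, features like tab completion of contexts and projects only need the `@` and `+` token lists collected above.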
- downloaded: true, meta_extracted: true, parsed: true
- description: an interactive terminal based todo.txt file editor with an interface similar to mutt - AnthonyDiGirolamo/todotxt-machine
- filedate: 2024-10-12 00:00:00, date: 2013-10-18 00:00:00
- image: https://opengraph.githubassets.com/5811cf44521ddc1b0dd977e3ba6f352893b3a1fc53abe554724eafd8b1e1a0c2/AnthonyDiGirolamo/todotxt-machine
- pagetype: object, hostname: github.com, sitename: GitHub
- tags: null, categories: null
- id: 20,407,048
- url: https://seed.run/blog/how-to-build-a-cicd-pipeline-for-serverless-apps-with-circleci
- title: How to build a CI/CD pipeline for Serverless apps with CircleCI
- author: Frank
# How to build a CI/CD pipeline for Serverless apps with CircleCI

At Seed, we've built a fully managed CI/CD pipeline for Serverless Framework apps on AWS. So you can imagine we have quite a bit of experience with the various CI/CD services. Over the next few weeks we are going to dive into some of the most popular services out there. We'll take a detailed look at what it takes to run your own CI/CD pipeline for Serverless apps. This'll give you a good feel for not just how things work but how Seed makes your life easier!

Today we'll be looking at CircleCI. You might have come across tutorials that help you set up a CI/CD pipeline for Serverless on Circle. However, most of these are way too simplistic and don't talk about how to deploy large real-world Serverless apps. Instead we'll be working with a more accurate real-world setup comprising:

- A monorepo Serverless app
- With multiple services
- Deployed to separate development and production AWS accounts

As a refresher, a monorepo Serverless app is one where multiple Serverless services live in subdirectories, each with its own `serverless.yml` file. Here is the repo of the app that we'll be configuring, for reference. The directory structure might look something like this:

```
/
  package.json
  services/
    users-api/
      package.json
      serverless.yml
    posts-api/
      package.json
      serverless.yml
    cron-job/
      package.json
      serverless.yml
```

### What we'll be covering

- How to deploy your monorepo Serverless app on Git push
- How to deploy to multiple AWS accounts
- How to deploy using the pull request workflow
- How to clean up unused branches and closed pull requests

Note that this guide is structured to work in standalone steps. If you only want to deploy to multiple AWS accounts, you can stop after step 2. Also worth mentioning: while this guide helps you create a fully-functional CI/CD pipeline for Serverless, all of these features are available in Seed without any configuration or scripting.
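In a layout like this, each subdirectory of `services/` that contains its own `serverless.yml` is one independently deployable service. A minimal sketch (my own illustration, not part of the article's setup) of how you might enumerate them:

```python
from pathlib import Path

def find_services(repo_root):
    """Return the relative paths of all Serverless services in a monorepo:
    every directory under services/ that holds its own serverless.yml."""
    root = Path(repo_root)
    return sorted(
        str(p.parent.relative_to(root))
        for p in root.glob("services/*/serverless.yml")
    )
```

Each path this returns (e.g. `services/users-api`) is exactly what gets passed as the service path to a per-service deploy job later in the guide.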
### Pre-requisites

- A CircleCI account.
- AWS credentials (Access Key Id and Secret Access Key) of the AWS account you are going to deploy to. Follow this guide to create one.
- A monorepo Serverless app in a GitHub repo. Head over to our template repo and click on **Use this template** to clone it to your account.

### 1. How to deploy your monorepo app on Git push

Let's start by configuring the Circle side of things. Go into your CircleCI account. Select **Contexts** from the left menu, and click **Create Context**. Create a context called **Development**.

Go into the **Development** context, and click on **Add Environment Variable**. Create a variable with:

- Name: **AWS_ACCESS_KEY_ID**
- Value: Access Key Id of the IAM user

Repeat the previous step and create another variable with:

- Name: **AWS_SECRET_ACCESS_KEY**
- Value: Secret Access Key of the IAM user

Go to the cloned repository and click on **Create new file**. Name the new file `.circleci/config.yml` and paste the following:

```
version: 2.1

jobs:
  deploy-service:
    docker:
      - image: circleci/node:8.10
    parameters:
      service_path:
        type: string
      stage_name:
        type: string
    steps:
      - checkout
      - restore_cache:
          keys:
            - dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}
            - dependencies-cache
      - run:
          name: Install Serverless CLI
          command: sudo npm i -g serverless
      - run:
          name: Install dependencies
          command: |
            npm install
            cd << parameters.service_path >>
            npm install
      - run:
          name: Deploy application
          command: |
            cd << parameters.service_path >>
            serverless deploy -s << parameters.stage_name >>
      - save_cache:
          paths:
            - node_modules
            - << parameters.service_path >>/node_modules
          key: dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}

workflows:
  build-deploy:
    jobs:
      - deploy-service:
          name: Deploy Users API
          service_path: services/users-api
          stage_name: ${CIRCLE_BRANCH}
          context: Development
      - deploy-service:
          name: Deploy Posts API
          service_path: services/posts-api
          stage_name: ${CIRCLE_BRANCH}
          context: Development
      - deploy-service:
          name: Deploy Cron Job
          service_path: services/cron-job
          stage_name: ${CIRCLE_BRANCH}
          context: Development
```

Let's quickly go over what we are doing here:

- We created a job called **deploy-service** that takes the **path of a service** and the **name of the stage** you want to deploy to. The name of the stage will be used as the `--stage` in the Serverless commands.
- The **deploy-service** job does an `npm install` in the repo's root directory and in the service subdirectory.
- The job then goes into the service directory and runs `serverless deploy` with the stage name that's passed in.
- We also created a workflow that runs the **deploy-service** job for each service, while passing in the **branch name** as the stage name.
- As a side note, we also specified that we want to cache the `node_modules/` directory in both the root and the service directory for faster deployment.

Next, scroll to the bottom and click **Commit new file**.

Back in Circle, select **ADD PROJECTS** from the left menu, and click on the **Set Up Project** button next to your project. Make sure the **Show forks** checkbox is checked. Select **Linux** as the Operating System, and **Node** as the Language. Go to step 5 and click on **Start building**.

Then click on **WORKFLOWS** in the left menu. Click on the workflow and you will see the 3 jobs that are currently running. Click on a job and you will see the output for each of the steps. Scroll down to the **Deploy application** section, and you should see the output for the `serverless deploy -s master` command.

Now that we have the basics up and running, let's look at how to deploy our app to multiple AWS accounts.

### 2. How to deploy to multiple AWS accounts

You might be curious as to why we would want to deploy to multiple AWS accounts. It's a good practice to keep your development and production environments in separate accounts.
By separating them completely, you can secure access to your production environment. This will reduce the likelihood of accidentally removing resources from it while developing.

To deploy to another account, repeat the earlier step of creating a **Development** context, and create a **Production** context with the AWS Access Key Id and Secret Access Key of your production AWS account.

Go to your GitHub repo and open `.circleci/config.yml`. Replace it with the following:

```
version: 2.1

jobs:
  deploy-service:
    docker:
      - image: circleci/node:8.10
    parameters:
      service_path:
        type: string
      stage_name:
        type: string
    steps:
      - checkout
      - restore_cache:
          keys:
            - dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}
            - dependencies-cache
      - run:
          name: Install Serverless CLI
          command: sudo npm i -g serverless
      - run:
          name: Install dependencies
          command: |
            npm install
            cd << parameters.service_path >>
            npm install
      - run:
          name: Deploy application
          command: |
            cd << parameters.service_path >>
            serverless deploy -s << parameters.stage_name >>
      - save_cache:
          paths:
            - node_modules
            - << parameters.service_path >>/node_modules
          key: dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}

workflows:
  build-deploy:
    jobs:
      # non-master branches deploy to the stage named by the branch
      - deploy-service:
          name: Deploy Users API
          service_path: services/users-api
          stage_name: ${CIRCLE_BRANCH}
          context: Development
          filters:
            branches:
              ignore: master
      - deploy-service:
          name: Deploy Posts API
          service_path: services/posts-api
          stage_name: ${CIRCLE_BRANCH}
          context: Development
          filters:
            branches:
              ignore: master
      - deploy-service:
          name: Deploy Cron Job
          service_path: services/cron-job
          stage_name: ${CIRCLE_BRANCH}
          context: Development
          filters:
            branches:
              ignore: master
      # master branch deploys to the 'prod' stage
      - deploy-service:
          name: Deploy Users API
          service_path: services/users-api
          stage_name: prod
          context: Production
          filters:
            branches:
              only: master
      - deploy-service:
          name: Deploy Posts API
          service_path: services/posts-api
          stage_name: prod
          context: Production
          filters:
            branches:
              only: master
      - deploy-service:
          name: Deploy Cron Job
          service_path: services/cron-job
          stage_name: prod
          context: Production
          filters:
            branches:
              only: master
```

This does a couple of things:

- A Git push to the **master** branch will be deployed to the **prod** stage, instead of the stage with the branch name. It'll also use the **Production** context.
- A Git push to all the other branches will be deployed to the stage with their branch name, using the **Development** context.

Commit and push this change. This will trigger Circle to build the **master** branch again, this time deploying to your production account.

Next, let's look at implementing the PR aspect of our Git workflow.

### 3. How to deploy in the pull request workflow

A big advantage of using Serverless is how easy and cost effective it is to deploy many different versions (i.e. stages) of your app. A great use case for this is to deploy a version of your app for each pull request, to preview how the merged version would work, similar to the idea of Review Apps on Heroku.

Unfortunately, Circle does not support pull requests natively, but this can be achieved with a little bit of bash scripting. Let's look at how to set that up.

Go to your GitHub repo and open the `.circleci/config.yml` that we had created above. Replace it with the following:

```
version: 2.1

jobs:
  deploy-service:
    docker:
      - image: circleci/node:8.10
    parameters:
      service_path:
        type: string
      stage_name:
        type: string
    steps:
      - checkout
      - run:
          name: Check Pull Request
          command: |
            if [[ ! -z "$CIRCLE_PULL_REQUEST" ]]; then
              # parse pr# from URL https://github.com/fwang/sls-monorepo-with-circleci/pull/1
              PR_NUMBER=${CIRCLE_PULL_REQUEST##*/}
              echo "export PR_NUMBER=$PR_NUMBER" >> $BASH_ENV
              echo "Pull request #$PR_NUMBER"
            fi
      - run:
          name: Merge Pull Request
          command: |
            if [[ ! -z "$PR_NUMBER" ]]; then
              git fetch origin +refs/pull/$PR_NUMBER/merge
              git checkout -qf FETCH_HEAD
            fi
      - restore_cache:
          keys:
            - dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}
            - dependencies-cache
      - run:
          name: Install Serverless CLI
          command: sudo npm i -g serverless
      - run:
          name: Install dependencies
          command: |
            npm install
            cd << parameters.service_path >>
            npm install
      - run:
          name: Deploy application
          command: |
            cd << parameters.service_path >>
            if [[ ! -z "$PR_NUMBER" ]]; then
              serverless deploy -s pr$PR_NUMBER
            else
              serverless deploy -s << parameters.stage_name >>
            fi
      - save_cache:
          paths:
            - node_modules
            - << parameters.service_path >>/node_modules
          key: dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}

workflows:
  build-deploy:
    jobs:
      # non-master branches deploy to the stage named by the branch
      - deploy-service:
          name: Deploy Users API
          service_path: services/users-api
          stage_name: ${CIRCLE_BRANCH}
          context: Development
          filters:
            branches:
              ignore: master
      - deploy-service:
          name: Deploy Posts API
          service_path: services/posts-api
          stage_name: ${CIRCLE_BRANCH}
          context: Development
          filters:
            branches:
              ignore: master
      - deploy-service:
          name: Deploy Cron Job
          service_path: services/cron-job
          stage_name: ${CIRCLE_BRANCH}
          context: Development
          filters:
            branches:
              ignore: master
      # master branch deploys to the 'prod' stage
      - deploy-service:
          name: Deploy Users API
          service_path: services/users-api
          stage_name: prod
          context: Production
          filters:
            branches:
              only: master
      - deploy-service:
          name: Deploy Posts API
          service_path: services/posts-api
          stage_name: prod
          context: Production
          filters:
            branches:
              only: master
      - deploy-service:
          name: Deploy Cron Job
          service_path: services/cron-job
          stage_name: prod
          context: Production
          filters:
            branches:
              only: master
```

Let's go over the changes we've made:

- We added a **Check Pull Request** step.
  We check the built-in environment variable **$CIRCLE_PULL_REQUEST** to decide if the current branch belongs to a pull request. If it does, then we set an environment variable called **$PR_NUMBER** with the pull request id.
- We also added a **Merge Pull Request** step, where we checkout the merged version of the code from GitHub.
- We then changed the **Deploy application** step to deploy to a stage named **pr#** when deploying a pull request.

Commit these changes to `.circleci/config.yml`. Next, go to GitHub and create a pull request to the master branch. This will trigger Circle to deploy the merged code with stage name **pr1** to your development account.

### 4. How to clean up unused branches and closed pull requests

After a feature branch is deleted, or a pull request closed, you want to clean up the resources in your AWS account. Circle does not have triggers for these GitHub events. If you have the credentials for your AWS account, you could remove the resources by going into each service directory on your local machine and running `serverless remove`. But this is both cumbersome and not ideal.

To get around this limitation, we'll use a little trick. We'll tell Circle to remove a stage (instead of deploying to one) if it sees a Git tag called `rm-stage-STAGE_NAME`.

To do this, go to your GitHub repo and open the `.circleci/config.yml` that we created above. Replace it with the following:

```
version: 2.1

jobs:
  deploy-service:
    docker:
      - image: circleci/node:8.10
    parameters:
      service_path:
        type: string
      stage_name:
        type: string
    steps:
      - checkout
      - run:
          name: Check Pull Request
          command: |
            if [[ ! -z "$CIRCLE_PULL_REQUEST" ]]; then
              # parse pr# from URL https://github.com/fwang/sls-monorepo-with-circleci/pull/1
              PR_NUMBER=${CIRCLE_PULL_REQUEST##*/}
              echo "export PR_NUMBER=$PR_NUMBER" >> $BASH_ENV
              echo "Pull request #$PR_NUMBER"
            fi
      - run:
          name: Merge Pull Request
          command: |
            if [[ ! -z "$PR_NUMBER" ]]; then
              git fetch origin +refs/pull/$PR_NUMBER/merge
              git checkout -qf FETCH_HEAD
            fi
      - restore_cache:
          keys:
            - dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}
            - dependencies-cache
      - run:
          name: Install Serverless CLI
          command: sudo npm i -g serverless
      - run:
          name: Install dependencies
          command: |
            npm install
            cd << parameters.service_path >>
            npm install
      - run:
          name: Deploy application
          command: |
            cd << parameters.service_path >>
            if [[ ! -z "$PR_NUMBER" ]]; then
              serverless deploy -s pr$PR_NUMBER
            else
              serverless deploy -s << parameters.stage_name >>
            fi
      - save_cache:
          paths:
            - node_modules
            - << parameters.service_path >>/node_modules
          key: dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}

  remove-service:
    docker:
      - image: circleci/node:8.10
    parameters:
      service_path:
        type: string
      stage_name:
        type: string
    steps:
      - checkout
      - restore_cache:
          keys:
            - dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}
            - dependencies-cache
      - run:
          name: Install Serverless CLI
          command: sudo npm i -g serverless
      - run:
          name: Install dependencies
          command: |
            npm install
            cd << parameters.service_path >>
            npm install
      - run:
          name: Remove application
          command: |
            cd << parameters.service_path >>
            # stage name is parsed from a tag like rm-stage-pr1
            serverless remove -s << parameters.stage_name >>
      - save_cache:
          paths:
            - node_modules
            - << parameters.service_path >>/node_modules
          key: dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}

workflows:
  build-deploy:
    jobs:
      # non-master branches deploy to the stage named by the branch
      - deploy-service:
          name: Deploy Users API
          service_path: services/users-api
          stage_name: ${CIRCLE_BRANCH}
          context: Development
          filters:
            branches:
              ignore: master
      - deploy-service:
          name: Deploy Posts API
          service_path: services/posts-api
          stage_name: ${CIRCLE_BRANCH}
          context: Development
          filters:
            branches:
              ignore: master
      - deploy-service:
          name: Deploy Cron Job
          service_path: services/cron-job
          stage_name: ${CIRCLE_BRANCH}
          context: Development
          filters:
            branches:
              ignore: master
      # master branch deploys to the 'prod' stage
      - deploy-service:
          name: Deploy Users API
          service_path: services/users-api
          stage_name: prod
          context: Production
          filters:
            branches:
              only: master
      - deploy-service:
          name: Deploy Posts API
          service_path: services/posts-api
          stage_name: prod
          context: Production
          filters:
            branches:
              only: master
      - deploy-service:
          name: Deploy Cron Job
          service_path: services/cron-job
          stage_name: prod
          context: Production
          filters:
            branches:
              only: master
      # remove non-production stages
      - remove-service:
          name: Remove Users API
          service_path: services/users-api
          stage_name: ${CIRCLE_TAG:9}
          context: Development
          filters:
            tags:
              only: /^rm-stage-.*/
            branches:
              ignore: /.*/
      - remove-service:
          name: Remove Posts API
          service_path: services/posts-api
          stage_name: ${CIRCLE_TAG:9}
          context: Development
          filters:
            tags:
              only: /^rm-stage-.*/
            branches:
              ignore: /.*/
      - remove-service:
          name: Remove Cron Job
          service_path: services/cron-job
          stage_name: ${CIRCLE_TAG:9}
          context: Development
          filters:
            tags:
              only: /^rm-stage-.*/
            branches:
              ignore: /.*/
```

Let's look at what we changed here:

- We created a new job called **remove-service**. It's similar to our **deploy-service** job, except that it runs `serverless remove`. It assumes the name of the tag is of the format `rm-stage-STAGE_NAME`, and it parses out the stage name after the 2nd hyphen.
- We also set the workflow to run the **remove-service** job for each of our services.

To test this, try tagging the pull request branch before you close the PR:

```
$ git tag rm-stage-pr1
$ git push --tags
```

You'll notice that the **pr1** stage will be removed from your development account.

And that's it! Let's wrap things up next.
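The two bits of shell parameter expansion used in the configs, `${CIRCLE_PULL_REQUEST##*/}` and `${CIRCLE_TAG:9}`, both just slice a stage name out of a longer string. As a quick illustration of that same logic (my own sketch, not part of the pipeline):

```python
def pr_stage(pull_request_url):
    """Stage for a PR build: 'pr' plus everything after the last '/',
    mirroring ${CIRCLE_PULL_REQUEST##*/}."""
    return "pr" + pull_request_url.rsplit("/", 1)[-1]

def tag_stage(tag):
    """Stage for a removal tag: drop the 9-character 'rm-stage-' prefix,
    mirroring ${CIRCLE_TAG:9}."""
    return tag[len("rm-stage-"):]
```

So a PR at `https://github.com/fwang/sls-monorepo-with-circleci/pull/1` deploys the stage `pr1`, and pushing the tag `rm-stage-pr1` removes that same stage.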
#### Next steps

It took us a few steps, but we now have a fully-functional CI/CD pipeline for our monorepo Serverless app. It supports a PR based workflow and even cleans up once a PR is merged. The repo used in this guide is available here with the complete CircleCI configs.

Some helpful next steps would be to auto-create custom domains for your API endpoints, send out Slack or email notifications, and generate CloudFormation change sets with a manual confirmation step when pushing to production. You also might want to design your workflow to accommodate any dependencies your services might have. These are cases where the output from one service is used in another.

Finally, if you are not familiar with Seed, it's worth noting that it'll do all of the above for you out of the box, and you don't need to write up a build spec or do any scripting.

Do your Serverless deployments take too long? Incremental deploys in Seed can speed them up 100x! Learn More
- downloaded: true, meta_extracted: true, parsed: true
- description: In this post we'll look at how to configure a CI/CD pipeline for a Serverless app with CircleCI. We'll be using a real-world monorepo Serverless app that'll be deployed to separate development and production AWS accounts. We'll set up a PR based Git workflow and remove our services once the PRs are merged.
- filedate: 2024-10-12 00:00:00, date: 2019-07-09 00:00:00
- image: https://seed.run/assets/…-cicd-circle.png
- pagetype: article, hostname: seed.run, sitename: Frank
- tags: null, categories: null
- id: 8,733,058
- url: http://www.jetcafe.org/~jim/lambda.html
- downloaded: false, meta_extracted: false, parsed: false
- all other fields: null
- id: 14,856,400
- url: https://github.com/jarun/nnn/releases/tag/v1.3
- title: Release nnn v1.3 · jarun/nnn
- author: Jarun
# nnn v1.3

## What's in?

- Show directories in custom color (default: enabled in blue)
- Option `-e` to use exiftool instead of mediainfo
- Fixed #34: nftw(3) broken with too many open descriptors
- More concise help screen
- downloaded: true, meta_extracted: true, parsed: true
- description: What's in? Show directories in custom color (default: enabled in blue) Option -e to use exiftool instead of mediainfo Fixed #34: nftw(3) broken with too many open descriptors More concise help screen
- filedate: 2024-10-12 00:00:00, date: 2017-07-26 00:00:00
- image: https://opengraph.githubassets.com/b46d275e8eda237c9d2756e250e8b27d89edd8703159061e2dc503d0770ae646/jarun/nnn/releases/tag/v1.3
- pagetype: object, hostname: null, sitename: GitHub
- tags: null, categories: null
- id: 4,169,356
- url: http://blog.asmartbear.com/color-wheels.html
- title: Color Wheels are wrong? How color vision actually works
- author: Jason Cohen
# Color Wheels are wrong? How color vision actually works

## Why are artists special?

Ask any artist to explain how color works, and they'll launch into a treatise about how the **Three Primary Colors**—red, blue, and yellow—form a color wheel. Why "wheel?" All other colors are created by mixing these three colors in various proportions, they'll explain. In particular, mixing equal quantities of each pair of Primary Colors produces the **Secondary Colors** (orange, green, and purple).

Continuing this process produces the full **color wheel** you might have learned in school; a pretty, symmetrical, satisfying device in which each hue melds seamlessly and linearly into the next.

## Unfortunately, this crumbles under even minor scrutiny

For example, open up your desktop printer and you'll see something quite different: three colors of ink which, when combined, produce all others: cyan, magenta, and yellow. (Black is included as a money-saver—black is the cheapest and most common color; it's cheaper to have a black cartridge than to dump ink from the other three.)

**But wait!** I thought the "Primary" colors were red, blue, and yellow, not cyan (bluish-green), magenta (bluish-red), and yellow. So one primary is the same (yellow) but the other two are different… yet these still generate color wheels containing all the other colors. **So what does the "Primary" designation really mean?**

Also it's not as simple as saying "*any* three colors can produce all the others" because that's clearly not true (by experiment). And it's not as simple as saying "any three colors will do, they just have to be equally spaced around the color wheel," because yellow is common to both the painter's and printer's wheel, yet the other two primaries differ completely (red and blue are primary in the painter's wheel but secondary in the printer's wheel).

TVs and computers are different yet again. If you stand close to a CRT (non-flat-screen), you can see that every pixel (or "dot") is really three tightly-packed colored phosphors: red, green, and blue. If you've done computer graphics you've been forced to name colors using these "RGB color values;" true geeks automatically think "yellow" when they see `#FFFF00`. (If it's intuitive to you that `#A33F17` is burnt orange, it's time for you to leave the monastery.)

This leads to **yet another system of three "Primary" colors** generating all the others, and yet another color wheel. This one is a little easier to explain—ink and paint are "subtractive" (adding cyan, magenta, and yellow yields black) whereas colored light is "additive" (adding red, green, and blue yields white).

Still, we have yet another color wheel in which two (but not all three!) "primaries" match those of the artist's wheel and none match the printer's wheel. This isn't adding up. **Let's turn to science.**

## Physics makes it worse

Physics is clear and certain. Light is a wave of electromagnetic energy (and/or a particle, but for today it's just a wave OK?) and, like a vibrating guitar string, light waves wiggle at certain frequencies. Some of those frequencies we detect with our eyes, and the frequency determines the color.

Now we're getting somewhere! Or are we? First off, we've suddenly lost the notion of a "wheel." As much as the previous color systems have contradicted each other, at least they all agreed that hues transform smoothly and continuously, one to the next, a beautiful symmetry with neither beginning nor end. But here we have a clear beginning (red) and end (violet). The colors in-between are continuous—and seem to generally match the order seen in the various color wheels—but then it just terminates with violet. How does it get back to red?

What about that **fuchsia / magenta / purplish-reddish color** which is **clearly present in every color wheel but missing from the physical spectrum?** How can a color be *missing*?
Where does it come from? But wait, we’re not done being confused. ## And another thing: Opposites Every seven-year-old kid in America is taught that “the opposite of red is green” and “the opposite of blue is yellow.” But what does that mean exactly? After all, there’s nothing in that linear physical light spectrum to indicate that any color is “the opposite” of any other, particularly not those two pairs. And the color wheels aren’t much help either; trying to match the “opposites” on the painter’s wheel yields an unsatisfying asymmetry where two of the primaries are opposite, and the third is opposite from a secondary: But “opposites” are real. In the early 1800s Goethe (yes, *the* Goethe) noticed that red/green and blue/yellow were never perceived together, in the sense that no color could be described as a combination of those pairs. No color could be described as “reddish green.” If you are asked to imagine “a green with a bit of red,” nothing comes to mind. In the following 150 years, various experiments tested this idea, all of which validated his observation. There’s something to this. Something neither the wheels nor the spectrum can explain. It’s time to get down to the real source of color: The ridiculous complexity of human beings. ## The answer: Physiology (of course) *Caveat Emptor: The following is a gross and irresponsible over-simplification of what actually happens. But it’s correct in its general thrust, and few people on Earth (myself excluded) are qualified to explain with complete accuracy, so in the interest of general illumination, no pun intended, OK maybe intended just a little bit, I’m doing it anyway.* Of course it starts in the eye, where three types of cells called “cones” measure the amount of red, green, and blue light hitting the retina. “Ah *ha*,” I can hear you CSS freaks scream, “it’s RGB after all! I was right! 
All that time spent—nay *invested*—in remembering that `#001067` is the default title-bar color in Windows 95 was well worth it!” Hold on there, cowboy.

Actually, “amount of red, green, and blue” is a gross simplification (as warned!). Peeking under the hood (just a tad), the three types of cones are in fact denoted S, M, and L for “short, medium, and long” wavelengths, and each responds at different levels in a range of wavelengths: But I digress, and besides I did promise to be all gross and irresponsible, so let’s go back to that.

So there are R, G, and B cones. The signals from these cones don’t go straight to the brain; they first pass through a pre-processing filter, and **it’s this filter that explains all the mysteries.** Actually there are three filters.

**Filter #1** works like this: Explanation: The more R there is, the more positive the signal; the more G, the more negative the signal. If there’s relatively *equal amounts* of R and G—whether from none of both, a little of both, or a lot of both—the signal is zero.

This explains why there’s no “greenish-red.” Because: Let’s say R and G can go between 0 and 100 units of intensity. Consider the case of “full red with a little green,” where R=100 (full intensity) and G=25 (one-quarter intensity). Then separately consider the case of “strong red with no green,” where R=75 and G=0. In both cases, Filter #1 computes the same output signal: 75. But remember the brain doesn’t get the raw R and G signals—it only gets the filter’s output—so *the brain cannot tell the difference* between these two scenarios. So there’s no such thing as “red with a little green”—there’s just a less intense red. The brain physically cannot see “greenish-red” because the filter removes that polarity.

Knowing that blue/yellow is the other opposite pair, you can probably guess what **Filter #2** is: Here blue (B) is opposed with a combination of both the R and G channels.
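The arithmetic of Filter #1 can be written down in a couple of lines. This is only a sketch of the toy model described above (the function name is mine, and real opponent processing is far messier):

```python
def filter1(r, g):
    """Red/green opponent channel from the toy model above:
    positive for red, negative for green, zero when they balance."""
    return r - g

# "Full red with a little green" versus "strong red with no green":
# the filter output is identical, so the brain cannot tell them apart.
print(filter1(100, 25))  # 75
print(filter1(75, 0))    # 75
```

Both inputs collapse to the same signal, which is the whole point: the polarity "greenish-red" is erased before the brain ever sees it.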
The R and G cones are stimulated either when there’s literally both red and green light (like when a CSS coder turns on both red and green to create yellow), or when 570nm light (yellow, on the visible spectrum) stimulates both R and G cones.

**Filter #3** is simple: In short, it measures the quantity of light without regard to hue. This is “how bright,” or “luminance” in color-theory parlance.

**And magenta?** It comes from full R and B with no G, activating Filter #1 full-positive, Filter #2 at zero. It’s not a physical wavelength of color, it’s just a combination of outputs of two filters.

## The perceptual color wheel

To do this “wheel” thing properly, you should represent the red/green and blue/yellow opposites. It’s not at all difficult, so it amazes me how rarely it’s seen or taught: Four primary colors? Yes, why not? It’s the closest thing to the physiology without getting complex.

**Why is it necessarily a “wheel?”** As you trace the (real, physical, see: rainbows) visible light spectrum, filter 1 starts full positive, then goes smoothly through zero and then negative, then back towards one. On the diagram just above, that’s the values of the x coordinate of a circle as you trace an angle counter-clockwise starting from pointing rightward along the x axis. So, like cosine, the first filter creates that plot. Filter 2 does exactly the same, but produces the y coordinate of the circle, like sine: it starts at zero, then moves towards one, then back through zero and negative, ending near where it started.

So the color wheel is a simplified, idealized way of plotting filters 1 and 2 through the natural spectrum, and the math of the biological filters naturally plots a circle. Of course the real shape isn’t a perfect circle, nor are colors evenly distributed around it, but the general idea is both directionally correct and useful.
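Treating Filter #1 as the cosine-like coordinate and Filter #2 as the sine-like one, a hue angle falls straight out of `atan2`. This is only a sketch of the idealized model just described; the channel weightings below are illustrative, not physiological:

```python
import math

def filter1(r, g, b):
    # Red/green opponent channel: the cosine-like coordinate of the wheel.
    return r - g

def filter2(r, g, b):
    # Blue/yellow opponent channel: blue versus red-plus-green,
    # the sine-like coordinate (toy weighting, not real cone math).
    return b - (r + g)

def hue_angle(r, g, b):
    """Angle around the idealized perceptual wheel, recovered from the
    two opponent channels exactly as an angle is recovered from
    cosine and sine."""
    return math.degrees(math.atan2(filter2(r, g, b), filter1(r, g, b)))

# Magenta (full R and B, no G): Filter #1 full positive, Filter #2 zero,
# so it gets a spot on the wheel even though no single wavelength makes it.
print(hue_angle(100, 0, 100))  # 0.0
print(hue_angle(0, 0, 100))    # 90.0  (blue, a quarter-turn away)
```

The wheel is just the polar plot of these two filter outputs, which is why every color system keeps rediscovering it.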
The CIE color space is closest to perceptual reality:

## Bonus Brain Bender: The context / color connection

This is just the beginning of color theory. To give you a glimpse at how complex it gets, consider this: When a color is juxtaposed to other colors, we perceive it as a different color. For example, most people will say the small square on the left is brown, whereas the one on the right is orange: Actually, the squares are *exactly the same color!*

The surrounding context dictates the *perceived* color, *on top of* all that wavelength-physiology we just did. This makes sense because the brain projects abstract things it knows about the natural world onto your perception of color. For example, we know intuitively that shadows artificially darken colors, so our brains automatically account for this in our perception of those colors. (It’s called “color constancy.”) For example, you know that the dark and light colors on this hot air balloon are “the same:”

But it also results in optical illusions so powerful that even when you know the trick you still can’t see it correctly. Like this: Which square is darker: A or B? In fact A and B are the same color (`#787878`), but you can’t see it even when you know this. To prove it to myself I had to open this picture in an image editor and actually move one square over another to see it was the same. Freaky.

### Further Reading

You got this far? You still care? Sheesh, you’re as weird as me. If you really want to lose a few days of your life, this is an amazing, in-depth treatise on color theory. That link is just page 1 of 8. Good luck.

*A Smart Bear* https://longform.asmartbear.com/color-wheels/ © 2007-2024 Jason Cohen @asmartbear
Artists say all colors are a mixture of red, yellow, and blue. But physics and TV screens and printers disagree. How does color really work?
2024-10-12 00:00:00
2011-01-31 00:00:00
8,144,296
http://www.pulsepoint.org/2014/08/06/pulsepoint-app-now-available-to-los-angeles-county/
13,287,748
https://github.com/SethMichaelLarson/python_project_template
GitHub - sethmlarson/secure-python-package-template: Template for a Python package with a secure project host and package repository configuration.
Sethmlarson
Template for a Python package with a secure project host and package repository configuration.

The goals of this project are to:

- Show how to configure a Python package hosted on GitHub with:
  - Operational security best-practices
  - Automated publishing to PyPI
  - Code quality and vulnerability scanning
  - Build reproducibility
  - Releases with provenance attestation
- Obtain a perfect rating from OpenSSF Scorecard
- SLSA Level 3 using GitHub OIDC

> **Info:** Commit and tag signing is a practice that's recommended to avoid commit author spoofing but isn't strictly required for a secure project configuration. If you'd like to skip this step, you can jump ahead to creating a GitHub repository.

Git needs to be configured to be able to sign commits and tags. Git uses GPG for signing, so you need to create a GPG key if you don't have one already. Make sure you use an email address associated with your GitHub account as the email address for the key. If you wish to keep your email address private you should use GitHub's provided `noreply` email address.

`gpg --full-generate-key`

After you've generated a GPG key you need to add the GPG key to your GitHub account. Then locally you can configure git to use your signing key:

`git config --global --unset gpg.format`

List GPG secret keys; in this example the key ID is '3AA5C34371567BD2':

```
$ gpg --list-secret-keys --keyid-format=long
/Users/hubot/.gnupg/secring.gpg
------------------------------------
sec   4096R/3AA5C34371567BD2 2016-03-10 [expires: 2017-03-10]
uid                          Hubot <[email protected]>
ssb   4096R/4BB6D45482678BE3 2016-03-10
```

Tell git about your signing key:

`git config --global user.signingkey 3AA5C34371567BD2`

Then tell git to auto-sign commits and tags:

```
git config --global commit.gpgsign true
git config --global tag.gpgSign true
```

Now all commits and tags you create from this git instance will be signed and show up as "verified" on GitHub.
Clone this repository locally:

`git clone ssh://[email protected]/sethmlarson/secure-python-package-template`

Rename the folder to the name of the package and remove the existing git repository:

```
mv secure-python-package-template package-name
cd package-name
rm -rf .git
```

Create a new git repository and ensure the branch name is `main`:

```
$ git init
Initialized empty Git repository in .../package-name/.git/
$ git status
On branch main
No commits yet
...
```

If the branch isn't named `main` you can rename the branch:

`git branch -m master main`

Create an **empty** repository on GitHub. To ensure the repository is empty you shouldn't add a README file, .gitignore file, or a license yet. For the examples below the GitHub repository will be named `sethmlarson/package-name` but you should substitute that with the GitHub repository name you chose.

We need to tell our git repository about our new GitHub repository:

`git remote add origin ssh://[email protected]/sethmlarson/package-name`

Change all the names and URLs to be those of your own package. Places to update include:

- `README.md`
- `pyproject.toml` (`project.name` and `project.urls.Home`)
- `src/{{secure_package_template}}`
- `tests/test_{{secure_package_template}}.py`

You should also change the license to the one you want to use for the package. Update the value in:

- `LICENSE`
- `README.md`

Now we can create our initial commit:

```
git add .
git commit -m "Initial commit"
```

Verify that this commit is signed. If not you should configure git to auto-sign commits:

```
$ git verify-commit HEAD
gpg: Signature made Fri 15 Jul 2022 10:55:10 AM CDT
gpg:                using RSA key 9B2E1343B0B201B8883C79E3A99A0A21AD478212
gpg: Good signature from "Seth Michael Larson <[email protected]>" [ultimate]
```

Now we push our commit and branch:

```
$ git push origin main
Enumerating objects: 25, done.
Counting objects: 100% (25/25), done.
Delta compression using up to 12 threads
Compressing objects: 100% (21/21), done.
Writing objects: 100% (25/25), 17.92 KiB | 1.28 MiB/s, done.
Total 25 (delta 0), reused 0 (delta 0), pack-reused 0
To ssh://github.com/sethmlarson/package-name
 * [new branch]      main -> main
```

Success! You should now see the commit and all files on your GitHub repository.

Dependabot is a service provided by GitHub that keeps your dependencies up-to-date automatically by creating pull requests updating individual dependencies on your behalf. Unfortunately, when using Dependabot with any non-trivial number of dependencies the number of pull requests quickly becomes too much to handle, especially when you think about a single maintainer needing to manage multiple projects worth of dependency updates.

The approach taken with Dependabot in this repository is to keep the number of pull requests from Dependabot to a minimum while still maintaining a secure and maintained set of dependencies for developing and publishing packages. The policy is described below:

- Always create pull requests upgrading dependencies if the pinned version has a public vulnerability. **This is the default behavior of Dependabot and can't be disabled.**
- Create pull requests when new major versions of development dependencies are made available. This is important because usually major versions contain backwards-incompatible changes so may actually require changes on our part.
- Create pull requests when there's a new version of a dependency that carries security sensitive data like `certifi`. It's always important to have this package be up-to-date to avoid monster-in-the-middle (MITM) attacks.
- All other upgrades to dependencies need to be done manually. These are cases like bug fixes that are impacting the project or new features. The developer experience here is the same as if Dependabot wasn't automatically upgrading dependencies.

You can read the `dependabot.yml` configuration file to learn how to encode the above policy or read the Dependabot documentation on the configuration format.
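The core of that policy might be encoded along these lines. This is an illustrative sketch only, not the repository's actual `dependabot.yml` (in particular, the per-dependency `certifi` exception is simplified away here); consult the real file and the Dependabot documentation:

```yaml
# .github/dependabot.yml -- illustrative sketch only
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
    ignore:
      # Surface only new *major* versions of development dependencies.
      # Pull requests for pinned versions with public vulnerabilities are
      # always created regardless of this configuration.
      - dependency-name: "*"
        update-types:
          - "version-update:semver-minor"
          - "version-update:semver-patch"
```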
- Settings > Code security and analysis
  - Dependency graph should be enabled. This is the default for public repos.
  - Enable Dependabot security updates

Any upgrades to development dependencies to fix bugs or use new features will require a manual upgrade instead of relying on Dependabot to keep things up to date automatically. This can be done by running the following to upgrade only one package:

```
# We want to only upgrade the 'keyring' package
# so we use the --upgrade-package option.
pip-compile \
  requirements/publish.in \
  -o requirements/publish.txt \
  --no-header \
  --no-annotate \
  --generate-hashes \
  --upgrade-package=keyring
```

- CodeQL is already configured in `.github/workflows/codeql-analysis.yml`
  - Configure as desired after reading the documentation for CodeQL.
- Settings > Branches
  - Select the "Add rule" button
  - Branch name pattern should be your default branch, usually `main`
  - Enable "Require a pull request before merging"
    - Enable "Require approvals". To get a perfect score from OpenSSF scorecard metric "Branch Protection" you must set the number of required reviewers to 2 or more.
    - Enable "Dismiss stale pull request approvals when new commits are pushed"
    - Enable "Require review from Code Owners"
  - Enable "Require status checks to pass before merging"
    - Add all status checks that should be required. For this template they will be:
      - `Analyze (python)`
      - `Test (3.8)`
      - `Test (3.9)`
      - `Test (3.10)`
    - Ensure the "source" of all status checks makes sense and isn't set to "Any source". By default this should be configured properly to "GitHub Actions" for all the above status checks.
    - Enable "Require branches to be up to date before merging". **Warning: This will increase the difficulty to receive contributions from new contributors.**
  - Enable "Require signed commits".
    **Warning: This will increase the difficulty to receive contributions from new contributors.**
  - Enable "Require linear history"
  - Enable "Include administrators". This setting is more a reminder and doesn't prevent administrators from temporarily disabling this setting in order to merge a stuck PR in a pinch.
  - Ensure that "Allow force pushes" is disabled.
  - Ensure that "Allow deletions" is disabled.
  - Select the "Create" button.
- Settings > Tags > New rule
  - Use a pattern of `*` to protect all tags
  - Select "Add rule"
- Settings > Environments > New Environment
  - Name the environment: `publish`
  - Add required reviewers, should be maintainers
  - Select "Save protection rules" button
  - Select "Protected Branches" in the deployment branches dropdown
  - Select "Add secret" in the environment secrets section
  - Add the PyPI API token value under `PYPI_TOKEN`
- Settings > Code security and analysis
  - Select "Enable" for "Private vulnerability reporting". This will allow users to privately submit vulnerability reports directly to the repository.
  - Update the URL in the `SECURITY.md` file to the URL of your own repository.
- Settings > Code security and analysis
  - Select "Enable" for "Secret scanning". This will scan and report published tokens to their respective services (like AWS, GCP, GitHub Tokens, etc) so they can be revoked before they're used by a malicious party.
  - Also enable "Push Protection", which scans incoming commits for secrets before they are made publicly available. This should provide even more protection from accidentally publishing secrets to a git repository.

PyPI is increasing the minimum requirements for account security and credential management to make consuming packages on PyPI more secure. This includes eventually requiring 2FA for all users and requiring API tokens to publish packages. Instead of waiting for these best practices to become required we can opt-in to them now.
If you don't have 2FA enabled on PyPI already there's a section in the PyPI Help page about how to enable 2FA for your account. To make 2FA required for the new project:

- Open "Your projects" on PyPI
- Select "Manage" for the project
- Settings > Enable 2FA requirement for project

If your project is hosted on GitHub you can take advantage of a new PyPI feature called "Trusted Publishers". It's recommended to use a Trusted Publisher over an API key or password because it provides an additional layer of security by requiring the package to originate from a pre-configured GitHub repository, workflow, and environment. There's a short guide on how to add a Trusted Publisher to the project. Below is an example of how to map the publishing GitHub Workflow definition to the PyPI Trusted Publisher.

> **Warning:** Care should be taken that the publishing workflow can only be triggered by the GitHub accounts that you intend. Remember that git tags (without Protected Tags enabled) only require write access to the repository. This is why GitHub Environments with a set of required reviewers are highly recommended, to have an explicit list of people who are allowed to completely execute the publish job.

Configuring the Trusted Publisher requires 4 values:

- GitHub repository owner
- GitHub repository name
- GitHub workflow filename
- GitHub environment name (optional, but highly recommended!)

Using this repository (https://github.com/sethmlarson/secure-python-package-template) as an example, the values to set up a Trusted Publisher would be:

- GitHub repository owner: `sethmlarson`
- GitHub repository name: `secure-python-package-template`
- GitHub workflow filename: `publish.yml`
- GitHub environment name: `publish`

Below is the minimum configuration required from the GitHub Workflow:

```
# Filename: '.github/workflows/publish.yml'
# Note that the 'publish.yml' filename doesn't need the '.github/workflows' prefix.
jobs:
  publish:
    # ...
    permissions:
      # This permission allows for the gh-action-pypi-publish
      # step to access GitHub OpenID Connect tokens.
      id-token: write

    # This job requires the 'publish' GitHub Environment to run.
    # This value is also set in the Trusted Publisher.
    environment:
      name: "publish"

    steps:
      # - ...
      # The 'pypa/gh-action-pypi-publish' action reads OpenID Connect tokens.
      # Note that there's zero config below, it's all magically handled!
      - uses: "pypa/gh-action-pypi-publish@0bf742be3ebe032c25dd15117957dc15d0cfc38d"
```

Find the latest release that was done via the publish GitHub Environment; I used v0.1.0 for this example. Open the corresponding release page on PyPI. Select the "Download files" tab. For each `.whl` file select "view hashes" and copy the SHA256 and save the value somewhere (`de58d65d34fe9548b14b82976b033b50e55840324053b5501073cb98155fc8af`).

Clone the GitHub repository locally. Don't use an existing clone of the repository to avoid tainting the workspace:

`git clone ssh://[email protected]/sethmlarson/secure-python-package-template`

Check out the corresponding git tag:

`git checkout v0.1.0`

Run the below command and export the stored value into `SOURCE_DATE_EPOCH`:

```
$ git log -1 --pretty=%ct
1656789393
$ export SOURCE_DATE_EPOCH=1656789393
```

Install the dependencies for publishing and build the package:

```
python -m pip install -r requirements/publish.txt
python -m build
```

Compare the SHA256 hashes with the values on PyPI; they should match the SHA256 values that we saw on PyPI earlier:

```
$ sha256sum dist/*.whl
de58d65d34fe9548b14b82976b033b50e55840324053b5501073cb98155fc8af
```

CC0-1.0
Template for a Python package with a secure project host and package repository configuration. - sethmlarson/secure-python-package-template
2024-10-12 00:00:00
2016-11-21 00:00:00
6,880,517
http://www.zdnet.com/more-on-microsofts-sku-morphic-windows-vision-7000024092/
More on Microsoft's SKU-morphic Windows vision
Mary Jo Foley
# More on Microsoft's SKU-morphic Windows vision

There's skeuomorphism. And then there's SKU-morphism -- as in a few key next-generation Windows SKUs, which may morph before they debut.

As I've blogged previously, while Microsoft is moving toward a "One Windows" vision, that vision is more along the lines of one Windows core, but multiple SKUs, or versions, according to my contacts. (SKU actually stands for stock-keeping unit, for those wondering.)

This new strategy doesn't necessarily mean there will be a different SKU for every kind of Windows form factor out there. Instead, as Microsoft moves forward with its "Threshold" Windows wave, there might be just a few Windows SKUs built on top of a common Windows foundation, I'm hearing from my contacts.

It's definitely still early days for Threshold, which is supposedly slated to begin arriving around Spring 2015. Given all the management changes at Microsoft, things could change. But here's supposedly what the Softies are thinking at this point.

With Threshold, my sources say, there could be three primary SKUs: A "modern" consumer SKU; a traditional/PC SKU; and a traditional enterprise SKU.

The **modern (i.e., Metro-Style/Windows Store) consumer SKU** would be focused on WinRT apps. (WinRT, in this case, refers to the API set at the heart of Windows, not the current Windows RT operating system that runs on ARM.) It may end up targeting ARM- and Intel-based devices both. It would be updated frequently by Microsoft through the Windows Store. This SKU supposedly wouldn't be optimized to run Win32 apps. However, my contact said there's the possibility that on some PC-like form factors, there may be a "desktop" that is more easily navigable for keyboard/mouse users. This modern SKU would be the SKU for Windows Phones, ARM-based Windows tablets/PCs, phablets and other kinds of tablets.
Some PCs also may run this SKU, providing Microsoft with a more head-to-head competitor to Chromebooks, as these machines would be more secure and locked down (thanks to the way Microsoft built the WinRT/Windows Store model). The modern SKU is what has previously been rumored as a forthcoming Microsoft hybrid Windows Phone OS/Windows RT operating system.

A **more traditional consumer SKU** would be aimed at the current PC market. This SKU would include a desktop and be customized so that mouse/keyboard users will be able to continue to have some semblance of productivity and familiarity with Windows. This SKU also would be updated regularly and often through the Windows Store.

There also will likely be some kind of **traditional Enterprise SKU,** according to my contacts, that would include all the usual business bells and whistles, like support for Win32 apps via a Desktop environment, support for group policy, device management and more. This SKU would be aimed primarily at traditional PCs, tablets and other devices and also allow users to run "Modern"/Windows Store apps.

The Enterprise SKU might end up being for volume licensees only. This might be a SKU that doesn't update frequently/constantly through the Windows Store. Instead, it might be subject to IT policies/approvals, making enterprise users who don't want silent, automatic updates a lot happier. Microsoft Windows chief Terry Myerson hinted at something like this during his recent Credit Suisse tech conference appearance.

There will likely be some additional device-specific Windows "Threshold" SKUs for embedded devices and usages, such as point-of-sale terminals, kiosks, etc., given that the Embedded team is now part of Myerson's organization. But these SKUs won't be offered to consumers or business users directly.

Microsoft is attempting to straddle a fence here and continue to advance Windows as a "modern" mobile platform, while not disenfranchising their huge existing base.
The big takeaway here is there may be more concessions coming to folks who felt like Windows 8 went too far in turning Windows into a touch-first, tablet-centric operating system. To me, this is a welcome furthering of the changes that began more conservatively last year with the re-emergence of the Start button and allowance of boot to desktop by default.

**Update**: Here are a couple of related tidbits, courtesy of sources of Windows SuperSite's Paul Thurrott. Thurrott said he's hearing the revised Desktop will allow users to run multiple Metro apps on the Desktop. That'd mean windows comes back to Windows. Plus, he's hearing the Start Menu might return, too, supplementing the currently Start-Menuless Start Button -- another plus for those struggling with the current Windows 8.x navigation scheme.
Microsoft's plans for Windows SKUs is undergoing a transformation. Here's the latest on what the Softies may be planning with new Windows versions, moving forward.
2024-10-12 00:00:00
2013-12-09 00:00:00
10,758,236
http://www.solipsys.co.uk/new/GraphThreeColouring.html?HN_20151218
Graph Three Colouring
# What is Graph Three-Colouring?

Here is something you may have seen before. Take a map, any map, and colour the regions so that if two regions share a border, they must get different colours. Usually we count the exterior as a region and everything still holds. Some people use the convenience of drawing a box and saying that they don't care about anything outside it. Either is a choice, and it doesn't make much difference.

So if we have only one region then clearly we colour it with whatever colour we choose, and we only need one colour. If we have two regions then they are going to touch somewhere, so they will have to get different colours. Fair enough.

If we have a checkerboard sort of arrangement then we still only need two colours, colouring them in the usual way. The black squares only meet at a corner, and that's OK - they are not sharing a boundary, so they don't need to be different colours. We can **choose** to make them different, but that's different. If you see what I mean.

So by creating bigger and bigger checkerboards we can make maps of whatever size we like that can *still* be covered by just two colours. We say that we can create "arbitrarily large" two-colourable maps.

If all of that seems easy and obvious, here's an extra challenge. Suppose we also have to colour the outside, bounding region. The checkerboard pattern always has both colours on the squares on the boundary, so we can't use one of those two colours for the outside. That means that if we have to colour the outside as well, the idea of larger and larger checkerboards no longer lets us create arbitrarily large two-colourable maps, because the exterior has to get a third colour.

# Moving beyond the checkerboard ...

In fact, we can be more daring and more adventurous. We don't have to have anything as simple as a tiling of the plane with squares.
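The checkerboard construction comes down to one line of arithmetic: a square's colour is the parity of its coordinates. A small sketch (the function name is mine):

```python
def checkerboard_colour(row, col):
    """Colour of the square at (row, col) on an arbitrarily large board.
    Squares sharing an edge differ by 1 in exactly one coordinate, so
    the parity of row + col flips across every shared border."""
    return "black" if (row + col) % 2 == 0 else "white"

# Edge-adjacent squares always differ; corner-adjacent squares may match,
# which is fine because a shared corner is not a shared border.
assert checkerboard_colour(0, 0) != checkerboard_colour(0, 1)
assert checkerboard_colour(0, 0) == checkerboard_colour(1, 1)
```

This works for a board of any size, which is exactly why checkerboards give arbitrarily large two-colourable maps (until you insist on colouring the exterior too).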
Take any doodle made with a single pen stroke, and ending up where it started. The resulting diagram can always be coloured with just two colours. This is what we'll call The Doodle Theorem, and we'll come back to that in a little while.

But sometimes things require three colours; a simple pie in three pieces is an example. Indeed, if you decide that the exterior also requires colouring, a pie cut into *any* number of pieces greater than one requires at least three colours, and sometimes four.

But one day back in the 1800s, De Morgan wrote:

> "A student of mine [Guthrie] asked me to day to give him a reason for a fact which I did not know was a fact - and do not yet. He says that if a figure be any how divided and the compartments differently colored so that figures with any portion of common boundary line are differently colored - four colors may be wanted but not more - the following is his case in which four colors are wanted. Query cannot a necessity for five or more be invented ..."

And thus was born the Four Colour Conjecture. But I'm not going to go into that here! There are many good papers and books explaining the history, *etc.* Instead, I'm off in a different direction.

# Abstracting away from the map ...

[Figures: a map with watch-towers; the capitals marked; the capitals joined with roads; the dual graph]

Let's look at our map. We are thinking, of course, of the regions as countries, and the lines as borders between them. We then have watchtowers wherever the borders meet, and sometimes on the border as well. So we have regions, lines, and points. To use the technical terms from graph theory, we have faces (although they are still sometimes called regions), edges, and vertices. So our map is a graph that can be drawn on the plane - it is a planar graph.

Now let's do the following. Think of the regions as countries, and mark on our map each of the capitals. If two countries share a border, mark in a road joining the capitals.
Don't put any other roads, and don't have the roads cross each other. In this way we have a new diagram of vertices (the capitals) and edges (the roads). We're also going to say that if a particular border has a watchtower in the middle, or even more than one watchtower along its length, we will put a road for each section of boundary. So two capitals may have more than one road joining them.

This seems to be a useful idea, and we give it a name. Here we can see a map, then we overlay onto it the capitals and the roads joining them, and finally we erase the original map and get left with just the road network. This final diagram is called the **dual** of the original.

If we can assign colours to the capitals (the vertices) such that each road (edge) has a different colour at each end, then the countries can inherit the capital's colour, and that gives us a colouring of the original map. But in the same way, a valid colouring of the regions induces a colouring of the capitals in such a way that any two capitals (vertices) that are joined by a road (edge) must have different colours. So colouring the original map is exactly the same task as colouring the vertices in the dual.

"So what?" - you may think.

[Figures: colour the capitals; the induced colouring of the map; from Doodle to dual to colouring]

# Leaving the plane behind

Why are we doing this? What's wrong with talking about colouring maps? When we colour the regions on a map we are specifically limited to working on (or in) the two-dimensional surface. But when we are talking about graphs we are no longer limited to a surface. We can talk about graphs in abstract.

So let's return, as promised, to what we call The Doodle Theorem. We claim that a map drawn with a single pen stroke that returns to its starting point can be two-coloured. A proof goes roughly like this:

- Take a planar graph that's been drawn with a single pen stroke that returns to its starting point.
  - Such a graph is said to have an "Euler Cycle".
- At every vertex we go in, and then out, so
  - Every vertex must have an even number of edges.
- In the dual, a circuit around a vertex of the original will have an even number of edges,
  - ... because each edge in the dual crosses an edge in the original.
- Extend that, and we can show that *every* circuit has an even number of edges.
  - Note: This requires a bit of proof!
- A graph is bipartite if and only if every circuit is even.
  - This also requires a bit of proof.

None of that is obvious! That's why this has the status of a theorem, and you can read more about it here: The Doodle Theorem.

So the dual is bipartite, and that means it can be bi-coloured! So the dual can be bi-coloured, and that corresponds to a bi-colouring of the original map. And we're done - a planar graph with an Euler Cycle can be bi-coloured.

# Bi-Partite graphs

[Figure: a map and its dual]

It's pretty clear that a bi-partite graph can be two coloured, but if you're given a graph, how easy or hard is it to decide if it's bi-partite? Easy. Really easy.

To decide if a graph is bi-partite, pick one vertex, colour it red. Colour its neighbours green, colour their neighbours red, and just keep going. If the graph is bi-partite this never goes wrong. If it goes wrong, it's not bipartite. And that's it!

But this map is not bi-colourable - the dual is not bi-partite.

So given a graph it's really easy to tell if it's bi-partite or not; is the same true of tri-partite-ness? Given a graph, can we easily tell if it's tri-partite? No. What?

# What about tri-partite?

Here we are talking about colouring the vertices - we have now left behind the idea of colouring regions in maps - that's too limiting, because it's restricted to graphs/maps that can be drawn in the plane (or other two-dimensional surface).

Well, to be tri-partite is the same as being three-colourable, because a three-colouring gives us a division of the vertices into three sets where the edges are only ever between the sets, never within.
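The colour-and-propagate test described above is only a few lines of code. A sketch (mine, not from the original page), with the graph given as adjacency lists:

```python
from collections import deque

def is_bipartite(adj):
    """Greedy two-colouring by breadth-first search, exactly as described:
    colour a vertex red, its neighbours green, their neighbours red, ...
    adj: dict mapping each vertex to a list of its neighbours."""
    colour = {}
    for start in adj:                       # handle disconnected graphs
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return False            # it "went wrong": odd circuit
    return True

# A square (even circuit) is bipartite; a triangle (odd circuit) is not.
square   = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(is_bipartite(square), is_bipartite(triangle))  # True False
```

Each vertex and edge is visited a constant number of times, which is why this test is "easy, really easy": the work grows only linearly with the size of the graph.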
So to ask if a graph is tri-partite is to ask if it is three-colourable. And it turns out that this is thought to be hard.

You may notice that I say "thought to be". The thing is, no one knows if it really is hard. What currently **is** known is this:

- We don't know of any algorithm that can efficiently determine whether a graph is three-colourable.

Of course, we need to define what we mean by "efficient", but we can do that.

What's more, this is a question that is worth a million dollars. Literally. It's one of the Millennium Problems to determine whether there is, or is not, an efficient algorithm for solving the three-colouring problem.

To get a deeper understanding of this, we can look at Factoring Via Graph Three Colouring. There we can see, explicitly, that a problem thought to be hard can in fact be solved, provided we can three-colour efficiently. But here's a flavour.

# Three-colouring a graph

Given a graph, an independent set is a set of vertices that have no edges between them. We can three-colour a graph if we can find an independent set of vertices such that the remaining graph is bi-partite. Given an independent set it's trivial (as shown above) to decide whether the remaining graph forms a bipartite graph, so it comes down to finding independent sets. And given a set, it's trivial to decide whether it's independent. So we can find any possible three-colouring as follows:

- Find a subset of vertices, then
- if it's independent, and
- if the remainder is bipartite,
- you've found a three-colouring.

But if you have $N$ vertices in your graph, there are $2^N$ possible subsets, so this algorithm is terribly inefficient. The problem is, *every* algorithm we know is inefficient - there is no known algorithm that takes time polynomial in the number of vertices.
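The subset-enumeration scheme for three-colouring can be sketched directly. This is a hypothetical illustration, not code from the article: it tries every subset of vertices as one colour class, checks that it is independent, then checks whether the rest of the graph is bipartite - visibly $2^N$ work, just as the text warns.

```python
from itertools import combinations

def bipartite(vertices, edges):
    """Greedy two-colouring check on an edge-list graph."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    colour = {}
    for start in vertices:
        if start in colour:
            continue
        colour[start] = 0
        stack = [start]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in colour:
                    colour[w] = 1 - colour[v]
                    stack.append(w)
                elif colour[w] == colour[v]:
                    return False      # odd circuit found
    return True

def three_colourable(vertices, edges):
    """Try every subset as the first colour class (2^N of them)."""
    vertices = list(vertices)
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            s = set(subset)
            # independent: no edge lies inside the chosen set
            if any(u in s and v in s for u, v in edges):
                continue
            # the remainder must be two-colourable
            rest = [v for v in vertices if v not in s]
            rest_edges = [(u, v) for u, v in edges
                          if u not in s and v not in s]
            if bipartite(rest, rest_edges):
                return True
    return False

# K4 (complete graph on four vertices) needs four colours;
# a 5-cycle needs exactly three.
k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
c5_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(three_colourable(range(4), k4_edges))  # False
print(three_colourable(range(5), c5_edges))  # True
```

Each individual check here is cheap; it is the number of subsets that explodes, which is exactly the gap between "easy to verify" and "hard to find" at the heart of the million-dollar question.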
Source: https://www.ghacks.net/2023/04/03/mullvad-browser-privacy-friendly-browser-launched/
Mullvad Browser: Privacy friendly browser launched - gHacks Tech News
By Martin Brinkmann
# Mullvad Browser: Privacy friendly browser launched

Look, a new web browser that is not Chromium-based. Mullvad has just launched Mullvad Browser, a privacy-first web browser that uses the Firefox web browser as its base.

Mullvad is a Swedish company that is best known for its VPN service. It was founded in 2009 and has a strong focus on privacy.

## Mullvad Browser

Mullvad Browser has been developed in cooperation with the makers of the Tor web browser. Tor Browser is also based on Firefox ESR code, but it includes advanced security and privacy features.

One of the main ideas behind the creation of the browser was the realization that a privacy-focused VPN may not be enough if the tools used, in this case the web browser, reveal information to "big tech, authorities, and data brokers".

The web browser does lots of things differently compared to the majority of web browsers out there. One of the main differentiating factors is that Mullvad has no intention of making money with the browser. It has created the browser for users of its VPN service.

As such, it has been designed with privacy in mind. The developers have removed all telemetry from the browser, added strong anti-fingerprinting protections from the Tor project, and made private browsing mode the default browsing mode. This means that cookies, cache and the browsing history are not retained between sessions. There is also a new reset button to reset the current session and start anew.

Mullvad lists all outgoing connections that the browser makes on its own on the FAQ page of the official website:

- Browser update (Mullvad)
- Mullvad Browser Extension update (Mullvad)
- Mullvad DoH (Mullvad)
- NoScript/Ublock Origin update (Mozilla)
- Certificates & Domains update (Mozilla)
- Ublock Origin filter lists update (various lists)

The default security level of the browser is set to standard, to ensure compatibility with the majority of websites.
Users may increase security by switching to the safer or safest levels. These disable certain features that may be used for invasive purposes, but this may lead to issues on some sites.

Mullvad Browser includes several browser extensions by default, including the popular uBlock Origin content blocker and the company's own Mullvad VPN extension.

Some users may wonder why they should use Mullvad Browser if they could use Tor Browser. The main difference is that Mullvad Browser is designed to be run with Mullvad VPN, though it works with any other VPN, or with no VPN at all. Tor Browser, on the other hand, works best with a Tor connection.

**Closing Words**

Mullvad Browser follows the same release schedule as Firefox ESR and Tor Browser. It can best be described as Tor Browser without Tor.

The decision to make private browsing the default may cause issues for some users, considering that cookies are not stored across sessions. Frequent sign-ins may be necessary because of this. Other than that, it is an alternative to Tor Browser and also to the Firefox web browser. Used in combination with a VPN that does not log connections, it improves privacy even further.

**Now You**: what is your take on Mullvad Browser?

TelV said on August 2, 2023 at 4:13 pm

I uninstalled it. Got fed up with NoScript blocking embedded videos on sites together with no means of disabling it. The "Allow" menu didn't work. System requirements are minimum Windows 10 and I'm running 8.1 so that may play a role in that regard.

Similarly, to use Mullvad's own search engine you have to be logged in to your account. Having to do that every time was also a hassle.

Also, it has the same problem as Librewolf namely that the default window is so small. Having to keep resizing it every time eventually became a pain in the posterior.
To some degree it can be used full screen, but I have a custom registry setting added to the context menu to clear the clipboard when I want and that needs a space outside the confines of the browser to access. So bye-bye Mullvad browser and I’ll just keep using the VPN on Firefox. owlsaid on August 4, 2023 at 2:07 pm@TelV, I saw your series of comments. This is a reply based on that. However, this reply is “just that FAQ” and has no intention to force you. I respect your judgment and decisions. First of all, Mullvad Browser is designed to be effective when used with a “definitely reliable VPN”. Mullvad Browser is developed and supported in collaboration with the Tor (Tor Browser) project. The Mullvad Browser, which was built from the collaboration project, is designed to give top priority to “fingerprint resistance”, so it has specifications that eliminate singularities (In other words, the default specifications of Firefox ESR. Limit extensions, do not enable functions that affect fingerprints, and maintain the default preferences). In short, Personal information protection is a design concept that is solved by a VPN, not a browser. Mullvad Browser is powerless to protect your personal information unless you use a “definitely reliable VPN”. That said, if you take measures to protect personal information in Browser, it will “increase singularity” resulting in neutralize fingerprint resistance. The actual situation can be measured using the following tools. 
My Fingerprint – web-am-i-unique https://amiunique.org/fingerprint/nojs Your fingerprint | No-JavaScript fingerprinting https://noscriptfingerprint.com/result/8CceBY4MGlYpukH3 Cover Your Tracks https://coveryourtracks.eff.org/ Test Ad Block – Toolz https://d3ward.github.io/toolz/adblock.html Browserleaks – Check your browser for privacy leaks https://browserleaks.com/ Regarding extensions implemented in Mullvad Browser # uBlock Origin, To the default uBlock Origin configuration, these two lists have been added: – Adguard URL Tracking Protection (query string tracking parameter stripping) – EasyList Cookie (cookie banners removal) # NoScript, NoScript is used as the back-end of the Security Level feature and provides additional protections like Cross-Site Scripting (XSS) filtering. NoScript’s icon is hidden by default like in the Tor Browser, but can be added along other extensions from the Customize Toolbar menu. “Letterboxing” in Mullvad Browser is a fingerprint-resistant function, so it can’t be helped. Based on that assumption, users will have no choice but to get used to it (put up with it). The Mullvad Browser hard facts: list of settings and modifications https://mullvad.net/en/browser/hard-facts FAQ – Help | Mullvad VPN https://mullvad.net/en/help/faq/ Summary Since the actual usage of the user is “unique”, it may be necessary to proper use different browsers in some cases. TelVsaid on August 2, 2023 at 12:02 pmTo get rid of the giant borders on Mullvad amend the following about:config setting: – “privacy.resistFingerprinting.letterboxing” = false (default is true) Not so private anymore, but the border at the bottom is huge in full screen mode and the above setting is what causes it. 
Other about:config settings in this link: https://mullvad.net/en/browser/hard-facts TelVsaid on August 2, 2023 at 11:06 amA few odd “about:config” settings I came across which I thought were surprising considering the Mullvad browser is supposed to be privacy orientated: – “privacy.trackingprotection.enabled” is set to false. – “privacy.purge_trackers.enabled” also set to false. – “network.cookie.cookieBehavior” is set to 1 while the default on FF is 5. – “dom.event.clipboardevents.enabled” is set to true. (which allows sites to modify the clipboard). – “dom.event.contextmenu.enabled” is set to true. (which means sites can block right click). Did anyone come across any other settings which were of some concern? Other than that, the browser installs and runs on Windows 8.1 without a problem (so far). Tachysaid on April 21, 2023 at 2:25 amOk, new problem. Mulvad browser is forcing my DNS requests through thier own DNS bypassing the DNS I want to use. How can I fix this? IP Address details IP: 193.138.218.118 ISP: 31173 Services AB AirVPN Exit Node: No No ASN: 39351 Country: Sweden Sweden (SE) Region: Skåne County (M) City: Malmo Time Zone: Europe/Stockholm Latitude & Longitude: 55.6078 , 12.9982 Geolocation map (Google Map) based on IP Address Activate Accuracy Radius: 20 KM Last data update: Fri, 21 Apr 2023 00:24:49 +0000 Tachysaid on April 21, 2023 at 2:32 amMaybe I should look harder before I ask questions? LOL Had to disable “Enable DNS over HTTPS”. I don’t want this on when using a VPN because it bypasses the VPN. Still trying to get it to start maximized….. Tachysaid on April 21, 2023 at 2:09 amOk, HUGE problem. Noscript is set to not block anything at all and any settings I change are not saved. UBO is fine, I imported my settings from FFESR and they stick NoScript keeps defaulting to allowing everything. I did install the browser to my user folder to avoid write permissions issues. Tachysaid on April 21, 2023 at 2:19 amOh ehll! 
It’s been so long I forgot I had this same issue with TOR. You have to open the NoScript options and check “Override Mullvad Browser’s Security Level Preset” Then NoScript will retain all your settings. This is important because the blah blah security preset is set to allow everything. Tachysaid on April 21, 2023 at 1:44 amHow do I “force” Mulvad Broswer to start maximized? PLEASE! I do not want nor need a speech on fingerprinting. I’ve already disabled the letterboxing, and the warning about disabling letterboxing, now I just need to figure out how to make it start maximized. And why is the Comic-Sans font not available? It’s my favorite font. TelVsaid on April 4, 2023 at 3:56 pmAccording to the download link Martin posted you need Windows 10 minimum to use the Mullvad browser so it wouldn’t be any good to me since I’m using Win 8.1 However, I’ve been using their VPN for the past 6 years or so in combination with the independent Wireguard app and the two work perfectly together. But like owl, I’m also using the Floorp browser which I’m quite content with. For the very rare occasions that Floorp won’t load a site, I switch to Firefox. owlsaid on April 10, 2023 at 11:09 am@TelV, it’s been six months. I have been away from the web for a long time, forced to recuperate from a serious injury in addition to my digital detox lifestyle. Glad to see that you are loving the Floorp browser. @TelV, > need Windows 10 minimum to use the Mullvad browser so it wouldn’t be any good to me since I’m using Win 8.1 > However, I’ve been using their VPN for the past 6 years or so in combination with the independent Wireguard app and the two work perfectly together. By the way, Mullvad Browser is great too. Please see below. https://www.ghacks.net/2023/04/03/mullvad-browser-privacy-friendly-browser-launched/#comment-4563581 Well, I can’t get too involved web, since I live a digital detox lifestyle. And now that the “iPad” is my regular machine… Best regards. 
Anonymous said on April 4, 2023 at 4:04 pm

It works just fine on Windows 7/8.1 - ESR102 (and ESR115 to come) are still supported on these platforms.

owl said on April 4, 2023 at 2:11 pm

In my case, my regular browser on Windows is "Floorp", but I will try Mullvad Browser. I used Mullvad Browser for this post. If you want to add a new extension (addons.mozilla.org) to Mullvad Browser, the browser's add-on manager (about:addons) does not provide the ability to add add-ons, so you can add it using the official "Extensions" page below.
https://addons.mozilla.org/en-US/firefox/extensions/

For extensions not listed on that page, you can add the xpi file manually. How to do it: in the browser's Add-ons Manager (about:addons), click "Install Add-on From File…".

About Floorp: Floorp (made in Japan, but multilingual versions are available) is based on the latest version of Firefox ESR as its platform.
GitHub - Floorp-Projects/Floorp: The source code of version 10 or later of Floorp Browser, the most Advanced and Fastest Firefox derivative
https://github.com/Floorp-Projects/Floorp

owl said on April 5, 2023 at 4:25 am

Here is a summary and citation based on first impressions and official FAQs: Mullvad Browser (made in Sweden) was developed, through a collaboration between Mullvad VPN and the Tor Project, to minimize tracking and fingerprinting, with the latest version of Mozilla Firefox Extended Support Release (ESR) as its platform. *Intentionally, only the English (en-US) version is released. //mullvad.net/en/help/tag/mullvad-browser

And to achieve the best "privacy protection", "IP address" and "Cookies" are not the only measures; fingerprinting resistance is also an important issue. Therefore, it is recommended to use the default settings (including screen size) without adding extensions or customization.

What's the difference between the Tor Browser and the Mullvad Browser?
https://mullvad.net/en/help/tag/mullvad-browser/#94 The difference is the network used to access the internet. Tor Browser connects to the internet through the Tor Network. The Mullvad Browser is instead designed to be used with a VPN. Differences with Tor Browser https://mullvad.net/en/browser/hard-facts No Tor Network patches No multilanguage support No onboarding patches Different branding/installer metadata WebRTC is enabled Web Audio API is enabled (needed for WebRTC) uBlock Origin / Mullvad Browser Extension NoScript Cross-tab Identity Leak Protection is disabled by default Mullvad DoH A Tor Browser specific cryptocurrency targeted protection is removed No drag and drop protections (it’s a specific proxy-bypass measure) No download warning popup (the one that says that you should use Tails to open downloads) What to look for when choosing a browser https://mullvad.net/en/browser/things-to-look-for-when-choosing-a-browser Quoting the main points: Let’s start with the obvious: don’t pick a browser whose core purpose is to collect data from you. Aka: big tech browsers. some browsers are entirely designed to collect your data. “Chrome collects your IP address, the words you search for, the videos you watch, the pages you visit, the ads you click, your purchase activity, the network of people you’re in touch with, and much more. All facets of your life are scrupulously collected, analyzed and assembled into an intimate profile: a data text that aims to describe what makes you you.” And there are several other browsers today that limit things like third-party-based tracking. However, as a result of this, advertisers and others interested in capitalizing on your behavioral data have invested in other tactics for tracking users around the web. In other words: it’s become more important for them to use browser fingerprinting. 
Especially now, with browsers blocking third-party resources and cookies are under legal attack, advertisers and other data gatherers are looking for other solutions. But the irony is that your attempts to block trackers could be the one thing to make you uniquely identifiable. The more protection you use, the higher the risk that you will be exposed with a unique browser fingerprint. That’s why the Mullvad Browser only uses uBlock Origin to block third-party trackers, for instance. Browser fingerprinting – tracking behind the curtain https://mullvad.net/en/browser/browser-fingerprinting Its preface: When it comes to mass surveillance, browser fingerprinting as a means for tracking people, isn’t as straightforward as tracking via IP addresses and cookies. Your IP address has a direct link to you as a person, cookies are locally saved on your specific device; there’s no doubt whatsoever that those techniques are used to gathering information about you and to follow you all over the internet over time. This is not the case for browser fingerprinting – which creates a very different challenge. All together as one: This is how the Mullvad Browser works https://mullvad.net/en/browser/mullvad-browser Its preface: What’s important when you develop a privacy-focused browser? In our world there’s only one method to strive for, and it’s a classic: hide in the crowd. Just like the Tor Browser, the Mullvad Browser has been developed with the purpose and ambition for all its users to appear as one (if you have the same ambition: use a trustworthy VPN together with the browser). When you have that aspiration and goal, it’s critical to choose carefully. With an internet infrastructure loaded with different tracking techniques, it could be tempting to offer as many cool features as possible to stop and block them. But the irony is: your attempt to block trackers could be the one thing to reveal you. Sometimes having no specific defense is better than having one. 
By wanting to increase online privacy, you install extensions that in the end make you even more visible than before. The Mullvad Browser hard facts: list of settings and modifications https://mullvad.net/en/browser/hard-facts FAQ https://mullvad.net/en/help/tag/mullvad-browser/ For more information, please see below: https://mullvad.net/en/browser owlsaid on April 5, 2023 at 1:50 amFrom the official FAQ: Why don’t you have more features within the Mullvad Browser? https://mullvad.net/en/help/tag/mullvad-browser/#96 We focus on privacy first. Too many features could make it possible to identify you through fingerprinting. Can I install other extensions? https://mullvad.net/en/help/tag/mullvad-browser/#97 Yes, but that is something we don’t recommend. Extensions could make it possible to identify you through fingerprinting. johnsaid on April 4, 2023 at 5:25 pmHello, I’ve installed floccus from the xpi, but the icon in the toolbar is not visible, so I cannot use it. owlsaid on April 4, 2023 at 11:31 pm@john, > I’ve installed floccus from the xpi, but the icon in the toolbar is not visible, so I cannot use it. Click on >> on the right side of the toolbar to pull down the “Overflow Menu”. Click “Customize Toolbar” at the bottom of the Overflow Menu, and you will find an icon that is not displayed. Drag and paste it to the desired location, and finally click [Done] at the bottom right of the screen. That will get you the results you were hoping for. Alternative Methods: You can also access the “Customize Toolbar” by right-clicking in the free space (area without tabs or buttons) at the top of the screen. Also, if the “Menu bar” is set to On, go to “View” > “Customize Toolbar”. johnsaid on April 5, 2023 at 11:39 am@owl thank for answering but customizing the toolbar didn’t work. I was told to go into about:addons and configure the extension to run in private windows. this fixed the issue. 
Gerardsaid on April 4, 2023 at 12:51 pm“It can best be described as Tor Browser without Tor. ” “NoScript/Ublock Origin update (Mozilla)” So Mullvad Browser is essentialy a tweaked Firefox + NoScript + Ublock Origin? In that case, why not simply use a tweaked Firefox + NoScript + Ublock Origin instead? Just wondering. owlsaid on April 5, 2023 at 1:35 amWhat’s the difference between the Tor Browser and the Mullvad Browser? https://mullvad.net/en/help/tag/mullvad-browser/#94 The difference is the network used to access the internet. Tor Browser connects to the internet through the Tor Network. The Mullvad Browser is instead designed to be used with a VPN. owlsaid on April 5, 2023 at 1:47 amDifferences with Tor Browser https://mullvad.net/en/browser/hard-facts No Tor Network patches No multilanguage support No onboarding patches Different branding/installer metadata WebRTC is enabled Web Audio API is enabled (needed for WebRTC) uBlock Origin / Mullvad Browser Extension NoScript Cross-tab Identity Leak Protection is disabled by default Mullvad DoH A Tor Browser specific cryptocurrency targeted protection is removed No drag and drop protections (it’s a specific proxy-bypass measure) No download warning popup (the one that says that you should use Tails to open downloads) Tonysaid on April 4, 2023 at 1:00 am@Paul(us) Mulvad owned by Amazon? Please provide citation. I was able to come up with the following: “Mullvad AB is the company that owns Mullvad, which is a subsidiary of Amagicom AB. Both companies are owned by the same people, Fredrik Strömberg and Daniel Berntsson” Paul(us)said on April 4, 2023 at 12:13 amAmazon does not want to make money from the browser you write. But Amazon (because Mulvad VPN is also owned by Amazon.) will still have to give all the data to the big six. Not only the VPN data, but Amazon also uses your data from this browser to make money from you. That’s why they lure you with the free browser, which is much less than Tor. 
Karlsaid on April 6, 2023 at 11:17 pm“But Amazon (because Mulvad VPN is also owned by Amazon.)” I do not use Mullvad but I do know that they have never been owned by Amazon. Why even write such foolish comment? Davidsaid on April 4, 2023 at 1:46 pmMullvad VPN AB is owned by parent company Amagicom AB. The name Amagicom is derived from the Sumerian word ama-gi – the oldest word for “freedom” or, literally, “back to mother” in the context of slavery – and the abbreviation for communication. Amagicom stands for “free communication”. Mullvad VPN AB and its parent company Amagicom AB are 100% owned by founders Fredrik Strömberg and Daniel Berntsson who are actively involved in the company. https://mullvad.net/en/about Andy Proughsaid on April 4, 2023 at 3:14 amMullvad is owned by Amagicom AB, a company based in Sweden. Which, as far as I can tell, has no connection to Amazon. So, it appears you have no clue what you are talking about. Anonymoussaid on April 4, 2023 at 2:13 am>Mulvad VPN is also owned by Amazon Any source on this? Anonymoussaid on April 3, 2023 at 10:39 pmMcAfee Stinger gives a warning against Mullvad. PS I do not need an explanation that I do not need Stinger ! Anonymoussaid on April 4, 2023 at 8:06 amVirus Total says it’s clean https://www.virustotal.com/gui/file/4f7008f3b26d682564e798d968455d326bcfe12196ba4d402826f4d5d42a1616 Davidsaid on April 3, 2023 at 7:24 pmHow many 3rd party FF-based browsers do we really need? We already have LibreWolf, Waterfox, and Tor. I’m just not seeing what significantly distinguishes this one from any of the rest. Claymoresaid on April 4, 2023 at 9:10 amHow many 3rd party browsers with Chromium backend do we need? There was a time, forks of browsers were done for good. Nowaday almost all browsers fork Chromium. It’s good to have a variety of different backends. It’s just the question how good and long a fork is being maintained. The best fork suffers from a lack of updates and fixes. 
Andy Proughsaid on April 3, 2023 at 7:19 pm>”Look, a new web browser that is not Chromium-based. Mullvad has just launched Mullvad Browser, a privacy-first web browser that is using the Firefox web browser as its base.” Of course you would never use chromium-based browsers for any kind of real privacy or security. This is great news, for too long Tor Browser has not worked without the Tor network, so you couldn’t use it as a good privacy general-purpose browser. Looks like (hopefully) Mullvad has stepped up to handle that situation. I’m writing this from my good looking new Mullvad browser. I’ll be interested to see how it compares to Librewolf. I like the Mullvad extension button on the toolbar. It quickly tells you about your current connection, regardless of whether or not you are using the Mullvad vpn. Tom Hawacksaid on April 3, 2023 at 6:12 pm“The decision to make private browsing the default may cause issues for some users, considering that cookies are not stored across sessions. Frequent sign-ins may be necessary because of this.” Indeed, which is why I never use Firefox’s Private Browsing : I aim to remove cookies (and localStorage) I don’t need when I exit a site, that means before I exit the browser, thanks to the dedicated ‘CookieAutodelete’ extension AND keep — allow — a cookie (and/or sometimes localStorage, choosing either or both thanks again to ‘CookieAutodelete) when I decide to. Basically a natural wish, no? If I dislike the rudeness of Chrome and Edge browsers I dislike as well being force-fed to virtuous environments which almost always leads to the best of one world by blocking it from another whilst an pertinent approach IMO is to conciliate freedom and privacy. owlsaid on April 5, 2023 at 2:25 am> “The decision to make private browsing the default may cause issues for some users, Frequent sign-ins may be necessary because of this.” From the official FAQ: How do I stay logged into specific websites between sessions? 
# https://mullvad.net/en/help/tag/mullvad-browser/#105 It’s not possible. It’s an action to combat tracking. Because the browser developer’s “intentions and specifications are clear,” users can even use (tweak and customize) them at their own discretion, having first understood them. The Mullvad Browser is developed – in collaboration between Mullvad VPN and the Tor Project – to minimize tracking and fingerprinting. But if you are looking for a quick summary, this is the place. https://mullvad.net/en/download/browser/windows You can read the full story and get the bigger picture here. https://mullvad.net/en/browser The Mullvad Browser hard facts: list of settings and modifications. https://mullvad.net/en/browser/hard-facts FAQ https://mullvad.net/en/help/tag/mullvad-browser/ Tom Hawacksaid on April 5, 2023 at 6:25 pm@owl, thanks for the links and for your commitment here to describe the Multivad Browser. From those links, from other Multivad’s Browser FAQ [https://mullvad.net/en/help/tag/mullvad-browser/], two points : 1- Remains a bother for me : “How do I stay logged into specific websites between sessions? # It’s not possible. It’s an action to combat tracking.” — I don’t agree. Well configured “accepted cookies” (trans-session) are in no way a tracking issue. 2- A plus for me, given I feared that Mullvad’s VPN was tied to Mullvad’s browser : “Do I have to be a Mullvad VPN user to run the Mullvad Browser? No, you don’t have to be a Mullvad VPN user to run the Mullvad Browser. But we highly recommend that you use a trustworthy VPN in combination with the browser.” “Can I use the Mullvad Browser without a VPN? Yes, but if you don’t use a trustworthy VPN in combination with the Mullvad Browser your IP address won’t be masked. To avoid data collectors and mass monitors to identify you thanks to your IP address (and hide your traffic from your ISP) – use a trustworthy VPN together with the Mullvad Browser.” — That’s a good point. 
Not that I’d be suspicious about their VPN but rather that I’m not fond of VPNs in general. Also, the price is affordable : 5$/month From there on I might give Mullvad browser a try. But I ignore if it handles two Mozilla features I use extensively : 1- AutoConfig [https://support.mozilla.org/en-US/kb/customizing-firefox-using-autoconfig] 2- Group Policy (Windows) [https://support.mozilla.org/en-US/kb/customizing-firefox-using-group-policy-windows] — Both are invaluable in my use of Firefox. owlsaid on April 6, 2023 at 3:33 am@ Tom Hawack, > 1- Remains a bother for me : 2- A plus for me, given I feared that Mullvad’s VPN was tied to Mullvad’s browser : From there on, I might give Mullvad browser a try. But I ignore if it handles two Mozilla features I use extensively : 1- AutoConfig [https://support.mozilla.org/en-US/kb/customizing-firefox-using-autoconfig] 2- Group Policy (Windows) [https://support.mozilla.org/en-US/kb/customizing-firefox-using-group-policy-windows] — Both are invaluable in my use of Firefox. In my opinion, and from what I understand from perusing the official information, the Mullvad Browser project is being developed with the mission of “focusing on the importance of privacy protection, educating people about it, and providing an easy-to-use browser for anyone who wishes to use it”. In short, the browser released by the project will be specialized to “specifications for all”, so expert users should, can tweak it at their own discretion without being attached to “defaults”. I will continue to use the “default specification of Mullvad Browser” for testing purposes, but will use other browsers (Floorp, Firefox ESR, Tor Browser, Brave) if necessary. Because I have my own settings for them (Group Policy in Firefox ESR), and am happy with the status quo. I also have measures in place at the system level (simplewall, AdGuard, AdGuardVPN, WPD, KeePass Password Safe 2) to complement browsers. 
However, my family lives an "at home, digital detox lifestyle", so there are few opportunities for practical use of digital devices and the practical device is an iPad… https://www.ghacks.net/2023/04/01/ios-privacy-settings-protecting-your-personal-information/#comment-4562999

Tom Hawack said on April 5, 2023 at 6:47 pm

I just discovered at [https://mullvad.net/en/download/browser/windows]

Mullvad Browser for Windows
Latest version: 12.0.4
Works on Windows 10 or later (64 bit only)

Windows 7 here. Testing Mullvad Browser will be for another day.

owl said on April 10, 2023 at 10:50 am

@Tom Hawack,
> Windows 7 here. Testing Mullvad Browser will be for another day.

The official system requirements for the Windows version explicitly state Windows 10 or later (64 bit only). Even though they state so, I do not think there is any problem with Windows 7 or later (32-bit and 64-bit). Perhaps, in order to use "Win 7 x86", for example, which Microsoft has declared to be no longer supported, in a development and test environment, a (costly) support contract with Microsoft is required; therefore, it is likely that the developer side can only check the operation of "Win 10 x64" or later.

The rationale:

#1. Mullvad Browser uses the latest Mozilla Firefox Extended Support Release (ESR) as its platform and follows that browser's "System Requirements for Firefox ESR". https://www.mozilla.org/en-US/firefox/102.9.0/system-requirements/

The current "esr102.x" supports Windows Operating Systems (32-bit and 64-bit), Windows 7 or later. The next milestone version "esr115.x" is similar to the current version and appears to have no change in system requirements.

#2. Even if you use "Mullvad VPN" on the network you use to access the Internet, the protocol uses "WireGuard and OpenVPN".

@TelV:
https://www.ghacks.net/2023/04/03/mullvad-browser-privacy-friendly-browser-launched/#comment-4563194
@Keith?
https://www.ghacks.net/2023/04/03/the-mullvad-browser-a-privacy-focused-browser-designed-to-reduce-your-fingerprint/#comment-4563361
The comments seem to indicate that there is no problem with “Windows 7 or later”.
Notes: Source from the official FAQ: https://mullvad.net/en/help/tag/mullvad-browser/
Mullvad Browser is designed with the highest priority on preventing “browser fingerprinting”, so IP addresses are not masked. To avoid data collectors and mass surveillance identifying you by your IP address (and to hide your traffic from your ISP), use a trustworthy VPN together with the Mullvad Browser.
Below are some impressions from my trial. I hope they will be helpful to you. Following the official Mullvad Browser information, I left everything in Settings at its default, including themes, fonts, and screen size; no extensions have been added, and no customizations have been made.
- The browser is immediately (re)activated!
- All data is instantly erased (by restarting the browser). Pressing the New Identity button on the toolbar instantly restarts the browser, which is extremely easy to use.
- The bookmark and search functions were inconvenient by default, so I placed bookmark and search icons on the toolbar.
For many years I always used a customized browser. With the defaults, I have to put up with giving up some “personal preferences”, and I also thought that browser extensions (Tree Style Tab, ClearURLs, Cookie AutoDelete, Dark Background and Light Text, Feedbro, etc.) were absolutely necessary… Well, whatever it was, I just kept on using it, and after a week or so the discomfort was gone. I will continue to use “Mullvad Browser” as my default. For VPN, I use AdGuard VPN (I build a personal VPN and connect to AdGuard VPN over IPsec on demand).
As a comprehensive measure, “simplewall, AdGuard, AdGuardVPN, WPD, and KeePass Password Safe 2” are applied at the system level to complement the browser. With AdGuard, I initially used the “free” version. I was satisfied, and liked it so much that I got a permanent license during a special 80% off sale (twice a year, on Black Friday and Easter). I was also able to protect my iPad, which I use regularly.

TelV said on April 4, 2023 at 3:33 pm
@Tom Hawack,
You don’t need Cookie AutoDelete to do the things you describe. All you need to do is to right-click a site you don’t wish to retain data for and click “Forget about this site”. I use that option all the time myself and it’s never failed me yet.

Tom Hawack said on April 4, 2023 at 6:08 pm
@TelV, ‘Cookie AutoDelete’ does automatically what ‘Forget about this site’ does on user demand. I won’t “forget about a site” manually every time I’m about to quit a site. The idea is that either you keep a site’s cookie(s) or you don’t. If you don’t, there’s no reason to have it (them) hang along through your whole surfing session (should they even be deleted at Firefox exit). Most sites install either cookies or localStorage or both in the visitor’s profile, WHICH IS ABSOLUTELY NOT FOR THE USER’S BENEFIT, so having that unasked-for trash removed systematically when exiting a site is what ‘Cookie AutoDelete’ is meant for. Personally I can’t stand having a sticker pasted to my back on practically all sites. Rather than blocking cookies, we let those clowns stick all their trash as they like it and then get their s**t removed as soon as we exit their place.

Andy Prough said on April 3, 2023 at 7:35 pm
Just uncheck the “Always use private browsing mode” setting and add Cookie AutoDelete and you are all set. That’s what I did.
Tom Hawack said on April 3, 2023 at 9:11 pm
That’s what I do; that’s what I meant when mentioning that I never use Private Browsing but instead the ‘Cookie AutoDelete’ approach: removing cookies (and more) when exiting a site rather than when exiting Firefox itself. The great thing with ‘Cookie AutoDelete’ is that you can even fine-tune cookies, localStorage, IndexedDB, Plugin Data, and Service Workers; you can, e.g., keep what is necessary for a site login and remove the extra, non-necessary crap/tracking cookie(s). This extension, in my experience, is maybe #2, just after uBO of course.

Andy Prough said on April 3, 2023 at 10:13 pm
>”The great thing also with ‘Cookie Autodelete’ is that you can even fine tune cookies/localStorage/IndexedDB, Plugin Data, Service Workers, you can i.e. keep what is necessary for a site login and remove the extra non-necessary craps/tracking cookie(s).”
That’s interesting, and you can do it site-by-site. I was not aware of this, very cool, thanks Tom.
Source: Ghacks Technology News (ghacks.net), published 2023-04-03. Summary: Mullvad has just launched Mullvad Browser, a privacy-first web browser that is using the Firefox web browser as its base.
https://drewdevault.com/2018/06/01/How-I-maintain-FOSS-projects.html
How I maintain FOSS projects June 1, 2018 on Drew DeVault's blog
Drew DeVault
Today’s is another blog post which has been on my to-write list for a while. I have hesitated a bit to write about this, because I’m certain that my approach isn’t perfect. I think it’s pretty good, though, and people who work with me in FOSS agreed after a quick survey. So! Let’s at least put it out there and discuss it. There are a few central principles I use to guide my maintainership work:

- Everyone is a volunteer and should be treated as such.
- One patch is worth a thousand bug reports.
- Empower people to do what they enjoy and are good at.

The first point is very important. My open source projects are not the work of a profitable organization which publishes open source software as a means of giving back. Each of these projects is built and maintained entirely by volunteers. Acknowledging this is important for keeping people interested in working on the project - you can never expect someone to volunteer for work they aren’t enjoying [1]. I am always grateful for any level of involvement a person wants to have in the project.

Because everyone is a volunteer, I encourage people to work on their own agendas, on their own schedule and at their own pace. None of our projects are in a hurry, so if someone is starting to get burnt out, they should have no reservations about taking a break for as long as they wish. I’d rather have something done slowly, correctly, and by a contributor who is enjoying their work than quickly and by a contributor who is burnt out and stressed. No one should ever be stressed out because of their involvement in the project. Some of it is unavoidable - especially where politics is involved - but I don’t hold grudges against anyone who steps away and I try to shoulder the brunt of the bullshit myself.

The second principle is closely related to the first. If a bug does not affect someone who works on the project and the problem doesn’t interest anyone who works on the project, it’s probably not going to get fixed.
I would much rather help someone familiarize themselves with the codebase and tooling necessary for them to solve their own problems and send a patch, even if it takes ten times longer than fixing the bug myself. I have never found a user who, even if they aren’t comfortable with programming or the specific technologies in use, has been unable to solve a problem which they were willing to invest time into and ask questions about. This principle often leads to conflict with users whose bugs don’t get fixed, but I stick to it. I would rather lose every user who is unwilling to attempt a patch than invest the resources of my contributors into work they’re uninterested in. In the long term, the health of the project is far better if I always have developers engaged in and enjoying their work on it than if I lose users who are upset by my approach.

These first two principles don’t affect my day-to-day open source work so much as they set the tone for it. The third principle, however, constitutes most of my job as a maintainer, and it’s with it that I add the most value. My main role is to empower people who contribute to do work they enjoy, which benefits the project, and which keeps them interested in coming back to do more. Finding things people enjoy working on is the main task in this role. Once people have made a few contributions, I can get an idea of how they like to work and what they’re good at, and help them find things to do which play to their strengths. Supporting a contributor’s potential is important as well, and if someone expresses interest in certain kinds of work or I think they show promise in an area, it’s my responsibility to help them find work to nurture these skills and connect them with good mentors to help.

This plays into another major responsibility I have as a maintainer, which is facilitating effective communication throughout the project.
As people grow in the project they generally become effective at managing communication themselves, but new contributors appear all the time. A major responsibility as a maintainer is connecting new contributors to domain experts in a problem, or to users who can reproduce problems or are willing to test their patches. I’m also responsible for keeping up with each contributor’s growth in the project. For those who are good at and enjoy having responsibility in the project, I try to help them find it. As contributors gain a better understanding of the code, they’re trusted to handle large features with less handholding and perform more complex work [2]. Often contributors are given opportunities to become better code reviewers, and usually get merge rights once they’re good at it. Things like commit access are never a function of rank or status, but of enabling people to do the things that they’re good at.

It’s also useful to remember that your projects are not the only game in town. I frequently encourage people who contribute to contribute to other projects as well, and I personally try to find ways to contribute back to their own projects (though not as much as I’d often like to). I offer support as a sysadmin to many projects started by contributors to my projects and I send patches whenever I can. This pays directly back to the project in the form of contributors with deeper and more diverse experience. It’s also fun to take a break from working on the same stuff all the time!

There’s also some work that someone’s just gotta do, and that someone is usually me. I have to be a sysadmin for the websites, build infrastructure, and so on. If there are finances, I have to manage them. I provide some kind of vision for the project and decide what work is in scope. There’s also some boring stuff like preparing changelogs and release notes and shipping new versions, or liaising with distros on packages. I also end up being responsible for any marketing.
Getting and supporting contributors is the single most important thing you can do for your project as a maintainer. I often get asked how I’m as productive as I seem to be. While I can’t deny that I can write a lot of code, it’s peanuts compared to the impact made by other contributors. I get a lot of credit for sway, but in reality I’ve only written 1-3 sway commits per week in the past few months. For this reason, the best approach focuses on the contributors, to whom I owe a great debt of gratitude.

I’m still learning, too! I speak to contributors about my approach from time to time and ask for feedback, and I definitely make mistakes. I hope that I’ll receive more feedback soon after some of them read this blog post, too. My approach will continue to grow over time (hopefully for the better) and I hope our work will enjoy success as a result.

1. Some people do work they don’t enjoy out of gratitude to the project, but this is not sustainable and I discourage it. ↩︎
2. Though I always encourage people to work on the things they’re interested in, I sometimes have to *discourage* people from biting off more than they can chew. Then I help them gradually ramp up their skills and trust among the team until they can take on those tasks. Usually this goes pretty quick, though, and a couple of bugs caused by inexperience is a small price to pay for the *gain* in experience the contributor gets by taking on hard or important tasks. ↩︎
https://www.folklore.org/StoryView.py?project=Macintosh&story=Reality_Distortion_Field.txt&sortOrder=Sort+by+Date&characters=Steve+Jobs
9,092,110
http://www.philipotoole.com/replicating-sqlite-using-raft-consensus/
Replicating SQLite using Raft Consensus
Philip O'Toole
SQLite is a “self-contained, serverless, zero-configuration, transactional SQL database engine”. However, it doesn’t come with replication built in, so if you want to store mission-critical data in it, you better back it up. The usual approach is to continually copy the SQLite file on every change. I wanted SQLite, I wanted it distributed, and I **really** wanted a more elegant solution for replication. So **rqlite** was born. ## Why replicate SQLite? SQLite is very convenient to work with — the entire database is contained within a single file on disk, making working with it very straightforward. Many people have experience with it, and it’s been a natural choice for adding relational-database functionality to many systems. It’s also rock-solid. However, since it isn’t replicated it can become a single point of failure in a system design. While it is possible to continually copy the SQLite file to a backup server every time it is changed, this file-copy must not take place while the database is being accessed. I decided to build a distributed replication layer using the Raft consensus protocol, which gives me effective replication without the hassle of running a much heavier solution like MySQL. It provides all the advantages of replication, with the data modelling functionality of a relational database, but with the convenience of a single-file database. The entire system is written in Go, and the source is available on github. ## An rqlite cluster The diagram below shows an example rqlite cluster of 3 nodes, which continually work together to ensure that the SQLite file under each node is identical. With 3 nodes running, 1 node can fail, the cluster will remain up, and the data is still safe. In this example a leader has been elected and is coloured red. The Raft protocol dictates that all reads and writes should go through this node. 
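To make the majority rule concrete, here is a small illustrative sketch of the quorum arithmetic Raft relies on (these function names are my own, not part of rqlite):

```python
def quorum(cluster_size: int) -> int:
    """Smallest number of nodes that constitutes a majority of the cluster."""
    return cluster_size // 2 + 1

def can_commit(acks: int, cluster_size: int) -> bool:
    """A write is committed once a majority of nodes (leader included) ack it."""
    return acks >= quorum(cluster_size)

# In a 3-node cluster, the leader plus one follower is enough:
print(can_commit(2, 3))   # True
# A lone node partitioned away from the others cannot commit:
print(can_commit(1, 3))   # False
```

This is also why a 3-node cluster tolerates exactly one failure: the two remaining nodes still form a majority.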
For a write operation, only when a majority of nodes (including the leader) acknowledge that write is that change actually committed to the Raft log, and then to the actual SQLite databases underneath each node — it is the leader’s job to ensure this consensus is reached. If the leader fails, or there is a network partition such that the leader is cut off from the other two nodes, one of the other nodes will be elected leader shortly afterwards.

rqlite is a CP system. When faced with a network partition it chooses consistency over availability — reads and writes in the partition with a quorum of servers will remain available. But the servers on the other side of the partition will refuse to accept any changes. When the partition is healed, however, these nodes will receive any changes made to the nodes on the other side of the partition, and all copies of the SQLite database will be in consensus again.

## Choosing a Distributed Consensus algorithm

Raft is used as the consensus protocol for multiple projects including InfluxDB and etcd. They both use the goraft implementation, and since I want to write more Go, it was a natural choice to use for rqlite.

## Deploying rqlite

You can find the source code for rqlite, and instructions on how to build and deploy it, here on github. I hope to continue developing this software, as distributed consensus systems are immensely interesting.

Hi: So, only operations via the HTTP API are replicated, right? What happens if all nodes reply with an ACK (Raft consensus) and, before committing to the Raft log, one node fails? When is it synchronized again? Regards

Yes, only operations committed through the HTTP API are replicated. As for the failure you outlined, when the node is brought back online it will contact the leader and find that the leader has entries in its log that it is missing. It will then write those entries to its log. Check the Raft paper for full details.

Just curious, what was the use-case for building this?
Or was it mostly for fun? Cheers, Jens

Hi Jens — it started as a real need. A system I was working on professionally needed a small amount of (ideally relational) database storage. The system was clustered, so the question was which node should host this storage, which I didn’t like. So this was about showing we could run SQLite in a replicated manner, such that the loss of any single node would not mean the data would be lost. We ended up going a different route however, and rqlite was not used in production.

And what was the route you chose, if it is not a secret?

Hi Stanislav — we also had access to a clustered Java-based key-value store (open source, but I can’t provide the name), so we just decided to use that. It did mean we had to build the relational layer ourselves though.

Nice work! Very useful piece of software, but I have a question. Philip, is it possible to use HTTPS to secure database access? I want to synchronize many devices, but I want to secure the database against outside access. Is this possible, or do you plan this functionality in the near future? Regards.

Mike — thanks. There is no support for HTTPS access to the system, though you might be able to do it yourself by putting something like nginx in front of each node, and have it do the HTTPS to HTTP conversion.

This is interesting! Would it be feasible to modify this to tolerate byzantine failures? I.e., could I configure it to commit transactions only when 2/3+1 peers have voted for the same transaction? Background: at ball.askemos.org we have a programming platform sitting atop of replicated state machines. Now I’m looking for alternative implementations of replicated state machines, which may fit the bill. Our platform is a bit Erlang-inspired, having persistent agents which communicate by message passing. We replicate those agents in consensus with byzantine failure resistance. Often we store the state of our agents in a sqlite3 database. Pretty much like you’re doing here.
That’s how I found rqlite in the first place.

Hi Jörg — if this system was changed to support byzantine failures, the code base would be very different. Since it’s built on the Raft consensus protocol, it only supports the failure modes that Raft tolerates. So if the code was modified as you would like, it would be a very different system — it wouldn’t be a modification so much as a new system. I hope this helps.

I see. When I posted, I assumed that Raft was actually a *consensus* protocol. Which implies that it would normally tolerate byzantine faults. But Raft is in fact a *coherence* protocol, where you must trust the leader (making the latter a SPOF). Sure it helps: we better stick with the code we’ve been using so long as this protects us against byzantine faults. The slowdown is not as bad as a backdoor would be.

I am not sure the guys at Stanford would agree with you. They clearly consider Raft a consensus protocol. I am unfamiliar with the term “coherence protocol”. Also, the leader is clearly not a single point of failure when you consider the system as a whole (which is the point). The system remains fully functional if the leader fails — another node simply becomes leader.

Can this setup prevent db locks?

Alex — I don’t really understand the question. I know what you might mean by a database lock, but I don’t see exactly how that is specific to rqlite.

Sorry about that. I was wondering if having more nodes will result in fewer db locks? Going to try it & let you know.

It works like a charm. I’m really impressed, Philip. One question: if I want to change the location of the db, can I do localhost:4001/dev/shm/db?

The path to the database file on each node is passed to rqlite on startup. https://github.com/otoolep/rqlite/blob/master/README.md You can’t change it once set, without moving the database file to a new location.

How do you determine which one is the leader? If I start node 1, I’m seeing the following; is there a way to do a clean start?
2016/02/21 10:40:28 [ERROR] attempted leader redirection, but no leader available
2016/02/21 10:40:29 [ERROR] attempted leader redirection, but no leader available
2016/02/21 10:40:42 [ERROR] attempted leader redirection, but no leader available

Alex — rqlite has a new consensus module, and those errors will no longer occur. You should run the latest code instead.

I will grab the new version. Thank you.

Good evening. Friends, I would like to know on which operating system the cluster was set up, or how it was done?

Does it support multiple SQLite DBs? We plan to use multiple SQLite DBs and want to replicate all of them. Also, the writes and reads can be done on any of the DBs.

No, you must run one rqlite cluster per SQLite database. And all writes must go through the leader, but reads can go through any node, depending on what read-consistency guarantees you can accept.

How do I use SQLiteStudio to browse data in rqlite?

You could use it to browse the SQLite file directly, but it’s not officially supported. https://github.com/rqlite/rqlite/blob/master/README.md#limitations

Let’s assume there are 3 nodes and one of the nodes got disconnected from the network, and that was the master. In this case there will be a new leader, and that is great, but how can I still write data to the disconnected node, and, when the network connectivity comes back up, have it push the changes back to the new master and merge the data if the same tables were used in the DB?

No, rqlite doesn’t support that because it’s a CP system. You will only be able to write to the side with the quorum of nodes, and the disconnected node will respond with “no leader”. You could perform some queries though, if you accepted “weak” read-consistency.
Source: philipotoole.com, published 2014-09-02.
https://chrismorgan.info/blog/make-and-git-diff-test-harness/
Using `make` and `git diff` for a simple and powerful test harness
Chris Morgan
First, a demo. Here’s what is achieved with only half a dozen lines [(Depending on how you count it.)] of makefile:

## The makefile

Let’s jump right in with a sample makefile, typically with the filename `Makefile`: That’s all the code needed for a fully functioning test harness. If you’re familiar with makefiles this may be enough for you to understand this whole post and you can stop reading. The remainder of this post explains how to use this, how it works, and the limitations of the approach.

Note that this is all for GNU Make. Other Make implementations may not support all of the `foreach`, `wildcard`, `call`, `filter` and `patsubst` functions, so you could need to write out the `test` target’s prerequisites another way. The general principle is sound, however.

## Getting started with it

- Make sure you have GNU Make installed and are using Git.
- Paste the code above into a file called `Makefile`.
- Create an executable file [That is, on Unixy platforms it’ll need to have mode `+x`, and either be an executable binary or start with a shebang.] called `foo.test` that contains the script of what you want to test.
- Run `make test` (the test run will seem to succeed, since the expected output hasn’t been added to Git yet [Most things in Git ignore untracked files; and `git diff … foo.stdout foo.stderr` outputs nothing and reports success when those two files aren’t tracked.]).
- Review the contents of `foo.stdout` and `foo.stderr` which have just been created, that they match what you expected.
- Run `git add foo.test foo.stdout foo.stderr`.

This is not intrinsically tied to Git; you could replace the `git diff` invocation with something semantically equivalent from another version control system.

## How to use it

As you can hopefully see, this is simple; only half a dozen lines. Yet it’s very powerful: by using Make, you get all sorts of nice magic for free.

**If tests vary on other inputs** (e.g.
test data or build artefacts), you can just add new prerequisites to the targets: And you can get *much* fancier about prerequisites if you desire. These things can allow you to only run the tests whose inputs (whether it be data or code) have changed, rather than all the tests.

**If you always want to run all the tests,** which you probably want to do if you haven’t set up precise dependency tracking, mark all of the `%.stdout` targets phony (which will be explained below):

**If you just once want to run all the tests,** run `make --always-make test` [The short form of `--always-make` is `-B`, but I recommend long options in most places.].

**If you want to run tests concurrently,** use `make --jobs=8 test` [Short form `-j8`, and this is one that I *do* use short form for.] or similar. (Caution: combined with `--keep-going`, you may get nonsense output to the terminal, with lines from multiple diffs interleaved.)

By default it’ll quit on the first test failure, but `--keep-going` [Short form `-k`.] will continue to run all the tests until it’s done as much as it can. (This is close to essential for cases when you’ve made changes that you know will change the output of many tests, so that it’ll update all of their stdout and stderr files at once.)

## How it works

Let’s pull apart those half dozen lines, bit by bit, to see how it works. This `rwildcard` function is a **r**ecursive **wildcard** matcher. `$(wildcard)` does a single level of matching; this takes it to multiple levels. I don’t intend to explain all the details of how this works, but here’s an approximation of the algorithm it’s achieving, in pseudocode: I use `$(call rwildcard,,%.test)` instead of `$(shell find . -name \*.test)` mainly for compatibility, so that the `find` binary (GNU findutils or another variant) is not required.
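For reference, assembling the pieces described in this post gives roughly the following makefile. This is a sketch reconstructed from the prose: the exact `git diff` options (such as `--exit-code` here) and the precise shape of `rwildcard` are my assumptions, not a verbatim listing.

```make
# Recursive wildcard: $(call rwildcard,,%.test) finds all *.test files.
rwildcard = $(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $2,$d))

.PHONY: test
test: $(patsubst %.test,%.stdout,$(call rwildcard,,%.test))

%.stdout: %.test
	./$< > $@ 2> $(patsubst %.stdout,%.stderr,$@) \
		|| (touch --date=@0 $@; false)
	git diff --exit-code $@ $(patsubst %.stdout,%.stderr,$@) \
		|| (touch --date=@0 $@; false)
```

On failure, each recipe line resets the stdout file’s mtime to the Unix epoch, so a subsequent `make test` will rerun that test rather than consider it satisfied.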
The `.PHONY: test` rule marks `test` as a phony target, which is mostly not *necessary*, but speeds things up a very little and ensures that tests don’t break if you create a file with the name “test”. If you go marking *individual tests* as phony, the effect is that it won’t check the file modification times, and will just always rerun the test.

Then the rule for the actual `test` target. It has no recipe, which means that nothing *extra* happens when you run `make test`, after it runs all necessary tests: it is just the sum of its parts, no more. Its parts? It depends on `$(patsubst %.test,%.stdout,$(call rwildcard,,%.test))`, which means “find all `*.test` files recursively (`$(call rwildcard,,%.test)`), then change each one’s ‘.test’ extension to ‘.stdout’ (`$(patsubst %.test,%.stdout,…)`)”.

You may wonder why it needs to depend on `foo.stdout` rather than `foo.test`: this is because `foo.stdout` is the target that will *run* the test, while depending on `foo.test` would only ensure the *existence* of the test. You may wonder why we search for the `foo.test` definition files and change their extensions to `.stdout`, rather than just searching for the `.stdout` files: this is so that we can create new tests without needing to also create a `.stdout` file manually.

The upshot of this is that we create a phony target called “test” which depends on the result of all of the tests. It would be incorrect to say that it runs all the tests; the test target doesn’t *run* the tests, but rather declares “I require the tests to *have been* run before you run me” [Consider how these things are properly named *prerequisites* rather than *dependencies*.] (and, as discussed, running the test target itself does nothing since it has no recipe).
The way to make all the tests run each time is to mark them all as phony as well; otherwise, Make will look at their prerequisite trees and may observe them already satisfied, with the stdout file newer than the test file, and so say “that test has been run and doesn’t need to be run again”.

Now the meat of it: how an individual test is run. We use an implicit rule so that we needn’t enumerate all the files (which we could do in a couple of ways, but it’d be much more painful). This rule says “any file with extension ‘.stdout’ can be created based upon the file with the same base name but the extension ‘.test’, by running these two commands”.

At last we come to the recipe, how the .stdout file is created. `$@` and `$<` are automatic variables: `$@` expands to the name of the target, which we’ll call `foo.stdout`. `$<` expands to the name of the first prerequisite, which in this case will be `foo.test`. The other piece of Make magic is `$(patsubst %.stdout,%.stderr,$@)`, which will end up `foo.stderr`. (That the backslash at the end of a line is a line continuation is important too. *Each line of a recipe is invoked separately.* [You can opt out of this behaviour with `.ONESHELL:`, but that was introduced in GNU Make 3.82, and macOS includes the ancient GNU Make 3.81 for licensing reasons, so consider compatibility before using it.])

So, what is run is this: What this *does*:

- Run `./foo.test`, piping stdout to `foo.stdout` and stderr to `foo.stderr`.
- If that fails: zero `foo.stdout`’s mtime [That is, update the file’s “last modified” time to the Unix epoch, 1970-01-01T00:00:00Z.] so that it’s older than `foo.test` (so that a subsequent invocation of Make won’t consider the `foo.stdout` target to be satisfied; you could also delete the file, but that would be less useful), and then fail (which will cause Make to stop executing the recipe, and report failure).
- Run `git diff` in such a way as to print any unstaged changes to those files (that is: any way in which the test output didn’t match what was expected).
- If there were any unstaged changes to those files, then zero `foo.stdout`’s mtime (for the same reason as before) and report failure.

## Making test output deterministic

As written, we’re just doing a *naïve* diff, assuming that the output of running a command is the same each time. In practice, there are often slight areas of variation, such as timestamps or time duration figures. For example, if you have something that produces URLs with `?t=`*timestamp* or `?`*hash* cache‐busting, you might wish to zero the timestamps or turn the hashes into a constant value. Cleaning the output might look like this, if your test file is a shell script:

This general approach allows you to *discard* sources of randomness, or even to *quantise* it (e.g. discard milliseconds but keep seconds), but makes it hard to do anything fancier, like numeric comparisons to check that a value is within a given error margin; if you’re not careful, you start writing a test framework rather than using `git diff` as the test framework. [If you really want to go off the deep end here, start thinking about how Git filter attributes might be applied. But I recommend not doing so, even though it’d be possible!]

## Limitations

Here is a non‐exhaustive list of some limitations of this approach.

- Declaring dependencies properly is often infeasible in current languages and environments, so you can end up needing overly‐broad prerequisites, like “this test depends on *all* of the source files”, and so more tests may be run than are actually needed.
- Spinning up processes all over the place can be expensive. Interpreters commonly take hundreds of milliseconds to start, to say nothing of how long it can take to import code, which you now need to do once per test rather than once overall.
- The filtering of output to make it deterministic effectively limits you to equality checking, and not *comparisons*. To a substantial extent, this is a test framework with only assertions, and little or no logic.
- Make does not play well with spaces in filenames.
- Git doesn’t manage file modification times in any way. This can become a nuisance once you’re committing what are essentially build artefacts, since Make makes decisions based on mtimes. So long as your dependencies are all specified properly it shouldn’t cause any *damage* [In the general case, this is not actually quite true; I had a case at work a few months ago where I temporarily added a build artefact to the repository, and it normally worked fine, but just occasionally the mtimes would be back to front and the build server would try to rebuild it and fail, because its network access was locked down but building that particular artefact required internet access.] if you don’t modify the stdout file with Git, but it may lead to unnecessary running of a test. When you want to play it safe, `make --always-make test` will be your friend.

## Conclusion

When you’re in an ecosystem that provides a test harness, you should probably use it; but outside such ecosystems, the power of shell scripting, makefiles and version control can really work nicely to produce a good result with minimal effort. There are various limitations to the approach, but for many things it works really well, and can scale up a lot. I especially like the way that it tracks the expected output, making manual inspection straightforward and updating trivial. I’ve used something similar to this approach before, and I’ve found makefiles in general to be very effective on various matters, small and large; I think this technique demonstrates some of the nifty power of Make. The GNU Make documentation is pretty good. It’s sometimes hard to find what you’re looking for in the index, but the information is definitely all there.
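Putting the pieces described above together, the makefile might look roughly like this. This is my reconstruction, not the article’s exact code: the `*.test` naming follows the convention used above, and `touch -d @0` is a GNU-specific way of zeroing the mtime.

```make
# Sketch of the scheme described above (illustrative, not verbatim).
# `test` depends on one .stdout file per .test file; marking those
# targets phony as well would make every test run every time.
STDOUTS := $(patsubst %.test,%.stdout,$(wildcard *.test))

test: $(STDOUTS)
.PHONY: test

# Implicit rule: foo.stdout is made from foo.test by running it, then
# diffing the captured output against what Git has recorded.
# (Recipe lines must be indented with a tab. On failure, zero the mtime
# so a later `make test` will retry the test.)
%.stdout: %.test
	./$< >$@ 2>$(patsubst %.stdout,%.stderr,$@) \
		|| { touch -d @0 $@; false; }
	git diff --exit-code -- $@ $(patsubst %.stdout,%.stderr,$@) \
		|| { touch -d @0 $@; false; }
```

With something like this in place, `make test` runs only the tests whose outputs are out of date, and `git diff`/`git add` become the review-and-accept workflow for expected output.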
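As a tiny illustration of the output-cleaning idea from the “Making test output deterministic” section (a hypothetical example of mine; the URL and timestamp are invented), a test script can normalise volatile values before they reach the stdout file:

```shell
#!/bin/sh
# Hypothetical foo.test: produce some output (faked here with printf) and
# zero the volatile ?t=<timestamp> cache-buster so the diff is deterministic.
printf '<link href="/style.css?t=1590134400">\n' \
	| sed 's/?t=[0-9]\{1,\}/?t=0/g'
# prints: <link href="/style.css?t=0">
```

The same pattern works for quantising rather than discarding, e.g. a `sed` expression that truncates milliseconds while keeping seconds.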
People regularly go for complex test harnesses; and when you’re in an ecosystem, they can be a good fit. But outside ecosystems, it’s actually quite easy to produce a fairly good test harness.
2024-10-12 00:00:00
2020-05-22 00:00:00
__Chrismorgan
7,722,273
https://plus.google.com/100788008008728864779/posts/dQYSNx6Pkuw
New community features for Google Chat and an update on Currents
Google
Note: This blog post outlines upcoming changes to Google Currents for Workspace users. For information on the previous deprecation of Google+ for users with personal Google accounts, please see this post.

**What's Changing**

We are nearing the end of this transition. Beginning July 5, 2023, Currents will no longer be available. Workspace administrators can export Currents data using Takeout before August 8, 2023. Beginning August 8th, Currents data will no longer be available for download.

Although we are saying goodbye to Currents, we continue to invest in new features for Google Chat, so teams can connect and collaborate with a shared sense of belonging. Over the last year, we've delivered features designed to support community engagement at scale, and will continue to deliver more. Here is a summary of the features with additional details below:

This month, we’re enabling new ways for organizations to share information across the enterprise with announcements in Google Chat. This gives admin controls to limit permissions for posting in a space, while enabling all members to read and react, helping ensure that important updates stay visible and relevant. Later this year, we plan to simplify membership management by integrating Google Groups with spaces in Chat, enable post-level metrics for announcements, and provide tools for Workspace administrators to manage spaces across their domain.

*Announcements in Google Chat; managing space membership with Google Groups*

We’ve already rolled out new ways to make conversations more expressive and engaging such as in-line threading to enable rich exploration of a specific topic without overtaking the main conversation and custom emojis to enable fun, personal expression.

*In-line threaded conversations; discover and join communities with up to 8,000 members*

We’ve also made it easier for individuals to discover and join communities of shared interest.
By searching in Gmail, users can explore a directory of available spaces covering topics of personal or professional interest such as gardening, pets, career development, fitness, cultural identity, and more, with the ability to invite others to join via link. Last year, we increased the size of communities supported by spaces in Chat to 8,000 members, and we are working to scale this in a meaningful way later this year.

*A directory of spaces in Google Chat for users to join.*

Our partner community is extending the power of Chat through integrations with essential third-party apps such as Jira, GitHub, Asana, PagerDuty, Zendesk and Salesforce. Many organizations have built custom workflow apps using low-code and no-code tools, and we anticipate that this number will continue to grow with the GA releases of the Chat API and AppSheet’s Chat app building capabilities later this year.

For teams to thrive in this rapidly changing era of hybrid work, it’s essential to build authentic personal connections and a strong sense of belonging, no matter when or where individuals work. We will continue to make Google Chat the best option for Workspace customers seeking to build a community and culture for hybrid teams, with much more to come later this year.

**Who's impacted**

Admins and end users

**Why it’s important**

The transition from Currents to spaces in Google Chat removes a separate, siloed destination and provides organizations with a modern, enterprise-grade experience that reflects how the world is working today. Google Workspace customers use Google Chat to communicate about projects, share organizational updates, and build community.

**Recommended action**

**Availability**

Spaces in Google Chat are available to all Google Workspace customers and users with personal Google Accounts.

**Resources**
Note: This blog post outlines upcoming changes to Google Currents for Workspace users. For information on the previous deprecation of Googl...
2024-10-12 00:00:00
2023-04-12 00:00:00
https://blogger.googleus…_LINKS%20(2).png
article
googleblog.com
Google Workspace Updates
1,766,739
http://esr.ibiblio.org/?p=2658
Kill the Buddha
Esr
There’s a Zen maxim that commands this: “If you meet the Buddha on the road, kill him” There are several closely related interpretations of this maxim in Buddhist tradition. The most obvious one is that worship of the Buddha interferes with comprehending what he actually said – that religious fetishization is the enemy of enlightenment. While I completely agree with this interpretation, I’m writing to argue for a more subtle and epistemological one. I interpret Zen Buddhism as a set of practices for not tripping over your own mind – avoiding our tendency to bin experiences into categories so swiftly and completely that we stop actually paying attention to them, not becoming imprisoned by fixed beliefs, not mistaking maps for territories, always remaining attentive to what actually is. Perhaps the most elegant expression of this interpretation is this koan setting forth the problem: “The mind is like a dog. His master points at the moon, but he barks at the hand.” In this sense, Zen is discipline that assists instrumental rationalism by teaching important forms of self-monitoring and mental hygiene – in effect very similar to General Semantics. In this interpretation of Zen, “killing the Buddha” can be taken to stand for a very specific practice or mental habit. Here’s how it works: Find the premise, or belief, or piece of received knowledge that is most important to you right at this moment, *and kill it*. That is, imagine the world as it would be if the most cherished belief in your thoughts at this moment were false. Then reason about the consequences. The more this exercise terrifies you or angers you or undermines your sense of self, the more brutally necessary it is that you kill your belief. Sanity is measured by the ability to recognize evidence that your beliefs are wrong, and to detach yourself from them in order to form improved beliefs that conform to reality and better predict your future experiences. 
Killing the Buddha is an exercise to strengthen your sanity, to decrease your resistance to inconvenient facts and disruptive arguments. It teaches you not to become attached to beliefs, as you gradually learn that your rational coping capacity *transcends any specific belief about how the universe is*. Being sane, being rational, does not consist of attachments and knowledge and beliefs. You do not become one whit more rational by knowing Newton’s Laws or the Periodic Table or the Pythagorean Theorem, or by believing evolutionary theory; what matters is how you acquired such beliefs and how you maintain them. Sanity is the process by which you continually adjust your beliefs so they are predictively sound. By regularly killing the Buddha, you prepare yourself for those moments in which you must abandon a belief not because you choose to do it as a mental exercise but because experience of reality tells you it has failed the predictive test and is thus false. When you have reached the point that killing the Buddha every day no longer frightens you, but is instead a central part of your self-discipline that you greet each and every day as an opportunity to learn, you are on the road to full sanity. But only on the road. Your next challenge, which never ends, is to learn how to see — and kill — the Buddhas that are invisible because they lurk behind your own eyes. “Always Be Closing. Always Be Closing!” (As Miyamoto said.) I must be dense. I’m not seeing the connection from the motivational speech to what I’m talking about, though the connection to Miyamoto Musashi seems a little clearer. I’ll reflect about the main topic a bit later, first about Zen: one thing that we tend to forget is that Zen is primarily something for monks. As in: for people who work with their minds inside a kind of a sterile laboratory which purposefully isolates them from many of the usual temptations lay people face.
Actually it is a problem not with just Zen but with about 95% of Buddhism that it is by monks, for monks and therefore the priorities are not optimal for lay people. The number of teachers and lineages which are clearly by lay people for lay people who have sex, drink beer and engage in economic activity is rather low – the best example is probably Marpa from Tibet, whose teachings and lifestyle have been incorporated into the Karma-Kagyü school. And the interesting aspect is that in such dedicatedly lay Buddhism the priorities are different from those of monks, including Zen monks. Yes, the whole set of problems you describe here, which these lay teachers (Lama Ole Nydahl) call “fixed ideas” is there, it is important, it is emphasized, it is never denied. But in such dedicatedly lay Buddhism this gets a slightly lower priority than another set of problems called “disturbing emotions”, which roughly correlate with the “passions” of Greek philosophers, “sins” of Christians, “character faults” or “vices” of Franklin & Lincoln-type XVIII. – XIX. century people; the five most important being ignorance, desire (incl. lust/greed/gluttony/desire for power), jealousy/envy, hatred/anger/fear, and pride/vanity, plus a number of lesser ones. The way it is described – usually called the “mind-only” philosophy, Cittamatra, also called “yoga-practitioner” (Yogacara) because it is strongly based on actual meditation experience – is this: there is the mind, and there is the original mistake of the mind that it does not recognize that the thing experienced and the ability or awareness to experience are both parts of the mind. The mind mistakes the thing experienced as “the other”, or “the world”, and the awareness that experiences as an “I”. From this arise the disturbing emotions: if the thing experienced is seen as good for the experiencer, arises clinging or desire, if it is seen as hostile to it, arises hatred, anger or fear and so on.
Plus from this separation the fixed ideas arise – roughly the same problems what you have described. Now, the point is, that a strong and exclusive focus on the problem of fixed ideas might be all right if either one is isolated from the usual disturbing emotions (as Zen monks and other monks are) or is one already on a very high level, has a small ego and few attachments, like a budget model vacuum cleaner. But, for the average lay person, the radical skepticisim of focusing on the problem of fixed ideas, like the way it is done in Zen, might be dangerous. And maybe the same thing can be said for General Semantics. On the lower levels of awareness, say, on the level of big egos, strong passions, strong emotions, “vices”, “sins”, when one has little self-control, it is actually very dangerous to to be too skeptical, one is better off by sticking to some rules and saying “X is BAD, this is an objective value judgement for me, period”, which is absolutely speaking, of course not true, just a fixed idea and an illusion but useful illusion in such cases, as a hopefully just temporary measure. On the level of the average lay person, “don’t be a slave of your passions, even it takes making up some completely fake, quasi-objective rules about values which are nothing but fixed ideas that on the absolute level cannot be justified” can be more useful than being strictly skeptical and critical about any kind of fixed idea like the Zen monks do. This can every person evaluate himself/herself by simply making a list of things we know we should do or should not do but are just not able to conjure the willpower to actually do so. My list is rather long – cigarettes, booze, Internet addiction, caffeine addiction, sticking to safe but boring, uncreative jobs out of risk-avoidance, and so on. 
Obviously at such a low level Zen-type radical skepticism is not useful and even dangerous, I am better off making up some fixed ideas for myself, of the “getting hammered too often is BAD, BAD, feel bad about it, vice, sin, bad doggie!”, even though they cannot be justified on the absolute level at all. (That’s what I like about Catholics – I disagree with their theology and ontology but their psychology seems to be the same “bad doggie” psychology I use on myself to keep my faults within manageable limits.) But this and similar sets of problems describe most of humanity, of course – martial artist hackers with huge amounts of willpower to do what they actually want to and not do what they don’t want to are outliers, are exceptions, are not the norm, for them such radical skepticism might not be dangerous and can of course be very beneficial – but for most of us it can be dangerous. >the original mistake of the mind that it does not recognize that the thing experienced and the ability or awareness to experience are both parts of the mind. *snort* This isn’t profound truth, it’s just early Mahayana-Buddhist subjectivism and it’s full of shit. There’s a reason Zen shifted to a conceptualist position after the Sixth Patriarch; subjectivism doesn’t work. It doesn’t predict the way observed reality behaves, which is that there’s stuff out there that has prior causal power over mind even if we never have access to it that’s unmediated by our nervous systems. Your whole notion that most people can’t handle skepticism depends on the assumption that an elaborate set of fixed ideas is necessary for self-control. Nonsense! All it takes is the understanding that if you don’t behave in efficient and ethical ways, you become far more likely to eat consequences you don’t like. You seem to be peddling an unpleasant form of elitism here. “We must lie to the peasants, otherwise they’ll act out.” I’m not buying it. Hypothesis: you are not a Boltzmann brain.
Your body, your mind, your memory of the past, your memory of what you were thinking about a tenth of a second ago, arose through some ordered process. All of these things did not blink into existence just an instant ago through some random fluctuation of whatever-it-is-that-holds-reality-together, and it is not likely that they will all blink out of existence just an instant from now. Is this a Buddha you’re willing to kill? >Is this a Buddha you’re willing to kill? Yes. Because I can reason about the consequences and rapidly conclude that as long as the universe and/or my mind popped into being with regularities as if it had a consistent history, then the difference between “I am not a Boltzmann brain” and “I am a Boltzmann brain” has no observable consequences and there’s no point in my being worried about it. OK, the universe and my mind might be a huge quantum fluctuation that blinks out of existence any second. So what? That wouldn’t be an observable event either, and there wouldn’t be anything I could do about it if it were. So, again, there’s no point in my being worried about it. I engaged in this exercise just this morning with my father. Attempting to kill the non-aggression principle. I am perceptive enough to see that a sociopath who understands a moral code can use it to their advantage. Turning an opponent’s moral code into a weakness to exploit. If one is bent on domination and nothing else, this tactic is very useful. So why is morality seemingly the default position? I would argue that if you slay this particular Buddha, civilization unravels. A Stalin for instance is an aberration, because such a mind is an evolutionary dead end. In a tribal society he would most likely have been banished or killed. Only in a large modern ideologically driven organization in turmoil can such a pathology become malignant. Stalin as the local school superintendent, while unpleasant for a few, would be relatively harmless.
>So why is morality seemingly the default position? I would argue that if you slay this particular Buddha, civilization unravels. That’s a misinterpretation. When I say “Kill the Buddha” I’m not arguing for nihilism. I don’t mean that your most cherished premise has to stay dead, if you can confirm it with observation and sound reasoning after you’ve stopped assuming it. The point is to regularly go through the process of checking your assumptions. In this case, we can reconfirm “morality”, or at least large pieces of ethics, by reasoning about the consequences of abandoning premises and noticing when the consequences would imply a lot more suffering and grief. Now onto the main topic: “Sanity is measured by the ability to recognize evidence that your beliefs are wrong, and to detach yourself from them in order to form improved beliefs that conform to reality and better predict your future experiences.” This is perfectly true, very important – but at some level superficial. Lack of sanity is not just an error in a logical process but has much deeper psychological reasons – intellectual vanity, a certain kind of self-centeredness, self-adsorpedness, and so on. Let me just put a bit of Chesterton here and then I’ll go on tomorrow. “The madman’s explanation of a thing is always complete, and often in a purely rational sense satisfactory. Or, to speak more strictly, the insane explanation, if not conclusive, is at least unanswerable; this may be observed specially in the two or three commonest kinds of madness. If a man says (for instance) that men have a conspiracy against him, you cannot dispute it except by saying that all the men deny that they are conspirators; which is exactly what conspirators would do. His explanation covers the facts as much as yours. (…) Perhaps the nearest we can get to expressing it is to say this: that his mind moves in a perfect but narrow circle. 
A small circle is quite as infinite as a large circle; but, though it is quite as infinite, it is not so large. In the same way the insane explanation is quite as complete as the sane one, but it is not so large. A bullet is quite as round as the world, but it is not the world. There is such a thing as a narrow universality; there is such a thing as a small and cramped eternity; you may see it in many modern religions. Now, speaking quite externally and empirically, we may say that the strongest and most unmistakable mark of madness is this combination between a logical completeness and a spiritual contraction. The lunatic’s theory explains a large number of things, but it does not explain them in a large way. I mean that if you or I were dealing with a mind that was growing morbid, we should be chiefly concerned not so much to give it arguments as to give it air, to convince it that there was something cleaner and cooler outside the suffocation of a single argument.” http://www.archive.org/details/orthodoxy16769gut Chesterton chases his own tail through several cycles in this quote because he lacks the concept that truth claims have to be falsifiable and justified by predictive success. The madman’s problem is that his delusions meet neither criterion. Really, this was not at all interesting. It was the philosophical equivalent of watching a baby finger-paint – perhaps entertaining if you like the particular baby, but far too clumsy and crude to be worth keeping. “You seem to be peddling an unpleasant form of elitism here. ‘We must lie to the peasants, otherwise they’ll act out.’” Wasn’t I clear that this is a method I use to keep *myself* sane & sober & under control and not someone else? >Wasn’t I clear that this is a method I use to keep *myself* sane & sober & under control and not someone else? No, you were not at all clear. You made general statements about people needing control of passion more than the ability to get past fixed ideas.
means the same thing as: don’t let the means become the end – and in particular, don’t shelter yourself from the end by focussing on acting out the means. “All it takes is the understanding that if you don’t behave in efficient and ethical ways, you become far more likely to eat consequences you don’t like.” Probably you are either too lucky (as in: genes) or have solved this problem so long ago that you have forgotten it, but the whole tragedy of human life for most folks incl. myself revolves around being able to understand something will cause bad consequences and is therefore ain’t cool on an intellectual level, and being able to conjure the actual willpower to do something about it – the separation between thought and action, should do and actually do, and so on. Let me be blunt – you as a non-drinking, non-smoking, non-druggie, non-aggressive, very few regrets etc. etc. type of perfect self-control guy have actually no clue, no hands-on experience about how huge a problem this normally is. I mean essentially this problem _is_ _history_ _itself_ – as in, history is generally and by large a history of (usually state-organized) violence done by people who, theoretically speaking, did not approve of violence in general. Just did it anyway. >being able to understand something will cause bad consequences and is therefore ain’t cool on an intellectual level, and being able to conjure the actual willpower to do something about it What makes you think I don’t experience this? As one example, I have lost substantial amounts of money because I often can’t summon up the willpower to do boring paperwork.
I understand the need to self-discipline and control passions as well as you do, I think – I just don’t accept that doing so requires fixed ideas in the Buddhist sense. “You made general statements about people needing control of passion more than the ability to get past fixed ideas.” Yes, but not in the sense of “others suck, but I am so cool”, but in the sense of “I suck, am roughly aware why, and therefore can understand why and how most folks suck too”. @esr >That’s a misinterpretation. When I say “Kill the Buddha” I’m not arguing for nihilism. I don’t mean that your most cherished premise has to stay dead, if you can confirm it with observation and sound reasoning after you’ve stopped assuming it. The point is to regularly go through the process of checking your assumptions. Sorry for being unclear. I meant to convey the non-aggression principle survived my attempt at murder. >Sorry for being unclear. I meant to convey the non-aggression principle survived my attempt at murder. Oh, good. >Lack of sanity is not just an error in a logical process but has much deeper psychological reasons – intellectual vanity, a certain kind of self-centeredness, self-absorbedness, and so on. True. So what? When I am exercising my sanity by killing the Buddha, I don’t have to care about this level of explanation much more than I care about the details of glucose respiration when I exercise my muscles. The point of the practice is to become sane, not analyze why I am unsane. Actually, killing the Buddha eventually forces you into confrontation with all these issues. But I think it does so in a more constructive and more direct way than if you intellectualize them beforehand without connection to an actual Buddha-killing. Fascinating debate between ESR and Shenpen. (I do not have much knowledge about this and might be adding noise and also it is off-topic.)
But on an intuitive level, this seems to parallel the debates which ultimately led to the demise of Buddhism (perhaps of the Zen variety) in India where it originated. >But on an intuitive level, this seems to parallel the debates which ultimately led to the demise of Buddhism (perhaps of the Zen variety) in India where it originated. That is a very interesting claim and I would like to hear you expand on it. >”Always Be Closing” means the same thing as “The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means”: Oh, I got that immediately, yes. What I don’t get is the connection to killing the Buddha and the problem of fixed ideas. “The point is to regularly go through the process of checking your assumptions.” I agree, but my observation is that The World ™ does a pretty good job of forcing one to do so with regularity (assuming one is not pathologically set in her ways and beliefs, or isolated and oblivious to people and events around her.) I guess the major difference is probably that disciplining yourself to do so periodically is less painful and traumatic than having it shoved in your face. >I guess the major difference is probably that disciplining yourself to do so periodically is less painful and traumatic than having it shoved in your face. Exactly so. The more times you kill the Buddha voluntarily, the better you cope when brute reality squashes your beliefs. always closing Musashi and the Oar vs a fixed idea of how the battle should be….yes I get that. Musashi an outcast who thought through his own methods easily defeating received training…yes. Perhaps you meant that unless you intend to make the sale, there is no point in trying. That is one dimensional thinking. Solving problems for people is fine. Exploitation for personal gratification is not. That was more the message I got from the jackass with the watch. I’m not a closer. I find them rather annoying, like a dog always wanting to hump my leg. 
>That was more the message I got from the jackass with the watch. The jackass with the watch can be interpreted in two ways, and I was conscious of both of them as I viewed that. On the one hand, yes, he’s a jackass who exploits people. On the other hand, there is a Musashi-like purity there, a ruthless willingness to discard all attachments other than to the goal. He’s repellent and admirable at the same time. I think, also, we’re supposed to both sympathize with the scruffy humanity of the people he’s berating and see them as losers. The scene was well-constructed to forbid simple, one-dimensional interpretation. I still don’t see what the clip has to do with my original post, though. >That is a very interesting claim and I would like to hear you expand on it. I can only expand my claim or the idea that I have. It might not be close to the reality and in that case I hope that it doesn’t waste your time. But from what I have read (and it isn’t a concentrated study), the reason for decline of buddhism as a philosophy in India isn’t that well understood. The usual claim that I have heard is that it was brahmannical opposition which wanted to preserve or win back its elite status lost to buddhism. That may be true but the reasons for defeat also parallel some of the arguments which I think Shenpen makes. “the original mistake of the mind that it does not recognize that the thing experienced and the ability or awareness to experience are both parts of the mind.” This is a central part of Sankara’s philosophy, I think. And it is believed that it was Sankara who ultimately defeated all the other major philosophies including buddhism by debate. (http://en.wikipedia.org/wiki/Adi_Shankara) It would be great to know more about how those debates actually went. The commentaries alas seem to end up eulogizing Sankara and present the debate mostly from the viewpoint of the victor, it seems. I know my arguments above are very airy fairy.
But I confess my ignorance and the desire to learn more about this aspect. Thanks. >I know my arguments above are very airy fairy. But I confess my ignorance and the desire to learn more about this aspect. Thanks. I don’t know enough about that corner of Buddhist history to respond or critique. This goes on my list of topics to investigate, thanks. If one were to be interested in reading further on General Semantics, is Korzybski’s “Science and Sanity, an Introduction to Non-Aristotelian Systems and General Semantics” the right place to start or are there better starting points these days? >If one were to be interested in reading further on General Semantics, is Korzybski’s “Science and Sanity, an Introduction to Non-Aristotelian Systems and General Semantics” the right place to start or are there better starting points these days? There are better. I would recommend People in Quandaries or Language in Thought And Action. Eventually you’ll want to read Korzybski, but he’s very dense and a terrible writer. I attempt to kill my buddha by intentionally reading the opponents of my ideas whenever I encounter such a source, thereby torturing my self-worth in the process. The only problem is that socialists tend to be the most BOOOORING to read. The lesson I learned is that sometimes you cannot always rely on your opponents to kill your buddha. You must do it yourself. “The mind is like a dog. His master points at the moon, but he barks at the hand.” He? Also: Is there a crucial difference between this koan and the common western expression “missing the forest for the trees” that I’m not seeing? >He? That is, the dog’s master points at the moon, but the dog barks at the hand. Also: Is there a crucial difference between this koan and the common western expression “missing the forest for the trees” that I’m not seeing? Yes, quite an important one.
When you miss the forest for the trees you over-focus on concrete details and are unable to abstract or reason about larger entities properly. The koan is much more general but centers on the opposite problem – focusing only on abstractions, not on concrete reality. If you meet the Buddha on the net, put him in your killfile. Thanks for the diff. It should be ‘it’ instead of ‘he’, is what I meant. @ esr >I think, also, we’re supposed to both sympathize with the scruffy humanity of the people he’s berating and see them as losers. The >scene was well-constructed to forbid simple, one-dimensional interpretation. I got that. I found myself irritated that none of them had the balls to simply walk. This type of motivational technique is not foreign to me, having been on both the giving and receiving end. It is appropriate and effective in certain skill based settings with the right personalities. I enjoyed it, but I don’t see how it fits. I was trying to find some way that it could be construed to fit. I await enlightenment. …It should be ‘it’ instead of ‘he’, is what I meant… Oh, I thought you were making a very subtle joke. How disappointing. Always test your assumptions for predictive value. Especially test the assumptions you receive from others for predictive value. Always point out the things you do not know when you present the results of your test; doing so is a sign of wisdom, not ignorance. Interesting. My first reaction to that saying was, “Thou shalt not suffer a Buddha impersonator to live,” but that definitely doesn’t fit in with what I know of Buddhist philosophy. I think I’ll start trying out this exercise. It’s probably less obnoxious and more effective than my current techniques. Currently, I accomplish this kind of thing in two ways.
First of all, I can clarify my understanding of a topic (especially some kind of scientific or mathematical topic) by explaining it to someone who doesn’t understand it at all; this forces me to look at my beliefs through totally new eyes. Second of all, argument can be a useful device for exposing every aspect of some topic, especially for things which might be more subjective. Being in an argument forces me to notice and justify my assumptions, looking at my beliefs with critical eyes. I enjoy both of these so much that I routinely hold imaginary lecture or debate sessions in my head against strawmen or long-dead historical figures. The problem is that I end up lecturing people in abstract concepts which they may or may not care about, or arguing with every single opinion someone has. I like to think I can keep it civil and interesting, but I often get the idea people just want me to shut up. I think I’ll try your way instead. Thanks!

All living things are creatures of habit. Acquiring new and beneficial habits frequently takes time and concentrated effort. “Killing the Buddha” seems to be an effective method for acquiring (or reinforcing) the reason habit. This is today’s useful insight.

There should be another heuristic related to this applied to the test results of others, suggesting at least mild skepticism if they don’t do it.

someone’s been reading yudkowsky….

>someone’s been reading yudkowsky….

No, though that was a reasonable conjecture and not one that either Eliezer or I could be offended by. Eliezer and I have a lot of common influences, and we have critiqued each other’s work. This can make it look like we’re influencing each other a lot, but it’s actually mostly parallel development.

JonB: I consider that to be enfolded into the second heuristic. But yes, that is a telling sign. Anyone who hedges on the results of their tests of assumptions may not have attacked them with sufficient rigor.
milieu: “… the reason for decline of buddhism as a philosophy in India isn’t that well understood.”

I’ve seen two reasons for it.

1) Linguistic shift. The oldest Hindu scriptures were written when Sanskrit was the demotic speech in India. After some centuries, the demotic evolved into Pali, the language of the Buddhist scriptures, which gave Buddhism an edge in accessibility. But after more centuries, the demotic changed again, and Buddhism lost that edge. New Brahmanic texts in modernized Sanskrit and modern Hindi became available.

2) Buddhism became institutionalized in monasteries, which became isolated from the general population. (Buddhism AFAIK lacks a “secular clergy” of “parish priests”.) Then in the 500s, India was invaded by the Ephthalite (“White”) Huns, who sacked the monasteries for their accumulated wealth. This destroyed the institutional fabric of Indian Buddhism.

I really meant to point to the old maxim itself rather than its Glengarry Glen Ross appearance. (Baldwin’s scene does reflect some light on the phrase, though, and it’s probably the phrase’s highest-profile appearance in the wider culture.)

I interpret the saying about the Buddha a bit differently to esr. As best I can guess, it is a gentle reminder that the object of the exercise is not to join the Buddha Fan Club, or to construct a self-image as any kind of Buddhist, or even just to “do Buddhism” as a thing that you do. Yet it’s easy to let these things obscure the actual objective, or even to subtly mistake them for the actual objective, and the objective cannot be attained in that case. This bears comparison to running harmlessly through the motions of selling (‘close’ being somewhat equivalent to ‘cut’ here), or mistaking the techniques of swordfighting for the purpose they’re intended to serve.

It is common for monks in various disciplines to say: “Everything I have to say has been said many, many times before” – thus (hopefully) differentiating the teachings from the teacher.
On another note, “teachings” is a bad word, especially in western culture, because it signifies a spoon full of stuff that you are expected to ingest then regurgitate. Zen comes from within; it is realized, then sought. Teachers, parents, wives, husbands, children, jobs, fortunes, fame and even respect are always incidental. The western synonym for this is “Do as I say, not as I do, but pay attention to what I just did!”, which is quite confusing, but documenting personal growth in a public forum is very useful. I really enjoy your writing.

I am not trolling by asking this, but how would you feel if you woke up tomorrow homeless, without money, while seeing your wife on TV with someone else through a store front window? Would you experience anger, or perhaps existential angst initially? There is Zen in principle, then there is also the real world. You, I and most others reading this are simple, humble householders who just want to be happy. Once you achieve a position in life, be it parent, elder, son, daughter or any other combination, you can’t conveniently commit yourself to “nothing”. We attach ourselves to responsibility just as we do to romance. Thinking about it, there is very little difference between the two.

Additionally, your writing style has changed recently. I’m not sure if you have more time to refine your writing, or perhaps you’ve reached some kind of epiphany. Whatever the case, I hope it continues :) I’m critiquing only as a reader, of course.

I highly recommend surrounding yourself with extreme poverty for at least three years. I suspect you’ve seen it, but perhaps you don’t remember. That suggestion has been made many, many times before :)

>I am not trolling by asking this, but how would you feel if you woke up tomorrow homeless, without money while seeing your wife on TV with someone else through a store front window? Would you experience anger, or perhaps existential angst initially?

Anger, probably. Existential angst, no.
I know who I am and what I have dedicated myself to.

>You, I and most others reading this are simple, humble householders who just want to be happy.

Actually, I have much larger objectives than that, and have made what I consider good progress on some of them.

>We attach ourselves to responsibility just as we do to romance

Correct. I’m not actually a Buddhist – I don’t have perfect detachment as my goal. I write about Zen techniques because I think they are useful tactics for calm and clarity, not because I’ve bought into the Buddhist prescription. That I have studied and rejected.

>Additionally, your writing style has changed, recently.

How would you describe the difference?

>I highly recommend surrounding yourself with extreme poverty for at least three years. I suspect you’ve seen it, but perhaps you don’t remember.

I have been surrounded with extreme poverty, living in the Third World as a child, though not directly immersed in it. This does change one’s perspective.

>the difference between “I am not a Boltzmann brain” and “I am a Boltzmann brain” has no observable consequences

Yes it does. If you are a Boltzmann brain that popped into existence this instant, then nothing you experience going forward should be expected to have any correlation at all with your apparent memories.

But let me discard this line of argument and try making my point differently. A rationalist, in order to function, must, to at least some limited extent, assume his own sanity. That is, he must assume that his own thoughts are coherent from one moment to the next – that at least some of his beliefs are not merely the product of having been led around by Descartes’ demon. You will correctly retort that the negation of this hypothesis is unfalsifiable, but without its affirmation, nothing is falsifiable.
Without first believing that you are sane, you cannot formulate the framework of rationalism, because you cannot have confidence that notions such as “falsifiable” make any sense, or that they mean the same thing to you as they did a moment ago.

So, my contention is that you do have one sacred Buddha, and that that Buddha is worth keeping sacred. This Buddha is the premise that the universe makes sense.

>So, my contention is that you do have one sacred Buddha, and that that Buddha is worth keeping sacred. This Buddha is the premise that the universe makes sense.

We’re really back at the same place I got to in my previous reply. When I posited that the universe came into being as if it had a generative history, part of that bundle was that it have causal machinery so it continues to evolve consistent with that history.

You say the universe making sense is a sacred Buddha. I see it differently – it’s not a premise you can negate, it’s a boundary condition without which trying to reason simply doesn’t work. Game over. If you don’t assume causal coherence, you can’t play at all.

What would it mean to say the Universe doesn’t make sense? We certainly observe causality and consistency around us – so this would have to unpack to “causality and consistency could break down at any moment”. This is the same situation I’ve described before with respect to miracles and occasionalism and leads to the same result.

@ Rich Rostrom

The two reasons seem plausible. However, I don’t think the first one is entirely true. AFAIK, Sanskrit was never the demotic language. In fact, it was always an evolved form of the demotic (which was called Prakrit). Sanskrit actually means refined, while Prakrit roughly means demotic. So the Hindu scriptures (which usually were never written in the first place and only memorized, I guess) were in the hands of the well-educated priestly class (the brahmins).
Buddhism arose in some ways as a rebellion against it, and yes, it became highly monastic, and like all systems of thought its success perhaps became its own weakness. Plus the later invasions by the Huns etc.

However, Buddhism in India produced tremendous philosophical and logical works. Ultimately, Sankara in fact incorporated many of those elements in his philosophy but still managed to defeat them in debate, AFAIK. In fact he was even called a quasi-buddhist by his detractors. Also, at least during his time there is no mention of any Hun invasion. Thanks for your comments.

So what wonders has Buddhism produced in the lives of regular men? What wars has it stopped, what children has it fed, what diseases has it cured? I think the “Buddha” I will kill is Buddhism and all its foolish, cowardly introspective progeny.

>I think the “Buddha” I will kill is Buddhism and all its foolish, cowardly introspective progeny.

You have completely missed the point. You can’t kill that Buddha, since it obviously wasn’t your cherished premise to begin with. You’d have to do this exercise by killing whatever religious or antireligious premise you are actually carrying in your head.

As with the logical positivists and their unverifiable verificationist’s creed, we have a recursive problem here. To be consistent, you also have to “kill the Buddha” of killing the Buddha. Ad infinitum. At some stage, you just have to accept axioms sine occisione.

>At some stage, you just have to accept axioms sine occisione.

No, it all grounds out in predictive utility, which in turn grounds out in the survival imperative. You make this error (“have to accept axioms sine occisione”) through not having an account of what reason is for. As I have written before, we build theory because we need predictions because we have goals because we are survival machines.

(It is reasonable for you to be confused about this, since we’re actually beyond the forward edge of Western philosophy here.
The analytical school doesn’t get this yet – actually, the only academic philosopher to get this far AFAIK was Heidegger, and he was a crazy Nazi so he’s generally ignored these days. Someday I may write a book about this.)

More specifically, when you say that to be consistent, you also have to “kill the Buddha” of killing the Buddha, you’re confusing means for ends. Killing the Buddha isn’t a goal, it’s a technique. You evaluate the technique with respect to whether it achieves the goal.

@Dan “Oh, I thought you were making a very subtle joke. How disappointing.”

Sorry to disappoint, just your run-of-the-mill grammar stickler. In this case, it’s because it’s especially jarring for me when I read the koan. I’m supposed to let go and find the inner meaning, but all I can do is think ‘that doesn’t make sense’. Which is probably part of lesson one in zen meditation, and would get me a scowl from the sensei.

Hmmm. Introducing subtle mistakes in koans as a means to make them “harder” and emphasize the ‘can’t be taught with words’ point… Worth a few minutes of pondering.

Robert Speirs:

>>>So what wonders has Buddhism produced in the lives of regular men? What wars has it stopped, what children has it fed, what diseases has it cured? I think the “Buddha” I will kill is Buddhism and all its foolish, cowardly introspective progeny.<<<

Considering the examples you’ve chosen, I have a reasonable guess as to what you would suggest for a replacement, and invite you to consider ITS overall track record as well, especially in 20th-century Europe.

>So what wonders has Buddhism produced in the lives of regular men? What wars has it stopped, what children has it fed, what diseases has it cured? I think the “Buddha” I will kill is Buddhism and all its foolish, cowardly introspective progeny.

That’s actually a pretty decent start, but I’m not sure you’re aware of that.

>That’s actually a pretty decent start, but I’m not sure you’re aware of that.

LOL. I’m pretty sure he isn’t.
esr: aren’t you begging the question there (in the proper, technical, perhaps overly-Western sense of the term ;-) ? What do you think of Hegel, now that we’re talking Philosophy? Aren’t you describing here something not a million miles away from his notion of dialectic progress (albeit with a forced pump)?

>esr: aren’t you begging the question there (in the proper, technical, perhaps overly-Western sense of the term ;-) ?

Sorry, I don’t understand the question. What question am I supposed to be begging? (Argh. Too many metalevels…)

>What do you think of Hegel, now that we’re talking Philosophy? Aren’t you describing here something not a million miles away from his notion of dialectic progress (albeit with a forced pump)?

No. Hegel wasn’t an empiricist at all. Beyond that I can’t say much, as I can’t make much sense out of Hegel and suspect this is because there’s not much there under the verbiage. He’s one of those philosophers who dishes out a lot of word salad that looks intellectually compelling and as though it ought to mean something profound, but seems to have little or no reference outside itself.

As a survey for our understanding, after all this gratuitous Buddhacide, where have ye gory felons arrived WRT deity/s? When asked my religion I typically state that I am a non-practicing agnostic. ;-) I find ESR’s adoption of neo-paganism interesting. I don’t seem to have an itch for the ritualistic, so I have never sought to scratch it.

> Sanity is the process by which you continually adjust your beliefs so they are predictively sound.

I like this definition, which implies that the Scientific Method itself is sanity. Though true, the definition is probably too intellectual for most people. I’ve been searching for phrases that force people to question themselves objectively:

“The only way to be right is to admit you’re wrong.”

“Party loyalty is un-American.”
“I don’t believe in Santa’s workshop, Atlantis, or heaven.”

“No imaginable force could ever stop a good God from saving a helpless child from suffering.”

“Governments kill. Governments steal. We can do better.”

“The only legitimate purpose of a government is to prevent other governments from forming.”

>it’s not a premise you can negate, it’s a boundary condition without which trying to reason simply doesn’t work. Game over. If you don’t assume causal coherence, you can’t play at all.

I can’t make sense of this because I don’t understand the distinction you’re drawing between a premise and a boundary condition. Please describe to me your model of what reasoning consists of, and define “premise” and “boundary condition” in terms of that model.

>I can’t make sense of this because I don’t understand the distinction you’re drawing between a premise and a boundary condition. Please describe to me your model of what reasoning consists of, and define “premise” and “boundary condition” in terms of that model.

Most generally, reasoning is what you do to generate testable predictions about observables from other observables. In order for reasoning to work, there have to be causal relationships among observables that are within the computational capacity of your mind to model. More strictly, observables have to exhibit causal regularity – that is, like causes produce like effects.

Theories are ways of compressing lots of observations and intermediate predictions into formulas that allow us to make final predictions. Some kinds of reasoning – theoretical reasoning – consist of rearranging and checking your theories. While this kind of reasoning is still ultimately motivated and justified by prediction of observables, it may be abstracted enough that the connection to observables is no longer obvious. The most obvious example in this category is formal reasoning in pure mathematics.

A “premise” is an assumption or input into a theory.
In formal mathematical reasoning, a premise is an axiom. In kinds of reasoning closer to observables, a premise may be a truth claim about observables that is outside the scope of the theory you are considering – so it is testable, but not testable within the theory of which it is a premise.

In formal mathematical reasoning, it is often possible to negate a single premise of an axiomatic system and arrive at a different but still internally consistent theory. For example, set theory can be done with either the Continuum Hypothesis or its negation and will still be consistent, generating different predictions only for some recondite questions about infinite sets.

In more empirical reasoning, it is also possible to negate or modify the premises of a theory and yield a sheaf of theories which predict different observables. For example, we could assume different values of G in Newton’s Law and derive different expectations about planetary orbits. Only one value would be true, i.e., predictive of actual observables, but the point is that premises can mutate without making reasoning impossible.

OK, now let’s consider “the universe doesn’t make sense”. The only way I can unpack this is as an assertion that either (a) the universe does not have causal regularity, or (b) it has causal regularity but modeling the regularity is beyond the computational capacity of human minds. In either of those cases you can’t reason at all, even in the most general sense of “reason”. This is why I described “the universe makes sense” as a boundary condition rather than a premise. It’s a boundary condition of reasoning, not a premise you can negate and still be in a possible world where reason is applicable.

ESR: “He’s one of those philosophers who dishes out a lot of word salad that looks intellectually compelling and as though it ought to mean something profound, but seems to have little or no reference outside itself.”

Heh.
Of course, people like Derrida would claim that this is the tragedy (and the joy) of all language. But that really would be a meta-level too high. That said: although I don’t usually subscribe to the Sapir-Whorf hypothesis or its relatives, Hegel really does make more sense in German. Whether it is a sense for which it’s worth learning German ist umstritten.

A great deal of my reputation with my co-workers is derived from my willingness to kill the Buddha. They come to me asking advice on how to resolve a particular problem, and I often have to get them to separate what they know from direct observation from the conclusions they’ve reached from those observations, and the observations and conclusions relayed by others. I often refer to the scene in the original Bad News Bears in which the coach writes on the chalkboard “ASSUME”, then divides the word into three parts “ASS|U|ME”, while saying “when you ASSUME, you make an ASS out of U and ME.” It’s a handy, colorful rubric. You’d be surprised how often those assumptions turn out to be false.

>A great deal of my reputation with my co-workers is derived from my willingness to kill the Buddha. They come to me asking advice on how to resolve a particular problem

Your account is wrong in at least one important way. You cannot kill someone else’s Buddha; the most you can achieve is to make it visible to them so they can kill it themselves. I’m not saying this to sound mystical or profound, although I concede that is a likely side-effect :-). It’s very valuable to expose the hidden premises in your co-workers’ thinking, but until they actually negate the incorrect premises in their own minds, enlightenment is not achieved.

@ Monster

>They come to me asking advice on how to resolve a particular problem, and I often have to get them to separate what they know from direct observation from the conclusions they’ve reached from those observations, and the observations and conclusions relayed by others.
In my line of work it usually takes the following form:

“I’m having this problem.”
“Why are you doing that?”
“Because of X.”
“Disregard X…”
“…Oh!”

You start them out flying in some direction and come around periodically to save them from their death spirals. I’ve tried to train myself to fly on instruments as much as possible, but it still helps to have somebody to bounce things off of. I find myself in a spiral from time to time.

Certainly, questioning one’s assumptions is a good thing to do… when one has time to do it. During the rest of our lives, we need a framework of ‘rules’ or ‘morals’ or ‘laws’ to guide us in choosing our actions in response to different situations. An example would be a policeman responding to a call. He’s been given all sorts of examples in training, by his sergeant at roll calls, and by other cops. When he gets to the scene, he needs to “pull one out” and use it. There’s no time for complicated analysis. It’s the difference between a quick table lookup and some elaborate algorithmic process. You can’t kill the Buddha and the perp at the same time. Those of you who carry might run into this very problem.

Another example would be things that we are taught by our elders. They attempt to pass on things that have proved useful to them, so that we don’t have to waste our lives endlessly reconfirming things. My father told me, “There’s no such thing as ghosts.” This simple bit of advice has saved me hours and hours that I’ve seen other people waste on the subject.

I would tell people to not question their basic assumptions ordinarily, but BE READY to do so when confronted with some situation that demands it. The hard part is learning to recognize when such a situation has come up.

Aaron Davies: On reflection, I’ve decided you were onto something. I didn’t write the OP because of anything I read in Eliezer Yudkowsky’s stuff, but I think he did influence it on a different level.
That the form of the essay was possible – that philosophical writing about the tactics of rationality can be packaged as a hortatory essay in blog-post length – is probably something I learned from him. Of course, in exploring this form he has been refining precedents of which he and I are both aware.

Well, I kill my own Buddhas all the time, and I generally succeed in getting them to kill theirs when an opportunity arises. They state a conclusion; I ask them “what makes you think that?” and we’re off to the races.

>More strictly, observables have to exhibit causal regularity – that is, like causes produce like effects.

I think you need to weaken this definition in one of two ways in order to allow for quantum indeterminacy. Either allow causes to be unobservable (allowing for many-worlds or hidden-variable interpretations of QM), or say that like causes produce independent random samples of a distribution of effects (allowing for Copenhagen-style interpretations).

The model of reasoning you’ve described doesn’t address how a reasoner can make allowances for his own fallibility. The obvious way to work this into the model is to assign a confidence level to each axiom and derived proposition, with confidence gradually diminishing through extended chains of deduction. Under this model, a reasoner who believes himself to be insane (which I think you can interpret as being a strictly weaker proposition than “the universe doesn’t make sense”) is one whose confidence diminishes completely after a single inference step. Such a reasoner is still able to go through the motions of making inferences even though he will never acquire any new beliefs as a result. He would be engaging in a degenerate kind of reasoning, but I think it still deserves to be called reasoning.

>I think you need to weaken this definition in one of two ways in order to allow for quantum indeterminacy.

Yeah, this is all technical stuff involving the definition of “like” (causes and effects).
If we extend the model of reasoning so confirmation is probabilistic rather than boolean-valued, there are no problems of principle here – you just end up in Eliezer Yudkowsky’s world, where Bayes’s Theorem is king.

>The model of reasoning you’ve described doesn’t address how a reasoner can make allowances for his own fallibility.

True. Your solution is sound in principle, I think.

Shenpen did say that he applied the idea of unquestionable rules to himself – it’s in his second-to-last paragraph. More generally, the problem of destructive impulsiveness is more serious for some people than others. If your problem is not getting around to paperwork and losing some non-crucial money as a result, it’s in a different category, practically speaking, from liking to get drunk and having no inhibitions about driving in that state. Someone in the latter category might need solid rules to stay alive long enough to work on acquiring some wisdom.

I would reject the entire discussion since it seems to be based on the idea that there is some central truth and “I/we” know it but you don’t. If you truly believe there is some value to anything “Zen” or “Buddhist” then put it out there in 125 words or less and let’s discuss it. My experience is that even difficult concepts can be explained to lay people in simple terms IF the person explaining it understands it. AND that all complicated discussions that require endless additional verbiage to explain or prove are in fact nothing more than frauds that cannot stand the light of day and must be shrouded in code words, mystery and priest-like double-speak.

>If you truly believe there is some value to anything “Zen” or “Buddhist” then put it out there in 125 words or less and let’s discuss it.

What, the 563 words in the original post was too long for you? There are no secrets here, no mysteries, no elites.

Actually, thinking about it, what you describe is very similar to Derrida’s “deconstruction” in its original form.
And again, apologies for mentioning him again ;-)

Dear Eric, your writing is clear and complete. I agree with the sanity process you explain: very simple to understand. I just wonder how much fuzzle and noise is made because you used such entangled language. Maybe it sounded mystic to somebody… Thanks for posting! Hope you can read in Spanish… you can join us someday as a reader in Loquaris. Greetings, Celita Palacios

I like this a lot. Write more self-help stuff plz :)

ESR, how do you feel about the type of beliefs where the belief itself influences the truth-value? I.e., if you believe you are Casanova, you will be a better ladies’ man than if you believe that you are the lamest man in the world. Are you against the type of mind hacking offered by e.g. rituals, NLP, personal development, affirmations, etc? Clarification is needed here! :) If you believe you are a great hacker…..

>Correct. I’m not actually a Buddhist – I don’t have perfect detachment as my goal. I write about Zen techniques because I think they are useful tactics for calm and clarity, not because I’ve bought into the Buddhist prescription. That I have studied and rejected.

Elaborate on this plz.

ESR says: Um, not right now. Maybe I’ll post about Buddhism sometime.

Also what do you think about this: http://en.wikipedia.org/wiki/Depressive_realism

I likes me some optimism bias in order to git r done :)

@esr

>Hegel wasn’t an empiricist at all. Beyond that I can’t say much, as I can’t make much sense out of Hegel and suspect this is because there’s not much there under the verbiage. He’s one of those philosophers who dishes out a lot of word salad that looks intellectually compelling and as though it ought to mean something profound, but seems to have little or no reference outside itself.

Slightly OT, but thank you. I have gotten the same sense from Hegel every time I have tried to read it, but lacked the words to describe my distaste.
I was afraid maybe I had missed something, but I’m relieved others have come to similar conclusions.

>I have gotten the same sense from Hegel every time I have tried to read it, but lacked the words to describe my distaste. I was afraid maybe I had missed something, but I’m relieved others have come to similar conclusions.

This isn’t only a Hegelian problem :-). Sadly, most pre-analytic philosophy is like this.

Fight for the sake of fighting, without considering happiness or distress, loss or gain, victory or defeat, and you shall never incur sin. Bhagavad Gita 2.38

“Sanity is the process by which you continually adjust your beliefs so they are predictively sound.”

I *really* like that. I hope you don’t mind if I spread that.

>ESR, how do you feel about the type of beliefs where the belief itself influences the truth-value? Ie if you believe you are Casanova, you will be a better ladies’ man than if you believe that you are the lamest man in the world. Are you against the type of mind hacking offered by eg rituals, NLP, personal development, affirmations, etc? Clarification is needed here! :)

ESR has referred in the past to rituals/spells/whatever they’re actually called in Wicca that allow him, for instance, to effect real improvement in the condition of a sprained joint (someone else’s), and hypothesized that mind-hacking is involved. I’m not sure how this is related to other, more self-directed forms of mind-hacking.

I like this essay a lot. This is actually something we teach to pre-initiates in the Georgian Tradition. The idea is that you’re supposed to go through this with all of your ideas, beliefs, values, assumptions and attitudes about everything you think about. Politics, religion, sex, personal relationships, business relationships, work, life in general, and so forth. You’re supposed to throw out any ideas that conflict with observation and sound reasoning.
Pat Patterson referred to this process in his original tapes as “unbrainwashing” and he specifically emphasized that you should do this not just once, but that it’s a continuing process that you constantly do your whole life. I personally wish more people would do this; there exists a lot of ignorance out there due to people simply believing what they are told – especially by someone with an agenda.

It is extremely difficult to articulate why one is more captivated and entertained by reading one thing vs. the other, as any explanation is of course entirely subjective. When writing my comment, I thought perhaps you had more time to write. In fact, I fully expected you to say “Oh, yeah, I have more time to write lately.” I can only say that as a wordsmith, you have become more clever and entertaining. It seems like you are having more fun than usual while writing. I’ve been reading your blog and occasionally commenting for over a year, and I have enjoyed your essays during that time. Lately, I’ve enjoyed them more. I don’t mean to be incoherent, but your question is rather difficult to answer :) Why do I love red snapper grilled in a salt cast after being smothered with lemon and habanero peppers? I don’t know, I just like it :)

Gone with the Wind:

>My experience is that even difficult concepts can be explained to lay people in simple terms IF the person explaining it understands it. AND that all complicated discussions that require endless additional verbiage to explain or prove are in fact nothing more than frauds that cannot stand the light of day and must be shrouded in code words, mystery and priest-like double-speak.

My experience is that some things are very difficult to explain to people who already have a strong preconception. Or, more exactly, it’s possible to put together a very nice, concise explanation, but people are unlikely to assimilate it.
For example, take the Alexander Technique: it’s a method of improving coordination by showing people how not to interfere with their kinesthetic sense. It’s amazing how hard it is to get this across. Many people reflexively say that they have no coordination. More people assume it has something to do with posture. And I’ve done something I thought was Alexander Technique for years, seeking contextless perfection and making my coordination worse rather than better. My teachers were reasonably competent, but there were things they were saying that I just wasn’t hearing.

*****

>esr: I’m not actually a Buddhist – I don’t have perfect detachment as my goal. I write about Zen techniques because I think they are useful tactics for calm and clarity, not because I’ve bought into the Buddhist prescription. That I have studied and rejected.

I think Buddhism is the result of a very serious effort to solve a particular problem, that of suffering. Buddha didn’t have anything in particular that he wanted to do, so he wasn’t working on how to act well. The tremendous influence of Buddhism and the value which has come out of it are indicators of both how important it is to really dig into problems and how rarely anyone does so.

>I don’t mean to be incoherent, but your question is rather difficult to answer :) Why do I love red snapper grilled in a salt cast after being smothered with lemon and habanero peppers? I don’t know, I just like it :)

I was just thinking about this. You can be more specific, like “I like the combination of saltiness and piquancy, and the acidity of the lemon juice brings the fish flavor to the foreground” or “I like how the fish meat is solidified by cooking it in lemon juice”, but at the end it always comes down to “I like X. Why? Because.”

Sorry, forgot to quote Tim in my last post.
Eric, there are a number of things “wrong” with your wp setup, if I may point them out here:

– Inside a post, there’s no clicky to get to the homepage, or anything but the immediate adjacent posts. Usually the blog header is a link to the homepage. Yes, I could and I do use Ctrl-L to retype, or Backspace, but it’s something one expects from a blog.
– Also, a “Go to the top” link. Sure, there’s the Home button, but it doesn’t work inside forms.
– you could enable ‘pretty urls’ that display the post title, like “esr.ibiblio.org/kill-the-buddha/”, if your wp allows
– I think for the level of conversation in this blog, http://wordpress.org/extend/plugins/quote-comments/ or something like it could be very useful. I remember the failed test of a comments plugin you did previously, but this might be a bit easier on the processing. Just don’t install it just before you leave for the weekend :)

>Eric, there are a number of things “wrong” with your wp setup, if I may point them out here:

Not bad ideas, but I have no idea how to accomplish any but the last. I’ll look into upgrading.

@Adriano: Actually, I think that there are real, scientific reasons why people tend to like certain foods, though specifically why Tim especially likes red snapper grilled in a salt cast after being smothered in lemon and habanero peppers (which sounds good to me, too) may be difficult to identify. For example, people tend to crave salt partly because it is needed by the body, since it contains copious amounts of the electrolyte sodium, adequate levels of which are required for central nervous system function. But then again, Tim may like red snapper grilled in a salt cast, while he may eschew some other salty meats. At the end of the day, however, people tend to just shrug and say that there’s no accounting for taste.
Fortunately, in my case, copious amounts of Lutjanus gibbus (humpback red snapper) are caught every day in the waters of the Gulf of Mexico and Tampa Bay, hence fresh supplies are readily available at my local fish market.

If done correctly and with quality coarse salt, the fish does not come out salty. The meat is very light with a nice citrus / spicy finish. The salt cast ensures that the fish is cooked evenly and dehydrates the skin. When opened and the skin peeled away, you see the natural fish oils on the surface of the meat. Believe it or not, many people add salt to taste. Except for the lemon / pepper, the meat is quite bland. Trout, if cooked in a similar manner, comes out with much more subtle flavor instead of the usual strong fish taste. If done incorrectly, the worst that you end up with is the taste of pickled fish, which is also quite delicious. It’s an interesting technique to try. You can use a conventional oven, but best results are obtained using a very hot grill.

I began being curious about the science of taste when I realized that my daughter inherited my love of very spicy food. I always attributed that to an endorphin rush, but that doesn’t explain the chain of events where a toddler first decided that atomic chili smells good. I was amazed to see her finish it. When I cook chili, the kitchen is often uninhabitable. Most other children her age would have refused to taste it, just on the grounds of the very strong aroma.

@Tim My wife has prepared other kinds of fish cooked in this way, and indeed they were not very salty. As for the spicy food, I’ve had extensive training: years ago, I couldn’t finish a bowl of spaghetti with chili pepper powder; today, I can eat a rocoto pepper “marinade” I make myself without flinching much. Wasabi and horseradish mustards were involved in the progression. I wonder if it’s the same for most people?
Back on topic, this has been a very useful thread, and it does dovetail nicely with *Harry Potter and the Methods of Rationality*, which has also been a wonderful boon to me.

Yours, Tom

Thank you, this is a nice set of rules to deal with one’s ideas. So… You don’t let them rot and you swing at them (to kill them)? Train them in your mind with rejection? (Isolating your own memes (ideas) and shrinking the pool (memory, processing capacity)? Or just releasing more vicious memes (old & new ideas) to eat them? :) )

@boril Now *there’s* an interesting pair of concepts to play with… a person’s knowledge/belief system as a competitive ecosystem of memes, vs. the more traditional cognitive psych-type view as something more in the nature of an architectural structure (schema) that each of us “builds” internally over time.

@Matt: I wouldn’t say that those are necessarily competing ideas. Memes are just whatever is copied from one person to the next, whether that be songs, stories, habits, etc.: the concept revolves around the idea that these are passed by imitation. That these concepts replicate with variation is rather like genes — hence the term “meme.” Cognitive psych definitely recognizes that a significant portion of a person’s knowledge/belief system arises from cultural influences. So I don’t think those ideas are necessarily opposing or competing at all.

@Matt
>ecosystem of memes, vs. psych-type

Structures of ideas over a wild plain? :) Even then, one usually has to dismantle old buildings in order to build a new one (there was a saying about this, I don’t remember it right now). Evolutionarily speaking, it could be said that those old buildings have “survived” for so long because they are either beautiful (we like them, they “exploit” that) or they are useful & sturdy (we need them). In a sense, the structures are like “living beings” on the “open plains” (cities). We the humans are the evolutionary process that “selects” which buildings would survive in the future. Now….
one has just to shift his/her mind from the literalities and switch the context from “architectures in a city” to his own mind, with “buildings” as ideas, and see the same process just as it is in an ecosystem :-/

@Morgan
>Memes are just whatever is copied from one person to the next

Yeah, copying with variation, though, as ideas are much more unstable than genes. Splicing (and?) & splitting (or?) is permitted on ideas, and the results are undefined/unknown from time to time. You never know what mutant may come out…

>There’s a reason Zen shifted to a conceptualist position after the Sixth Patriarch; subjectivism doesn’t work. It doesn’t predict the way observed reality behaves, which is that there’s stuff out there that has prior causal power over mind even if we never have access to it that’s unmediated by our nervous systems.

Given that the purpose of the whole thing is changing the mind and not changing external reality, what counts is how well it predicts the functioning of the mind and the methods used to influence it, and not external reality. BTW, seeing the observer and the thing observed as different aspects of the same mind is pretty much a part of Zen too; this is not where the difference lies. For example, there is the Zen story of two monks looking at a flag and debating whether it is the flag that moves or the wind that moves, and a third one tells them that it is the mind that moves.

The difference rather lies in the importance or priority one assigns to getting rid of fixed ideas versus getting rid of disturbing emotions (passions, vices, whatever). Fixed ideas are the ultimate reasons for disturbing emotions, but treating the symptom is very often more urgent than removing the cause, which is a much longer process, and often one does not have time to wait for that. And the reason for this difference is simply explained by a temptation-free monastic lifestyle vs. living in the storm of lay, worldly life.
One typical technique is, for example, treating jealousy, envy, anger and even fear/anxiety/stress (because fear is often suppressed anger) by wishing as much happiness as possible to the person we are jealous / envious / angry / afraid of. I tried it; it is difficult, but works fast. As in, minutes, hours, days. Realizing the untruth of the underlying fixed ideas AND getting this intellectual realization down from head into the heart takes much longer, as in, decades.

Finally, I’m not saying fixed ideas are necessary for suppressing disturbing emotions / passions / vices. I am saying removing stuff in the wrong order can be dangerous: if I, for example, remove the fixed idea that there are some objective rules to morality but do not remove those fixed ideas that make me greedy or hateful, that can easily turn me into a criminal. As long as one has these fixed ideas and these passions, having some extra fixed ideas, such as that there are objective rules to morality, can be useful; even a completely ridiculous fixed idea like the existence of an omnipotent god can be useful on this level. If one does the right thing, and removes first the kinds of fixed ideas that tend to cause the most dangerous passions, then of course reliance on other fixed ideas is not necessary. But very few attempt it and even fewer do it successfully.

On Hegel: 70% of the problem is that he wrote bullshit and 30% is translation problems. Take the title of his most famous work, for example: usually translated as The Phenomenology of Spirit, it should be translated as The Phenomenology of Spirit, Soul, Reason, Mind, Psyche And Pretty Much Anything Happening In Your Head – der Geist is an umbrella term for a Cartesian ghost in the machine; it encompasses any activity that does not involve motoric movements, from mathematics to religious faith.

The only philosophers you can really trust are those like Socrates (not Plato!) and Diogenes, because their only real claim was that people don’t know jack.
Yours, Tom

In my case, mathematics always involves motoric movements. :-P

Python 2.6.4 (r264:75706, Dec 7 2009, 18:43:55)
[GCC 4.4.1] on linux2
Type “help”, “copyright”, “credits” or “license” for more information.
>>> 2+4
6

Does religion predate the soul, or the other way around – that’s what I’d like to know.

I seriously don’t get the idea of killing Buddha. Is it to murder someone in your mind? Scary. I’m probably not knowledgeable in this area, but I found meditating a very useful application to calm down your mind during stress. =D
true
true
true
null
2024-10-12 00:00:00
2010-10-05 00:00:00
null
null
ibiblio.org
esr.ibiblio.org
null
null
5,208,844
http://nakedsecurity.sophos.com/2012/03/27/facebook-profile-viewer-rogue-application/
Naked Security – Sophos News
null
September 26, 2023

Naked Security
Insights, education and advice on cybersecurity issues and threats

August 22, 2023
## Smart light bulbs could give away your password secrets

August 17, 2023
## S3 Ep148: Remembering crypto heroes

August 16, 2023
true
true
true
null
2024-10-12 00:00:00
2023-08-15 00:00:00
https://news.sophos.com/…For-SN.png?w=640
website
sophos.com
Sophos News
null
null
3,129,045
http://itunes.apple.com/gb/app/gmusic-a-native-google-music/id472342018?mt=8&ls=1
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,131,300
https://www.theguardian.com/uk-news/2016/jun/07/destroyers-will-break-down-if-sent-to-middle-east-admits-royal-navy
Destroyers will break down if sent to Middle East, admits Royal Navy
Richard Norton-Taylor
The Royal Navy’s fleet of six £1bn destroyers is breaking down because the ships’ engines cannot cope with the warm waters of the Gulf, defence chiefs have admitted.

They also told the Commons defence committee on Tuesday that the Type 45 destroyers’ Rolls-Royce WR-21 gas turbines are unable to operate in extreme temperatures and will be fitted with diesel generators.

Rolls-Royce executives said engines installed in the Type 45 destroyers had been built as specified – but that the conditions in the Middle East were not “in line with these specs”.

Earlier a Whitehall source told Scotland’s Daily Record: “We can’t have warships that cannot operate if the water is warmer than it is in Portsmouth harbour.”

The problem with the engines, which the Ministry of Defence initially dismissed as “teething problems”, first became clear when HMS Daring lost power in the mid-Atlantic in 2010 and had to be repaired in Canada. The ship, built by BAE Systems, needed repairing again in Bahrain in 2012 after another engine failure.

The first warning signs emerged in 2009 when the Commons defence committee warned that “persistent overoptimism and underestimation of the technical challenges combined with inappropriate commercial arrangements” would lead to rising costs. The navy wanted 12 ships but ended up with six.

The Type 45 has an integrated electric propulsion system that powers everything on board. The ships are vulnerable to “total electric failures”, according to one naval officer in an email. That leaves the ships without propulsion or weapons systems.

Gen (now Lord) David Richards, the former chief of defence staff, repeatedly questioned the relevance of expensive kit procured by successive governments. “We have £1bn destroyers trying to sort out pirates in a little dhow with RPGs [rocket-propelled grenades] costing $50, with an outboard motor [costing] $100,” he said.
The cost of preparing the destroyers, expected to amount to tens of millions of pounds, will increase pressure on the defence budget and may be one of the reasons for delays in the construction of Type 26 frigates on the Clyde.

Delaying the Type 26 frigate programme will mean the UK fleet would be “grossly inadequate” for the tasks ahead, Lord West told the defence committee. The Labour peer and former first sea lord said the Tory government was being “economical with the actualité” when it said the frigates are being held up by design changes when “the reality is there is not enough money in the MoD”.

A Unite convener told the committee that the union expects construction to be delayed until early 2018, leaving the Clyde shipyards overstaffed for at least two years.

John Hudson, managing director at BAE Systems, said: “We are in detailed negotiations with the MoD as to the build programme for the Type 26.

“Until those discussions are complete I am not in a position to be able to advise what the cut steel date might be for the Type 26 programme.”

An MoD spokesperson said: “The Type 45 was designed for world-wide operations, from sub-Arctic to extreme tropical environments, and continues to operate effectively in the Gulf and the South Atlantic all year round.”
true
true
true
Defence chiefs tell Commons committee the £1bn ships are likely to suffer engine failure in warm waters
2024-10-12 00:00:00
2016-06-07 00:00:00
https://i.guim.co.uk/img…cef29bdc1fdc54de
article
theguardian.com
The Guardian
null
null
5,893,114
http://www.washingtonpost.com/blogs/wonkblog/wp/2013/06/17/can-bitcoin-make-peace-with-washington/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
28,171,223
https://en.wikipedia.org/wiki/Triskaidekaphobia
Triskaidekaphobia - Wikipedia
null
# Triskaidekaphobia

**Triskaidekaphobia** (/ˌtrɪskaɪˌdɛkəˈfoʊbiə/ *TRIS-kye-DEK-ə-FOH-bee-ə*, /ˌtrɪskə-/ *TRIS-kə-*; from Ancient Greek *τρεισκαίδεκα* (*treiskaídeka*) 'thirteen' and Ancient Greek *φόβος* (*phóbos*) 'fear')[1] is fear or avoidance of the number 13. It is also a reason for the fear of Friday the 13th, called *paraskevidekatriaphobia* (from Greek *Παρασκευή* (*Paraskevi*) 'Friday', Greek *δεκατρείς* (*dekatreís*) 'thirteen' and Ancient Greek *φόβος* (*phóbos*) 'fear') or *friggatriskaidekaphobia* (from Old Norse *Frigg* 'Frigg', Ancient Greek *τρεισκαίδεκα* (*treiskaídeka*) 'thirteen' and Ancient Greek *φόβος* (*phóbos*) 'fear'). The term was used as early as 1910 by Isador Coriat in *Abnormal Psychology*.[2]

## Origins

The supposed unlucky nature of the number 13 has several theories of origin. Although several authors claim it is an older belief, no such evidence has been documented so far. In fact, the earliest attestation of 13 being unlucky is first found after the Middle Ages in Europe.

### Playing cards

Tarot card games have been attested since at least around 1450 with the Visconti-Sforza Tarot. One of the trump cards in tarot represents Death, and is numbered 13 in several variants. In 1781, Antoine Court de Gébelin writes of this card's presence in the Tarot of Marseilles that the number thirteen was *"toujours regardé comme malheureux"* ("always considered as unlucky").[3] In 1784, Johann Gottlob Immanuel Breitkopf cites Gébelin, and reaffirms that the tarot card number 13 is death and misfortune (*"Der Tod, Unglück"*).[4]

### 13 at a table

Since at least 1774, a superstition of "thirteen at a table" has been documented: if 13 people sit at a table, then one of them must die within a year.[5] The origin of the superstition is unclear and various theories of its source have been presented over the years.
In 1774, Johann August Ephraim Götze speculated:[5]

>Da ich aus der Erfahrung weis, daß der Aberglaube nichts liebers, als Religionssachen, zu seinen Beweisen macht; so glaube ich bey nahe nicht zu irren, wenn ich den Ursprung des Gegenwärtigen mit der Zahl XIII, von der Stelle des Evangelii herleite, wo der Heiland, bey der Ostermahlzeit, mit zwölf Jüngern zu Tische saß.

>Since I know from experience that superstition loves nothing better than religious matters as its proofs, I believe I can hardly be mistaken when I derive the origin of the present superstition about the number XIII from the passage of the Gospel where the Savior sat at table with twelve disciples at the Easter meal.

From the 1890s, a number of English-language sources reiterated the idea that at the Last Supper, Judas, the disciple who betrayed Jesus, was the 13th to sit at the table.[6] The Bible says nothing about the order in which the Apostles sat, but there were thirteen people at the table.

In 1968, Douglas Hill in *Magic and Superstitions* recounts a Norse myth about 12 gods having a dinner party in Valhalla. The trickster god Loki, who was not invited, arrived as the 13th guest, and arranged for Höðr to shoot Balder with a mistletoe-tipped arrow. This story was also echoed in *Holiday folklore, phobias, and fun* by folklore historian Donald Dossey, citing Hill.[7][8] However, in the *Prose Edda* by Snorri Sturluson, the story about Loki and Balder does not emphasize that there are 12 gods, nor does it talk about a dinner party or the number 13.

## Events related to "unlucky" 13

- On Friday, October 13, 1307, the arrest of the Knights Templar was ordered by Philip IV of France. While the number 13 was considered unlucky, Friday the 13th was not considered unlucky at the time.
The incorrect idea that their arrest was related to the phobias surrounding Friday the 13th was invented early in the 21st century and popularized by the novel *The Da Vinci Code*.[9]
- In 1881 an influential group of New Yorkers, led by US Civil War veteran Captain William Fowler, came together to put an end to this and other superstitions. They formed a dinner cabaret club, which they called the Thirteen Club. At the first meeting, on January 13, 1881, at 8:13 p.m., thirteen people sat down to dine in Room 13 of the venue. The guests walked under a ladder to enter the room and were seated among piles of spilled salt. Many "Thirteen Clubs" sprang up all over North America over the next 45 years. Their activities were regularly reported in leading newspapers, and their numbers included five future US presidents, from Chester A. Arthur to Theodore Roosevelt. Thirteen Clubs had various imitators, but they all gradually faded due to a lack of interest.[10]
- The British submarine HMS *K13* sank on 29 January 1917 while on her trials after diving with a hatch and some vents still open. Although she was raised and 48 men were rescued, 32 sailors and civilian technicians died. When repaired, she was renamed *K22* but was later involved in multiple collisions with other K-class submarines on 1 February 1918 in which a total of 103 men were killed, an event known as the Battle of May Island.[11] In the subsequent British L-class submarine, the number L13 was not used.[12]
- Apollo 13 was launched on April 11, 1970, at 13:13:00 CST and suffered an oxygen tank explosion on April 13 at 21:07:53 CST. It returned safely to Earth on April 17.[13][14]
- Friday the 13th mini-crash was a stock market crash that occurred on Friday, October 13, 1989.
- Vehicle registration plates in Ireland are such that the first two digits represent the year of registration of the vehicle (i.e., 11 is a 2011 registered car, 12 is 2012, and so on).
In 2012, there were concerns among members of the Society of the Irish Motor Industry (SIMI) that the prospect of having "13" registered vehicles might discourage motorists from buying new cars because of superstition surrounding the number thirteen, and that car sales and the motor industry (which was already doing badly) would suffer as a result. The government, in consultation with SIMI, introduced a system whereby 2013 registered vehicles would have their registration plates' age identifier string modified to read "131" for vehicles registered in the first six months of 2013 and "132" for those registered in the latter six months of the year.[15][16][note 1]

### Effect on US Shuttle program mission naming

The disaster that occurred on Apollo 13 may have been a factor that led to a renaming that prevented a mission called STS-13.[18][19] STS-41-G was the name of the thirteenth Space Shuttle flight.[20] However, STS-41-C was the mission originally numbered STS-13.[21][22] STS-41-C was the eleventh orbital flight of the space shuttle program.[23] The numbering system of the Space Shuttle was changed to a new one after STS-9.[24] The new naming scheme started with STS-41B, the previous mission was STS-9, and the thirteenth mission (what would have been STS-13) would be STS-41C.[24] The new scheme had the first number stand for the U.S.
fiscal year, the next number was a launch site (1 or 2), and the next was the number of the mission numbered with a letter for that period.[24] In the case of the actual 13th flight, the crew was apparently not superstitious and made a humorous mission patch that had a black cat on it.[24] Also, that mission re-entered and landed on Friday the 13th, which one crew member described as being "pretty cool".[24] Because of the way the designations and launch manifest work, the mission numbered STS-13 might not have actually been the 13th to launch, as was common throughout the shuttle program; indeed it turned out to be the eleventh.[25][23] One of the reasons for this was when a launch had to be scrubbed, which delayed its mission.[26] NASA said in a 2016 news article it was due to a much higher frequency of planned launches (pre-Challenger disaster).[24] As it was, the Shuttle program did have a disaster on its *one hundred and thirteenth* mission going by date of launch, which was STS-107.[27] The actual mission STS-113 was successful, and had actually launched earlier due to the nature of the launch manifest.[28]

### Omission of 13th rooms, floors, houses and decks

Many ships, including cruise liners, have omitted having a 13th deck due to triskaidekaphobia. Instead, the decks are numbered up to 12 and skip straight to number 14.[29] Hotels, buildings and elevator manufacturers have also avoided using the number 13 for rooms and floors based on triskaidekaphobia.[30] Several notable streets in London lack a No. 13, due to “the superstitions of London,” including “Fleet Street, Park Lane, Oxford Street, Praed Street, St. James’s Street, Haymarket and Grosvenor Street.”[31]

## Notable people with triskaidekaphobia

- Arnold Schoenberg[32]
- Franklin D. Roosevelt[33]
- Sholom Aleichem[34]
- Stephen King[35]
- Nick Yarris[36]
- Ángel Nieto[37]

## Similar phobias

- Number 4 (Tetraphobia).
In China, Hong Kong, Taiwan, Singapore, Japan, Korea, and Vietnam, as well as in some other East Asian and South East Asian countries, it is not uncommon for buildings (including offices, apartments, hotels) to omit floors with numbers that include the digit 4, and Finnish mobile phone manufacturer Nokia's 1xxx-9xxx series of mobile phones does not include any model numbers beginning with a 4 (except Series 40, Nokia 3410 and Nokia 4.2). This originates from Classical Chinese, in which the pronunciation of the word for "four" (四, *sì* in Mandarin) is very similar to that of the word for "death" (死, *sǐ* in Mandarin), and remains so in the other countries' Sino-Xenic vocabulary (Korean *sa* for both; Japanese *shi* for both; Vietnamese *tứ* "four" vs. *tử* "death").
- Friday the 13th (Paraskevidekatriaphobia or Friggatriskaidekaphobia) is considered to be a day of bad luck in a number of western cultures. In Greece and some areas of Latin America, Tuesday the 13th is similarly considered unlucky.[note 2]
- Number 17 (Heptadecaphobia). In Italy, perhaps because in Roman numerals 17 is written XVII, which can be rearranged to VIXI, which in Latin means "I have lived" but can be a euphemism for "I am dead." In Italy, some planes have no row 17 and some hotels have no room 17.[38] In Italy, cruise ships built and operated by MSC Cruises lack a Deck 17.[39]
- Number 39 (Triakontenneaphobia). There is a belief in some parts of Afghanistan that the number 39 (thrice thirteen) is cursed or a badge of shame.[40]
- Number 616 (Hexakosioihekkaidekaphobia) or 666 (Hexakosioihexekontahexaphobia), which come from the Biblical number of the beast.

## Lucky 13

In some regions, 13 is or has been considered a lucky number.
For example, prior to the First World War, 13 was considered to be a lucky number in France, even being featured on postcards and charms.[41] In more modern times, 13 is lucky in Italy except in some contexts, such as sitting at the dinner table.[42] In Cantonese-speaking areas, including Hong Kong and Macau, the number 13 is considered lucky because it sounds similar to the Cantonese words meaning "sure to live" (as opposed to the unlucky number 14, which in Cantonese sounds like the words meaning "sure to die").

Colgate University was started by 13 men with $13 and 13 prayers, so 13 is considered a lucky number. Friday the 13th is the luckiest day at Colgate.[43]

A number of sportspeople are known for wearing the number 13 jersey and performing successfully. On November 23, 2003, the Miami Dolphins retired the number 13 for Dan Marino, who played quarterback for the Dolphins from 1983 to 1999. Kurt Warner, St. Louis Rams quarterback (NFL MVP, 1999 & 2001, and Super Bowl XXXIV MVP) also wore number 13. Wilt Chamberlain, 13-time NBA All-Star, has had his No. 13 jersey retired by the NBA's Golden State Warriors, Philadelphia 76ers, and Los Angeles Lakers, as well as the Harlem Globetrotters and Kansas University Jayhawks, all of which he played for. In 1966, the Portugal national football team achieved their best-ever result at the World Cup final tournaments by finishing third, thanks to a Mozambican-born striker, Eusebio, who scored nine goals at the World Cup – four of them in a 5-3 quarterfinal win over North Korea – and won the Golden Boot award as the tournament's top scorer while wearing the number 13.
In the 1954 and 1974 World Cup finals, Germany's Max Morlock and Gerd Müller, respectively, played and scored in the final, wearing the number 13.[44] More recent footballers playing successfully despite wearing number 13 include Michael Ballack, Alessandro Nesta, and Rafinha.[45] Among other sportspeople who have chosen 13 as their number are Venezuelans Dave Concepción, Omar Vizquel, Oswaldo Guillén and Pastor Maldonado, due to the number being considered lucky in Venezuelan culture. Swedish-born hockey player Mats Sundin, who played 14 of his 18 NHL seasons for the Toronto Maple Leafs, setting team records for goals and points, had his number 13 retired by the team on 15 October 2016. Outside of the sporting industry, 13 is used as a lucky number by other individuals, including Taylor Swift, who has made prominent use of the number 13 throughout her career.[46]

## See also

## Notes

1. The main reason for this was stated to be to increase the number of car sales in the second half of the year. Even though 70% of new cars are bought during the first four months of the year, some consumers believe that the calendar year of registration does not accurately reflect the real age of a new car, since cars bought in January will most likely have been manufactured the previous year, while those bought later in the year will be actually made in the same year.
2. Tuesday is generally unlucky in Greece because of the fall of Byzantium on Tuesday, 29 May 1453.[47] In Spanish-speaking countries, there is a proverb: *En martes no te cases, ni te embarques* 'On Tuesday, do not get married or set sail'.[48]

## References

1. "triskaidekaphobia - Origin and meaning of triskaidekaphobia by Online Etymology Dictionary". *Etymonline.com*. Retrieved 5 November 2017.
2. "Abnormal Psychology" p. 319, published in 1910, Moffat, Yard and company (New York). LCCN 10-11167.
3. Court de Gébelin, Antoine (1781). *Du jeu des tarots* (in French).
4. Breitkopf, Johann Gottlob Immanuel (1784). *Versuch, den Ursprung der Spielkarten, die Einführung des Leinenpapieres, und den Anfang der Holzschneidekunst in Europa zu erforschen. 1, Welcher die Spielkarten und das Leinenpapier enthält* (in German). p. 20.
5. Götze, Johann August Ephraim (1774). *Neue Mannigfaltigkeiten: eine gemeinnützige Wochenschrift. 1. 1773/74 (1774), Woche 041, 05.03.1774*.
6. Cecil Adams (1992-11-06). "Why is the number 13 considered unlucky?". The Straight Dope. Retrieved 2011-05-13.
7. "Friday the 13th Superstitions Rooted in Bible and More", *National Geographic*.
8. "Why is Friday the 13th Considered Unlucky?", *Mental Floss*.
9. Robinson, John J. (1990). *Born in Blood: The Lost Secrets of Freemasonry*. M. Evans & Company. ISBN 978-0-87131-602-8.
10. Nick Leys, "If you bought this, you've already had bad luck", review of Nathaniel Lachenmayer's *Thirteen: The World's Most Popular Superstition*, Weekend Australian, 8–9 January 2005.
11. Jan Meecham (17 April 2017). "The calamity k-class submarines of the First World War". *Roger (Jan) Meecham*. Retrieved 12 February 2023.
12. Hillbeck, I W. "1916–1945: L Class". *rnsubs.co.uk*. Barrow Submariners Association. Retrieved 12 February 2023.
13. "13 Things That Saved Apollo 13, Part 9: Position of the Tanks". *Universetoday.com*. 21 April 2010. Retrieved 5 November 2017.
14. "What Really Happened to Apollo 13". *Spaceacts.com*. Retrieved 5 November 2017.
15. "2013 number plates to be changed to avoid 'unlucky 13'", *Irish Independent*, 24 August 2012.
16. "2013 Number Plates To Be Changed To Avoid 'Unlucky 13'". *Irish Independent*.
17. Evans, Ben (2007). *Space Shuttle Challenger: Ten Journeys into the Unknown*. Springer. ISBN 978-0387496795. Retrieved 30 May 2012 – via Google Books.
18. Almeida, Andres (5 December 2016). "Behind the Space Shuttle mission numbering system". *nasa.gov*. Retrieved 5 November 2017.
19. Evans, Ben (2012). *Tragedy and Triumph in Orbit: The Eighties and Early Nineties*. Springer Science & Business Media. p. 211. ISBN 978-1461434306.
20. "Challenger mission No. 6 (13th shuttle program mission overall)". *Orlando Sentinel*. 41-G. Retrieved 5 November 2017.
21. "James D. A. van Hoften" (PDF). Oral History Project. NASA Johnson Space Center. 5 December 2007. Archived (PDF) from the original on 9 October 2022. Retrieved 20 July 2013.
22. "Terry J. Hart" (PDF). Oral History Project. NASA Johnson Space Center. 10 April 2003. Archived (PDF) from the original on 9 October 2022. Retrieved 20 July 2013.
23. "STS-41-C Information". *Astonautix*. Archived from the original on December 27, 2016. Retrieved 28 December 2017.
24. Almeida, Andres (5 December 2016). "Behind the Space Shuttle Mission Numbering System". NASA. Retrieved 17 January 2017.
25. Evans, Ben (2012). *Tragedy and Triumph in Orbit: The Eighties and Early Nineties*. Springer Science & Business Media. ISBN 978-1461434306.
26. Evans, Ben (2012). *Tragedy and Triumph in Orbit: The Eighties and Early Nineties*. Springer Science & Business Media. p. 211. ISBN 978-1461434306 – via Google Books.
27. "The Columbia Disaster". *Space Safety Magazine*. Retrieved 28 December 2017.
28. Warnock, Lynda. "NASA STS-113". KSC. *nasa.gov*. Retrieved 17 January 2017.
29. Mallinson, Harriett (December 8, 2018). "Cruise secrets: Why can passengers never find this mysterious location on a cruise ship?". *The Express*. Retrieved February 26, 2024.
30. "Cruise secrets: Why can passengers never find this mysterious location on a cruise ship?". *www.express.co.uk*. 8 December 2018. Retrieved 2020-09-30.
31. Peter Ackroyd (2000). *London: The Biography*. Chatto & Windus. p. 207. ISBN 978-0385497701.
32. "Fear of 13 and Other Superstitions Embedded in Compositions". *WQXR-FM*. May 13, 2016. Retrieved 2019-01-24.
33. Perry, Warren. "Fears of the Fearless FDR: A President's Superstitions for Friday the 13th". *Smithsonian Institution*. Retrieved 2019-01-24.
34. Haberman, Clyde (May 17, 2010). "A Reading to Recall the Father of Tevye". *The New York Times*. Retrieved 2019-01-24.
35. Chan, Melissa (13 October 2017). "Why Friday the 13th Is a Real Nightmare for Some People". Retrieved 8 December 2018.
36. Tobias, Scott (March 29, 2016). "Film Review: 'The Fear of 13'". *Variety*. Retrieved 2019-01-24.
37. Gadd, Mick (3 August 2017). "Angel Nieto dead: Motorsport family in mourning over 13-time world champion". *mirror*. Retrieved 8 October 2018.
38. Harris, Nick (15 November 2007). "Bad Omen for Italy as Their Unlucky Number Comes Up". *The Independent*. London.
39. Sloan, Gene (February 3, 2017). "Photos: MSC Seaside under construction in Italy". *USA TODAY*. Retrieved August 10, 2024.
40. Jon Boone (15 June 2011). "The curse of number 39 and the steps Afghans take to avoid it". *The Guardian*. Retrieved 28 December 2017.
41. Davies, Owen (2018). *A Supernatural War: Magic, Divination, and Faith During the First World War*. Oxford: Oxford University Press. p. 136. ISBN 9780198794554.
42. "Aggiungi un posto a tavola, siamo in 13!" [Add a seat at the table, we are 13!]. *Di cibo e altre storie* [*Of food and other stories*] (in Italian). 13 January 2012.
43. "Lucky 13". Colgate University. Retrieved 20 February 2015.
44. Dpa (1 July 2010). "Unlucky 13, unless your name is Mueller". *Thehindu.com*. Retrieved 5 November 2017.
45. "Football Facts: Who Wears Number 13?". *Thefootballnation.co.uk*. March 2015. Retrieved 5 November 2017.
46. Orlando, Joyce (13 June 2023). "No triskaidekaphobia for Taylor Swift: What's the pop star's connection to 13". *The Tennessean*. Retrieved 25 July 2023.
47. Margarita Papantoniou (13 August 2013).
"Why Are Tuesday and 13 Bad Luck?".*GreekReporter*. Retrieved 28 December 2017.**^**"Tuesday the 13th… the Friday the 13th of the Spanish-speaking world (and vice-versa)".*WordPress*. 13 August 2013. Retrieved 28 December 2017. ## External links [edit]*in Wiktionary, the free dictionary.* **triskaidekaphobia**- 'Unlucky' airline logo grounded BBC, 21 February 2007 - Would you buy a number 13 house? BBC Magazine, Friday, 12 December 2008 - Triskaidekaphobia on MathWorld - Who's Afraid Of Friday The 13th? on NPR
https://www.invinciblemoth.com/p/breaking-into-product-management-entry-points
Breaking into Product Management: key 'entry points' into the field
Alex Stern
# Breaking into Product Management: key 'entry points' into the field

### Product Management discipline is full of career switchers; exploring if there is a single best time to join

A lot of what I strive to cover here is relevant to people working (or looking to work) in many different functions (e.g., SDE, DS, PM, Marketing, and so on). My last three articles all fall into this category:

Today, however, I'd like to focus on a topic that would be most relevant for those considering starting their careers in, or switching to, Product Management.

## What's so different about Product Management?

First of all, I'd like to touch on the topic of what makes Product Management fairly unique compared to the other fields & functions in technology.

For disciplines like Software Engineering or Data Science, the vast majority of people will either begin working in those fields directly out of college (some might get a Masters or a PhD first, but that doesn't really change things), or will do a 'hard' career switch at some point, meaning they reset any career progress they had in their previous field and have to start from scratch as a Software Engineer or a Data Scientist, usually joining as a junior SDE or junior DS first, and then rising through the ranks over time (that's usually the path graduates of various 'bootcamps' have to take, for example).

For some of the other disciplines, like Marketing or Strategy, it's not uncommon to see experienced hires who didn't work in those fields (at least not in the tech industry) before successfully make the switch to tech without having to lose their previous career progress. Some will take the business school route to make this change more straightforward, but many would just find jobs & switch (e.g., it's quite common for former management consultants to move to functions such as Strategy or Operations in the tech industry).
The reason why this is possible is that in functions like Strategy or Marketing, almost none of the things you do are unique to the technology industry, and so if you worked in one of those fields for a while, even if it was in a different industry or space, your experience will most of the time be relevant for tech firms as well (you'll still need to learn some nuances / pick up some context, of course, but that's just par for the course).

Product Management, however, is a different beast entirely. The discipline of Product Management came to be in the 1980s & a lot of it is quite unique & specific to the tech industry, since it was the tech companies that created it in the first place (some other industries — e.g., FMCG — also have Product Managers, but those jobs are nothing like PM jobs in the tech space). Because of this, nothing from the other industries/fields quite maps to what Product Managers do in tech, and that's exactly what makes it challenging to break into Product Management as an experienced hire (that, and also the fact that everyone and their mothers want to be Product Managers nowadays, it feels like).

That's not to say that you cannot break into Product Management if you already have some years of experience under your belt. You absolutely can do that, but there are some ways that work better than others, and the first step towards successfully executing such a move is to learn about & understand the key 'entry points' to Product Management that most people who work in the field today used, which is the focus of this post.

## Key 'entry points' to Product Management

There are **3 key 'entry points'** most people end up taking to get into Product Management:

1. Join an APM program out of college
2. Join as a PM / Senior PM out of business school
3. Switch to the PM discipline later on as an experienced hire

Let's take a look at those one by one.

### APM programs

APM stands for 'Associate Product Manager'.
APM is usually the most junior title for Product Managers at the companies where such a title exists (e.g., at Google, it roughly maps to L3; see more about 'big tech' leveling here). APM programs are typically 2-year rotational programs aimed at bringing in recent college grads & then training them to become highly productive product managers over time.

The first APM program was created by Google in 2002 and proved to be very successful, so a number of other similar programs were created over the years, e.g., Meta's Rotational Product Manager Program, Uber's APM program & Salesforce's APM Program. There are also some well-known programs that don't quite fit the definition of an APM program, but have the same idea behind them, most notably Microsoft's Aspire Program & Kleiner Perkins Fellows Program (both these programs aren't limited to Product Management either, and instead exist for a variety of disciplines; also, both have 2 possible 'entry points' for PM roles: 1) right out of college, and 2) out of business school).

The idea behind APM programs is fairly straightforward: since Product Management is such a unique role, instead of having people work somewhere else for a few years, it makes more sense to bring bright college grads on board from the very beginning & then train them to be great PMs internally. As part of a 2-year program, APMs typically get to experience 2-3 different products/teams, and also receive centralized training/mentoring.

Companies like Uber & Google spend a lot of resources to make sure APMs get a great experience, and more senior PMs are usually happy to pitch in to help train APMs — both because it's important for the company, and because it's often a good way to get someone to help out with your product & gain some early management experience yourself along the way.
At the end of the program, APMs typically choose a permanent team to work on (oftentimes, one of the teams they spent a rotation working with) & also get promoted to the next level (L4 at Google / Meta / Uber). Note that L4 at those companies is also the typical entry point for new hires joining out of business school, as well as for many of the early- to mid-career experienced hires joining from other firms. All those folks will usually have more than the 2 years of experience that APMs gain by the time they finish the program, so in a way, APMs are getting a pretty sweet deal.

APMs also get to learn both the ropes of the Product Management discipline & how to navigate the company they work for during the program, which often puts them on a fast career track. I've seen some examples of extremely high promo velocity for people who started as APMs, where a person would rise to be a GPM or even a Director at top-tier firms within 10 years of graduating college — that's not an easy feat even for APMs, of course, but it would be virtually unheard of for anyone who joined the Product Management discipline later on in their career.

Finally, it's worth discussing whether there are any drawbacks to becoming a PM right out of college. In my opinion, the answer is 'not really'. APM programs at top-tier tech companies are very competitive, so if you think you're interested in trying your hand at Product Management and you can secure a spot in one of those, you should take it — worst case scenario, you can apply to business school / go do something else after 2 years (and get some impressive experience on your resume in the meanwhile).

Of course, one cannot really turn back the clock — so unless you're reading this post while you're still in college, chances are it might be too late for you to consider the APM route. Not to worry though — if that's you, just keep reading.
### Going to business school to become a PM

Next, a very popular (for better or worse) option to break into Product Management is to go to business school to get your MBA, do an internship while there, and then become a PM after you graduate.

**Note:** Before I continue, I'd like to call out the fact that this is the route that I myself ended up taking a number of years ago: I started my career by working at an early-stage venture capital firm, and then went to business school with the explicit goal of switching to Product Management afterwards. In the end, I'd say it worked out pretty well for me (see the 'About' section on this website, if you'd like to learn more about my journey). That being said, I think there are a number of pros & cons of taking this route one should be aware of, which is what I am going to focus on in this section.

So, is it a good idea to go to business school in the hopes of becoming a PM afterwards? The answer, as with most things in life, is 'it depends'. Let's unpack it below (note that I'm going to use the words 'business school' and 'MBA' interchangeably from this point on).

#### Have a plan

Overall, my personal belief is that business school is one of the best routes to change careers (not just in the context of Product Management, but in general too). The way MBA degrees are structured, the flexibility they allow for, the network & dedicated career services you get — all of it can be very useful for someone looking to change fields/industries, be it to Product Management or to something else.

The caveat is, you really have to have a plan, ideally before you even arrive on campus. Business schools are often advertised as places where you can spend a lot of quality time figuring out what you want to do next. That, in my opinion, is a bit disingenuous — while there might be some situations / careers where doing so is fine, anyone looking to switch to Product Management after graduating should really be focused & have a plan from day 1.
The specifics of the plan can vary based on a person's background, previous experience, exact goals and so on (this probably warrants a separate article) — but having a plan is essential.

#### Understand your strengths & weaknesses

Next, you have to understand that in a field like Product Management, especially now that it has gotten so competitive over the last few years, not everyone can get the top jobs straight out of business school. Someone who has spent a few years as a software engineer or a data scientist at a top-tier company before coming to business school will probably have a much easier time securing interviews (and likely passing them), compared to a person looking to switch from a completely unrelated field. That's inevitable, and nothing to get too frustrated about — unlike with some of the other fields, PM interviews are about more than just raw brainpower, dedication, or the ability to network your way in — there are certain backgrounds that can be quite useful for Product Management, and others that won't be relevant at all.

There are certain advantages to this situation too, provided that you do a thorough reflection on, and can play to, your strengths & weaknesses. For one, you don't have to go to the very top-tier school if your goal is to break into Product Management — it's very possible that you'll get more exposure to tech & more opportunities to network at a place like University of Washington's Foster School of Business, compared to some of the more prestigious schools located on the East Coast, just because of Foster's location (Seattle) & close ties to companies like Microsoft & Amazon that are based in the area. Next, there might be companies for which you would be a naturally better fit, based on your hobbies, past experience, or interests.
And so, while you can’t turn the time back, changing your major in college & becoming a software engineer (same as you cannot go back & do an APM program out of college), you can, for instance, decide to focus on the analytics classes, read up on how one scales SaaS companies, and then leverage your past experience in, say, Sales, to position yourself as a really good PM candidate for the B2B SaaS startups (doing some networking also won’t hurt, of course). #### Keep your mind open That being said, my main advice to people who want to become PMs out of business school is to always look wide & keep your options open, instead of spending too much time trying to optimize for a few select jobs / companies. Recruiting in general is a numbers game, and tech recruiting in particular can be both chaotic & at times dumb (in reality, it’s not so much dumb, as it is optimized for avoiding ‘false positives’, where you hire the wrong people — ‘big tech’ in particular is heavily optimized for this mindset, even if it also means missing out on a ton of good candidates). **Remember:** once you get your foot in the door, it’s a lot easier to switch companies (or, in some cases, switch functions if you’re already working at the right company — you can find some more info on this in one of my previous posts here). So, you should apply to a lot of places, even if some aren’t your dream jobs — it’s much better to have multiple options to choose from, than to have none because you were too picky. #### Which roles can you get out of business school? Most companies (at least, ‘big tech’ companies — it’s a lot harder to generalize for startups) hire business school graduates at the second lowest PM level they have, which is also the same level APMs will graduate to, if the company in question has an APM program. 
For Google / Meta / Uber, that would be the L4 level; for Microsoft, it's usually called L61 (sometimes L62); at Amazon, it's L6, which confusingly carries the 'Senior PM' title, but in reality is still the second-lowest PM level Amazon has & is roughly equivalent to L4 at Google / Meta / Uber. Also, remember that some of the APM-like programs are also open to MBA grads (e.g., Microsoft's Aspire Program & Kleiner Perkins Fellows Program) — those are very much worth considering, since if you get in, you'll get all of the same benefits APMs get, which are manifold (mentoring from experienced folks, the opportunity to experience several products/areas, a chance to quickly build a rather wide network, etc.)

#### Is it a good idea to go to business school to become a PM after all?

That is indeed a million-dollar question. For most people looking to make a career switch, I'm inclined to say 'yes', provided that they spend the time & effort to figure out a plan, and then stick to it. In some cases, you might be better off networking your way into PM roles at your current company, instead of going to business school (e.g., if you are already working at, say, Microsoft or Google, especially in a technical role, you can probably find a way to switch to Product Management internally without going through too much trouble).

Overall, my key advice here is this: **if you already know that you want to be a Product Manager, try to make the switch as early as possible** — and if business school fits into that paradigm, then it's probably worth it for you. That also brings us to the last scenario I wanted to discuss today, which is switching to Product Management as an experienced hire.
### Switching to PM as an experienced hire

Ok, so now let's discuss a situation where you already have years of experience under your belt — way too many to become an APM, or even to go to business school (or perhaps you did go to business school at some point, but then decided to work for a few years in another field before realizing you want to be a PM). This probably means you have something like 7-10+ years of experience — a classic example of what's often called an 'experienced hire' or an 'industry hire'. So, what do you do then?

At a high level, **most of the things that I wrote in the previous section still apply**, with some additional caveats / nuances. To recap:

- Having a plan is still critical — even more so than it is for those going to business school
- Your background / prior experience will still play a role, and there is no way to get around that, so instead you should figure out a way to use it to your advantage
- Targeting top-tier jobs only isn't always the best strategy — or, in some cases, you should indeed shoot for top-tier companies, but accept the fact that you might not get to do straight-up PM work, at least not at first (and that's ok)
- Casting a wide net is super-important, as is not getting discouraged by the rejects you will inevitably receive (everybody gets those)

**Note:** one thing that I'd like to point out is that here, I'm focusing on the cases where one has to do a straight-up career switch to become a PM. If you are already working as a PM at a smaller startup/consulting firm/bank/etc., and want to switch to being a PM in 'big tech', it's a bit of a different story. Making such a change isn't always easy, but doing so is very different from breaking into Product Management for the first time, which is the key focus here.

Above, I listed some of the things that are similar between the 'experienced hire' case & getting hired out of business school. However, there are some differences to be aware of as well.
#### Aim to switch as early as you can

For one, as with any career, the more experience you gain, the more specialized you become. In many cases, that can be a good thing (since that's what makes you valuable as an 'experienced hire') — but in the case of switching careers, this typically works against you. As I mentioned at the beginning, Product Management as a discipline is fairly unique to the technology industry, and so there is no clear equivalent of it in other fields. That means that it's highly unlikely that the experience you gain in another industry (and thus also another discipline) will directly translate to Product Management.

In practical terms, this means that beyond a certain level, it becomes a lot harder for you to make the switch, since you are now considered to be overqualified for the jobs you can realistically perform decently well in. This in turn means that you have to be willing to take a down-level (be it in terms of pay, level of responsibility, or scope) to be able to make such a transition (and would probably also have to convince the folks interviewing you that you are genuinely interested in such a move & will be fine with it).

To illustrate this point, I fairly often encounter people who have spent several years post-MBA working in consulting, and are now looking to make the switch to Product Management in the technology industry for various reasons. I'm going to use the McKinsey system of titles in this example, but the same idea carries over to the rest of the consulting firms as well. As a Senior Associate at McKinsey (0-2 years out of business school), you won't have to face a down-level, drop in pay, etc. — in fact, if you can get an L4 PM position at Google / Meta / Uber / etc., your pay might actually go up.
At the Engagement Manager level (2-5 years out of business school), things are going to get a bit more tricky, since you'd probably want to shoot for L5 PM positions at those firms, but getting those will be a lot tougher, since you've never actually done the job, and so the hiring managers will be quite reluctant to hire you, especially if they also have good candidates with PM experience. Beyond that, at the Associate Partner level & above (so, 4+ years out of business school), it is going to become exponentially harder to break into Product Management, for all the reasons mentioned above.

Note that what I'm describing above isn't unique to the Product Management discipline — in fact, you can see the same patterns in most spaces — but I would argue that the nature of Product Management exacerbates those trends beyond what you might see in other fields. The implication here should be very clear by now: **once you figure out you want to make the switch, you should aim to do it as early as possible**, since staying on your previous path will have very little benefit & at some point might actually start working against you.

#### Not all PM roles are the same — use it to your advantage

Next, I have previously alluded to the fact that PM roles are not all the same — in fact, I believe one would be hard-pressed to find another field that has so many very different jobs gathered under one umbrella title. To give you an example here, being a Data PM on a highly technical platform product is going to be very different from being a UX-focused PM for a fast-growing consumer-facing product, and that in turn would also be very different from being a PM responsible for running a customer engagement program for a mature, well-established B2B product. In fact, I'd argue these 3 roles have very little in common — and yet, they are all PM roles.
This fact can be used to your advantage, since it allows you to look for PM roles that best align with your past experience, and then go after those. Note that you don't have to commit to doing those types of roles forever, either — rather, focus on getting your foot in the door, and then, once you have your PM title, you can work your way to the types of roles you feel truly passionate about.

#### Think hard about the types of companies you want to work for

Finally, while it might seem to be a no-brainer to go after PM roles at 'big tech' companies out of college or after graduating from business school, the more experienced you get, the harder I believe you should think about the types of roles that appeal the most to you. True, many of the advantages of joining 'big tech' firms are still going to be there, but you might also find yourself in situations where 'big tech' companies don't value your past experience much, thus requiring you to take a substantial down-level if you want to join them (I've seen this happen many times), require you to work on products that you don't really care about, or just generally seem like a poor fit for your career aspirations.

At the same time, it's possible that you'll find PM positions that fit your goals / interests / past experience a lot better at startups, or just at companies outside of what's typically known as 'big tech'. If that happens, and you think the role is a good fit, don't stress too much about what could have been had you gotten a position at Google or any of its peers, for a) it's not worth dwelling on it for too long; b) you can always attempt to switch later, once you are working in the field; and c) at some point, the fit should start mattering more than abstract things like prestige.

Hope this article gives you some idea of the various 'entry points' one can take to break into Product Management, and the pros & cons associated with each of them.
Also, if you have any comments / feedback, or just want to share your personal experiences on the matter, please do so in the comments section below!
https://aeon.co/essays/is-the-world-ordered-as-a-system-or-as-a-project
# Is the world ordered as a system or as a project? | Aeon Essays
Paul Kahn
Ludwig Wittgenstein once spoke of the way in which a picture can hold us captive. The picture he had in mind was that of the correspondence theory of truth. We imagine words representing things in the world as if we match a linguistic sign to a real object. We think of children learning a language as if it is a game of connecting signs to objects or actions. This picture comes to mind when we have to give an account of the meaning of a word or the way in which language works. It is, to borrow a phrase from a contemporary of Wittgenstein's, 'ready-to-hand'.

From Wittgenstein, we learn that the picture misleads, rather than clarifies. His *Philosophical Investigations* (1953) tries to get us to think differently by providing puzzles and examples that often point to where the picture runs out. Language, he explains, is more a matter of 'know how' than of knowing what. We use language to get around in the world. It is not as if we first have a world and then assign words to objects in it. We understand a word when we know how to use it. We know how to use it when we can see likeness and differences. Concepts gain meaning within networks of family resemblances: different members of the family can share some features but not others. To use the terms that I will shortly explain, language is not a *project* of assigning signs; it is rather a *system* in which the whole and the elements are both present from the beginning. No words without sentences, and no sentences without an entire language.

Toward the end of his *Philosophical Investigations*, Wittgenstein considers the well-known duck/rabbit illusion. The same image can appear as a duck or as a rabbit. It cannot, however, be seen as both at once; our perception moves back and forth between the two. We cannot decide to see just one.
That *seeing as* will determine a set of expectations and connections – uses – that will sustain different forms of enquiry responsive to different questions and moving in different directions. The image does not change, but the world within which it has meaning changes.

The law, for example, is like that duck/rabbit image. We are not just captured by a picture; we are captured by two pictures of the same phenomenon. The pictures are already at work when we reason within the law: they ground different interpretive methods. Similarly, they are at work when we theorise about the nature of law, that is, when we try to explain the character of legal order. These two pictures are that of project and system.

'Project' imagines law as the product of authors who are free agents capable of acting with intention after some sort of deliberation. For the 'project' imagination, legislation is the paradigm of law. Meanwhile, 'system' imagines law as a well-ordered whole that develops immanently and spontaneously from within individual transactions. System is a relationship of parts to whole, and of whole to parts. For the systemic imagination, the common law is the paradigm. Judges decide individual cases relying on precedents – that is, relying on prior acts of the same sort that they are pursuing. Out of those countless individual decisions, a system of order emerges. That system has identifiable principles – legal norms – but those principles were not themselves the product of an intentional act. The common law of contract, for example, has a systematic order, but the idea of that order did not precede the fact of its appearance. In a project, the idea of order precedes the act; in a system, we discover the idea of order only after the act.

Once we are alert to the distinction of 'project' and 'system', we see that it is by no means unique to law. These two pictures dominate our accounts of order.
Traditionally, those accounts extended into the natural order: is nature God's project or a spontaneous system? Today, the duck/rabbit problem of 'project' and 'system' presents itself whenever we give an account of the human world, from the individual to the society. Do we make ourselves according to an idea or do we realise an inner truth of ourselves? The social sciences approach society as system; the regulatory state imagines it as project.

The picture of a project offers the simplest explanation of the origin of order. Projects can extend from an individual artisan to a creator god; they can involve objects in the world (eg, a house) or social structures (eg, a corporation). A legislature has law-creation as its project; a people can take up the project of creating a constitution. A project has a beginning in the action of a free subject. That subject explains his project by referring to his intentions. Those intentions can reflect a well-thought-out theory or simply the agent's interests. To make a constitution is to take up a project informed by political theory. To make dinner is to take up a project informed by taste and hunger. Even the latter, simple as it is, requires an agent capable of reasoning and intending.

Projects are the way in which a free agent occupies the world. An animal will look for food, but it will not plan its dinner. A bird might build a nest, but that is not a project because the bird could not have decided to experiment with a new design. It could not have been other than it is. That 'might have been' is critical to projects and thus to freedom. In a world of projects, we are always thinking of what we might do, what we might have done, and what we might do better. Projects are successful when they meet their goals; they are redesigned when they fail. Projects then, whether of law or anything else, put at stake not just an idea of order, but also an idea of freedom. Freedom ends where projects end.
If ‘project’ is our duck, then ‘system’ is our rabbit. To explain phenomena on this account is to give a narrative of how the particular fits into the whole. Instead of looking backwards to a point of origin in a decision, we offer a synchronic account of reciprocal support among elements: parts, organs or functions. A system is always greater than the aggregate of its elements. The whole gives meaning to the parts. A system has the curious temporal quality of preceding the elements that constitute its parts. We confront this temporal mystery directly in our experience of an organism. The acorn grows into the oak not as a project, but as the realisation of a systemic whole. It grows into itself spontaneously. It does so without plan or intention. Similarly, we understand an organism when we understand the relation of its parts – organs – to the whole, that is, when we understand its immanent order. If ‘project’ delineates a space of freedom, then ‘system’ delineates a space of life. All life has this systemic quality of spontaneous, immanent order. Systems have the capacity for maintenance and some ability of repair. An injured organism can heal itself; a market in disequilibrium can return to equilibrium. Of course, some systemic disturbances are beyond these capacities: systems do die. Projects, though, ordinarily have no such capacities of repair. When a watch breaks, we take it to the watchmaker for repair. When legislation fails, we go back to the legislature for a new plan. Today, artificial intelligence is challenging that distinction precisely to the degree that we can teach machines to learn and to respond. This effort to endow a project with systemic qualities is not new. We have seen this intersection before.
The US Constitution’s framers thought they were building ‘a machine that would go of itself’, because it was to have some capacities of repair when it lost balance. And, of course, Deists had long believed that God’s project was to create a system with these capacities of maintenance and repair. There would be no return to the divine watchmaker after the project was set out. In a curious coincidence of history, 1776 marks the transformation of both project and system from patterns of theological narrative to narratives of the modern age. Project had been the narrative of creation; system had been that of theodicy. In 1776, both project and system take on a human scale. The *Declaration of Independence* announces that politics can be a project. When government fails to achieve the ends for which it is designed – life, liberty, and the pursuit of happiness – it can be abandoned and then made again. Constitutional construction becomes the most important human project. So much so that the idea of freedom is deeply politicised: we are free when we are self-governing, and we are self-governing when we are the authors of our own laws. To live under laws that are not the product of our projects is, the American revolutionaries believed, to lack freedom, regardless of how just those laws are or how wealthy individuals become. The year 1776 also marks the publication of *The Wealth of Nations* by Adam Smith. That work signals the emergence of a modern idea of system linked to new forms of knowledge. The market is the paradigm of a social system, and economics is the first science of the social. The American constitutional framers thought of themselves as standing in a classical Republican tradition. Smith stands with modern science. All science seeks to identify and explain the immanent principles that give order to phenomena. Smith’s work is radical because it shows that the same idea of an immanent order extends from the natural world to human society.
*The Wealth of Nations* delineates those immanent principles of order that emerge spontaneously as individuals pursue their natural inclination to truck, barter and trade. An economy displays lawful regularities, but they are not the product of any participant’s project. Participants engage in transactions; they do not set out to establish the law of supply and demand. Economics is the study of the systemic order that emerges spontaneously from individual economic transactions. That order precedes the knowledge of it – just the opposite of a project in which an idea always precedes its realisation. Over the course of the 19th century, the social sciences proliferated as one area of social activity after another – sociology, history, political science, social psychology and law – was colonised by the systemic imagination. These two master narratives are as old as the West. The creation account in Genesis describes God’s project: free action of a subject capable of acting on final causes, deliberating, and judging his product. Like all projects, God’s takes time – six days – and is subject to an external, normative evaluation – it was ‘good’. Some things do not quite work out as planned – man, for instance – and require new interventions, such as Eve, to modify the original project. We account for the order of the world, on this view, by identifying the craftsman and explaining his intentions. We investigate his plan. The systemic view can also be found in the Garden of Eden. Picture Adam and Eve, before the arrival of the Serpent, contemplating the well-ordered nature of the garden in which everything works harmoniously as parts of a single whole. They do not know God’s plan. For them, the entire universe has the order of a garden in which the parts seem naturally to support each other. Its goodness is not located in an external measure – rather, it is good because it is.
Man’s role is only to name, not to make, this creation. Alexander Pope’s great work, ‘An Essay on Man’ (1733-34), returns to this idea of God’s system. The poem is a discourse on system in support of a theodicy: ‘One truth is clear, whatever is, is right.’ In a system, things cannot be other than they are. We are not done with these arguments between the rabbit of project and the duck of system. Think of the conflict between science and creationism. Modern science is determinately systemic in its approach to nature. To have scientific knowledge is to identify the immanent principles of order. This is what links the social sciences to the natural sciences. For both, to know is to identify the laws behind the appearances of ordinary experience. The creationists respond with a different picture, for they cannot imagine order that does not have its origin in a conscious, deliberate act. They offer proof of the existence of God from the ‘fact’ of design: order itself, they believe, supports their idea of project. Behind the appearance of order, they see the hand of an orderer. That is what creationists see, but that is just what it means to be captured by a picture. Both sides have authoritative texts: Isaac Newton versus the Bible. The choice of authority depends upon a prior choice: rabbit or duck? The text does not ground the narrative. It is, rather, the other way around. Nature, accordingly, is neither system nor project. We might think that system is closer to the ‘facts of the matter’, but that is because we live in a scientific age. The systemic imagination has no account of the origins of order that can satisfy the person who wants to know ‘Why?’ Science reaches a limit in the Big Bang but, of course, our capacity to question is not so limited. We want to know about the moment before; we are not satisfied when told that the question is senseless. 
Immanuel Kant already saw that such questions are part of our condition: we ask questions that require answers beyond our capacity to reach. Yet still we ask. In doing so, we are following the demand for project, for only projects have beginnings. The categories of ‘project’ and ‘system’ are brought to the task of making sense of experience. These pictures frame the narratives by which we offer explanations, for an explanation needs an overall shape. It is not just ‘one damn thing after another’. A chronology of events is not a history; a list of organs is not an account of an organism; a description of a series of trades is not a market analysis. To make sense of experience, we need to fit it into a familiar pattern that itself has conceptual completion: a stranger came to town or I went on a journey, for example, are two of our most basic narrative forms, showing up in endless fictional accounts. Odysseus goes on a journey; Oedipus comes to town. We come to the task of offering an account, then, with a sense of what an explanation looks like. That changes across time, place and fields. Intellectual histories of a field can usefully pursue the pattern of movement between project and system. This is the task of my book on the development of the legal imagination in the long 19th century, *Origins of Order* (2019). The movement over the course of that century is generally from project to system, although it would be a mistake to think that either category ever disappears. Each is always deployed against the other, like the scientists and creationists, and no victory is ever final. The 19th-century movement from project to system is, accordingly, only one iteration of a longer conflict between the party of the duck and the party of the rabbit.
The constitution begins as a project of We the People, but by the end of the century lawyers and theorists are speaking of an ‘unwritten constitution’ that is the spontaneous, immanent order of free Anglo-Saxons everywhere. There is, in short, a convergence of American constitutionalism and the system of the common law, as if no revolutionary project of break and radical construction ever occurred. If law is a system, why should British law and American law be any different? Indeed, the earliest casebooks in the modern law schools, which began toward the end of the 19th century, were filled with British cases. To think that jurisdiction made a difference to the law would be like thinking that the location of the laboratory made a difference to the principles of a science. Once the law school became a part of the university, law was to be a science like any other. In 1765, the traditional, common-law lawyer Sir William Blackstone said that the law has its origins in ‘time immemorial’. He was deploying a picture of system. The common law is no one’s project. It has an immanent order that emerges through the case law, just as economic laws emerge through individual transactions. When early British theorists responded that the origin of the common law must have been in ‘lost statutes’ – that is, legislative acts that have been lost to history – they were playing the role of creationists. They cannot imagine order absent a rational agent who puts that order into the world through a project. The American founders’ belief that law could be a project explains their skepticism about the common law and their commitment to writing a constitution. Revolution created a space for a new project of constitutional construction. Their aim was to make a constitution based on the best political theory of their day. The United States begins as an enlightenment project of creating law and legal institutions on a plan grounded in theory. 
When the French took up a similar project of constitutional creation a few years later, the UK’s conservative Member of Parliament Edmund Burke responded that law is not like that: it is system, not project; it grows immanently, it is not made. The Burkean response came to the US at the end of the 19th century. By then, the dominant picture of American constitutionalism is that of system, not project. The real constitution is not the founders’ written text, but ‘unwritten’ practices that develop according to immanent principles of order. Those principles are no one’s project. They emerge naturally and spontaneously, the thinking then went, wherever the Anglo-American race has the freedom to develop through the pursuit of its own ideals and interests. By the end of the century, constitutionalism, Christianity and civilisation coincide in the legal imaginary as the *telos* of history, that is, of the system realising itself. The task of jurisprudence, accordingly, is to discover those principles that structure a free society, not to create them. The role of legislation or of a written constitution, on the systemic view, is no more than that of removing pathologies that block the free actions of citizens. Projects of law, in other words, are now only remedial, just like a doctor’s interventions are designed to address pathologies that keep the body from realising its immanent principles of order – which we describe as ‘health’. Out of this systemic imagination comes a convergence of constitutionalism with *laissez-faire* capitalism. Remarkably, given the revolutionary/project origins of the constitution, constitutional law and common law, by the end of the century, are imagined as converging on the same principles of order.
This is famously captured in a remark by William Gladstone in 1878: [A]s the British Constitution is the most subtile organism which has proceeded from the womb and the long gestation of progressive history, so the American Constitution is, so far as I can see, the most wonderful work ever struck off at a given time by the brain and purpose of man. Duck or rabbit, he is saying, we arrive at the same place. Wherever there is a dominant discourse of system, a critical response will be framed as project, and vice versa. Thus, the legal realism of the 1920s and ’30s is the project-response to the systemic view of classical legal formalism that dominated both academy and bench at the turn of the century. The realists labelled the systemic ‘truths’ of those institutions ‘transcendental nonsense’. Formalism, they argued, offered only ideological constructions designed to divert attention from reality, which was nothing more than powerful interests pursuing their own projects. The realists called for a new law of projects in pursuit of the interests of the people, not those of the rich. These projects would be informed by the principles of real science, particularly the social sciences, rather than the phoney science of law. Out of this came the modern administrative state that works to inform government projects by scientific expertise. The contest between project and system continues. In our deeply polarised political age, these conflicting images provide the organisational forms for much of our ideological combat in the law. The originalists, who dominate the US Supreme Court today, are relying on a picture of project; they are opposed by those who believe in an organic constitutional order of system. Once the conflict of project and system comes to our attention, it is not difficult to spot the competition everywhere. Consider attitudes toward government regulation, monetary policy, climate change, delivery of health services, and land management. 
In each case, there are those who think that government responsibility is to adopt projects to make over the world according to our own ends. Others believe that interventions should be no more than remedial, allowing immanent principles to do the work of order. The same contrasting pictures work in our personal narratives. Should I make my life a project or should I seek to realise the person that I already am? At the opposite extreme from the personal is international relations. Do states relate to each other as elements of a system, the immanent principles of which were already identified by Thucydides, or is international law a modern project? These debates will never end although their locus will continually shift. They cannot end because order is *always* both duck and rabbit. This is merely another way of saying that man always finds himself both free and already situated in a fully ordered world. Without projects, we could not imagine our own freedom; without a system, the world would so lack order that everything would appear arbitrary and capricious. We can no more escape ‘project’ and ‘system’ than we can escape freedom and causality. At the root of our legal and social theories, one finds the deepest puzzle of metaphysics. If ever we thought otherwise, it was because we were held captive by a single picture.
true
true
true
There are two ways of seeing order in the world: as a spontaneous system or as an intentional project. Which way lies freedom?
2024-10-12 00:00:00
2019-12-09 00:00:00
https://images.aeonmedia…y=75&format=auto
article
aeon.co
Aeon Magazine
null
null
15,290,362
https://en.wikipedia.org/wiki/Jack_of_all_trades,_master_of_none
Jack of all trades - Wikipedia
null
# Jack of all trades "**Jack of all trades, master of none**" is a figure of speech used in reference to a person who has dabbled in many skills, rather than gaining expertise by focusing on only one. The original version, "**a jack of all trades**", is often used as a compliment for a person who is good at fixing things and has a good level of broad knowledge. They may be a master of integration: an individual who knows enough from many learned trades and skills to be able to bring the disciplines together in a practical manner. This person is a generalist rather than a specialist. ## Origins Robert Greene used the phrase "absolute Johannes Factotum" rather than "Jack of all trades" in his 1592 booklet *Greene's Groats-Worth of Wit*,[1] to dismissively refer to actor-turned-playwright William Shakespeare;[2] this is the first published mention of Shakespeare.[3] Some scholars believe Greene was referring not to Shakespeare, but to "Resolute" Johannes Florio, known as John Florio. They have pointed out how "Johannes" was the Latin version of John (Giovanni), and the name by which Florio was known among his contemporaries.[4] The term "absolute" is thought to be a rhyme for the nickname used by Florio in his signature ("resolute"), and the term "factotum" is thought to be used as a disparaging word for secretary, John Florio's job.[5][6][additional citation(s) needed] In 1612, the phrase appeared in the book "Essays and Characters of a Prison" by English writer Geffray Mynshul (Minshull),[7] originally published in 1618,[8] and was probably based on the author's experience while held at Gray's Inn, London, when imprisoned for debt.[citation needed] ## "Master of none" The "master of none" element appears to have been added in the late 18th century;[2] it made the statement less flattering to the person receiving it.
Today, "Jack of all trades, master of none" generally describes a person whose knowledge, while covering a number of areas, is superficial in all of them. When abbreviated as simply "jack of all trades", it is an ambiguous statement – the user's intention is then dependent on context. However, when "master of none" is added (sometimes in jest), this is unflattering.[9] In the United States and Canada, the phrase has been in use since 1721.[10][full citation needed][11] ## Other quotation variants In modern times, the phrase with the "master of none" element is sometimes expanded into a less unflattering couplet by adding a second line: "but oftentimes better than a master of one" (or variants thereof), with some modern writers incorrectly saying that such a couplet is the "original" version with the second line having been dropped.[12] Online discussions attempting to find instances of this second line dated to before the twenty-first century have resulted in no response, however.[2] ## See also ## References - **^** "There is an upstart crow, beautified with our feathers, that with his tiger's heart wrapped in a player's hide supposes he is as well able to bombast out a blank verse as the best of you: and being an absolute Johannes Factotum, is in his own conceit the only Shake-scene in a country." -- *Groats-Worth of Wit;* cited from *William Shakespeare--The Complete Works,* Stephen Orgel and A. R. Braunmuller, editors, Harmondsworth: Penguin, 2002, p. xlvii. - ^ **a** **b** **c** Martin, Gary. "'Jack of all trades' – the meaning and origin of this phrase". *www.phrases.org.uk*. Retrieved 30 September 2022. - **^** Van Es, Bart (2010). ""Johannes fac Totum"?: Shakespeare's First Contact with the Acting Companies". *Shakespeare Quarterly*. **61** (4): 551–577. doi:10.1093/sq/61.4.551. JSTOR 40985630. - **^** Iannaccone, Marianna (26 January 2021). "John or Giovanni Florio? Johannes Florius!". *www.resolutejohnflorio.com*.
Retrieved 30 September 2022. - **^** Gerevini, Saul. "Shakespeare and Florio" (in Italian). Retrieved 30 September 2022. - **^** Gerevini, Saul (2008). *William Shakespeare ovvero John Florio* (in Italian). Pilgrim. - **^** "Geffray Minshull (Mynshul), English miscellaneous writer (1594? - 1668)". Giga-usa.com. Retrieved 2 April 2014. - **^** Minshull, Geffray (1821). *Essayes and characters of a Prison and Prisoners originally published in 1618*. Retrieved 2 April 2014. - **^** *Morris Dictionary of Word and Phrase Origins*, compiled by William and Mary Morris. HarperCollins, New York, 1977, 1988. - **^** The OED notes appearance in *The Boston News-Letter* in August 1721 as "Jack of all Trades; and it would seem, Good at none." - **^** *Random House Dictionary of Popular Proverbs and Sayings* by Gregory Y. Titelman (Random House, New York, 1996) - **^** David Epstein (2020). "How Falling Behind Can Get You Ahead": "Jack of all trades, master of none," the saying goes. But it is culturally telling that we have chopped off the ending: "…but oftentimes better than master of one." ## External links - The dictionary definition of *jack of all trades* at Wiktionary
true
true
true
null
2024-10-12 00:00:00
2005-07-17 00:00:00
https://upload.wikimedia…_a_jackknife.jpg
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
33,954,596
https://tmate.io/
tmate
null
tmate is a fork of tmux. tmate and tmux can coexist on the same system.

- macOS: `brew install tmate`. Note: Homebrew is required as a prerequisite.
- Debian/Ubuntu: `sudo apt-get install tmate`
- Fedora: `sudo dnf install tmate`. The Fedora packages are maintained by Andreas Schneider.
- openSUSE: `sudo zypper install tmate`. Package available on openSUSE Tumbleweed and Leap. On SUSE Linux Enterprise, you need to activate the Package Hub Extension first.
- FreeBSD: `pkg install tmate`. The FreeBSD packages are maintained by Steve Wills.
- OpenBSD: `pkg_add tmate`. The OpenBSD packages are maintained by Wesley Mouedine Assaby.
- Gentoo: `emerge -a app-misc/tmate`. Package information: https://packages.gentoo.org/packages/app-misc/tmate.
- ArchLinux: `pacman -S tmate`. The ArchLinux package is maintained by Christian Hesse.
- OpenWrt: `opkg install tmate`. The OpenWrt package is maintained by Tianling Shen.

We provide i386, x86_64, arm32v6, arm32v7, and arm64v8 linux static builds for convenience. Binaries can be found on the GitHub release page. The binaries are built using the `build_static_release.sh` script in the tmate source directory. Sources are on GitHub: https://github.com/tmate-io/tmate

Download, compile, and install with the following steps:

    git clone https://github.com/tmate-io/tmate.git
    cd tmate
    ./autogen.sh
    ./configure
    make
    make install

A few dependencies are required. The Ubuntu package names are: git-core build-essential pkg-config libtool libevent-dev libncurses-dev zlib1g-dev automake libssh-dev libmsgpack-dev

Once installed, launch tmate with `tmate`. You should see something like `ssh [email protected]` appearing. This allows others to join your terminal session. All users see the same terminal content at all times. This is useful for pair programming where two people share the same screen, but have different keyboards. tmate is useful as it goes through NATs and tolerates host IP changes. Accessing a terminal session is transparent to clients as they go through the tmate.io servers, acting as a proxy. No authentication setup is required, like setting up ssh keys.
Run `tmate show-messages` in your shell to see tmate's log messages, including the ssh connection string. tmate also allows you to share a read-only view of your terminal. The read-only connection string can be retrieved with `tmate show-messages`. tmate uses `~/.tmate.conf` as its configuration file. It uses the same tmux syntax. In order to load the `~/.tmux.conf` configuration file, add `source-file ~/.tmux.conf` in the tmate configuration file. When tmate is used for remote access only (as opposed to pair programming), it is useful to launch tmate in foreground mode with `tmate -F`. This does two things. First, it only starts the server side of tmate and outputs its log on stdout (as opposed to showing the session shell, useful for pair programming); this makes it easy to integrate into a service manager like systemd or kubernetes. Second, it ensures the session never dies, by respawning a shell when it exits. If you wish to specify the program to run as a shell, run `tmate -F new-session [command...]`. For example, to have a Rails console (Rails is a popular web framework) accessible with a named session (see next section), one can run: `tmate -F -n web new-session rails console`. You can think of tmate as a reverse ssh tunnel accessible from anywhere. Typically, tmate generates random connection strings which are not stable across restarts, like `ssh [email protected]`. This can be a problem for accessing remote machines. One way to deal with connection string instability is to use tmate Webhooks, but this requires some effort to integrate. Another way is to use named sessions: by specifying a session name, the connection string becomes `ssh username/[email protected]`, which is deterministic.
The username is specified when registering for an API key (see below) and the session name is specified as follows. From the CLI: `tmate -k API_KEY -n session-name`. Or from the `~/.tmate.conf` file: `set tmate-api-key "API_KEY"` and `set tmate-session-name "session-name"`. It is possible to put the API key in the tmate configuration file, and specify the session name on the CLI. To specify the read-only session name, you may use the CLI option `-r`, or the configuration option `tmate-session-name-ro`. If you get the error `illegal option -- n`, ensure you are running tmate greater than **2.4.0**. You can check what tmate version you have by running `tmate -V`. If your tmate version is too old, scroll up to the installation section. **Warning: access control must be considered when using named sessions, see next section.** Fill the registration form to get an API key and start naming your sessions. When using named sessions, access control is a concern as session names can be easy to guess if one is not careful. There are two ways to do access control. First, use hard-to-guess session names, for example *machine1-3V6txGYUgglA*; this makes the session name hard to guess, like a password. Second, only allow SSH clients with specific public keys to connect to the session. To do so, create an `authorized_keys` file containing public keys that are allowed to connect. In this example, we'll reuse the one sshd uses, namely `~/.ssh/authorized_keys`. Then, specify the authorized keys file via the tmate CLI using `-a` as such: `tmate -a ~/.ssh/authorized_keys`. The authorized keys file can also be specified in the `~/.tmate.conf` configuration file with: `set tmate-authorized-keys "~/.ssh/authorized_keys"`. You can use the following docker image: tmate/tmate-ssh-server. Note that you will need to create SSH keys using `create_keys.sh` (see below). Alternatively, you can compile the ssh server from source located at https://github.com/tmate-io/tmate-ssh-server. tmate also depends on a couple of packages.
On Ubuntu, the packages are: git-core build-essential pkg-config libtool libevent-dev libncurses-dev zlib1g-dev automake libssh-dev cmake ruby

Once all the prerequisites are satisfied, you can install tmate-ssh-server with:

    git clone https://github.com/tmate-io/tmate-ssh-server.git && cd tmate-ssh-server
    ./create_keys.sh  # This generates SSH keys
    ./autogen.sh && ./configure && make
    sudo ./tmate-ssh-server

Once your server is running, you must configure the clients to use your custom server. You may specify your custom options in the `~/.tmate.conf` file. Here are the default options:

    set -g tmate-server-host "ssh.tmate.io"
    set -g tmate-server-port 22
    set -g tmate-server-rsa-fingerprint "SHA256:Hthk2T/M/Ivqfk1YYUn5ijC2Att3+UPzD7Rn72P5VWs"
    set -g tmate-server-ed25519-fingerprint "SHA256:jfttvoypkHiQYUqUCwKeqd9d1fJj/ZiQlFOHVl6E9sI"

If you are interested in fault tolerance, you should set up the `tmate-server-host` host to resolve to multiple IPs. The tmate client will try them all, and keep to the most responsive one. `ssh.tmate.io` resolves to servers located in San Francisco, New York, London, and Singapore. To support named sessions, at this moment you must self-host the websocket server as well. This is because the session unix sockets must be renamed, but the jail makes it difficult. You may follow the kubernetes configuration used for tmate.io at github.com/tmate-io/tmate-kube/prod. To facilitate development, we run all the various tmate services with tilt. It's a tool like docker compose, but with features like live update. When a source file changes, it is immediately copied into the corresponding container and recompiled on the fly. This feature is very useful for developing. Here are the steps to set up the tmate dev environment:

    # macOS specific. On linux you can use microk8s instead of minikube
    brew install minikube tilt
    minikube start

    # Install sources
    git clone https://github.com/tmate-io/tmate-ssh-server.git
    git clone https://github.com/tmate-io/tmate-websocket.git
    git clone https://github.com/tmate-io/tmate-master.git
    git clone https://github.com/tmate-io/tmate-kube.git

    # Compile and run the tmate servers in a local kubernetes environment
    cd tmate-kube/dev
    eval $(minikube docker-env)
    tilt up

    # Create the postgres database and do database migrations
    kubectl exec -it deploy/master mix do ecto.create, ecto.migrate

    # Finally, configure tmate to use the local dev environment
    cat >> ~/.tmate.conf <<-EOF
    set tmate-server-host localhost
    set tmate-server-port 2200
    set -g tmate-server-rsa-fingerprint "SHA256:pj6jMtCIgg26eJtHUro6KEmVOkVGmLdclArInW9LyLg"
    set -g tmate-server-ed25519-fingerprint "SHA256:ltQuqZqoF1GHYrrAVd99jW8W7vj/1gwoBwBF/FC9iuU"
    EOF

At this point you should be able to navigate to http://localhost:4000 and see the tmate homepage. You should also be able to run `tmate` and a local connection string should appear. **Warning: this information is outdated.** A more up-to-date technical draft can be found here [PDF], but is still outdated. Sorry :( When launching tmate, an ssh connection is established to tmate.io (or your own server) in the background through libssh. The server ssh key signatures are specified upfront and are verified during the DH exchange to prevent man-in-the-middle attacks. When a connection is established, a 150-bit session token is generated, then a tmux server is spawned in a jail with no file system, with its own PID namespace to isolate the server from other processes, and no user privileges. To allow this, all files required during the tmux server execution are opened before getting jailed. These measures are in place to limit the usefulness of possible exploits targeting the tmux server. The attacker would not be able to access other sessions, ensuring confidentiality.
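That session token is what makes a session address unguessable. As an illustrative sketch only (this is not tmate's actual code; the 62-symbol alphabet and the `session_token` helper name are assumptions), drawing a token with at least 150 bits of entropy from a CSPRNG could look like this:

```python
import math
import secrets
import string

# 62-symbol alphabet: each character carries log2(62) ~ 5.95 bits of entropy
ALPHABET = string.ascii_letters + string.digits

def session_token(bits=150):
    """Draw enough random characters to reach at least `bits` bits of entropy."""
    length = math.ceil(bits / math.log2(len(ALPHABET)))
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(session_token())  # a 26-character token
```

Using `secrets` rather than `random` matters here: the token is the only thing standing between a session and an attacker guessing its address.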
When an ssh client connects to tmate.io (or your own server), the tmux unix socket is looked up on the file system. On lookup failures, a random sleep is performed to prevent timing attacks; otherwise, a tmux client is spawned and connected to the remote tmux server. The local and remote tmux servers communicate with a protocol on top of msgpack, which is gzipped over ssh for network bandwidth efficiency, as vim scrolling can generate massive amounts of data. In order to keep the remote tmux server in sync with the local tmux server, the raw output of each PTY window pane is streamed individually, as opposed to synchronizing the entire tmux window. Furthermore, window layouts, status bar changes, and copy mode state are also replicated. Finally, most of the tmux commands (like bind-key) are replicated. This ensures that the key bindings are the same on both sides. The remote client's keystrokes are parsed and the outcome is sent to the local tmux server. This includes tmux commands such as split-window, window pane keystrokes, or window size information. This project can take many interesting directions. Here is what I have on the roadmap: If you'd like to get in touch, here are your options: Enjoy, Nico
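The bandwidth point above is easy to see in miniature. The following sketch is illustrative only (it is not tmate's wire format, and it uses zlib as a stand-in for the gzip layer): it compresses a mock pane output of the repetitive kind that vim scrolling produces.

```python
import zlib

# Mock a pane's raw output: scrolling emits highly repetitive,
# escape-sequence-laden lines, which compress extremely well.
pane_output = (b"~\x1b[K\r\n" * 500) + b"-- INSERT --"

compressed = zlib.compress(pane_output, 6)
print(len(pane_output), "->", len(compressed))
```

Streaming each pane's raw output through a compressed channel like this is what keeps interactive sessions usable over slow links.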
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
23,046,568
https://joyframework.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
19,031,221
https://www.youtube.com/watch?v=7N7hsho4Sjg
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,741,349
http://opinionator.blogs.nytimes.com/2014/12/11/big-ideas-in-social-change-2014/
Big Ideas in Social Change, 2014
Tina Rosenberg
Fixes looks at solutions to social problems and why they work.

This year, people in Charleston, S.C., taught young children to read. In Las Cruces, N.M., others cured hepatitis C. And still others treated depression in the slums of Kampala, Uganda. On the surface, these people have nothing in common — except for being featured in Fixes columns this year. But they are all cousins, in a sense. They all owe their success to one particular strategy.

This year in the Fixes column, we’ve looked at 60 or so ways that people are trying to change the world. Some of these projects are successful, some partially successful, some are failing in ways we can learn from, and some are intriguing ideas that have yet to compile a track record. The initiatives we’ve covered are — quite literally — all over the map. But there are ideas that unite them, a few strategies that show up over and over again. By connecting the dots we can get a sense of what can work in various contexts to solve many different types of problems. These, then, are Fixes’ nominations for the big ideas in social change of 2014.

**DOWNSHIFT JOBS**

Reading Partners in Charleston, Project ECHO in New Mexico, and Strong Minds in Kampala all rely on task shifting: taking jobs normally restricted to specialized professionals and turning them over to people with far less training in order to reach underserved groups. At face value, task shifting sounds dangerous. After all, we have professional qualifications for a reason. It takes training and experience to be a good teacher, hepatologist or psychologist. People in those professions have learned things they need to know. What’s next, having amateurs take over the operating room? It’s already happening. Zambia, a country the size of Texas, has only six surgeons in its rural areas.
So it offers clinical officers a three-year surgery course, during which they learn to do the most commonly needed operations, such as C-sections, hysterectomies, appendectomies and bowel obstruction surgery. Personally, I’d prefer a surgeon. But in rural Zambia, a surgeon isn’t the alternative. Death is. Task shifting matters because it brings good things like health care and education to people who otherwise wouldn’t get them at all. And not just in Africa. Project ECHO began training rural doctors by video because when it started, in 2003, only 5 percent of New Mexico’s 34,000 chronic hepatitis C patients were getting treatment. It uses regular case-based training, and specialists work remotely with the front-line doctors and nurses. Another example is dental therapists, who have two or four years of graduate training. They are treating patients in Alaska and Minnesota (and soon, Maine) who live in areas with limited access to dental services. Task-shifting doesn’t necessarily mean second-class care, and sometimes it produces better results than the specialists do. The patients with hepatitis C in rural New Mexico seen by local nonspecialists do better than those seen at the specialist clinic in Albuquerque, possibly because health workers from the same community do better at getting patients to stick to their care plans, and can interact regularly and catch problems early. In India, the nongovernmental group Pratham devised one-page tests for basic reading and math. Volunteers with very little training go door-to-door to test children. Because of their simplicity and portability, these tests have allowed Indian parents for the first time to grasp the true failure of the country’s educational system — and mobilize to do something about it. Especially when care involves behavioral change, less training can be an asset, not a liability. Who’s better at getting people to improve their eating and exercise habits? 
A doctor — an authority figure who can spend only 15 minutes and generally learns little about a patient’s life — or a health worker from the patient’s own community, with the same problems, who can visit the patient at home and has the luxury of spending time? Experience indicates it’s the latter.

Task-shifting can be an alternative any time resources are short and people can learn to do a specific activity without the full training specialists receive. Reading Partners succeeds, for example, because it isolated the key components of helping children learn to read, and found a way to teach them clearly and quickly to community volunteers. Like other forms of task-shifting, this is a money-saver, as volunteers or low-paid workers are stretching the efforts of higher-paid professionals. Such programs also allow people to get access to preventive services, which almost always result in huge savings later. The advantages of task shifting can be summed up in the aphorism: “Don’t let the perfect be the enemy of the good.”

**FOCUS ON PEOPLE’S STRENGTHS, NOT THEIR NEEDS**

This idea is related to task-shifting — in fact, it’s the main reason task-shifting is effective: we often underestimate what ordinary people can accomplish and succumb to stereotypes about low-income people or minorities. Look at the Springboard Collaborative, a summer reading program in Philadelphia, for example. Springboard uses the public school system’s most underemployed resource: low income, predominately African-American or Latino parents, a group usually falsely written off as unable or unwilling to help their children. Alejandro Gac-Artigas, Springboard’s founder, wondered whether low-income parents didn’t get as involved with their children’s education as more affluent parents because they didn’t believe they had much to offer.
So Springboard’s system gives parents an easy-to-use curriculum that mainly involves asking questions of the child before, during and after reading — even parents who can’t read can use it. Parents average over 90 percent attendance at the weekly training sessions — and instead of falling back several months over the summer, which is standard for low-income kids, their children gain more than three months in reading. “In our experience, people want to do things they’re good at and avoid things they’re bad at,” said Gac-Artigas.

A world away, nine million women in villages in Mali and some 40 other African countries are living better by using savings groups. Women make small deposits at each meeting, and members can borrow from the pool of funds. It’s sort of like microcredit — except there’s no money from outside and no bank, just a locked box. No outsiders at all, in fact, except to facilitate spreading the word and getting things started, and not always then. These groups go on for years and years with no intervention. The advantages are obvious: women who live too far away from anything to be served by a bank or traditional microcredit can benefit, which means they are less likely to experience food shortages during the year. After the initial training, these groups cost nothing to run. And they last. Projects NGOs bring in from outside often fall apart when the NGO pulls out. Not savings groups, because they truly belong to the women themselves. (The Family Independence Initiative in the United States uses a similar model of self-organized problem-solving led by families, and has also produced surprising results.)

We usually define “helping people” as doing things for them. But often, it should mean asking them to do things for others. People like to feel competent, useful, needed — they benefit greatly from that feeling.
That’s the insight behind the work of The Mission Continues, which asks veterans — a highly service-oriented group — to continue serving at home, working with chronically homeless veterans, building playgrounds or mentoring young people. Another example is the Homeless World Cup, a series of local, national and international soccer leagues for homeless people. If you listed what homeless people need, a soccer league probably wouldn’t figure high on that list. And yet it can be a catalyst, a painless way for a homeless person to build the confidence, trust and relational skills to be able to go after all the other things they need.

It can also work for depression, the world’s second most burdensome disease — for women, it’s first. In Kampala, Strong Minds is testing cost-effective ways to treat depression, with the goal of eventually getting to self-sustaining peer groups. Peer support groups do successfully treat depression. They can be doubly effective, in fact, because people get more out of helping others than being helped. Helping others reinforces their own behaviors, and being useful is in itself a depression-fighter. In Alcoholics Anonymous, the very model of a self-sustaining peer group, research shows that sponsors benefit more than the people they sponsor. But it’s best to do both — and in peer support groups, people do.

**TARGET THE “SOCIAL DETERMINANTS”**

A medical mystery: A child lands in the hospital with asthma. The doctor prescribes medicines. The child uses them — properly. Yet two months later, she is back in the hospital. Maybe the problem is that the child lives alongside mold, insects and rats. That child doesn’t need a doctor — she needs a lawyer, who can persuade, or threaten, the landlord to clean it up. And at more than 230 medical clinics around the country, lawyers are on hand to help. Health isn’t just a medical problem.
Health is undercut by substandard housing, air pollution, food deserts, dangerous streets, trauma and toxic stress — the social determinants of health. Being poor can make you sick, and doctors can’t always help.

Education, too, has social determinants. “Zero tolerance” has become the fashion in American schools — kids who act up are suspended and then expelled, even in preschool, and sometimes arrested. This policy succeeds only in sabotaging the education of the children who need it most — mostly low-income African-American or Hispanic boys, many of whom have already faced daunting, often overwhelming, problems. When a 6-year-old is aggressive, uncontrollable or violent in class, the question shouldn’t be “What’s wrong with him?” but “What happened to him?” The answer, too often, is that he experienced homelessness, divorce, family violence, incarceration, drug use, neighborhood violence, sudden separation or loss — or typically a combination of the above. Researchers now call these things ACEs or “adverse childhood experiences.” Increasingly, they are measuring them and discovering how drastically they can decrease the chances of a successful life if parents, educators or doctors fail to recognize them and respond. But parents and teachers can learn new and better ways of helping these children — far more effective than reflexive punishment — and the kids themselves can learn ways to calm themselves and manage their strong emotions.

Social determinants follow students all the way through their educations. Community colleges have become the colleges of the poor in this country. And six years after enrolling, only one-third have completed a degree or transferred to a four-year college. Why do students drop out? Overwhelmingly, it’s that they can’t afford school, and can’t afford to take time off from work to study.
One response is Single Stop, which has offices now spreading through community colleges that help people find out if they qualify for benefits such as food stamps, child care subsidies, federal financial aid or the earned-income tax credit — and if so, Single Stop helps clients to get them. Looking for the social determinants is not the same as looking for root causes, which can sometimes become an excuse for inaction — the idea that you have to solve everything before you can solve anything. Both concepts recognize that poverty causes interlocking problems. But targeting social determinants is specific and practical. A medical clinic doesn’t have to lift a child out of poverty to treat her asthma. But it must get the landlord to do mold abatement. Schools can’t reduce adverse childhood experiences. But they can manage their impact. You don’t have to solve everything to make progress on social problems. But what needs to be solved may be hidden from view. Finding it, attacking it and measuring the results — that’s a big idea in social change, this year and every year. *Join Fixes on Facebook and follow updates on twitter.com/nytimesfixes. To receive e-mail alerts for Fixes columns, sign up here.* *Tina Rosenberg won a Pulitzer Prize for her book “The Haunted Land: Facing Europe’s Ghosts After Communism.” She is a former editorial writer for The Times and the author, most recently, of “Join the Club: How Peer Pressure Can Transform the World” and the World War II spy story e-book “D for Deception.” She is a co-founder of the Solutions Journalism Network, which supports rigorous reporting about responses to social problems.*
true
true
true
Lots of dedicated people and groups are trying to change the world. What do the most successful ones have in common?
2024-10-12 00:00:00
2014-12-11 00:00:00
https://static01.nyt.com…bookJumbo-v3.png
article
nytimes.com
Opinionator
null
null
40,001,396
https://next-wayfinder.dev
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
2,890,360
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3086893/pdf/nihms288435.pdf
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,430,655
https://ofdollarsanddata.com/what-is-considered-rich/
How Much Income Do You Need to Be Rich? [2022 Latest Data]
Nick Maggiulli
Have you ever wondered what it really means to be rich? Is it a certain salary? A specific amount of assets? A particular lifestyle? The answer to this question is highly subjective and dependent on individual factors such as your age, your education level, and where you live. For example, someone making $100,000 a year might be considered rich if they are 18 years old, but not if they are 50. Similarly, someone with a college education might feel rich compared to their high school friends, but inadequate around their university peers. Rich will always be a relative term.

Nevertheless, if you’re interested in understanding how your *income* compares to others in the U.S. (and whether that makes you rich), then you’ve come to the right place. So, how much income do you need to be rich? Let’s find out.

## What are the Top 10%, Top 5%, and Top 1% of Incomes?

Since there is no generally accepted definition of what income level is considered rich, I’ve decided to summarize the top 10%, top 5%, and top 1% of household incomes in the U.S. based on the *newly released* 2022 Survey of Consumer Finances (SCF). This will give you a high-level overview of how much income the highest-earning households make so that you can determine for yourself what it means to be rich.

But, before we dig in, I want to answer a question you might have: *Why am I using household income?* I am showing household income (and not individual income) because household income is what drives financial competition in our society. When you go to bid on a home, you aren’t just bidding against other individuals—you are bidding against other households. And now that the majority of U.S. households are dual income, you can see why household income is more relevant than individual income when trying to determine who is rich.
**With that being said, here are the top 10%, top 5%, and top 1% of household incomes in the United States (in 2022):**

- Top 10% = $248,610
- Top 5% = $390,209
- Top 1% = $1,199,812

As you can see, the amount of household income needed to be considered rich is very dependent upon how you define rich. And, depending on how exclusive you want to be, rich could be defined as $250k a year, $400k a year, or even up to $1.2M a year. But this still doesn’t fully answer our question because individual factors such as age can make a big difference. How big of a difference exactly? Let’s find out.

## How Does Income Change with Age?

After adjusting for household age, you will see that the income needed to be rich rises into mid-career and then falls once retirement hits (typically in the mid-to-late 60s). Below is a chart showing the 90th percentile (i.e. top 10%) of household incomes in the U.S. across different age ranges:

This same general pattern can also be found within the top 5% of earners and the top 1% of earners (as seen in the chart below):

At the 99th percentile, the impact of age is particularly strong given the wide range of incomes needed to make it to the top 1%. You could need as little as $465,000 a year or as much as $1.5M to be in the top 1%, depending on your household’s age.
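Percentile cutoffs like the ones above can be computed from any income sample using Python's standard library. A minimal sketch — the sample data below is made up for illustration and is not SCF data:

```python
import statistics

# Hypothetical household incomes (illustrative only, not SCF data)
incomes = [31_000, 45_000, 52_000, 61_000, 75_000, 88_000,
           104_000, 133_000, 180_000, 250_000, 400_000, 1_200_000]

# quantiles(n=100) returns the 1st through 99th percentile cut points
cuts = statistics.quantiles(incomes, n=100)
top_10_cutoff = cuts[89]  # 90th percentile
top_5_cutoff = cuts[94]   # 95th percentile
top_1_cutoff = cuts[98]   # 99th percentile
```

With real survey data you would also apply the survey weights, which `statistics.quantiles` does not handle; the SCF's public dataset ships with replicate weights for exactly that reason.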
To make it easier to compare the top 10%, top 5%, and top 1% of household incomes across different age groups, I have summarized the data in the table below:

Age Range | Top 10% | Top 5% | Top 1% |
---|---|---|---|
20-24 | $64,855 | $86,473 | $129,709 |
25-29 | $142,680 | $210,778 | $303,736 |
30-34 | $188,079 | $276,713 | $468,035 |
35-39 | $230,234 | $370,753 | $1,048,484 |
40-44 | $271,309 | $446,417 | $1,065,779 |
45-49 | $301,574 | $382,643 | $1,167,385 |
50-54 | $324,274 | $611,796 | $1,869,977 |
55-59 | $359,944 | $546,941 | $1,318,712 |
60-64 | $276,713 | $497,219 | $1,772,695 |
65-69 | $263,742 | $495,058 | $1,837,550 |
70-74 | $245,367 | $387,615 | $1,294,608 |
75-80 | $191,321 | $334,002 | $811,765 |

From this table you can get a much better idea of what is considered rich over time and where earnings tend to peak. In particular, the peak earnings for different income cutoffs are approximately:

- Top 10% peak earnings = $360,000
- Top 5% peak earnings = $610,000
- Top 1% peak earnings = $1.87M

To go from the top 10% to the top 5%, your household would need to bring in an additional $250,000 in income at its peak. However, to go from the top 5% to the top 1%, your household’s peak earnings would need to be over $1M higher! This illustrates not only the divergence between the top 10%, top 5%, and top 1% of household incomes, but also how that divergence changes over time. In other words, getting into the top 1% is hard, but staying in the top 1% is even harder.

If you want to dig deeper into household income by age, I recommend trying out this income by age calculator I created for this purpose. Now that we have looked at how age impacts earnings power over time, let’s see how education level impacts household income as well.

## How Does Income Change with Education Level?

There’s a common belief that someone with a college degree will earn $1M more across their lifetime than someone without a college degree.
This equates to an extra $25,000 a year over a 40-year career. Well, as you go higher up the income spectrum, the difference in lifetime income gets even larger. You can see this clearly in the plot below, which shows the 90th percentile of household incomes, broken out by education level:

Within the top 10% of households, someone with a high school education earns about $50,000 more per year than someone who dropped out of high school, and someone with a college degree earns about $300,000 more per year than someone with only a high school education. This divergence in household income by education level explains why someone earning $250,000 a year with a college degree may not feel rich even though their income is in the top 10% of households nationally.

As you go further up the income spectrum, it’s even easier for college-educated households to feel left behind. You can see this in the plot below showing the top 1% of household incomes by education level:

At the 99th percentile, a college-educated household earns over $1.6 million *per year* more than someone with only a high school education. This difference in income by education level is even greater than the difference in income by age within the top 1% of households. But, after digging a little deeper, I discovered that this wasn’t the full story. So far we have examined how age and education level affect household income *separately.* However, when we examine how these two factors impact household income *together*, a clearer picture emerges.

## Why Both Age *and* Education Level Matter

Given that incomes tend to rise with age and incomes tend to rise with education level, you might also assume that incomes would rise with age *within each education level*. But you’d be wrong! Surprisingly, incomes only seem to rise with age *within college-educated households*.
You can see this in the chart below, which shows the top 10% of household incomes broken out by age and education level:

From this chart we can see that the entire relationship between income and age is driven by a subset of the population (i.e. college-educated households). Therefore, our definition of what income is considered rich by age is also biased by this subset of households. This doesn’t imply that we should ignore age altogether, only that we shouldn’t expect older households to have more income than younger households within the same education level.

And this isn’t just an artifact of the highest income households. If we look at the *median* household income by age and education level, the story is very similar:

Now that we have looked at the impact of age and education level on income, let’s discuss where you live and why it may not matter for defining who is rich.

## Does Where You Live Impact If You are Rich?

When it comes to figuring out how much income you need to be rich, a common response is, “It depends where you live!” After all, someone earning $150,000 in a small town can afford a lot more than someone earning $150,000 in New York City, so they must be richer, right?

While this might seem true at first glance, the argument breaks down quite easily. For example, if you took the median U.S. household and moved them to a third world country, would they be rich? On a relative basis, yes, but on an absolute basis, no. Just because you are the biggest fish in a small pond, doesn’t mean that bigger ponds don’t exist.

But, more importantly, most cost of living comparisons only look at the cost of big, tangible stuff like housing, food, taxes, etc. These things are easy to measure and, therefore, easy to compare. However, they don’t tend to compare things that are less tangible like convenience, our time, or future income opportunities. If they did, then the differences in cost of living would seem less extreme.
For example, if you live in a small town and want to see a Broadway show, you would need to have a car, gasoline, and car insurance. On top of that, you would need to spend a few extra hours driving to and from the venue. Someone living in a big city could simply walk or take public transit to get the same experience as you without all the extra expense or hassle. You might argue, “But Nick, I don’t care about having access to Broadway shows! I don’t care about not needing a car.” That is completely fine and gets at a much bigger point about what it means to be rich.

## Why Being Rich is More Than a Number

So far I have tried to *quantify* what income level you need to be rich. I’ve tried to give you numbers that you could use to compare yourself to other people. However, in my many years of writing about money, I’ve come to realize something—being rich is more than a number. Being rich is a feeling. Because even if the numbers say that you are doing well, if you don’t *feel* rich, then it doesn’t matter. You could be making $10M a year, but if you feel like you need to make $20M, then you will feel poorer than someone making $100,000 who only feels like they need to make $50,000.

We all live our lives relative to our expectations. This is true in our relationships, in our careers, and in our finances. So, if we want to feel rich, we only have two options—earn more or expect less. The choice is yours. Because, ultimately, your income doesn’t determine how rich you are, your desires do.

Thank you for reading! **If you liked this post, consider signing up for my newsletter.**

This is post 331. Any code I have related to this post can be found here with the same numbering: https://github.com/nmaggiulli/of-dollars-and-data
true
true
true
On how much income you need to be considered rich and why it's influenced by your age, your education level, and where you live.
2024-10-12 00:00:00
2023-10-20 00:00:00
https://ofdollarsanddata…_2023_01_14.jpeg
article
ofdollarsanddata.com
Of Dollars And Data
null
null
4,402,145
http://www.securityweek.com/resilient-smszombie-infects-500000-android-users-china
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,481,904
https://spin.atomicobject.com/2017/01/25/alexa-aws-deployment/
Amazon Lambda Auto-Deployment For Your Alexa Skill Using AWS CLI
Jaime Lightfoot
My latest project includes integration with Amazon’s Alexa voice service. My coworker Jordan already wrote an excellent post on how to get started writing your own Alexa Custom Skill. Amazon’s API makes it relatively easy to develop a new Skill, and with a number of languages to choose from (Python, Node.js, Java, C#, etc.), developers can create a simple Skill in a weekend. With longer development times and iterative updates to our Skill’s functionality, we decided to find a way to auto-deploy our Lambda code with each Git commit.

*Note: this post was originally titled “Deploying your Alexa Skill” and has been changed. Alexa Skills have two parts: the skill itself (consisting of a name, logo, intent schema, sample utterances, etc.), and code that executes commands based on the incoming user input. These two parts are managed separately. While you can choose to host this code yourself (using Heroku, for example), Amazon makes it easy to host your code with Amazon Lambda, and simplifies the authentication process for you. This post is about deploying to Amazon Lambda.*

Amazon does not provide a way to auto-deploy to the Alexa Skill itself. However, because that content is relatively static (with the name, logo, intents, and utterances set up at the beginning of our development cycle), I haven’t had a need to repeatedly change the Alexa Skill, only the Lambda code.

## 1. Install AWS CLI

First, you will need to install the Amazon Web Services command line interface (AWS CLI). For Mac and Unix users, you will need Pip and Python installed. Then:

`$ sudo pip install awscli`

*Note: if you get an error regarding `six` (an issue in El Capitan), use the `--ignore-installed` option.*

Alternatively, if you use Homebrew:

`$ brew install awscli`

To test if your installation was successful, type:

`$ aws help`

For Windows, installers are available here.

## 2. Configure AWS CLI

Next, you will need to configure AWS CLI so it has permission to access and update your Alexa Skill code.
You will need an Access Key and Secret Access Key, which can be found in the AWS Lambda portal. After you have logged in, click your name in the upper right-hand corner and select “My Security Credentials” from the dropdown menu. Click “Access Keys” to get your `AWS Access Key ID`.

Your `AWS Secret Access Key` cannot be retrieved from this page. If you previously created an `AWS Secret Access Key`, you should have downloaded a file with both the `Access Key ID` and `Secret Access Key`. If you don’t have a `Secret Access Key`, or you have lost yours, you can create a new one on this page. While you’re here, take note of your region name as well.

Next, open up a command line and type:

`$ aws configure`

This command is interactive, and it will let you input each of the following pieces of information:

```
$ aws configure
AWS Access Key ID [None]: [ID found on "My Security Credentials" screen]
AWS Secret Access Key [None]: [Secret Key found on "My Security Credentials" screen]
Default region name [None]: [Region found on "My Security Credentials" screen]
Default output format [None]: [Enter]
```

## 3. Deploy Your Updates

Once you have updates to your code, open up the command line and navigate to the directory where your Alexa Skill code is.

*Note: this assumes that you have already manually deployed your code through the AWS Lambda portal when you initially set up your Skill.*

In the photo above, the name of my Skill is `AlexaTest`, and the name of my handler is `lambda_function`. You will first need to zip up your directory. You can do this from the command line if you’d like:

`$ zip -r -X lambda_function.zip lambda_function.py`

Here, I am zipping my file into an archive with `lambda_function` as the name. It is important that this name matches the handler that I set up in the AWS Lambda portal. We used Python for our Alexa Skill, so all of our code is in the `lambda_function.py` file.
*Note: -r tells zip to recurse into directories, and -X tells zip to exclude extra file attributes.*

Let’s push our (zipped) changes to AWS Lambda using the AWS CLI:

`$ aws lambda update-function-code --function-name 'AlexaTest' --zip-file 'fileb://lambda_function.zip'`

The function name needs to match my Skill name (AlexaTest), and again, I’m using `lambda_function.zip`. I also need the `fileb://` prefix. Go to the AWS Lambda Portal to see if your code was updated. If something went wrong, double-check your Skill and handler names, and make sure that the zip file contains the appropriate files.

## 4. Include Libraries

Often, you will want to include a library in your deployment. We needed to include a library to support JWT tokens. We decided to put this library in a folder called “deployment-package,” along with the Python file.

Once again, we’ll zip up our project files. Here, we’re navigating into the directory where the project files live and zipping them into an archive called `lambda_function.zip` (one directory level higher). The `*` grabs all the files in the deployment-package directory.

`$ cd deployment-package && zip -r -X ../lambda_function.zip *`

Putting this together, I have:

```
(cd deployment-package && zip -r -X ../lambda_function.zip *)
aws lambda update-function-code --function-name 'AlexaTest' --zip-file 'fileb://lambda_function.zip'
```

The parentheses around the first line denote a subshell. In this case, it means that I don’t need a `cd ..` before I make my `aws` call.

## 5. Add to Your Deployment Workflow

Lastly, you can turn these two lines into a script. We are using CircleCI for this project, and we altered our `circle.yml` file to execute our deployment script after every push to master.
```
deployment:
  production:
    branch: master
    commands:
      - pip install awscli
      - aws configure set aws_access_key_id $AWSKEY
      - aws configure set aws_secret_access_key $AWSSECRETKEY
      - aws configure set default.region us-east-1
      - aws configure set default.output json
      - ./deploy.sh
      - cp lambda_function.zip $CIRCLE_ARTIFACTS/
```

The same script could also be run as a Git hook.
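If you would rather keep the zip-and-upload logic in Python than in shell, the two steps above could be sketched as follows. This is a hedged rewrite of the article's two commands, not the author's actual `deploy.sh`; the `AlexaTest` and `deployment-package` names come from the article, and `deploy()` assumes the AWS CLI is already installed and configured:

```python
import pathlib
import subprocess
import zipfile

def make_zip(src_dir: str = "deployment-package",
             out: str = "lambda_function.zip") -> str:
    """Zip the contents of src_dir recursively, like `cd src && zip -r -X ..`."""
    src = pathlib.Path(src_dir)
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*")):
            if path.is_file():
                # Store paths relative to src_dir so the handler sits at the archive root
                zf.write(path, path.relative_to(src))
    return out

def deploy(function_name: str = "AlexaTest") -> None:
    """Push the archive to Lambda, mirroring `aws lambda update-function-code`."""
    subprocess.run(
        ["aws", "lambda", "update-function-code",
         "--function-name", function_name,
         "--zip-file", "fileb://" + make_zip()],
        check=True,
    )
```

One advantage of this form is that `make_zip` always produces a flat archive rooted at the package directory, which avoids the common mistake of zipping the folder itself and ending up with `deployment-package/lambda_function.py` inside the archive, where Lambda cannot find the handler.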
true
true
true
How to use AWS CLI to auto-deploy an Amazon Alexa skill. We decided to find a way to auto-deploy our Skill with each Git commit.
2024-10-12 00:00:00
2017-01-25 00:00:00
https://spin.atomicobjec…d-image-spin.jpg
article
atomicobject.com
Atomic Object
null
null
24,137,378
https://www.microsoft.com/en-us/surface/devices/surface-duo?activetab=overview
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
2,313,532
http://www.tnr.com/article/books-and-arts/magazine/84531/end-bookstores-amazon-e-book-borders?passthru=YTI3MzgwYmE5M2JlY2ZkM2Q2Y2ZjOWYxMDRmNGFkZDg
Writer's Block
Nicole Krauss
The end of bookstores. A few weeks ago, with a small footnote by way of introduction, *The New York Times Book Review* published revamped best-seller lists that, for the first time, separately reflect the sale of e-books. The new lists were inevitable—e-books made up about 10 percent of book sales in 2010, and that number is rapidly rising. You had to read between the lines to find the real news, but there it was: To the growing list of things that will be extinct in our children's world, we can now add bookstores. Does it surprise us? Should we care?

There were booksellers in ancient Greece and Rome and the medieval Islamic world, but it was not until after the advent of printing that the modern bookstore was born. In sixteenth-century England, a license from the king was needed to print a book, and those books considered distasteful by the monarchy were suppressed; to trade in such outlawed books was a punishable offense, and yet there were booksellers and readers willing to take the risk, and these books were sold and read. The bookseller, in other words, was, from the beginning, an innately independent figure, in spirit if not by law. As the availability and variety of printed books increased, the bookseller became a curator: one who selects, edits, and presents a collection that reflects his tastes.

To walk into a modern-day bookstore is a little bit like studying a single photograph out of the infinite number of photographs that could be taken of the world: It offers the reader a frame. Within that frame, she can decide what she likes and doesn't like, what is for her and not for her. She can browse, selecting this offering and rejecting that, and in this way she can begin to assemble a program of taste and self. The Internet has co-opted the word “browse” for its own purposes, but it’s worth pointing out the difference between browsing in a virtual realm and browsing in the actual world.
Depending on the terms entered, an Internet search engine will usually come up with hundreds, thousands, or millions of hits, which a person can then skate through, clicking when she sees something that most closely echoes her interest. It is a curious quality of the Internet that it can be composed of an unfathomable multitude and, at the same time, almost always deliver to the user the bits that feed her already-held interests and confirm her already-held beliefs. It points to a paradox that is, perhaps, one of the most critical of our time: To have access to everything may be to have nothing in particular. After all, what good does this access do if we can only find our way back to ourselves, the same selves, the same interests, the same beliefs over and over? Is what we really want to be solidified, or changed? If solidified, then the Internet is well-designed for that need. But, if we wish to be changed, to be challenged and undone, then we need a means of placing ourselves in the path of an accident. For this reason, the plenitude may narrow the mind. Amazon may curate the world for you, but only by sifting through your interests and delivering back to you variations on your well-rehearsed themes: Yes, I do love Handke! Yes, I had been meaning to read that obscure play by Thomas Bernhard! A bookstore, by contrast, asks you to scan the shelves on your way to looking for the thing you had in mind. You go in meaning to buy Hemingway, but you end up with Homer instead. What you think you like or want is not always what you need. A bookstore search inspires serendipity and surprise. It’s a revealing experiment to put side by side bookstores and the Internet—or even just Google Books, which now offers 15 million of the world’s 130 million unique books. Both the Internet and Google Books strive to assemble the known world. 
The bookstore, on the other hand, strives to be a microcosm of it, and not just any microcosm but one designed—according to the principles and tastes of a “gatekeeper”—to help us absorb and consider the world itself. That difference is everything. To browse online is to enter into a search that allows one to sail, according to an idiosyncratic route formed out of split-second impulses, across the surface of the world, sometimes stopping to randomly sample the surface, sometimes not. It is only an accelerated form of tourism. To browse in a bookstore, however, is to explore a highly selective and thoughtful collection of the world—thoughtful because hundreds of years of thinkers, writers, critics, teachers, and readers have established the worth of the choices. Their collective wisdom seems superior, for these purposes, to the Web’s “neutrality,” its know-nothing know-everythingness. **The other day** I rented *Hannah and Her Sisters* after having not seen it for many years. I was struck by how many chance meetings and conversations took place in book or record stores. No one would ever turn to Woody Allen for a realistic portrayal of New York, but, all the same, it surprised me that back then there were so many book and record stores to film in. They used to say about New York that, if you get rid of the drug dealers, people will stop using. It makes one wonder if the phrase carries: Get rid of the bookstores, and people will stop reading. Recently, the largest independent bookstore in the country, Powell’s, laid off 31 employees. In a release, the company cited the changes that technology has forced on the book industry. And Borders—which was once a predator upon other, smaller bookshops—has filed for bankruptcy after seeing sales fall by double-digit percentages in 2008, 2009, and in each quarter in 2010. There are many reasons for the decline of bookstores.
Blame the business model of superstores, blame Amazon, blame the shrinking of leisure time, blame a digital age that offers so many bright, quick things, which have crippled our ability for sustained concentration. You can even blame writers, if you want, because you think they no longer produce anything vital to the culture or worth reading. Whatever the case, it is an historical fact that the decline of the bookstore and the rise of the Internet happened simultaneously; one model of the order and presentation of knowledge was toppled and superseded by another. For bookstores, e-books are only the nail in the coffin. Or are they? Or rather, should we let them be? What, really, is the difference if we can still download all those books we once might have bought in a real-world store? Presumably, once bookstores are gone—and the professional literary critic (those widely and deeply read Edmund Wilsons, Alfred Kazins, and Susan Sontags who once told us not just what books to read but also taught us how to read), no longer considered valuable enough to seriously employ, is completely dead—other “content curators” will rise up in their place to build a bridge between the reader and the books that were meant for her. What’s so terrible about that? But here we run into a strange misapprehension about digital culture and commerce. The accepted notion that the Internet, as an open-access forum, has disseminated power and influence and opened the door to seemingly endless variety is true in many instances, but not always. If creative copyright laws remain in place, most books will continue to be available only at a price. (If such laws don’t remain in place, most of the future’s great books will not get written.) 
Merchants will still control the sale of books, and, at least for the foreseeable future, those merchants will be a handful of corporations like Apple, whose “curatorial” decisions are based solely on profitability, their selections determined by the best-seller list, which is itself determined by corporations like Apple, so that the whole thing takes on the form of a snake eating itself. Looking over these newly expanded and increasingly desperate best-seller lists—to hardcover (fiction and nonfiction), paperback (trade fiction, mass-market fiction, and nonfiction), and a number of other specialized lists (advice, how-to, and miscellaneous; children’s—further divided into picture books, chapter books, paperbacks, and series) have now been added a list of e-book sales, for a total of six whole pages of print—one can’t help but wonder why. Why do we care about best-sellers? Why does *The New York Times Book Review*, one of the last book-review sections of a national newspaper left in this country, dedicate six pages that might otherwise be given over to reflection on books to their commercial ranking instead? If between the lines of those new best-seller lists is an obituary for bookstores, there is also one for *The New York Times Book Review* itself: Soon all that might be left of it is a bundle of best-seller lists. It is not the notion of a best-seller list that rankles: Commerce is a part of literary life, and the commercial distinction of a serious book—not everything that sells well is dross—lifts the spirits and the bottom lines of publishers and writers. But six pages of Dow Jones-like charts? Why this obsession with the money side, even while everyone agrees that salability has little relationship to quality? The independent spirit of the bookstore is, at its best, a much-needed bulwark against this obsession. Yes, the technology is real, and, yes, e-books will exist—but why to the exclusion of books and bookstores?
Is convenience really the highest American value? When you download an e-book, it is worth stopping to consider what you are choosing, why, and what your choice means. If enough people stop taking their business to bookstores, bookstores—all bookstores—will close. And that, in turn, will threaten a set of values that has been with us for as long as we have had books. *Nicole Krauss is the author, most recently, of* Great House. *This article originally ran in the March 24, 2011, issue of the magazine.*
true
true
true
The end of bookstores. A few weeks ago, with a small footnote by way of introduction, The New York Times Book Review published revamped best-seller lists that, for the first time, separately reflect the sale of e-books. The new lists were inevitable—e-books made up about 10 percent of book sales in 2010, and that number is rapidly rising. You had to read between the lines to find the real new...
2024-10-12 00:00:00
2011-03-03 00:00:00
https://images.newrepubl…&fit=crop&fm=jpg
article
newrepublic.com
The New Republic
null
null
10,377,472
http://www.motherjones.com/media/2015/10/book-review-devils-chessboard-david-talbot
This is the guy who quietly made the CIA more powerful than God
Aaron Wiener
“What follows,” David Talbot boasts in the prologue to his new book *The Devil’s Chessboard*, “is an espionage adventure that is far more action-packed and momentous than any spy tale with which readers are familiar.” Talbot, the founder of *Salon.com* and author of the Kennedy clan study *Brothers*, doesn’t deal in subtlety in his biography of Allen Dulles, the CIA director under presidents Eisenhower and Kennedy, the younger brother of Secretary of State John Foster Dulles, and the architect of a secretive national security apparatus that functioned as essentially an autonomous branch of government. Talbot offers a portrait of a black-and-white Cold War-era world full of spy games and nuclear brinkmanship, in which everyone is either a good guy or a bad guy. Dulles—who deceived American elected leaders and overthrew foreign ones, who backed ex-Nazis and thwarted left-leaning democrats—falls firmly in the latter camp. *Mother Jones* chatted with Talbot about the reporting that went into his 704-page doorstop, the controversy he invited with his discussion of Kennedy-assassination conspiracy theories, and the parallels he sees in today’s government intelligence overreach. **Mother Jones:** You seem to have a thing for brothers—particularly for younger brothers in the shadow of their more prominent older brothers. As it happens, you yourself have a successful older brother—former child actor and Emmy Award-winning broadcast journalist Stephen Talbot. Do you see yourself in Allen Dulles or in Bobby Kennedy? **David Talbot:** No one has pointed that particular analogy out before. But definitely it’s there. I had a very close relationship and still do with my older brother. We both went into progressive media work, and live in the same city still, San Francisco, and have worked together off and on over the years. So I guess I have a feel for what that chemistry is like between brothers. 
**MJ:** Given that Allen Dulles isn’t exactly a household name these days, did you feel the need to inject your book with extra drama? **DT:** No, because I actually do think the history is so epic that it actually kind of writes itself. Dulles is not a household name anymore. He was at the time, though, particularly as part of this two-brother team. He was on the cover of all the magazines. For a spy, he was kind of a glory hog. But what I was really trying to do was a biography on the American power elite from World War II up to the 60s. That was the key period when the national security state was constructed in this country, and where it begins to overshadow American democracy. It’s almost like *Game of Thrones* to me, where you have the dynastic struggles between these power groups within the American system for control of the country and the world. **MJ:** Is that why you chose not to include much about Dulles’ childhood or his internal strife or the other types of things that tend to dominate biographies? **DT:** I focused on those elements that I thought were important to understanding him. I thought other books covered that ground fairly well before me. But what they left out was the interesting nuances and shadow aspects of Dulles’s biography. I think that you can make a case, although I didn’t explicitly say this in the book, for Allen Dulles being a psychopath. They’ve done studies of people in power, and they all have to be, to some extent, on the spectrum. You have to be unfeeling to a certain extent to send people to their death in war and take the kind of actions that men and women in power routinely have to take. But with Dulles, I think he went to the next step. His own wife and mistress called him “the Shark.” His favorite word was whether you were “useful” to him or not. And this went for people he was sleeping with or people he was manipulating in espionage or so on. 
He was the kind of man that could cold-bloodedly, again and again, send people to their death, including people he was familiar with and supposedly fond of. There’s a thread there between people like Dulles up through Dick Cheney and [Donald] Rumsfeld—who was sitting at Dulles’s knee at one point. I was fascinated to find that correspondence between a young Congressman Rumsfeld and Allen Dulles, who he was looking to for wisdom and guidance as a young politician. **MJ:** I’m interested to hear you mention Rumsfeld. Do you think the Bush years compared in ruthlessness or secrecy to what was going on under Dulles? **DT:** Definitely. That same kind of dynamic was revived or in some ways expanded after 9/11 by the Bush-Cheney-Rumsfeld administration. Those guys very much were in keeping with the sort of Dulles ethic, that of complete ruthlessness. It’s this feeling of unaccountability, that democratic sanctions and regulations don’t make sense in today’s ruthless world. **MJ:** And do you see echoes of the apparatus that Dulles created in some of the debates today over spying on allies and collection of cellphone records? **DT:** Absolutely. The surveillance state that Snowden and others have exposed is very much a legacy of the Dulles past. I think Dulles would have been delighted by how technology and other developments have allowed the American security state to go much further than he went. He had to build a team of cutthroats and assassins on the ground to go around eliminating the people he wanted to eliminate, who he felt were in the way of American interests. He called them communists. We call them terrorists today. And of course the most controversial part of my book, I’m sure, will be the end, where I say there was blowback from that. Because that killing machine in some way was brought back home. **MJ:** Let’s talk about that. For 500 pages of the book you lay out Dulles’s acquisition and use and abuse of power in and out of the CIA. 
And then at the end you take a deep dive back into some of the Kennedy assassination conspiracy ideas that you explored in *Brothers*. It’s not an uncontroversial subject. Did you worry that including that might color the reaction to the rest of the book? **DT:** Yeah, you always worry, because unfortunately this climate has been created over the years that discourages and intimidates scholars and journalists and investigators from looking into these dark corners in American life that should be examined. Poll after poll for the last 50 years has shown that most American people don’t accept the official version. The only people who do are the media establishment and the political establishment, at least in public. To me it’s one of the greatest examples of media incompetence and negligence in American history. I even confronted Ben Bradlee about this, who was probably JFK’s closest friend in the Washington press corps and wrote a book all about JFK and their close friendship. “Why didn’t you, with your investigative resources, try to get [to] the bottom of it?” You should read what he says in *Brothers*, but basically it came down to, “Well, I thought it would ruin my career.” I think I have studied this about as much as anyone in my generation at this point, and my final conclusion after 50 years was we have to go there, we have to look at the fact that there’s a wealth of circumstantial evidence that says not only was there, at the highest level, CIA involvement. Probably in the assassination cover-up. But beyond the CIA, because the CIA wouldn’t have acted on its own. During the Kennedy period, there was a sense that he’d broken from the Cold War hegemony and that he was putting the country at risk, and that he was a young, untested president. He was maybe cowardly. He was physically not fit. So they just felt, for the good of the nation, that as painful as it probably was to do, he had to be removed. That’s what I think the consensus finally was about him. 
And Dulles would have been the person, as the executor of this kind of security wing of the American establishment, who would have been given this job. **MJ:** Given that exploring these theories has been perceived as a career-killer, did you not have those same fears yourself? **DT:** If you have fears at 63 after a career in journalism like I have, taking the risks I have, then you don’t belong in journalism. That’s what journalism should be all about: taking risks and asking the questions that no one else is. **MJ:** Alright, last question for you. [*Connection cuts out. MJ calls DT back.*] **DT:** Aaron? There you are. They’re fucking with us again! The NSA! **MJ:** The NSA, of course. Okay, so: When the *Devil’s Chessboard* movie comes out, who should play Allen Dulles? **DT:** [*Laughs*.] That’s a very good question. In fact, the book is being read widely in Hollywood now, and I have no idea. But there have been some interesting suggestions. One is William Hurt, who kind of looks like him now in his older age. You know, to tell you the truth, we’ll see if Hollywood will be willing to take this on. *Brothers* had a long and winding road in Hollywood. And it was about to go many different times and then the plug was pulled on it. I still think this is kind of a verboten subject in Hollywood, particularly the Kennedy stuff. But, you know, we’ll see. We’ll see if they’re braver with this one.
true
true
true
In a new book, David Talbot makes the case that the CIA head under Eisenhower and Kennedy may have been a psychopath.
2024-10-12 00:00:00
2015-10-10 00:00:00
https://www.motherjones.…200&h=630&crop=1
article
motherjones.com
Mother Jones
null
null
31,333,982
https://stockmarketgame.net/blog/unusual-stock-trading-by-the-members-of-the-us-congress/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
16,121,065
https://www.economist.com/news/technology-quarterly/21733192-once-data-have-been-extracted-brain-how-can-they-be-employed-best
Turning brain signals into useful information
null
# Turning brain signals into useful information ## Once data have been extracted from the brain, how can they be employed to best effect? FOR those who reckon that brain-computer interfaces will never catch on, there is a simple answer: they already have. Well over 300,000 people worldwide have had cochlear implants fitted in their ears. Strictly speaking, this hearing device does not interact directly with neural tissue, but the effect is not dissimilar. A processor captures sound, which is converted into electrical signals and sent to an electrode in the inner ear, stimulating the cochlear nerve so that sound is heard in the brain. Michael Merzenich, a neuroscientist who helped develop them, explains that the implants provide only a crude representation of speech, “like playing Chopin with your fist”. But given a little time, the brain works out the signals. This article appeared in the Technology Quarterly section of the print edition under the headline “Translation required”
true
true
true
Once data have been extracted from the brain, how can they be employed to best effect?
2024-10-12 00:00:00
2018-01-04 00:00:00
https://www.economist.co…106_TQD005_4.jpg
Article
economist.com
The Economist
null
null
14,450,137
https://www.npmjs.com/package/npx
npx
null
# npx(1) -- execute npm package binaries

## SYNOPSIS

`npx [options] <command>[@version] [command-arg]...`

`npx [options] [-p|--package <pkg>]... <command> [command-arg]...`

`npx [options] -c '<command-string>'`

`npx --shell-auto-fallback [shell]`

## INSTALL

`npm install -g npx`

## DESCRIPTION

Executes `<command>` either from a local `node_modules/.bin`, or from a central cache, installing any packages needed in order for `<command>` to run.

By default, `npx` will check whether `<command>` exists in `$PATH`, or in the local project binaries, and execute that. If `<command>` is not found, it will be installed prior to execution.

Unless a `--package` option is specified, `npx` will try to guess the name of the binary to invoke depending on the specifier provided. All package specifiers understood by `npm` may be used with `npx`, including git specifiers, remote tarballs, local directories, or scoped packages.

If a full specifier is included, or if `--package` is used, npx will always use a freshly-installed, temporary version of the package. This can also be forced with the `--ignore-existing` flag.

- `-p, --package <package>` - define the package to be installed. This defaults to the value of `<command>`. This is only needed for packages with multiple binaries if you want to call one of the other executables, or where the binary name does not match the package name. If this option is provided, `<command>` will be executed as-is, without interpreting `@version` if it's there. Multiple `--package` options may be provided, and all the packages specified will be installed.
- `--no-install` - If passed to `npx`, it will only try to run `<command>` if it already exists in the current path or in `$prefix/node_modules/.bin`. It won't try to install missing commands.
- `--cache <path>` - set the location of the npm cache. Defaults to npm's own cache settings.
- `--userconfig <path>` - path to the user configuration file to pass to npm. Defaults to whatever npm's current default is.
- `-c <string>` - Execute `<string>` inside an `npm run-script`-like shell environment, with all the usual environment variables available. Only the first item in `<string>` will be automatically used as `<command>`. Any others *must* use `-p`.
- `--shell <string>` - The shell to invoke the command with, if any.
- `--shell-auto-fallback [<shell>]` - Generates shell code to override your shell's "command not found" handler with one that calls `npx`. Tries to figure out your shell, or you can pass its name (either `bash`, `fish`, or `zsh`) as an option. See below for how to install.
- `--ignore-existing` - If this flag is set, npx will not look in `$PATH`, or in the current package's `node_modules/.bin`, for an existing version before deciding whether to install. Binaries in those paths will still be available for execution, but will be shadowed by any packages requested by this install.
- `-q, --quiet` - Suppresses any output from npx itself (progress bars, error messages, install reports). Subcommand output itself will not be silenced.
- `-n, --node-arg` - Extra node argument to supply to node when the binary is a node script. You can supply this option multiple times to add more arguments.
- `-v, --version` - Show the current npx version.

## EXAMPLES

### Running a project-local bin

```
$ npm i -D webpack
$ npx webpack ...
```

### One-off invocation without local installation

```
$ npm rm webpack
$ npx webpack -- ...
$ cat package.json
...webpack not in "devDependencies"...
```

### Invoking a command from a github repository

```
$ npx github:piuccio/cowsay
...or...
$ npx git+ssh://my.hosted.git:cowsay.git#semver:^1
...etc...
```

### Execute a full shell command using one npx call w/ multiple packages

```
$ npx -p lolcatjs -p cowsay -c \
  'echo "$npm_package_name@$npm_package_version" | cowsay | lolcatjs'
...
 _____
< [email protected] >
 -----
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```

### Run node binary with --inspect

```
$ npx --node-arg=--inspect cowsay
Debugger listening on ws://127.0.0.1:9229/....
```

### Specify a node version to run npm scripts (or anything else!)

```
npx -p node@8 npm run build
```

## SHELL AUTO FALLBACK

You can configure `npx` to run as your default fallback command when you type something in the command line with an `@` but the command is not found. This includes installing packages that were not found in the local prefix either.

For example:

```
$ npm@4 --version
(stderr) npm@4 not found. Trying with npx...
4.6.1
$ asdfasdfasf
zsh: command not found: asfdasdfasdf
```

Currently, `zsh`, `bash` (>= 4), and `fish` are supported. You can access these completion scripts using `npx --shell-auto-fallback <shell>`.

To install permanently, add the relevant line below to your `~/.bashrc`, `~/.zshrc`, or `~/.config/fish/config.fish`, as needed. To install just for the shell session, simply run the line.

You can optionally pass through `--no-install` when generating the fallback to prevent it from installing packages if the command is missing.

### For bash@>=4:

```
$ source <(npx --shell-auto-fallback bash)
```

### For zsh:

```
$ source <(npx --shell-auto-fallback zsh)
```

### For fish:

```
$ source (npx --shell-auto-fallback fish | psub)
```

## ACKNOWLEDGEMENTS

Huge thanks to Kwyn Meagher for generously donating the package name in the main npm registry. Previously `npx` was used for a Tessel board Neopixels library, which can now be found under `npx-tessel`.

## AUTHOR

Written by Kat Marchan.

## REPORTING BUGS

Please file any relevant issues on Github.

## LICENSE

This work is released by its authors into the public domain under CC0-1.0. See `LICENSE.md` for details.

## SEE ALSO

- `npm(1)`
- `npm-run-script(1)`
- `npm-config(7)`
true
true
true
execute npm package binaries. Latest version: 10.2.2, last published: 5 years ago. Start using npx in your project by running `npm i npx`. There are 169 other projects in the npm registry using npx.
2024-10-12 00:00:00
2020-01-28 00:00:00
https://static-productio…c7780fc68412.png
null
npmjs.com
Npm
null
null
27,057,396
https://www.coglode.com/story/from-booze-to-cheers
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,410,599
http://www.gartner.com/newsroom/id/3570917
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,715,896
http://themacro.com/articles/2016/05/vote-org/
Vote.org is a non-profit that wants to get the U.S. to 100% voter turnout
Meet the Batch
*Vote.org has helped more than 1.5 million people register to vote.* #### What it does Vote.org makes it easy to vote. Their mission is to get to 100% voter turnout in the United States. To accomplish that, they’re building tech that removes barriers to voting. On Vote.org, you can register to vote, check your voter registration status, or get an absentee ballot. #### The problem Voter turnout in the United States is notoriously low. According to the U.S. Election Project, the highest turnout we’ve seen since 1980 was during the 2008 presidential election, and even then, only 61.6% of the eligible voting population went to the polls. “The enormity of the problem keeps me up at night. We are a nation of nonvoters, and trying to address that quickly and without spending billions of dollars keeps me awake,” says founder Debra Cleaver. Many believe turnout is low because Americans aren’t interested in politics—but that simply isn’t true. The real issue is the antiquated process that has become a barrier for many people. If you visit any government website to learn how to update your voter status or register to vote, you’re inundated with information. Furthermore, voter ID laws enacted in a number of states prevent many people with no permanent address or a different address from voting. This immediately adds a layer of difficulty for college students or anyone who doesn’t have a driver’s license. “Young voters, low income voters, and voters of color are more likely to face trouble voting at the polls than older, whiter and wealthier voters," Cleaver says. By removing the roadblocks that prevent disenfranchised groups from voting, Vote.org empowers people to fight back against a broken system. Unlike typical nonprofits that rely on grants and fundraising, Vote.org offers technology that can be white-labeled for a fee. This approach both increases the number of people voting, and dramatically reduces the cost of registering a voter. #### Why now? 
We do almost everything online, and we need to make voting just as accessible as sharing photos and accessing bank statements. With the presidential election approaching this November, it’s more important than ever to shine a spotlight on voter turnout, and get more American voices heard. #### What's next? Today, Vote.org makes it easy for anyone to register to vote and get their absentee ballot -- and they want people to be able to do both from their smartphones. Eventually, they plan to streamline the voting process to allow voters in all 50 states to register to vote and cast their ballots simply by signing their name on their smartphone. #### What we liked about Vote.org “We want to see 100% voter turnout, and we believe technology like Vote.org is the key to making that happen.” -Tim Brady, Partner, Y Combinator“Debra is a force of nature. She has an incredible passion for this mission. We were impressed that Vote.org has already made such a great impact and hope we can help in scaling the reach of the organization.” - Geoff Ralston, Partner, Y Combinator #### About the founder For the last eight years, Debra Cleaver has focused on making it easy for people to get their absentee ballot and vote from anywhere with an organization called “Long Distance Voter,” which recently relaunched as Vote.org. The organization helped over 1.5 million people register to vote with only a budget of $366k. Prior to Long Distance Voter, Debra worked as a product manager at Change.org.
true
true
true
Read more on Y Combinator's blog.
2024-10-12 00:00:00
2016-05-01 00:00:00
/images/ycombinator-logo-fb889e2e.png
website
themacro.com
Ycombinator
null
null
23,070,316
https://stellarupdate.com/
NFT and Cryptocurrency News
null
Crypto News
## Stellar Transactions are no longer supported in the Keybase app
October 14, 2023

NFT News
## Justin Bieber NFT down 95 percent
July 4, 2023

Crypto News
## Coinme adds support for USDC on Stellar network
March 28, 2023

Crypto News
## Euro Stablecoin EURC coming to Pendulum
March 21, 2023

Crypto News
## USDC used for international aid payments in Ukraine
March 20, 2023

NFT News
## Meta removes NFT feature from Facebook and Instagram
March 15, 2023

NFT News
## WazirX NFT Marketplace closes down
February 26, 2023

NFT News
## Radoko and Fitz_lol rug pulled different NFT projects
January 27, 2023
true
true
true
NFTputing is blog dedicated to cryptocurrency news, blockchain and NFT developments. Find NFT News, cyrpto announcements and development updates.
2024-10-12 00:00:00
2023-10-14 00:00:00
null
website
nftputing.com
NFT and Cryptocurrency News
null
null
6,946,976
http://funkatron.com/posts/empathy-is-our-most-important-attribute.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,683,625
https://blog.rstudio.org/2016/10/05/r-notebooks/
R Notebooks - Posit
Posit Team
null
true
true
true
null
2024-10-12 00:00:00
2016-10-05 00:00:00
https://posit.co/wp-cont…/r-notebooks.jpg
article
posit.co
Posit
null
null
14,864,138
https://github.com/lord/slate
GitHub - slatedocs/slate: Beautiful static documentation for your API
Slatedocs
Slate helps you create beautiful, intelligent, responsive API documentation. *The example above was created with Slate. Check it out at slatedocs.github.io/slate.* - **Clean, intuitive design**— With Slate, the description of your API is on the left side of your documentation, and all the code examples are on the right side. Inspired by Stripe's and PayPal's API docs. Slate is responsive, so it looks great on tablets, phones, and even in print. - **Everything on a single page**— Gone are the days when your users had to search through a million pages to find what they wanted. Slate puts the entire documentation on a single page. We haven't sacrificed linkability, though. As you scroll, your browser's hash will update to the nearest header, so linking to a particular point in the documentation is still natural and easy. - **Slate is just Markdown**— When you write docs with Slate, you're just writing Markdown, which makes it simple to edit and understand. Everything is written in Markdown — even the code samples are just Markdown code blocks. - **Write code samples in multiple languages**— If your API has bindings in multiple programming languages, you can easily put in tabs to switch between them. In your document, you'll distinguish different languages by specifying the language name at the top of each code block, just like with GitHub Flavored Markdown. - **Out-of-the-box syntax highlighting**for over 100 languages, no configuration required. - **Automatic, smoothly scrolling table of contents**on the far left of the page. As you scroll, it displays your current position in the document. It's fast, too. We're using Slate at TripIt to build documentation for our new API, where our table of contents has over 180 entries. We've made sure that the performance remains excellent, even for larger documents. - **Let your users update your documentation for you**— By default, your Slate-generated documentation is hosted in a public GitHub repository. 
Not only does this mean you get free hosting for your docs with GitHub Pages, but it also makes it simple for other developers to make pull requests to your docs if they find typos or other problems. Of course, if you don't want to use GitHub, you're also welcome to host your docs elsewhere. - **RTL Support**— Full right-to-left layout for RTL languages such as Arabic, Persian (Farsi), Hebrew etc. Getting started with Slate is super easy! Simply press the green "use this template" button above and follow the instructions below. Or, if you'd like to check out what Slate is capable of, take a look at the sample docs. To get started with Slate, please check out the Getting Started section in our wiki. We support running Slate in three different ways: You can view more in the list on the wiki. If you've got questions about setup, deploying, special feature implementation in your fork, or just want to chat with the developer, please feel free to start a thread in our Discussions tab! Found a bug with upstream Slate? Go ahead and submit an issue. And, of course, feel free to submit pull requests with bug fixes or changes to the `dev` branch. Slate was built by Robert Lord while at TripIt. The project is now maintained by Matthew Peveler and Mike Ralphson. Thanks to the following people who have submitted major pull requests: Also, thanks to Sauce Labs for sponsoring the development of the responsive styles.
true
true
true
Beautiful static documentation for your API. Contribute to slatedocs/slate development by creating an account on GitHub.
2024-10-12 00:00:00
2013-09-13 00:00:00
https://opengraph.githubassets.com/8991553339debf4e2aa17f685f4773e87479a9c85fd3d88d4a39de65c7586fc7/slatedocs/slate
object
github.com
GitHub
null
null
39,333,068
https://literaryreview.co.uk/anatomist-of-evil
Stuart Jeffries - Anatomist of Evil
Robin Simon
# Stuart Jeffries #### Anatomist of Evil ### We Are Free to Change the World: Hannah Arendt’s Lessons in Love and Disobedience ## By Lyndsey Stonebridge ##### Jonathan Cape 304pp £22 When Hannah Arendt looked at the man wearing an ill-fitting suit in the bulletproof dock inside a Jerusalem courtroom in 1961, she saw something different from everybody else. The prosecution, writes Lyndsey Stonebridge, ‘saw an ancient crime in modern garb, and portrayed Eichmann as the latest monster in the long history of anti-Semitism who had simply used novel methods to take hatred for Jews to a new level’. Arendt thought otherwise. Adolf Eichmann was on trial after being captured by Israeli agents in Argentina and brought to Israel to face charges of being a leading organiser of the Holocaust. Arendt was there to report on the trial for the *New Yorker*. The commission would lead to Arendt’s most famous book, *Eichmann in Jerusalem *(1963). Arendt was the ideal woman for the job: she was not just a Jewish refugee who had fled Hitler in 1933, and a philosopher who had studied with and loved the one-time Nazi Martin Heidegger, but also someone who had reinvented herself in America as a journalist and political theorist, and the author of *The Origins of Totalitarianism*, about the rise of Nazism and Stalinism. She was, if anything, overqualified. But what did Arendt see in Eichmann? Stonebridge tells us that Arendt put on dark glasses in court to shield her eyes, not from the former SS-Obersturmbannführer’s diabolical aura but rather from the TV lights: this was a media event without precedent, beamed live across the world. Eichmann was a mass murderer deludedly vain enough to boast to a court teeming with Holocaust survivors that he had insisted on limiting the number of persons per cattle truck because the conditions were so inhumane. Arendt, relates Stonebridge, was dumbfounded at Eichmann’s ‘lack of moral, social, historical, of *human* awareness’. 
Arendt wrote: ‘The longer one listened to him, the more it became obvious that his inability to speak was closely connected to his inability to think, namely to think from the standpoint of somebody else.’ When Arendt wrote of the banality of evil, the phrase that of all the millions of words she wrote has survived her death in 1975, it was this deficiency she was indicting. ‘Eichmann was not stupid, but rather intelligent,’ she told the historian Joachim Fest. ‘But it was his thickheadedness that was so outrageous, as if speaking to a brick wall. And that was what I actually meant by banality … There’s simply resistance ever to imagine what another person is experiencing.’ Stonebridge, professor of humanities and human rights at the University of Birmingham, has written a sometimes infuriating yet often scintillating and always bracing book. It is in part a biography and in part a meditation on what, if anything, the long-dead philosopher has to say to us today. Stonebridge takes inspiration from the fact that when Donald Trump became president in 2016, Arendt’s *The Origins of Totalitarianism* got a sales bump. Clearly some thought Arendt’s dissection of Nazism could help us understand 21st-century populism’s leading monster. Moreover, this is a book in which Stonebridge conducts a bold literary experiment. When she writes about what Arendt saw in Eichmann, she does something extraordinary – and in keeping with her heroine’s political philosophy. She puts herself in Arendt’s place and imagines how she, Stonebridge, might have regarded that putatively banal devil. Stonebridge claims justification for her method by citing how Arendt was committed to what Immanuel Kant called ‘an enlarged mentality’. Arendt told her students at New York’s New School in 1968 what that meant: ‘You think your own thoughts but in the place of somebody else.’ An enlarged mentality, though, is not the same as empathy, which involves sharing the feelings of another. 
Arendt didn’t do empathy. Indeed for her, just as for Kant, feelings could get in the way of proper moral judgement. Arendt was raised in the German city of Königsberg (now called Kaliningrad and part of Russia), where Kant had lived and taught. She read his *Critique of Pure Reason* at sixteen and emerges from Stonebridge’s pages as more marked by Kantian thought on how we should act than by anything she learned in Heidegger’s lectures or bed. For me, at least, Stonebridge’s approach produced confusion. Sometimes I couldn’t be certain that I wasn’t reading a novelettish passage of poetic licence rather than about what actually happened. For instance, Stonebridge writes: ‘On 4 October 1957 … Hannah Arendt looked up at the night sky from her window on the Upper West Side and wondered whether she would catch a glimpse of Sputnik, the first satellite to orbit the earth.’ Did she? Or is this Stonebridge’s enlarged mentality journeying where other biographers would not dare to go? She describes Kant walking through Königsberg, thinking ‘crossly’ about why it was *Vernunft* (‘reason’) rather than *Verstand* (‘understanding’) that conferred freedom on humans. I’m sure Kant was thinking, but did he think *crossly*? This notion of an enlarged mentality is key to how Arendt approached real-life political problems, often with disastrous consequences. Consider Arendt’s take on what happened in Little Rock, Arkansas, on the morning of 3 September 1957. Elizabeth Eckford was surrounded by a mob of white youths screaming that they wanted to lynch her as she walked to school. The fifteen-year-old African-American was one of the Little Rock Nine, a group of students fighting for their constitutional rights following the 1954 Supreme Court ruling effectively outlawing racial segregation in schools. Arendt’s perspective on this incident was, to put it mildly, obtuse. ‘Have we now come to the point where it is the children who are being asked to change the world?’ she asked. 
‘And do we intend our political battles to be fought out in the school yards?’ Had Arendt asked the nine, their parents or any of the African-Americans who fought for civil rights, they might have told her the answer to both questions was ‘yes’. Arendt thought otherwise: ‘My first question was, what would you do if you were a Negro mother? … The answer: under no conditions would I expose my child to conditions which made it appear as though it wanted to push its way into a group where it was not wanted.’ Her short-sightedness on this undermines Stonebridge’s insistence that Arendt is valuable to us now because she helps us think like a refugee, that her outsider’s queering perspective might yield ethical and political insights. To her credit, Arendt later admitted that she should have known better, writing to the great African-American novelist Ralph Ellison a decade on, ‘It is precisely the “ideal of sacrifice” which I didn’t understand and … this failure to understand caused me indeed to go in an entirely wrong direction.’ But the point for me in this passage is not that it shows how a white German-Jewish refugee couldn’t grasp a black American girl’s struggle (Arendt could have done if she’d tried harder and thought better), but rather how her notion of ‘enlarged mentality’ has its shortcomings. If you put yourself in someone else’s shoes but don’t take the extra step of sharing their feelings or comprehending their struggle, then politically your stance risks being not just jejune but counterproductive. Are we free to change the world, as the book’s title suggests? To answer the question, we need to go back to that Jerusalem courtroom. At one point during the trial, Eichmann invoked duty, as if to say that mass-murdering Jews was okay because his bosses told him to do it. If Eichmann had read Kant, as he claimed he had, he clearly hadn’t understood him. 
‘Kant’s whole ethics amounts to the idea that every person, in every action, must reflect on whether the maxim of his action can become a universal law,’ Arendt wrote in *On Humanity in Dark Times*. ‘In other words … it really is the complete opposite, so to speak, of obedience … In Kant, nobody has the right to obey.’ Across the decades, that sentence resounds: ‘nobody has the right to obey’. Not Eichmann, not anyone blaming others for their own failings. We are free only to the extent that we are capable of disobeying. Kant discovered, Stonebridge tells us, that it is only because we can think (which seems to boil down to reasoning about what we ought to do) that human freedom and dignity are possible. Arendt did a lot of thinking, some of it solitary, some of it with others, and a great deal of it in writings that, like the Little Rock essay, needed another draft. Her thinking is sometimes exasperatingly mutable. The critic Martin Jay called Arendt’s political theory a ‘force field’ rather than a set of doctrines. But that makes sense if you see philosophy in the way that two of the most important men in her life did. Her friend Karl Jaspers thought that the world’s philosophical systems were mythological structures devised to shelter ourselves from the hard facts of existence. Heidegger, similarly, supposed that philosophy was the ‘narcotization of anxiety’. Arendt emerges from this book as valuable precisely because she stepped away from philosophy’s patriarchal strictures and grand systems: she didn’t shelter from existence or narcotise anxiety. She philosophised not as men had but as a woman could. 
What Arendt wrote about the 18th-century philosopher and dramatist Gotthold Ephraim Lessing is true of her: ‘Instead of fixing his identity in history with a perfectly consistent system, he scattered into the world, as he himself knew, “nothing but *fermenta cognitionis*”.’ Stonebridge’s book is essential reading because she hurls us deeper into Arendt’s ferment of thinking than previous interpreters have dared to, taking the risk of showing us that there was muddle rather than method at the heart of it. I have the sense that this gripping book has emerged from the wreckage of someone else’s conception. Chapters are called things like ‘How to Think’, ‘How to Change the World’ and ‘How to Love’. It’s as if the publisher wanted Stonebridge to write another book in that bastard subgenre that involves violating a philosopher’s reputation by converting their thinking into a self-help manual (you know the drill: how Nietzsche can get you a pay rise; how Judith Butler can help you become your best self; how Iris Murdoch can improve your sex life). If so, happily Stonebridge has gone rogue, disregarding the parameters of marketability and presenting us with a Hannah Arendt who is all too human.
true
true
true
Stuart Jeffries: Anatomist of Evil - We Are Free to Change the World: Hannah Arendt’s Lessons in Love and Disobedience by Lyndsey Stonebridge
2024-10-12 00:00:00
2024-02-01 00:00:00
https://literaryreview.c…4.jpg?1728768596
article
literaryreview.co.uk
Literary Review
null
null
32,422,823
https://co2coalition.org/wp-content/uploads/2022/06/Happer-Lindzen-SEC-6-17-22.pdf
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
11,080,798
https://www.groovehq.com/blog/arguments-against-remote-work
The Most Common (Bad) Arguments Against Remote Work
Alex Turnbull
Distributed teams are getting more and more common, but too many businesses still cling to outdated assumptions For the last few years, our team has been living and breathing remote work. I’ve written about it time and time again, from the pros and cons, to the tools we use, to the all-in commitment required to really make it work. It works well for us, and for hundreds of other businesses. It can work well for a lot more. Still, I see comments and emails like this pretty often: Remote isn’t for everyone. There are companies that struggle with it, and not all employees work best in a distributed culture. And if a business thoughtfully considers their team and situation and decides that remote isn’t for them, that’s great; they should absolutely stick with that decision. But I see some knee-jerk arguments against remote work popping up time and time again that simply aren’t true. These objections are almost always built on false assumptions or refusal to see alternative approaches to achieving a goal… a “we do things this way because that’s how they’ve always been done” mindset that keeps many of us from making progress. Below are the most common anti-remote arguments I hear, and what I usually say to people when I hear them. ### 1) A “Water Cooler” Is Critical for Culture *“Your culture improves when people can congregate in the office and chat about non-work things.”* This argument isn’t entirely without merit. It’s *true* that having relationships with your co-workers that *aren’t* based 100% around work is tremendously valuable for your team’s culture. But it’s also not something that you can’t reproduce in a remote environment. For us, that water cooler is Slack. 
Rather than discouraging non-Groove discussion, we embrace it as a huge part of letting our team members show off their “real” selves, and of getting to know one another as more than just “our [developer/designer/marketer/CEO/support agent/etc…].” The water cooler isn’t what’s important here; it’s just a symbol for a place where your team shoots the breeze. And that can happen anywhere. ### 2) People Are More Productive in an Office *“Offices are built for work! That’s where people get the most work done.”* This is one of the arguments that I find most condescending and disrespectful to workers everywhere. “People” aren’t anything, and any statement about where people are most productive will only apply to *some* people. Everyone has their own work style, their own preferences, and their own “optimal” conditions. Remote work actually recognizes and embraces that. If you’re most productive in an office, you *can* work in an office. That’s what co-working spaces or Regus rentals are for. But if you’re most productive from your home, or a coffee shop, or the library, then remote work gives you the freedom to do that, too. ### 3) You Need an Office for Work/Life Separation *“People are happier when they can get into and snap out of work mode, and coming to and leaving a physical office helps them do that.”* Again, I’ve found that the concept *behind* this argument is true. Carrying your work with you in your head 24/7 is not a healthy or particularly pleasant way to live. But that separation can be achieved outside of the office, too. Some people on our team do that by working off-site. Others do it by dedicating a separate room in their home as an office. And some even simply have a dedicated desk in the corner of a room that’s *only* used for work, creating a separation in their mindset when they’re sitting there versus when they’re not. 
There are also ways to digitally accomplish this as well: turning off email notifications on my phone at the end of the day works wonders for me. ### 4) It Adds to Your Valuation *“Startups with the entire team in a single office get better valuations than remote teams.”* My thoughts on taking loads of VC money and trying to build a unicorn are well documented. It’s not for us. But even if it was, this is an argument that, if I hadn’t actually heard it from *two separate investors* looking to buy a stake in Groove, I wouldn’t believe to even exist. The fact is, *maybe* some companies looking to make an acquisition would prefer to acquire a co-located team. But you know what drives up your valuation far, far more? Revenue. Customers. Growth. These are metrics that, if focused on, will bring you *huge* gains in the value of your business. The potential gain from having a co-located team, in comparison, is marginal. Having a team in an office is an easy thing to accomplish—much easier than actually growing your business—so I can see why it would be a tempting choice for an entrepreneur looking to maximize their return. But it’s short-term, small-picture thinking. Focus on making your business as successful as you possibly can. Figure out whether you can accomplish that better with a remote team or a co-located team, and decide based on that. If you want to cash out, *that’s* what will help you create an attractive target for acquirers. If you execute well, you’ll have more offers than you know what to do with. ### 5) It Makes Meetings Easier *“Having everyone in the same room makes meetings more efficient and effective.”* While many startups rail against them, I’m not opposed to meetings. I think that sometimes, they *can* get things done faster. But I also think that meetings are far too often leaned on by people who don’t want to make a difficult decision and would rather punt it to a group. And often, these decisions aren’t even that consequential. 
The color of your CTA buttons shouldn’t require a meeting. Jason Fried puts the true cost of meetings into perspective brilliantly in his TEDx talk, where he points out that a one-hour meeting with eight people isn’t really a one-hour meeting at all; it’s a meeting that just consumed *eight hours of productive time* from your team. So, to respond to this objection, my first challenge would be that perhaps meetings *shouldn’t* be easy. If there were more barriers to meetings, we’d have fewer of them, and waste less time. But for when meetings *are* necessary, the tools exist to make them happen. I have successful meetings every day using Google Hangouts, which has only gotten better and better in the last couple of years. And there are dozens of tools like it now, depending on your needs. Whether you need to share screens, loop in people on phones, work together on documents or just about any other collaborative process you can think of, it can be done with tools that are now available and affordable. ### 6) There Are Too Many Distractions at Home *“If I can’t see what my team is doing, then how do I know they’re not sitting on Twitter all day?”* Twitter exists everywhere, whether you’re at home or at the office. And if you have so little trust in your employees that you’re blocking Twitter on your office firewall, then you have issues far bigger than social media distractions. The reality that I’ve found to be true is that if you measure people by the work they get done, rather than by the hours they spend on it, your team will accomplish more, not less. They’ll *still* take time for Twitter and Facebook and all of the distractions that exist everywhere; but their work time will be more engaged since they know their performance is being measured on output and the quality of that output, rather than simply by the time they’re spending in their chair. 
Hiring the right people—the ones who are accountable for their work—is important here, but it’s important in a traditional office too. However, in a traditional office, it’s a lot easier to make the mistake of assuming that an employee who’s present is an employee who’s producing. ## How to Apply This to Your Business Remote work isn’t for everyone. But if you’re going to think about it, then it’s important to give it a fair and thoughtful consideration rather than a knee-jerk dismissal. I hope that this post helps to clear up a few misconceptions and get a few more businesses open to trying the system that’s worked so well for us.
true
true
true
Distributed teams are getting more and more common, but too many businesses still cling to outdated assumptions
2024-10-12 00:00:00
2016-02-11 00:00:00
https://blog.groovehq.co…-remote-work.png
article
groovehq.com
Groove Blog
null
null
18,873,099
https://www.theguardian.com/sport/2019/jan/10/rich-alati-poker-player-bet-dark-room-isolation
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,070,727
http://www.danwolch.com/2018/01/flying-blind-not-setting-or-measuring-product-metric-goals/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
31,302,087
https://jmmv.dev/2022/05/rust-is-hard-but-does-it-matter.html
Rust is hard, yes, but does it matter? - Julio Merino (jmmv.dev)
Julio Merino
Rust is infamous for having a steep learning curve. The borrow checker is the first boss you must defeat, but with a good mental model of how memory works, how objects move, and the rules that the borrow checker enforces, it becomes second nature rather quickly. These rules may sound complicated, but really, they are about understanding the fundamentals of how a computer works. That said… the difficulties don’t stop there. Oh no. As you continue to learn about the language and start dealing with things like concurrency—or, God forbid, Unix signals—things can get tricky very quickly. To make matters worse, mastering idiomatic Rust and the purpose of core traits takes a lot of time. I’ve had to throw my arms up in frustration a few times so far and, while I’ve emerged from those exercises as a better programmer, I have to concede that they were exhausting experiences. And I am certainly not an expert yet. So, yes, there is no denying in saying that Rust is harder than other languages. But… does it matter in practical terms? Betteridge’s law of headlines says that we should conclude the post right here with a “no”—and I think that’s the right answer. But let’s see why. To answer this question, I’d like to imagine what would happen if we were to use Rust in a large team (say tens of people) that deals with a large codebase (hundreds of thousands of SLoC). The reason I pick this scenario is *totally* unrelated (wink, wink) to the work I do on a day-to-day basis in Azure Storage. Our current codebase is written in C++ and has its fair share of NPEs and concurrency bugs, so we have sometimes ~~argued~~ fantasized with the idea of adopting Rust. 
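The borrow checker rules mentioned above boil down to one aliasing discipline: any number of shared borrows, or exactly one mutable borrow at a time, never both at once. A tiny sketch (my own illustrative example, not code from the post or the codebase discussed) shows both the allowed pattern and where the compiler would push back:

```rust
fn main() {
    let mut s = String::from("hello");

    // Any number of shared (&) borrows may coexist...
    let a = &s;
    let b = &s;
    println!("{a} / {b}");

    // ...and a mutable (&mut) borrow is allowed once the shared borrows
    // are no longer used (non-lexical lifetimes end them at their last use).
    let m = &mut s;
    m.push_str(", world");
    println!("{m}");

    // Uncommenting the next line would be a compile error: it would extend
    // the shared borrow `a` past the &mut borrow taken above.
    // println!("{a}");
}
```

This is exactly the "second nature" the post describes: once the shared-XOR-mutable rule is internalized, most everyday code never fights the checker at all.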
An obvious concern that arises is that adding a new language to a large project is… very difficult: fighting inertia, bringing up tooling, training people, porting code without a rewrite… these are all very hard work items. But there is a more subtle worry: even if we went through this whole endeavor, would our developer population be able to learn enough Rust to contribute to the codebase in a meaningful way? The language is, again, complex at first sight, and we should not expect everyone to master it. The first observation is that, in a sufficiently large team, we will have developers with various levels of expertise no matter the language. This is expected and intentional, but depending on the language, the consequences are different. For example: C++ is *also* a very complex language. Sure, it may be easier to *write* than Rust because the compiler is more forgiving, but it’s also much harder to guarantee its correctness. This comes back to bite the developer team at a later point, because now you need to call the experts to debug crashes and race conditions. The second observation comes from my writing of side projects in Rust. I’m finding that the majority of my time goes into writing straightforward business logic and refactoring tests, for which Rust doesn’t get in the way. It’s only during certain parts of the project’s lifetime that I’ve had to build the foundations (abstractions, layering, async constructs, etc.) or done large redesigns, and it’s only during those times that I’ve had my fights with Rust. In other words: given a sufficiently large project or team, and irrespective of the language, there will always be a set of core maintainers that design, write and maintain the foundations. This set of people knows the ins and outs of the project and should know the ins and outs of the language and its ecosystem. This set of people is *necessary*. 
But once these foundations are in place, all other contributors can focus on the exciting aspects of building features. Rust’s complexities may still get in the way, but not much more than those of other languages. To conclude, I would like you to consider again that learning a language is not just a matter of learning its syntax. Mastering a programming language requires months of expertise with the language’s idiosyncrasies and its libraries, and one must go through these before making comparisons about long-term maintainability. But yes, Rust could be simpler, and there are efforts to make it so! Finally, a question for you. I haven’t had the fortune (?) of working in a large-scale Rust project yet, so all I’m doing is hypothesizing here based on experiences with other languages in large projects and teams. If you have converted a team into Rust, or if you were brought in to contribute to an existing Rust codebase, would you mind sharing your experience below? :)
true
true
true
Rust is infamous for having a steep learning curve. The borrow checker is the first boss you must defeat, but with a good mental model of how memory works, how objects move, and the rules that the borrow checker enforces, it becomes second nature rather quickly. These rules may sound complicated, but really, they are about understanding the fundamentals of how a computer works. That said… the difficulties don’t stop there. Oh no. As you continue to learn about the language and start dealing with things like concurrency—or, God forbid, Unix signals—things can get tricky very quickly. To make matters worse, mastering idiomatic Rust and the purpose of core traits takes a lot of time. I’ve had to throw my arms up in frustration a few times so far and, while I’ve emerged from those exercises as a better programmer, I have to concede that they were exhausting experiences. And I am certainly not an expert yet. So, yes, there is no denying in saying that Rust is harder than other languages. But… does it matter in practical terms? Betteridge’s law of headlines says that we should conclude the post right here with a “no”—and I think that’s the right answer. But let’s see why.
2024-10-12 00:00:00
2022-05-06 00:00:00
/images/favicons/favicon-1200x1200.png
blog
jmmv.dev
Julio Merino (jmmv.dev)
null
null
8,165,621
https://medium.com/@rianvdm/miles-davis-and-the-nature-of-true-genius-79e4e1fb4e75
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
4,168,504
http://support.apple.com/kb/HT5266?viewlocale=en_US&locale=en_US
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
39,351,615
https://cbh.bearblog.dev/always-sprinting/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
8,351,629
http://www.gizmag.com/quarkson-skyorbiter/33912/
SkyOrbiter UAVs will fly for years at a time and provide global internet access
September
The internet has become a critical means of communication during humanitarian crises and a crucial everyday tool for people around the world. Now, a Portuguese company wants to make sure everyone has access to it. Quarkson plans to use unmanned aerial vehicles (UAVs) to transmit internet access "to every corner of the world."

Quarkson's SkyOrbiter program is similar to Google's Project Loon, which also seeks to deliver internet access to remote places. Where Google plans to float internet-enabled balloons above the earth, however, Quarkson intends to use a fleet of long-range UAVs, much like the Titan Aerospace Solara 50, to deliver connectivity from the sky.

The SkyOrbiter fleet comprises six different low-altitude models and three high-altitude models. The most basic SkyOrbiter is the LA25. It is designed for commercial and government use and is able to provide connectivity to areas where none is available. The LA25 has a wingspan of about 25 m (82 ft), operates at 3,500 m (11,500 ft) and has a range of over 42,000 km (26,000 mi), or up to two weeks. Each of the subsequent low-altitude SkyOrbiters has an increased wingspan and range, right up to the LA75, which has a wingspan of 75 m (246 ft) and a range of over 150,000 km (93,000 mi), or up to seven weeks. Unlike the low-altitude models, the high-altitude UAVs fly at 22,000 m (72,000 ft) and can stay aloft for years as opposed to weeks. The most advanced of the high-altitude models, the HA75, has a wingspan of around 75 m (246 ft) and a range of up to 5,000,000 km (3,000,000 mi), or five years.

The low-altitude SkyOrbiter series will be powered primarily by fossil-fuel-based technology. According to Quarkson, this will provide the best performance in terms of endurance. The high-altitude SkyOrbiters, however, will be powered more like the aforementioned Solara, with a solar array on the wings and body.

Quarkson says that the SkyOrbiters can accommodate different weights and types of payload depending on what data may need to be collected. The UAVs can be used for a variety of purposes in addition to providing internet access, including aerial imaging, security and military applications, environmental monitoring and agriculture. Users can manage their fleet of SkyOrbiters using the Constellation Manager system. Its HC-LOS ground antennas can be used to connect to the UAVs using the company's Q-SATCOM bi-directional data link or its SkyLink wireless communication system.

Quarkson is in the process of fundraising and development for a number of "challenges" that it aims to complete for testing. Its Maiden Flight challenge will see the SkyOrbiter LA25 fly for the first time and provide proof of concept. The challenges will culminate in Pole-to-Pole and Around the World flights.

**Update 23 Sept 2014**: This article has been updated to include information on how the UAVs are expected to be powered.

Source: Quarkson
true
true
true
The internet has become a critical means of communication during humanitarian crises and a crucial everyday tool for people around the world. Now, a Portuguese company wants to make sure everyone has access to it. Quarkson plans to use unmanned aerial vehicles (UAVs) to transmit internet access "to…
2024-10-12 00:00:00
2014-09-22 00:00:00
https://assets.newatlas.com/dims4/default/6cff2f9/2147483647/strip/true/crop/672x353+0+49/resize/1200x630!/quality/90/?url=http%3A%2F%2Fnewatlas-brightspot.s3.amazonaws.com%2Farchive%2Fquarkson-skyorbiter.jpg
article
newatlas.com
New Atlas
null
null
6,099,509
http://www.xe.com/?c=XBT
Global currency conversions & money transfers
null
Leading the world in currency information and global transfers for 30+ years We use the mid-market rate for our Converter. This is for informational purposes only. You won’t receive this rate when sending money. Login to view send rates Send money online At Xe, we make sending money fast, secure, and convenient. With just a few clicks, you can send money to over 220 countries worldwide. Join the thousands who trust us every day for their money transfer needs. Xe for business Whether you need to make cross-border payments or FX risk management solutions, we’ve got you covered. Schedule international transfers and manage foreign exchange risk across 130 currencies in 190+ countries.
true
true
true
Get the best currency exchange rates for international money transfers to 200 countries in 100 foreign currencies. Send and receive money with best forex rates.
2024-10-12 00:00:00
2024-05-14 00:00:00
null
null
xe.com
Xe
null
null
21,516,685
https://medium.com/@wenbinf/hao123-the-static-website-sold-for-10-million-old-china-tech-1-7a68f8293551
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
4,315,081
http://representpa.com
RepresentPA Pennsylvania Apparel Brand: The Symbol of Pennsylvania
null
## What's RepresentPA®? RepresentPA was founded in 2017 in PA to unite and energize Pennsylvania pride by establishing a "Symbol of Pennsylvania"—a signature, recognizable way to show pride in our PA roots. Our mission is to encourage Pennsylvania pride across the Keystone State, while lifting up our communities and making PA a better place to call home. Join us by representing PA! #### Join the RepresentPA Mailing List *Stay in touch with RepresentPA--promotions, new products, and sales news to your email.*
true
true
true
RepresentPA is the symbol of Pennsylvania pride. Join us in showing pride in your Pennsylvania roots with RepresentPA hats, stickers, patches, magnets, and pins!
2024-10-12 00:00:00
2024-01-01 00:00:00
https://cdn.shopify.com/…56544&width=1200
website
representpabrand.com
RepresentPA
null
null
5,597,155
http://www.businessweek.com/articles/2013-04-23/a-fake-ap-tweet-sinks-the-dow-for-an-instant
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,895,213
https://mohamed-abdalla.com/chief-information-security-officer-ciso-responsibilities
CISO responsibilities
Mohamed Abdalla Ibrahim
Let us assume that today is my first working day as an information security officer. I would like to share my daily activity in my new job with you, so you can learn what we are really doing and how we are doing it. I will show you how we handle the following tasks in a real organization:

- Information security policy framework
- Information asset management
- Risk management
- Incident management
- Access control management
- Operations security
- Vulnerability management and network security
- Disaster recovery
- Business continuity planning

Everything will be explained to you using real tools, processes and documentation. You will also learn how to manage solutions like:

- GRC
- Enterprise antivirus solution
- Enterprise DLP solution
- Data classification tool
- SIEM solution

I am also responsible for managing a few information security standards and compliance frameworks:

- ISO 27001
- NIST
- PCI DSS

I will show you how we plan, implement and audit those standards, how we carry out some of those tasks in my company, what tools we are using, the challenges we are facing, and the final deliverables. At the end of each article I will raise a problem that we face at work.

So let's begin. Before we start, we have to know that the CISO's responsibilities are:

- Following standards, compliance and regulations
- Protecting the company's information assets
- Managing risks

The CISO should also understand the company's business very well:

- What the company does, and what service or product it provides
- The company structure, and how many departments it has
- Whether it provides online services or only physical services
- The technical infrastructure, so that security controls can be implemented

He has to have all of this information. We don't start from the technical implementation: first we understand the business and identify the critical business assets and functions, and according to that we decide on the technical controls.

Let us assume that I am working in an insurance company with five departments and a small workgroup network that includes around 140 computers and laptops. In brief, they didn't have security in place, so I have to implement a full information security management system from scratch, and the best way to do that is to implement the ISO 27001 controls in my company. ISO 27001 includes 114 controls; some of them are technical and others are not. We will learn the implementation of all the applicable controls from scratch. If you need the resources, you can email me and I will send them to you.

The ISO 27001 control classifications are intended to:

- Deter: the control reduces the threat, deterring hackers from attacking a given system, for example.
- Avoid: the control involves avoiding risky situations, perhaps ensuring that a known vulnerability is not exposed to the threat.
- Prevent: the control usually reduces the vulnerability (most common security controls act in this way).
- Detect: the control helps identify an event or incident as soon as possible, generally triggering reactive measures.
- React: the control helps minimize the impact of incidents by promptly and effectively reacting or responding to them.
- Recover: the control helps minimize the impact of incidents by aiding the restoration of normality, or at least a fallback service.

The objectives of the controls are primarily to maintain the confidentiality, integrity, and/or availability of information, but other classifications are possible. Furthermore, you may disagree with the particular way we have classified each control; however, we feel this is a pragmatic starting point for discussion. Feel free to modify this spreadsheet as you wish for your own purposes.

One way to use the spreadsheet is to identify and mark any controls that are excluded from your Statement of Applicability, in other words, those you have decided are not appropriate to your circumstances. Then look down the columns to check that you still have a sensible mix of different types of control. You may also use this spreadsheet when deciding how to treat identified risks, choosing a balanced set of controls giving defense in depth.

Written by: Mohamed Abdalla Ibrahim, PMP | CISM | ITIL | CEH | Azure Architect | Azure Security Engineer | IBM Cybersecurity Analyst
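As an appendix to the spreadsheet discussion above, the column check can be sketched in a few lines: given each control's classification(s) and whether it stays in the Statement of Applicability, count how many applicable controls remain per classification type and flag any type left uncovered. The control IDs and classifications below are hypothetical placeholders, not the real Annex A mapping.

```python
from collections import Counter

# Hypothetical sample rows; in practice this would be loaded from the
# classification spreadsheet described above.
controls = [
    {"id": "A.9.1.1",  "types": {"Prevent"},            "applicable": True},
    {"id": "A.12.2.1", "types": {"Prevent", "Detect"},  "applicable": True},
    {"id": "A.12.4.1", "types": {"Detect", "React"},    "applicable": True},
    {"id": "A.17.1.2", "types": {"Recover"},            "applicable": False},
    {"id": "A.16.1.5", "types": {"React", "Recover"},   "applicable": True},
]

def coverage(controls):
    """Count how many applicable controls fall under each classification type."""
    counts = Counter()
    for c in controls:
        if c["applicable"]:
            for t in c["types"]:
                counts[t] += 1
    return counts

counts = coverage(controls)
# Flag classification types with no remaining applicable control.
all_types = {"Deter", "Avoid", "Prevent", "Detect", "React", "Recover"}
gaps = sorted(all_types - set(counts))
print(counts)
print("Uncovered control types:", gaps)
```

With this toy data, excluding A.17.1.2 leaves Deter and Avoid uncovered, which is exactly the kind of imbalance the column check is meant to surface before the Statement of Applicability is finalized.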
true
true
true
Chief Information Security Officer (CISO) responsibilities
2024-10-12 00:00:00
2022-04-03 00:00:00
https://hashnode.com/utility/r?url=https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1648968539232%2Fxw66Yh4_Q.jpeg%3Fw%3D1200%26h%3D630%26fit%3Dcrop%26crop%3Dentropy%26auto%3Dcompress%2Cformat%26format%3Dwebp%26fm%3Dpng
article
mohamed-abdalla.com
Mohamed Abdalla Ibrahim's Blog
null
null
12,409,752
http://www.nytimes.com/2016/09/01/upshot/obamacare-premiums-set-to-rise-even-for-savvy-shoppers.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
27,175,272
https://en.wikipedia.org/wiki/Fulton_surface-to-air_recovery_system
Fulton surface-to-air recovery system - Wikipedia
null
# Fulton surface-to-air recovery system

The **Fulton surface-to-air recovery system** (**STARS**), also known as **Skyhook**, is a system used by the Central Intelligence Agency (CIA), United States Air Force, and United States Navy for retrieving individuals on the ground using aircraft such as the MC-130E Combat Talon I and B-17 Flying Fortress. It involves using an overall-type harness and a self-inflating balloon with an attached lift line. An MC-130E engages the line with its V-shaped yoke and the person is reeled on board. Red flags on the lift line guide the pilot during daylight recoveries; lights on the lift line are used for night recoveries. Recovery kits were designed for one- and two-man retrievals. This system was developed by inventor Robert Edison Fulton, Jr., for the CIA in the early 1950s. It was an evolution from a glider snatch pick-up, a similar system that was used during World War II by American and British forces to retrieve both personnel and downed assault gliders following airborne operations.[1] Snatch pick-up did not use a balloon, but a line stretched between a pair of poles set in the ground on either side of the person or glider to be retrieved. An aircraft, usually a C-47 Skytrain, trailed a grappling hook that engaged the line, which was attached to the intended cargo.

## Development of the recovery system

Experiments with the recovery system began in 1950 by the CIA and Air Force. Using a weather balloon, nylon line, and weights of 10 to 15 pounds (4.5 to 6.8 kg), Fulton made numerous pickup attempts as he sought to develop a reliable procedure. Successful at last, Fulton took photographs and sent them to Admiral Luis de Florez, who had become the director of technical research at the CIA. Believing that the program could best be handled by the military, de Florez put Fulton in touch with the Office of Naval Research (ONR), where he obtained a development contract from ONR's Air Programs Division.
Over the next few years, Fulton refined the air and ground equipment for the pickup system. Based at El Centro, California, he conducted numerous flights over the Colorado Desert using a Navy P-2V Neptune. He gradually increased the weight of the pickup until the line began to break. A braided nylon line with a test strength of 4,000 pounds (1,800 kg) solved the problem. A major problem was the design of the locking device, or sky anchor, that secured the line to the aircraft. Fulton considered the solution of this issue the most demanding part of the entire developmental process. Further tests were conducted at Eglin Air Force Base, Florida, from 1 August 1959, using RB-69A, 54-4307, a CIA P2V-7U, according to an agency document.[2] After experiments with instrumented dummies, Fulton continued to experiment with live pigs, as pigs have a nervous system close to humans. Lifted off the ground, the pig began to spin as it flew through the air at 125 miles per hour (200 km/h). It arrived on board uninjured, but in a disoriented state. When it recovered, it attacked the crew.[3] By 1958, the Fulton aerial retrieval system, or "Skyhook", was finished. The ground system could be dropped from an aircraft and contained the necessary equipment for a pickup, including a harness, for cargo or a person, attached to 500 feet (150 m) of high-strength, braided nylon line and a dirigible-shaped balloon inflated by a helium bottle. The pickup aircraft was equipped with two tubular steel "horns", 30 feet (9 m) long and spread at a 70° angle from its nose. The aircraft flew into the line, aiming at a bright mylar marker placed at the 425 foot (130 m) level. As the line was caught between the forks on the nose of the aircraft, the balloon was released and a spring-loaded trigger mechanism (sky anchor) secured the line to the aircraft.
After the initial pickup, the line was snared by the pickup crew using a J-hook and attached to a powered winch and the person or cargo pulled on board. To prevent the pickup line from interfering with the aircraft's propellers in the case of an unsuccessful catch, the aircraft had deflector cables strung from the nose to the wingtips. Later the US Navy tested the Fulton system fitted to modified S-2 Tracker carrier-based antisubmarine patrol aircraft for use in rescuing downed pilots. It is unknown whether a Fulton equipped S-2 was ever used on a combat mission.

## First human pickups

The CIA had secretly trained Special Activities Division paramilitary officers to use a predecessor system for human pickups as early as 1952. The first human recovery mission authorized for operational use of this "all American system" took place in Manchuria on 29 November 1952. CIA C-47 pilots Norman Schwartz and Robert Snoddy were trained in the aerial pickup technique towards the end of 1952. CIA paramilitary officers John T. Downey and Richard G. Fecteau, themselves hurriedly trained in the procedure during the week of 24 November, were to recover a courier who was in contact with anti-communist sympathizers in the area. The mission failed when Chinese forces downed the aircraft with small arms fire, capturing survivors Downey and Fecteau. The British allegedly also used the American system for personnel.[3] The first human pickup using Fulton's STARS took place on 12 August 1958, when Staff Sergeant Levi W. Woods of the U.S. Marine Corps was winched on board the Neptune.[4] Because of the geometry involved, the person being picked up experienced less of a shock than during a parachute opening. After the initial contact, which was described by one individual as similar to "a kick in the pants",[5] the person rose vertically at a slow rate to about 100 ft (30 m), then began to streamline behind the aircraft.
Extension of arms and legs prevented spinning as the individual was winched on board. The process took about six minutes. In August 1960, Capt. Edward A. Rodgers, commander of the Naval Air Development Unit, flew a Skyhook-equipped P2V to Point Barrow, Alaska, to conduct pickup tests under the direction of Dr. Max Brewer, head of the Navy's Arctic Research Laboratory. With Fulton on board to monitor the equipment, the Neptune picked up mail from Floating Ice Island T-3, also known as Fletcher's Ice Island, retrieved artifacts, including mastodon tusks, from an archaeological party on the tundra, and secured geological samples from Peters Lake Camp. The high point of the trials came when the P2V dropped a rescue package near the icebreaker USS *Burton Island*. Retrieved by a ship's boat, the package was brought on deck, the balloon inflated, and the pickup accomplished. In July 1967, two MACV-SOG operators were retrieved by an MC-130E using the Fulton Skyhook system while operating in Vietnam.[6]

## Project Coldfeet

The first operational use of Skyhook was Project Coldfeet, an examination of the Soviet drift station NP-8, abandoned on 19 March 1962. Two agents parachuted to station NP 8 on 28 May 1962. After 72 hours at the site, on 1 June 1962, a pick-up was made of the Soviet equipment and both men. The mission yielded information on the Soviet Union's Arctic research activities, including evidence of advanced research on acoustical systems to detect under-ice submarines and efforts to develop Arctic anti-submarine warfare techniques.[3]

## Later use

The Fulton system was used from 1965 to 1996 on several variants of the C-130 Hercules including the MC-130s and HC-130s. It was also used on the C-123 Provider.[7] Despite the apparent high-risk nature of the system, only one fatal accident occurred in 17 years of use.
On 26 April 1982, SFC Clifford Wilson Strickland was picked up by a Lockheed MC-130 Combat Talon of the 7th Special Operations Squadron at CFB Lahr, Germany, during the Flintlock 82 exercise, using the Fulton STARS recovery system, but fell to his death due to a failed bushing at the top of the left yoke pivot bolt.[8] The increased availability of long-range helicopters such as the MH-53 Pave Low, HH-60 Pave Hawk, and MH-47 Chinook, and the V-22 Osprey tilt-rotor aircraft, all with aerial refueling capability, caused this system to be used less often. In September 1996, the Air Force Special Operations Command ceased maintaining the capability to deploy this system.

## In popular culture

The Skyhook has been featured in a number of films and video games. It was seen in the 1965 *James Bond* film *Thunderball*, where James Bond and his companion Domino Derval are rescued at sea by a modified Boeing B-17 equipped with the Fulton system at the end of the movie.[9] In 1968, it was used in the John Wayne movie *The Green Berets* to spirit a VC officer to South Vietnam.[9] The Skyhook system was also featured in the 2008 film *The Dark Knight*. First mentioned by Lucius Fox as a means of re-boarding an aircraft without its landing,[10] the system is attached to a Lockheed L-100 Hercules.[11][12][13] In video games, a heavily fictionalized interpretation of the Skyhook forms a core gameplay mechanic in *Metal Gear Solid: Peace Walker* and *Metal Gear Solid V: The Phantom Pain*.[14] More realistic depictions can be found in *PUBG: Battlegrounds*,[15] *Call of Duty: Modern Warfare*, and its sequel, *Call of Duty: Modern Warfare III*.

## See also

- Military Assistance Command, Vietnam – Studies and Observations Group
- Mid-air retrieval
- Special Patrol Insertion/Extraction
- United States Air Force § Personnel Recovery
- Glider snatch pick-up

## References

1. *Video: B-29s Rule Jap Skies, 1944/12/18 (1944)*. Universal Newsreel. 1944. Retrieved 20 February 2012.
2. "Phase VI Employment and Suitability Tests of the RB-69A Aircraft" (PDF). *CIA Reading Room*. Retrieved 17 February 2024. Approved for release 21 April 2010. Document number CIA-RDP61-00763A000100110122-7.
3. "Robert Fulton's Skyhook and Operation Coldfeet". *Center for the Study of Intelligence*. Central Intelligence Agency. Retrieved 1 January 2022.
4. "500-Foot High Jump". *Popular Mechanics*, April 1960, p. 111.
5. Leary, William M. "Robert Fulton's Skyhook and Operation Coldfeet" (PDF). *CIA*. p. 4.
6. *MC-130E Recovers MACV-SOG Using Fulton Recovery System (Skyhook)*. Retrieved 19 September 2024 – via www.youtube.com.
7. Friddell, Phillip (18 December 2010). "Replica in Scale: 'Tis the Season---It's Our Christmas Special Edition Which Contains Some Tasty Transports, Some Colorful Jet Fighters, Odd Neptunes, Jugs, A Bird That Barely Flew, and a Blast From the Past". *Replica in Scale*.
8. "Friday FOIA Fun: The Fulton Skyhook". *Entropic Memes*.
9. "Fulton Recovery System". *www.specialforceshistory.info*.
10. Nolan, Christopher (2008). *The Dark Knight* (Motion Picture). 28 minutes in. ASIN B001GZ6QC4. OCLC 259231584. Bruce Wayne: "And what about getting back into the plane?" Lucius Fox: "I'd recommend a good travel agent." Bruce Wayne: "Without it landing." Lucius Fox: "Now that's more like it. The CIA had a program back in the '60s for getting their people out of hotspots called Skyhook. We could look into that."
11. Nolan, Christopher (2008). *The Dark Knight* (Motion Picture). 37 minutes in. ASIN B001GZ6QC4. OCLC 259231584.
12. Pulver, Andrew. "Top 10 films set in Hong Kong". *The Guardian*. Retrieved 27 October 2015.
13. Hall, Peter. "Did You Know the Plane Extraction Scene from 'The Dark Knight' Used Real CIA Technology?". *movies.com*. Archived from the original on 6 August 2020. Retrieved 27 October 2015.
14. "The True Story of 'Metal Gear Solid's' Fulton Recovery System". *War Is Boring*. 4 September 2015. Retrieved 12 March 2023.
15. "PUBG Update 11.1 Out Now on All Platforms". *KRAFTON Press Room*. Retrieved 17 July 2024.

## External links

- Fact Sheet: Fulton Surface-to-Air Recovery System at the Wayback Machine (archived 1 February 2008)
- *GlobalSecurity.org* article
- High-resolution photo of HC-130 fitted for system, on *www.airliners.net*
- Two CIA Prisoners in China, 1952–73 — Central Intelligence Agency
- Fulton Skyhook System Live Recovery (1962), footage of live pick-ups conducted by the United States Army, Texas Archive of the Moving Image.
- Lockheed ER-4112 Fulton Skyhook Aerial Recovery System manual
true
true
true
null
2024-10-12 00:00:00
2006-02-17 00:00:00
https://upload.wikimedia…lton_system1.jpg
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
3,596,739
http://venturebeat.com/2012/02/15/how-one-dev-used-90-of-his-windows-phone-code-to-port-a-game-to-windows-8/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
37,984,786
https://arxiv.org/abs/2306.17844
The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks
Zhong; Ziqian; Liu; Ziming; Tegmark; Max; Andreas; Jacob
# Computer Science > Machine Learning

[Submitted on 30 Jun 2023 (v1), last revised 21 Nov 2023 (this version, v2)]

# The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks

Abstract: Do neural networks, trained on well-understood algorithmic tasks, reliably rediscover known algorithms for solving those tasks? Several recent studies, on tasks ranging from group arithmetic to in-context linear regression, have suggested that the answer is yes. Using modular addition as a prototypical problem, we show that algorithm discovery in neural networks is sometimes more complex. Small changes to model hyperparameters and initializations can induce the discovery of qualitatively different algorithms from a fixed training set, and even parallel implementations of multiple such algorithms. Some networks trained to perform modular addition implement a familiar Clock algorithm; others implement a previously undescribed, less intuitive, but comprehensible procedure which we term the Pizza algorithm, or a variety of even more complex procedures. Our results show that even simple learning problems can admit a surprising diversity of solutions, motivating the development of new tools for characterizing the behavior of neural networks across their algorithmic phase space.
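The "Clock" construction the abstract refers to can be checked with a small illustrative sketch (a reimplementation of the underlying arithmetic, not the paper's code): represent each residue a mod p as a rotation by angle 2πa/p on the unit circle; multiplying two phases adds their angles, so reading the angle back off performs addition mod p.

```python
import cmath
import math

def clock_add(a: int, b: int, p: int) -> int:
    """Add a and b modulo p via the 'Clock' representation:
    map each residue to a point on the unit circle, multiply the
    phases (which adds the angles), then read the angle back off."""
    phase_a = cmath.exp(2j * math.pi * a / p)
    phase_b = cmath.exp(2j * math.pi * b / p)
    combined = phase_a * phase_b          # angle is 2*pi*(a+b)/p
    angle = cmath.phase(combined) % (2 * math.pi)
    return round(angle * p / (2 * math.pi)) % p

# Exhaustive check against ordinary modular addition for a small modulus.
p = 59
assert all(clock_add(a, b, p) == (a + b) % p for a in range(p) for b in range(p))
```

A trained network implementing the Clock algorithm learns an analogous embedding and phase composition in its weights; the point of the sketch is only that rotation composition and modular addition are the same operation.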
## Submission history

From: Ziqian Zhong

- [v1] Fri, 30 Jun 2023 17:59:13 UTC (9,332 KB)
- [v2] Tue, 21 Nov 2023 17:08:34 UTC (10,185 KB)
true
true
true
Do neural networks, trained on well-understood algorithmic tasks, reliably rediscover known algorithms for solving those tasks? Several recent studies, on tasks ranging from group arithmetic to in-context linear regression, have suggested that the answer is yes. Using modular addition as a prototypical problem, we show that algorithm discovery in neural networks is sometimes more complex. Small changes to model hyperparameters and initializations can induce the discovery of qualitatively different algorithms from a fixed training set, and even parallel implementations of multiple such algorithms. Some networks trained to perform modular addition implement a familiar Clock algorithm; others implement a previously undescribed, less intuitive, but comprehensible procedure which we term the Pizza algorithm, or a variety of even more complex procedures. Our results show that even simple learning problems can admit a surprising diversity of solutions, motivating the development of new tools for characterizing the behavior of neural networks across their algorithmic phase space.
2024-10-12 00:00:00
2023-06-30 00:00:00
/static/browse/0.3.4/images/arxiv-logo-fb.png
website
arxiv.org
arXiv.org
null
null
25,822,377
https://medium.com/bumble-tech/bpf-and-go-modern-forms-of-introspection-in-linux-6b9802682223
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,475,378
https://montera.co/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
14,460,136
https://hackernoon.com/node-javascript-react-redux-isomorphic-boilerplate-tutorial-example-adding-new-page-component-router-match-f0347ad42c67
[react][redux] Isomorphic boilerplate: Adding new page | HackerNoon
Peterchang
It is an example of adding a static new page (no action-dispatch) based on the react-redux isomorphic boilerplate. After wiring up the configurations for server rendering, client rendering, and the router, only 3 files need to be modified (or added) to create a very simple static page.

A **presentational component** is a component concerned only with how things look, with no dependencies on the rest of the app.

`./src/components/NewPage.js`

```
import React, { PropTypes } from 'react'

const NewPage = ({ onClick, message }) => {
  return (
    <div>
      <h1>NewPage: { message }</h1>
    </div>
  )
}

NewPage.propTypes = {
  message: PropTypes.string.isRequired
}

export default NewPage
```

`./src/containers/NewPage.js`

```
import { connect } from 'react-redux'
import NewPage from '../components/NewPage'

const mapStateToProps = (state, ownProps) => {
  return {
    message: 'well behave !!!'
  }
}

const mapDispatchToProps = (dispatch, ownProps) => {
  return {}
}

const newPage = connect(
  mapStateToProps,
  mapDispatchToProps
)(NewPage)

// initState runs before server rendering, keeps consistency as a thunk middleware, and returns a promise
newPage.initState = (store, req, res) => {
  return (dispatch, getState) => {
    return new Promise((resolve, reject) => {
      resolve()
    })
  }
}

export default newPage
```

`matchConfig.js`

The new page is registered with the new URL `/preload` in the `path` field:

```
...
{
  path: '/preload',
  component: PreloadHelloWorld,
  initState: PreloadHelloWorld.initState
},
...
```

Open a browser with the url `localhost:3000/newpage`

Git repository

```
$ git clone https://github.com/wahengchang/react-redux-boilerplate/tree/addNewPage
```

https://github.com/wahengchang/react-redux-boilerplate/tree/addNewPage
true
true
true
It is an example of adding a static new page (no action-dispatch) based on the react-redux isomorphic boilerplate; after wiring up the configurations for server rendering, client rendering, and the router, only 3 files need to be modified (or added) to create a very simple static page.
2024-10-12 00:00:00
2017-05-19 00:00:00
https://hackernoon.imgix…8paYs9I-vYQ.jpeg
article
hackernoon.com
Hackernoon
null
null
27,480,488
https://daily.jstor.org/library-fires-have-always-been-tragedies-just-ask-galen/
Library Fires Have Always Been Tragedies. Just Ask Galen. - JSTOR Daily
Caroline Wazer
Though less famous than the purported burning of the Library of Alexandria, the great fire that tore through central Rome in 192 CE resulted in a similarly profound loss for ancient Greek and Roman scholarship. The true cost of this fire became clear to historians in 2005, when a text believed lost for centuries was unexpectedly rediscovered in a Greek monastery. Titled “On Consolation from Grief” and written in the aftermath of the blaze by Galen—court physician to several Roman emperors—the work does more than chronicle an unfortunate accident. As classicist Matthew C. Nicholls shows, it offers a rare and poignant glimpse into the lost world of ancient public libraries. The three lost libraries Galen describes, all located in close proximity to each other on Rome’s Palatine hill, shared some important characteristics. In a world without printing presses or photography, a crucial function of imperial public libraries was to safeguard authoritative versions of important texts—ideally the original manuscripts—that scholars like Galen could consult and copy with confidence. Some texts were stored in special collections assembled by a notable individual, while others appear to have been shelved by subject. Galen boasts of finding inconsistencies and errors in the catalogues used as finding aids, suggesting that patrons were free to browse shelves on their own, without a librarian’s supervision. Another feature shared by the Palatine libraries is that they did not permit their holdings to leave the premises (although, as Galen tells us, one library had a certain guard who could be bribed to turn a blind eye to patrons pilfering texts). Unable to check books out, scholars would have had to either conduct their research at the library or produce their own copies of texts to take home with them. Either activity would have required plenty of workspace, good light, and long hours. 
The need for books to remain on-site also had effects outside the library walls: for convenience, many scholars rented nearby storage space for their personal books and research materials. In consequence, the neighborhood's streets were full of scholars debating and booksellers hawking their wares.

Among the losses of the fire of 192 CE, the most obvious were the rare texts that, as Nicholls puts it, were "permanently lost to the world." Texts that survived in the form of copies stored elsewhere were also affected, as scribal errors could no longer be corrected against an authoritative version. To make things worse, the fire also consumed the nearby storage facilities rented by scholars. For Galen that meant losing his personal collection of books in addition to his notes for in-progress works. This was an enormous headache, to be sure, but Galen seems to have handled the loss relatively well: one of his neighbors at the storage facility, a grammarian, "died after he lost books in the fire, consumed with despair and pain," he tells us, "and everyone wandered for a long time dressed with black cloaks, thin and pale, like people in mourning."
true
true
true
When Rome burned in 192 CE, the city's vibrant community of scholars was devastated. The physician Galen described the scale of the loss.
2024-10-12 00:00:00
2021-06-07 00:00:00
https://daily.jstor.org/…len_1050x700.jpg
article
jstor.org
JSTOR Daily
null
null
18,659,908
https://www.nytimes.com/2018/12/03/science/space-stars-photons-light.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,458,695
http://www.itworld.com/it-management/371685/learning-st-louis-entrepreneurs
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
14,349,557
https://ma.ttias.be/ways-wannacry-ransomware-much-worse/
Ways in which the WannaCry ransomware could have been much worse
Mattias Geniar
If you’re in tech, you will have heard about the WannaCry/WannaCrypt ransomware doing the rounds. The *infection* started on Friday May 12th 2017 by exploiting MS17-010, a Windows SMB file-sharing vulnerability. The virus exploited that known vulnerability, installed a cryptolocker and extorted the owner of the Windows machine to pay ransom to get the files decrypted. As far as *worms* go, this one went viral at an unprecedented scale. But there are some design decisions in this cryptolocker that prevent it from being *much* worse. This post is a thought exercise; the next vulnerability will probably implement one of these methods. Make sure you’re prepared. # Time based encryption This WannaCry ransomware found the security vulnerability, installed the cryptolocker and immediately started encrypting the files. Imagine the following scenario: - Day 1: worm goes round and infects vulnerable SMB hosts, installs backdoor, keeps quiet, infects other machines - Day 14: worm activates itself, starts encrypting files With WannaCrypt, it took a few hours to reach world-scale infections, alerting everyone and their grandmother that something big was going on. Mainstream media picked up on it. Train stations showed cryptolocker screens. Everyone started patching. What if the worm gets a few days’ head start? By keeping quiet, the attacker risks getting caught, but in many cases this can be avoided by excluding known IPv4 networks for banks or government organizations. How many small businesses or large organizations do you think would notice a sudden extra running .exe in the background? Not enough to trigger world-wide coverage, I bet. # Self-destructing files A variation on the scenario above: - Day 1: worm goes round, exploits SMB vulnerability, encrypts each file, but still allows files to be opened (1) - Day 30: worm activates itself, removes decryption key for file access and prompts for payment How are your back-ups at that point?
All files on the machine have some kind of hidden time bomb in them. Every version of that file you have in back-up is affected. The longer they can keep that hidden, the bigger the damage. More variations of this exist, with Excel or VBA macros etc., and all boil down to: modify the file, render it unusable unless proper identification is shown. (1) This should be possible with shortcuts to the files, first opening some kind of wrapper-script to decrypt the files before they launch. The decryption key is stored in memory and re-requested from its Command & Control servers whenever the machine reboots. # Extortion with your friends The current scheme is: *your* files get encrypted, *you* can pay to get your files back. What if it’s not your own files you’re responsible for? What if they are the files of your colleagues, family or friends? What if you had to pay $300 to recover the files of someone you know? Peer pressure works, especially if the *blame* angle is played. It’s *your* fault someone *you know* got infected. Do you feel responsible at that point? Would that make you pay? From a technical POV, it’s tricky but not impossible to identify known associates of a victim. This could only happen at a smaller scale, but might yield bigger rewards. # Cryptolocker + Windows Update DDoS? Roughly 200,000 affected Windows PCs have been caught online. There are probably a lot more that haven’t made it to the online reports yet. Those are quite a few PCs to have control over, as an attacker. The media is now jumping on the news, urging everyone to update. What if the 200k infected machines were to launch an effective DDoS against the Windows Update servers? With everyone trying to update, the pool of possible targets is shrinking every hour. If you could effectively take down the means with which users can protect themselves, you could create bigger chaos and a bigger *market* to infect.
The next cryptolocker isn’t going to be “just” a cryptolocker; in all likelihood it’ll combine its encryption capabilities with even more damaging means. # Stay safe How to prevent any of these? - Enable auto-updates on all your systems (!!) - Have frequent back-ups, and store them long enough Want more details? Check out my earlier post: Staying Safe Online – A short guide for non-technical people.
true
true
true
If you’re in tech, you will have heard about the WannaCry/WannaCrypt ransomware doing the rounds. The infection started on Friday May 12th 2017 by exploiting MS17-010, a Windows Samba File Sharing vulnerability.
2024-10-12 00:00:00
2017-05-15 00:00:00
https://ma.ttias.be//wp-…_cover_image.png
article
ttias.be
https://www.facebook.com/www.ma.ttias.be/
null
null
4,442,710
http://www.slate.com/blogs/moneybox/2012/08/25/apple_v_samsung_verdict_creates_new_pinch_to_zoon_monopoly_that_s_bad_for_consumers.html
Apple's New Pinch-To-Zoom Monopoly
Matthew Yglesias
Since it came out on a Friday afternoon, I was only able to comment briefly on the Apple v Samsung verdict yesterday, but I was able to read some pushback on Twitter against my view that Apple’s win is a loss for consumers. To look specifically at what I’m unhappy about, the jury upheld several Apple patents which amount to saying that if there are now-standard elements of touchscreen user interfaces that Apple did first in iOS, now *only* iOS can use them. Another aspect of the case relates to the allegation that Samsung products have been violating Apple’s “trade dress” by basically looking too much like iPhones. That I’m less concerned about. What troubles me is the verdict upholding the US Patent and Trademark Office’s decision to say that, for example, Apple should have a legal monopoly on the pinch-to-zoom feature, which I think is a great example of how the modern-day patent system has gone awry. Think about cars and you’ll see that, of course, lots of different companies make cars. But they all have some very similar user interface elements. In particular, there’s a steering wheel that you turn left and right to turn the wheels, and there’s a gas pedal and brakes that you hit with your right foot. Imagine if the way the automobile industry worked was that each car maker had to devise a unique user interface. So maybe GM cars would have a steering wheel, but Toyotas would have a joystick, and in a Honda you would steer with your feet and use your hands to control the gas and brakes. In some sense there’d be “more innovation” in this world, since there’d be this kind of arbitrary proliferation of user interfaces. But in a more important sense there’d be less competition, since there are only so many viable ways for a person to interact with a car and a lot of those ways suck. You’d have few new entrants, and those entrants would be hobbled from the get-go.
Meanwhile, UI proliferation would make it much harder for people to switch car brands or launch car rental companies since with each brand reinventing the steering wheel you’d constantly need to be learning to drive again.
true
true
true
Since it came out on a Friday afternoon, I was only able to comment briefly on the Apple v Samsung verdict yesterday but I was able to read some...
2024-10-12 00:00:00
2012-08-25 00:00:00
https://compote.slate.co…1.jpg?width=1560
article
slate.com
Slate
null
null
17,112,529
https://www.reuters.com/article/us-amazon-com-whole-foods/amazon-cuts-whole-foods-prices-for-prime-members-in-new-grocery-showdown-idUSKCN1IH0BM
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,190,289
http://arstechnica.com/science/2016/07/welcome-to-the-age-of-ancient-dna-sequencing/
Welcome to the age of ancient DNA sequencing
Annalee Newitz
The greatest technological revolution in human history arguably happened about 12,000 years ago, when humans first stopped living as hunter gatherers and became farmers. This so-called Neolithic Revolution transformed human culture, our genomes, and our ecosystems. But the origins of farming have remained a mystery. Was there one eureka moment, when an early Neolithic person realized the seeds they scattered in fall had sprouted into grains two seasons later? Or, more intriguingly, did several groups of people start farming independently? Two new studies published this month in *Science* and *Nature* magazines use DNA analysis of ancient human bones to conclude that farming arose in multiple regions simultaneously. The *Science* study focused on four farmers who lived between 9,000 and 10,000 years ago in the mountainous Zagros region of Iran. The *Nature* study analyzed 44 individuals (farmers as well as hunter-gatherers) from Armenia, Turkey, Israel, Jordan, and Iran who lived between 14,000 and 3,500 years ago. By sequencing parts of these ancient people's DNA, researchers could determine their likely ancestry as well as what populations are descended from them today. The researchers conclude that there are at least two groups of ancient humans who discovered farming separately in the Middle East and then exported the Neolithic revolution across large parts of the continent. ## The secrets of ancient DNA Over the past decade, modern DNA sequencing techniques have allowed scientists to recover strands of genetic material from decayed bones that have been infused with microbes over thousands of years. Now, those techniques are widely accessible and highly refined. It starts with how researchers pick their bones. If possible, they'll extract DNA from the petrous bone in the inner ear, a goldmine for genetic material that can yield roughly 100 times more ancient DNA than other parts of the skeleton. 
Then researchers use a process called in-solution hybridization, which uses special probes made from DNA or RNA that attach to the desired ancient human DNA, fishing it out of a soup of other genetic material from other organisms that accumulated in the decomposing bone. Techniques like these are making it easier than ever for us to sequence ancient DNA and reconstruct the human past.
true
true
true
New tech gives us a sharper view of how people lived 12,000 years ago.
2024-10-12 00:00:00
2016-07-29 00:00:00
https://cdn.arstechnica.…AM_farm_feat.jpg
article
arstechnica.com
Ars Technica
null
null
5,170,108
http://community.practutor.com/puzzle/222-daunting-division
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,801,364
http://www.sitepoint.com/php-fights-hhvm-zephir-phpng/
PHP Fights HHVM and Zephir with PHPNG — SitePoint
Bruno Skvorc
*This article was sponsored by NewRelic. Thank you for supporting the sponsors who make SitePoint possible!* *A previous version of this article incorrectly stated that PHPNG is JIT. This is not the case, and the necessary amendments have been made. See bottom of article for more info.* Chaos in the old world! First HipHop, years ago, and no one bats an eye. Then suddenly, HHVM happens, introduces Hack, and all hell breaks loose – Facebook made a new PHP and broke/fixed everything (depending on who you ask). Furthermore, Zephir spawns and threatens with C-level compilation of all your PHP code, with full support for current PHP extensions (while Zephir is not intended to replace C or PHP, it does let you write near-PHP code and compile it to C, which lets you easily rewrite all your PHP apps to a format that can be closed-source and compiled for speed and security). It’s mushroom growth time for alternative PHP runtimes, and HippyVM appears as well. Amid the sea of changes, another splash was heard: PHPNG. As introduced by Manuel Lemos, PHPNG is a new branch of PHP coming to a yet undetermined future version of PHP. ## Wait, what? This somewhat cheesily named (NG = new generation) and clumsily presented version of PHP is the core team’s attempt to optimize PHP drastically and allow JIT compilers in the future to push it even further. PHPNG is **not** a JIT, but an upgrade that allows the construction of a good JIT compiler later on. The PHPNG branch on its own does not include any JIT features. PHPNG was presented by Dmitry Stogov in an internals newsgroup thread. Dmitry is responsible for performance and optimization at Zend, and mostly deals with the Zend Engine. The NG upgrade focuses on rewriting core parts of the Zend Engine to enable better memory allocation on PHP’s data types.
As quoted from Reddit: “NG exists because the experiments conducted by Zend in introducing a JIT failed in the real world because of the way the engine is currently designed, mostly because we allocate everything all the time. The NG patch changes the norm, so that we no longer by default allocate zvals, this increases performance and allowed for much tidier API’s.” As with any “Make PHP better” attempt, this one has its pros and cons. ## Pros ### Speed! Faster execution means faster resource allocation, means faster request processing, means bigger request throughput. Initial results are promising (1, 2). The performance still needs to be benchmarked against other alternative solutions, but 10-30% is nothing to scoff at. ### Extensions! Since this upgrade is being done on the official Zend Engine, not an alternative runtime, it pretty much guarantees compatibility with current extensions. One of the biggest reasons for people hesitating about a transfer to HHVM is the unavailability of essential extensions they’re used to (Phalcon, in my case). Personally, a faster engine for PHP that supports Phalcon would make me care significantly less about the upgrades Hack offers today. So it guarantees extensions compatibility… wait. Does it? Uh oh. ## Cons ### Extensions! Too good to be true. “Not all extensions are supported, some tests are failing, and we also have more ideas for additional improvement.” In all fairness, NG is young. Far younger than anything we’ve ever really dealt with in the PHP world, and far more of a serious update – so it goes without saying that some compatibility issues are guaranteed. But I agree with Manuel here when he says this might be the pain point for most shared hosting providers when the time to upgrade comes. Even though I’m quite vocal against shared hosting providers, I fully understand the problem this might pose.
We’ve had a similar mess when we tried to make providers “Go PHP5”, and quite recently once more when they needed talking into more up to date versions of PHP, so getting them to make a big shift that has the potential to introduce BC breaks will be a daunting task. This fear of change will cement the use of old PHP versions, in turn breeding more horribly unqualified PHP developers working on outdated code, completely oblivious to best practices and vulnerabilities. In short, we’re due for a replay of history. This might sound doomsday-ish, as some have pointed out, but being deeply involved in all circles of PHP and exposed to the lowest quality ones on a daily basis through a full inbox, I see where we are now and where we’re going. All is not black, however – solutions such as Heroku and DigitalOcean will let people run the most up to date and custom versions of PHP possible for prices as low as (or lower) than that of shared hosting providers. My sincerest hope is that the core team will manage to iron the new Zend Engine out to such a degree that it retains backwards compatibility with ALL extensions, but giving warning pings on compilation to all extension developers who fail to adhere to NG’s regulations and best practices. ### Internal Slowness The core dev group is infamous for being slow to adapt to change. Modern features that existed for years in other languages were voted against in the past, only to be implemented years later. Whether this is because the core dev team is simply without vision, like Anthony’s and Phil’s posts say, or because it’s too small and underfunded to make any major changes at a rapid pace is irrelevant – the internal slowness means we might never see NG out in the open and out of “alpha” status, much like the case was with the mythical PHP6. This brings us to the last point. 
### Late to the Party… Again Due to the inherent slowness often witnessed in the PHP core development group, by the time NG is implemented (if ever) all it will offer is a performance upgrade. By then, Hack with HHVM which is leaps and bounds above the standard PHP already will offer so many additional features, the race will be rigged and PHP won’t stand a chance. Type hinting, available today in both Hack and Zephir, will have grown roots in those implementations. Multithreading, compilation, standalone web server – all features available today in alternative solutions, and all of them almost production ready. While the core dev group is working on some of those, and PHP will probably have IIS support way before HHVM (which is, apparently, important to some people), I personally still believe this isn’t nearly enough rapid progress from the official side of PHP. Even if the core group does decide to vote “yes” on all these exotic features for which issues and demands exist, it’ll take them far too long to implement – and they’ll be late to the party by default, unless a paradigm shift is introduced and their entire way of working is turned around. Moving the source to GitHub was a good move, but it only scratched the surface. That said, Rasmus himself supposedly said HHVM becoming PHP’s core engine in a few years isn’t that much of a Sci-Fi scenario. ## Conclusion Facebook-related ownership aside (which carries with itself plenty of negative connotations on its own), HHVM pushed the devs in the right direction by showing how such upgrades can be done. This drives innovation and forces those who have been comfortable in their throne for too long to get up, stretch their legs and see if they can still run. Facebook’s aggressive approach forced the PHP world to do a double-take and wonder about what’s going on, and soon enough it caught on. Competition is awesome. Wherever this takes us next, I’m optimistic about it. 
*Article update May 28th, 2014* *After an email exchange with Phil Sturgeon, and after reading the official statement, I have edited parts of the above text. In short, I classified PHPNG as a JIT, when it is clearly not that, but a mere performance upgrade that will allow the core group to develop a proper JIT compiler later on. * ## Frequently Asked Questions about PHP, HHVM, Zephir, and PHPNG ### What are the key differences between PHP and Zephir? PHP and Zephir are both scripting languages used for web development. PHP is a widely-used open-source language, while Zephir is a high-level language that allows developers to write extensions for PHP. Zephir offers a statically typed syntax, which can help prevent bugs and errors that might occur in PHP. However, PHP has a larger community and more resources available, which can be beneficial for developers. ### How does HHVM compare to PHPNG? HHVM (HipHop Virtual Machine) and PHPNG (PHP Next Generation) are both engines that execute PHP code. HHVM was developed by Facebook and uses a just-in-time (JIT) compilation approach to achieve superior performance. On the other hand, PHPNG is an internal project of PHP that aims to improve the performance of PHP applications. It does this by changing the way PHP internally represents values and objects, leading to significant memory usage improvements. ### Is Zephir still being maintained? As of recent updates, Zephir is no longer being actively maintained. This means that while the language is still usable, it may not receive updates or fixes for any potential issues that may arise. ### What are the advantages of using Zephir? Zephir provides a number of advantages for developers. It offers a statically typed syntax, which can help prevent bugs and errors. It also allows developers to write extensions for PHP, providing a way to increase the performance of PHP applications. ### Why was Zephir created? 
Zephir was created to provide a high-level language that allows developers to write extensions for PHP. The goal was to increase the performance of PHP applications by allowing developers to write critical code parts in a language that is easier to optimize and manage. ### How does PHPNG improve the performance of PHP applications? PHPNG improves the performance of PHP applications by changing the way PHP internally represents values and objects. This leads to significant memory usage improvements and can result in faster execution times for PHP applications. ### What is the future of PHP with the advent of HHVM and Zephir? Despite the advent of HHVM and Zephir, PHP continues to be a widely-used language for web development. While HHVM and Zephir offer performance improvements, PHP has a large community and a wealth of resources available. The future of PHP looks promising, with ongoing efforts to improve its performance and capabilities. ### Can I use Zephir to write PHP extensions? Yes, one of the main advantages of Zephir is that it allows developers to write extensions for PHP. This can be a way to increase the performance of PHP applications. ### What is the difference between a scripting language and a high-level language? A scripting language is a type of programming language that is used to automate tasks that would otherwise need to be executed step-by-step by a human operator. A high-level language, on the other hand, is a programming language with strong abstraction from the details of the computer, making it easier to read and write. ### How does the just-in-time (JIT) compilation approach of HHVM improve performance? The just-in-time (JIT) compilation approach of HHVM improves performance by compiling the bytecode into machine code just before it is executed. This allows for optimizations that can significantly improve the execution speed of PHP applications. 
Bruno is a blockchain developer and technical educator at the Web3 Foundation, the foundation that's building the next generation of the free people's internet.
true
true
true
What's PHPNG? Who's leading it and how does it stack up against HHVM and Zephir?
2024-10-12 00:00:00
2014-05-25 00:00:00
null
article
sitepoint.com
PHP Fights HHVM and Zephir with PHPNG — SitePoint
null
null
5,051,626
http://howardlindzon.com/hackers-or-the-nra-which-group-protects-us-more-from-the-government-and-crime-of-the-century-the-album/
Howie Town
Howard Lindzon
I am a trend follower. I am here to profit from trends. You can too!
true
true
true
I am a trend follower. I am here to profit from trends. You can too!
2024-10-12 00:00:00
2024-10-12 00:00:00
https://media.beehiiv.co…cape_HowardF.jpg
website
howardlindzon.com
Howie Town
null
null
20,781,588
https://www.artsy.net/article/artsy-editorial-jony-remade-visual-culture-apples-image
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,550,252
http://www.washingtonpost.com/blogs/the-switch/wp/2013/10/14/yahoo-to-make-ssl-encryption-the-default-for-webmail-users-finally/?asdfasdfasdfasdfasdf
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,722,574
https://gizmodo.com/researchers-test-2-400-year-old-leather-and-realize-its-1851114782
Researchers Test 2,400-Year-Old Leather and Realize It's Made of Human Skin
Isaac Schultz
Scythians in modern-day Ukraine made leather out of human skin, a team of researchers has determined, likely as a macabre trophy item. The discovery affirms a claim by the ancient Greek historian Herodotus, who wrote extensively on the Scythian way of life. In their work, the researchers use paleoproteomics to establish the sources of leather found on 14 different Scythian sites in southern Ukraine. The manifold sources—sheep, goat, cattle, horse, and yes, human—suggest that the equestrian steppe groups had a sophisticated knowledge of leatherworking. The team’s research was published last week in PLOS One. The 45 leather samples (as well as two fur objects) were recovered from 18 Scythian burials. Many of the Scythian burials were in kurgans—ancient burial mounds found across eastern Europe. Some of the earliest evidence of horse domestication comes from kurgans in Romania, Bulgaria, and Hungary. Scythians were a genetically diverse group of nomads in the Eurasian steppe; they “served as the mobile bridge that linked the various sedentary societies of Europe and Asia,” according to the new paper. Scythians carried technologies, goods, and ideas between the continents. The Scythians were described by Herodotus—sometimes through firsthand experience, but also through hearsay that likely traveled along trade routes from Eastern Asia—about 2,500 years ago. As documented by the Penn Museum, Herodotus described groups of people who were “one and all” able to “shoot from horseback,” with wagons as “the only houses that they possess.” Their weapon of choice on foot was the battle-ax, Herodotus added, and archaeological evidence suggests that the Scythians adored their horses. As the researchers noted, Herodotus detailed stories of Scythians drinking the blood of the defeated, using severed heads as a bargaining token for booty, and sewing together scalps to make clothing.
Importantly for this line of research, Herodotus also said that “Many too take off the skin, nails and all, from their dead enemies’ right hands, and make coverings for their quivers.” The fur samples were identified as red fox and animals in the cat and squirrel families. The team could not get a taxonomic ID on 26% of the samples, but the majority of the identified samples were likely goat (*C. hircus*). The runner-up was sheep leather (~19%), while the other leather sources were roughly evenly represented in the samples. Two of the leather samples were from horse, and two—probably the reason you’re here—were human skin. Scrutiny of the human leather made the team conclude that the skin bits were crafted on the top parts of their respective quivers; the rest of the quivers were made from animal leather. But even the animal leather quivers used a combination of different skins in their creation; the team posits that “each archer made their own quiver using the materials available at the moment.” Scythians’ prowess on battlefields didn’t get in the way of a good time: the British Museum notes that various Greek authors documented a heavy drinking culture among the Scythians, and Herodotus even detailed a sort of ancient hot box (until now I didn’t know the word ‘weed’ shows up on the British Museum website). New research continues to reshape the modern image of Scythians. They were much more than fearsome nomadic warriors. In 2021, a different team studied isotopes in tooth enamel from sites across Ukraine to understand the diet and range of ancient people. Those scientists concluded that only a small subset of people that lived in Scythian times were leading heavily nomadic lifestyles. Even if only a few of these ancient warriors used human skin for their quivers, the work substantiates one of Herodotus’ more metal claims about the Scythians. Whether any of the rumored clothing sewn from scalps will ever be recovered is another matter.
true
true
true
A group of horse-riding warriors kept some gruesome battle trophies.
2024-10-12 00:00:00
2023-12-20 00:00:00
https://gizmodo.com/app/…d28b48bffe57.jpg
article
gizmodo.com
Gizmodo
null
null
35,769,313
https://www.marketwatch.com/story/the-fed-says-dont-worry-about-u-s-banks-but-its-unclear-why-you-should-believe-them-2b309c06
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
4,912,705
https://github.com/sailfrog/hnreader/blob/master/hn.py
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,453,811
http://phys.org/news/2015-04-drones-dogs-deployed-guacamole.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,866,112
http://about.digg.com/blog/bury-is-back
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
40,323,408
https://blog.gingerbeardman.com/2024/05/10/emoji-history-the-missing-years/
Emoji history: the missing years ⌘I Get Info
null
During my research into vintage Japanese drawing software, I came across some devices that had built-in sketch or handwritten memo functions. I bought a couple of them to see if they did anything cool or interesting. These sorts of devices are pre-internet, so there’s not much about them online, and they can’t be emulated, so the only way to find out what they do is to get first-hand experience by reading the manual or, better, using one yourself. It’s difficult to find these devices in working condition, as most of them have screen polarisers that have gone bad over time, but if you’re lucky you can find one. ## 1994 One such device I bought was the *Sharp PI-4000*, from 1994. This is a pocket computer that rolled out of *Sharp*’s involvement in the development and manufacturing of *Apple*’s *Newton* *MessagePad*. In 1993 *Sharp* did their own licensed version of the *Apple* *Newton* *MessagePad* H1000, the *Expert Pad* *PI-7000*, but just like *Apple*’s device it wasn’t as successful as they’d hoped. But before that, in 1992, they’d made a device called the *PV-F1* which was the first touchscreen-only PDA. After the *Expert Pad* failure, *Sharp* took another attempt at the concept and came up with the *PI-3000* in 1993. This solved all the problems with the *PV-F1*, most notably size and cost. The device I have, the *PI-4000*, was released a year later and features higher memory capacity. The *PI-3000*/4000 devices could transfer data via infrared, connect to a modem to send faxes, and by the PI-5000 in 1995 could connect to cell phones to send emails. They all use a simplified—but still quite complicated—version of the multi-window operating system that had been developed for the *PV-F1*. So I was trying out the *PI-4000*; the memo function is pretty cool, allowing you to draw in different dither shades and pen widths, and use stamps to add symbols to your memo. These are mostly map-related things like road and rail junctions, buildings, and train stations.
Pretty cool. Then I tried typing some messages on the device and as I explored the myriad of keyboard input mechanisms I came across something rather familiar (sorry about the awful photo—it’s the best I could do, honest—the screen is very reflective and the pixels are so far from the backing they cast individual shadows!): At this point, I couldn’t quite believe what I was seeing because I was under the impression that the first emoji were created by an anonymous designer at *SoftBank* in 1997, and the most famous emoji were created by Shigetaka Kurita at *NTT DoCoMo* in 1999. But the *Sharp PI-4000* in my hands was released in 1994, and it was chock full of recognisable emoji. Then down the rabbit hole I fell. 🕳️🐇 ## 1990 A little more reading, and a tip from my friend @chamekan on Twitter, unearthed the fact that the *NEC PI-ET1* in 1990 also contained emoji 1. I also found a collector who owned a device, and we’ll hear more from them later on. The device is literally the coolest thing you’ve ever seen. With system software written by video game developer Hudson Soft, its character set features emoji that can be typed inline, and it also features a “montage function” that allows you to create faces for each of your contacts — 15 years later we’d see something similar in *Mii* on the *Nintendo Wii* in 2006. The emoji on this device are a lot less well designed, in my humble opinion, than those on the *Sharp* devices. ## A word about word processors By now I was in contact with Keith at Emojipedia, who mentioned that he remembered a *Sharp* device with emoji, a word processor. I found one in the *Sharp* *WD-A521*, from November 1990, which featured higher resolution versions of the emoji designs found on my *Sharp PI-4000*. There’s also the *Panasonic* *FW-U1S50* from 1990, which contains 110 familiar emoji under a section called “illustrations”, and also contains another 99 “audio/visual” symbols some of which coincide with modern emoji.
Perhaps there are other word processors from around that time that also contain emoji? I understand from my friend Izumi Okano that Japanese software developer Enzan-Hoshigumi, most famous for their Macintosh software and clipart, had created pictograms for one of the *Canoword* word processors around 1986.

So at this point I’m thinking: why would the emoji on a word processor be ignored on the timeline of emoji history? Was there anything else being ignored?

Before cell phones became prevalent there were pagers, or beepers; in Japan these were known as Pocket Bell. Initially they would only beep and show a number, and people would use “beeper slang” to form words by using numbers whose pronunciation was similar to words and syllables. Necessity is the mother of invention! Eventually pagers would be able to send and receive text. It was perhaps only natural that emoji find a home on these devices, with the most notable being the heart ❤️ emoji. But the date of this transition is 1995, which is earlier than the *SoftBank* emoji from 1997 but later than my *Sharp PI-4000* device.

## A note about beepers

As an aside, it’s interesting to understand how emoji were typed on pagers/beepers. They weren’t selected using a picker, which would have required cycling through a huge range of characters, but rather typed as numeric digits, which narrows the cycling down to far fewer characters.

The numeric code:

`21 91 15 24 12 23 78`

…would map to:

`カラオケイク?`

…which means:

`KARAOKE?`

Wild. Typing text this way must have felt like programming machine code directly in hexadecimal!

## What makes it emoji?

I was chatting to my friend Louie Mantia, who has designed many emoji in his career, discussing the earlier emoji I had found in my 1994 device. Louie asked me to confirm that I could type emoji inline with text, giving me the example `W😲W`, which was his criterion for the symbols to qualify as emoji. If I couldn’t do that, he suggested we could only consider the symbols as icons.
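Before moving on, for fun: the two-digit beeper input described in the note above can be sketched in a few lines of Python. The mapping table here is hypothetical, reconstructed solely from the single `カラオケイク?` example in this post; real Pocket Bell code tables were far larger and varied by carrier.

```python
# Hypothetical two-digit pager code table, reconstructed only from the
# single example in this post; real Pocket Bell tables were much larger.
TWO_DIGIT_CODE = {
    "21": "カ", "91": "ラ", "15": "オ", "24": "ケ",
    "12": "イ", "23": "ク", "78": "?",
}

def decode(message: str) -> str:
    """Translate a space-separated run of two-digit codes into text."""
    return "".join(TWO_DIGIT_CODE[code] for code in message.split())

print(decode("21 91 15 24 12 23 78"))  # カラオケイク?
```

Each character costs exactly two keypresses, which is why no cycling through a huge character picker was ever needed.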
So if I can type them inline amongst text on my device from 1994, which was capable of connecting to other devices and sending messages, then surely they should be considered the first emoji? Why do we, currently, only count emoji as emoji if they’re on a mobile phone?

I’m also wondering when these emoji might have been designed. Were they created in 1994 for the *PI-4000*, in 1993 for the *PI-3000*, or earlier for another device?

## 1988

So I kept looking. I was aware of another line of *Sharp* devices, electronic organisers, known as the *Bware* range in Japan and *Wizard* in the USA. These were pretty popular at the time, so much so that the USA device even got its own episode of Seinfeld in 1998. I’d come back into contact with these devices just last year, as they had the interesting capability of being able to play video games stored on solid-state application “IC” cards. You can play a version of *Tetris* by *BPS* that is quite different to the *Game Boy* version, and both were released in 1989. You can also play versions of *Sokoban* by *Thinking Rabbit*, and *Fortress* by *SSI*/*Victor*, amongst others.

Thanks to a collector, Akuji, I was able to confirm that the Japanese *PA-8500* device, released in 1988, contains emoji[2] similar in design to those found on my *PI-4000* and on the *WD-A521*. When redrawing these it was obvious that all the *Sharp* emoji sets are based on the same master design. (I’d love to know more about the *Sharp* artwork if anybody knows anything.)

## How old is an emoji?

At this point we’ve wiped almost a decade off the creation date of emoji, but can we go further? Is there a way to date a set of emoji? (In Japanese, 絵文字 means emoji.)

If we think about the PA line of devices, the *PA-8500* was released in 1988, and its predecessor, the (emoji-less) *PA-7000*, was released in 1987. So maybe the emoji set was created around this time?
We can get closer by looking at a couple of characters present in the emoji that give us a clue to the date of creation, and that is indeed the case with the *Sharp PI-4000* and *WD-A521*.

The characters ○金 and ○ビ (*maru-kin*, meaning rich/successful/winner, and *maru-bi*, meaning poor/unsuccessful/loser) were invented by the author Kazuhiro Watanabe in 1984 in his book Kinkonkan, which was later made into a movie. These were quickly accepted into Japanese vocabulary, winning the 84年の日本流行語 (Japanese Buzzwords Award 1984). And they are right there in the *Sharp PI-4000* emoji, represented as characters enclosed in circles. They were in common use throughout Japan’s bubble era, 1986–1991, but eventually fell out of fashion and are now considered obsolete. It’s interesting to note that they are not featured in either the 1997 *SoftBank* or 1999 *NTT DoCoMo* emoji sets.

## 1984

Once you accept that emoji existed in the 1980s, more things come to light.

The Ishii Award 「石井賞創作タイプフェイスコンテスト」 was a typeface design contest organised by the community of type designers in 1970; by 1984 it was in its 8th year. Yutaka Satoh of Type-Labo proposed a typeface consisting of emoji. Because they weren’t on screen, they were created by arranging dots in various shapes, but they are recognisably emoji. Coincidentally, I used a hybrid of this sort of approach when I added emoji to my game YOYOZO in September 2023: I plot the emoji as points but define them on a pixel grid.

In Matt Alt’s book “The Secret Lives of Emoji: How Emoticons Conquered the World”, there is a brief mention of ASCII emoticons on the Japanese internet (JUNET) in 1984, and then it fast-forwards to 1995 to begin talking about the pager, missing a decade of emoji usage in the process.
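As a side note on that points-on-a-pixel-grid technique: here is a minimal Python sketch of the idea. The 8×8 smiley bitmap is my own invented illustration (not one of the redrawn historical designs), and the actual point-plotting is left to whatever renderer you use.

```python
# A tiny glyph defined on a pixel grid; '#' marks a set pixel.
# This 8x8 smiley is an invented example, not a historical design.
GLYPH = [
    "..####..",
    ".#....#.",
    "#.#..#.#",
    "#......#",
    "#.#..#.#",
    "#..##..#",
    ".#....#.",
    "..####..",
]

def points(glyph):
    """Yield an (x, y) coordinate for every set pixel in the grid."""
    for y, row in enumerate(glyph):
        for x, ch in enumerate(row):
            if ch == "#":
                yield (x, y)

# A renderer would draw one point (or one scaled rectangle) per coordinate.
print(len(list(points(GLYPH))))  # 26 points
```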
In the Yakumono typeface, created by Yutaka Satoh (TYPE-LABO), we can clearly see many of the key emoji that would persist throughout the years: smiley faces, food, drink, cigarettes, sweat, umbrella, paperclip, lips, envelope, and, most interestingly, the (not smiling) pile of poo. This typeface received an honourable mention at the awards. Some 40 years later, I think it’s safe to say it deserved more. 🏆

## 1979

We can see emoji in the character sets of Japanese home computers such as the *Sharp MZ-80K*, which included a UFO, smiley faces, stick figures, a car, a snake, and more. I won’t include them here, but you can click the above link to see some in a PDF. 💾

## 1965

“Full Moon With Face”, also known as BA-90, was listed in a book of typesetting symbols published by Sha-ken in 1965. A smiling moon is still present in the emoji set today. 🌝

## 1959

CO-59 is a character set created in 1959 for the exchange of data between Japanese newspapers. It includes a symbol of a baseball, which again is still present in emoji ⚾️, at Unicode codepoint U+26BE ⚾︎, today.

## Comparing Emoji

I was interested in how the emoji that I have redrawn compare to the 1997 *SoftBank* and 1999 *NTT DoCoMo* sets, and to an early Pocket Bell, so here’s a little table.

|  | Sharp PA-8500 | NEC PI-ET1 | Sharp PI-4000 | Pocket Bell R-FAHC | SoftBank | NTT DoCoMo |
|---|---|---|---|---|---|---|
| Year | 1988 | 1990 | 1994 | 1995 | 1997 | 1999 |
| Quantity (approx) | 100 | 130 | 170 | 7 | 90 | 176 |
| Resolution | 16×16 | 16×16 | 12×12 | 5×7 | 12×12 | 12×12 |

## Conclusion

So what does this all mean? I’d say mostly that the history of emoji isn’t as clear-cut as you might have thought. You can decide for yourself what you consider to be the first emoji. It depends on your own personal definition, so there is no right or wrong answer. 😎

Personally, I define the start date of emoji as the point in time when sets of these symbols first appeared for use whilst composing text.
I don’t think the timeline should start at mobile phones, as this feels like a somewhat arbitrary decision that dismisses a lot of history. It’s like saying music only began to exist from the moment it could be recorded and listened to without the actual musicians being present. 🤔

As to whether the timeline of emoji history will be rewritten with this knowledge, it’s difficult to say. Much of this falls in the grey area of happening around the time the internet was taking hold, plus most things about the origin of emoji are in the Japanese language, so there are unlikely to be sources Wikipedia would consider verifiable enough. The best we could do is quote the pages of the manuals for the devices, and for the rest hope that there’s some record in Japanese literature that could be cited. I won’t be running the Wikipedia editing gauntlet, but if you do, please let me know how it goes! 🧨

## Terms of use

I painstakingly recreated the emoji sets on this page, pixel by pixel, over many days of hard work. I even went so far as adding a new tool to the pixel art app I use, so as to make the task of redrawing hundreds of emoji a little less daunting. Feel free to use the emoji images, just remember to credit @gingerbeardman and include a link to this page. With one exception: I object to the use of these images for the purpose of creating NFTs. Thanks for your understanding!

Originally published: 2024-05-10
https://marshallbrain.com/manna1
Manna – Two Views of Humanity’s Future – Chapter 1
Depending on how you want to think about it, it was funny or inevitable or symbolic that the robotic takeover did not start at MIT, NASA, Microsoft or Ford. It started at a Burger-G restaurant in Cary, NC on May 17. It seemed like such a simple thing at the time, but May 17 marked a pivotal moment in human history.

Burger-G was a fast food chain that had come out of nowhere starting with its first restaurant in Cary. The Burger-G chain had an attitude and a style that said “hip” and “fun” to a wide swath of the American middle class. The chain was able to grow with surprising speed based on its popularity and the public persona of the young founder, Joe Garcia. Over time, Burger-G grew to 1,000 outlets in the U.S. and showed no signs of slowing down. If the trend continued, Burger-G would soon be one of the “Top 5” fast food restaurants in the U.S.

The “robot” installed at this first Burger-G restaurant looked nothing like the robots of popular culture. It was not hominid like C-3PO or futuristic like R2-D2 or industrial like an assembly line robot. Instead it was simply a PC sitting in the back corner of the restaurant running a piece of software. The software was called “Manna”, version 1.0*. Manna’s job was to manage the store, and it did this in a most interesting way.

Think about a normal fast food restaurant. A group of employees worked at the store, typically 50 people in a normal restaurant, and they rotated in and out on a weekly schedule. The people did everything from making the burgers to taking the orders to cleaning the tables and taking out the trash. All of these employees reported to the store manager and a couple of assistant managers. The managers hired the employees, scheduled them and told them what to do each day. This was a completely normal arrangement. In the early twenty-first century, there were millions of businesses that operated in this way.

But the fast food industry had a problem, and Burger-G was no different.
The problem was the quality of the fast food experience. Some restaurants were run perfectly. They had courteous and thoughtful crew members, clean restrooms, great customer service and high accuracy on the orders. Other restaurants were chaotic and uncomfortable to customers. Since one bad experience could turn a customer off to an entire chain of restaurants, these poorly-managed stores were the Achilles heel of any chain.

To solve the problem, Burger-G contracted with a software consultant and commissioned a piece of software. The goal of the software was to replace the managers and tell the employees what to do in a more controllable way. Manna version 1.0 was born.

Manna was connected to the cash registers, so it knew how many people were flowing through the restaurant. The software could therefore predict with uncanny accuracy when the trash cans would fill up, the toilets would get dirty and the tables needed wiping down. The software was also attached to the time clock, so it knew who was working in the restaurant.

Manna also had “help buttons” throughout the restaurant. Small signs on the buttons told customers to push them if they needed help or saw a problem. There was a button in the restroom that a customer could press if the restroom had a problem. There was a button on each trashcan. There was a button near each cash register, one in the kiddie area and so on. These buttons let customers give Manna a heads up when something went wrong.

At any given moment Manna had a list of things that it needed to do. There were orders coming in from the cash registers, so Manna directed employees to prepare those meals. There were also toilets to be scrubbed on a regular basis, floors to mop, tables to wipe, sidewalks to sweep, buns to defrost, inventory to rotate, windows to wash and so on. Manna kept track of the hundreds of tasks that needed to get done, and assigned each task to an employee one at a time.
Manna told employees what to do simply by talking to them. Employees each put on a headset when they punched in. Manna had a voice synthesizer, and with its synthesized voice Manna told everyone exactly what to do through their headsets. Constantly. Manna micro-managed minimum wage employees to create perfect performance.

The software would speak to the employees individually and tell each one exactly what to do. For example, “Bob, we need to load more patties. Please walk toward the freezer.” Or, “Jane, when you are through with this customer, please close your register. Then we will clean the women’s restroom.”

And so on. The employees were told exactly what to do, and they did it quite happily. It was a major relief actually, because the software told them precisely what to do step by step. For example, when Jane entered the restroom, Manna used a simple position tracking system built into her headset to know that she had arrived. Manna then told her the first step.

Manna: “Place the ‘wet floor’ warning cone outside the door please.”

When Jane completed the task, she would speak the word “OK” into her headset and Manna moved to the next step in the restroom cleaning procedure.

Manna: “Please block the door open with the door stop.”

Jane: “OK.”

Manna: “Please retrieve the bucket and mop from the supply closet.”

Jane: “OK.”

And so on. Once the restroom was clean, Manna would direct Jane to put everything away. Manna would make sure that she carefully washed her hands. Then Manna would immediately start Jane working on a new task. Meanwhile, Manna might send Lisa to the restroom to inspect it and make sure that Jane had done a thorough job. Manna would ask Lisa to check the toilets, the floor, the sink and the mirrors. If Jane missed anything, Lisa would report it.

I grew up in Cary, NC. That was a long time ago, but when I was a kid I lived right in the middle of Cary with my parents.
My father was a pilot for a big airline. My mother was a stay-at-home mom and I had a younger sister. We lived in a typical four-bedroom suburban home in a nice neighborhood with a swimming pool in the backyard. I was a 15-year-old teenager working at the Burger-G on May 17 when the first Manna system came online.

I can remember putting on the headset for the first time and the computer talking to me and telling me what to do. It was creepy at first, but that feeling really only lasted a day or so. Then you were used to it, and the job really did get easier. Manna never pushed you around, never yelled at you. The girls liked it because Manna didn’t hit on them either. Manna simply asked you to do something, you did it, you said, “OK”, and Manna asked you to do the next step. Each step was easy. You could go through the whole day on autopilot, and Manna made sure that you were constantly doing something.

At the end of the shift Manna always said the same thing. “You are done for today. Thank you for your help.” Then you took off your headset and put it back on the rack to recharge. The first few minutes off the headset were always disorienting — there had been this voice in your head telling you exactly what to do in minute detail for six or eight hours. You had to turn your brain back on to get out of the restaurant.

To me, Manna was OK. The job at Burger-G was mindless, and Manna made it easy by telling you exactly what to do. You could even get Manna to play music through your headphones, in the background. Manna had a set of “stations” that you could choose from. That was a bonus. And Manna kept you busy the entire day. Every single minute, you had something that Manna was telling you to do. If you simply turned off your brain and went with the flow of Manna, the day went by very fast.

My father, on the other hand, did not like Manna at all from the very first day he saw me wearing the headset in the restaurant. He and Mom had come in for lunch and to say hi.
I knew they were coming, so I had timed my break so I could sit down with them for a few minutes. When I sat down, my father noticed the headset.

“So”, he said, “they have you working the drive-thru I see. Is that a step up or a step down?”

“It’s not the drive-thru,” I replied, “it’s a new system they’ve installed called Manna. It manages the store.”

“How so?”

“It tells me what to do through the headset.”

“Who, the manager?”

“No, it’s a computer.”

He looked at me for a long time, “A computer is telling you what to do on the job? What does the manager do?”

“The computer is the manager. Manna, manager, get it?”

“You mean that a computer is telling you what to do all day?”, he asked.

“Yeah.”

“Like what?”

I gave him an example, “Before you got here, I was taking out the trash. Manna told me how to do it.”

“What did it say?”

“It tells you exactly what to do. Like, it told me to get four new bags from the rack. When I did that it told me to go to trash can #1. Once I got there it told me to open the cabinet and pull out the trash can. Once I did that it told me to check the floor for any debris. Then it told me to tie up the bag and put it to the side, on the left. Then it told me to put a new bag in the can. Then it told me to attach the bag to the rim. Then it told me to put the can back in and close the cabinet. Then it told me to wipe down the cabinet and make sure it’s spotless. Then it told me to push the help button on the can to make sure it is working. Then it told me to move to trash can #2. Like that.”

He looked at me for a long time again before he said, “Good Lord, you are nothing but a piece of a robot. What is it saying to you now?”

“It just told me I have three minutes left on my break. And it told me to smile and say hello to the guests. How’s this? Hi!” And I gave him a big toothy grin.

“Yesterday the people controlled the computers. Now the computers control the people. You are the eyes and hands for this robot.
And all so that Joe Garcia can make $20 million per year. Do you know what will happen if this spreads?”

“No, I don’t. And I think Mr. G makes more than $20 million a year. But right now I’ve got two minutes left, and Manna is telling me that I need to move back to station 3 to get ready for the next run. See ya.” I waved at Mom. Dad just stared at me.

The tests in our Burger-G store were surprisingly successful. There were Burger-G corporate guys in the restaurant watching us, fixing bugs in the software, making sure Manna was covering all the bases, and they were pleased. It took about 3 months to work all the kinks out, and as they did the Manna software totally changed the restaurant.

Worker performance nearly doubled. So did customer satisfaction. So did the consistency of the customer’s experience. Trash cans never overfilled. Bathrooms were remarkably clean. Employees always washed their hands when they needed to. Food was ready faster. The meals we handed out were nearly 100 percent accurate because Manna made us check to make sure every item in the bag was exactly what the customer ordered. The store never ran out of supplies — there were always plenty of napkins in the dispenser and the ketchup container was always full. There were enough employees in the store for the busy times, because Manna could accurately track trends and staff appropriately.

In addition, Burger-G saved a ton of money. Burger-G had hundreds of stores in the United States. Manna worked so well that Burger-G deployed it nationwide. Soon Burger-G had cut more than 3,000 of its higher-paid store employees — mostly assistant managers and managers. That one change saved the company nearly $100 million per year, and all that money came straight to the bottom line for the restaurant chain. Shareholders were ecstatic. Mr. G gave himself another big raise to celebrate.
In addition, Manna had optimized store staffing and had gotten a significant productivity boost out of the employees in the store. That saved another $150 million. $250 million made a huge difference in the fast food industry.

So, the first wave of fast food robots did not replace all of the burger flipping employees as everyone had expected. The robots replaced middle management and significantly improved the performance of minimum wage employees.

All of the other fast food chains watched the Burger-G experiment with Manna closely, and they started installing Manna systems as well. Soon, nearly every business in America that had a significant pool of minimum-wage employees was installing Manna software or something similar. They had to do it in order to compete. In other words, Manna spread through the American corporate landscape like wildfire.

And my dad was right. It was when all of these new Manna systems began talking to each other that things started to get uncomfortable.
https://zandronum.com/forum/viewtopic.php?t=10973
Zandronum
Kaminsky » Sun Jul
- Players can now be sorted from top to bottom based on multiple criteria, instead of only one (e.g. frags, points, wins, or kills). The rank order can be changed using the SCORINFO lump.
- Player indices, colours, and countries are now visible on the scoreboard. Furthermore, a player class's *ScoreIcon* can appear on the scoreboard now.
- If a player is carrying an enemy team's flag/skull, or if they're holding the Terminator Sphere or Possession Hellstone, then a mini icon of the item now appears next to their name.
- Teams (and spectators) now have their own headers, which can also be customized using the SCORINFO lump. The logo property in TEAMINFO is now usable in Zandronum.
- The opacity of the scoreboard can be adjusted using the "cl_scoreboardalpha" CVar.
- The "cl_colorizepings" CVar can be enabled to colourize everyone's pings depending on the severity (green, yellow, orange, and red).
- New ACS functions for controlling custom column data: SetCustomPlayerValue, GetCustomPlayerValue, and ResetCustomDataToDefault.

In addition, TDRR's lump-reading ACS functions have made it into this build, which should be useful for any modder that wishes to incorporate lump-reading capabilities into their mod.

A lot of other minor changes and fixes are in this build too. It's worth checking out the changelog on the wiki to get a longer summary of all of the changes in 3.2 so far. As always, here is the full Mercurial changelog:

Code:

```
changeset: 10953:850eeed37215
user: Adam Kaminski <[email protected]>
- The server skips sending a SVC2_SETPLAYERSTATUS command to the client that initially sent the CLC_STARTCHAT, CLC_ENTERCONSOLE, CLC_ENTERMENU, etc. command, as they already updated the status on their end.
- Added the helper function PLAYER_SetStatus.

changeset: 10954:4cfb33370553
user: Adam Kaminski <[email protected]>
Added the helper function PLAYER_SetTime.
changeset: 10955:2edc2eb5a3dc
user: Adam Kaminski <[email protected]>
Fixed a grammatical error in one of the callvote messages.

changeset: 10956:3d1dc73737a3
user: Adam Kaminski <[email protected]>
Spectators are now allowed to keep spying on other actors when sv_disallowspying is enabled.

changeset: 10957:8cd9d65c2c3f
user: Adam Kaminski <[email protected]>
Fixed: when the server printed a list of flags that were changed, it didn't check if a flag CVar existed first before incrementing the number of flags changed (e.g. "dmflags2 1" doesn't enable any flags because no flag CVar occupies the first bit of dmflags2).

changeset: 10958:69add5809cea
user: Adam Kaminski <[email protected]>
Fixed: sv_markchatlines printed chat lines with two timestamps if sv_logfiletimestamp was enabled.

changeset: 10959:8836885fd432
user: Adam Kaminski <[email protected]>
Added the CVar "sv_distinguishteamchatlines", which distinguishes team chat messages from normal chat messages in the server console/logfile (addresses 0036).

changeset: 10960:51750a5f66fc
user: Adam Kaminski <[email protected]>
Blacklisted the rcon, rcon_logout, and send_password CCMDs from ConsoleCommand.

changeset: 10961:cca76668e470
user: Adam Kaminski <[email protected]>
Refactored TEAM_DoWinSequence and replaced C-style char arrays with FString. This also fixes an inconsistency in the fadeout times if whether or not the server prints the message.

changeset: 10962:6d16ad10f91b
user: Adam Kaminski <[email protected]>
Did some refactoring in TEAM_TimeExpired and replaced C-style char arrays with FString.

changeset: 10963:73a1c6a99b8d
user: Adam Kaminski <[email protected]>
Replaced C-style char arrays in the team CCMD's callback function with FString.

changeset: 10964:d71e6eb5bec4
user: Adam Kaminski <[email protected]>
Added a check that prevents the server from executing the team CCMD.

changeset: 10965:b07c80551785
user: Sean Baggaley <[email protected]>
Added DPI-awareness settings to the Win32 manifest to prevent automatic upscaling of the window on high-DPI displays.

changeset: 10966:b063ecdf6083
user: Adam Kaminski <[email protected]>
Fixed: players still respawned inside sectors with extended damage types if sv_samespawnspot was enabled.

changeset: 10967:cce4aa1b0fa0
user: Adam Kaminski <[email protected]>
Fixed: adding a CVar in a "defaultgamesettings" or "defaultlockedgamesettings" block would crash the game during startup.

changeset: 10968:e997d83344df
user: Adam Kaminski <[email protected]>
Fixed: the Doomsphere mugshot graphics were missing in zandronum.pk3.

changeset: 10969:dc4a0679fdf9
user: Adam Kaminski <[email protected]>
Fixed: a player's flash state wasn't removed when they turned into a spectator.

changeset: 10970:48fd827e2ae9
user: Sean Baggaley <[email protected]>
Replaced the C-style array used by the internal server browser to store a server's PWADs with a TArray. This fixes the client crashing when encountering servers with more than 32 PWADs.

changeset: 10971:42c405796de6
user: Sean Baggaley <[email protected]>
The server can now broadcast the name of the current game mode (as defined in the GAMEMODE lump) to launchers.

changeset: 10972:b24867c9a99e
user: Sean Baggaley <[email protected]>
The output of the dumptrafficmeasure CCMD is now sorted by the bandwidth used in ascending order. This can be changed to descending order by specifying "desc" as the first parameter (addresses 4090).

changeset: 10973:814d72aaa3c5
user: Adam Kaminski <[email protected]>
Fixed: the "ready to respawn in..." message could appear even when compat_instantrespawn was enabled in online games.

changeset: 10974:437563862ede
user: Adam Kaminski <[email protected]>
Fixed: the "ready to respawn in..." message could appear in singleplayer when it shouldn't (e.g. if the client was previously in an online game).

changeset: 10975:805780cfc641
user: Sean Baggaley <[email protected]>
- Refactor the launcher protocol code so that each field is now implemented as its own function.
- Rework the logic of SERVER_MASTER_SendServerInfo so that it now iterates through each field and calls the relevant function to assemble the launcher response.

changeset: 10976:f9c1ffa51c78
user: TDRR <[email protected]>
Moved bot pitch reset to StopAimingAtEnemy command.

changeset: 10977:3eae6b8b3deb
user: Adam Kaminski <[email protected]>
Moved a local variable declaration elsewhere.

changeset: 10978:328e98119da6
user: Adam Kaminski <[email protected]>
Fixed: clients that weren't fully connected to the server still triggered DISCONNECT scripts if they were kicked because of an error.

changeset: 10979:5e48ddb13311
user: Adam Kaminski <[email protected]>
Fixed: the localhost couldn't successfully join the server through the server console window if it was forcing a connect or join password.

changeset: 10980:fa55533805fd
user: Adam Kaminski <[email protected]>
Fixed: local CVars defined in CVARINFO were never saved in the user's config file.

changeset: 10981:2ca40fbf0e4d
user: Adam Kaminski <[email protected]>
Fixed: players didn't drop important items like flags, skulls, etc. when they respawned using SetPlayerClass.

changeset: 10982:7a3f56c275f8
user: Sean Baggaley <[email protected]>
Fixed an issue in which SQF_EXTENDED_INFO would not be sent under Linux, resulting in a malformed launcher response.

changeset: 10983:49c4602c534d
user: Adam Kaminski <[email protected]>
Fixed: a live player's class still changed when travelling between some levels, though not always, in cooperative.

changeset: 10984:0e8360876fdf
user: Sean Baggaley <[email protected]>
Added a segmented version of the launcher protocol which splits responses across multiple packets (addresses 4064).

changeset: 10985:8f60acdb0847
user: Adam Kaminski <[email protected]>
Created the header file "scoreboard_enums.h" which contains all scoreboard-related enumerations.

changeset: 10986:06edfae62fa3
user: Adam Kaminski <[email protected]>
Created the ColumnValue class, allowing for easy storage of the different data types that a column might use.

changeset: 10987:720f1c0e11cf
user: Adam Kaminski <[email protected]>
Added a CVar flag that refreshes the scoreboard if the CVar's value is changed.

changeset: 10988:c10bb3828450
user: Adam Kaminski <[email protected]>
Created the ScoreColumn class, and added the function SCOREBOARD_GetColumn that returns a pointer to a column.

changeset: 10989:97aba2c593e4
user: Adam Kaminski <[email protected]>
Added the member function ScoreColumn::Refresh.

changeset: 10990:2e7dd33e21a0
user: Adam Kaminski <[email protected]>
Renamed the "ulDefaultWidth" member in ScoreColumn to "ulSizing".

changeset: 10991:76e7529aa40c
user: Adam Kaminski <[email protected]>
Added the member function ScoreColumn::UpdateWidth.

changeset: 10992:47b5b47a8e95
user: Adam Kaminski <[email protected]>
Renamed the COLUMNCMD_WIDTH constant to COLUMNCMD_SIZE.

changeset: 10993:4446ae266689
user: Adam Kaminski <[email protected]>
Added the member functions: ScoreColumn::DrawString, ScoreColumn::DrawColor, ScoreColumn::DrawTexture.

changeset: 10994:49b926c344b2
user: Adam Kaminski <[email protected]>
Added the member function ScoreColumn::DrawHeader.

changeset: 10995:694618c0052d
user: Adam Kaminski <[email protected]>
Added the helper function ScoreColumn::FixClipRectSize to consolidate some duplicated code. This change also ensures that input widths or heights that are somehow less than zero also get fixed.

changeset: 10996:b9f70e9fcdfd
user: Adam Kaminski <[email protected]>
Added the member functions ScoreColumn::Parse and ScoreColumn::ParseCommand.

changeset: 10997:de2c82049a5d
user: Adam Kaminski <[email protected]>
Created the DataScoreColumn class.

changeset: 10998:586623471e76
user: Adam Kaminski <[email protected]>
Added the member function DataScoreColumn::GetValueString.

changeset: 10999:02e0cc193f65
user: Adam Kaminski <[email protected]>
Added the member function DataScoreColumn::GetValueWidth.

changeset: 11000:116c2ed89991
user: Adam Kaminski <[email protected]>
Added the member function DataScoreColumn::GetValue.

changeset: 11001:8711222a3df9
user: Adam Kaminski <[email protected]>
Added the member function DataScoreColumn::UpdateWidth.

changeset: 11002:b3692464f9fe
user: Adam Kaminski <[email protected]>
Added the member function DataScoreColumn::DrawValue.

changeset: 11003:130cd2ebff7a
user: Adam Kaminski <[email protected]>
Added the member function DataScoreColumn::ParseCommand.

changeset: 11004:9c18c26b4a6d
user: Adam Kaminski <[email protected]>
Created the CompositeScoreColumn class.

changeset: 11005:9128e45ea954
user: Adam Kaminski <[email protected]>
Added the member function CompositeScoreColumn::Refresh.

changeset: 11006:e2dda4b2d7b0
user: Adam Kaminski <[email protected]>
Added the member function CompositeScoreColumn::UpdateWidth and the helper functions CompositeScoreColumn::GetRowWidth and CompositeScoreColumn::GetSubColumnWidth.

changeset: 11007:763a2ee34f49
user: Adam Kaminski <[email protected]>
Added the member function CompositeScoreColumn::DrawValue.

changeset: 11008:d35022112bd3
user: Adam Kaminski <[email protected]>
Added the member function CompositeScoreColumn::ParseCommand.

changeset: 11009:856e6ed81483
user: Adam Kaminski <[email protected]>
Created the Scoreboard struct, and added the functions SCOREBOARD_IsDisabled, SCOREBOARD_IsHidden, and SCOREBOARD_SetHidden.

changeset: 11010:7afd13981732
user: Adam Kaminski <[email protected]>
Added the member function Scoreboard::UpdateWidth.

changeset: 11011:3926e6a53831
user: Adam Kaminski <[email protected]>
Added the member function Scoreboard::UpdateHeight.

changeset: 11012:1bea18f49e7c
user: Adam Kaminski <[email protected]>
Added the member function Scoreboard::Refresh.

changeset: 11013:ff8d38223188
user: Adam Kaminski <[email protected]>
Added the member functions Scoreboard::DrawBorder and Scoreboard::DrawRowBackground.

changeset: 11014:9bae3dc589a7
user: Adam Kaminski <[email protected]>
Added the member function Scoreboard::DrawRow.

changeset: 11015:10d4736ef9b6
user: Adam Kaminski <[email protected]>
Added the member function Scoreboard::Render.

changeset: 11016:640d69ab9258
user: Adam Kaminski <[email protected]>
Changed how team colors are blended into the row backgrounds.

changeset: 11017:343dd67ef4d8
user: Adam Kaminski <[email protected]>
Added missing code to handle local row background colors.

changeset: 11018:363d661de511
user: Adam Kaminski <[email protected]>
Added the virtual member function ScoreColumn::IsDataColumn.

changeset: 11019:a6f9a2e45076
user: Adam Kaminski <[email protected]>
Added helper functions that scan for a column and push it into a list (the CompositeScoreColumn class and Scoreboard struct both need this).

changeset: 11020:b5497b499e4a
user: Adam Kaminski <[email protected]>
Added the member function Scoreboard::Parse.

changeset: 11021:70fc3fd3b6d4
user: Adam Kaminski <[email protected]>
Made a small change to Scoreboard::AddColumnToList.

changeset: 11022:f885f8b7dfec
user: Adam Kaminski <[email protected]>
Fixed how clipping rectangles are handled when drawing texture columns, and removed some redundant code with handling clipping rectangles when drawing color columns.

changeset: 11023:6385ecfb4f09
user: Adam Kaminski <[email protected]>
When the scoreboard refreshes, players are now also sorted by team (or at least, such that true spectators come after active players). This simplifies and optimizes Scoreboard::Render.
changeset: 11024:2a9adf00b196 user: Adam Kaminski <[email protected]> As an optimization, generate the blended row background colors for each team in Scoreboard::Parse, instead of every time Scoreboard::DrawRow is called. changeset: 11025:f41fc372456f user: Adam Kaminski <[email protected]> As another optimization, certain checks that are made when refreshing a column will only execute at the start of a new game or level, or at the start of the intermission screen. changeset: 11026:67e4d8fd109d user: Adam Kaminski <[email protected]> Consolidated some duplicated code. changeset: 11027:ab01759a332d user: Adam Kaminski <[email protected]> Fixed the DONTSEPARATETEAMS flag not being handled properly. changeset: 11028:a841c8a2bdcb user: Adam Kaminski <[email protected]> Simplified Scoreboard::UpdateHeight a bit. changeset: 11029:b07e7fc48da3 user: Adam Kaminski <[email protected]> Added the scoreboard flag DONTUSELOCALROWBACKGROUNDCOLOR, which prevents the local row background color from being used. changeset: 11030:65c8aed342b8 user: Adam Kaminski <[email protected]> Added the CVar "cl_colorizepings", which forces everyone's ping in the ping column to be printed in different colours that visually indicate how severe their connection is to the server. changeset: 11031:4612eca2df57 user: Adam Kaminski <[email protected]> Added the CVar "cl_useshortcolumnnames", which makes columns use their short names in the headers. changeset: 11032:1db4ef11423f user: Adam Kaminski <[email protected]> Added the CVar "cl_scoreboardalpha", which controls the opacity of the entire scoreboard. changeset: 11033:8dff11d411a9 user: Adam Kaminski <[email protected]> Created the CustomScoreColumn template class and its descendants, and added the function SCOREBOARD_ResetCustomColumnsForPlayer. changeset: 11034:a47ddc161211 user: Adam Kaminski <[email protected]> Added the CVAR_REFRESHSCOREBOARD flag to cl_useshortcolumnnames. 
changeset: 11035:7f883fb871f8 user: Adam Kaminski <[email protected]> - Checking if a column is (not) visible during the intermission screen has been moved back into Scoreboard::Refresh. - If a composite column is unusable in the current game, then its sub-columns are now marked as unusable too. - The server is now allowed to check if columns are usable in the current game and reset custom columns. changeset: 11036:80cd735ff895 user: Adam Kaminski <[email protected]> Added checks to prevent SCOREBOARD_Reset and SCOREBOARD_ResetCustomColumnsForPlayer from executing if there are no defined columns. changeset: 11037:001005ca5ab9 user: Adam Kaminski <[email protected]> The HIDDENBYDEFAULT flag now gets handled at the start of a new game. Any columns with that flag enabled are automatically hidden. changeset: 11038:7c0229735b0a user: Adam Kaminski <[email protected]> Virtualized CustomScoreColumn::GetDefaultValue and CustomScoreColumn::SetValue. changeset: 11039:ed6f4e5651f5 user: Adam Kaminski <[email protected]> - Added new wrapper functions to get a country's index (from GeoIP.c) from an address, and a country's name/code from an index. - Added a new member to player_t that stores the player's country index. - Even when no GeoIP database is available, the server will still tell if a client's connecting from LAN (if they're not hiding their country). changeset: 11040:9c297eb837ff user: Adam Kaminski <[email protected]> Added ACS functions: SetCustomColumnValue, GetCustomColumnValue, ResetCustomColumnToDefault, GetColumnDataType, HideColumn, IsColumnHidden, HideScoreboard, and IsScoreboardHidden. changeset: 11041:689a4a8c63db user: Adam Kaminski <[email protected]> Make ScoreColumn::GetDisplayName and ScoreColumn::GetShortName return a NULL pointer if the display or short names are empty. changeset: 11042:ae47d77d71e3 user: Adam Kaminski <[email protected]> - Changed the enum COLUMNTEMPLATE_e so that it denotes if a column is a DataScoreColumn or CompositeScoreColumn. 
- Added the enum DATACONTENT_e, which is now used to describe what kind of content a data column uses, either data or graphic. - Added the virtual member function ScoreColumn::IsCustomColumn. - Removed the inline specifier from DataScoreColumn::GetNativeType. changeset: 11043:f3513ca1351f user: Adam Kaminski <[email protected]> - CustomScoreColumn now inherits from the abstract class CustomScoreColumnBase. - Added a parameter to SCOREBOARD_GetColumn to also check if a column is currently usable. changeset: 11044:95d9ba7fc961 user: Adam Kaminski <[email protected]> - ColumnValue is now responsible for (de)allocating memory for strings when they must be stored or removed. - Added the member functions ColumnValue::ToString and ColumnValue::FromString. - Added a custom comparator to check if two ColumnValue objects have the same value. changeset: 11045:6bf8193bb60f user: Adam Kaminski <[email protected]> Made minor changes in handling a CustomScoreColumn's default value. changeset: 11046:7af578544b00 user: Adam Kaminski <[email protected]> - When a composite column is added to a scoreboard's column order, its sub-columns are now also treated as being part of the scoreboard's column order. - Simplified some of the member functions' parameters. - Added checks to member functions to prevent them from executing if the column's not part of the scoreboard. changeset: 11047:8ce6a3311b8a user: Adam Kaminski <[email protected]> Added the member functions ScoreColumn::GetScoreboard and DataScoreColumn::GetCompositeColumn, and removed ScoreColumn::IsInsideScoreboard. changeset: 11048:980c9b20d10a user: Adam Kaminski <[email protected]> SCOREBOARD_Reset no longer has to directly check if data or custom columns that are part of a composite column are usable. changeset: 11049:187ee8dd2ad8 user: Adam Kaminski <[email protected]> It's now required that data columns have the DONTSHOWHEADER flag enabled and are aligned to the left if they should be added to a composite column.
changeset: 11050:edda47d611c3 user: Adam Kaminski <[email protected]> Added the country name, code, and flag native column types. changeset: 11051:88b14ffd20cf user: Adam Kaminski <[email protected]> Renamed a few constants used by the server console. changeset: 11052:4535b5f736e6 user: Adam Kaminski <[email protected]> Added helper functions to the bot commands to reduce code duplication (botcmd_LookForPlayerEnemies now prints "Illegal player index" instead of "Illegal player start index"). changeset: 11053:6ac30d9bdf2e user: Adam Kaminski <[email protected]> Renamed g_NetIDList to g_ActorNetIDList (in case we have network ID lists for other objects that aren't AActor). changeset: 11054:c4ec3e416509 user: Adam Kaminski <[email protected]> Removed an unused global variable. changeset: 11055:928166483c80 user: Adam Kaminski <[email protected]> - Modified the IDList template class (which has been renamed to NetIDList) to generalize its behaviour better. It includes a new parameter to specify how many network IDs are available. - Since IDList<>::rebuild previously made sense to call for AActor only, it has been moved into the new derived class ActorNetIDList. - Moved the CountActors function from "c_cmds.cpp" to "p_mobj.cpp", and renamed it to NetIDTrait<AActor>::count. changeset: 11056:7da9738c9c7c user: Adam Kaminski <[email protected]> Before clearing a composite column's sub-column list, or the scoreboard's column order list, make sure that any reference to the composite column or scoreboard respectively is set to NULL for the affected column(s). changeset: 11057:022104e31dac user: Adam Kaminski <[email protected]> The ADDTOCOLUMNORDER and ADDTORANKORDER scoreboard commands can now add more than one column to the scoreboard's column or rank order lists respectively. 
changeset: 11058:8069c571328e user: Adam Kaminski <[email protected]> After (re)processing a composite column's sub-column list, or the scoreboard's column order, any invalid columns that are in the scoreboard's rank order are now removed. changeset: 11059:ec4ffe43cf9c user: Adam Kaminski <[email protected]> - CustomScoreColumn is no longer a template class, its values are now stored as ColumnValue objects, and the data type can now be changed on the fly if necessary. - Made CustomScoreColumn::SetDefaultValue and ColumnValue::ChangeDataType public member functions. changeset: 11060:1d49b488e927 user: Adam Kaminski <[email protected]> Fixed a case of undefined behaviour caused by comparing two ColumnValue objects holding strings. changeset: 11061:f192097d753a user: Adam Kaminski <[email protected]> Added a way to return a ScoreColumn's internal name (i.e. the name that will be used to define columns in SCORINFO). changeset: 11062:8c8ed17d36e9 user: Adam Kaminski <[email protected]> Rewrote a few SCORINFO error messages. changeset: 11063:135b6224f398 user: Adam Kaminski <[email protected]> Make sure that a client's custom columns are reset to their default values on the server's end when they disconnect. changeset: 11064:340ecb04b119 user: Adam Kaminski <[email protected]> Make sure to refresh the scoreboard when a player's values are changed in a custom column. changeset: 11065:86592c4a1a6f user: Adam Kaminski <[email protected]> Added ACS function: "IsColumnUsable", to check if a column is usable in the current game or not. changeset: 11066:fbeeca86673a user: Adam Kaminski <[email protected]> Added the scoreboard commands REMOVEFROMCOLUMNORDER and REMOVEFROMRANKORDER. changeset: 11067:53aa9bd436b5 user: Adam Kaminski <[email protected]> Added the composite column commands ADDTOCOLUMNS and REMOVEFROMCOLUMNS. Also, make sure that the data column is added or removed from the scoreboard properly when added or removed from a composite column. 
changeset: 11068:5e482a1b887e user: Adam Kaminski <[email protected]> When processing a COLUMNORDER scoreboard command, remove any invalid columns in the scoreboard's rank order after parsing the list instead of before. changeset: 11069:4f7d2f8c0d55 user: Adam Kaminski <[email protected]> Moved NetIDList and its member functions into "networkshared.h". changeset: 11070:798fa1b16db8 user: Adam Kaminski <[email protected]> Changed ScoreColumn::Parse into a virtual function, and added the virtual function DataScoreColumn::Parse to move a few checks to a better place. changeset: 11071:9ef88964e62b user: Adam Kaminski <[email protected]> Added a check that prevents columns from having both the OFFLINEONLY and ONLINEONLY flags enabled at the same time. changeset: 11072:b6e4a302e9c2 user: Adam Kaminski <[email protected]> Removed any code that hides the scoreboard or columns, and added checks that prevent the scoreboard from being drawn if its width or height are zero. changeset: 11073:daee317dab0e user: Adam Kaminski <[email protected]> Renamed the ACS function "GetColumnDataType" to "GetCustomColumnDataType", which now only works for custom columns (users should already know the data types of native columns anyways). changeset: 11074:55a5ae6be624 user: Adam Kaminski <[email protected]> Removed the ACS function "GetColumnDataType"; there doesn't seem to be a lot of cases where this function could actually be useful. changeset: 11075:f7473637d80c user: Adam Kaminski <[email protected]> - Separated the data that's used by custom columns, and in turn, removed the now superfluous CustomScoreColumn class. - Mods now allocate or deallocate data for custom columns using the "addcustomdata" and "removecustomdata" commands respectively in the GameInfo block of a MAPINFO lump. - Removed the column flag DONTRESETONLEVELCHANGE. changeset: 11076:53cb82a6576b user: Adam Kaminski <[email protected]> Added server commands that synchronize custom column values with clients.
changeset: 11077:cf76db80aab6 user: Adam Kaminski <[email protected]> Added code that initializes the scoreboard and parses all loaded SCORINFO lumps. changeset: 11078:ee8ffcbcb07e user: Adam Kaminski <[email protected]> Refactored the ColumnValue class again. changeset: 11079:f0995219adee user: Adam Kaminski <[email protected]> Moved some ColumnValue member functions around. changeset: 11080:9a47035d9a7a user: Adam Kaminski <[email protected]> Make sure that a ColumnValue's data type is set to COLUMNDATA_UNKNOWN when it's initialized by another ColumnValue object. changeset: 11081:9f82c6b88d99 user: Adam Kaminski <[email protected]> Renamed SCOREBOARD_ResetCustomColumnsForPlayer to PLAYER_ResetCustomValues, and moved it to "p_interaction.cpp". changeset: 11082:c01ee04a99bd user: Adam Kaminski <[email protected]> SCOREBOARD_Reset is now only called at the start of a new game, or after the client receives the last snapshot from the server. Also, everyone's custom values are now reset only at the start of a new game, not when the level changes. changeset: 11083:6481c1436f9c user: Adam Kaminski <[email protected]> Renamed COLUMNALIGN_e enums to HORIZALIGN_e to represent horizontal alignments, and added VERTALIGN_e enums to represent vertical alignments. changeset: 11084:93b27bfd15bb user: Adam Kaminski <[email protected]> Renamed COLUMNDATA_e enums to DATATYPE_e. changeset: 11085:737ca7804490 user: Adam Kaminski <[email protected]> Renamed ColumnValue to PlayerValue, and renamed the new ACS functions to something more generic. changeset: 11086:e437f9ee600b user: Adam Kaminski <[email protected]> Instead of checking for any custom columns without data once all SCORINFO lumps have been parsed, just throw an error when a custom column is created without its data being defined first. changeset: 11087:3622eb9e1a88 user: Adam Kaminski <[email protected]> Removed the ACS function "IsColumnUsable". 
changeset: 11088:b16e174e1c9b user: Adam Kaminski <[email protected]> Added the ScoreMargin class, and a base class for all margin SCORINFO commands. changeset: 11089:2c5b9e8689e6 user: Adam Kaminski <[email protected]> Added the DrawBaseCommand abstract class. changeset: 11090:cb4f85d1e332 user: Adam Kaminski <[email protected]> Added the DrawString margin command. changeset: 11091:900cd2bbad69 user: Adam Kaminski <[email protected]> Added the DrawColor margin command. changeset: 11092:f0e7bb4a499c user: Adam Kaminski <[email protected]> Added the DrawTexture margin command. This finally exposes the "logo" property from TEAMINFO that was never used in Skulltag/Zandronum before. changeset: 11093:a7edb6e5a91e user: Adam Kaminski <[email protected]> Added code to parse margin blocks and commands in SCORINFO. changeset: 11094:ca76ad685fa2 user: Adam Kaminski <[email protected]> Added the FlowControlBaseCommand abstract class. changeset: 11095:d48928bdb28c user: Adam Kaminski <[email protected]> Added the margin commands: IfOnlineGame, IfIntermission, IfPlayersOnTeams, IfPlayersHaveLives, and IfShouldShowRank. changeset: 11096:e2157a09bd4f user: Adam Kaminski <[email protected]> Added the IfGameMode margin command. changeset: 11097:7e9e724aed08 user: Adam Kaminski <[email protected]> Added the margin commands: IfGameType and IfEarnType. changeset: 11098:c4ebe3100472 user: Adam Kaminski <[email protected]> Added the IfCVar margin command. changeset: 11099:97612c744c04 user: Adam Kaminski <[email protected]> Added a virtual function to DrawBaseCommand that returns the height of the contents. changeset: 11100:80d635cd0b1b user: Adam Kaminski <[email protected]> Added the DrawMultiLineBlock margin command. changeset: 11101:16690ce4cac8 user: Adam Kaminski <[email protected]> Added the "bottompadding" parameter to margin commands that draw something. 
changeset: 11102:8c7f7032b428 user: Adam Kaminski <[email protected]> Added a "cvar" special value to DrawString, used to print a specified CVar's value. This replaces the "hostname" special value. changeset: 11103:a2172a3c7532 user: Adam Kaminski <[email protected]> Added "nextlevelname" and "nextlevellump" special values to DrawString. changeset: 11104:943f78d8c205 user: Adam Kaminski <[email protected]> Fixed the parser not scanning negative values for the x and y parameters of a margin command. changeset: 11105:978d1505cf4e user: Adam Kaminski <[email protected]> Fixed PlayerValue::TransferValue not doing anything when the other PlayerValue's data type was unknown. changeset: 11106:5a516c00defb user: Adam Kaminski <[email protected]> Added a missing break statement. changeset: 11107:2d05133b9604 user: Adam Kaminski <[email protected]> Changed these error messages to be more informative. changeset: 11108:e9fe5b5867a1 user: Adam Kaminski <[email protected]> Added an extra check before drawing team headers on the scoreboard. changeset: 11109:3060ac161224 user: Adam Kaminski <[email protected]> Added a missing check. changeset: 11110:df1139d51830 user: Adam Kaminski <[email protected]> Fixed the x-offset not being accounted for when determining the largest possible width of a DrawString command. changeset: 11111:d28d47ae1e8c user: Adam Kaminski <[email protected]> Fixed the CountryFlag column still drawing when it wasn't supposed to. changeset: 11112:dbbced950c1a user: Adam Kaminski <[email protected]> Fixed the scoreboard not appearing when its width or height was bigger than the screen's. changeset: 11113:23bdc671d97e user: Adam Kaminski <[email protected]> In case the scoreboard is too wide to fit the screen, try shrinking the columns until it fits, or they're as small as they can possibly be. changeset: 11114:c6fc0c793440 user: Adam Kaminski <[email protected]> Added a CommandBlock class that maintains a block of margin commands.
This helps reduce the complexity of the scoreboard margin code. changeset: 11115:f573cb1008d4 user: Adam Kaminski <[email protected]> The global parameters map now specifies a set of margin commands that a parameter may be used in, instead of only one. Also, renamed a few other constants. changeset: 11116:11d1e2c6b956 user: Adam Kaminski <[email protected]> It's not necessary to add a command's y-offset in these spots. changeset: 11117:0a2fdf70e241 user: Adam Kaminski <[email protected]> - Refactored the scoreboard margin code again, and added an ElementBaseCommand class which the DrawBaseCommand and MultiLineBlock (formerly DrawMultiLineBlock) classes inherit from. - It's now acceptable to nest MultiLineBlock commands inside each other. changeset: 11118:3f6980d1bb60 user: Adam Kaminski <[email protected]> Added the RowBlock margin command, and a right padding parameter to all element margin commands. changeset: 11119:037ddb49cdf2 user: Adam Kaminski <[email protected]> Fixed the color box for a DrawColor command appearing outside the margin's boundaries, especially if the x-offset is non-zero. changeset: 11120:339a21c5cec5 user: Adam Kaminski <[email protected]> Have the lives score column return zero when the player's a (dead) spectator. changeset: 11121:8ed91d0f57ff user: Adam Kaminski <[email protected]> Fixed multiple lines of a DrawString command not respecting the alignment of the MultiLineBlock or RowBlock command it belonged in. changeset: 11122:320bbd525001 user: Adam Kaminski <[email protected]> Refactored SCOREBOARD_GetLeftToLimit, and added a helper function to remove duplicated code. Originally, the function only used the team with the highest point or win count by checking if the game mode was teamgame or teampossession, or teamlms respectively. However, changing the function to check if the current game mode has GMF_PLAYERSONTEAMS enabled should keep the same behaviour, while making it versatile (e.g. 
adding or removing GMF_PLAYERSONTEAMS via the GAMEMODE lump). changeset: 11123:4927ef91f56f user: Adam Kaminski <[email protected]> - Added the SCORINFO lump that goes into zandronum.pk3, and removed the old scoreboard code that's no longer needed. - The scoreboard now refreshes once per tic while being drawn. changeset: 11124:13436887a785 user: Adam Kaminski <[email protected]> Moved some scoreboard functions around, updated some comment headers, and deleted superfluous #includes in scoreboard.cpp. changeset: 11125:d619f041cf28 user: Adam Kaminski <[email protected]> Added a menu for scoreboard options. changeset: 11126:cafc325b7402 user: Adam Kaminski <[email protected]> Fixed some misleading comments. changeset: 11127:a70ed605b26a user: Adam Kaminski <[email protected]> Fixed player rows appearing as "dead" while on the intermission screen. changeset: 11128:3a83305c5326 user: Adam Kaminski <[email protected]> Removed an unused local variable. changeset: 11129:b060cc807709 user: Adam Kaminski <[email protected]> Backed out changeset: 4f7d2f8c0d55. I also had to add "#include <list>" into "scoreboard.h" to resolve any compile errors this backout caused. changeset: 11130:3c51e91a5be7 user: Adam Kaminski <[email protected]> Backed out changeset: 928166483c80 changeset: 11131:a66004f66510 user: Adam Kaminski <[email protected]> Fixed some clang compile errors. Kudos to Dusk for doing this. changeset: 11132:9f908b9c754c user: Adam Kaminski <[email protected]> Fixed some gcc compile errors. changeset: 11133:154a49662bbd user: Adam Kaminski <[email protected]> Fixed another gcc compile error (the class "CustomPlayerData" and gameinfo_t::CustomPlayerData were using the same name). changeset: 11134:e55f673ae43b user: Adam Kaminski <[email protected]> Removed an extra line break. changeset: 11135:247a792c5f96 user: Adam Kaminski <[email protected]> Added a missing break statement.
changeset: 11136:762042b974e0 user: Adam Kaminski <[email protected]> Added break statements to last cases in a few switches, for consistency and safeguarding. changeset: 11137:9d6120d3623d user: Adam Kaminski <[email protected]> Fixed a bunch of GCC warnings. changeset: 11138:d00c4c822a9b user: Adam Kaminski <[email protected]> Use PlayerValue instead of UCVarValue to save the to-be-compared value in an IfCVarFlowControl margin command. changeset: 11139:a99a8e26e887 user: Adam Kaminski <[email protected]> Added a virtual destructor to ScoreMargin::BaseCommand (fixes another GCC warning). changeset: 11140:fa1636889502 user: Torr Samaho Merged with main repo changeset: 11141:ad2dd2ce2158 user: Adam Kaminski <[email protected]> Made some tweaks to the SCORINFO lump in zandronum.pk3. changeset: 11142:54e840bac6bb user: Adam Kaminski <[email protected]> Removed a 1 pixel offset on all bot skill icon graphics. changeset: 11143:eea467db8171 user: Adam Kaminski <[email protected]> Added an entry to zandronum-history.txt that highlights the changes made to the scoreboard code and the addition of the SCORINFO lump. changeset: 11144:620d3c4f9908 user: Adam Kaminski <[email protected]> Added another important entry to zandronum-history.txt. changeset: 11145:78b91493de7a user: Adam Kaminski <[email protected]> All bots are now marked as ready to go on the intermission screen. changeset: 11146:34a02869838d user: Adam Kaminski <[email protected]> Added a "columnpadding" command for scoreboard blocks in SCORINFO, which adds extra padding to both sides of all columns. This is particularly meant so that a column's contents don't appear on the edges of the column itself. changeset: 11147:f4bfff69fb69 user: Adam Kaminski <[email protected]> Added a "gapbetweencolumns" command for composite columns in SCORINFO, which controls how much space to leave between the sub-columns.
changeset: 11148:b9e9fdfb42e7 user: Adam Kaminski <[email protected]> Tucked the "playercolor" column underneath the "player" composite column, and moved the "handicap", "botskillicon", and "artifacticon" columns out of the same composite column. Also made the handicap column not show anything for spectators. changeset: 11149:7f39941ca1e0 user: Adam Kaminski <[email protected]> Added short names to the damage and vote columns. changeset: 11150:7e1569824820 user: Adam Kaminski <[email protected]> The sizing of a sub-column is now always used when refreshing or drawing a composite column even when there's no value to draw, unless the sub-column has the DISABLEIFEMPTY flag enabled. changeset: 11151:f61a44d1dee6 user: Adam Kaminski <[email protected]> Moved the "readytogoicon" and "statusicon" columns into their own composite column. changeset: 11152:15a46ebf6314 user: Adam Kaminski <[email protected]> Added missing new lines at EOF in gamemode.txt and scorinfo.txt. changeset: 11153:5ee672ddce21 user: Adam Kaminski <[email protected]> Fixed a CLIENT_PrintWarning call that had fewer arguments than '%' conversions. changeset: 11154:e5fe1929cc85 user: MajorCooke <[email protected]> - Fixed: The mask didn't incorporate RGF_MISSILES, causing it to fail. changeset: 11155:507bc79531c3 user: Adam Kaminski <[email protected]> Made the spectator header on the scoreboard smaller and simpler. changeset: 11156:fee2f42abb1d user: Adam Kaminski <[email protected]> Renamed the "ScoreIcon" native column type to "PlayerIcon", and added the same column to the scoreboard. changeset: 11157:72f16800db57 user: Adam Kaminski <[email protected]> Check that a player's current class has a valid ScoreIcon before trying to use it inside a "PlayerIcon" column. changeset: 11158:8cfc81760883 user: Adam Kaminski <[email protected]> Changed the value of COLUMNTYPE_UNKNOWN to -1. It's now also acceptable to use "unknown" as a name for data columns.
changeset: 11159:33c4c09c6598 user: Adam Kaminski <[email protected]> Fixed a bunch of clang warnings. changeset: 11160:e174ad2b79e3 user: Adam Kaminski <[email protected]> - Fixed a clang warning that complained about undefined members in the PlayerValue::Trait template class. - Moved PlayerValue::Trait definition to the private scope of PlayerValue. - Converted PlayerValue::ModifyValue into a template member function. changeset: 11161:a5156e1a5726 user: Adam Kaminski <[email protected]> A Linux server's network socket now sleeps for one tic instead of one whole second. This fixes server commands not being sent out normally when there's at least one client connected but spectating. Based on a patch by geNia (addresses 4068). changeset: 11162:a26c1c2a9d01 user: Adam Kaminski <[email protected]> Fixed: clients would sometimes see themselves having the lag icon over their head indefinitely after a level changed (fixes another regression caused by c770f5a41612). changeset: 11163:6eb245c8b4e2 user: Adam Kaminski <[email protected]> Refactored the version string on the console to use FString and escaped color codes instead of a C-style char array. changeset: 11164:6aaff98a87e0 user: Adam Kaminski <[email protected]> Clients are now informed the moment when they're (un)muted on the server. Based on a patch by Janko Knezevic. changeset: 11165:285afacc0516 user: Adam Kaminski <[email protected]> Slightly refactored SERVER_PrintMutedMessageToPlayer. changeset: 11166:a0a5e56327df user: Adam Kaminski <[email protected]> Fixed the message that's printed in the console when ignoring a player from showing more than one timestamp. changeset: 11167:c078a53da744 user: Adam Kaminski <[email protected]> Added an optional parameter to the "ignore" and "ignore_idx" CCMDs that allows servers to specify a reason for muting a client. Based on a patch by Janko Knezevic.
changeset: 11168:3f7e5a1d0338 user: Adam Kaminski <[email protected]> Adapted PLAYER_IsUsingWeaponSkin and PLAYER_ApplySkinScaleToBody to also check if a weapon's PreferredSkin actually exists. Otherwise, Zandronum might think a player is using a weapon skin when they aren't. changeset: 11169:dc11c9fb647b user: Adam Kaminski <[email protected]> Fixed: a player's skin would get scaled incorrectly if their class's current sprite was using "####" or "----" before respawning. changeset: 11170:b6f9c4868c93 user: Adam Kaminski <[email protected]> Added a check to prevent the player's skin sprite from showing if their class's current sprite is TNT1A0 (and its original sprite is also TNT1A0). changeset: 11171:af40199ccb04 user: Adam Kaminski <[email protected]> Fixed: stealth monsters didn't reveal themselves in online games when they took damage or died (fixes 4132). changeset: 11172:7b6fce50c13d user: Adam Kaminski <[email protected]> Moved checks that reset world and global ACS variables (if ZACOMPAT_RESET_GLOBALVARS_ON_MAPRESET is on) into GAME_ResetScripts to reduce duplicated code. changeset: 11173:0a52b736ae20 user: Adam Kaminski <[email protected]> Consolidated some duplicated code used to clean chat strings. changeset: 11174:c115af828615 user: Adam Kaminski <[email protected]> Added commands to the message options menu to clear the chat prefix/suffix CVars. changeset: 11175:f0de15ad1df0 user: Adam Kaminski <[email protected]> Modified V_ColorizeString to strictly convert non-escaped color codes into escaped ones and nothing else. The original implementation was a copy/paste of the strbin function from ZDoom that did more than just colorize strings. This particularly fixes backslashes not being included in chat messages correctly. changeset: 11176:edddc3b1abfb user: Adam Kaminski <[email protected]> Added some static casts to ensure the signedness of the operands in a few expressions is consistent.
changeset: 11177:89d331912976 user: auratoostronk Fixed: flags in CTF could be captured after the round was over. [auratoostronk] changeset: 11178:decedd5f1e1c user: Adam Kaminski <[email protected]> Added the NOSPECTATORS flag to the "playericon" column. changeset: 11179:87fe4210ee29 user: Adam Kaminski <[email protected]> It's now possible to use negative values for the "ClipRectHeight" property in SCORINFO, which means the column's clip height will be equal to the row's height minus the property's value. changeset: 11180:89bccf7127ba user: TDRR <[email protected]> Added ACS functions that provide lump reading capabilities: "LumpOpen", "LumpReadChar", "LumpReadShort", "LumpReadInt", "LumpReadString", and "LumpSize". ``` Happy testing everyone!
true
true
true
null
2024-10-12 00:00:00
2023-07-09 00:00:00
null
null
zandronum.com
Zandronum 3.2-alpha-230709-1914
null
null
3,248,482
http://youtu.be/1Hf5hbVxuJw
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,534,216
http://myvideos.stanford.edu/player/slplayer.aspx?coll=ea60314a-53b3-4be2-8552-dcf190ca0c0b&co=7e184a08-5571-49d9-9dec-ea490059bc04&o=true
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,743,903
http://www.google.com/now
Google Assistant, your own personal Google
null
## Create your personalized smart home with Google Home. Discover how ## Hey Google, text Mom I’ll be there in 10 minutes ## Hey Google, set the temperature to 75 degrees ## Google Assistant is built to keep your information private, safe and secure. When you use Google Assistant, you trust us with your data and it's our responsibility to protect and respect it. Privacy is personal. That’s why we build simple privacy controls to help you choose what’s right for you. Explore this page to learn more about how Google Assistant works, your built-in privacy controls, answers to common questions, and more. ## Featured Partners Google Assistant works with your favorite mobile apps on all Android phones, with more partners on the way. Try it out for yourself.
true
true
true
Meet your Google Assistant. Ask it questions. Tell it to do things. It's your own personal Google, always ready to help whenever you need it.
2024-10-12 00:00:00
2000-01-01 00:00:00
https://lh3.googleusercontent.com/8Bz5DGw9pYxvG1UE_GPpe7mQHk5gvBTHBmn6NspFPbLjWxoAxJgHBrXe0PZ5CmyXIDhjawC3q8PloUhIgNNR9_nDPaVmDX8D65HkdeO767xDEnUCSOBS
website
google.com
Google Assistant
null
null
12,148,660
http://www.smithsonianmag.com/innovation/are-we-close-having-blood-test-detects-cancer-180959867/?no-ist
Are We Close to Having a Blood Test That Detects Cancer?
Smithsonian Magazine; Randy Rieland
# Are We Close to Having a Blood Test That Detects Cancer? New research into “liquid biopsies” is promising, but there’s still not proof they can find cancer in a healthy person We’re about seven months into the “Cancer Moonshot” mission, the federal project with the ambitious goal of doubling the rate of progress of cancer research. It’s President Barack Obama’s reboot of the “War on Cancer,” which despite more than $100 billion in government spending since the 1970s didn’t really make a big difference in the overall cancer death rate in the U.S. While “Cancer Moonshot” may seem simply a new name for the same daunting challenge, it actually has a much better chance of success. Not only do scientists now have a clearer understanding of the complexities of the disease and a realization that there is no one cure for all cancers, but they also have the benefit of supercomputers that can analyze an enormous amount of cancer research and the mapping of the human genome. The latter has opened up promising avenues of treatment, such as new bio-technology that creates immune cells to fight cancer, and more precise treatments based on a patient’s DNA. At the same time, real progress is being made on another key front—the ability to detect traces of cancer in a person’s body without needing to do something as invasive as a conventional biopsy. The process, known as a liquid biopsy, involves only drawing blood from a patient. **Floating cancer DNA** What tips off the presence of cancer are fragments of mutated DNA released by tumor cells into a person’s bloodstream. These can be found by scanning the blood through a gene-sequencing machine. Since early detection has long been considered a key to surviving cancer, scientists are hopeful a blood test that lets doctors know cancer is present before it begins to spread could make a big difference in the number of people who beat the disease. It could also become a huge business. 
Some analysts have estimated that liquid biopsies could soon become a $10 billion-a-year industry. This, not surprisingly, has helped spark a flurry of research on the technology, and some positive results have recently been reported. Earlier this month, a team of researchers from Johns Hopkins University and the Walter and Eliza Hall Institute of Medical Research in Australia published a study suggesting that they could pretty accurately predict if a colon cancer patient would have a recurrence of the disease. After doing a series of liquid biopsies on 230 patients over two years, they found that 79 percent of the patients whose blood still had traces of tumor DNA after surgery suffered a relapse. These were all patients with stage 2 colon cancer that had not yet metastasized. The test wasn’t perfect. Almost 10 percent of the patients who didn’t appear to have tumor DNA in their blood had their cancers come back. Still, the scientists said the liquid biopsies could provide a strong indication as to whether a patient was cured through surgery or if he or she also needed to be treated with chemotherapy to take care of cancer traces that remained. Last month, at the American Society of Clinical Oncology conference in Chicago, researchers presented the largest study yet of liquid biopsies, reporting that blood tests to detect cancer mutations largely agreed with what was found through conventional biopsies. In that case, the scientists analyzed more than 15,000 liquid biopsies that had been performed by Guardant Health, a Silicon Valley startup. Those blood samples came from patients with several different types of cancer, including lung, breast and colorectal. For about 400 of those patients, there were also tumor tissue samples. When the blood samples and tissue samples were compared, the researchers found the same cancer mutations in both more than 90 percent of the time. Those impressive results were for a gene mutation associated with tumor growth. 
There was less agreement between the two types of biopsies, however, when the scientists analyzed mutations that indicate potential resistance to certain drugs. Also, for about 15 percent of the patients overall, the liquid biopsies didn’t show any evidence of the tumor. **Reality check** This recent research does boost the prospects for liquid biopsies, but the tests still have a long way to go before they’re considered reliable enough to replace more invasive biopsies. So far, studies have involved samples from patients who were already known to have cancer. That suggests liquid biopsies could be useful in monitoring tumors to determine if a treatment has been effective. But the evidence is less convincing that they can be trusted to find cancer on their own. Medical professionals worry about false negatives, in cases where some cancers may not secrete the DNA fragments early in the development of the disease, and false positives, where a test may pick up evidence of cancer in a very early stage that could be eliminated by the body’s immune system. Those patients might end up going through an unnecessary round of invasive tests. The overall concern is that patients may begin viewing liquid biopsies as a relatively painless screening test for all cancers, and will start requesting them to avoid unpleasant procedures, such as colonoscopies. “I would argue that implementing an unproven screening program could violate the medical affirmation to 'first, do no harm,'" wrote Richard Hoffman in the *Health News Review*. Hoffman, director of the Division of General Internal Medicine at the University of Iowa Carver College of Medicine, argues that more evidence is also needed to show that early detection will actually increase a patient’s lifespan, so that they’re not submitted to the physical and financial demands of treatment years before it’s necessary. 
Last fall, the FDA sent a warning letter to a company called Pathway Genomics that was marketing blood tests, costing between $300 and $700, as an early cancer detection tool. The federal agency said it had found no clinical evidence that a blood test could serve as a valid screen for cancer. Nonetheless, a number of companies are banking on liquid biopsies becoming a boom business. Earlier this year, Guardant Health, the firm involved in the study presented in Chicago in June, announced that it had raised $100 million in funding, while another, Exosome Diagnostic, said it had raised $60 million. Around the same time, Illumina, the world’s largest maker of gene-sequencing machines, raised about $100 million to start its own liquid biopsy company. Among the investors are Microsoft co-founder Bill Gates and Amazon founder Jeff Bezos. To get a sense of their expectations, consider that they’ve named it Grail.
true
true
true
New research into "liquid biopsies" is promising, but there's still not proof they can find cancer in a healthy person
2024-10-12 00:00:00
2016-07-22 00:00:00
https://th-thumbnailer.c…uid-biopsies.jpg
article
smithsonianmag.com
Smithsonian Magazine
null
null
21,346,318
https://cmpwn.com/@sir/103018233505800721
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
29,531,959
https://rentahitman.com/
New T-Shirts Have Arrived - Come Back Later Today To Get Yours!
Rent-A-Hitman Your Point; Click Solution
Listen up, my friend, and lemme tell ya why **RENT-A-HITMAN** is the cream of the crop in this industry. You see, this ain't no ordinary business; it's a family, and **RENT-A-HITMAN** knows how to treat its clients right. They got connections and influence that run deep, and that's why they stand head and shoulders above the rest. First and foremost, loyalty is the name of the game with **RENT-A-HITMAN.** Once you're in, you're in for life. They got a network of dedicated employees who'd take a bullet for the boss. You mess with one of 'em, you mess with the whole crew. When it comes to making deals, **RENT-A-HITMAN's** got a reputation for being fair, but don't mistake that for weakness. They know how to get what they want, and they ain't afraid to flex their muscle if necessary. You cross 'em, and you'll find yourself swimmin' with the fishes at a resort in Central America taking shots of Flor de Cana. In this game, it's all about respect, and **RENT-A-HITMAN** commands respect from everyone in the industry. They've earned their place at the top, and they ain't lettin' go anytime soon. So, if you're lookin' for the best, look no further than **RENT-A-HITMAN**. They're the real deal, capisce? **RENT-A-HITMAN **responsibly restricts individuals under 18 from using their services, except with explicit written consent from a parent or guardian. Stringent security measures, following **HIPPA** regulations, may be required, possibly involving notarized pictures or creative sketches. Any changes to the regulations will only occur on Thursdays, per **CDC** guidelines. **FIVE BULLET POINTS TO PONDER** 1. Hiring a "mechanic" is illegal and unethical, and supporting such activities can have severe legal consequences. **WE ARE PROBLEM RESOLUTION SPECIALISTS.** 2. Using the dark web for illegal purposes can expose you to cyber threats and leave you vulnerable to identity theft or blackmail. **WE ARE 100% HIPPA COMPLIANT** 3. 
Participating in the dark web's illegal activities can ruin your reputation and social standing if discovered and Scout Leaders and PTA's frown upon that. **USE RENTAHITMAN.COM INSTEAD** 4. A "cleaner" may not be trustworthy, and there's a risk they could turn on you or disclose your involvement to law enforcement & that's no fun! **DON'T RISK IT, CALL US TODAY!** 5. Seeking non-violent alternatives to resolve conflicts is a more ethical and responsible approach to addressing personal issues. **ETHICS, WE HAVE SOME.**
true
true
true
We are 100% HIPPA Compliant (Hitman Information Privacy & Protection Act of 1964) CLICK BELOW FOR YOUR FREE CONSULTATION
2024-10-12 00:00:00
2011-01-01 00:00:00
https://img1.wsimg.com/isteam/stock/11254
website
rentahitman.com
New T-Shirts Have Arrived - Come Back Later Today To Get Yours!
null
null
22,753,545
https://en.wikipedia.org/wiki/17776
17776 - Wikipedia
null
*17776* | Author(s) | Jon Bois | Website | What football will look like in the future | Current status/schedule | Completed | Launch date | July 5, 2017 | End date | July 15, 2017 | Publisher(s) | SB Nation | Genre(s) | Speculative fiction | *17776* (also known as *What Football Will Look Like in the Future*) is a serialized speculative fiction multimedia narrative by Jon Bois, published online through *SB Nation*. Set in the distant future in which all humans have become immortal and infertile, the series follows three sapient space probes that watch humanity play an evolved form of American football in which games can be played for millennia over distances of thousands of miles. The series debuted on July 5, 2017, and new chapters were published daily until the series concluded with its twenty-fifth chapter on July 15, 2017. Bois began developing *17776* in 2016. Because the story incorporates text, animated GIFs, still images, and videos hosted on YouTube, new tools were developed to allow it to be hosted efficiently on the *SB Nation* website. The work explores themes of consciousness, hope, despair, and why humans play sports. *17776* was well received by critics, who praised it for its innovative use of its medium and for the depth of emotion it evoked. In 2018, the story won a National Magazine Award for Digital Innovation and was longlisted for both the Hugo Awards for Best Novella and Best Graphic Story. It is followed by a sequel series: *20020*, released from September to October 2020, which Bois intends to follow up with a further series entitled *20021*. The sequel series follows a 111-team game of college football on fields spanning 130,000 miles across the United States. ## Premise The story takes place on a future Earth where humans stopped dying, aging, and being born in 2026. All social ills were subsequently eliminated, and technology preventing humans from any injury was developed. 
In the United States, American football evolved to include new rules, including those that allow fields thousands of miles long, hundreds of in-game players, and games millennia long. Over time, computers gained sentience due to constant exposure to broadcast human data. By the year 17776, the space probe *Pioneer 9* (called Nine) has gained sentience and made contact with *Pioneer 10* (called Ten) and the *Jupiter Icy Moons Explorer* (called Juice). As Nine adjusts to a world radically different from that of the 20th century, the three space probes watch multiple football games occurring across the United States: a game using the entirety of Nebraska as a field in which the next point scored wins the game; a game in which players strive to possess every existing football autographed by obscure NFL player Koy Detmer; a game played between the Canadian border and the Mexican border deadlocked for 13,000 years at the bottom of a gorge in Arizona; an NFL regulation game between the Denver Broncos and the Pittsburgh Steelers that changed over 15,000 years into 58 playing teams owning and capitalizing upon portions of the field while the ball is lost; a 500 game that results in the destruction of the Centennial Light; and a game in which the possessing player is attempting to score an automatic win by hiding in his team's end zone for 10,000 years. ## Format *17776* is read by scrolling through pages occupied by large GIF images and colored dialogue text, interspersed with occasional YouTube videos. The story is divided into chapters, which were originally published in daily installments between July 5 and 15, 2017.[1] Much of the GIF and video content of the series uses Google Earth satellite imagery, 3D buildings, and other tools within Google Earth to create animations and visual effects. ## Development Bois wrote and illustrated *17776* for Vox Media's sports news website *SB Nation*, of which he is creative director. 
Aside from *17776*, Bois produces two other recurring, humorous video essay programs for the site: *Pretty Good*, which focuses on unusual sports topics and stories, and *Chart Party*, which focuses on statistics and has an emphasis on Bois' use of visual art in his journalism and storytelling.[2] Bois is also known for the *Breaking Madden* series, in which he attempted unusual scenarios in the *Madden NFL* series of video games.[3] In early 2016, Bois began developing an "anti-sci fi" project as a possible sequel to *The Tim Tebow CFL Chronicles*, an earlier work for *SB Nation*, and set the story in a year far enough in the future that "nobody ever thinks about it." Although he liked the concept and the visuals, he believed the project would not connect with readers and shelved it.[4] Later, he realized that the story needed a centering character; he wrote one in the form of a small-town AM radio talk show host before coming up with the characters of the probes.[5] Development renewed in May 2016, and the project solidified after *SB Nation* published its article "The Future of Football."[4] Bois described it as the biggest project he ever attempted.[6] The series was developed by Graham MacAree, who used a Vox Media tool that creates custom packages from standard article sets to give Bois creative leeway and to accommodate the series' weight on the *SB Nation* website. MacAree found that there were few resources online for achieving the desired effects.[4] ## Themes Bois has stated that he had "conceived [*17776*] to give the reader a good time," asserting that this "was literally the whole point."[4] William Hughes writing for *The A.V. Club* described *17776* as concerned with why humans play sports: "That is, given the massive resources, time, and information at our disposal (not to mention those available to our descendants), why does communal game-playing still hold such an important place in society?" 
He also listed consciousness, hope, and despair as among the work's themes.[7] Beth Elderkin of *io9* described it as "a deep thought experiment into what we consider humanly possible". She also felt that Ten and Juice take on the role of angel and devil, and she suggested the two may be unreliable narrators.[8] Ian Crouch of *The New Yorker* felt that the work had a "tonal echo" of Don DeLillo's 1972 novel *End Zone* due to thematic similarities "with the way that the order and logic of football might act as a counterbalance to the chaos of the real world".[3] ## Reception According to the communications director at Vox Media, *17776* garnered over 2.3 million pageviews by July 10.[4] Two days later, it had received more than 2.9 million pageviews.[3] Average engagement time was over nine minutes, and 43 percent of readers finished each installment of the series published by July 7.[4] On July 19, Bois claimed that *17776* received 700,000 unique visitors and 4 million total pageviews, with an average engagement time of 11 minutes.[9] Thu-Huong Ha for *Quartz* described *17776* as "part Italo Calvino, part Peter Heller [author of *The Dog Stars*], with humor seemingly from within the depths of Reddit," saying that the story would appeal to fans of both sports and literature.[1] *Tor.com* described the first chapter as full of tension and felt that receiving answers is a "surprisingly heartbreaking" experience "lessened by a gleeful bouncing immaturity" one would not expect from the characters.[10] Beth Elderkin at *io9* said the series is "akin to *Homestuck*" and described it as "weird, complex, and pretty spectacular".[8] William Hughes writing for *The A.V. Club* felt that *17776* is a "truly innovative piece of work".[7] After reading the first three chapters, Agatha French of the *Los Angeles Times* stated that she was "impressed and excited by the innovation" of what she saw, and that she was intrigued despite not knowing what the work is or is saying. 
She felt the work took full advantage of its online medium and suggested that it "may also be a glimpse into the future of reading on the Internet".[11] Ian Crouch of *The New Yorker* described the series as, "despite its seemingly meagre parts, a thing of startling beauty". Of the chapters published by July 12, he felt "the most striking chapter" to be one that used audio of Verne Lundquist calling the end of a 2013 game between the University of Alabama and Auburn University over a video panning over Earth. He also noted that the series was compared to *Homestuck* and relayed additional comparisons to Thomas Pynchon novels and "a Reddit thread hijacked by robot trolls".[3] The series won the inaugural National Magazine Award for Digital Innovation from the American Society of Magazine Editors; this was the first National Magazine Award nomination and win for *SB Nation*. It was described by the judges as "an extraordinary combination of art, fiction and technology, an online acid trip that had to be experienced to be believed."[12] It was also longlisted for the Hugo Awards for Best Novella and Best Graphic Story in 2018, ultimately finishing in 11th place in both categories.[13][14] ## Sequel series On September 28, 2020, a sequel titled *20020* was launched on *Secret Base*, a branch of *SB Nation*; on October 13, it was revealed to be the first part of a two-part continuation with the second half, *20021*, originally planned for release in the winter or spring of 2021,[15] though later delayed.[16] One chapter of *20020* was released every Monday, Wednesday, and Friday beginning on September 28, 2020, and ending on October 23.[17][18] Both parts of the series are expected to run for twelve chapters.[15] It focuses on a similarly lengthy, interconnected, 111-team competition based on college football.[19] The sentient space probes featured in *17776* return, with Juice serving as the game's designer and commissioner.[20] *20020*'s format largely resembles 
*17776*'s with a more involved use of Google Earth–based YouTube video storytelling interspersed regularly into the narrative.[19] ## References - ^ a b Ha, Thu-Huong (July 8, 2017). "A dazzling new piece of experimental fiction is being serialized on a sports news site". *Quartz*. Archived from the original on November 15, 2017. Retrieved September 7, 2018. - ^ Russell, Lars (August 23, 2017). "SB Nation's Jon Bois shows Seahawks are "Least Volatile" in NFL". *SB Nation*. Archived from the original on September 4, 2017. Retrieved September 4, 2017. - ^ a b c d Crouch, Ian (July 12, 2017). "The Experimental Fiction That Imagines Football-Obsessed Americans in the Extremely Distant Future". *The New Yorker*. Archived from the original on June 13, 2018. Retrieved September 7, 2018. - ^ a b c d e f Funke, Daniel (July 10, 2017). "This SB Nation story has everything: Robots, football and 2.3 million pageviews". *Poynter*. Poynter Institute. Archived from the original on September 4, 2017. Retrieved September 7, 2018. - ^ Bois, Jon (July 24, 2017). "17776: Questions and answers". *SB Nation*. Archived from the original on September 4, 2017. Retrieved September 7, 2018. - ^ Bois, Jon [@jon_bois] (July 6, 2017). "today is day one of the biggest project i've ever tried. it is called 17776: sbnation.com/a/17776-football" (Tweet). Archived from the original on July 9, 2017. Retrieved July 25, 2017 – via Twitter. - ^ a b Hughes, William (July 6, 2017). "The future of football is post-human despair (and fascinating sports meta-fiction)". *The A.V. Club*. Archived from the original on August 9, 2017. Retrieved September 7, 2018. - ^ a b Elderkin, Beth (July 9, 2017). "Sports Site Dives Into Scifi with Series About the Future of Football". *io9*. Archived from the original on September 17, 2017. Retrieved September 7, 2018. - ^ Bois, Jon [@jon_bois] (July 19, 2017). 
"over the last two weeks, 17776 got four million pageviews and 700,000 unique visitors. people stuck around for an average of 11 minutes" (Tweet). Archived from the original on October 6, 2018. Retrieved July 25, 2017 – via Twitter. - ^ "You Don't Know It Yet, But You're Reading a Hilarious Sci-Fi Short Story". *On Our Radar*. Tor.com. July 6, 2017. Archived from the original on August 11, 2017. Retrieved September 7, 2018. - ^ French, Agatha (July 12, 2017). "Radiant children, the future of football and eau de literary hero". *Los Angeles Times*. Archived from the original on February 2, 2018. Retrieved September 7, 2018. - ^ "New York, the New Yorker Lead Ellie Pack – National Magazine Award 2018 Winners Announced" (Press release). New York: American Society of Magazine Editors. March 13, 2018. Archived from the original on October 19, 2020. Retrieved October 15, 2020. - ^ Adair, Torsten (September 2, 2018). "Hugo Awards, 2018: A Deeper Look Into the Nominations and Voting Data". *The Beat*. Archived from the original on September 4, 2018. Retrieved October 3, 2018. - ^ "2018 Hugo & Related Award Statistics" (PDF). Worldcon. 2018. Archived (PDF) from the original on September 29, 2019. Retrieved November 14, 2019. - ^ a b Bois, Jon [@jon_bois] (October 13, 2020). "PROBLEM: the giant football game in 20020 is way too large, there are 111 teams and 134,000 miles of field, we'll never be able to talk about this entire thing in just 12 parts SOLUTION" (Tweet). Archived from the original on August 10, 2021. Retrieved October 14, 2020 – via Twitter. - ^ Bois, Jon (April 22, 2021). "programming notes". *r/Jon_Bois*. Reddit. Archived from the original on April 23, 2021. Retrieved April 28, 2021. - ^ MacAree, Graham; Bois, Jon (September 28, 2020). "20020 Open Thread". *SB Nation*. Archived from the original on September 28, 2020. Retrieved September 28, 2020. - ^ Dunn, Thom (September 30, 2020). 
"SB Nation has launched a new sequel to '17776: What Football Will Look Like In The Future'". *Boing Boing*. Archived from the original on October 2, 2020. Retrieved October 2, 2020. - ^ a b Huckins, Grace (October 23, 2020). "18,000 Years From Now, People Will Still Play Football". *Wired*. Archived from the original on January 27, 2021. Retrieved February 4, 2021. - ^ Cutler, Molly (November 11, 2020). "The surprising poignancy of futuristic football: Jon Bois' '17776' and '20020'". *The Daily Princetonian*. Archived from the original on December 1, 2020. Retrieved February 4, 2021. ## Further reading - Silcox, Nicholas R. (May 2018). *Making Space in the Anthropocene: 17776, (Un)Worlding, and Speculative Fiction* (MA). New Brunswick–Piscataway, NJ: Rutgers University. doi:10.7282/T37H1NXS. ## Categories - 2017 works - American football mass media - American speculative fiction works - Fiction about artificial intelligence - Fiction about immortality - Fiction set in the 7th millennium or beyond - Multimedia works - SB Nation - Science fiction comedy - Sports fiction - Web fiction - Works set in outer space - Works set in the United States
true
true
true
null
2024-10-12 00:00:00
2017-07-10 00:00:00
https://upload.wikimedia…title_screen.png
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
4,349,515
http://inhabitat.com/new-study-suggests-pacific-ocean-is-polluted-with-coffee/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null