id | url | title | author | markdown | downloaded | meta_extracted | parsed | description | filedate | date | image | pagetype | hostname | sitename | tags | categories
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4,720,758 |
https://github.com/elcuervo/minuteman
|
GitHub - elcuervo/minuteman: Fast analytics using Redis
|
Elcuervo
|
Minutemen were members of well-prepared militia companies of select men from the American colonial partisan militia during the American Revolutionary War.
They provided a highly mobile, rapidly deployed force that allowed the colonies to respond immediately to war threats, hence the name.
`gem install minuteman`
Configuration exists within the `config`
block:
```
Minuteman.configure do |config|
# You need to use Redic to define a new Redis connection
config.redis = Redic.new("redis://127.0.0.1:6379/1")
# The prefix affects operations
config.prefix = "Tomato"
# The patterns are what Minuteman uses for the tracking/counting and the
# different analyzers
config.patterns = {
dia: -> (time) { time.strftime("%Y-%m-%d") }
}
end
```
Tracking is the most basic scenario for Minuteman:
```
# This will create the "landing:new" event in all the defined patterns and since
# there is no user here it will create an anonymous one.
# This user only exists in the Minuteman context.
user = Minuteman.track("landing:new")
# The id is an internal representation, not useful to you
user.id # => "1"
# This is the unique id. With this you can have an identifier for the user
user.uid # => "787c8770-0ac2-4654-9fa4-e57d152fa341"
# You can use the `user` to keep tracking things:
user.track("register:page")
# Or use it as an argument
Minuteman.track("help:page", user)
# Or track several users at once
Minuteman.track("help:page", [user1, user2, user3])
# By default all the tracking and counting events are triggered with `Time.now.utc`
# but you can change that as well:
Minuteman.track("setup:account", user, Time.new(2010, 2, 10))
```
There is a powerful engine behind all the operations which is Redis + Lua <3
```
# The analysis of information relies on `Minuteman.patterns` and if you don't
# change it you'll get access to `year`, `month`, `day`, `hour`, `minute`.
# To get information about `register:page` for today:
Minuteman.analyze("register:page").day
# You can always pass a `Time` instance to set the time you need information for.
Minuteman.analyze("register:page").day(Time.new(2004, 2, 12))
# You also have a shorthand for analysis:
register_page_month = Minuteman("register:page").month
# And the power of Minuteman lies in the operations you can do with that.
# Counting the amount:
register_page_month.count # => 10
# Or knowing if a user is included in that set:
register_page_month.include?(User[42]) # => true
# But the most important part is the ability to do bit operations on that:
# You can combine sets using bitwise AND(`&`), OR(`|`), NOT(`~`, `-`) and XOR(`^`).
# Also you can use plus(`+`) and minus(`-`) operations.
# In this example we'll get all the users that accessed our site via a promo
# invite but didn't buy anything
(
Minuteman("promo:email").day & Minuteman("promo:google").day
) - Minuteman("buy:success").day
```
Since Minuteman 2.0 you can also use counters.
```
# Counting works in a very similar way to tracking but with some important
# differences. Unlike trackings, countings are not idempotent and do not provide
# operations between sets... you can use plain Ruby for that.
# This will add 1 to the `hits:page` counter:
Minuteman.add("hits:page")
# You can also pass a `Time` instance to define when this tracking occurred:
Minuteman.add("hits:page", Time.new(2012, 2, 1))
# And you can also pass a user to count the times that given user did that
# event
Minuteman.add("hits:page", Time.new(2012, 2, 1), user)
# You can access counting information similar to tracking:
Minuteman.count("hits:page").day.count # => 201
# Or with a shorthand:
Counterman("hits:page").day.count # => 201
```
Minuteman 2.0 adds the concept of users, which can be anonymous or have a relation with your own database.
```
# This will create an anonymous user
user = Minuteman::User.create
# Users are just a part of Minuteman and do not interfere with your own users.
# They do have some properties like a unique identifier you can use to find it
# in the future
user.uid # => "787c8770-0ac2-4654-9fa4-e57d152fa341"
# You can use that unique identifier as a key in a cookie to see what your
# users do when no one is looking.
# User lookup works like this:
Minuteman::User['787c8770-0ac2-4654-9fa4-e57d152fa341']
# But since the point is to be able to get tied to your data you can promote a
# user, from anonymous to "real"
user.promote(123)
# Lookups also work with promoted ids
Minuteman::User["123"]
# Having a user you can do all the same operations minus the hassle.
# Like tracking:
user.track("user:login")
# or adding:
user.add("failed:login")
# or counting
user.count("failed:login").month.count # => 23
# But also the counted events go to the big picture
Counterman("failed:login").month.count # => 512
```
| true | true | true |
Fast analytics using Redis. Contribute to elcuervo/minuteman development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2012-10-30 00:00:00
|
https://opengraph.githubassets.com/dd30c50352f55d5c92f25ece8cdf067411d217aeb0cf1ad306c5b7a1add5a0bd/elcuervo/minuteman
|
object
|
github.com
|
GitHub
| null | null |
19,990,891 |
https://www.toptal.com/typescript/dependency-injection-discord-bot-tutorial
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,955,181 |
http://blog.databigbang.com/scraping-for-semi-automatic-market-research/
|
Web Scraping for Semi-automatic Market Research
|
Sebastian Wain
|
It is easy to web scrape Microsoft TechNet Forums (look at the XML output here: http://social.technet.microsoft.com/Forums/en-US/mdopappv/threads?outputAs=xml) and normalize the resulting information to have a better idea of each thread’s rank based on views and initial publication date. Knowing how issues are ranked can help a company choose what to focus on.
This code was used to scrape Microsoft TechNet’s forums. In the example below we scraped the App-V forum, since it is one of the application virtualization market’s leaders along with VMware ThinApp, and Symantec Workspace Virtualization.
These are the top ten threads for the App-V forum:
- “Exception has been thrown by the target of an invocation”
- Office 2010 KMS activation Error: 0xC004F074
- App-V 5 Hotfix 1
- Outlook 2010 Search Not Working
- Java 1.6 update 17 for Kronos (webapp)
- Word 2010 There was a problem sending the command to the program
- Utility to quickly install/remove App-V packages
- SAP GUI 7.1
- The dreaded: “The Application Virtualization Client could not launch the application”
- Sequencing Chrome with 4.6 SP1 on Windows 7 x64
The results show how frequently customers have issues with virtualizing Microsoft Office, Key Management Services (KMS), SAP, and Java. App-V competitors like Symantec Workspace Virtualization and VMWare ThinApp have similar problems. Researching markets this way gives you a good idea of areas where you can contribute solutions.
The scraper stores all the information in a SQLite database. The database can be exported using the *csv_App-V.py* script to a UTF-8 CSV file. We imported the file with Microsoft Excel and then normalized the ranking of the threads. To normalize it we divided the number of views by the age of the thread, so threads with more views per day rank higher. Again, the scraper can be used on any Microsoft forum on Social TechNet. Try it out on your favorite forum.
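That normalization step is simple enough to show in a few lines. The sketch below is not part of the original scripts; it ranks threads by views per day from the exported CSV, and the column names `title`, `views`, and `created` are assumptions that should be adjusted to match the actual App-V.csv export:

```
import csv
from datetime import datetime

def views_per_day(row, today):
    # Age in days since the thread was created; at least 1 to avoid dividing by zero.
    created = datetime.strptime(row["created"], "%Y-%m-%d")
    age_days = max((today - created).days, 1)
    return int(row["views"]) / age_days

today = datetime.utcnow()
with open("App-V.csv", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Threads with more views per day rank higher.
rows.sort(key=lambda row: views_per_day(row, today), reverse=True)
for row in rows[:10]:
    print(f'{views_per_day(row, today):8.2f}  {row["title"]}')
```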
## Code
**Prerequisites:** lxml.html
The code is available at microsoft-technet-forums-scraping [github]. It was written by Matias Palomera from Nektra Advanced Computing, who received valuable support from Victor Gonzalez.
## Usage
- Run *scrapper-App-V.py*
- Then run *csv_App-V.py*
- The results are available in the App-V.csv file
## Acknowledgments
Matias Palomera from Nektra Advanced Computing wrote the code.
## Notes
- This is single-threaded code. You can take a look at our discovering web resources code to optimize it with multithreading.
- Microsoft has given scrapers a special gift: it is possible to use the outputAs variable in the URL to get the structured information as XML instead of parsing HTML web pages (a short sketch of this follows after these notes).
- Our articles Distributed Scraping With Multiple Tor Circuits and Running Your Own Anonymous Rotating Proxies show how to implement your own rotating proxies infrastructure with Tor.
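The sketch below (not the original *scrapper-App-V.py*) shows the idea from the second note: request the thread list as XML via outputAs=xml and parse it with lxml. It only prints the top-level tag names, since the exact element structure of the feed is not documented here:

```
import urllib.request
from lxml import etree

url = ("http://social.technet.microsoft.com/Forums/en-US/"
       "mdopappv/threads?outputAs=xml")

with urllib.request.urlopen(url) as response:
    root = etree.fromstring(response.read())

# Print the first few top-level children to discover the feed's structure.
for child in root[:10]:
    print(child.tag)
```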
Great article.
I had to make one small change to the scrapper_App-V.py code to get it to run.
In get_threads_url change ‘?outputAs=xml’ to ‘&outputAs=xml’
| true | true | true | null |
2024-10-12 00:00:00
|
2013-06-27 00:00:00
| null | null |
databigbang.com
|
blog.databigbang.com
| null | null |
5,906,411 |
http://techcrunch.com/2013/06/19/feedly-cloud-goes-live/
|
Feedly Cloud Goes Live To Replace Google Reader's Backend, Power New Web Version Of Feedly's App | TechCrunch
|
Sarah Perez
|
In 10 days, Google’s RSS feed-reading service Google Reader will shut down for good. In its wake, developers working on products in the RSS ecosystem have been stepping up to deliver apps, tools and other services to fill the void. Today, one of the frontrunners, Feedly, is transforming itself from RSS application to RSS platform, with the public debut of Feedly Cloud, the infrastructure that has been powering Feedly’s own apps and those from a small handful of approved developers.
That infrastructure will also now power a new, standalone web version of Feedly (one that doesn’t rely on a browser extension), something that’s been among Feedly users’ top requests.
The company first announced partnerships with RSS app makers Reeder, Press, Nextgen Reader, Newsify and gReader earlier this month, all of which are moving to support the Feedly API ahead of the Google Reader shutdown. For end users of those applications, Google Reader often powered the backend of their feed-reading experience, but the front end (the visual interface) was handled by a third party. Now those users can seamlessly transition away from Google Reader dependence, without any extra effort on their part.
Feedly, through its “Normandy” project, has been working to clone the Reader API, and it’s now running that on Google’s App Engine platform. Today, in addition to Feedly’s own apps for iOS, Android, Chrome, Safari and Firefox, and now web, as well as those select third-party partners listed above, the company is making its backend infrastructure more broadly available. It’s adding new Feedly Cloud partners IFTTT, Sprout Social, gNewsReader for BlackBerry 10 and Symbian/MeeGo, Press, Pure News Widget, and Meneré, in addition to those above, and will onboard others still in the weeks ahead.
Though Feedly once struggled in the shadow of Google Reader, it has emerged in recent weeks as one of the top alternatives for end users in need of a new home for their feeds ahead of Reader’s demise.
On the iPhone, Feedly’s app is currently No. 4 in the News section of the iOS App Store in the U.S., according to rankings from Distimo, and it’s No. 6 on Android. And while its overall ranking in the top charts has been sometimes sporadic, it has consistently stayed at the top of the news section in both stores for several months.
Feedly says it now reaches 12 million users, up from 4 million pre-Reader retirement. Millions of new users have come to the service, and, more importantly, the company says that 68 percent of new users become weekly actives. Maybe 12 million users is a drop in the bucket for a web giant like Google, but for a small startup, these are notable numbers. However, Digg.com is preparing to launch its own RSS reader, too, just before Reader’s shutdown, which could change the current landscape if it’s any good.
Ahead of today’s public launch of Feedly Cloud, Feedly has been processing over 25 million RSS feeds daily, accounting for billions of articles, and has seen over 200 developers getting in touch to request access to the Feedly API.
The new version of Feedly on the web is live now.
| true | true | true |
In 10 days, Google's RSS feed-reading service Google Reader will shut down for good. In its wake, developers working on products in the RSS ecosystem have been stepping up to deliver apps, tools and other services to fill the void. Today, one of the frontrunners, Feedly, is transforming itself from RSS application to RSS platform, with the public debut of Feedly Cloud, the infrastructure that has been powering Feedly's own apps and those from a small handful of approved developers.
|
2024-10-12 00:00:00
|
2013-06-19 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
17,581,415 |
https://phys.org/news/2018-07-social-media-globally.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
24,710,981 |
https://blog.playstation.com/2020/10/07/ps5-teardown-an-inside-look-at-our-most-transformative-console-yet/
|
PS5 Teardown: An inside look at our most transformative console yet
|
Masayasu Ito
|
Your first look at PS5's internal components that will power the next generation of amazing games.
It’s getting close to November, and we are very excited for the launch of PlayStation 5 console. Today, we wanted to give you a sneak peek at the console’s interior, so you can take a look at all of the magic happening inside the PS5 that brings out the beautiful games you’ll experience this holiday season.
We began conceptualizing PS5 in 2015, and we’ve spent the past five years designing and developing the console.
Our team values a well thought out, beautifully designed architecture. Inside the console is an internal structure that looks neat and tidy, which means that there aren’t any unnecessary components and the design is efficient. As a result, we’re able to achieve our goal of creating a product with a high degree of perfection and quality.
In this teardown video of the PS5 console, you will be able to see how we have thoughtfully integrated our technology into this console.
We felt it was inevitable to make a generational leap in terms of performance in order to deliver a new, next-generation gaming experience. However, to do so, we had to balance every aspect of the system, from focusing on reducing the noise level to enhancing the cooling capacity, more than ever before.
We’ve also highlighted the mechanism in the video below that we’ve incorporated into the PS5 console to make the operating sounds even quieter. After an extensive and complex trial and error process, we were pleased with the end result and I can not wait for our fans to get their hands on the PS5 console and “hear” it for themselves.
Although we have faced unprecedented challenges this year with many of us working remotely from home throughout the world, we are pleased to be able to deliver a new transformative experience to you with PS5 this November.
**Do not try this at home. Risk of exposure to laser radiation, electric shock, or other injury. Disassembling your PS5 console will invalidate your manufacturer’s guarantee.*
| true | true | true |
Your first look at PS5's internal components that will power the next generation of amazing games.
|
2024-10-12 00:00:00
|
2020-10-07 00:00:00
|
article
|
playstation.com
|
PlayStation.Blog
| null | null |
|
3,672,039 |
https://github.com/superjoe30/groovebasin
|
GitHub - andrewrk/groovebasin: Music player server with a web-based user interface.
|
Andrewrk
|
Music player server with a web-based user interface.
Run it on a server connected to some speakers in your home or office. Guests can control the music player by connecting with a laptop, tablet, or smart phone. Further, you can stream your music library remotely.
Groove Basin works with your personal music library; not an external music service. Groove Basin will never support DRM content.
- The web client feels like a desktop app, not a web app. It predicts what the server will do in order to hide network lag from the user.
- Auto DJ which automatically queues random songs, favoring songs that have not been queued recently.
- Drag and drop upload. Drag and drop playlist editing. Keyboard shortcuts for everything.
- Lazy multi-core EBU R128 loudness scanning (tags compatible with ReplayGain) and automatic switching between track and album mode. "Loudness Zen"
- Streaming support. You can listen to your music library - or share it with your friends - even when you are not physically near your home speakers.
- Groove Basin protocol. Write your own client using the protocol specification, or check out gbremote, a simple command-line remote control.
- MPD protocol support. This means you already have a selection of clients which integrate with Groove Basin. For example MPDroid.
- Last.fm scrobbling.
- File system monitoring. Add songs anywhere inside your music directory and they instantly appear in your library.
For Ubuntu 17.04 Zesty:

1. `sudo apt-get install nodejs libgrooveloudness-dev libgroovefingerprinter-dev libgrooveplayer-dev libgroove-dev`
2. Clone this repo and cd to it.
3. `npm run build`
4. `npm start`
For Ubuntu 18.04 Bionic:

1. Install node-groove and its dependencies from source by following these instructions: https://github.com/andrewrk/node-groove/blob/2.x/README.md#ubuntu-1804
2. Edit `package.json`, and change the `"groove"` dependency to point to the directory where node-groove is installed. (The path is used instead of a version number.)
3. Resume step 2 above.
When Groove Basin starts it will look for `config.json`
in the current
directory. If not found it creates one for you with default values.
Use this to set your music library location and other settings.
It is recommended that you generate a self-signed certificate and use that instead of using the public one bundled with this source code.
```
$ npm run dev
```
This will install dependencies, build generated files, and then start the server. It is up to you to restart it when you modify assets or server files.
Pull requests, feature requests, and bug reports are welcome! Live discussion in #libgroove on Freenode.
- Music library organization
- AcoustID Integration
- Finalize GrooveBasin protocol spec
| true | true | true |
Music player server with a web-based user interface. - andrewrk/groovebasin
|
2024-10-12 00:00:00
|
2011-07-18 00:00:00
|
https://opengraph.githubassets.com/1a67ac4b4e6284df03328cb7a54803d73c35895c2177e0d579bba99d919ac1bc/andrewrk/groovebasin
|
object
|
github.com
|
GitHub
| null | null |
39,215,938 |
https://blog.rocketgraph.io/posts/install-pgaudit
|
Install pgAudit in your AWS RDS instance
| null |
# Install pgAudit in your AWS RDS instance
## Basic setup
First we’ll have to create an AWS RDS DB for this. We’ll use minimal permissions for this setup so that we can easily understand what’s going on.
### Step 1: Create Database
Create your AWS account. Go to AWS RDS tab and click on “Create Database”
### Step 2: Select Database
Choose “Standard create” and PostgreSQL.
Set the PostgreSQL version to the latest.
Use the free tier for now.
### Step 3: Add credentials
Username: postgres MasterPassword: postgres
For now.
### Step 4: Allocate storage
Allocate the minimum possible storage for now
### Step 5: Network
Create a new VPC for this RDS DB and create new security groups. Make sure that they allow all traffic for now. Let’s improve the security later.
### Step 6: Set public access
Remember to set Public access to true since we want to log in from psql.
### Step 7: DB Parameter group
Remember the DB Parameter group name since we need to tweak it later for installing pgAudit
### Step 8: Configure logging
This is the important part
Configure log exports to Cloudwatch. Create a service linked role with necessary permissions if you haven’t. Give it admin access for now.
That’s it now click on “create database” and wait for the instance to be available
### Step 9: Installing pgAudit
Follow this video for setting up your DB Parameter group, installing pgAudit and enabling it.
To load pgAudit, you need to configure the DB Parameter group as follows:

- Set `pgaudit.log` to `none`; we’ll change this later from psql.
- Load the shared library by adding `pgaudit` to `shared_preload_libraries`.
- Set `pgaudit.role` to `rds_pgaudit`.

Restart the instance and you should be able to use pgAudit.
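If you prefer the AWS CLI over the console, a sketch like the following should apply the same parameter-group settings. The parameter group and instance names here are placeholders, not values from this walkthrough:

```
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-pgaudit-params \
  --parameters \
    "ParameterName=shared_preload_libraries,ParameterValue=pgaudit,ApplyMethod=pending-reboot" \
    "ParameterName=pgaudit.log,ParameterValue=none,ApplyMethod=pending-reboot" \
    "ParameterName=pgaudit.role,ParameterValue=rds_pgaudit,ApplyMethod=pending-reboot"

# shared_preload_libraries is a static parameter, so reboot the instance
# for the change to take effect.
aws rds reboot-db-instance --db-instance-identifier my-rds-instance
```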
Log into your RDS instance using psql (substitute your master password and RDS endpoint):
```
psql postgresql://postgres:<password>@<your-rds-endpoint>:5432/postgres
```
Then create a role for pgAudit
`postgres=> CREATE ROLE rds_pgaudit; `
Check that the libraries are loaded
`postgres=> show shared_preload_libraries;`
Then enable the extension
`postgres=> CREATE EXTENSION pgaudit;`
Enable logs for pgAudit
For now, for testing purposes set log level to CREATE
`postgres=> ALTER DATABASE test_database set pgaudit.log="CREATE"; `
But ideally, it should be
`postgres=> ALTER DATABASE test_database set pgaudit.log="ALL"; `
### Step 10: Testing that pgAudit works
In the psql shell
```
postgres=> CREATE TABLE test_table (id int);
postgres=> SELECT * FROM test_table;
```
Go to the Logs & events tab and check that the logs are reflected by clicking “View” on the latest written log file.
## Rocketgraph setup
Alternatively you can create a project using Rocketgraph that comes with pgAudit enabled. You just need to do step 9. And once you create the extension, you can see all your logs nicely like this:
You can check out the demo here
| true | true | true |
Install and enable pgAudit in AWS RDS instance for logging your Postgres events
|
2024-10-12 00:00:00
| null |
/images/social.jpg
|
website
|
howtocode.io
|
howtocode.io
| null | null |
14,630,332 |
https://blog.openbazaar.org/why-is-decentralization-important/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
19,833,998 |
https://www.improbable.com/airchives/classical/articles/peanut_butter_rotation.html
|
The Effects of Peanut Butter on the Rotation of the Earth
| null |
## The Effects of Peanut Butter on the Rotation of the Earth
**EDITOR'S NOTE:**
With publication of this paper we are hereby amending our longstanding policy
regarding co-authors. Previously we rejected any research paper that had
more than ten co-authors. Many of our contributors, especially high-energy
physicists, have pointed out that in some fields, especially high energy
physics, research journals routinely publish papers that have one hundred
or more co-authors. Accordingly, we are removing the restriction.
by
George August, Ph.D., Anita Balliro, Ph.D., Pier Barnaba, Ph.D., Anne Battis,
Ph.D., Constantine Battis, Ph.D., John Battis, Ph.D. Nathaniel Baum, Ph.D.,
S. Becket, Ph.D., A. G. Bell, Ph.D., Moe Berg, Ph.D., B. J. Bialowski, Ph.D.,
Edward Biester, Ph.D., Joseph Blair, Ph.D., Ceevah Blatman, Ph.D., Ken Bloom,
Ph.D., I. V. Boesky, Ph.D., Dorothy Bondelevitch, Ph.D., Calliope Boratgis,
Ph.D., K. T. Boundary, Ph.D., Gerald Brennan, Ph.D., Nuala Broderick, Ph.D.,
James Burke, Ph.D., Richard Butkus, Ph.D., James Carter, Ph.D., Alexander
Cartwright, Ph.D., Caren Cayer, Ph.D., Mary Chung, Ph.D., W. Spencer Churchill,
Ph.D., M. Louise Ciccone, Ph.D., Theodore B. Cleaver, Ph.D., Selma Frances
Coltin, Ph.D., Carlos Cordeiro, Ph.D., Theodore Crabtree, Ph.D., Samuel
Cunningham, Ph.D., James Michael Curley, Ph.D., Gwen Davis, Ph.D., Paul
Delamere, Ph.D., R. C. De Bodo, Ph.D., P. deMan, Ph.D., Arthur Derfall,
Ph.D., Helen Diver, Ph.D., Edward Doctoroff, Ph.D., Robert Dorson, Ph.D.,
Wayne Drooks, Ph.D., William Claude Dukinfield, Ph.D., James Durante, Ph.D.,
Alan Dyson, Ph.D., Raeline Eaton, Ph.D., D. D. Eisenhauer, Ph.D., Kent Fielden,
Ph.D., Elizabeth Finch, Ph.D., Raymond Flynn, Ph.D., Charles Follett, Ph.D.,
Kevin Forshay, Ph.D., George Frazier, Ph.D., Katherine Fulton, Ph.D., R.
J. Gambale, Ph.D., Jerome Garcia, Ph.D., Judith Garland, Ph.D., Hannah Gilligan,
Ph.D., Daniel Goldfarb, Ph.D., Michael Goldfarb, Ph.D., Archie Goodwin,
Ph.D., Yulia Govorushko, Ph.D., Sharon P. D. Greene, Ph.D., David W. Griffith,
Ph.D., Sheldon Gulbenkian, Ph.D., Frances Gumm, Ph.D., R. O. Guthrie, Ph.D.,
Kathleen Gygi, Ph.D., Margo Hagopian, Ph.D., Richard Hannay, Ph.D., Joseph
Hardy, Ph.D., Stephen Hardy, Ph.D., Gary Hartpence, Ph.D., Edward Haskell,
Ph.D., S. J. Hawkins, Ph.D., Kevin Hegg, Ph.D., Lilly N. Hellman, Ph.D.,
Robert A. Hertz, Ph.D., Louise D. Hicks, Ph.D., Lyndon Holmes, Ph.D., Mycroft
Holmes, Ph.D., O. W. Holmes, Ph.D., Tardis Hoo, Ph.D., J. E. Hoover, Ph.D.,
E. A. Horton, Ph.D., Lawrence Howard, Ph.D., Moe Howard, Ph.D., Ginger Hsu,
Ph.D., David Hubbs, Ph.D., Loretta Huttlinger, Ph.D., Stanley Hwang, Ph.D.,
Harriet Kasden, Ph.D., Susan Jablonski, Ph.D., Mittie Jackson, Ph.D., Rebecca
Johnson, Ph.D., Deacon Jones, Ph.D., Edward T. T. Jones, Ph.D., Conrad Joseph,
Ph.D., K. T. Kanawa, Ph.D., Liza Karpook, Ph.D., Daniel Kaye, Ph.D., William
Keeler, Ph.D., Waldemar Kester, Ph.D., John M. Keynes, Ph.D., Olga Korbut,
Ph.D., Susan Krock, Ph.D., Kerran Lauridson, Ph.D., Nicholas Leone, Ph.D.,
Meg Anne Lesser, Ph.D., Lucille S. Levesque, Ph.D., Joseph Lichtblau, Ph.D.,
Barbara Linden, Ph.D., Robert Lippa, Ph.D., Charles Lovejoy, Ph.D., Frances
Lynch, Ph.D., Thomas Maccarone, Ph.D., Maureen Madigan, Ph.D., James Mahoney,
Ph.D., Catherine Maloney, Ph.D., Jules Maigret, Ph.D., G. Maniscalco. Ph.D.,
Ray B. B. Mancini, Ph.D., Julius Marx, Ph.D., Cynthia Mason, Ph.D., James
Matoh, Ph.D., Abigail Mays, Ph.D., Zachariah Mays, Ph.D., Charles McCarthy,
Ph.D., Joseph McCarthy, Ph.D., Ann McKechnie, Ph.D., Charles Augustus Milverton,
Ph.D., Robert Mishkin, Ph.D., Jack Moran, Ph.D., Charles Morgan, Ph.D.,
Stephen Mosher, Ph.D., Lisa Mullins, Ph.D., Sarah Natale, Ph.D., Ned Newton,
Ph.D., R. M. Nixon, Ph.D., Grover Norquist, Ph.D., Ngai Ng, Ph.D., Kevin
O'Malley, Ph.D., Joel Orloff, Ph.D., Frank Patterson, Ph.D., John Pesky,
Ph.D., Peter Pienar, Ph.D., Margaret Pinette, Ph.D., Philip Ravino, Ph.D.,
Celia Reber, Ph.D., Bertrand Roger, Ph.D., Frederick Rogers, Ph.D., Dexter
Rosenbloom, Ph.D., George H. Ruth, Ph.D., Kathleen Rutherford, Ph.D., Robert
Ryder, Ph.D., George Scheinman, Ph.D., Aimee Semple, Ph.D., William Shoemaker,
Ph.D., Joseph Slavsky, Ph.D., Olivia Smith, Ph.D., Simon Silver, Ph.D.,
Orenthal J. Simpson, Ph.D., Jeffrey Spaulding, Ph.D., Richard Starkey, Ph.D.,
David Alan Steele, Ph.D., Y. Struchkov, Ph.D., Quentin Sullivan, Ph.D.,
Ann Sussman, Ph.D., Ezra Tamsky, Ph.D., Kumiko Terezawa, Ph.D., Marge Thatcher,
Ph. D., Mark Theissen, Ph.D., Marilyn Tucker, Ph.D., Christina Turner, Ph.D.,
Brenda C. W. Twersky, Ph.D., Frederick A. Von Stade, Ph.D., F. Skiddy Von
Stade, Ph.D., Bertha Vanation, Ph.D., William Veeke, Ph.D., Norma Verrill,
Ph.D., Y. Y. Vlahos, Ph.D., Marko Vukcic, Ph.D., Paul Waggoner, Ph.D., Teresa
Wallace, Ph.D., Thomas Waller, Ph.D., J. Ward, Ph.D., John H. Watson, M.D.,
Michael Weddle, Ph.D., Merton Weinberg, Ph.D., Lawrence Welk, Ph.D., Kevin
White, Ph.D., Andrew Williams, Ph.D., John Williams, Ph.D., Theodore Williams,
Ph.D., William Williams, Ph.D., Eileen Wynn, Ph.D., Chin-chin Yeh, Ph.D.,
and Ethel Youngman, Ph.D.
So far as we can determine, peanut butter has no effect on the rotation
of the earth.
© Copyright 2003 Annals
of Improbable Research (AIR)
| true | true | true | null |
2024-10-12 00:00:00
|
2003-01-01 00:00:00
| null | null | null | null | null | null |
18,526,954 |
https://github.com/ReimuNotMoe/ydotool
|
GitHub - ReimuNotMoe/ydotool: Generic command-line automation tool (no X!)
|
ReimuNotMoe
|
Generic Linux command-line automation tool (no X!)
**ydotool is not limited to Wayland.** You can use it on anything as long as it accepts keyboard/mouse/whatever input. For example, X11, text console, "RetroArch OS", fbdev apps (fbterm/mplayer/SDL1/LittleVGL/Qt Embedded), etc.
Our **ultra-lightweight** JavaScript runtime, *Resonance*, will be released in Q2 2024 under the LGPL license.
ydotool will then be rewritten in JavaScript afterwards, to enable more people to understand the code & contribute.
**You have NO reason to reject this. The RAM consumption (RSS) will NOT exceed 1MB.**
The man page is not always up to date. Please use `--help`
to ensure correctness.
This project is now refactored. (from v1.0.0)
Changes:
- Rewritten in pure C99
- No external dependencies
- Uses a lot less memory & no dynamic memory allocation
Breaking Changes:

- `recorder` removed because it's irrelevant. It will become a separate project
- Command chaining and `sleep` are removed because this should be your shell's job
- `ydotool` now must work with `ydotoold`
- Usage of `click` and `key` are changed
Good News:

- Some people can finally build this project offline
- `key` now (only) accepts keycodes, so it's not limited to a specific keyboard layout
- Now it's possible to implement support for different keyboard layouts in `type`
Currently implemented command(s):

- `click` - Click on mouse buttons
- `mousemove` - Move mouse pointer to absolute position
- `type` - Type a string
- `key` - Press keys
- `debug` - Print the socket, number of parameters and parameter values
- `bakers` - Show the honorable bakers
- `stdin` - Sends the key presses as it was a keyboard (i.e. from ssh). See PR #229
Switch to tty1 (Ctrl+Alt+F1), wait 2 seconds, and type some words:
```
ydotool key 29:1 56:1 59:1 59:0 56:0 29:0; sleep 2; ydotool type 'echo Hey guys. This is Austin.'
```
Close a window in graphical environment (Alt+F4):
```
ydotool key 56:1 62:1 62:0 56:0
```
Move the mouse pointer relatively by -100,100:
```
ydotool mousemove -x -100 -y 100
```
Move mouse pointer to 100,100:
```
ydotool mousemove --absolute -x 100 -y 100
```
Mouse right click:
```
ydotool click 0xC1
```
Mouse repeating left click:
```
ydotool click --repeat 5 --next-delay 25 0xC0
```
Repeat the keyboard presses from stdin:
```
ydotool stdin
```
The `ydotoold` (daemon) program requires access to `/dev/uinput`. **This usually requires root permissions.**
See `/usr/include/linux/input-event-codes.h`
ydotool works differently from xdotool. xdotool sends X events directly to X server, while ydotool uses the uinput framework of Linux kernel to emulate an input device.
When ydotool runs and creates a virtual input device, it will take some time for your graphical environment (X11/Wayland) to recognize and enable the virtual input device. (Usually done by udev)
So, if the delay was too short, the virtual input device may not get recognized & enabled by your graphical environment in time.
In order to solve this problem, a persistent background service, ydotoold, is made to hold a persistent virtual device, and accept input from ydotool.
Since v1.0.0, the use of ydotoold is mandatory.
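A minimal way to try it by hand is sketched below; it assumes both binaries are on your PATH and simply runs the daemon as root so it can open `/dev/uinput` (your distro may instead ship a systemd service for this):

```
# Start the persistent daemon in the background (needs access to /dev/uinput).
sudo ydotoold &

# Then issue client commands against it, e.g. type a string:
sudo ydotool type 'hello from ydotool'
```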
**CMake 3.22+ is required.**
There are a few extra options that can be configured when running CMake
- BUILD_DOCS=ON|OFF - whether to build the documentation, depends on `scdoc`. Default: ON
- SYSTEMD_USER_SERVICE=ON|OFF - whether to use systemd user service file, depends on `systemd`. Default: ON
- SYSTEMD_SYSTEM_SERVICE=ON|OFF - whether to use systemd system service file, depends on `systemd`. Default: OFF
- OPENRC=ON|OFF - whether to use openrc service file. Default: OFF (TBD)
```
mkdir build
cd build
cmake ..
make -j `nproc`
```
If issues appear, check the build options, and try installing the dependencies:
Debian-based:
```
sudo apt install scdoc
```
RHEL-based:
```
sudo dnf install scdoc
```
Currently, ydotool does not recognize if the user is using a custom keyboard layout. In order to comfortably use ydotool alongside a custom keyboard layout, the user could use one of the following fixes/workarounds:
In sway, the process is fairly easy. Following the instructions there, you would end up with something like:
```
input "16700:8197:DELL_DELL_USB_Keyboard" {
xkb_layout "us,us"
xkb_variant "dvorak,"
xkb_options "grp:shifts_toggle, caps:swapescape"
}
```
The identifier for your keyboard can be obtained from the output of `swaymsg -t get_inputs`.
You can use Per-device input configs in Hyprland. Simply add following snippet to your config:
```
device:ydotoold-virtual-device {
kb_layout = us
kb_variant =
kb_options =
}
```
As mentioned here, consider using a hardware-based configuration that supports using a custom layout without configuring it in software.
This project is now being maintained **thanks to all the people that are supporting this project!**
All backers and sponsors are listed here.
- Donate on our Patreon
- Buy our products on our Official Store, if they are meaningful to you (please leave a message: `ydotool user`, so we can know that this purchase is for supporting ydotool)
Article: "Open Source" is Broken
Independent software developers in China, like us, have 10 times more life pressure than Marak, the author of faker.js. Since ydotool has the opportunity to benefit large IT companies who won't pay a penny to us, we've changed the license to AGPLv3. These large IT companies are the main cause of life pressure here, such as the "996" working hours.
**Marak's fate will repeat on all open source developers eventually (of course we aren't talking about those who were born in billionaire families) if we just keep fighting with each other and do nothing to improve the situation. If you make open source software as well, don't hesitate to ask for donations if you actually need them.**
Also make sure you understand all the terms of AGPLv3 before using this software.
| true | true | true |
Generic command-line automation tool (no X!). Contribute to ReimuNotMoe/ydotool development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2018-11-24 00:00:00
|
https://opengraph.githubassets.com/0c12323ef17a53012699cfe876d062f745c151bd87050f8431bb746bf9f6b646/ReimuNotMoe/ydotool
|
object
|
github.com
|
GitHub
| null | null |
7,192,940 |
http://blog.cdw.com/hands-on-with-googles-new-chromebox-for-meetings
|
Research Hub
|
CDW
|
# Research Everything IT
##### RECENTLY ADDED
### Read the Latest in the Research Hub
Nov 01, 2024
Digital Workspace
5 Best Practices for Telecom Expense Management for Global Enterprises
Article
4 min
Having a solid telecom expense management strategy — and partnering with a trusted vendor in this space — are among the keys to elevating the digital experience for all stakeholders and ensuring your organization communicates smarter, not harder.
Oct 08, 2024
Data Analytics
The Hidden Perks of Data Virtualization
Article
6 min
This unique combination of lesser-known capabilities makes data virtualization an even more compelling solution for your organization.
Oct 08, 2024
Cloud
When the Cloud Is Right for Organizations Deploying Artificial Intelligence
Article
5 min
Artificial intelligence is changing the way companies should manage their cloud presence.
Oct 07, 2024
Security
Automating Security and Compliance to Streamline Resource Creation
Use Case
3 min
Enhancing Infrastructure as Code (IaC) deployments with a focus on AWS security groups enables innovation and provisioning at scale.
Oct 07, 2024
Digital Workspace
Lift and Shift is Still the Best Way to Quickly Modernize Federal IT Systems
Article
4 min
Discover how lift and shift strategies benefit the federal government to ensure your system can quickly modernize to meet the needs of your citizens.
Oct 02, 2024
Cloud
Preparing for a Secure, Seamless Cloud Migration
Use Case
3 min
See how cloud governance and platform engineering keep an organization’s computing resources compliant with its security standards during a migration.
##### TRENDING
### What Other IT Pros are Researching
View All
Sep 16, 2022
Networking
Why You Should Consider an Upgrade to Wi-Fi 6 or 6e
article
3 min
A wireless upgrade can help organizations meet users’ growing demand for connectivity.
Oct 03, 2022
Security
Don't Get Hooked: Avoid Becoming the Bait of a Phishing Email
article
3 min
Take a look at this infographic to learn what to look out for in a suspicious email.
Sep 23, 2022
Digital Workspace
Conversation Design Puts AI One Step Closer to Humans
article
4 min
Conversation interfaces can enable customer interaction with automated systems more naturally.
Sep 09, 2022
Cloud
When a DDoS attack comes, defend your applications with an AWS firewall
article
3 min
CDW Managed Services for AWS protects customer web applications using AWS WAF Security Automations.
SECURITY
### Create a Secure Digital Environment
Helping to protect you—and your end users—from security breaches.
Oct 07, 2024
Security
Automating Security and Compliance to Streamline Resource Creation
Use Case
3 min
Enhancing Infrastructure as Code (IaC) deployments with a focus on AWS security groups enables innovation and provisioning at scale.
Oct 01, 2024
Security
IAM SSO Streamlines Onboarding and Offboarding AWS Users
Use Case
3 min
With industrial IoT and commercial telematics demanding advanced identity management, see how the IAM SSO Federation facilitates just-in-time access.
Sep 18, 2024
Security
How Purple Team Exercises Can Enhance Your Threat Management Strategy
Article
5 min
Purple team exercises test your threat management posture by simulating attacks on your systems, processes and technologies. Here’s how this tactic can help fine-tune your defenses against evolving threats through collaboration and shared learning.
Sep 12, 2024
Security
How One Company Streamlined Its Physical Security by Shifting to the Cloud
Case Study
7 min
A business-to-business automotive parts company turned to CDW to help it upgrade its existing physical security and video surveillance systems by moving from an on-premises solution to a cloud-based platform.
##### Collaboration
### A Digital Workspace for New Ways of Working
View All
Nov 01, 2024
Digital Workspace
5 Best Practices for Telecom Expense Management for Global Enterprises
Article
4 min
Having a solid telecom expense management strategy — and partnering with a trusted vendor in this space — are among the keys to elevating the digital experience for all stakeholders and ensuring your organization communicates smarter, not harder.
Oct 07, 2024
Digital Workspace
Lift and Shift is Still the Best Way to Quickly Modernize Federal IT Systems
Article
4 min
Discover how lift and shift strategies benefit the federal government to ensure your system can quickly modernize to meet the needs of your citizens.
Sep 25, 2024
Digital Workspace
3 Ways CPaaS Enhances Customer Engagement
Article
4 min
Customers are more loyal when they can engage with brands seamlessly. Communications Platform as a Service (CPaaS) can help your organization deliver effective customer experiences that will have customers returning again and again.
Aug 27, 2024
Digital Workspace
The Challenges Rural Governments face when Acquiring and Retaining It Staff
Article
4 min
State and local governments in rural areas face budget and recruitment hurdles in their efforts to deliver better services to citizens.
##### CLOUD
### Complete Your Cloud Journey
Oct 08, 2024
Cloud
When the Cloud Is Right for Organizations Deploying Artificial Intelligence
Article
5 min
Artificial intelligence is changing the way companies should manage their cloud presence.
Oct 02, 2024
Cloud
Preparing for a Secure, Seamless Cloud Migration
Use Case
3 min
See how cloud governance and platform engineering keep an organization’s computing resources compliant with its security standards during a migration.
Sep 27, 2024
Cloud
What’s Next in the Cloud?
Article
13 min
CDW’s recent survey shows that many organizations have achieved cloud maturity, but IT leaders must maintain a focus on evolving strategic and security challenges.
Sep 26, 2024
Cloud
Empowering Innovation Through Stronger Cloud Security
Use Case
3 min
The breakneck pace of keeping consumers engaged requires a cloud strategy designed to keep up.
| true | true | true |
Research everything IT in the CDW Research Hub, from computer hardware and software to IT solutions and services, learn from our IT experts.
|
2024-10-12 00:00:00
|
2024-10-08 00:00:00
|
https://webobjects2.cdw.com/is/image/CDW/cdw23-og-logo-1200
|
website
|
cdw.com
|
CDW.com
| null | null |
29,259,773 |
https://www.insider.com/first-female-chinese-astronaut-spacewalk-wang-yaping-2021-11
|
A female astronaut made history this weekend when she became the first Chinese woman to ever complete a spacewalk
|
Cheryl Teh
|
- Astronaut Wang Yaping ventured outside China's Tiangong space station for six hours on November 8.
- She made the spacewalk to install equipment with fellow Shenzhou-13 crew member Zhai Zhigang.
- Shenzhou-13's three-member crew is on a six-month mission to move construction forward at the Chinese space station.
Astronaut Wang Yaping has become the first woman in Chinese history to walk in space.
Wang, 41, is one of the three astronauts — or taikonauts, China's term for the country's space explorers — currently stationed at the Tiangong space station. She is part of the Shenzhou-13 crew, which blasted off from a launch center in the Gobi Desert on October 16 to embark on a six-month mission in the Tianhe module, the core of the space station.
Wang and fellow astronaut Zhai Zhigang, 55, left the Tianhe module and ventured out into space on November 8 for six hours to install equipment and test the station's robotic arm, per a media release from the China Manned Space Agency (CMS). The third crew member, Ye Guangfu, 41, assisted the team from within the space station.
"This marks the first extravehicular activity of the Shenzhou-13 crew, and it is also the first in China's space history involving the participation of a woman astronaut," the CMS said in its statement. "The whole process was smooth and successful."
In a video published by the South China Morning Post, the astronauts are seen performing tasks outside their space station and waving hello via a livestream beamed to Earth.
Wang, Zhai, and Ye will continue to work on the space station during their six-month stay. China's space agency said the team will "carry out tasks such as mechanical arm operation, extravehicular activities, and modules transfer" and test how a long-term stay in space would work within the country's space station, in terms of resource management and life support.
Tiangong, which can be translated to mean "heavenly palace," is currently under construction. Its core module Tianhe is slated to eventually be connected to two other sections, Mengtian and Wentian. China plans to finish its new space station by the end of 2022, a task that will involve at least seven more missions, per Reuters.
Under the 2011 Wolf Amendment, Chinese astronauts were banned from launching to the International Space Station (ISS) by US law. However, the ISS may be out of commission by the 2030s, which might leave Tiangong as the only working space station at that point.
China previously sent another crew into space, composed of taikonauts Tang Hongbo, Nie Haisheng, and Liu Boming. The trio, who made up the Shenzhou-12 crew, landed in Mongolia on September 17 after a three-month stay at the Tiangong space station. At the time, Shenzhou-12's mission was the longest space flight in Chinese history, but that record is now set to be broken by Shenzhou-13's taikonauts.
| true | true | true |
Wang ventured outside China's Tiangong space station for six hours on November 8 on a spacewalk with fellow crew member Zhai Zhigang.
|
2024-10-12 00:00:00
|
2021-11-08 00:00:00
|
https://i.insider.com/616aa26f41af0d00193ef9a1?width=1200&format=jpeg
|
article
|
businessinsider.com
|
Insider
| null | null |
4,475,435 |
http://blog.stackmob.com/2012/09/hackathons-should-your-company-sponsor-them/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,919,900 |
http://www.macrumors.com/2013/06/21/first-trailer-for-ashton-kutchers-jobs-released/
|
First Trailer for Ashton Kutcher's 'Jobs' Released
|
Jordan Golson
|
# First Trailer for Ashton Kutcher's 'Jobs' Released
The first trailer for Ashton Kutcher's 'Jobs' was released earlier today, giving the wider public its first glimpse at the movie after it premiered at the Sundance Film Festival to mixed reviews earlier this year.
The film was originally scheduled for release in April, but will now receive a widespread release on August 16, 2013. A minute-long clip was released back in January but the trailer showcases a number of scenes from the film.
"Jobs", which stars Ashton Kutcher as Steve Jobs and Josh Gad as Steve Wozniak, is one of several Jobs-related films in the works or already released. Back in April, comedy site Funny or Die released a rather poorly received
"iSteve" comedy film starring Justin Long, who had played the "Mac" character in Apple's long-running "Mac vs. PC" ad campaign.
A third film is being written by Aaron Sorkin and is the official adaptation of Jobs' authorized biography by Walter Isaacson. The film, which is still in the early stages of development, is planned to encompass three 30-minute scenes showing Jobs backstage just prior to the launches of the original Mac, NeXT, and the iPod.
| true | true | true |
The first trailer for Ashton Kutcher's 'Jobs' was released earlier today, giving the wider public its first glimpse at the movie after it...
|
2024-10-12 00:00:00
|
2013-06-21 00:00:00
|
article
|
macrumors.com
|
MacRumors.com
| null | null |
|
22,565,303 |
https://variety.com/2020/tv/news/tonight-late-night-jimmy-fallon-seth-meyers-nbc-suspend-production-1203532962/
|
Colbert, Fallon, Meyers Late-Night Shows Will Suspend Production
|
Brian Steinberg
|
NBC and CBS said they would suspend production of their flagship late-night programs for a period of at least two weeks, the latest bit of fallout around wee-hours TV related to the spread of coronavirus.
Starting Friday, “The Tonight Show” and “Late Night” will suspend production through a previously planned hiatus, which had been scheduled for the week of March 23. “We will continue to monitor the situation closely and make decisions about future shows as we get closer to the start of production,” NBC said in a statement. Meanwhile, CBS said “Late Show” would forgo production of three original broadcasts scheduled for next week and would instead move into a hiatus through March 30. “We will continue to monitor the situation closely,” CBS said.
All the shows had planned to start broadcasting Monday without a live, in-studio audience, a nod to the new requirements of life under the spread of coronavirus, with Bill Maher’s program on HBO just joining the others. All of the nation’s late-night programs had made similar decisions, which means every national late-night program will proceed without one of the format’s bedrock elements – a live crowd that can react to all the jokes and sketches, and even influence the host’s actions and tone over the course of a segment or a night.
“Tonight” will tape an original episode for Thursday night without an audience, NBC said. Guests include Dr. Oz, Mandy Moore, and Dane DeHaan. “Late Night with Seth Meyers” will air an encore tonight, but post an original “Closer Look” segment – one of the show’s signature features – for digital consumption. The show had previously booked actors John Krasinski and Regina Hall as well as Bones UK for Thursday’s broadcast.
### Popular on Variety
The lack of audience can change the shows’ energy. On Wednesday night, Samantha Bee held forth without a full audience on TBS’ “Full Frontal,” the first of the programs to broadcast without a studio crowd. With only a handful of her own staff in the studio, she seemed looser, laughing at different points in the show’s various segments – almost as if she was just trying out jokes on her friends. It made for a more relaxed presentation, one in which the show may not be everything, but at least it could go on.
“Saturday Night Live,” which depends on the reactions of its audience to help guide the program, is on hiatus until March 28. No decision about having a live studio audience has been announced.
*More to come…*
| true | true | true |
NBC said it would suspend production of its two flagship late-night programs for a period of at least two weeks
|
2024-10-12 00:00:00
|
2020-03-12 00:00:00
|
article
|
variety.com
|
Variety
| null | null |
|
1,668,065 |
http://github.com/joelthelion/autojump/wiki
|
Home
|
Wting
|
# Home
## This wiki is rarely updated. Please refer to the readme.md or man pages for up to date documentation.
For a quick introduction to Autojump, see this video.
One of the most used shell commands is "cd". A quick survey among my friends revealed that between 10 and 20% of all commands they type are actually cd commands! Unfortunately, jumping from one part of your system to another with cd requires you to enter almost the full path, which isn't very practical and requires a lot of keystrokes.
*autojump* is a faster way to navigate your filesystem. It works by maintaining a database of the directories you use the most from the command line. The `autojump -s`
command shows you the current contents of the database. You need to work a little bit before the database becomes usable. Once your database is reasonably complete, you can "jump" to a commonly "cd"ed directory by typing:
```
j <dirspec>
```
where dirspec is a few characters of the directory you want to jump to. It will jump to the most used directory whose name matches the pattern given in dirspec. Note that autojump **isn't meant** to be a drop-in replacement for cd, but rather a complement. Cd is fine when staying in the same area of the filesystem; autojump is there to help when you need to jump far away from your current location.
Autojump supports tab-completion. Try it! Autojump should be compatible with Bash 4 and zsh. Please report any problems!
Pierre Gueth contributed a very nice applet for freedesktop desktops (Gnome/KDE/...). It is called "*jumpapplet*", try it!
Thanks to Simon Marache-Francisco's outstanding work, autojump now works perfectly with zsh.
```
j mp3
```
could jump to "/home/gwb/my mp3 collection", if that is the directory in which you keep your mp3s.
```
j --stat
```
will print out something in the lines of:
```
54.5: /home/shared/musique
60.0: /home/joel/workspace/coolstuff/glandu
83.0: /home/joel/workspace/abs_user/autojump
96.9: /home/joel/workspace/autojump
141.8: /home/joel/workspace/vv
161.7: /home/joel
Total key weight: 1077
```
The "key weight" reflects the amount of time you spend in a directory.
Use the github downloads to get the latest release, or use git to get the bleeding edge version (should usually work)
For automatic installation, make sure that install.sh is executable. If not (or if not sure), run
`chmod +x install.sh`
Once it is executable, run
`./install.sh`
It will tell you any necessary steps from there.
Manual installation of autojump is very simple: copy autojump to /usr/bin, autojump.sh to /etc/profile.d, and autojump.1 to /usr/share/man/man1. Make sure you source the appropriate file in your .bashrc:
```
source /etc/profile.d/autojump.sh
```
If you do not have root access to your machine, copy `autojump` to a directory that is in the `PATH` (for example, `$HOME/local/bin`), copy `autojump.bash` somewhere convenient, and add `source /path/to/autojump.bash` in your `.bashrc`.
Joel Schaerer, William Ting, Pierre Gueth (applet), Simon Marache-Francisco (zsh), Daniel Jackoway and others (installation)
Contact us: [email protected]
*LICENSE*
autojump is distributed under the terms of the GPL, version 3.
*PACKAGING*
For *Arch Linux* it is available from the [community] repository. Until Feb, 2011 there was a bug in Arch's packaging of bash, this has now been resolved. Make sure your bash package is at least bash-4.1.009-4.
Autojump is now officially a part of *Debian* Sid, thanks to Tanguy Ortolo's work. (for policy reasons, it requires manual activation after installing, see /usr/share/doc/autojump/README.Debian).
Thibault North contributed packages for *Fedora*. They should now be included in the distro. You can also install autojump on *Redhat/CentOS* using the EL5 / EL6 repos.
Olivier Mehani wrote an [ebuild for *Gentoo*](http://scm.narf.ssji.net/svn/gentoo-portage/browser/app-shells/autojump/autojump-12.ebuild). Thanks! It should now be integrated to portage, so you should be able to emerge it directly.
Autojump is officially supported in the rolling-release branch of *Frugalware*. Thanks to Fabien Bourgeois for making the package!
Binh Nguyen kindly provides a [build script for *Slackware*](http://slackbuilds.org/repository/13.1/system/autojump/).
Thanks to Neeraj Verma's work, autojump is available as an official port for FreeBSD.
Autojump is also available on [*Macports*](https://trac.macports.org/browser/trunk/dports/sysutils/autojump/Portfile).
*Homebrew* has a formula for Autojump, simply `brew install autojump`
temporary way to install autojump
```
cd /usr/local
curl https://github.com/mxcl/homebrew/pull/9560.patch | git am
brew install autojump
```
I would be very interested in packages for other distros. If you think you can help me with the packaging, please contact me! Note that apart from bash (or zsh) and python, autojump doesn't have any dependencies.
| true | true | true |
A cd command that learns - easily navigate directories from the command line - wting/autojump
|
2024-10-12 00:00:00
|
2018-02-01 00:00:00
|
https://opengraph.githubassets.com/c2f213f8d01ae2c5eb00c78b57268cc735d76ee9f04b91a7c5dfb305fa4aaac2/wting/autojump
|
object
|
github.com
|
GitHub
| null | null |
33,147,176 |
https://www.peterullrich.com/prevent-overlapping-schedules-with-ecto-and-postgres
|
Prevent overlapping time ranges with Ecto and Postgres
| null |
## Hey! I recorded two video courses!
If you like this article, you will also like the courses! Check them out here!
Happy whatever day you’re reading this! Before we begin, let me ask you:
```
What's the best present you can ever give?
A broken drum.
Nobody can beat that.
```
Alright. Now, let’s get down to business. Here’s another question for you: Imagine you build an appointment scheduling service. A service that allows patients to book an appointment with their doctor. You want to prevent double-booking of a doctor for the same time slot. So, if one patient has an appointment on Tuesday from 9am to 10am, another patient shouldn’t be able to book that same time slot. How would you implement this?
If you thought about adding the check somewhere in your Elixir code, you’re not wrong, but there’s a better way. If you thought: “Hey, I can do that in the database!”, congrats to you because that’s **exactly** what we will do today!
## 🔗 A quick dive into Exclusion Constraints
Raise your hand if you ever created a `unique_index`
or added a `not null`
to a field in your migration. Is that a hand up there? Good. That means you have worked with Constraints before. Even if you haven’t raised your hand, chances are that you will encounter a unique index or a non-null constraint soon. However, one constraint that you probably never heard about is an Exclusion Constraint.
A `unique constraint`
guarantees that a database column never contains the same value twice. When used with multiple columns, it guarantees that the columns never contain the same combination of values twice. So, a `unique constraint`
checks the existing data with the input data for `equality`
and fails if you try to insert the same data again.
An `exclusion constraint`
is a much more powerful extension of that. It can modify the existing values using many built-in functions before comparing the existing data with the input data. Instead of a simple `is equal to`
comparison, you can define a list of complex comparisons to check for any condition you like, not only equality. If all comparisons return `true`
, Postgres blocks the insertion of the data. The most common use of `exclusion constraints`
is the `X overlaps Y`
comparison. This comparison is especially useful when you want to prevent two DateTime ranges from overlapping.
We could implement the comparison in our Elixir application as well, but moving it to our database has a few advantages. If we wanted to implement this in Elixir, we would always need at least two database queries. First, we need to check whether an overlapping appointment exists already. Most likely, this would require checking the database with a rather complex query. If the query returns nothing, we go ahead and insert our data. With an exclusion constraint, you can combine both steps into one insertion step and let Postgres handle the complex comparison. Postgres' indexes are highly optimized and written in super-fast C, so using a constraint will probably always beat an Elixir-based implementation.
One caveat though is that you make yourself more dependent on the Postgres database if you use an exclusion constraint instead. You can’t simply migrate to another database that doesn’t support exclusion constraints.
With all that in mind, let’s see how we can implement an exclusion constraint for our appointment scheduling service.
## 🔗 The Schema
Our service has an `Appointment` model, which looks like this:
```
schema "appointments" do
field :doctor_id, :integer
field :canceled, :boolean, default: false
field :from, :utc_datetime
field :until, :utc_datetime
end
```
We create an appointment for a given doctor and two DateTime timestamps `from`
and `until`
. We could implement a custom TSRange type instead of using two individual timestamps, but this will do for now. Users are allowed to cancel an appointment, which is tracked in the `canceled`
field.
## 🔗 The Changeset
Our `appointment`
changeset is relatively simple:
```
def changeset(appointment, attrs) do
appointment
|> cast(attrs, [:doctor_id, :canceled, :from, :until])
|> exclusion_constraint(
:from,
name: :overlapping_appointments
)
|> validate_required([:doctor_id, :from, :until])
end
```
Note the `exclusion_constraint/3` function here (kudos to Michele Balistreri, who implemented this function 👏). It instructs Ecto to convert any violation of the exclusion constraint into a changeset error. If we don't add this, Ecto will raise the violation as an unhandled `Ecto.ConstraintError` exception instead. By using `exclusion_constraint/3`
, we can catch and return proper error messages instead of crashing the process when we try to insert an overlapping appointment. Let’s create the constraint next.
## 🔗 The Migration
We create the exclusion constraint in the migration by executing some custom SQL code. Here’s the entire migration:
```
defmodule Demo.Repo.Migrations.CreateAppointments do
use Ecto.Migration
def up do
create table(:appointments) do
add :doctor_id, :integer
add :canceled, :boolean, default: false
add :from, :utc_datetime
add :until, :utc_datetime
end
execute "CREATE EXTENSION IF NOT EXISTS btree_gist;"
execute """
ALTER TABLE appointments
ADD CONSTRAINT overlapping_appointments
EXCLUDE USING GIST (
doctor_id WITH =,
tsrange("from", "until", '[)') WITH &&
) WHERE (NOT canceled);
"""
end
def down do
drop table(:appointments)
end
end
```
Since we execute custom SQL code with `execute`
, we need to divide the migration into `up`
and `down`
functions instead of using a single `change`
function. Otherwise, Ecto wouldn’t know how to roll back the custom code.
Now, let’s go through the migration. First, we create the `appointments`
table. Nothing fancy here. Then, we enable the `btree_gist`
extension, if it isn’t active already. We need that extension to create an index for the exclusion constraint. Now comes the interesting part.
After creating the table, we alter it and add a constraint called `overlapping_appointments`
. We define the constraint using the EXCLUDE USING instruction. We use a `GIST`
index here, but you can use any other index type as well. Now, let’s look at the comparisons.
Each comparison has the format `field WITH operator`
. You can use any `field`
from your schema and any built-in operator. In our case, we first check the `doctor_id`
for equality with the `=`
operator. This effectively filters the existing data for the `doctor_id`
for which we want to insert an appointment.
Next, we want to compare the appointment time slot using Postgres’ overlap operator `&&`
. This operator is probably the most efficient method to compare overlapping time intervals, geometric shapes, or any other range type. To use it, we first need to create a `timestamp range without a time zone`
from the `from`
and `until`
fields. We do that using the tsrange operator. Its parameters are the lower bound of the range `from`
, the upper bound `until`
, and a description of whether the bounds are `inclusive`
or `exclusive`
.
We want to create a time range up to, but not including, our upper bound and use the `[)`
bounds. This way, we can insert more sane upper bounds like this:
```
%{
from: ~U[2022-01-01 09:00:00Z],
until: ~U[2022-01-01 10:00:00Z]
}
```
The result of the `tsrange` function using these values and the `[)` bounds will be a time range from `09:00:00`
until `09:59:59`
, because it excludes the upper bound. The next appointment can then start at `10:00:00`
and end at `11:00:00`
. This makes the timestamp a bit more readable, but might confuse future-you because you’ll certainly forget about this detail and will wonder why you see overlapping timestamps in your database. So, use exclusive upper bounds mindfully.
If you want to have explicit timestamps, you can use the `[]`
bounds instead. This will include your upper bound in the range. It means that you need to define the appointments with timestamps like this though:
```
%{
from: ~U[2022-01-01 09:00:00Z],
# Note the `09:59:59` upper bound here
until: ~U[2022-01-01 09:59:59Z]
}
```
The next appointment would then start at `10:00:00`
and end at `10:59:59`
. It’s up to you which bounds to use.
Note: If you have timestamps *with* time zones, you can use the `tstzrange` operator instead. BUT: For your own sanity, better use `utc_datetime` for *all* your DateTime data. Trust me. Timezones in the database are *PAIN*.
Now that we have created the range, we can use the `&&` operator to check whether the existing and the input range overlap. The complete comparison looks like this:
```
tsrange("from", "until", '[)') WITH &&
```
The `&&` operator also checks for partial overlaps (e.g. `09:00-10:00` and `09:30-10:30` overlap, and so do `09:00-10:00` and `09:15-09:30`). So, whenever an existing range and the input range overlap, this operator returns `TRUE`
and our insertion will fail. Neat!
Lastly, we want to ignore canceled appointments and add the `WHERE (NOT canceled)` condition. This instructs Postgres to ignore all canceled appointments in the exclusion constraint. We can now insert overlapping appointments if one of them is canceled.
## 🔗 Giving it a spin
With all this in place, let’s run our migrations with `mix ecto.migrate`
and try it out!
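The `Demo.Appointments.create_appointment/1` function used in the examples below isn't shown in the original post; a minimal sketch of such a context function might look like this (the module layout and `Repo` alias are assumptions):
```
defmodule Demo.Appointments do
  @moduledoc "Hypothetical context module for the appointment examples."

  alias Demo.Repo
  alias Demo.Appointments.Appointment

  # Builds a changeset and inserts it; an exclusion constraint violation
  # comes back as {:error, changeset} thanks to exclusion_constraint/3.
  def create_appointment(attrs) do
    %Appointment{}
    |> Appointment.changeset(attrs)
    |> Repo.insert()
  end
end
```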
Let’s create an appointment first:
```
iex> Demo.Appointments.create_appointment(
%{
doctor_id: 1,
canceled: false,
from: ~U[2022-01-01 09:00:00Z],
until: ~U[2022-01-01 09:59:00Z]
}
)
{:ok, _appointment}
```
Now, let’s try to insert an overlapping appointment and see what happens:
```
iex> Demo.Appointments.create_appointment(
%{
doctor_id: 1,
canceled: false,
from: ~U[2022-01-01 09:00:00Z],
until: ~U[2022-01-01 09:59:00Z]
}
)
{:error,
#Ecto.Changeset<
action: :insert,
changes: %{...},
errors: [
from: {
"violates an exclusion constraint",
[
constraint: :exclusion,
constraint_name: "overlapping_appointments"
]
}
],
data: #Demo.Appointments.Appointment<>,
valid?: false
>
}
```
Postgres and Ecto block the insertion and return a proper error message. Nice! Exactly what we wanted 💪
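If you want to turn that changeset into user-facing messages, one possible approach (not shown in the original article) is `Ecto.Changeset.traverse_errors/2`; `attrs` below stands for the same attribute map as in the examples:
```
case Demo.Appointments.create_appointment(attrs) do
  {:ok, appointment} ->
    {:ok, appointment}

  {:error, changeset} ->
    # Collapse the changeset errors into a map of messages per field,
    # e.g. %{from: ["violates an exclusion constraint"]}
    {:error, Ecto.Changeset.traverse_errors(changeset, fn {msg, _opts} -> msg end)}
end
```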
Next, let’s try to create an appointment that overlaps a canceled appointment:
```
iex> Demo.Appointments.create_appointment(
%{
doctor_id: 1,
canceled: true,
from: ~U[2022-01-01 09:00:00Z],
until: ~U[2022-01-01 09:59:00Z]
}
)
{:ok, _appointment}
iex> Demo.Appointments.create_appointment(
%{
doctor_id: 1,
canceled: false,
from: ~U[2022-01-01 09:00:00Z],
until: ~U[2022-01-01 09:59:00Z]
}
)
{:ok, _appointment}
```
This also works! Very nice 🥳
### 🔗 Another small benefit
You might have noticed that we don’t check the order of the `from`
and `until`
timestamps in our changeset. So, a user could insert an `until`
that is earlier than the `from`
, which is not what we want (but should expect). Preferably, we would check this in our changeset before executing any database action, but in case that we forgot about it, Postgres has got our back. Let’s try to insert a turned-around time range and see what happens:
```
iex> Demo.Appointments.create_appointment(
%{
doctor_id: 1,
canceled: false,
from: ~U[2022-01-01 09:59:00Z],
until: ~U[2022-01-01 09:00:00Z]
}
)
** (Postgrex.Error) ERROR 22000 (data_exception) range lower bound must be less than or equal to range upper bound
```
Interesting! Postgres checks the order of our two timestamps when it creates the `tsrange`
. So, without writing any Elixir code, this sanity check for timestamps comes out of the box just by using the exclusion constraint. It is still a good idea to implement the order check in your changeset though.
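A minimal sketch of such an order check, written as a private helper inside the schema module next to `changeset/2` (the helper name is illustrative and assumes the usual `import Ecto.Changeset`):
```
defp validate_time_order(changeset) do
  from = get_field(changeset, :from)
  until = get_field(changeset, :until)

  # Only reject when both timestamps are present and `until` lies before `from`.
  if from && until && DateTime.compare(until, from) == :lt do
    add_error(changeset, :until, "must not be earlier than the start time")
  else
    changeset
  end
end
```
You would then pipe the changeset through `validate_time_order/1` after `validate_required/3` in `changeset/2`.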
## 🔗 Conclusion
And that’s it! I hope you enjoyed this article! If you have questions or comments, let’s discuss them on Twitter. Follow me on Twitter or subscribe to my newsletter below if you want to get notified when I publish the next blog post. Until then, thanks for reading!
| true | true | true |
Block overlapping appointments efficiently in your database using exclusion constraints and don't worry about double-bookings ever again!
|
2024-10-12 00:00:00
|
2022-10-09 00:00:00
| null |
peterullrich.com
|
peterullrich.com
| null | null |
|
12,995,077 |
https://www.gnu.org/philosophy/surveillance-vs-democracy.html
|
How Much Surveillance Can Democracy Withstand? - GNU Project
|
Richard Stallman
|
## How Much Surveillance Can Democracy Withstand?
by Richard Stallman

Thanks to Edward Snowden's disclosures, we know that the current
level of general surveillance in society is incompatible with human
rights. Expecting every action to be noted down makes people censor and
limit themselves. The repeated harassment and prosecution of dissidents,
sources, and journalists in the US and elsewhere provides
confirmation. We need to reduce the level of general surveillance,
but how far? Where exactly is the
*maximum tolerable level of surveillance*, which we must ensure
is not exceeded? It is the level beyond which surveillance starts to
interfere with the functioning of democracy, in that whistleblowers
(such as Snowden) are likely to be caught.
Faced with government secrecy, we the people depend on whistleblowers to tell us what the state is doing. (We were reminded of this in 2019 as various whistleblowers gave the public increments of information about Trump's attempt to shake down the president of Ukraine.) However, today's surveillance intimidates potential whistleblowers, which means it is too much. To recover our democratic control over the state, we must reduce surveillance to the point where whistleblowers know they are safe.
Using free/libre software, as I've advocated since 1983, is the first step in taking control of our digital lives, and that includes preventing surveillance. We can't trust nonfree software; the NSA uses and even creates security weaknesses in nonfree software to invade our own computers and routers. Free software gives us control of our own computers, but that won't protect our privacy once we set foot on the Internet.
Bipartisan legislation to “curtail the domestic surveillance powers” in the U.S. is being drawn up, but it relies on limiting the government's use of our virtual dossiers. That won't suffice to protect whistleblowers if “catching the whistleblower” is grounds for access sufficient to identify him or her. We need to go further.
### Table of contents
- The Upper Limit on Surveillance in a Democracy
- Information, Once Collected, Will Be Misused
- Robust Protection for Privacy Must Be Technical
- First, Don't Be Foolish
- We Must Design Every System for Privacy
- Remedy for Collecting Data: Leaving It Dispersed
- Remedy for Internet Commerce Surveillance
- Remedy for Travel Surveillance
- Remedy for Communications Dossiers
- But Some Surveillance Is Necessary
- Conclusion
### The Upper Limit on Surveillance in a Democracy
If whistleblowers don't dare reveal crimes and lies, we lose the last shred of effective control over our government and institutions. That's why surveillance that enables the state to find out who has talked with a reporter is too much surveillance—too much for democracy to endure.
An unnamed U.S. government official ominously told journalists in 2011 that the U.S. would not subpoena reporters because “We know who you're talking to.” Sometimes journalists' phone call records are subpoenaed to find this out, but Snowden has shown us that in effect they subpoena all the phone call records of everyone in the U.S., all the time, from Verizon and from other companies too.
Opposition and dissident activities need to keep secrets from states that are willing to play dirty tricks on them. The ACLU has demonstrated the U.S. government's systematic practice of infiltrating peaceful dissident groups on the pretext that there might be terrorists among them. The point at which surveillance is too much is the point at which the state can find who spoke to a known journalist or a known dissident.
### Information, Once Collected, Will Be Misused
When people recognize that the level of general surveillance is too high, the first response is to propose limits on access to the accumulated data. That sounds nice, but it won't fix the problem, not even slightly, even supposing that the government obeys the rules. (The NSA has misled the FISA court, which said it was unable to effectively hold the NSA accountable.) Suspicion of a crime will be grounds for access, so once a whistleblower is accused of “espionage,” finding the “spy” will provide an excuse to access the accumulated material.
In practice, we can't expect state agencies even to make up excuses to satisfy the rules for using surveillance data—because US agencies already lie to cover up breaking the rules. These rules are not seriously meant to be obeyed; rather, they are a fairy-tale we can believe if we like.
In addition, the state's surveillance staff will misuse the data for personal reasons. Some NSA agents used U.S. surveillance systems to track their lovers—past, present, or wished-for—in a practice called “LOVEINT.” The NSA says it has caught and punished this a few times; we don't know how many other times it wasn't caught. But these events shouldn't surprise us, because police have long used their access to driver's license records to track down someone attractive, a practice known as “running a plate for a date.” This practice has expanded with new digital systems. In 2016, a prosecutor was accused of forging judges' signatures to get authorization to wiretap someone who was the object of a romantic obsession. The AP knows of many other instances in the US.
Surveillance data will always be used for other purposes, even if this is prohibited. Once the data has been accumulated and the state has the possibility of access to it, it can misuse that data in dreadful ways, as shown by examples from Europe, the US, and most recently Turkey. (Turkey's confusion about who had really used the Bylock program only exacerbated the basic deliberate injustice of arbitrarily punishing people for having used it.)
You may feel your government won't use your personal data for repression, but you can't rely on that feeling, because governments do change. As of 2021, many ostensibly democratic states are ruled by people with authoritarian leanings, and the Taliban have taken over Afghanistan's systems of biometric identification that were set up at the instigation of the US. The UK is working on a law to repress nonviolent protests that might be described as causing “serious disruption.” The US could become permanently repressive in 2025, for all we know.
Personal data collected by the state is also likely to be obtained by outside crackers that break the security of the servers, even by crackers working for hostile states.
Governments can easily use massive surveillance capability to subvert democracy directly.
Total surveillance accessible to the state enables the state to launch a massive fishing expedition against any person. To make journalism and democracy safe, we must limit the accumulation of data that is easily accessible to the state.
### Robust Protection for Privacy Must Limit Technology for Collecting Data
The Electronic Frontier Foundation and other organizations propose a set of legal principles designed to prevent the abuses of massive surveillance. These principles include, crucially, explicit legal protection for whistleblowers; as a consequence, they would be adequate for protecting democratic freedoms—if adopted completely and enforced without exception forever.
However, such legal protections are precarious: as recent history shows, they can be repealed (as in the FISA Amendments Act), suspended, or ignored.
Meanwhile, demagogues will cite the usual excuses as grounds for total surveillance; any terrorist attack, even one that kills just a handful of people, can be hyped to provide an opportunity.
If limits on access to the data are set aside, it will be as if they had never existed: years worth of dossiers would suddenly become available for misuse by the state and its agents and, if collected by companies, for their private misuse as well. If, however, we stop the collection of dossiers on everyone, those dossiers won't exist, and there will be no way to compile them retroactively. A new illiberal regime would have to implement surveillance afresh, and it would only collect data starting at that date. As for suspending or momentarily ignoring this law, the idea would hardly make sense.
### First, Don't Be Foolish
To have privacy, you must not throw it away: the first one who has to protect your privacy is you. Avoid identifying yourself to web sites, contact them with Tor, and use browsers that block the schemes they use to track visitors. Use the GNU Privacy Guard to encrypt the contents of your email. Pay for things with cash.
Keep your own data; don't store your data in a company's “convenient” “cloud” server. It's safe, however, to entrust a data backup to a commercial service, provided you put the files in an archive and encrypt the whole archive, including the names of the files, with free software on your own computer before uploading it.
For privacy's sake, you must avoid nonfree software; if you give control of your computer's operations to companies, they are likely to make it spy on you. Avoid service as a software substitute; in addition to giving others control of how your computing is done, it requires you to hand over all the pertinent data to the company's server.
Protect your friends' and acquaintances' privacy, too. Don't give out their personal information except how to contact them, and never give any web site your list of email or phone contacts. Don't tell a company such as Facebook anything about your friends that they might not wish to publish in a newspaper. Better yet, don't be used by Facebook at all. Reject communication systems that require users to give their real names, even if you are happy to divulge yours, since they pressure other people to surrender their privacy.
Self-protection is essential, but even the most rigorous self-protection is insufficient to protect your privacy on or from systems that don't belong to you. When we communicate with others or move around the city, our privacy depends on the practices of society. We can avoid some of the systems that surveil our communications and movements, but not all of them. Clearly, the better solution is to make all these systems stop surveilling people other than legitimate suspects.
### We Must Design Every System for Privacy
If we don't want a total surveillance society, we must consider surveillance a kind of social pollution, and limit the surveillance impact of each new digital system just as we limit the environmental impact of physical construction.
For example: “smart” meters for electricity are touted for sending the power company moment-by-moment data about each customer's electric usage, including how usage compares with users in general. This is implemented based on general surveillance, but does not require any surveillance. It would be easy for the power company to calculate the average usage in a residential neighborhood by dividing the total usage by the number of subscribers, and send that to the meters. Each customer's meter could compare her usage, over any desired period of time, with the average usage pattern for that period. The same benefit, with no surveillance!
We need to design such privacy into all our digital systems [1].
### Remedy for Collecting Data: Leaving It Dispersed
One way to make monitoring safe for privacy is to keep the data dispersed and inconvenient to access. Old-fashioned security cameras were no threat to privacy(*). The recording was stored on the premises, and kept for a few weeks at most. Because of the inconvenience of accessing these recordings, it was never done massively; they were accessed only in the places where someone reported a crime. It would not be feasible to physically collect millions of tapes every day and watch them or copy them.
Nowadays, security cameras have become surveillance cameras: they are connected to the Internet so recordings can be collected in a data center and saved forever. In Detroit, the cops pressure businesses to give them unlimited access to their surveillance cameras so that they can look through them at any and all times. This is already dangerous, but it is going to get worse. Advances in facial recognition may bring the day when suspected journalists can be tracked on the street all the time to see who they talk with.
Internet-connected cameras often have lousy digital security themselves, which means anyone can watch what those cameras see. This makes internet-connected cameras a major threat to security as well as privacy. For privacy's sake, we should ban the use of Internet-connected cameras aimed where and when the public is admitted, except when carried by people. Everyone must be free to post photos and video recordings occasionally, but the systematic accumulation of such data on the Internet must be limited.
(*) I assume here that the security camera points at the inside of a store, or at the street. Any camera pointed at someone's private space by someone else violates privacy, but that is another issue.
### Remedy for Internet Commerce Surveillance
Most data collection comes from people's own digital activities. Usually the data is collected first by companies. But when it comes to the threat to privacy and democracy, it makes no difference whether surveillance is done directly by the state or farmed out to a business, because the data that the companies collect is systematically available to the state.
The NSA, through PRISM, has gotten into the databases of many large Internet corporations. AT&T has saved all its phone call records since 1987 and makes them available to the DEA to search on request. Strictly speaking, the U.S. government does not possess that data, but in practical terms it may as well possess it. Some companies are praised for resisting government data requests to the limited extent they can, but that can only partly compensate for the harm they do to by collecting that data in the first place. In addition, many of those companies misuse the data directly or provide it to data brokers.
The goal of making journalism and democracy safe therefore requires that we reduce the data collected about people by any organization, not just by the state. We must redesign digital systems so that they do not accumulate data about their users. If they need digital data about our transactions, they should not be allowed to keep them more than a short time beyond what is inherently necessary for their dealings with us.
One of the motives for the current level of surveillance of the Internet is that sites are financed through advertising based on tracking users' activities and propensities. This converts a mere annoyance—advertising that we can learn to ignore—into a surveillance system that harms us whether we know it or not. Purchases over the Internet also track their users. And we are all aware that “privacy policies” are more excuses to violate privacy than commitments to uphold it.
We could correct both problems by adopting a system of anonymous payments—anonymous for the payer, that is. (We don't want to help the payee dodge taxes.) Bitcoin is not anonymous, though there are efforts to develop ways to pay anonymously with Bitcoin. However, technology for digital cash was first developed in the 1980s; the GNU software for doing this is called GNU Taler. Now we need only suitable business arrangements, and for the state not to obstruct them.
Another possible method for anonymous payments would use prepaid phone cards. It is less convenient, but very easy to implement.
A further threat from sites' collection of personal data is that security breakers might get in, take it, and misuse it. This includes customers' credit card details. An anonymous payment system would end this danger: a security hole in the site can't hurt you if the site knows nothing about you.
### Remedy for Travel Surveillance
We must convert digital toll collection to anonymous payment (using digital cash, for instance). License-plate recognition systems recognize all cars' license plates, and the data can be kept indefinitely; they should be required by law to notice and record only those license numbers that are on a list of cars sought by court orders. A less secure alternative would record all cars locally but only for a few days, and not make the full data available over the Internet; access to the data should be limited to searching for a list of court-ordered license-numbers.
The U.S. “no-fly” list must be abolished because it is punishment without trial.
It is acceptable to have a list of people whose person and luggage will be searched with extra care, and anonymous passengers on domestic flights could be treated as if they were on this list. It is also acceptable to bar non-citizens, if they are not permitted to enter the country at all, from boarding flights to the country. This ought to be enough for all legitimate purposes.
Many mass transit systems use some kind of smart cards or RFIDs for payment. These systems accumulate personal data: if you once make the mistake of paying with anything but cash, they associate the card permanently with your name. Furthermore, they record all travel associated with each card. Together they amount to massive surveillance. This data collection must be reduced.
Navigation services do surveillance: the user's computer tells the map service the user's location and where the user wants to go; then the server determines the route and sends it back to the user's computer, which displays it. Nowadays, the server probably records the user's locations, since there is nothing to prevent it. This surveillance is not inherently necessary, and redesign could avoid it: free/libre software in the user's computer could download map data for the pertinent regions (if not downloaded previously), compute the route, and display it, without ever telling anyone where the user is or wants to go.
Systems for borrowing bicycles, etc., can be designed so that the borrower's identity is known only inside the station where the item was borrowed. Borrowing would inform all stations that the item is “out,” so when the user returns it at any station (in general, a different one), that station will know where and when that item was borrowed. It will inform the other station that the item is no longer “out.” It will also calculate the user's bill, and send it (after waiting some random number of minutes) to headquarters along a ring of stations, so that headquarters would not find out which station the bill came from. Once this is done, the return station would forget all about the transaction. If an item remains “out” for too long, the station where it was borrowed can inform headquarters; in that case, it could send the borrower's identity immediately.
### Remedy for Communications Dossiers
Internet service providers and telephone companies keep extensive data on their users' contacts (browsing, phone calls, etc). With mobile phones, they also record the user's physical location. They keep these dossiers for a long time: over 30 years, in the case of AT&T. Soon they will even record the user's body activities. It appears that the NSA collects cell phone location data in bulk.
Unmonitored communication is impossible where systems create such dossiers. So it should be illegal to create or keep them. ISPs and phone companies must not be allowed to keep this information for very long, in the absence of a court order to surveil a certain party.
This solution is not entirely satisfactory, because it won't physically stop the government from collecting all the information immediately as it is generated—which is what the U.S. does with some or all phone companies. We would have to rely on prohibiting that by law. However, that would be better than the current situation, where the relevant law (the PAT RIOT Act) does not clearly prohibit the practice. In addition, if the government did resume this sort of surveillance, it would not get data about everyone's phone calls made prior to that time.
For privacy about who you exchange email with, a simple partial solution is for you and others to use email services in a country that would never cooperate with your own government, and which communicate with each other using encryption. However, Ladar Levison (owner of the mail service Lavabit that US surveillance sought to corrupt completely) has a more sophisticated idea for an encryption system through which your email service would know only that you sent mail to some user of my email service, and my email service would know only that I received mail from some user of your email service, but it would be hard to determine that you had sent mail to me.
### But Some Surveillance Is Necessary
For the state to find criminals, it needs to be able to investigate specific crimes, or specific suspected planned crimes, under a court order. With the Internet, the power to tap phone conversations would naturally extend to the power to tap Internet connections. This power is easy to abuse for political reasons, but it is also necessary. Fortunately, this won't make it possible to find whistleblowers after the fact, if (as I recommend) we prevent digital systems from accumulating massive dossiers before the fact.
Individuals with special state-granted power, such as police, forfeit their right to privacy and must be monitored. (In fact, police have their own jargon term for perjury, “testilying,” since they do it so frequently, particularly about protesters and photographers.) One city in California that required police to wear video cameras all the time found their use of force fell by 60%. The ACLU is in favor of this.
Corporations are not people, and not entitled to human rights. It is legitimate to require businesses to publish the details of processes that might cause chemical, biological, nuclear, fiscal, computational (e.g., DRM) or political (e.g., lobbying) hazards to society, to whatever level is needed for public well-being. The danger of these operations (consider the BP oil spill, the Fukushima meltdowns, and the 2008 fiscal crisis) dwarfs that of terrorism.
However, journalism must be protected from surveillance even when it is carried out as part of a business.
### Conclusion
Digital technology has brought about a tremendous increase in the level of surveillance of our movements, actions, and communications. It is far more than we experienced in the 1990s, and far more than people behind the Iron Curtain experienced in the 1980s, and proposed legal limits on state use of the accumulated data would not alter that.
Companies are designing even more intrusive surveillance. Some project that pervasive surveillance, hooked to companies such as Facebook, could have deep effects on how people think. Such possibilities are imponderable; but the threat to democracy is not speculation. It exists and is visible today.
Unless we believe that our free countries previously suffered from a grave surveillance deficit, and ought to be surveilled more than the Soviet Union and East Germany were, we must reverse this increase. That requires stopping the accumulation of big data about people.
### End Note
- The condition of *not being monitored* has been referred to as ambient privacy.
- In the 2020s, facial recognition deepens the danger of surveillance cameras. China already identifies people by their faces so as to punish them, and Iran is planning to use it to punish women who violate religion-imposed dress codes.
A version of this article was first published in Wired in October 2013.
Also consider reading “A radical proposal to keep your personal data safe,” published in The Guardian in April 2018.
| true | true | true | null |
2024-10-12 00:00:00
|
2024-07-21 00:00:00
| null | null | null | null | null | null |
9,955,787 |
http://kildall.com/bad-data-sf-evictions-and-airbnb/
|
Bad Data: SF Evictions and Airbnb - Scott Kildall
|
Scott Kildall
|
# Bad Data: SF Evictions and Airbnb
The inevitable conversation about evictions at San Francisco every party…art organizations closing, friends getting evicted…the city is changing. It has become a boring topic, yet it is absolutely, completely 100% real.
For the Bad Data series — 12 data-visualizations depicting socially-polarized, scientifically dubious and morally ambiguous dataset, each etched onto an aluminum honeycomb panel — I am featuring two works: *18 Years of Evictions in San Francisco* and *2015 AirBnb Listings* for *exactly* this reason. These two etchings are the centerpieces of the show.
This is the reality of San Francisco, it is changing and the data is ‘bad’ — not in the sense of inaccurate, but rather in the deeper sense of cultural malaise.
By the way, the reception for the “Bad Data” show is this Friday (July 24, 2015) at *A Simple Collective*, and the show runs through August 1st.
The Anti-Eviction Mapping Project has done a great job of aggregating data on this discouraging topic, hand-cleaning it and producing interactive maps that animate over time. They’re even using the Stamen map tiles, which are the same ones that I used for my Water Works project.
When I embarked on the Bad Data series, I reached out to the organization and they assisted me with their data sets. My art colleagues may not know this, but I’m an old-time activist in San Francisco. This helped me with getting the datasets, for I know that the story of evictions is not new and certainly not on this scale.
In 2001, I worked in a now-defunct video activist group called Sleeping Giant, which worked on short videos in the era when Final Cut Pro made video-editing affordable and when anyone with a DV camera could make their own videos. We edited our work, sold DVDs and had local screenings, stirring up the activist community and telling stories from the point-of-view of people on the ground. Sure, now we have Twitter and social media, but at the time, this was a *huge deal* in breaking apart the top-down structures of media dissemination.
Here is * No Nos Vamos *a hastily-edited video about evictions in San Francisco. Yes, this was 14 years ago.
I’ve since moved away from video documentary work and towards making artwork: sculpture, performance, video and more. The video-activist work and documentary video in general felt overly confining as a creative tool.
My current artistic focus is to transform datasets using custom software code into physical objects. I’ve been working with the amazing fabrication machines at Autodesk’s Pier 9 facility to make work that was not previously possible.
This dataset (also provided through the SF Rent Board) includes all the no-fault evictions in San Francisco. I got my computer geek on…well, I do try to use my programming powers for non-profit work and artwork.
I mapped the data into vector shapes using OpenFrameworks, the open-source C++ toolkit, and wrote code which transformed the ~9,300 data points into plottable shapes that I could open in Illustrator. I did some work tweaking the strokes and styles.
This is what the etching looks like from above, once I ran it through the water jet. There were a lot of settings and tests to get to this point, but the final results were beautiful.
The material is a 3/4″ honeycomb aluminum. I tuned the high-pressure from the water-jet to pierce through the top layer, but not the bottom layer. However, the water has to go somewhere. The collisions against the honeycomb produce unpredictable results.
…just like the evictions themselves. We don’t know the full effect of displacement, but can only guess as the city is rapidly becoming less diverse. The result is below, a 20″ x 20″ etching.
###### Bad Data: 18 Years of San Francisco Evictions
The Airbnb debate is a little less clear-cut. Yes, I do use Airbnb. It is incredibly convenient. I save money while traveling and also see neighborhoods I’d otherwise miss. However, the organization and its effect on city economies is a contentious one.
For example, there is the hotel tax in San Francisco, which after 3 years, they finally consented to paying — 14% to the city of San Francisco. Note: this is after they had a successful business.
There also seems to be a long-term effect on rent. Folks, and I've met several who do this, are renting out places as tenants on Airbnb. Some don't actually live in their apartments any longer. The effect is to take a unit off the rental market and mark it as a vacation rental. Some argue that this also skirts the rent-control law in the first place, which was designed as a compromise solution between landlords and tenants.
There are potential zoning issues, as well…a myriad of issues around Airbnb.
###### BAD DATA: 2015 AIRBNB LISTINGS, etching file
In any case, the location of the Airbnb rentals (self-reported, not a complete list) certainly fit the premise of the Bad Data series. It’s an amazing dataset. Thanks to darkanddifficult.com for this data source.
| true | true | true |
The inevitable conversation about evictions at San Francisco every party…art organizations closing, friends getting evicted…the city is changing. It has become a boring topic, yet it is absolutely, completely 100% real. For the Bad Data series — 12 data-visualizations depicting socially-polarized, scientifically dubious and morally ambiguous dataset, each etched onto an aluminum honeycomb panel — I am featuring two works: 18 Years of Evictions in […]
|
2024-10-12 00:00:00
|
2015-07-24 00:00:00
|
article
|
kildall.com
|
Scott Kildall
| null | null |
|
8,850,938 |
http://www.bbc.com/news/technology-30705361
|
CES 2015: Warning over data grabbed by smart gadgets
| null |
# CES 2015: Warning over data grabbed by smart gadgets
**A "deeply personal" picture of every consumer could be grabbed by futuristic smart gadgets, the chair of the US Federal Trade Commission has warned.**
Speaking at CES, Edith Ramirez said a future full of smart gadgets that watch what we do posed a threat to privacy.
The collated data could create a false impression if given to employers, universities or companies, she said.
Ms Ramirez urged tech firms to make sure gadgets gathered the minimum data needed to fulfil their function.
## Losing respect
The internet of things (IoT), which will populate homes, cars and bodies with devices that use sophisticated sensors to monitor people, could easily build up a "deeply personal and startlingly complete picture" of a person's lifestyle, said Ms Ramirez.
The data picture would include details about an individual's credit history, health, religious preferences, family, friends and a host of other indicators, she said.
The granularity of the data that could be gathered by existing devices was without precedent, she said, and likely to get more detailed as time went on.
An individual's preference for reality TV or the History Channel could be tracked by tablets or smart TV sets and then shared with other organisations in a way that could prove damaging, she said.
"Will this information be used to paint a picture of you that you won't see but that others will?" she asked, wondering if it would influence the types of services people were offered, ads they were shown or what assumptions firms made about their lifestyle.
The FTC boss acknowledged that the IoT had the potential to improve health and boost economic growth, but said this should not come at the expense of individual privacy.
"I question the notion that we must put sensitive consumer data at risk on the off-chance a company might someday discover a valuable use for the information," she said.
Data should only be gathered for a specific purpose, said Ms Ramirez, adding that any firm that did not respect privacy would lose the trust of potential customers.
| true | true | true |
A "deeply personal" picture of every consumer could be grabbed by futuristic smart gadgets, warned the chair of the US Federal Trade Commission.
|
2024-10-12 00:00:00
|
2015-01-07 00:00:00
|
article
|
bbc.com
|
BBC News
| null | null |
|
4,901,063 |
http://www.negativland.com/news/?page_id=17
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,607,206 |
http://daily.jstor.org/how-to-read-bones-like-a-scapulimancer/
|
How to Read the Bones Like a Scapulimancer - JSTOR Daily
|
Jacob Mikanowski
|
The earliest surviving Chinese texts are inscribed on the shoulder blades of cattle and turtle shells. These “oracle bones” date back more than 3,000 years, to the time of the Shang Dynasty, in the late Bronze Age. In traditional Chinese historiography, the Shang were the second dynasty to rule over China, after the legendary Xia. For a long time, oracle bones were also considered to be purely fictional, among the legends describing the prehistory of the state. That changed in 1899, when an imperial bureaucrat named Wang Yirong fell ill with malaria and sought treatment from a doctor peddling “dragon bones.”
Wang had taken a position at the court in Beijing, seeking to rescue China before it crumbled under the weight of foreign invasion, domestic chaos, and palace intrigue. He was also a scholar, a student of ancient scripts and antiquities. Shortly after taking up his post, Wang developed a fever. He sent a member of his household out to the apothecary to bring him the appropriate medicine. The servant returned with a packet of Dr. Fan’s Fresh Dragon Bones, ready for grinding. The bones were indeed fresh; they were washed out of a riverbank earlier that year by flooding near a village called Xiaotun. Luckily, Wang and his friend, the novelist Liu E, noticed that they were inscribed with figures, similar to early Chinese, but—to them—indecipherable. Here, they realized, was an example of an ancient version of the Chinese script, so old it hadn’t been recorded anywhere else.
Before they could do anything about this discovery, history intervened. The Boxer Rebellion began in the north. Wang watched as the rebels were mown down by gunfire, and, in despair, he committed suicide. It was left to Liu, Wang’s friend, to continue the work of investigating and assembling the inscribed oracle bones. He tracked the original shipment to an itinerant medicine salesman from Shanxi named Fan Weiqing. But Fan didn’t want to tell Liu where the bones came from, for fear that he would try to muscle him out of the dragon bone business. Discouraged, Liu traveled across the countryside, buying up all the bones he could get his hands on without ever discovering their source. Eventually, he built up a sizable corpus of inscriptions.
In the 1930s, however, Chinese scholars pinpointed where the bones originated, and set about conducting comprehensive excavations of the ancient royal capital at Anyang (ancient Yin).
The oracle bones provide a window into the daily life of the court, albeit a strange one, since most of the surviving text doesn’t concern ordinary communication between people, but rather, between people and gods. Divination was a way of addressing questions to the spirit world, alerting ancestors about the daily activities of king and court. Gaining spiritual approval for their plans was a task of signal importance: In the words of David Keightley, the foremost Western interpreter of the oracle bones, “to the Shang kings… pyromantic reassurance was the sine qua non of daily life.” (*Pyromancy* is a general term for divination by fire. *Scapulimancy* is the technical term for divination by shoulder bone; *plastromancy* is divination by turtle shell.)
Prophecies had to be gathered and assessed every day. Divination was a complex, technical process that could only be performed by experts. It required a tremendous expenditure of resources and time. Shang notations show the receipt of as many as 500 or 1,000 turtle shells at a time. (These were most likely delivered to the Shang from southern tributaries, though there are signs they may also have come from turtles bred in captivity on-site.) Similar numbers of oxen—whether foreign or domestic—would be slaughtered in connection with the prophesying as well.
Before divination could be performed, the bones had to be prepared. First, they were stripped of flesh. Then, craftsmen drilled a series of small pits into the shells. When it was time to ask the spirits a question, they inserted hot brands into the hollows and waited for the shell to crack. The shape of the crack contained the spirits’ response to each question. The questions were inscribed onto the surface of the bone. Usually, they were presented as a neutral statement (for instance, “In the present moon, it will rain”), allowing the spirits to reply in the affirmative or negative.
This pantheon they addressed included nature powers, former lords, and predynastic powers, a supreme power called Di, and legendary ancestors like Ku, one of whose consorts allegedly gave birth to the founder of the dynasty after swallowing an egg left by a dark bird. The questions covered the full range from personal to political. The diviners of the Shang court asked about the state of the harvest, the success of military campaigns, and the proper order of sacrifices, including human sacrifices. They also probed more mundane matters. It was important to keep the spirit world apprised of goings on at court. If the king was going to go hunting, the former lords had to be alerted, just as the nature powers had to be informed if he intended to dance for rain.
Often, the crucial thing to learn was which ancestor was causing trouble or needed to be appeased. If the king had a toothache, a series of questions would be addressed to his distinguished predecessors in the spirit world—his uncle, father, grandfather, etc.—to find out which one was causing the pain (in the record we have, it turned out to be his uncle).
The king himself was the chief diviner in the kingdom. He interpreted the cracks, and his prognostications were recorded on the bones. This necessitated careful record keeping. A scribe engraved the precise date of each divination on the bone. Often, they also noted its result. Since the prestige of the king’s rule depended on his mantic powers, the post-facto fact-checking of his prophecies almost always tilted in his favor. Sometimes, though, the postmortem reports show a hint of disagreement: Rain fell on the wrong day or when it wasn’t forecast at all. At other times, the king hedged his bets: A birth would be auspicious (i.e., male—the Shang were as patriarchal as their counterparts in the Western Bronze Age) if it happened on one of two days. When the queen then gave birth on one of the other eight days of the week and it turned out to be inauspicious (female), he couldn’t be said to be wrong, he just wasn’t precisely accurate.
Some scholars see in this small sliver of discrepancy the birth of objective history. Others suspect that literacy itself developed in response to the needs of diviners—writing was invented not as a means of communication among humans, but between humans and gods.
But these are speculations. What is certain is that the Shang elevated the art of divination by bone to its greatest heights. They were, however, not the only ones to practice it. Everywhere from Yamato, Japan, to the Arabian Peninsula, people practiced scapulimancy. A scapula warned Attila about his defeat at the battle of Châlons, while Genghis Khan used scorched sheep bones as a check on the reports of his astrologers. One of the most intriguing uses of scapulimancy comes from the Naskapi, an Algonquian-speaking tribe who lived in the tundra lands of Canada’s Labrador Peninsula. The pattern of cracks in a caribou shoulder blade contained their prophecies. The most important question was always this: What direction should the hunters go in search of game? This was a matter of life and death. First, the hunters would try to find the answer in a dream. When they woke up, the bone would hold the key to interpreting what they saw while they were asleep.
In the 1950s, the Yale anthropologist Omar Khayyam Moore theorized that there was a good reason for this practice. If the Naskapi hunters pursued caribou in the same places year after year, they risked depleting their stocks to fatally low levels. The unpredictable nature of the shoulder bone cracks served as a check on any unconscious biases that could have led to overhunting from pursuing the same paths season after season. If this is true, the practice supports two vastly different ideas of prophecy. To the Naskapi, the scapula was a random number generator, a 20-sided die, a way of trusting chaos to guarantee the security of the tribe. For the Shang, by contrast, the scapula bone worked like a radio, tuning in to the spirit world. Its predictions were above all a way of keeping uncertainty at bay.
The matter for which the oracle bones were most often consulted was this: “In the next 10 days, there will be no disasters.” It’s less of a question, really, than a wish—one rooted in an anxiety that, despite the gulf of time, feels all too familiar.
| true | true | true |
In Shang Dynasty China, fortune-telling with oracle bones was the key to political power.
|
2024-10-12 00:00:00
|
2016-09-28 00:00:00
|
article
|
jstor.org
|
JSTOR Daily
| null | null |
|
7,285,054 |
http://www.nasa.gov/content/goddard/planet-sized-space-weather-explosions-at-venus/#.UwdzrvldXv5
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
17,949,091 |
https://medium.com/the-blogging-life/a-week-of-reflection-vol-20-sept-9-2018-55fefadebf4a
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,913,332 |
https://rootsofprogress.org/a-builder-manifesto
|
A builder manifesto
| null |
by Jason Crawford · April 18, 2020 · 3 min read
Marc Andreessen says we were unprepared for the covid pandemic because: “We chose not to *build*.”
The problem isn’t money, or capitalism, or technical competence:
The problem is desire. We need to *want* these things. The problem is inertia. We need to want these things more than we want to prevent these things. The problem is regulatory capture. We need to want new companies to build these things, even if incumbents don't like it, even if only to force the incumbents to build these things. And the problem is will. We need to build these things.
Amen. And I’ll add:
The problem is ignorance. We don’t know how far we’ve come, and we don’t teach our children.
The problem is complacency. We take progress for granted, as if it were automatic or inevitable. It isn’t: for most of human history we moved forward only weakly and sporadically. Progress only happens when we resolve to make it happen.
The problem is fear. Every harm, large or small, actual or potential, real or imagined, becomes a rule, a regulation, a thread in a ribbon of red tape that has brought sclerosis to our institutions, public and private. We’ve bought safety at the price of speed, without debating or deciding it.
The problem is guilt. We worry that what we build is “unsustainable”, or that it increases inequality, when we should be proud that what we build drives growth and moves humanity forward.
The problem is hatred. Hatred of technology, of industry, of money, of capitalism—hatred that blinds people to the immense good these forces do in the world for all humanity.
The problem is entitlement. Not knowing what it takes to put food on the table, the roof over their heads, or the shirts on their backs, too many people see these things and much more as birthrights. Not knowing that wealth is *created*, they see the rich as thieves and scorn even their gifts.
The problem is tribalism. We don’t teach our children to think, and so they learn only to feel. Without a commitment to truth, without confidence in their own judgment, they fall back on age-old patterns of ingroup vs. outgroup—witness a world where even the efficacy of a drug becomes a partisan issue.
What is the solution?
We need to learn to appreciate progress—both what we’ve already done, and why we can’t stop now. We need to tell the amazing story of progress: how comfort, safety, health, and *luxury* have become commonplace, and what a dramatic achievement that has been. We need to learn where progress comes from, to understand its causes. And we need to pass all that knowledge on to the next generation.
We need to glorify the inventor, the creator, the maker—the *builder*. The independent mind who defies tradition and authority. The scientists, technologists and industrialists who pursue a creative vision, against the crowd and against the odds, facing risk with courage and setbacks with resilience, working relentlessly over years and decades to bring about a better world.
We need to inspire young people to take part in this story, to step up in their turn and to one day lead the way, knowing that it is up to each generation to pick up the torch of progress and carry it forward.
We need to invest. We need to fund science and research, both basic and applied. We need to bring back the great corporate invention labs that helped create the modern world.
And then we need to get out of the way. Unwind the regulatory state. No matter where you fall on any political spectrum, acknowledge that the creeping bureaucracy has crept too far, and that it’s time to start untangling the thicket of regulations. We can maintain a reasonable and even continually increasing standard of safety, while at the same time valuing speed, efficiency, and cost, and most fundamentally, allowing for individual judgment.
Andreessen concludes:
Our nation and our civilization were built on production, on building. Our forefathers and foremothers built roads and trains, farms and factories, then the computer, the microchip, the smartphone, and uncounted thousands of other things that we now take for granted, that are all around us, that define our lives and provide for our well-being. There is only one way to honor their legacy and to create the future we want for our own children and grandchildren, and that’s to build.
Amen. Let’s make building and progress into a philosophy, a religion, a movement.
Build!
| true | true | true |
Let's make progress into a philosophy, a religion, a movement
|
2024-10-12 00:00:00
|
2020-04-18 00:00:00
|
article
|
rootsofprogress.org
|
The Roots of Progress
| null | null |
|
37,143,768 |
https://sublimeinternet.substack.com/p/letter-to-a-friend-who-is-thinking
|
letter to a friend who is thinking of starting something new
|
Sari Azout
|
# letter to a friend who is thinking of starting something new
### if you are thinking of leaving your job to start a company or passion project, this letter is for you too
*Happy Sunday, friends. In 2020, I wrote a letter to a friend who was thinking about leaving her job to pursue her passion. If you’re a founder/artist/creator/entrepreneur, this letter is for you too. *🙏🏽
Dear friend,
I know you’ve been thinking about doing something on your own.
There are a lot of resources to draw upon to help you find product-market fit. There are less resources designed to prepare you for the psychological challenges.
The passion economy promises to bring greater alignment between your life and your work. And yet, in the passion economy, the real risk is that your job has to earn a living and meet the needs of your soul.
As you let the possibility of merging your career and your passion live in your mind, I’ve tried to distill the patterns that I’ve observed into a series of questions:
- *Will you use this opportunity to grow and evolve or will you use it to beat yourself up?*
- *How will you avoid insecurity work?*
- *Can you learn to enjoy the process as the end in itself, not the means?*
- *Will you default to the norms of your industry, or will you be an original?*
- *What tools will you use to quiet your ego and see reality clearly?*
- *Do you have clarity on what kind of financial value you aim to create?*
**Will you use this opportunity to grow and evolve or will you use it to beat yourself up?**
Ideas start out small, weak and fragile.
In order to grow, ideas need financial capital.
But they also need emotional capital — good energy, positivity, and resilience. The best way to control your emotional capital is to fine tune your internal monologue and replace your hunger for approval with a desire to grow.
Jeff Bezos has a wonderful quote about this:* “Invention requires a long-term willingness to be misunderstood. When you do something that you genuinely believe in, that you have conviction about, for a long period of time, well-meaning people may criticize that effort. To sustain yourself over this time, you can’t look for accolades, and you can’t rely on being understood.”*
The only remedy I know is patience.
Stay the course.
**How will you avoid insecurity work?**
When you’re anxious, there is no quicker relief than checking things. Checking your Substack subscriber stats, refreshing your inbox and order confirmations.
Scott Belsky coined the term insecurity work to describe work that does not move the ball forward, but is quick enough that you can do it multiple times a day without realizing.
Unlike insecurity work, deep work often feels elusive because it takes time.
It requires weaning yourself from distractions and being unencumbered by the highs and lows of the day to day.
Your ultimate objective is to ride the waves of your business with serenity.
**Can you learn to enjoy the process as the end in itself, not the means?**
In the beginning, the dissonance between the scale of your aspirations and the reality of your days will riddle you with anxiety. You will be tempted to strip the unknown of its surprises and travel to the future: What if my customers churn? What if a competitor introduces a better product? What if I run out of ideas?
But spending your time wrestling with the future is an invitation to ride on the envy trolley, to look at another’s peaks with jealousy and end your days in sadness.
Joseph Campbell’s words ring so true here: *“If you can see your path laid out in front of you step by step, you know it's not your path. Your own path you make with every step you take. That's why it's your path.”*
Looking back won’t serve you. Looking above won’t serve you. The trick is to look ahead, like a biker riding into the sunset, with lightness, excitement and humility.
**Will you default to the norms of your industry, or will you be an original?**
Your business exists in the context of a marketplace, but also in the context of your lived experience.
Defaulting to the norms of your industry will shape your business to be similar to the rest, whereas the best entrepreneurs zero in on their self-expression. Do you have an eye for good design? Inject design into a tasteless industry. Do you have a knack for writing? Share your journey and commit to learning in public. Are you funny? Inject humor into your copy.
You have to be willing to overcome the defaults and orient your business around the things that define you, all the way down to your KPIs. The things you measure should reflect the things you value.
When you’re feeling small, remind yourself this is the artist’s struggle and find comfort in Anna Quindlen’s words: *Once you've read Anna Karenina, Bleak House, The Sound and the Fury, To Kill a Mockingbird, and A Wrinkle in Time, you understand that there is really no reason to ever write another novel. Except that each writer brings to the table, if she will let herself, something that no one else in the history of time has ever had. And that is herself, her own personality, her own voice.*
**What tools will you use to quiet your ego and see reality clearly?**
In fear mode, your brain will bend reality to meet your prior experiences and vulnerabilities.
You have to free your mind of the narrative that the world is out to get you, and notice the difference between imagination and reality. When you catch yourself saying “nobody likes my work”, witness your thoughts and replace them with “I am struggling”.
I’ve found a useful way to develop a relationship with the truth is to step out of yourself and consider your circumstance from a cosmic perspective.
**Do you have clarity on what kind of financial value you aim to create?**
Do you have enough savings to sustain yourself during a ramp up period? If it takes longer to ramp up, how will you gain financial security? Is your significant other on board? Will 1,000 fans paying $10 a month satisfy you? Is your goal to build a sustainable business or are you willing to spend 10 years building a business on a lottery chance to take it public? Are your financial goals truly yours or are they borrowed from somebody else?
I love this quote from marathoner Dick Collins: “*Decide before the race the conditions that will cause you to stop and drop out. You don’t want to be out there saying, ‘Well gee, my leg hurts, I’m a little dehydrated, I’m sleepy, I’m tired, and it’s cold and windy.’ And talk yourself into quitting. If you are making a decision based on how you feel at that moment, you will probably make the wrong decision.”*
Friend, I hope you can see that what will propel you to thrive has a little to do with your skills and a lot to do with your mind.
I hope you get to know your inner world. I hope you thrive financially while living your values. I hope you focus less on what you achieve and more on who you become. I hope you learn to be kind to yourself. I hope you fall in love with the process. I hope you see the point of pursuing passion work is not to drain yourself to create work that eclipses your life, but rather to create a life you are proud of. I hope this new venture takes you far away from conformism and enables you to make a life and a living on your own terms, with your spirit and creativity unhindered.
Sari
I'll be completely honest - I don't remember the last time I stopped and read an entire issue of a newsletter. Sari, you have no idea how timely this issue was for me.
As someone who has been starting (and launching) creative projects for over a decade, I've come across A LOT of tactical advice and resources (like you mention on product-market fit), but I have always felt a huge gap when it comes to prepping people for the psychological and emotional toll it takes on you. You definitely struck some chords for me and I can't thank you enough for sharing. saving this issue to revisit early and often.
Sari, this was amazing. Thanks a lot for sharing this beautiful piece full of wisdom and kindness.
| true | true | true |
if you are thinking of leaving your job to start a company or passion project, this letter is for you too
|
2024-10-12 00:00:00
|
2023-07-30 00:00:00
|
article
|
substack.com
|
The Sublime
| null | null |
|
300,509 |
http://myreckonings.com/wordpress/2008/01/09/the-art-of-nomography-i-geometric-design/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
39,427,091 |
https://daringfireball.net/linked/2024/02/18/epic-games-ios-store
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
1,040,055 |
http://arvindn.livejournal.com/122550.html
|
Log in
| null |
| true | true | true |
Your life is the best story! Just start your blog today!
|
2024-10-12 00:00:00
|
2003-01-01 00:00:00
|
website
|
livejournal.com
|
livejournal.com
| null | null |
|
39,343,875 |
https://www.theregister.com/2024/02/10/anz_bank_github_copilot/
|
ANZ Bank finds GitHub Copilot makes coders more productive
|
Thomas Claburn
|
# ANZ Bank test drives GitHub Copilot – and finds AI does give a helping hand
## Expert Python programmers saw the most benefit
GitHub Copilot has steered software engineers at the Australia and New Zealand Banking Group (ANZ Bank) toward improved productivity and code quality, and the test drive was enough for the finance house to deploy the generative AI programming assistant in production workflows.
From mid-June, 2023 through the end of July that year, the Melbourne-based ANZ Bank conducted an internal trial of GitHub Copilot that involved 100 of the firm's 5,000 engineers.
The six-week trial, consisting of two weeks of preparation and four weeks of code challenges, sought to examine how participants felt about using GitHub Copilot with Microsoft Visual Studio Code and to measure the impact the AI-based system had on programmers' productivity, code quality, and software security.
The experiment's findings have been documented in a report with a title that could use a little more finesse: "The Impact of AI Tool on Engineering at ANZ Bank, An Empirical Study on GitHub Copilot within Corporate Environment."
Co-authored by Sayan Chatterjee, cloud architect at ANZ, and Louis Liu, engineering AI and data analytics capability area lead at ANZ, the report cites several prior studies about programming productivity with Copilot.
One study from Microsoft, which now owns GitHub, found coding with an AI assistant improved productivity by more than 55 percent – not a surprise given other vendor surveys.
An ACM/IEEE study on programming with AI help suggested robo-assistance was more of a trade-off: It found that Copilot generated more code, although the quality of software generated was worse than human-built software.
ANZ Bank sought to conduct its own evaluation, citing the potential benefit of AI on productivity while also acknowledging that the technology "raises inherent risks, uncertainties and unintentional consequences regarding intellectual property, data security and privacy."
Those risks – highlighted by the ongoing copyright lawsuit against GitHub, Microsoft, and OpenAI over Copilot – aren’t addressed in the study, except as a nod to regulatory compliance.
"Prior to starting the experiment, risks related to intellectual property, data security and privacy were assessed in conjunction with ANZ’s legal and security teams to arrive at a set of guidelines," it said.
The bank’s experiment examined what effect Copilot has on developer sentiment and productivity, as well as on code quality and security. It required participating software engineers, cloud engineers, and data engineers to tackle six algorithmic coding challenges per week using Python. Those in the control group were not allowed to use Copilot but were allowed to search the internet or use Stack Overflow.
"The group that had access to GitHub Copilot was able to complete their tasks 42.36 percent faster than the control group participants," the report says. "...The code produced by Copilot participants contained fewer code smells and bugs on average, meaning it would be more maintainable and less likely to break in production."
Both of these results were deemed statistically significant. As for security, the experiment was inconclusive.
"The experiment could not generate meaningful data which would measure code security, "the report says. "However, the data suggest that Copilot did not introduce any major security issues into the code."
The data suggest that Copilot did not introduce any major security issues into the code
This may have been due to the nature of the challenges, which were designed to be short enough that participants could complete them along with their usual daily work. As such, the submitted challenges were fairly short and didn't leave a lot of room for bugs, the report notes.
In terms of sentiment, those using Copilot felt positive about the experience, though not strongly so.
"They felt it helped them review and understand existing code, create documentation, and test their code; they felt it allowed them to spend less time debugging their code and reduced their overall development time; and they felt the suggestions it provided were somewhat helpful, and aligned well with their project's coding standards," the report says.
One intriguing finding is that Copilot was the most useful to the most experienced programmers.
"Assessment of productivity based on Python proficiency found Copilot was beneficial to participants for all skill levels but was most helpful for those who were 'Expert' Python programmers," the study says, adding that the AI helper provided the most improvement (in terms of time saved) on hard tasks.
While observing that the mildly positive endorsements from participants indicate that Copilot can be improved further, the report nonetheless endorsed putting Copilot into production workflows at the bank.
"As of the writing of this paper, GitHub Copilot has already seen significant adoption within the organization, with over 1,000 users using it in their workflows," the report concludes, adding that a broader investigation of the Copilot's productivity impact is underway. ®
**Counterpoint:** AI assistance is leading to lower source code quality, researchers claim
| true | true | true |
Expert Python programmers saw the most benefit
|
2024-10-12 00:00:00
|
2024-02-10 00:00:00
|
article
|
theregister.com
|
The Register
| null | null |
|
10,367,139 |
http://www.careermetis.com/this-is-how-i-hack-into-my-flow-state-everyday/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,685,977 |
http://frontendmasters.com/courses/responsive-web-design/retrofitting-twitter/
|
https://frontendmasters.com/courses/css-grid/
| null | null | true | true | false | null |
2024-10-12 00:00:00
| null | null | null |
frontendmasters.com
|
frontendmasters.com
| null | null |
18,345,993 |
https://lithub.com/how-horror-changed-after-wwi/
|
How Horror Changed After WWI
| null |
# How Horror Changed After WWI
### W. Scott Poole on the Abyss Opened Up By the Great War
What exactly constitutes horror? Being spooked by the dark, and by the dead who might return in it, may have haunted the earliest human consciousness. Ceremonial burial predates all written history; the act apparently represented an effort to placate the corpse so it would not make an unwelcome return. The roots of religion itself may be in this impulse, with gifts to the dead constituting the first ritual.
In fact, much of what we think of as “natural human life” may stem from the terror of death and of the dead. Even sexual desire, and our constantly changing conceptions of gender roles that accompany it, may have much to do with the terror of the dead. The urge to reproduce, once inextricably linked to sex, may have a connection to a neurotic fantasy of cheating death by creating an enduring legacy. You can test the primal strength of this cultural idea by noting how no one questions the rationality of reproduction, even in a world of rapidly dwindling resources. Meanwhile, people who choose not to have children often receive both religious and secular disdain as selfish, the breakers of an unspoken social contract, or simply odd.
Does the fear of death that drives us mean that horror has always been our dark companion, a universal human experience in which cave paintings and movie screens are simply different media for the same spooky message? Not exactly. The idea of death and ruin as entertainment, even something one could build a lifestyle around, appears first in the 18th century in novels like Horace Walpole’s *The Castle of Otranto* (1764) and Matthew Lewis’s *The Monk* (1796). Commentators called this taste Gothic because the interest in ruins and castles called to mind the Gothic architecture of the Middle Ages.
Our contemporary term “goth,” used to describe everything from a style of music to black fingernail polish, of course comes from those 18th-century goths. The wealthy of that century could fully indulge this new fascination, turning estates into faux medieval manors and forcing their servants, on top of all their other indignities, to appear at parties dressed in robes that made them look like what Clive Bloom describes as “ghoulish monks.” It’s hard to call this precisely a popular taste, as the novels of suspense that inspired these ideas were damned or banned in some places and very few people had a suitable estate, or enough money or servants, for playing haunted house.
“Horror” in today’s sense had yet to be born. The word existed, but it had an appropriately weird history. According to the Oxford English Dictionary, “horror” first appeared in the 14th century as a synonym for “rough” or “rugged.” Over centuries, the term started to carry the connotation of something so “rough,” in the sense of “sordid and vulgar,” that it caused a physical shudder.
Once Gothic romanticism appeared in the late 18th century, the word “horror” showed up in poetry and prose in something close to the modern sense. English Romantic poet Robert Southey’s praise of “Dark horror” in 1791 echoed his interest in eerie works in Germany. Even here, the word suggests something a bit sordid rather than spookiness for fun. In a 1798 treatise entitled “On Objects of Terror,” English essayist Nathan Drake used “horror” interchangeably with “disgust” and advised artists only to “approach the horrid” rather than actually enter its darkened halls. In Drake’s view, “horror” meant physical revulsion, not a good scare. Even into the 1800s, the meaning of the word always suggested the body in a state of intense distress. A medical manual from 1822 used the word in this way: “The first attack [of sickness] commences with a horror.”
“The war has left its imprint in our souls [with] all these visions of horror it has conjured up around us,” wrote French author Pierre de Mazenod in 1922, describing the Great War. His word, *horreur*, appears in various forms in an incredible number of accounts of the war, written by English, German, Austrian, French, Russian, and American veterans. The years following the Great War became the first time in human history the word “horror” and its cognates appeared on such a massive scale. Images of catastrophe abounded. The Viennese writer Stefan Zweig, one of the stars in the firmament of central Europe’s decadent and demonic café culture before 1914, wrote of how “bridges are broken between today and tomorrow and the day before yesterday” in the conflict’s wake. Time was out of joint. When not describing the war as horror, the imagery of all we would come to associate with the word appeared. One French pilot passing over the ruined city of Verdun described the landscape as a haunted waste and a creature of nightmare, “the humid skin of a monstrous toad.”
The horror of the Great War consumed the lives of soldiers and civilians alike; it sought them out in their sleep, their imagination, and, bizarrely, in their entertainments. The “horror film” had existed almost from the time of the invention of the motion picture itself in the late 19th century. But a new kind of terror film manifested in the years following the Great War. The spook shows not only became more numerous; they took a ghastly turn, dealing more openly with the fate of the dead, even the bodies of the dead. Moreover, an unclassifiable kind of fiction began to appear, frequently called “weird” (as in the pulp magazine *Weird Tales*, 1923) because there seemed no better word for it. The public, and its practitioners, began calling it horror fiction. Art and literature pursued some of the same themes, even though at the time a strict division between “high” and “low” culture prevented many critics from seeing horror on canvas or in poetry, and still prevents a few today. British modernist Virginia Woolf wrote that certain feelings proved no longer possible after 1914; a sentiment might be expressed in words, she felt, but the body and mind could not experience the sensibilities “one used to have.”
By the same token, new feelings about death and the macabre began to seek expression. The root of these new cultural forms had a specific and terribly uncomfortable origin: the human corpse. Stirring up the primal, perhaps universal fear of the dead, the Great War had placed human beings in proximity to millions of corpses that could not be buried. Worse, many could not be identified, and more than a few did not even look like what we think human bodies should look like. Shells, machine guns, gas, and a whole array of technology had muddled them into misshapen forms, empty matter, if still disturbingly organic. The horror of the Great War, traumatically reenacted over and over and over after 1918 down to the present moment, drew its chill from the shattered, bloated, fragmented corpses that covered the wastelands made by the war.
My reading of the roots of horror in the Great War is anything but Freudian. That said, Sigmund Freud appears throughout this account as one who not only was affected by the war but also made some interesting observations about how 1914–1918 transformed European culture and consciousness. He complicated his own ideas because of the conflict, conceding that a death instinct may play at least as important a role in how human beings experience life as sexuality and childhood traumas related to it. Thanatos can be as significant, sometimes more significant, than Eros.
One of the ideas Freud entertained concerned how the war changed the subjects that fiction might explore. “It is evident that the war is bound to sweep away [the] conventional treatment of death,” he wrote in 1915. While his own sons and many of his students fought at the front, he declared, “Death will no longer be denied; we are forced to believe in it. People really die; and no longer one by one. But many, often tens of thousands, in a single day.” Freud hoped, and he expressed it only as a hope, that the return of “primitive” passions, a kind of “bedazzlement” with death, would end when peace returned. He would be mightily disappointed.
One unlikely voice, who indeed had luckily avoided service in the Austro-Hungarian army, helped explain this new, festering reality. Walter Benjamin loved hashish and women too much to get around to writing a proper history; instead, he wrote essays of such beauty and depth that we are still puzzling and wondering over them almost 80 years after his death. He did, during the period that this book covers, fitfully scribble at essays, fiction, personal reflections, and a sprawling unfinished work he called simply “The Arcades Project.”
In 1936, four years before his untimely death, Benjamin wrote an essay he called “The Storyteller,” in which, amid his meditation on the nature of memory, he said this about the Great War:
Was it not noticeable that at the end of the war men returned from the battlefield grown silent—not richer, but poorer in communicable experience? What 10 years later was poured out in the flood of war books was anything but the experience that pours from mouth to mouth.
Benjamin of course knew about, and often criticized, the vast literature that the war produced, especially those works that talked of the “sublimity” of conflict, praised the “beauty” of unremitting violence, and often shaded over into fascism’s dreams of mythic warriors. He wrote the words above about the armless veteran one saw at the café, the cousin who had been at Ypres or Gallipoli and sat silent at family gatherings, sometimes staring off into the middle distance. He wrote of the men who wore masks meant to hide their terrible facial wounds, disfigurements sometimes made even more eerie by the first, halting efforts at combat plastic surgery.
Of these men’s experience of war, Benjamin continued:
A generation that had gone to school on a horse-drawn streetcar now stood under the open sky in the countryside in which nothing remained unchanged but the clouds and, beneath those clouds, in a field of force of destructive torrents and explosions, was the tiny, fragile human body.
These vulnerable bodies—millions of which would be transformed into corpses by history’s first fully mechanized killing machine, which we call World War I—haunt every decade of the 20th century. Their eyes, filled first with shock and soon with nothingness, became the specters of despairing creativity for a generation of filmmakers, writers, and artists who themselves often went about their work with shattered bodies and psyches. All of the uncertainty and dread that folklore and popular tale had ever associated with automata, the disconcerting effect of a mirror, shadows, and puppets seemed suddenly to become historical reality in the sheer number of millions of dead, millions more permanently disabled and disfigured, and bodies that came marching home like empty husks, the person whom family and friends had known before 1914 having been left in another place.
We must write of such things, Benjamin counseled, “with as much bitterness as possible.” For many in his generation, bitterness even proved too weak a concoction.
I hope the air feels thick with static, the smell of an alchemy gone awry, a precursor to what Benjamin once described as a “single catastrophe” that tore history apart like a massive explosion, “piling wreckage upon wreckage.” Because not only did single acts of horror happen, they produced a world of horror that we still live in, both in our imaginations and in our daily lives. The artists, writers, and directors who experienced the Great War, most of them directly, never stopped having the same nightmare, over and again, a nightmare they told the world. Meanwhile, like a spell gone wrong, the Great War conjured up a new world, a sort of alternate reality distinct from what most people before 1914 expected their lives to be. It was a dark dimension where horror films, stories, and art became a Baedeker’s guide to the new normal rather than entertaining diversions. Monsters had come out of the abyss.
I have tried to write of these things with as much bitterness as possible.
__________________________________
*From *Wasteland: The Great War and The Origins Of Modern Horror. *Courtesy of Counterpoint Press. Copyright 2018 by W. Scott Poole.*
| true | true | true |
What exactly constitutes horror? Being spooked by the dark, and by the dead who might return in it, may have haunted the earliest human consciousness. Ceremonial burial predates all written history…
|
2024-10-12 00:00:00
|
2018-10-31 00:00:00
|
article
|
lithub.com
|
Literary Hub
| null | null |
|
20,116,532 |
https://spectrum.ieee.org/automaton/robotics/space-robots/jpl-design-for-a-clockwork-rover-to-explore-venus
|
JPL's Design for a Clockwork Rover to Explore Venus
|
Evan Ackerman
|
The longest amount of time that a spacecraft has survived on the surface of Venus is 127 minutes. On March 1, 1982, the USSR’s Venera 13 probe parachuted to a gentle landing and managed to keep operating for just over two hours by hiding all of its computers inside of a hermetically sealed titanium pressure vessel that was pre-cooled in orbit. The surface temperature on Venus averages 464 °C (867 °F), which is hotter than the surface of Mercury (the closest planet to the sun), and hot enough that conventional electronics simply will not work.
It’s not just the temperature that makes Venus a particularly nasty place for computers—the pressure at the surface is around 90 atmospheres, equivalent to the pressure 3,000 feet down in Earth’s ocean. And while you can be relieved that the sulfuric acid rain that you’ll find in Venus’ upper atmosphere doesn’t reach the surface, it’s also so dark down there (equivalent to a heavily overcast day here on Earth) that solar power is horrendously inefficient.
Surface photographs from the Soviet Venera 13 probe, which landed on Venus and operated for just over two hours. Images: NASA
The stifling atmosphere that makes the surface of Venus so inhospitable also does a frustratingly good job of minimizing the amount that we can learn about the surface of the planet from orbit, which is why it would be really, really great to have a robot down there poking around for us. The majority of ideas for Venus surface exploration have essentially been the same sort of thing that the Soviets did with the Venera probes: Stuffing all the electronics inside of an insulated container hooked up to a stupendously powerful air conditioning system, probably driven by some alarmingly radioactive plutonium-powered Stirling engines. Developing such a system would likely cost billions in research and development alone.
A conventional approach to a Venus rover like this is difficult, expensive, and potentially dangerous, but a team of engineers at NASA’s Jet Propulsion Laboratory (JPL), in Pasadena, Calif., have come up with an innovative new idea for exploring the surface of Venus. If the problem is the electronics, why not just get rid of them, and build a mechanical rover instead?
With funding from the NASA Innovative Advanced Concepts (NIAC) program, the JPL team wants to see whether it might be possible to build a Venus exploration rover *without* conventional sensors, computers, or power systems. The Automaton Rover for Extreme Environments (AREE) would use clockwork gears and springs and other mechanisms to provide the majority of the rover’s functionality, including power generation, power storage, sensing, locomotion, and even communication: no electronics required. Bring on the heat.
Internal view of the Voskhod spacecraft IMP “Globus” navigation instrument. Photo: Francoisguay via Wikipedia
In this overwhelmingly electronic world, most of us don’t have a proper appreciation for the kinds of things that can be done with mechanical computers. Two thousand years ago (give or take a century or two), the ancient Greeks constructed the Antikythera mechanism, which could calculate the position of the sun and moon, show the phase of the moon, predict eclipses, track calendar cycles, and possibly even show the locations of five planets using a carefully designed set of at least 30 bronze intermeshed gears driven by a crank.
Between the 17th and 19th centuries, Blaise Pascal, Gottfried Leibniz, and Charles Babbage all developed mechanical computers that could perform a variety of arithmetic functions. And a bit more recently, in the 1940s, mechanical computers were used extensively in violently practical applications like artillery fire control systems and aerial bomb sights.
The Russians used a mechanical computer called Globus for positional calculations on their spacecraft up until 2002, but in general, everything is now going electronic. This is fine and good, except for on Venus, where most electronics are impractical.
JPL’s concept for AREE is to create a robot with a minimal amount of electronics, instead relying as much as possible on purely mechanical systems that can handle high temperatures for weeks, months, or even years with no problems. Jonathan Sauder is a technologist and mechatronics engineer at JPL’s Technology Infusion Group, and the lead on the AREE project. We spoke with him for more details on how the project got started, and how everything is going to work.
**IEEE Spectrum: How’d you come up with the idea for AREE?**
**Jonathan Sauder:***I was sitting around with a bunch of engineers, and we were working in a concurrent design session. During one of the coffee breaks, we were talking about cool mechanisms and components, and how cool would it be to do a purely mechanical spacecraft, what that would look like, and where you would use it. We realized that there are two places that make a lot of sense for something like this, where electronics don’t survive: One is Venus, because the longest we’ve been able to survive on the surface of Venus is two hours because electrical systems overheated, and one is around Jupiter, because of the high radiation environment that disrupts electronics.*
**Is it really possible to build a robotic exploration rover with no electronics?**
*We started out in our NIAC Phase I proposal thinking that we were going to build a fully mechanical rover architecture that would not use any electrical subsystems or electronics at all, replacing all the standard electrical subsystems with mechanical computing. As we started to dig into it more, we realized that you can’t build a traditional Mars Curiosity-style rover with a centralized core processor ... Instead, what we’ve had to do is focus on something that gives more of a distributed architecture, where we have many simple mechanisms around the device, guiding it, signaling it, telling it where to go.*
*Originally we were going to try to do a number of our scientific measurements mechanically as well. As we started to look into that, we just couldn’t quite get the resolution of data that you need to image or measure things like temperature and pressure. There are some various high-temperature electronics that have been developed—silicon carbide and gallium systems—that do operate at high temperatures. The problem is that they’re at a really low level of integration. So what that means is that you can’t do traditional electrical systems with them, and you can’t do anything close to what would be required for a rover. So our idea is to build a mobility platform that would be able to locomote, seeing new places and operating for a lot longer than you could with the systems that currently exist.*
An early concept image for the AREE featuring a legged design. Image: ESA/J. Whatmore/NASA/JPL-Caltech
**Where did you begin with the design for AREE?**
*The primary goal is to first design our locomotion architecture to be as robust as possible. And then the second goal is to use as many simple, distributed, reactive mechanisms as we can to sort of guide the rover as it works its way across the surface of Venus. You’ll notice that in some of our earlier images, the rover looked a lot like Theo Jansen’s Strandbeests, these semi-autonomous creatures that roamed the beaches of the Netherlands. A Strandbeest operates off just a couple simple sensors, which control whether the legs move backwards or forwards, and it has built-in logic to avoid soft sand and water.*
*Early on in our conceptual development, we actually worked with Jansen: He came to JPL for a two-day collaborative engineering session, and we were getting all his expertise in 30 years working with Strandbeests. One of the first things he mentioned was that the legs have to go. And you know, when the person who’s created the Strandbeest tells you the legs have to go in your Venus rover, it means you probably need to find a different architecture. The key issue is that, while the legs work great on flat soft beaches, once you start getting to more variable terrain (like an unknown Venus environment), the legs will not be stable enough and the rover will have a very high probability of tipping over and getting damaged.*
*That’s what inspired our architecture change from Phase I to Phase II, where we went from this really cool-looking legged rover to a maybe slightly less cool-looking but much more robust and probably much more implementable rover that looks like a World War I tank.*
The Phase II concept for AREE features tank treads for locomotion and an internal wind turbine. There are several significant advantages to the tank design, besides just not tipping over quite as often. Since it’s vertically symmetrical, if it does flip over for some reason, it can keep on going. This is by no means the final design, and the JPL team is starting to look at wheels as well, since wheels may be more robust due to fewer moving parts. Image: NASA/JPL-Caltech
**Can you describe how AREE will be able to navigate across the surface of Venus?**
*Basically what we’re doing is developing some very specialized systems in terms of obstacle avoidance and determining whether there’s enough power to move or not, rather than a standard centralized system where you have a rover that can do multiple processes or be reconfigured or changed at any time via software.*
*We’re trying to make the mechanisms as simple as possible, to do one specialized task, but to do that specialized task really well. Maybe it’s when the robot bumps into an object, it’ll flip a lever, which causes the rover to drive backwards a little bit, rotate by 90 degrees, and drive forwards. We can only do one obstacle-avoidance pattern, but you can repeat that multiple times and eventually be able to work our way around an obstacle.*
Obstacle avoidance is another simple mechanical system that uses a bumper, reverse gearing, and a cam to back the rover up a bit after it hits something, and then reset the bumper and the gearing afterwards to continue on. During normal forward motion, power is transferred from the input shaft through the gears on the right hand side of the diagram and onto the output shaft. The remaining gears will spin but not transmit power. When the rover contacts an obstacle, the reverse gearing is engaged by the synchronizer, thus having the opposite effect. After the cam makes a full revolution it will push the bumper back to its forward position. A similar cam can be used to turn the wheels of the rover at the end of the reverse portion of the drive. Image: Jonathan Sauder/NASA/JPL-Caltech
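A minimal sketch of the avoidance pattern Sauder describes above (back up a little after a bump, rotate 90 degrees, and resume driving), assuming a hypothetical `rover` interface. Nothing here is part of JPL's design; the real AREE realizes this logic purely in gears and cams, and the step counts are made up.

```
# Illustrative only: a software analogue of the cam-and-bumper avoidance
# pattern described above. The `rover` interface and step counts are
# hypothetical; AREE itself would do this with gears, not a processor.

def avoid_obstacle(rover, reverse_steps=5):
    # The single fixed avoidance routine: back up, turn 90 degrees, carry on.
    for _ in range(reverse_steps):
        rover.step_backward()
    rover.rotate_90()

def traverse(rover, total_steps=1000):
    steps_taken = 0
    while steps_taken < total_steps:
        bumped = rover.step_forward()  # returns True if the bumper was tripped
        if bumped:
            # Only one pattern is available, but repeating it after each bump
            # eventually works the rover around most obstacles.
            avoid_obstacle(rover)
        else:
            steps_taken += 1
```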
**How are AREE’s capabilities fundamentally unique from other Venus lander proposals?**
*Right now there are several Venus mission concepts, each of which would cost as much as what a Mars Curiosity rover would or more, that either land in one location, or they get two locations of data. Most proposals are highly complex and looking at 2 to 24 hours on the surface. We’re looking at extending that amount of time to a month, essentially, with this rover concept, and that’s really where the key innovation comes in: Being able to sample multiple locations on the surface of Venus and understanding how things change with time.*
AREE compared to other proposed Venus rovers and concepts. Image: Jonathan Sauder/NASA/JPL-Caltech
**Can you describe your vision for AREE, if everything goes as well as you’re hoping?**
*The ideal robot would be something that could go into some of the roughest terrain on Venus called the tessera, which is this very rough and rocky lava. Our goal would be to track this rover during its mission on that terrain, taking geological samples as we travel to help us understand how Venus evolved. For the ideal rover, it’d be nice to get a little bit larger than 1.5 meters: Right now it’s restricted by the heat shield size. If we could, we would expand the rover to 2.5 meters in order to overcome larger obstacles and get more wind energy. *
*Eventually, the goal would be to place a rover that would essentially be a Venus juggernaut that can get over most obstacles and keep trekking forward, driving itself along slowly but steadily, collecting samples and weather data as it goes.*
The concept of operations for traversing across Venus plains and to the tesserae. During the primary mission of 116 Earth days (one Venus diurnal cycle), the rover will traverse 35 km. An extended mission will traverse up to 100 km over the course of 3 years. Image: Jonathan Sauder/NASA/JPL-Caltech
At this point, you may be wondering just why the heck it’s worth sending a clockwork rover to explore the surface of Venus if we’re never going to hear from it again, because without electronics, how can it send any data back to us? There are certainly ways to *store* data mechanically: It’s easy to temporarily store numbers, and you can inscribe about 1 megabit of data onto a metal phonograph record. But then what?
One idea, which is somehow not as crazy as it sounds, would be to use hydrogen balloons to hoist these metal records into the upper atmosphere of Venus, where they would be intercepted by a high altitude solar powered drone (!), which would then read the records and transmit their contents to a satellite in orbit. The researchers also considered a vacuum tube radio, but while vacuum tubes are quite happy to operate at high temperatures, they’re vulnerable to becoming de-vacuumed in the Venusian atmosphere.
The solution that the AREE researchers came up with instead is this: radar reflectors. A radar reflector mounted on the rover could be seen from orbit, and by putting a shutter in front of the reflector, the rover could transmit something like 1000 bits every time a satellite passed over it. Adding multiple reflectors with different reflectivity along with shutters operating at different frequencies could allow a maximum of 32 unique variables to be transmitted per day. You wouldn’t even need to be transmitting specific numbers to send back valuable data, Sauder says, because just putting a reflector underneath a fan could be used to measure relative wind speeds at different locations over time.
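For a rough sense of how such a scheme could carry data, here is a back-of-the-envelope sketch (my own illustration, not JPL's encoding) that packs a few mechanical sensor readings into a fixed-width bit frame and repeats it to fill the roughly 1,000-bit budget of a single pass; the field names and widths are assumptions.

```
# Hypothetical sketch of packing sensor readings into the shutter-modulated
# radar signal described above. Field names and widths are assumptions; only
# the ~1,000-bit-per-pass budget comes from the article.

FIELDS = [            # (variable name, bits allotted)
    ("wind_speed", 8),
    ("temperature", 8),
    ("pressure", 8),
]

def encode_pass(readings, budget_bits=1000):
    # Serialize readings into one fixed-width frame, then repeat the frame to
    # fill the pass so the orbiter can majority-vote away noisy samples.
    frame = ""
    for name, width in FIELDS:
        value = max(0, min(readings[name], 2 ** width - 1))  # clamp to field range
        frame += format(value, f"0{width}b")
    return frame * (budget_bits // len(frame))

bits = encode_pass({"wind_speed": 42, "temperature": 200, "pressure": 90})
print(len(bits))  # 984 bits for this pass; the first 24 bits form one frame
```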
So now that you’ve got this ingeniously capable and robust robotic rover that can survive on Venus, the final thing to figure out is what kind of scientific exploration it’ll be able to do, and that’s a particularly difficult question for AREE, as the NIAC Phase 1 proposal explains:
One of the greatest weaknesses of a purely mechanical system is its ability to make science measurements. Beyond communications, one of the key areas that could effectively use high temperature electronics is the instrument. More complex measurements, especially those related to geologic measurements, require electronic solutions.
Late last year, NASA announced HOTTech, the Hot Operating Temperature Technology Program, which is providing funding to support “the advanced development of technologies for the robotic exploration of high-temperature environments … with temperatures approaching 500 degrees Celsius or higher.” The AREE team hopes that HOTTech will result in some science instruments that will be able to survive on their rover, although if not, they also have some ideas for a few interesting ways of doing science without any electronics. These include measuring wind speed from a wind turbine, temperature and pressure from thermally expanding materials, and chemical properties from rods that react to certain desired chemicals.
AREE stores wind power in a composite clock spring, much like a pocket watch. The mechanical system shown above can measure the energy stored in the rover’s springs, and uses a clutch to deliver power to the locomotion system when enough has been stored up. If you only want the rover to run after a certain amount of time, or after other conditions have been met, mechanical logic gates can be added to incorporate the output of a clock, or other sensors. Image: Jonathan Sauder/NASA/JPL-Caltech
To be clear, it’s not like Sauder and his team are trying to make all of this mechanical stuff for fun: It really is necessary to explore Venus affordably for longer than just a day or two. “Our goal with this project is specifically not replicate things that have already been done or will soon be done in the high temperature electronics area,” Sauder says, “but provide a set of mechanical solutions for things that might take longer to develop where there is no clear current solution.”
The technology that’s being developed for AREE has applications elsewhere in the solar system, and not just in high radiation environments like Jupiter’s moon Europa. Right here on Earth, AREE could be useful for taking samples from very close to an active volcano, or from within highly radioactive environments. Another advantage of AREE is that it can be completely sterilized at a very high temperature without affecting its functionality. If, say, you find a lake under the icecap on Mars with some weird tentacle-y things swimming around in it, you could send in a sterilized AREE to collect a sample without worrying about contamination.
At this point, AREE has received Phase 2 NIAC funding for continued development. The team is working on a more detailed study of the locomotion system, which will likely involve swapping the tank treads out for something wheel-based and more robust. They’re also developing a high temperature mechanical clock, one of the fundamental parts of any autonomous mechanical computer, and Sauder says that he expects some exciting results from building and testing a radar target signaling system within the next year. We're certainly excited: this is one of the most innovative robots we've ever seen, and we can't wait for it to get to Venus.
The Automaton Rover for Extreme Environments team, led by Sauder, also includes Evan Hilgemann, Michael Johnson, Aaron Parness (whose research we’ve written about before), Bernie Bienstock, and Jeffery Hall, with Jessie Kawata and Kathryn Stack as additional authors on the NIAC Phase 1 final report, which you can read here.
Evan Ackerman is a senior editor at *IEEE Spectrum*. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.
| true | true | true |
Ancient technology inspires a future Venus rover that can operate for years at 500 °C
|
2024-10-12 00:00:00
|
2017-08-08 00:00:00
|
article
|
ieee.org
|
IEEE Spectrum
| null | null |
|
8,503,854 |
http://www.theguardian.com/world/2014/oct/23/pope-francis-life-sentence-hidden-death-penalty-torture
|
Pope Francis blasts life sentences as ‘hidden death penalty’
|
Agence France-Presse; Guardian staff reporter
|
Pope Francis has branded life-long prison terms “a hidden death sentence” in an attack on “penal populism” that included severe criticism of countries that facilitate torture.
In a wide-ranging speech to a delegation from the International Association of Penal Law, the pontiff said believers should oppose life-long incarceration as strongly as the use of capital punishment.
“All Christians and men of good faith are therefore called upon today to fight, not only for the abolition of the death penalty – whether it is legal or illegal and in all its forms – but also to improve the conditions of incarceration to ensure that the human dignity of those deprived of their freedom is respected.
“And this, for me, is linked to life sentences. For a short time now, these no longer exist in the Vatican penal code. A sentence of life (without parole) is a hidden death penalty.”
In comments likely to enhance his reputation as one of the most liberal of popes, Francis went on to slam the risk of sentencing becoming disproportionately severe.
“In recent decades a belief has spread that through public punishment the most diverse social problems can be resolved, as if different diseases could all be cured by the same medicine.”
Reiterating Catholic teaching that capital punishment is a sin, the pope also made what appeared to be a thinly veiled attack on the European countries which have facilitated US demands for extraordinary rendition of terror suspects to detention centres in parts of the world where they can be tortured with impunity.
“These abuses will only stop if the international community firmly commits to recognising … the principle of placing human dignity above all else.”
| true | true | true |
Pontiff slates countries facilitating torture and says using prisons to fix social problems is like treating all diseases with one drug
|
2024-10-12 00:00:00
|
2014-10-23 00:00:00
|
article
|
theguardian.com
|
The Guardian
| null | null |
|
8,957,584 |
http://brookeallen.com/pages/archives/1353
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,623,040 |
http://ocaml.janestcapital.com/?q=node/82
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
19,322,469 |
https://build.affinity.co/migrating-to-typescript-five-practical-tips-7a57149034d8
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
25,431,203 |
https://www.shironekolabs.com/posts/superrt/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
25,817,072 |
https://sendfiles.dev/
|
SendFiles
| null |
| true | true | true |
Web site to securely transmit files between browsers
|
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
4,841,982 |
http://codeautomat.com/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,508,571 |
http://pcgmedia.com/cubical-drift-explain-their-voxel-rpg-builder-planets3/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,132,428 |
https://press.f-secure.com/2018/01/12/intel-amt-security-issue-lets-attackers-bypass-login-credentials-in-corporate-laptops/
|
For media
| null |
Press release
### F‑Secure unveils landmark scam intelligence and impacts report
Brand-new report highlights vital emerging consumer scam trends and insights for service providers.
Spokespeople, reports and guides. Go to our newsroom to see the latest news.
June 13, 2024
Discover how people really feel about their digital security in 2024.
March 27, 2024
F‑Secured, the complete guide to online security in 2024.
President and CEO of F‑Secure. Laaksonen joined F‑Secure in 2012 and has led the Content Cloud business, Consumer Business in North and Latin America, global commercial operations in Managed Detection & Response Business Unit, and the Consumer Security Business Unit. Appointed as President and CEO in 2022. Prior to F‑Secure, Laaksonen led international business units and venture-backed start-ups in the fields of security, cloud computing, telecoms, predictive analytics, and mobile services.
Head of Threat Intelligence, F‑Secure. Kankaala has been in the industry for almost ten years, and her background lies in offensive security (white hacking). Featuring as a professional hacker on Team Whack — a TV series for Finnish Broadcasting Company YLE that demonstrates how everything is hackable — Kankaala is well-versed in helping consumers understand cyber security, removing the jargon and simplifying the issue, something she is passionate about.
Director of Business Development, Embedded Security, F‑Secure. With 28 years’ experience in technology, the last 15 of which in cyber security, Tom Gaffney is passionate about online privacy and security. Partnering with some of the world’s leading telecommunication providers to deliver safe online experiences, he has dedicated his career to educating people on the privacy issues surrounding digital services and teaching users how to interact digitally in the most secure way.
Threat Advisor, F‑Secure. Having written about security and privacy since 2012, Joel Latto strives to raise awareness about the consumer cyber security threat landscape with educational and engaging communications. A member of the F‑Secure AI Task Force, his commitment to demystifying cyber security has led him to focus on online scams — uncovering how criminals evade social media countermeasures and creating awareness campaigns informed by research.
Download F‑Secure logos. The logos come in a range of color variations to provide maximum visibility across different backgrounds but where possible always use the positive version of the logo.
Press release
Brand-new report highlights vital emerging consumer scam trends and insights for service providers.
Press release
F-Secure, a global leader in consumer cyber security, is proud to announce a strategic partnership with one of the largest and most respected mobile service providers in Asia.
Press release
Cyber security leader cements its dedication to scam protection, utilizing AI for unique SMS scam detection, banking and shopping protection features.
| true | true | true | null |
2024-10-12 00:00:00
|
2024-10-08 00:00:00
|
website
|
f-secure.com
|
company.f-secure.com
| null | null |
|
2,411,264 |
http://venturefizz.com/blog/octane-seth-lieberman-ceo-pangea-media
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
26,216,150 |
https://graphics.reuters.com/GLOBAL-ENVIRONMENT/SAND/ygdpzekyavw/index.html
|
The messy business of sand mining
|
Marco Hernandez; Simon Scarr; Katy Daigle
|
A 21st century construction boom is driving unregulated sand mining around the world - eroding rivers and coastlines, disrupting ecosystems and hurting livelihoods.
From Shanghai to Seattle, the world’s cities are built on sand - massive amounts of sand. It’s in the cement and concrete that make the bulk of most buildings. The glass in those buildings’ windows is made with sand, too. So is the tarmac laid onto the roads around them.
Sand is the planet’s most mined material, with some 50 billion tons extracted from lakes, riverbeds, coastlines and deltas each year, according to the United Nations Environment Programme. Per person, that’s about 6,570 kilogrammes (14,500 pounds) per year - more than an elephant’s weight in sand.
Demand for sand is only expected to grow, as the global population continues to climb, cities expand and countries further develop. But in much of the world, sand mining faces little to no government scrutiny. There are scant regulations for protecting the environment, or workers’ safety. And few entities monitor or document the trade for its impact.
The result is that sand is being extracted far more quickly than it can naturally be replaced. That’s causing environmental damage and, in some cases, jeopardising livelihoods.
“This isn’t an issue that’s relevant for only some places. Sand is a critical material for every country,” said ecologist Aurora Torres at the Universite Catholique de Louvain in Belgium. She researches how sand mining can affect both the natural world and people’s well-being.
But many people still “don’t know about these problems,” Torres said. “The number of extraction sites is just huge, which makes it hard to monitor. And we researchers still don’t know many things about the magnitude of the impacts.”
Demand for sand has surged in the last two decades, thanks to urbanisation and construction in China, India and other fast-developing countries.
In just the last 10 years, China’s cities went from housing 51% of the national population to about 60%, according to the National Bureau of Statistics of China. To handle the housing pressure, cities have expanded, adding new structures and building out roads.
China already has used more cement since 2006 than was used in the United States during the entire 20th century.
China is not alone in its drive to build infrastructure and housing. In fact, sand is in such demand that cargoes are regularly shipped around the world. In 2018, those trades were worth some $1.9 billion, according to Harvard’s Atlas of Economic Complexity.
The vast majority of mined sand, however, gets used in the country where it was extracted. In parts of Africa, sand mining has helped people build sturdier homes. But in some cases it has also left the ground pocked with open pits, which can fill up in rainy seasons, providing breeding ponds for disease-carrying mosquitoes.
People have suffered directly from illegal activity as well. In India, 193 people died in accidents related to sand mining operations or sites in 2019-2020, according to a January report by the rights group South Asia Network on Dams, Rivers and People. About half of those deaths occurred from drowning in mining pits, including 76 “minor kids or young children or teenagers who entered the river to have a bath, unaware of deep pits in the riverbed".
The right kind of sand
It’s hard to imagine a shortage of sand. It covers millions of square kilometers across the world’s many deserts, piled in some places into towering dunes.
But desert sand is useless for construction. The wind-weathered grains are too small and smooth for binding in concrete. Sea sand has similar properties from being tossed by ocean currents. But it can be used in land reclamation projects, such as in China, Singapore and Hong Kong.
The sand that’s ideally sized, shaped and cut out for construction comes from shorelines and the beds of rivers and lakes. This is also where to find silica sand, which is melted down to make glass for everything from windshields to smartphone screens.
Some countries, including China and the United States, have begun producing “crushed rock” as an alternative, blasting into rock beds and then grinding the rubble down to cement-suitable aggregates. But that requires investment in both equipment and power to run the machinery, which many small, informal mining operations don’t have.
Scientists have called for a global programme to monitor and manage the industry as a first step to controlling the plunder. Standardising the industry would also mean miners don’t have to become criminals to operate.
Experts also note a need for more materials recycling. Already, the mass of all human-made materials is greater than that of all living things on Earth, according to research published in December in the journal Nature.
“Some people talk about a global sand scarcity, which doesn’t make sense since we don’t have any data that could prove that,” said researcher Torres. What’s needed is more research to map out where sand exists and determine mining volumes “to ensure supply on an increasingly crowded planet without compromising biodiversity”.
Mining hot spots
Sand mining took off only decades ago. The method of extraction depends on where the sand is located. On land or along rivers, it is often dug up with backhoes, shovels or bare hands. Along coastlines, miners use dredging boats or suction pumps.
The damage from sand extraction can be seen clearly in satellite images, with coastlines eroded, ecosystems destroyed, and even entire small islands in Southeast Asia wiped off the map. Rivers can see major environmental disruption, including the erosion of river banks to the point where they collapse, and the destruction of breeding habitats for riverine animals including birds and crocodiles.
Rivers
Small-scale mining operations along Vietnam’s Mekong River and its tributaries have robbed the region of sand, while upstream dams prevent it from being replenished. As a result, the delta is sinking about 2 centimetres (0.75 inches) each year, according to local officials and Duong Van Ni, an expert on the Mekong River at the College of Natural Resources Management of Can Tho University, the largest city in the Mekong delta region.
Rapid erosion, meanwhile, is destroying homes and threatening livelihoods across the Southeast Asian country’s largest rice-growing region.
The impact of sand mining is clear in this stretch of the Da Dang River, in the Vietnamese province of Lam Dong. River banks have badly degraded over a five-year period, illustrated in these satellite images released by Digital Globe and Airbus and analysed by Earthrise Media.
Da Dang River - Vietnam
Many places around the world bear the scars of rampant sand extraction. In southern India, for example, extensive sand mining along the Palar River fed a construction boom in the city of Chennai. In the images below, the dry, sandy river bed appears to have been scraped and excavated over the years, leaving a large depression now filled with water.
A state court intervened in 2013 to ban mining in the Palar. But that ban expired in 2018.
Palar River - India
When rivers are dredged, the evidence can be hidden beneath the water until disaster occurs.
Local politicians near India’s Phalguni River blame the extraction of sand for undermining the foundations of the Mullarapatna Bridge, causing it to collapse just 30 years after it was built. The bridge was located near Mangaluru in the southern state of Karnataka, one of India’s top spots for rampant sand mining, according to the Indian Bureau of Mines.
Mangaluru - India
In the rivers of Azad Kashmir in northern Pakistan, at least seven different sand mining operations can be seen in this satellite snapshot from May last year.
Azad Kashmir - Pakistan
Beaches
Near the southwest Indian town of Alappad, beaches have been eroding gradually for six decades. But sand mining has caused wide stretches to vanish in the last decade alone.
Satellite images from 2003 to 2019 reveal deep cuts into the sand on the beach, as well as watery space where sand, trees and other vegetation were removed entirely.
Alappad - India
Lakes
Just like rivers, freshwater lakes can hold vast amounts of coarse sand, ideal for construction. One of the most obvious examples of sand extraction is in Poyang Lake, the largest freshwater lake in China, located in Jiangxi Province.
The lake’s inflow and outflow of water have been disrupted in recent years by sand mining, as well as by dams and landscape changes. In 2020 the lake filled to its highest recorded level, prompting authorities to declare a “red alert” as flooding threatened surrounding areas.
This 2017 image of just a portion of the lake shows dozens of ships dredging sand or carrying it away.
Poyang Lake - China
At sea
Since June, Chinese dredgers have swarmed around the Taiwan-administered Matsu Islands, dropping anchor and scooping up vast amounts of sand from the ocean bed for land reclamation projects in China.
Aside from Matsu, where 13,300 people live, Taiwan’s coast guard says China also has been dredging in the shallow waters near the median line of the Taiwan Strait, which has long served as an unofficial buffer separating China and Taiwan.
Last year, Taiwan expelled nearly 4,000 Chinese sand-dredgers and sand-transporting vessels from waters under its control, most of them in the area close to the median line, according to the coast guard. That’s a 560% surge from the 600 Chinese vessels repelled in all of 2019.
Chinese sand mining vessels expelled from Taiwanese waters
At one point last year, more than 200 Chinese sand-dredging and transport boats were spotted operating south of Nangan, the main Matsu islet, three Taiwanese officials told Reuters. Lin Chie-ming, the coast guard commander, recalled encountering a similar scene with about 100 Chinese boats on Oct. 25. His team expelled seven Chinese vessels that breached Matsu waters that day.
Matsu Islands - Taiwan
While scientists advocate for more oversight of the sand industry, and more effort by industries to recycle old materials, they are also exploring other solutions.
Two years ago, a group of researchers noted that, as climate change speeds the melting of ice in Greenland, more water is flowing toward the ocean carrying sediments that are deposited at the coastline - sediments that could potentially be used in the island’s construction industry or sold to boost the Greenland economy.
Within days of the study’s publication in the journal Nature, Greenland’s politicians “decided they wanted to investigate this”, said co-author Mette Bendixen, who researches how climate change is impacting the Arctic landscape at the University of Colorado Boulder’s Institute of Arctic and Alpine Research.
It’s still unclear if the sand is even suitable for construction. Bendixen tried once to row out in a dinghy to collect samples, but turned back after getting caught in the currents.
But while mining Arctic sand might help Greenlanders, it’s far from a solution to meet the global sand appetite. “The key is really to find ways to monitor the way we’re using sand right now,” she said.
By
Marco Hernandez, Simon Scarr and Katy Daigle
Editing by
Kenneth Maxwell
Sources:
Earthrise Media; Maxar Technologies; Airbus; Landsat, NASA. U.S. National Oceanic and Atmospheric Administration (NOAA). United Nations Environment Programme (UNEP). U.S. Geological Survey. National Bureau of Statistics of China. China’s Ministry of Natural Resources.
| true | true | true |
How a 21st century construction boom is driving unregulated sand mining around the world.
|
2024-10-12 00:00:00
|
2021-02-18 00:00:00
|
article
|
reuters.com
|
Reuters
| null | null |
|
436,697 |
http://www.infoworld.com/tools/quiz/news/NQ20090116-news-quiz.php
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
14,700,237 |
http://www.tribonet.org/its-kind-of-a-drag/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
5,450,971 |
http://www.mushroomnetworks.com/blog/2013/03/27/mushroom-networks-and-t-mobile-usa-distributor-one-shop-wireless-partner-to-deliver-broadband-bonding-solutions/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
29,968,017 |
https://threatpost.com/carding-marketplace-unicc-shuts-down/177688/
|
Top Illicit Carding Marketplace UniCC Abruptly Shuts Down
|
Becky Bracken
|
A top underground market for buying and selling stolen credit-card details, UniCC, has announced it’s shutting down operations.
The site accounted for about 30 percent of carding scam business and, since it was launched in 2013, handled about $358 million in cryptocurrency transactions, according to the Elliptic Threat Intel team, which published the announcement from UniCC leadership.
“Our team retires,” the UniCC leadership posted on underground carding sites in both English and Russian. “Don’t build any conspiracy theories about us leaving, it is (a) weighted decision, we are not young and our health do(es) not allow to (us) work like this any longer.”
The post, signed “your Unicc Team,” gives users 10 days to spend their balances.
“We ask you to be smart and not follow any fakes tied to our comeback and other things,” the notice concluded.
**Carding Marketplace Shakeup**
UniCC’s business was booming after the December 2020 takedown of Joker’s Stash, formerly the carding marketplace of choice. Elliptic noted the overall market for stolen credit-card data last year topped more than $1.4 billion just in Bitcoin.
But in recent months, Elliptic pointed out that other underground marketplaces appear to be throwing in the towel. The White House Market announced it was shutting down in October, and by November, Cannazon went dark. In December it was Torrez’ turn. By early January, Monopoly Market was unexpectedly inaccessible, the report added.
The departures could be a reaction to law-enforcement activities, the Elliptic Threat Team said, but it’s just as likely underground carding marketplace admins are using the chaos to make off with their users’ account balances.
“The wave of recent departures has potentially been a trigger for UniCC’s retirement, as illicit actors see an opportunity in the turbulence to either run away with users’ funds or retire to avoid increased law-enforcement attention,” the report added.
At the same time, new entrants into the stolen data marketplace game are looking to gain a foothold. In August, a new carding marketplace AllWorld.Cards launched with a huge stunt — they released the payment data for 1 million stolen credit cards on the Dark Web for free.
These illicit payment-card data marketplaces can also be an attractive target for fellow hackers.
Last April, Swarmshop was breached and the carding site’s database of stolen payment data was leaked online.
“Tens of thousands of new cards were listed for sale on the market each day, and it was known for having many different vendors — with the fierce competition keeping prices relatively low,” Elliptic noted. “As UniCC retires, focus will now be on who emerges as the main successor. Meanwhile, the operators behind UniCC will be seeking to cash out their formidable profits.”
| true | true | true |
UniCC controlled 30 percent of the stolen payment-card data market; leaving analysts eyeing what’s next.
|
2024-10-12 00:00:00
|
2022-01-14 00:00:00
|
article
|
threatpost.com
|
Threatpost
| null | null |
|
18,543,373 |
https://github.com/jofpin/trape
|
GitHub - jofpin/trape: People tracker on the Internet: OSINT analysis and research tool by Jose Pino
|
Jofpin
|
People tracker on the Internet: Learn to track the world, to avoid being traced.
Trape is an **OSINT** analysis and research tool which allows people to track and execute intelligent **social engineering** attacks in real time. It was created to teach the world how large Internet companies could obtain **confidential information**, such as the session status of their websites or services, and control their users through their browser without their knowledge. It has since evolved with the aim of helping **government** organizations, companies and **researchers** track cybercriminals.
At the beginning of 2018 it was presented at **BlackHat Arsenal in Singapore**: https://www.blackhat.com/asia-18/arsenal.html#jose-pino and at multiple security events worldwide.
- **LOCATOR OPTIMIZATION:** Trace the path between you and the target you're tracking. Each time you make a move, the path is updated; the location of the target is obtained silently through a bypass made in the browsers, allowing you to skip the location request on the victim's side while maintaining a precision of **99%** in the locator.
- **APPROACH:** When you're close to the target, Trape will tell you.
- **REST API:** Generates an API (random or custom), and through this you can control and monitor other Web sites on the Internet remotely, getting the traffic of all visitors.
- **PROCESS HOOKS:** Manages social engineering attacks or processes in the target's browser.
  - **SEVERAL:** You can issue a phishing attack of any domain or service in real time, as well as send malicious files to compromise the device of a target.
  - **INJECT JS:** You keep the JavaScript code running free in real time, so you can manage the execution of a **keylogger** or your own custom functions in JS, which will be reflected in the target's browser.
  - **SPEECH:** A process of audio creation is maintained which is played in the browser of the target; by means of this you can execute personalized messages in different voices, with languages in Spanish and English.
- **PUBLIC NETWORK TUNNEL:** Trape has its own **API** that is linked to ngrok.com to allow the automatic management of public network tunnels, so you can publish the content of your trape server, which runs locally, to the Internet in order to manage hooks or public attacks.
- **CLICK ATTACK TO GET CREDENTIALS:** Automatically obtains the target's credentials, recognizing your connection availability on a social network or Internet service.
- **NETWORK:** You can get information about the user's network.
  - **SPEED:** Viewing the target's network speed (ping, download, upload, connection type).
  - **HOSTS OR DEVICES:** Here you can get an automatic scan of all the devices connected to the target network.
- **PROFILE:** Brief summary of the target's behavior and important additional information about their device.
  - **GPU**
  - **ENERGY**
Session recognition is one of trape's most interesting attractions, since you as a researcher can know remotely which service the target is connected to.
**USABILITY:** You can delete logs and view alerts for each process or action you run against each target.
First download the tool.
```
git clone https://github.com/jofpin/trape.git
cd trape
python3 trape.py -h
```
If it does not work, try to install all the libraries that are located in the file **requirements.txt**
```
pip3 install -r requirements.txt
```
Example of execution
```
Example: python3 trape.py --url http://example.com --port 8080
```
If you face problems installing the tool, it is probably due to Python version conflicts; in that case, run it inside an isolated virtual environment (the commands below create a Python 3 one):
```
pip3 install virtualenv
virtualenv -p /usr/bin/python3 trape_env
source trape_env/bin/activate
pip3 install -r requirements.txt
python3 trape.py -h
```
**HELP AND OPTIONS**
```
user:~$ python3 trape.py --help
usage: python3 trape.py -u <> -p <> [-h] [-v] [-u URL] [-p PORT]
[-ak ACCESSKEY] [-l LOCAL]
[--update] [-n] [-ic INJC]
optional arguments:
-h, --help show this help message and exit
-v, --version show program's version number and exit
-u URL, --url URL Put the web page url to clone
-p PORT, --port PORT Insert your port
-ak ACCESSKEY, --accesskey ACCESSKEY
Insert your custom key access
-l LOCAL, --local LOCAL
Insert your home file
-n, --ngrok Insert your ngrok Authtoken
-ic INJC, --injectcode INJC
Insert your custom REST API path
-ud UPDATE, --update UPDATE
Update trape to the latest version
```
**--url** In this option you add the URL you want to clone, which works as a decoy.
**--port** Here you insert the port, where you are going to run the **trape server**.
**--accesskey** You enter a custom key for the **trape panel**, if you do not insert it will generate an **automatic key**.
**--injectcode** trape contains a **REST API** that can be embedded anywhere; using this option you can customize the name of the file to include. If you do not, it generates a random, token-like name.
**--local** Using this option you can call a local **HTML file**, this is the replacement of the **--url** option made to run a local lure in trape.
**--ngrok** In this option you can enter a token, to run at the time of a process. This would replace the token saved in configurations.
**--version** You can see the version number of trape.
**--update** Option used to upgrade to the latest version of **trape**.
**--help** It is used to see all the above options, from the executable.
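Putting these options together, a typical invocation might look like the following sketch (the URL, port, access key, and file name are placeholder values, and only the flags documented above are used):

```
python3 trape.py --url https://example.com --port 8080 --accesskey my_custom_key --injectcode tracker_api.js
```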
This tool has been published for educational purposes. It is intended to teach people how bad guys could track them, monitor them or obtain information from their credentials; we are not responsible for the use or the scope that someone may have through this project.
We are totally convinced that if we teach how vulnerable things really are, we can make the Internet a safer place.
For this development and others, the participants will be credited by name, Twitter handle and role.
- **CREATOR:** Jose Pino - @jofpin - (**Security Researcher**)
If you use this tool, I invite you to share and collaborate. Let's make the Internet a safer place: let's report.
The content of this project itself is licensed under the Creative Commons Attribution 3.0 license, and the underlying source code used to format and display that content is licensed under the MIT license.
Copyright, 2018 by Jose Pino
| true | true | true |
People tracker on the Internet: OSINT analysis and research tool by Jose Pino - jofpin/trape
|
2024-10-12 00:00:00
|
2017-10-31 00:00:00
|
https://opengraph.githubassets.com/28824d5e4de36d2c8042c4738e02fbeecf35f9b6db7213be6ae987fac42a6d30/jofpin/trape
|
object
|
github.com
|
GitHub
| null | null |
17,088,474 |
https://launchbasket.com/building-a-20k-year-cryptocurrency-simulation-game-the-story-of-altcoin-fantasy-with-founder-tom-chan/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
234 |
http://en.wikipedia.org/wiki/Long-Term_Capital_Management
|
Long-Term Capital Management - Wikipedia
|
Authority control databases International VIAF National United States Czech Republic Israel
|
# Long-Term Capital Management
Industry | Investment services |
---|---|
Founded | 1994 |
Founder | John W. Meriwether |
Defunct | 1998 private bailout arranged by U.S. Fed; 2000 dissolution |
Headquarters | Greenwich, Connecticut, U.S. |
Key people | Myron Scholes Robert C. Merton John Meriwether |
Products | Financial services Investment management |
**Long-Term Capital Management L.P.** (**LTCM**) was a highly leveraged hedge fund. In 1998, it received a $3.6 billion bailout from a group of 14 banks, in a deal brokered and put together by the Federal Reserve Bank of New York.[1]
LTCM was founded in 1994 by John Meriwether, the former vice-chairman and head of bond trading at Salomon Brothers. Members of LTCM's board of directors included Myron Scholes and Robert C. Merton, who three years later in 1997 shared the Nobel Prize in Economics for having developed the Black–Scholes model of financial dynamics.[2][3]
LTCM was initially successful, with annualized returns (after fees) of around 21% in its first year, 43% in its second year and 41% in its third year. However, in 1998 it lost $4.6 billion in less than four months due to a combination of high leverage and exposure to the 1997 Asian financial crisis and 1998 Russian financial crisis.[4] The master hedge fund, **Long-Term Capital Portfolio L.P.**, collapsed soon thereafter, leading to an agreement on September 23, 1998, among 14 financial institutions for a $3.65 billion recapitalization under the supervision of the Federal Reserve.[1] The fund was liquidated and dissolved in early 2000.[5]
## Founding
LTCM Partners | |
---|---|
John Meriwether | Former vice chair and head of bond trading at Salomon Brothers; MBA, University of Chicago |
Robert C. Merton | Leading scholar in finance; Ph.D., Massachusetts Institute of Technology; Professor at Harvard University |
Myron Scholes | Co-author of Black–Scholes model; Ph.D., University of Chicago; Professor at Stanford University |
David W. Mullins Jr. | Vice chairman of the Federal Reserve; Ph.D. MIT; Professor at Harvard University; was seen as potential successor to Alan Greenspan |
Eric Rosenfeld | Arbitrage group at Salomon; Ph.D. MIT; former Harvard Business School professor |
William Krasker | Arbitrage group at Salomon; Ph.D. MIT; former Harvard Business School professor |
Greg Hawkins | Arbitrage group at Salomon; Ph.D. MIT; worked on Bill Clinton's campaign for Arkansas state attorney general |
Larry Hilibrand | Arbitrage group at Salomon; Ph.D. MIT |
James McEntee | Bond trader |
Dick Leahy | Executive at Salomon |
Victor Haghani | Arbitrage group at Salomon; Masters in Finance, LSE |
John Meriwether headed Salomon Brothers' bond arbitrage desk until he resigned in 1991 amid a trading scandal.[6] According to Chi-fu Huang, later a Principal at LTCM, the bond arbitrage group was responsible for 80–100% of Salomon's global total earnings from the late 1980s until the early 1990s.[7]
In 1993 Meriwether created Long-Term Capital as a hedge fund and recruited several Salomon bond traders (Larry Hilibrand and Victor Haghani in particular would wield substantial clout[8]) as well as two future winners of the Nobel Memorial Prize, Myron Scholes and Robert C. Merton.[9][10] Other principals included Eric Rosenfeld, Greg Hawkins, William Krasker, Dick Leahy, James McEntee, Robert Shustak, and David W. Mullins Jr.
The company consisted of Long-Term Capital Management (LTCM), a Delaware-incorporated company based in Greenwich, Connecticut. LTCM managed trades in Long-Term Capital Portfolio LP, a partnership registered in the Cayman Islands. The fund's operation was designed to have extremely low overhead; trades were conducted through a partnership with Bear Stearns and client relations were handled by Merrill Lynch.[11]
Myron Scholes (left) and Robert C. Merton were principals at LTCM.
Meriwether chose to start a hedge fund to avoid the financial regulation imposed on more traditional investment vehicles, such as mutual funds, as established by the Investment Company Act of 1940 – funds which accepted stakes from 100 or fewer individuals each with more than $1 million in net worth were exempt from most of the regulations that bound other investment companies.[12] The bulk of the money raised, in late 1993, came from companies and individuals connected to the financial industry.[13] With the help of Merrill Lynch, LTCM also secured hundreds of millions of dollars from high-net-worth individuals, including business owners and celebrities, as well as private university endowments and later the Italian central bank. By 24 February 1994, the day LTCM began trading, the company had amassed just over $1.01 billion in capital.[14]
## Trading strategies
The main strategy was to find pairs of bonds that should have a predictable spread between their prices, and then, when this spread widened, to bet that the two prices would converge again.[1]
The company's core investment strategy was what came to be known as convergence trading: using quantitative models to exploit deviations from fair value in the relationships between liquid securities across nations and between asset classes (i.e. Fed model-type strategies). In fixed income the company was involved in US Treasuries, Japanese Government Bonds, UK Gilts, Italian BTPs, and Latin American debt, although its activities were not confined to these markets or to government bonds.[15] LTCM was the brightest star on Wall Street at the time.[16]
### List of Major 1998 Trades
#### Fixed Income Arbitrage
- Short US swap spread
- Euro Cross-Swap
- Long US mortgages hedged
- Swap curve Japan
- Italian swap spread
- Fixed income volatility
- On-the-run/off-the-run spread
- Junk bond arbitrage
#### Equity
- Short equity volatility
- Risk arbitrage
- Equity relative value
#### Emerging Markets
- Long emerging market sovereigns
- Long emerging market currency
- Long emerging market equity hedged to S&P 500
#### Other
- Yield curve trades
- Short high-tech stocks
- Convertible arbitrage
- Index arbitrage
### Fixed income arbitrage
Fixed income securities pay a set of coupons at specified dates in the future, and make a defined redemption payment at maturity. Since bonds of similar maturities and the same credit quality are close substitutes for investors, there tends to be a close relationship between their prices (and yields). Whereas it is possible to construct a single set of valuation curves for derivative instruments based on LIBOR-type fixings, it is not possible to do so for government bond securities because every bond has slightly different characteristics. It is therefore necessary to construct a theoretical model of what the relationships between different but closely related fixed income securities should be.
For example, the most recently issued treasury bond in the US – known as the benchmark – will be more liquid than bonds of similar but slightly shorter maturity that were issued previously. Trading is concentrated in the benchmark bond, and transaction costs are lower for buying or selling it. As a consequence, it tends to trade more expensively than less liquid older bonds, but this expensiveness (or richness) tends to have a limited duration, because after a certain time there will be a new benchmark, and trading will shift to this security newly issued by the Treasury. One core trade in the LTCM strategies was to purchase the old benchmark – now a 29.75-year bond, and which no longer had a significant premium – and to sell short the newly issued benchmark 30-year, which traded at a premium. Over time the valuations of the two bonds would tend to converge as the richness of the benchmark faded once a new benchmark was issued. If the coupons of the two bonds were similar, then this trade would create an exposure to changes in the shape of the typically upward sloping yield curve: a flattening would depress the yields and raise the prices of longer-dated bonds, and raise the yields and depress the prices of shorter-dated bonds. It would therefore tend to create losses by making the 30-year bond that LTCM was short more expensive (and the 29.75-year bond they owned cheaper) even if there had been no change in the true relative valuation of the securities. This exposure to the shape of the yield curve could be managed at a portfolio level, and hedged out by entering a smaller steepener in other similar securities.
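As a rough illustration of the mechanics described above, the sketch below prices the two benchmark bonds with simple annual-coupon discounting and shows how a long position in the old benchmark against a short position in the new one gains as the liquidity premium fades. The yields, coupon, and spread are hypothetical and the bond math is deliberately simplified; this is not LTCM's actual pricing model.

```
# Illustrative on-the-run/off-the-run convergence trade (hypothetical numbers).
# Real Treasury pricing uses semi-annual coupons, day counts and repo costs,
# all of which are ignored here.

def bond_price(face, coupon_rate, ytm, years):
    """Price a bond with annual coupons by discounting each cashflow at the yield."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, int(years) + 1))
    pv_face = face / (1 + ytm) ** years
    return pv_coupons + pv_face

face, coupon = 100.0, 0.06

# Entry: the newly issued 30-year benchmark trades "rich" (lower yield) relative
# to the otherwise similar 29.75-year old benchmark.
old_entry = bond_price(face, coupon, 0.0612, 29.75)  # cheaper, off-the-run (long)
new_entry = bond_price(face, coupon, 0.0600, 30.00)  # richer, on-the-run (short)

# Exit: once a newer benchmark appears, both trade at roughly the same yield.
old_exit = bond_price(face, coupon, 0.0606, 29.75)
new_exit = bond_price(face, coupon, 0.0606, 30.00)

# P&L of long-old / short-new per $100 face of each leg: driven by the spread
# closing, largely independent of the overall level of yields.
pnl = (old_exit - old_entry) - (new_exit - new_entry)
print(f"Convergence P&L per $100 face: {pnl:.2f}")
```

Because the expected gain per trade is only a few basis points of price, positions of this kind only become meaningful for investors when they are run at the large leverage described in the next section.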
### Leverage and portfolio composition
Because the magnitude of discrepancies in valuations in this kind of trade is small (for the benchmark Treasury convergence trade, typically a few basis points), in order to earn significant returns for investors, LTCM used leverage to create a portfolio that was a significant multiple (varying over time depending on their portfolio composition) of investors' equity in the fund. It was also necessary to access the financing market in order to borrow the securities that they had sold short. In order to maintain their portfolio, LTCM was therefore dependent on the willingness of its counterparties in the government bond (repo) market to continue to finance their portfolio. If the company were unable to extend its financing agreements, then it would be forced to sell the securities it owned and to buy back the securities it was short at market prices, regardless of whether these were favorable from a valuation perspective.
At the beginning of 1998, the firm had equity of $4.7 billion and had borrowed over $124.5 billion with assets of around $129 billion, for a debt-to-equity ratio of over 25 to 1.[17] It had off-balance sheet derivative positions with a notional value of approximately $1.25 trillion, most of which were in interest rate derivatives such as interest rate swaps. The fund also invested in other derivatives such as equity options.
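Using the figures quoted above, the balance-sheet leverage can be sanity-checked with some simple arithmetic (a rough illustration only, not the fund's own accounting):

```
# Rough check of LTCM's early-1998 balance-sheet leverage (figures in $bn
# as quoted in the text; derivative notionals are excluded).
equity = 4.7
borrowings = 124.5
assets = 129.0  # roughly equity + borrowings

debt_to_equity = borrowings / equity   # ~26.5x, i.e. "over 25 to 1"
assets_to_equity = assets / equity     # ~27.4x gross leverage

# At this leverage, a 1% decline in the value of the asset book consumes
# roughly 27% of the fund's equity.
equity_hit_per_1pct = 0.01 * assets / equity
print(f"debt/equity: {debt_to_equity:.1f}x, assets/equity: {assets_to_equity:.1f}x, "
      f"equity lost per 1% asset decline: {equity_hit_per_1pct:.0%}")
```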
John Quiggin's book *Zombie Economics* (2010) states, "These derivatives, such as interest rate swaps, were developed with the supposed goal of allowing firms to manage risk on exchange rates and interest rate movements. Instead, they allowed speculation on an unparalleled scale."[18]
### Secret and opaque operations
LTCM was open about its overall strategy, but very secretive about its specific operations, even scattering trades among banks. In what should, in hindsight, have been a disconcerting note, "since Long-Term was flourishing, no one needed to know exactly what they were doing. All they knew was that the profits were coming in as promised."[19]
Opaqueness may have made even more of a difference, and investors may have had an even harder time judging the risk involved, when LTCM moved from bond arbitrage into arbitrage involving common stocks and corporate mergers.[19]
### UBS investment
Under prevailing US tax laws, there was a different treatment of long-term capital gains, which were taxed at 20.0 percent, and income, which was taxed at 39.6 percent. The earnings for partners in a hedge fund were taxed at the higher rate applying to income, and LTCM applied its financial engineering expertise to legally transform income into capital gains. It did so by engaging in a transaction with UBS (Union Bank of Switzerland) that would defer foreign interest income for seven years, thereby being able to earn the more favorable capital gains treatment. LTCM purchased a call option on 1 million of their own shares (valued then at $800 million) for a premium paid to UBS of $300 million. This transaction was completed in three tranches: in June, August, and October 1997. Under the terms of the deal, UBS agreed to reinvest the $300 million premium directly back into LTCM for a minimum of three years. In order to hedge its exposure from being short the call option, UBS also purchased 1 million of LTCM shares. Put-call parity means that being short a call and long the same amount of notional as underlying the call is equivalent to being short a put. So the net effect of the transaction was for UBS to lend $300 million to LTCM at LIBOR+50 and to be short a put on 1 million shares. UBS's own motivation for the trade was to be able to invest in LTCM – a possibility that was not open to investors generally – and to become closer to LTCM as a client. LTCM quickly became the largest client of the hedge fund desk, generating $15 million in fees annually.
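The put-call parity argument in the preceding paragraph can be stated compactly. For European-style options on the same underlying, with the same strike and maturity and ignoring dividends, a standard form of the relationship is the following (C and P are the call and put prices, S the underlying value, K the strike, r the risk-free rate, and T the time to maturity):

```
% Put-call parity (European options, no dividends):
%   C - P = S - K e^{-rT}
% Rearranged, a short call combined with a long position in the underlying
% behaves like a short put plus a cash holding, which is the net exposure
% UBS took on in the LTCM transaction.
C - P = S - K e^{-rT}
```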
### Diminishing opportunities and broadening of strategies
LTCM attempted to create a splinter fund in 1996 called LTCM-X that would invest in even higher risk trades and focus on Latin American markets. LTCM turned to UBS to invest in and write the warrant for this new spin-off company.[20]
LTCM faced challenges in deploying capital as their capital base grew due to initially strong returns, and as the magnitude of anomalies in market pricing diminished over time. James Surowiecki concludes that LTCM grew such a large portion of such illiquid markets that there was no diversity in buyers in them, or no buyers at all, so the wisdom of the market did not function and it was impossible to determine a price for its assets (such as Danish bonds in September 1998).[21]
In Q4 1997, a year in which it earned 27%, LTCM returned capital to investors. It also broadened its strategies to include new approaches in markets outside of fixed income: many of these were not market neutral – they were dependent on overall interest rates or stock prices going up (or down) – and they were not traditional convergence trades. By 1998, LTCM had accumulated extremely large positions in areas such as merger arbitrage (betting on differences between a proprietary view of the likelihood of success of mergers and other corporate transactions would be completed and the implied market pricing) and S&P 500 options (net short long-term S&P volatility). LTCM had become a major supplier of S&P 500 vega, which had been in demand by companies seeking to essentially insure equities against future declines.[22]
## Early skepticism
Despite the fund's prominent leadership and strong growth at LTCM, there were skeptics from the very beginning. Investor Seth Klarman believed it was reckless to have the combination of high leverage and not accounting for rare or outlying scenarios.[19] Software designer Mitch Kapor, who had sold a statistical program with LTCM partner Eric Rosenfeld, saw quantitative finance as a faith, rather than science. Nobel Prize-winning economist Paul Samuelson was concerned about extraordinary events affecting the market.[19] Economist Eugene Fama found in his research that stocks were bound to have extreme outliers. Furthermore, he believed that, because they are subject to discontinuous price changes, real-life markets are inherently more risky than models. He became even more concerned when LTCM began adding stocks to their bond portfolio.[19]
Warren Buffett and Charlie Munger were two of the individual investors that Meriwether approached in 1993 to invest in the fund. Both analyzed the company but turned down the offer, considering the leverage plan to be too risky.[14]
## Downturn
### Riskier investments starting in 1997
LTCM's profit percentage for 1996 was 40%. However, for 1997, it was "only" 17%, which was right at the average for hedge funds. A big reason was that other companies were by now following LTCM's example; greater competition left fewer arbitrage opportunities for LTCM itself.[1]
As a result, LTCM began investing in emerging-market debt and foreign currencies. Some of the major partners, particularly Myron Scholes, had their doubts about these new investments. For example, when LTCM took a major position in the Norwegian krone, Scholes warned that they had no "informational advantage" in this area.[1]
In June 1998 – which was before the Russian financial crisis – LTCM posted a 10% loss, which was their biggest monthly loss to date.[1]
### 1997 Asian financial crisis
Although 1997 had been a very profitable year for LTCM (27%), the lingering effects of the 1997 Asian crisis continued to shape developments in asset markets into 1998. Despite the crisis originating in Asia, its effects were not confined to that region. The rise in risk aversion had raised concerns amongst investors regarding all markets heavily dependent on international capital flows, and this shaped asset pricing in markets outside Asia too.[24]
### 1998 Russian financial crisis
Although periods of distress have often created tremendous opportunities for relative value strategies, this did not prove to be the case on this occasion, and the seeds of LTCM's demise were sown before the Russian default of 17 August 1998. LTCM had returned $2.7 bn to investors in Q4 of 1997, although it had also raised a total in capital of $1.066 bn from UBS and $133 m from CSFB. Since position sizes had not been reduced, the net effect was to raise the leverage of the fund.
In May and June 1998 returns from the fund were -6.42% and -10.14% respectively, reducing LTCM's capital by $461 million. This was further aggravated by the exit of Salomon Brothers from the arbitrage business in July 1998. Because the Salomon arbitrage group (where many of LTCM's strategies had first been incubated) had been a significant player in the kinds of strategies also pursued by LTCM, the liquidation of the Salomon portfolio (and its announcement itself) had the effect of depressing the prices of the securities owned by LTCM and bidding up the prices of the securities LTCM was short. According to Michael Lewis in the New York Times article of July 1998, returns that month were circa -10%. One LTCM partner commented that because there was a clear temporary reason to explain the widening of arbitrage spreads, at the time it gave them more conviction that these trades would eventually return to fair value (as they did, but not without widening much further first).
Such losses were accentuated through the 1998 Russian financial crisis in August and September 1998, when the Russian government defaulted on its domestic local currency bonds.[25] This came as a surprise to many investors because according to traditional economic thinking of the time, a sovereign issuer should never need to default given access to the printing press. There was a flight to quality, bidding up the prices of the most liquid and benchmark securities that LTCM was short, and depressing the price of the less liquid securities it owned. This phenomenon occurred not merely in the US Treasury market but across the full spectrum of financial assets. Although LTCM was diversified, the nature of its strategy implied an exposure to a latent factor risk of the price of liquidity across markets. As a consequence, when a much larger flight to liquidity occurred than had been anticipated when constructing its portfolio, its positions designed to profit from convergence to fair value incurred large losses as expensive but liquid securities became more expensive, and cheap but illiquid securities became cheaper. By the end of August, the fund had lost $1.85 billion in capital.
Because LTCM was not the only fund pursuing such a strategy, and because the proprietary trading desks of the banks also held some similar trades, the divergence from fair value was made worse as these other positions were also liquidated. As rumors of LTCM's difficulties spread, some market participants positioned in anticipation of a forced liquidation. Victor Haghani, a partner at LTCM, said about this time "it was as if there was someone out there with our exact portfolio,... only it was three times as large as ours, and they were liquidating all at once."
Because these losses reduced the capital base of LTCM, and its ability to maintain the magnitude of its existing portfolio, LTCM was forced to liquidate a number of its positions at a highly unfavorable moment and suffer further losses. A vivid illustration of the consequences of these forced liquidations is given by Lowenstein (2000).[26] He reports that LTCM established an arbitrage position in the dual-listed company (DLC) Royal Dutch Shell in the summer of 1997, when Royal Dutch traded at an 8%–10% premium relative to Shell. In total $2.3 billion was invested, half of which was "long" in Shell and the other half was "short" in Royal Dutch.[27] LTCM was essentially betting that the share prices of Royal Dutch and Shell would converge because in their belief the present value of the future cashflows of the two securities should be similar. This might have happened in the long run, but due to its losses on other positions, LTCM had to unwind its position in Royal Dutch Shell. Lowenstein reports that the premium of Royal Dutch had increased to about 22%, which implies that LTCM incurred a large loss on this arbitrage strategy. LTCM lost $286 million in equity pairs trading and more than half of this loss is accounted for by the Royal Dutch Shell trade.[28]
The company, which had historically earned annualized compounded returns of almost 40% up to this point, experienced a flight to liquidity. In the first three weeks of September, LTCM's equity tumbled from $2.3 billion at the start of the month to just $400 million by September 25. With liabilities still over $100 billion, this translated to an effective leverage ratio of more than 250-to-1.[29]
## 1998 bailout
Long-Term Capital Management did business with nearly every important person on Wall Street. Indeed, much of LTCM's capital was composed of funds from the same financial professionals with whom it traded. As LTCM teetered, Wall Street feared that Long-Term's failure could cause a chain reaction in numerous markets, causing catastrophic losses throughout the financial system.
After LTCM failed to raise more money on its own, it became clear it was running out of options. On September 23, 1998, Goldman Sachs, AIG, and Berkshire Hathaway offered to buy out the fund's partners for $250 million, to inject $3.75 billion and to operate LTCM within Goldman's own trading division. The offer of $250 million was stunningly low to LTCM's partners because at the start of the year their firm had been worth $4.7 billion. Warren Buffett gave Meriwether less than one hour to accept the deal; the time lapsed before a deal could be worked out.[30]
Seeing no options left, the Federal Reserve Bank of New York organized a bailout of $3.625 billion by the major creditors to avoid a wider collapse in the financial markets.[31] The principal negotiator for LTCM was general counsel James G. Rickards.[32] The contributions from the various institutions were as follows:[33][34]
- $300 million: Bankers Trust, Barclays, Chase, Credit Suisse First Boston, Deutsche Bank, Goldman Sachs, Merrill Lynch, J.P.Morgan, Morgan Stanley, Salomon Smith Barney, UBS
- $125 million: Société Générale
- $100 million: Paribas and Lehman Brothers[35][36]
- Bear Stearns and Crédit Agricole[37] declined to participate.
In return, the participating banks got a 90% share in the fund and a promise that a supervisory board would be established. LTCM's partners received a 10% stake, still worth about $400 million, but this money was completely consumed by their debts. The partners once had $1.9 billion of their own money invested in LTCM, all of which was wiped out.[38]
The fear was that there would be a chain reaction as the company liquidated its securities to cover its debt, leading to a drop in prices, which would force other companies to liquidate their own debt in a vicious cycle.
The total losses were found to be $4.6 billion. The losses in the major investment categories were (ordered by magnitude):[26]
- $1.6 bn in swaps
- $1.3 bn in equity volatility
- $430 mn in Russia and other emerging markets
- $371 mn in directional trades in developed countries
- $286 mn in Dual-listed company pairs (such as VW, Shell)
- $215 mn in yield curve arbitrage
- $203 mn in S&P 500 stocks
- $100 mn in junk bond arbitrage
- no substantial losses in merger arbitrage
Long-Term Capital was audited by Price Waterhouse LLP. After the bailout by the other investors, the panic abated, and the positions formerly held by LTCM were eventually liquidated at a small profit to the rescuers. Although termed a bailout, the transaction effectively amounted to an orderly liquidation of the positions held by LTCM with creditor involvement and supervision by the Federal Reserve Bank. No public money was injected or directly at risk, and the companies involved in providing support to LTCM were also those that stood to lose from its failure. The creditors themselves did not lose money from being involved in the transaction.
Some industry officials said that Federal Reserve Bank of New York involvement in the rescue, however benign, would encourage large financial institutions to assume more risk, in the belief that the Federal Reserve would intervene on their behalf in the event of trouble (see Greenspan put). Federal Reserve Bank of New York actions raised concerns among some market observers that it could create moral hazard since even though the Fed had not directly injected capital, its use of moral suasion to encourage creditor involvement emphasized its interest in supporting the financial system.[39]
LTCM's strategies were compared to "picking up nickels in front of a bulldozer"[40] – a likely small gain balanced against a small chance of a large loss, like the payouts from selling an out-of-the-money naked call option. This contrasts with the market efficiency aphorism that there are no $100 bills lying on the street, as someone else has already picked them up.
## Aftermath
In 1998, the chairman of Union Bank of Switzerland resigned as a result of a $780 million loss incurred from writing put options on LTCM, which had become significantly in-the-money due to LTCM's collapse.[3]
After the bailout, Long-Term Capital Management continued operations. In the year following the bailout, it earned 10%. By early 2000, the fund had been liquidated, and the consortium of banks that financed the bailout had been paid back, but the collapse was devastating for many involved. Mullins, once considered a possible successor to Alan Greenspan, saw his future with the Fed dashed. The theories of Merton and Scholes took a public beating. In its annual reports, Merrill Lynch observed that mathematical risk models "may provide a greater sense of security than warranted; therefore, reliance on these models should be limited."[41]
After helping unwind LTCM, John Meriwether launched JWM Partners. Haghani, Hilibrand, Leahy, and Rosenfeld signed up as principals of the new firm. By December 1999, they had raised $250 million for a fund that would continue many of LTCM's strategies – this time, using less leverage.[42] With the credit crisis of 2008, JWM Partners LLC was hit with a 44% loss from September 2007 to February 2009 in its Relative Value Opportunity II fund. As such, JWM Hedge Fund was shut down in July 2009.[43] Meriwether then launched a third hedge fund in 2010 called JM Advisors Management. A 2014 *Business Insider* article stated that his later two funds used "the same investment strategy from his time at LTCM and Salomon."[19]
## Analysis
Historian Niall Ferguson proposed that LTCM's collapse stemmed in part from their use of only five years of financial data to prepare their mathematical models, thus drastically underestimating the risks of a profound economic crisis:
The firm's value at risk (VaR) models had implied that the loss Long Term suffered in August was so unlikely that it ought never to have happened in the entire life of the universe. But that was because the models were working with just five years' worth of data. If the models had gone back even eleven years, they would have captured the 1987 stock market crash. If they had gone back eighty years they would have captured the last great Russian default, after the 1917 Revolution. Meriwether himself, born in 1947, ruefully observed: "If I had lived through the Depression, I would have been in a better position to understand events." To put it bluntly, the Nobel prize winners had known plenty of mathematics, but not enough history.
— Niall Ferguson,The Ascent of Money.[4]
These ideas were expanded in a 2016 CFA article written by Ron Rimkus, which pointed out that the VaR model, one of the major quantitative analysis tools used by LTCM, had several flaws. A VaR model is calculated from historical data, but the data sample used by LTCM excluded previous economic crises such as those of 1987 and 1994. VaR also could not interpret extreme events such as a financial crisis in terms of timing.[44]
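To make the data-window criticism concrete, the sketch below runs a bare-bones historical-simulation VaR on synthetic returns. The numbers are invented purely for illustration, and historical simulation is only a stand-in for whatever models LTCM actually ran; the point is just that a short, calm sample window reports a far smaller "worst case" than one that includes a crisis period.

```
# Minimal historical-simulation VaR on synthetic data (illustrative only).
import random

random.seed(0)

def var_99(returns):
    """99% one-day VaR: the loss exceeded on roughly 1% of days in the sample."""
    losses = sorted(-r for r in returns)
    return losses[int(0.99 * len(losses))]

# Five "calm" years of daily returns (tight distribution around a small gain)...
calm_years = [random.gauss(0.0004, 0.005) for _ in range(5 * 252)]
# ...versus a longer window that also contains one crisis year with fat losses.
crisis_year = [random.gauss(-0.002, 0.03) for _ in range(252)]
long_window = calm_years + crisis_year

print(f"VaR(99%) from five calm years:    {var_99(calm_years):.2%}")
print(f"VaR(99%) including a crisis year: {var_99(long_window):.2%}")
# The short window's VaR is several times smaller, mirroring the criticism that
# models calibrated on only five years of data could not see 1987-style events.
```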
## See also
- Black–Scholes model
- Commodity Futures Modernization Act of 2000
- Game theory
- Greenspan put
- James Rickards
- Kurtosis risk
- Limits to arbitrage
- Martingale (betting system)
- Martingale (probability theory)
- Probability theory
- St. Petersburg paradox
- Value at risk
- *When Genius Failed: The Rise and Fall of Long-Term Capital Management*
- Black swan problem
## Notes
- ^
**a****b****c****d****e**"Too Interconnected to Fail?" Archived 2021-01-31 at the Wayback Machine Stephen Slivinski, senior editor of**f***Region Focus*, quarterly publication of the Federal Reserve Branch of Richmond [Virginia], which is the 5th of 12 districts of the U.S. Federal Reserve system, Summer 2009. **^**The Bank of Sweden Prize in Economic Sciences 1997 Archived 2006-04-27 at the Wayback Machine. Robert C. Merton and Myron S. Scholes pictures. Myron S. Scholes with location named as "Long Term Capital Management, Greenwich, CT, USA" where the prize was received.- ^
**a****b***A financial History of the United States Volume II: 1970–2001*, Jerry W. Markham, Chapter 5: "Bank Consolidation", M. E. Sharpe, Inc., 2002 - ^
**a**Ferguson, Niall (2008).**b***The ascent of money: a financial history of the world*. London: Allen Lane. p. 329. ISBN 978-1-84614-106-5. **^**Greenspan, Alan (2007).*The Age of Turbulence: Adventures in a New World*. The Penguin Press. pp. 193–195. ISBN 978-1-59420-131-8.**^**Dunbar 2000, pp. 110–112**^**"Chi-Fu Huang: From Theory to Practice" (PDF). Archived from the original (PDF) on 2015-09-23.**^***When Genius Failed*. 2011. p. 55: "While J.M. presided over the firm and Rosenfeld ran it from day to day, Haghani and the slightly senior Hilibrand had the most influence on trading."
**^**Dunbar 2000, pp. 114–116**^**Loomis 1998**^**Dunbar 2000, pp. 125, 130**^**Dunbar 2000, p. 120**^**Dunbar 2000, p. 130- ^
**a**Dunbar 2000, p. 142**b** **^**Henriques, Diana B.; Kahn, Joseph (1998-12-06). "BACK FROM THE BRINK; Lessons of a Long, Hot Summer".*The New York Times*. ISSN 0362-4331. Archived from the original on 2021-05-25. Retrieved 2015-08-22.**^**De Goede, Marieke (2001). "Discourses of Scientific Finance and the Failure of Long-Term Capital Management".*New Political Economy*.**6**(2): 149–170. doi:10.1080/13563460120060580. ISSN 1356-3467. S2CID 220355463.**^**Lowenstein 2000, p. 191**^**Zombie Economics: How Dead Ideas Still Walk among Us Archived 2023-04-26 at the Wayback Machine, John Quiggin (University of Queensland in Australia), Ch. 2 The Efficient Market Hypothesis, subsection "The Long-Term Capital Management Fiasco" (pages 55-58), Princeton University Press, 2010.- ^
**a****b****c****d****e**Yang, Stephanie (Jul 11, 2014). "The Epic Story Of How A 'Genius' Hedge Fund Almost Caused A Global Financial Meltdown".**f***Business Insider*. Archived from the original on 2021-05-15. Retrieved 12 July 2024. **^**Lowenstein 2000, pp. 95–97**^**Surowiecki, James (2005). "Chapter 11.IV".*The wisdom of crowds*. New York: Anchor Books. p. 240. ISBN 9780385721707. OCLC 61254310.**^**Lowenstein 2000, pp. 124–25**^**Lowenstein 2000, p. xv**^**O'Rourke, Breffni (1997-09-09). "Eastern Europe: Could Asia's Financial Crisis Strike Europe?".*RadioFreeEurope/RadioLiberty*. Archived from the original on 2016-04-14. Retrieved 2015-08-22.**^**Bookstaber, Richard (2007).*A Demon Of Our Own Design*. USA: John Wiley & Sons. pp. 97–124. ISBN 978-0-470-39375-8.- ^
**a**Lowenstein 2000**b** **^**Lowenstein 2000, p. 99**^**Lowenstein 2000, p. 234**^**Lowenstein 2000, p. 211**^**Lowenstein 2000, pp. 203–04**^**Partnoy, Frank (2003).*Infectious Greed: How Deceit and Risk Corrupted the Financial Markets*. Macmillan. p. 261. ISBN 978-0-8050-7510-6.**^**Kathryn M. Welling, "Threat Finance: Capital Markets Risk Complex and Supercritical, Says Jim Rickards" Archived 2016-12-21 at the Wayback Machine*welling@weeden*(February 25, 2010). Retrieved May 13, 2011**^**Wall Street Journal, 25 September 1998**^**"Bloomberg.com: Exclusive". Archived from the original on 2007-09-30. Retrieved 2017-03-08.**^**"Lehman Says It's 'Solvent'".*Barron's*. Retrieved 2020-10-31.**^**Lowenstein 2000**^**http://eml.berkeley.edu/~webfac/craine/e137_f03/137lessons.pdf Archived 2021-03-04 at the Wayback Machine[*bare URL PDF*]**^**Lowenstein 2000, pp. 207–08**^**GAO/GGD-00-67R Questions Concerning LTCM and Our Responses Archived 2012-04-19 at the Wayback Machine General Accounting Office, February 23, 2000**^**Lowenstein 2000, p. 102**^**Lowenstein 2000, p. 235**^**Lowenstein 2000, p. 236**^**"John Meriwether to shut hedge fund - Bloomberg".*Reuters*. July 8, 2009. Archived from the original on 5 February 2021. Retrieved 11 January 2018.**^**Rimkus, Ron (2016-04-18). "Long-Term Capital Management".*Financial Scandals, Scoundrels & Crises*. CFA Institute. Archived from the original on 2021-04-23. Retrieved 2020-10-11.
## Bibliography
- Coy, Peter; Wooley, Suzanne (21 September 1998). "Failed Wizards of Wall Street". *Business Week*. Archived from the original on January 29, 1999. Retrieved 2006-09-04.
- Crouhy, Michel; Galai, Dan; Mark, Robert (2006). *The Essentials of Risk Management*. New York: McGraw-Hill Professional. ISBN 978-0-07-142966-5.
- Dunbar, Nicholas (2000). *Inventing Money: The story of Long-Term Capital Management and the legends behind it*. New York: Wiley. ISBN 978-0-471-89999-0.
- Jacque, Laurent L. (2010). *Global Derivative Debacles: From Theory to Malpractice*. Singapore: World Scientific. ISBN 978-981-283-770-7. Chapter 15: Long-Term Capital Management, pp. 245–273.
- Loomis, Carol J. (1998). "A House Built on Sand; John Meriwether's once-mighty Long-Term Capital has all but crumbled. So why did Warren Buffett offer to buy it?". *Fortune*. Vol. 138, no. 8.
- Lowenstein, Roger (2000). *When Genius Failed: The Rise and Fall of Long-Term Capital Management*. Random House. ISBN 978-0-375-50317-7.
- Weiner, Eric J. (2007). *What Goes Up, The Uncensored History of Modern Wall Street*. New York: Back Bay Books. ISBN 978-0-316-06637-2.
## Further reading
[edit]- "Eric Rosenfeld talks about LTCM, ten years later". MIT Tech TV. 2009-02-19. Archived from the original on 2020-11-11. Retrieved 2009-11-16.
- Siconolfi, Michael; Pacelle, Mitchell; Raghavan, Anita (1998-11-16). "All Bets Are Off: How the Salesmanship And Brainpower Failed At Long-Term Capital".
*The Wall Street Journal*. - "Trillion Dollar Bet".
*Nova*. PBS. 2000-02-08. Archived from the original on 2021-04-29. Retrieved 2017-09-04. - MacKenzie, Donald (2003). "Long-Term Capital Management and the Sociology of Arbitrage".
*Economy and Society*.**32**(3): 349–380. CiteSeerX 10.1.1.457.9895. doi:10.1080/03085140303130. S2CID 145790602.[*permanent dead link*] - Fenton-O'Creevy, Mark; Nicholson, Nigel; Soane, Emma; Willman, Paul (2004).
*Traders: Risks, Decisions, and Management in Financial Markets*. Oxford University Press. ISBN 9780199226450. - Gladwell, Malcolm (2002). "Blowing Up".
*The New Yorker*. Archived from the original on 2011-02-24. - MacKenzie, Donald (2006).
*An Engine, not a Camera: How Financial Models Shape Markets*. The MIT Press. ISBN 978-0-262-13460-6. - Poundstone, William (2005).
*Fortune's Formula: The Untold Story of the Scientific Betting System that Beat the Casinos and Wall Street*. Hill and Wang. ISBN 978-0-8090-4637-9. - Case Study: Long-Term Capital Management erisk.com
- Meriwether and Strange Weather: Intelligence, Risk Management and Critical Thinking austhink.org
- US District Court of Connecticut judgement on tax status of LTCM losses Archived 2009-03-27 at the Wayback Machine
- Michael Lewis – NYT – How the Eggheads Cracked-January 1999 Archived 2020-11-04 at the Wayback Machine
- Stein, M. (2003):
*Unbounded irrationality: Risk and organizational narcissism at Long Term Capital Management*, in: Human Relations 56 (5), S. 523–540.
| true | true | true | null |
2024-10-12 00:00:00
|
2002-01-10 00:00:00
| null |
website
|
wikipedia.org
|
Wikimedia Foundation, Inc.
| null | null |
41,738,596 |
https://bolt.new
|
bolt.new
| null |
Introducing bolt.new: Dev sandbox with AI from StackBlitz
What do you want to build?
Prompt, run, edit, and deploy full-stack web apps.
| true | true | true |
Prompt, run, edit & deploy web apps
|
2024-10-12 00:00:00
| null |
object
| null |
bolt.new
| null | null |
|
16,401,927 |
https://medium.com/@brandonmp/browsing-the-web-like-a-badass-fa3a9c51ea5e
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,182,409 |
http://techcrunch.com/2015/03/10/apples-latest-betrayal/
|
Apple's Latest Betrayal | TechCrunch
|
Matt Burns
|
“Seriously, fuck them,” wrote one tweeter called M.J. The person was speaking about Apple and the new MacBook the company recently announced. There are countless other tweets and comments with the same sentiment. Right now there’s visceral hate directed at the company. A swathe of consumers feel betrayed by the stark design of the new MacBook. Our original post on the topic was shared over 25,000 times. It’s a polarizing design.
The new MacBook thinks different. It has more in common with a tablet than most laptops. Think of it as an iPad that has a keyboard and runs OS X. And like the iPad, it only has one port, which is the cause of the outcry.
Most computers have several ports scattered around the frame. There’s usually one for charging, a couple USB ports for various tasks, and some sort of port to output video. The new MacBook combines all three into a single USB-C port. This means users will not be able to, say, charge the laptop and an iPhone at the same time or input data from a flash drive while outputting video to an external monitor. Sure you’re going to have some nice accessories to do all these things down the line but this barebones approach is a pretty hard sell in a world where some laptops still have a serial port.
This is Apple’s world and we just live in it.
To Apple’s credit, the company must see a market for such a computer. The low-power Intel chipset that powers the computer likely doesn’t provide enough oomph to play computer games but it should render GIFs just fine. This is a couch computer. It’s a Facebook and Twitter machine. It even looks like a great programming computer. Watch the Apple event yesterday. The company didn’t demonstrate any of its new software on the new MacBook including the Photos app. Simply put, the new MacBook isn’t for photo editing. It’s for Facebooking.
Expectations are high for Apple. Had a company like HP or Lenovo released a watered-down computer like the new MacBook, there likely wouldn’t have been an outcry, but rather a collective chuckle. For some reason, a swath of Apple fans expects the company to build every product to meet their needs. If it doesn’t, feelings of betrayal sneak in. This happened with the original MacBook Air.
Apple released the first MacBook Air in 2008. It cost $1,799 and, like the new MacBook, was a svelte wonder of technology. But it lacked ports. The industry cried foul, pointing out that it only had a power port, a single USB port and a Micro-DVI port. It was missing a DVD-ROM and Ethernet port, a travesty in an era of burgeoning Wi-Fi and the slow decline of physical media. In 2008 this was a big deal. Software was still shipped on disks and Wi-Fi was hard to find. Apple fans felt betrayed. They felt forgotten. If a customer wanted Apple’s latest and greatest machine, they would have to buy into interacting with a computer without a CD drive or wired Internet.
Eventually, Apple dropped Ethernet from its entire MacBook line and the MacBook Air is now the least expensive laptop Apple offers.
The new MacBook joins the MacBook Air and MacBook Pro. It’s not a replacement for either – at least not yet. But it bears a nameplate previously retired: MacBook. It’s not an Air, it’s not a Pro. It’s just a MacBook, which was long the company’s stalwart, low-cost machine against the rising tide of Microsoft Windows.
It’s highly likely that in a generation or two that Apple will drop the price of the MacBook to under a thousand. Will the MacBook Air survive? Maybe not. Apple is steadily making the MacBook Pro smaller. It’s easy to see a future where the MacBook will be the company’s only inexpensive laptop and a slightly slimmer MacBook Pro will be the other option if you want silly things like multiple USB ports, SD card slots and a MagSafe power adapter.
Until then, a 13-inch MacBook Air is a better buy than the new MacBook. The battery lasts nearly as long, the computer is more powerful and it has plenty of ports. Plus, nobody has ever said that they wished their MacBook Air was just a bit thinner. But maybe, soon, they will.
| true | true | true |
"Seriously, fuck them," read the tweet. The person was speaking about Apple and the new MacBook the company recently announced. There are countless other tweets and comments with the same sentiment. Right now there's visceral hate directed at the company. A swath of consumers feel betrayed by the stark design of the new MacBook. Our original post on the topic was shared over 25,000 times. For good reason, too.
|
2024-10-12 00:00:00
|
2015-03-10 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
16,717,479 |
https://turaku.com/blog/2018/03/why-turaku-a-new-password-manager.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
15,257,433 |
https://extranewsfeed.com/how-russia-created-the-most-popular-texas-secession-page-on-facebook-fd4dfd05ee5c
|
Must Read Media Technology Stories, Blogs, and Resources
| null |
Advertising Integrations That "Go Viral": my Journey in Attention Management and Marketing ResearchPublished on September 18, 2024
Why Web3 Projects Need Ongoing PR to Stay Relevant: The Promise vs. The RealityPublished on September 17, 2024
Porn Filters Compared: OpenDNS, Neustar, CleanBrowsing, Norton, Yandex and AdGuardPublished on December 22, 2017
Elena Shabanova, Senior Product Designer at PolyAI: Women in Tech InterviewPublished on April 18, 2024
Mastering SEO in the Era of Large Language Models: Evolving Tactics for LLM-Powered Search EnginesPublished on May 11, 2023
11 Ways Your Competitors Can Hit You With Negative SEO Attacks And How To Bolster Your DefensesPublished on July 29, 2022
SEO Metrics Tool Provider MOZ Refuses to Address Manipulated Domain Authority Scores (DA)Published on May 23, 2023
| true | true | true |
Whether it's a new social media app, live streams, or movies, there's always something to read or watch. Learn more about the media that's being consumed by the world.
|
2024-10-12 00:00:00
|
2024-10-06 00:00:00
| null | null |
hackernoon.com
|
Hackernoon
| null | null |
13,869,215 |
https://github.com/zone-eu/zone-mta/blob/master/README.md
|
zone-mta/README.md at master · zone-eu/zone-mta
|
Zone-Eu
|
| true | true | true |
📤 Modern outbound MTA cross platform and extendable server application - zone-eu/zone-mta
|
2024-10-12 00:00:00
|
2016-09-06 00:00:00
|
https://opengraph.githubassets.com/c9d3cf3419b0d6e03880fdaea3b0ac270b4bab299ba536d846f290bd75a7ca61/zone-eu/zone-mta
|
object
|
github.com
|
GitHub
| null | null |
1,936,955 |
http://blog.spiderlabs.com/2010/11/advanced-topic-of-the-week-mitigating-slow-http-dos-attacks.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
24,974,875 |
https://opensource.com/article/20/11/python-code-viztracer
|
Understand your Python code with this open source visualization tool
|
Tian Gao
|
It's challenging to understand your Python project as it gets larger and more complex. Even when you write the entire project, it's impossible to know how it works fully. Debugging and profiling your code is essential to better understanding it.
VizTracer is a tool to help you understand Python code by tracing and visualizing its execution. Without making any changes to your source code, VizTracer can log function entries/exits, function arguments/returns, and any arbitrary variables, then display the data using an intuitive front-end Google Trace-Viewer.
Here is an example of running a Monte Carlo tree search:
Every function is logged and visualized in stack style on a timeline so that you can see what is happening when you run a program. You can zoom in to see the details at any specific point:
VizTracer can also automatically log function arguments and return value; you can click on the function entry and see the detail info:
Or you can create a whole new signal and use it to log variables. For example, this shows the cost value when you do a gradient descent:
In contrast to other tools with complicated setups, VizTracer is super-easy to use and does not have any dependencies. You can install it from pip with:
`pip install viztracer`
And trace your program by entering (where `<your_script.py>`
is the name of your script):
`viztracer <your_script.py>`
VizTracer will generate an HTML report in your working directory that you can open in Chrome.
VizTracer offers other advanced features, such as filters, which you can use to exclude the functions that you do not want to trace so that you get a cleaner report. For example, to include only the functions defined in files you are interested in:
`viztracer --include_files ./ --run <your_script.py>`
To record the function arguments and return value:
`viztracer --log_function_args --log_return_value <your_script.py>`
To log any arbitrary variables matching a certain regex:
```
# log variables starts with a
viztracer --log_var a.* --run <your_script.py>
```
You can get other features, like custom events to log numeric values and objects, by making minor modifications to your source code.
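For example, instead of launching the tracer from the command line, you can embed it directly in your program and trace just the part you care about. A minimal sketch (the `run_workload` function is only a stand-in for your own code):

```
from viztracer import VizTracer

def run_workload():
    # Placeholder for the code you actually want to profile.
    return sum(i * i for i in range(100_000))

# Everything inside the "with" block is traced and written to result.html,
# which you can open in your browser like the default report.
with VizTracer(output_file="result.html"):
    run_workload()
```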
VizTracer also includes a virtual debugger (vdb) that can debug VizTracer's log file. vdb debugs your executed code (much like pdb) so that you can understand the code flow. Helpfully, it supports running back in time because it knows everything that happened.
Unlike some prototypes, VizTracer implements its core in pure C, which significantly reduces the overhead to a level similar to cProfile.
VizTracer is open source, released under the Apache 2.0 License, and supports all common operating systems platforms (Linux, macOS, and Windows). You can learn more about its features and access its source code on GitHub.
## 1 Comment
| true | true | true |
It's challenging to understand your Python project as it gets larger and more complex. Even when you write the entire project, it's impossible to know how it works fully.
|
2024-10-12 00:00:00
|
2020-11-02 00:00:00
| null |
opensource.com
|
Opensource.com
| null | null |
|
14,731,030 |
https://arstechnica.com/tech-policy/2017/07/state-department-concocting-fake-intellectual-property-twitter-feud/
|
State Department concocting “fake” intellectual property “Twitter feud”
|
David Kravets
|
The US State Department wants to team up with other government agencies and Hollywood in a bid to create a "fake Twitter feud" about the importance of intellectual property rights. As part of this charade, the State Department's Bureau of Economic Affairs says it has been seeking the participation of the US Office of Intellectual Property Enforcement, the Motion Picture Association of America, the Recording Industry Association of America, the US Patent and Trademark Office, and "others."
To make the propaganda plot seem more legitimate, the State Department is trying to enlist Stanford Law School and "similar academic institutions" to play along on the @StateDept feed on Twitter.
"We're not going to participate," Mark Lemley, the director of the Stanford Program in Law, Science, and Technology at Stanford Law School, told Ars in an e-mail. He recently received an e-mail (PDF) and a telephone call from the State Department seeking his assistance.
"Apparently there is not enough fake news for the US government," Lemley told his Facebook followers. On the Facebook post, he redacted the name of the official who sent him the letter out of privacy interests. The RIAA declined comment, as did the trademark office. The MPAA said it is not participating.
## Plotting propaganda
The propaganda plot became public on July 4, when Lemley posted the State Department's plan on his Facebook account. In the State Department e-mail to Lemley, the agency mentions a phone message left with the professor, and it discusses "fake" news, saying:
So a little bit of a recap from the message that I left you this morning. The Bureau of Economic and Business Affairs wants to start a fake Twitter feud. For this feud, we would like to invite you and other similar academic institutions to participate and throw in your own ideas!
Wary over whether this was actually "fake news" about a "fake Twitter feud," we asked the State Department for comment. The agency confirmed the authenticity of the June 26 e-mail sent to Lemley—in a roundabout way, of course.
| true | true | true |
“Our public diplomacy office is still settling on a hashtag,” State Department says.
|
2024-10-12 00:00:00
|
2017-07-06 00:00:00
|
article
|
arstechnica.com
|
Ars Technica
| null | null |
|
21,929,494 |
https://wiki.tcl-lang.org/page/tcl-duktape
|
tcl-duktape
| null |
What | tcl-duktape |
Where | https://github.com/dbohdan/tcl-duktape |
Description | A binary Tcl extension that provides bindings for Duktape, an embedded JavaScript interpreter. |
Online demo | https://rkeene.dev/js-repl/ hosts an integrated Tcl/JS REPL shell. |
Platforms | Tested on Linux and FreeBSD. |
Prerequisites | Tcl 8.5 or newer, TclOO required for the OO wrapper. |
Updated | 2023-08-18 (v0.11.1) |
License | MIT |
Contact | dbohdan |
Duktape is just a pair of .c/.h files, which makes the package easy to build. tcl-duktape allows you to call JavaScript code from Tcl and exposes a jsproc interface similar to the cproc interface in Critcl that allows you to write procedures in JavaScript. Included in the package is a TclOO API wrapper for tcl-duktape objects and one for JSON objects.
```
#!/usr/bin/env tclsh
package require duktape
package require duktape::oo

set duktapeObj [::duktape::oo::Duktape new]

$duktapeObj jsproc ::add {{a 0 number} {b 0 number}} {
    return a + b;
}
puts [add 1 2]

$duktapeObj jsmethod cos {{deg 0 number}} {
    return Math.cos(deg * Math.PI / 180);
}
puts [$duktapeObj cos 360]

$duktapeObj destroy
```
wiwo 17.02.2017: How can modules be loaded? When I try to load a JS file with require, I get "ReferenceError: identifier 'require' undefined"
dbohdan 2017-02-23: Duktape can't access the file system and thus has no Node.js-style require(). What you can do is read the module file in Tcl code and then have an interpreter instance eval it. There is an example that does this on the project wiki.
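A minimal sketch of that approach, assuming the OO wrapper's object exposes an `eval` method and that `mymodule.js` defines plain globals rather than relying on `require()`:

```
# Read the JavaScript module source with Tcl...
set f [open "mymodule.js" r]
set js [read $f]
close $f

# ...then evaluate it inside the Duktape interpreter instance.
# (Assumes $duktapeObj was created with [::duktape::oo::Duktape new]
# and that the wrapper provides an eval method.)
$duktapeObj eval $js
```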
| true | true | true |
Tclers wiki
|
2024-10-12 00:00:00
|
2024-05-08 00:00:00
| null | null | null | null | null | null |
11,080,495 |
http://www.fool.com/investing/general/2016/02/10/tesla-motors-inc-earnings-demand-is-robust-operati.aspx
|
Tesla Motors, Inc. Earnings: Demand Is Robust, Operating Cash Flow Jumps | The Motley Fool
|
Daniel Sparks
|
Amid a volatile week for the stock market -- particularly for growth stocks -- **Tesla Motors** (TSLA -8.78%) stock looks poised to contribute its second day of big swings to this wild week when the market opens tomorrow. Shares rose as much as 14% during after-market hours and are up 9% at the time of this writing. An upbeat update on demand, production, cash flow, and guidance may be some of the items that have turned investors optimistic.
But first, here's a review of Tesla's financial results for the quarter, compared against the year-ago quarter and the previous quarter.
| Metric | Q4 2015 | Q4 2014 (year-ago quarter) | Q3 2015 (previous quarter) |
|---|---|---|---|
| Revenue (GAAP) | $1.21 billion | $957 million | $937 million |
| EPS (GAAP) | ($2.44) | ($0.86) | ($1.78) |
| Revenue (non-GAAP) | $1.75 billion | $1.1 billion | $1.24 billion |
| EPS (non-GAAP) | ($0.87) | ($0.13) | ($0.58) |
While Tesla's revenue is growing, losses continued to widen on both a GAAP and non-GAAP basis. The wider loss is primarily attributable to a lower gross profit margin, which was pressured by "unfavorable labor and overhead allocations associated with lower-than-planned Model X production volume, and non-recurring asset impairment charges for obsolete painting equipment," as well as for a transition to "improved production processes and designs."
Here are some other notable highlights about the quarter's metrics.
**Demand is robust:** Model X reservations increased 75% compared to the prior year "despite extremely limited initial exposure for this vehicle," Tesla noted. And orders for the new Model S increased 35% during Q4. Overall, Tesla said it sees no "perceptible impact" from falling gas prices on order growth and that order rates for its vehicles have "continued to increase."
**Cash flow is looking up:** Tesla's cash flow statement is finally turning upward, with the company reporting $179 million in operating cash flow.
**Overall production:** Deliveries during Q4 were up 50% sequentially and 77% compared to the prior year, supported by record production.
**Model X production:** Only a few hundred Model X units were delivered during Q4, but the company says it is "now significantly increasing our Model X production throughout the balance of the quarter."
**Looking ahead**
Here's a list of some of the company's expectations during the year.
- Tesla anticipates approaching a production rate of 1,000 Model X units per week in Q2.
- The company expects to deliver 80,000 to 90,000 Model S and X vehicles combined in 2016, up from about 50,600 vehicles in 2015.
- Management anticipates its automotive gross profit margin will improve in 2016, with Model S approaching 30% and Model X approaching 25% by the year's end.
- The company still anticipates its capital expenditures during 2016 will actually be lower than in 2015.
- Tesla expects to be net-cash-flow positive for the full year, as well as achieve non-GAAP profitability for the full year. Plus, the company expects to exit the year with GAAP profitability.
- The Model 3 will be unveiled on March 31 and management says the important vehicle is on schedule for production to begin in late 2017.
| true | true | true |
Here are the key points behind Tesla's fourth-quarter earnings report.
|
2024-10-12 00:00:00
|
2016-02-10 00:00:00
|
article
|
fool.com
|
The Motley Fool
| null | null |
|
10,925,071 |
http://www.infront.com/blogs/the-infront-blog/2015/12/30/seo-and-adwords
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,434,533 |
http://tenderlovemaking.com/2012/06/18/removing-config-threadsafe.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
30,667,493 |
https://medium.com/@nurijanian/how-elon-musk-charlie-munger-aristotle-had-you-fooled-caea52c0943c
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
31,903,068 |
https://www.emergingtechbrew.com/stories/2022/06/24/at-one-indoor-farming-startup-the-sun-is-imitated-and-embraced
|
At one indoor farming startup, the sun is imitated and embraced
|
Jordan McDonald
|
Fully indoor farms spend a lot of their time and energy replicating the sun, but Kentucky-based AppHarvest tries to have it both ways.
The startup, founded in 2018, relies on sunlight during the day and high-powered lighting at night, unlike the fully indoor farms that only use artificial lighting. (Don’t worry, AppHarvest still has over 36,000 LEDs in its Morehead, Kentucky, plant.) It aims to enhance the traditional greenhouse concept with climate-control technology and a closed-loop water-recycling regimen that includes harvesting rainwater—a process which it says uses up to 90% less water compared to traditional farms.
AppHarvest claimed in its 2020 sustainability report that relying on the sun to help grow its plants in daylight hours has led to an almost 20% reduction in its electricity consumption compared with standard HPS lights, while ensuring that lights are only used 40% of the available time. LED lighting is typically the main energy expense for vertical farms, and energy is itself typically the main cost.
“We’ve taken this approach because we think we can get a lower cost in the long term by being able to use free sunlight and free rainwater. But there’s going to be applications, they’re different everywhere in the world,” Jonathan Webb, co-founder of AppHarvest, told Emerging Tech Brew. “If you’re in Anchorage, Alaska, it probably makes sense to do warehouse-style farming. If you’re in rural America, you might want to use the sunlight and rainwater that’s already available.”
AppHarvest isn’t the only indoor growing operation using this combined natural-LED lighting setup. Companies like Windset Farms and Mastronardi Produce, which has a partnership with AppHarvest, also use high-tech greenhouses to grow staple crops like tomatoes. But AppHarvest operates the nation’s biggest hydroponic greenhouse, opened last year, and plans to open 11 more facilities by 2025.
AppHarvest went public via SPAC in January 2021 at a valuation of over $1 billion, and in Q1 2022, it recorded $5.2 million in net sales, more than double the $2.3 million it generated in Q1 2021. But the company wrote in its Q1 2022 earnings release that it expects to incur losses “for the foreseeable future,” as it continues to build more facilities into 2025.
##### Keep up with the innovative tech transforming business
Tech Brew keeps business leaders up-to-date on the latest innovations, automation advances, policy shifts, and more, so they can make informed decisions about tech.
One challenge for the company is that while AppHarvest relies in large part on the most renewable energy source of all—the sun itself—nearly all of its electricity comes from fossil fuel-based sources, per its 2021 sustainability report published in June. Other large, fully indoor vertical farming operations, like Bowery Farming and Upward Farms, claim their farms run on 100% renewable energy.
In addition to pushing its local utilities and transmission organization, PJM, to adopt renewables, the company says it has entered into an agreement with Schneider Electric to advise them on purchasing renewable energy and has completed a feasibility study for on-site solar, should access to grid-level renewables move too slowly.
“We know that we have an uphill battle in coal country to take this advocacy position lobbying for renewable energy, but that’s part of our mission to forge a new sector and new opportunity for Central Appalachia,” Travis Parman, chief communications officer at AppHarvest, told Emerging Tech Brew.
| true | true | true |
AppHarvest uses both artificial and real sunlight, which it says lowers costs.
|
2024-10-12 00:00:00
|
2022-06-24 00:00:00
|
website
|
emergingtechbrew.com
|
Morning Brew
| null | null |
|
3,526,824 |
http://blogs.perl.org/users/adam_flott/2012/01/somafm.html
|
Adam Flott
|
Adam Flott
|
I like
I don't like
So I made something simple to scratch my itch.
Download from GitHub
I blog about Perl.
## Leave a comment
| true | true | true | null |
2024-10-12 00:00:00
|
2012-01-29 00:00:00
| null | null | null |
soma.fm
| null | null |
291,427 |
http://sethgodin.typepad.com/seths_blog/2008/08/every-sunday-is.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
10,215,708 |
http://arstechnica.com/tech-policy/2015/08/a-chicken-sandwich-cannot-be-copyrighted-judge-rules/
|
A chicken sandwich cannot be copyrighted, court rules
|
Jon Brodkin
|
There are many things you can copyright, but a chicken sandwich is not one of them, a US appeals court panel ruled Friday.
Because of the ruling, a former employee of a fried chicken franchise is not entitled to a percentage of the profits from a sandwich he "authored," wrote Chief Judge Jeffrey Howard in the decision of the US Court of Appeals for the First Circuit. The plaintiff, Norberto Colón Lorenzana, had filed a complaint seeking "All the earnings produced by his creation"—an amount not less than $10 million.
"The sandwich consists of a fried chicken breast patty, lettuce, tomato, American cheese, and garlic mayonnaise on a bun," the judge wrote. Colón had claimed that both the recipe and the name of the so-called Pechu Sandwich "is a creative work, of which he is the author," the judge noted.
Colón failed to persuade a district court, which pointed out that the Copyright Act protects works of authorship in eight categories, none of which includes chicken breasts placed between two slices of bread. The appeals court upheld the ruling.
"A recipe—or any instructions—listing the combination of chicken, lettuce, tomato, cheese, and mayonnaise on a bun to create a sandwich is quite plainly not a copyrightable work," Howard wrote. The name of the food item is also not copyrightable, because copyright protection cannot be extended to "words and short phrases, such as names, titles, and slogans," Howard wrote.
| true | true | true |
Man who put chicken inside a bun sought $10 million for theft of creative work.
|
2024-10-12 00:00:00
|
2015-08-25 00:00:00
|
article
|
arstechnica.com
|
Ars Technica
| null | null |
|
25,133,405 |
https://halifaxtheforum.org/china-handbook/en/
| null | null |
“A senior American security official has suggested it remains to be seen whether China will honour an agreement reached this year on state-sponsored cyberattacks on private businesses…”
**CTV NEWS:** Ending state-sponsored cyberattacks in China’s best interests: U.S. official
The Canadian Press
| true | true | true | null |
2024-10-12 00:00:00
|
2015-11-21 00:00:00
| null |
webpage
| null |
Halifax
| null | null |
302,645 |
http://technology.timesonline.co.uk/tol/news/tech_and_web/article4742147.ece
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
29,784,123 |
https://petapixel.com/2022/01/03/samsungs-new-tvs-let-you-buy-sell-and-display-photography-nfts/
|
Samsung's New TVs Let You Buy, Sell, and Display Photography NFTs
|
Jaron Schneider
|
# Samsung’s New TVs Let You Buy, Sell, and Display Photography NFTs
As part of a series of announcements it made for the Consumer Electronics Show (CES), Samsung unveiled that it would be adding an NFT marketplace into its smart TV system that would allow users to browse, buy, sell, and display art on Samsung TVs.
Samsung’s new Smart TVs will ship with what it calls its new “Smart Hub,” which is billed as a system that puts content curation and discovery front and center of the viewing experience. The company says that it will guide users to their favorite content or help them discover new content with less time spent through traditional search. The Smart Hub has three main modes — Media, Gaming, and Ambient — and can quickly transition between each. Samsung hasn’t clarified how the Smart Hub works with its current Smart TV application — called Tizen — or if it replaces it.
The Hub features four major features: the gaming hub (which is a way to discover and play games through televisions directly and is supported by Samsung’s partnership with NVIDIA GeForce Now, Stadia and Utomik), a “Watch Together” feature, Smart Calibration (an intuitive, integrated platform for discovering, purchasing and trading digital artwork through MICRO LED, Neo QLED and The Frame), and the NFT Platform.
## Samsung’s Integrated NFT Platform
The introduction of the NFT Platform marks the first time a major television manufacturer has decided to support non-fungible tokens in any capacity, and it provides a more tangible way for NFT owners to enjoy their purchases.

The company's NFT aggregation platform is built into this system and allows users to browse, buy, sell, and display NFT art directly through a compatible Samsung television. The company says the platform it built is intuitive and fully integrated. Any art bought or displayed is supported by the aforementioned Samsung Smart Calibration feature, which automatically adjusts the display settings “to the creator’s preset values” to ensure the art appears exactly, or as close as possible to, how the artist intended. *The Verge* compares the idea to Dolby Vision or Netflix Calibrated Modes as an example of how this might work.
## Availability and Compatibility
The NFT Marketplace feature will be supported on the company’s 2022 lineup of MICRO LED, Neo QLED, and The Frame television models. Samsung says that the platform is an aggregator, and as such, it is assumed that the platform isn’t just supported by one NFT marketplace, but several. The company hasn’t specified which marketplaces it will support, but that information should become more available leading up to the commercial availability of the televisions.
| true | true | true |
NFTs in your living room.
|
2024-10-12 00:00:00
|
2022-01-03 00:00:00
|
article
|
petapixel.com
|
PetaPixel
| null | null |
|
6,909,792 |
http://simoneloru.com/glass/?hn_dec
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,833,365 |
https://github.com/RobKohr/weighted-task-chooser
|
GitHub - RobKohr/weighted-task-chooser: Will select a random (weighted) task out of a to do list in a text file (or md file)
|
RobKohr
|
So you have a list of tasks to do in a text or markdown file:
```
- [ ] Task 1
- [ ] Task 2
- [4] Task 3
- [ ] Subtask of 3
- [ ] Task 4
Some notes on task 4
- [x] Task 5 - done, don't ever pick this
- [ ] Task 6
```
This will randomly choose one of the parent tasks from that file. It also does weighting, where each task has a default weight based on the value in the brackets.
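For illustration, the usual way to make such a weighted pick is to sum the weights and choose a random point in that range. A minimal sketch in plain Node-style JavaScript (not the repository's actual code; parsing of the task file is omitted):

```
// tasks: e.g. [{ text: "Task 1", weight: 1 }, { text: "Task 3", weight: 4 }]
function pickWeighted(tasks) {
  var total = tasks.reduce(function (sum, t) { return sum + t.weight; }, 0);
  var r = Math.random() * total;
  for (var i = 0; i < tasks.length; i++) {
    r -= tasks[i].weight;
    if (r < 0) return tasks[i]; // higher weight => larger share of the range
  }
  return tasks[tasks.length - 1]; // guard against floating-point edge cases
}
```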
- npm install
- cp paths.txt.example paths.txt
- add a labeled path in paths.txt. Paths can be relative, absolute, or inside of this project
- node index.js PATH_NAME # to run this
- All things to randomize must begin with "- [ ]" or "- [42]" where 42 is the randomized weight of the task
- tasks without weight are defaulted to 1
- tasks with "- [x]" or "- [-]" are skipped
- Only parent tasks are the ones that are randomized, and they are returned with their children and any other content before the next parent tasks
- a child is something that is indented
- This was built with markdown in mind, but this could be any plain text file where lines start with "- [ ]"
- No space can exist at the beginning of a line
- Haven't tested with windows files that have different new line characters. Probably needs to modify the regex line splitter for that. (feel free to create a PR if you do this)
Note: you can also use "* [ ]" instead of "- [ ]" if that is your preference.
| true | true | true |
Will select a random (weighted) task out of a to do list in a text file (or md file) - RobKohr/weighted-task-chooser
|
2024-10-12 00:00:00
|
2020-04-09 00:00:00
|
https://opengraph.githubassets.com/39e94bdd2ef07f48bb17aa740cd5b408310d9cbdf217d2fc83a89d301129c4ea/RobKohr/weighted-task-chooser
|
object
|
github.com
|
GitHub
| null | null |
40,545,497 |
https://www.youtube.com/watch?v=qiOtinFFfk8
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
17,980,158 |
https://medium.com/loon-for-all/1-connection-7-balloons-1-000-kilometers-74da60b9e283
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,306,602 |
http://www.quora.com/Computer-Programming/What-are-the-three-most-important-programming-languages-to-learn
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,742,376 |
http://mashable.com/2010/09/29/new-twitter-golden-ratio/
|
New Twitter Design Based on the Golden Ratio [IMAGE]
|
Brenna Ehrlich
|
From Pythagoras to Darren Aronofsky, the Golden Ratio -- an irrational mathematical constant found in everything from art to architecture -- has come pretty far. Lately, it was apparently the basis of the design for the new Twitter.
Check out the above picture from Twitter’s Creative Director Doug Bowman, found on Twitter's Flickr page. According to the caption: "To anyone curious about #NewTwitter proportions, know that we didn't leave those ratios to chance. This, of course, only applies to the narrowest version of the UI. If your browser window is wider, your details pane will expand to provide greater utility, throwing off these proportions. But the narrowest width shows where we started, ratio-wise."
Personally, I think the inclusion of the mind-bending and mythical ratio is rather elegant (but then I have a weird obsession with hypertexts like Fibonacci's Daughter). What do you think of the new Twitter design?
| true | true | true |
New Twitter Design Based on the Golden Ratio [IMAGE]
|
2024-10-12 00:00:00
|
2010-09-29 00:00:00
|
article
|
mashable.com
|
Mashable
| null | null |
|
10,215,871 |
http://meta.stackoverflow.com/questions/303865/warlords-of-documentation-a-proposed-expansion-of-stack-overflow?cb=1
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
33,011,473 |
https://news.livebook.dev/how-to-query-and-visualize-data-from-amazon-athena-using-livebook-4dfQ5y
|
How to query and visualize data from Amazon Athena using Livebook - Livebook.dev The Livebook Blog
| null |
# How to query and visualize data from Amazon Athena using Livebook
Livebook has built-in integrations with many data sources, including Amazon Athena.
In this blog post, you'll learn how to use Livebook to connect to Amazon Athena, execute a SQL query against it, and visualize the data.
You can also run this tutorial inside your Livebook instance by clicking the button below:
## Connecting to Amazon Athena using the Database connection Smart cell
To connect to Amazon Athena, you'll need the following info from your AWS account:
- AWS access key ID
- AWS secret access key
- Athena database name
- S3 bucket to write query results to
Now, let's create an Amazon Athena connection using a Database connection Smart cell. Click the options "Smart > Database connection > Amazon Athena":
Once you've done that, you'll see a Smart cell with input fields to configure your Amazon Athena connection:
Fill in the following fields to configure your connection:
- Add your AWS access key ID
- Add your AWS secret access key
- Add your Athena database name
- Add your S3 bucket in the "Output Location" field
Now, click the "Evaluate" icon to run that Smart cell. Once you've done that, the Smart cell will configure the connection and assign it to a variable called `conn`
.
## Querying Amazon Athena using the SQL Query Smart cell
Before querying Athena, we need to have an Athena table to query from. So, let's create one.
We'll create an Athena table based on a public dataset published on AWS Open Data. We'll use the GHCN-Daily dataset, which contains climate records measured by thousands of climate stations worldwide. Let's create an Athena table called `stations`
with metadata about those climate stations.
To do that, add a new SQL Query Smart cell by clicking the options "Smart > SQL Query":
Copy and paste the SQL code below to the SQL Query cell:
CREATE EXTERNAL TABLE IF NOT EXISTS default.stations (
  station_id string,
  latitude double,
  longitude double,
  elevation double,
  name string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  'input.regex'='([^ ]*) *([^ ]*) *([^ ]*) *([^ ]*) *(.+)$')
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://livebook-blog/amazon-athena-integration'
TBLPROPERTIES (
  'typeOfData'='file')
Now, click the "Evaluate" icon to run that Smart cell. Now we have an Athena table to query from.
Add a new SQL Query Smart cell and copy and paste the following SQL query to it:
select * from default.stations order by station_id
Evaluate that cell. It will query your Athena table and assign the results to a `result2`
variable. You'll see the result of that query in a tabular format like this:
## Visualizing geographic coordinates data using the Map Smart cell
Notice that the table we created has each climate station's latitude and longitude information. We can visualize that data with a map visualization using the Map Smart cell. Let's do that.
Add a Map Smart cell by clicking the options "Smart > Map":
Your Map Smart cell will look something like this:
Use your Map Smart cell to configure:
- the layer name
- the data source
- the coordinates format
- the longitude field
- the latitude field
Once you've configured the Smart cell, you can evaluate it, and it will build a map for you. It will look something like this:
That's it! Using Livebook Smart cells, you can connect to an Amazon Athena database, execute a SQL query against it and visualize the result.
| true | true | true |
In this blog post, you'll learn how to use Livebook to connect to Amazon Athena, execute a SQL query against it, and visualize the data.
|
2024-10-12 00:00:00
|
2022-10-01 00:00:00
|
https://img.announcekit.app/21466e49e82e984390f3ae792670d206?s=c7e33f86da1f2d766a8262fa53be63df
|
article
|
livebook.dev
|
How to query and visualize data from Amazon Athena using Livebook - Livebook.dev
| null | null |
21,421,996 |
https://blog.mozilla.org/nfroyd/2019/11/01/evaluating-bazel-for-building-firefox-part-2/
|
Nathan's Blog writing code to help other people write code
|
Boris
|
In our last post, we highlighted some of the advantages that Bazel would bring. The remote execution and caching benefits Bazel bring look really attractive, but it’s difficult to tell exactly how much they would benefit Firefox. I looked for projects that had switched to Bazel, and a brief summary of each project’s experience is written below.
The Bazel rules for nodejs highlight Dataform’s switch to Bazel, which took about 2 months. Their build involves some combination of “NPM packages, Webpack builds, Node services, and Java pipelines”. Switching plus enabling remote caching reduced the average time for a build in CI from 30 minutes to 5 minutes; incremental builds for local development have been “reduced to seconds from minutes”. It’s not clear whether the local development experience is also hooked up to the caching infrastructure as well.
Pinterest recently wrote about their switch to Bazel for iOS. While they call out remote caching leading to “build times [dropping] under a minute and as low as 30 seconds”, they state their “time to land code” only decreased by 27%. I wasn’t sure how to reconcile such fast builds with (relatively) modest decreases in CI time. Tests have gotten a lot faster, given that test results can be cached and reused if the tests in question have their transitive dependencies unchanged.
One of the most complete (relatively speaking) descriptions I found was Redfin’s switch from Maven to Bazel, for building a large amount of JavaScript modules and Java code, nearly 30,000 files in all. Their CI builds went from 40-90 minutes to 5-6 minutes; in fairness, it must be mentioned that their Maven builds were not parallelized (for correctness reasons) whereas their Bazel builds were. But it’s worth highlighting that they managed to do this incrementally, by generating Bazel build definitions from their Maven ones, and that the quoted build times did *not* enable caching. The associated tech talk slides/video indicates builds would be roughly in the 1-2 minute range with caching, although they hadn’t deployed that yet.
None of the above accounts talked about how long the conversion took, which I found peculiar. Both Pinterest and Redfin called out how much more reliable their builds were once they switched to Bazel; Pinterest said, “we haven’t performed a single clean build on CI in over a year.”
In some negative results, which are helpful as well, Dropbox wrote about evaluating Bazel for their Android builds. What’s interesting here is that other parts of Dropbox are heavily invested in Bazel, so there’s a lot of in-house experience, and that Bazel was significantly faster than their current build system (assuming caching was turned on; Bazel was significantly *slower* for clean builds without caching). Yet Dropbox decided to not switch to Bazel due to tooling and development experience concerns. They did leave open the possibility of switching in the future once the ecosystem matures.
The oddly-named Bazel Fawlty describes a conversion to Bazel from Go’s native tooling, and then a switch back after a litany of problems, including slower builds (but faster tests), a poor development experience (especially on OS X), and various things not being supported in Bazel leading to the native Go tooling still being required in some cases. This post was also noteworthy for noting the amount of porting effort required to switch: eight months plus “many PR’s accepted into the bazel go rules git repo”. I haven’t used Go, but I’m willing to discount some of the negative experience here due to the native Go tools being so good.
Neither one of these negative experiences translate exactly to Firefox: different languages/ecosystems, different concerns, different scales. But both of them cite the developer experience specifically, suggesting that not only is there a large investment required to actually do the switchover, but you also need to write tooling around Bazel to make it more convenient to use.
Finally, a 2018 BazelCon talk discusses two Google projects that made the switch to Bazel and specifically to use remote caching and remote execution on Google’s public-facing cloud infrastructure: Android Studio and TensorFlow. (You may note that this is the first instance where somebody has called out supporting remote execution as part of the switch; I think that implies getting a build to the point of supporting remote execution is more complicated than just supporting remote caching, which makes a certain amount of sense.) Android Studio increased their test presubmit coverage by 4x, presumably by being able to run more than 4x test jobs than previously due to remote execution. In the same vein, TensorFlow decreased their build and test times by 80%, and they could use significantly less powerful machines to actually run the builds, given that large machines in the cloud were doing the actual heavy lifting.
Unfortunately, I don’t think expecting those same reductions in test time, were Firefox to switch to Bazel, is warranted. I can’t speak to Android Studio, but TensorFlow has a number of unit tests whose test results can be cached. In the Firefox context, these would correspond to cppunittests, which a) we don’t have that many of and b) don’t take that long to run. The bulk of our tests depend in one way or another on kitchen-sink-style artifacts (e.g. libxul, the JS shell, omni.ja) which essentially depend on everything else. We could get some reductions for OS-specific modifications; Windows-specific changes wouldn’t require re-running OS X tests, for instance, but my sense is that these sorts of changes are not common enough to lead to an 80% reduction in build + test time. I suppose it’s also possible that we could teach Bazel that e.g. devtools changes don’t affect, say, non-devtools mochitests/reftests/etc. (presumably?), which would make more test results cacheable.
I want to believe that Bazel + remote caching (+ remote execution if we could get there) will bring Firefox build (and maybe even test) times down significantly, but the above accounts don’t exactly move the needle from belief to certainty.
Bazel Fawlty is named after the main character from Fawlty Towers, an English sitcom from the 1970s: https://en.wikipedia.org/wiki/Fawlty_Towers
Recent discussion on LWN is interesting, and mostly negative
https://lwn.net/Articles/802541/
I don’t find it surprising that uncached/non-remote-executed builds are significantly slower with Bazel : all that isolation and complete dependency specification enforcement must have quite an overhead. I don’t know how exactly they do it (chroot, symlinks, copying?) but it must be significant, especially on something like Windows.
| true | true | true | null |
2024-10-12 00:00:00
|
2019-11-01 00:00:00
| null | null |
mozilla.org
|
blog.mozilla.org
| null | null |
8,551,427 |
https://github.com/kenwheeler/mcfly
|
GitHub - kenwheeler/mcfly: Flux architecture made easy
|
Kenwheeler
|
Flux Architecture Made Easy
*What is McFly?*
When writing ReactJS apps, it is enormously helpful to use Facebook's Flux architecture. It truly complements ReactJS' unidirectional data flow model. Facebook's Flux library provides a Dispatcher, and some examples of how to write Actions and Stores. However, there are no helpers for Action & Store creation, and Stores require 3rd party eventing.
McFly is a library that provides all 3 components of Flux architecture, using Facebook's Dispatcher, and providing factories for Actions & Stores.
Check out this JSFiddle Demo to see how McFly can work for you:
McFly can be downloaded from:
http://kenwheeler.github.io/mcfly/McFly.js
McFly uses Facebook Flux's dispatcher. When McFly is instantiated, a single dispatcher instance is created and can be accessed like shown below:
```
var mcFly = new McFly();
return mcFly.dispatcher;
```
In fact, all created Actions & Stores are also stored on the McFly object as `actions`
and `stores`
respectively.
McFly has a **createStore** helper method that creates an instance of a Store. Store instances have been merged with EventEmitter and come with **emitChange**, **addChangeListener** and **removeChangeListener** methods built in.
When a store is created, its methods parameter specifies what public methods should be added to the Store object. Every store is automatically registered with the Dispatcher and the `dispatcherID`
is stored on the Store object itself, for use in `waitFor`
methods.
Creating a store with McFly looks like this:
```
var _todos = [];
function addTodo(text) {
_todos.push(text);
}
var TodoStore = mcFly.createStore({
getTodos: function() {
return _todos;
}
}, function(payload){
var needsUpdate = false;
switch(payload.actionType) {
case 'ADD_TODO':
addTodo(payload.text);
needsUpdate = true;
break;
}
if (needsUpdate) {
TodoStore.emitChange();
}
});
```
Use `Dispatcher.waitFor`
if you need to ensure handlers from other stores run first.
```
var mcFly = new McFly();
var Dispatcher = mcFly.dispatcher;
var OtherStore = require('../stores/OtherStore');
var _todos = [];
function addTodo(text, someValue) {
_todos.push({ text: text, someValue: someValue });
}
...
case 'ADD_TODO':
Dispatcher.waitFor([OtherStore.dispatcherID]);
var someValue = OtherStore.getSomeValue();
addTodo(payload.text, someValue);
break;
...
```
Stores are also created with a ReactJS component mixin that adds and removes store listeners that call a **storeDidChange** component method.
Adding Store eventing to your component is as easy as:
```
var TodoStore = require('../stores/TodoStore');
var TodoApp = React.createClass({
mixins: [TodoStore.mixin],
...
```
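The mixin calls a `storeDidChange` method on your component whenever the store emits a change, so the component usually re-reads the store's public methods there. A minimal sketch of how that might look (the `getTodos` call matches the store defined earlier; everything else is illustrative):

```
var TodoStore = require('../stores/TodoStore');

var TodoApp = React.createClass({

  mixins: [TodoStore.mixin],

  getInitialState: function() {
    return { todos: TodoStore.getTodos() };
  },

  // Invoked by the McFly mixin whenever TodoStore emits a change.
  storeDidChange: function() {
    this.setState({ todos: TodoStore.getTodos() });
  },

  render: function() {
    return React.createElement('ul', null,
      this.state.todos.map(function(text, i) {
        return React.createElement('li', { key: i }, text);
      })
    );
  }
});
```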
McFly's **createActions** method creates an Action Creator object with the supplied singleton object. The supplied methods are inserted into a Dispatcher.dispatch call and returned with their original name, so that when you call these methods, the dispatch takes place automatically.
Adding actions to your app looks like this:
```
var mcFly = require('../controller/mcFly');
var TodoActions = mcFly.createActions({
addTodo: function(text) {
return {
actionType: 'ADD_TODO',
text: text
}
}
});
```
All action methods return promise objects so that components can respond to long-running operations. The promise will be resolved with no parameters, as information should travel through the dispatcher and stores. To reject the promise, return a falsy value from the action's method. The dispatcher will not be called if the returned value is falsy or has no actionType.
You can see an example of how to use this functionality here:
http://jsfiddle.net/thekenwheeler/32hgqsxt/
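On the calling side, that means a component can chain off the returned promise. A minimal sketch (the handlers are illustrative):

```
// The action returns a promise, so the caller can react to the outcome.
TodoActions.addTodo('Buy milk').then(function() {
  // The dispatch was made (the promise resolves with no parameters).
}, function() {
  // The action returned a falsy value, so nothing was dispatched.
});
```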
```
var McFly = require('mcfly');
var mcFly = new McFly();
```
```
/*
* @param {object} methods - Public methods for Store instance
* @param {function} callback - Callback method for Dispatcher dispatches
* @return {object} - Returns instance of Store
*/
```
```
/**
* @param {object} actions - Object with methods to create actions with
* @constructor
*/
```
| true | true | true |
Flux architecture made easy. Contribute to kenwheeler/mcfly development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2014-10-31 00:00:00
|
https://opengraph.githubassets.com/bfb1de90df1a3f53810f9df925d8f410b6bc73fb8ba77a973963531d9a9b86a9/kenwheeler/mcfly
|
object
|
github.com
|
GitHub
| null | null |
38,575,027 |
https://www.bbc.co.uk/news/science-environment-67642374
|
Tyrannosaur’s last meal was two baby dinosaurs
|
Victoria Gill
|
# Tyrannosaur’s last meal was two baby dinosaurs
- Published
**The last meal of a 75-million-year-old tyrannosaur has been revealed by scientists - two baby dinosaurs.**
Researchers say the preservation of the animal - and of the small, unfortunate creatures it ate - shines new light on how these predators lived.
It is "solid evidence that tyrannosaurs drastically changed their diet as they grew up," said Dr Darla Zelenitsky, from the University of Calgary.
The specimen is a juvenile gorgosaurus - a close cousin of the giant T. rex.
This particular gorgosaur was about seven years old - equivalent to a teenager in terms of its development. It weighed about 330kg when it died - about a tenth of the weight of a fully-grown adult.
The hind limbs of two, small bird-like dinosaurs called citipes are visible beneath its ribcage.
"We now know that these teenage [tyrannosaurs] hunted small, young dinosaurs," said Dr Zelenitsky, one of the lead scientists in this study, which is published in the journal Science Advances, external.
An array of earlier fossil evidence, including evident bite marks on the bones of larger dinosaurs that match tyrannosaur teeth, have allowed scientists to build a picture of how the three-tonne adult gorgosaurs attacked and ate very large plant-eating dinosaurs which lived in herds.
Dr Francois Therrien, from the Royal Tyrell Museum of Palaeontology, described these adult tyrannosaurs as "quite indiscriminate eaters". They probably pounced on large prey, "biting through bone and scraping off flesh," he told BBC News.
But, Dr Zelenitsky added, "these smaller, immature tyrannosaurs were probably not ready to jump into a group of horned dinosaurs, where the adults weighed thousands of kilograms".
## 'Toes poking through the ribcage'
The fossil was originally discovered in the Alberta Badlands in 2009 - a hotspot for dinosaur hunters.
Entombed in rock, it took years to prepare and it wasn't immediately obvious that there was prey inside. Staff at Alberta's Royal Tyrell Museum of Palaeontology eventually noticed small toe-bones sticking out from the ribcage.
"The rock within the ribcage was removed to expose what was hidden inside," explained Dr Therrien, who is the other lead scientist in this study. "And lo and behold - the complete hind legs of two baby dinosaurs, both under a year old."
Dr Zelenitsky said that finding only the legs suggested that this teenage gorgosaurus "seems to have wanted the drumsticks - probably because that's the meatiest part".
The gorgosaurus is a slightly smaller, more ancient species than T. rex. Fully grown, these were - as Dr Therrien put it - "big, burly tyrannosaurs".
They transformed as they matured. "Juveniles were much more lightly built - with longer legs and very blade-like teeth," he explained. "Adults' teeth are all much rounder - we call them 'killer bananas'.
"This specimen is unique - it's physical proof of the juveniles' very different feeding strategy.
While the adults bit and scraped with their powerful "killer banana" teeth, "this animal was selecting and even dissecting its prey - biting off the legs and swallowing them whole".
Prof Steve Brusatte, a palaeontologist from the University of Edinburgh and the National Museum of Scotland, said that seeing prey in the dinosaur's guts gave a real insight into the animals: "They weren't just monsters, they were real, living things and pretty sophisticated feeders."
Recalling a depiction of T. rex in the 1993 film Jurassic Park - where the giant dinosaur chased a car through the fictional theme park - Prof Brusatte added: "A big, adult T. rex wouldn't have chased after a car - if cars or jeeps were around back then - its body was too big, and it couldn't move that fast.
"It would be the youngsters - [like this gorgosaur] - the children of T. rex that you'd have to keep an eye on."
## Related topics
- Published25 January 2021
- Published22 February 2022
| true | true | true |
Remains of baby dinosaurs inside another dinosaur reveal what a young predator ate 75m years ago.
|
2024-10-12 00:00:00
|
2023-12-08 00:00:00
|
article
|
bbc.com
|
BBC News
| null | null |
|
36,011,119 |
https://menu-by-ai-heroku-2.herokuapp.com
|
Streamlit
| null |
You need to enable JavaScript to run this app.
| true | true | true | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
24,079,170 |
http://www.rebol.com/docs/shell.html
|
REBOL Shell Interface
| null |
# REBOL Shell Interface
## Contents
The Call Function
Basic Syntax
Refinements
Other Shell Details
Waiting for Commands
Redirection
Redirecting Input
Redirecting Output and Errors
Combining Redirection Refinements
Redirecting to the REBOL Console
## Overview
This chapter describes how to use the **REBOL Shell Interface** to execute programs and shell commands and redirect command input and output.
A shell is an operating environment. It enables users to communicate with, execute, and control programs and the operating system. Shells vary widely depending on the operating system.
The Shell Interface enables REBOL to communicate with shell programs and the operating system. The Shell Interface uses the CALL function to execute built-in commands or programs. These built-in commands and programs are also known as shell commands. (In this chapter, the term shell command refers to both built-in commands and programs.)
With the Shell Interface, in Unix and Windows, input to a shell command can be redirected from a file, URL, port, string value, or the REBOL console. Both normal and error output from a shell command can be redirected to a file, URL, port, string value, or the REBOL console.
## The Call Function
### Basic Syntax
Use the CALL function to execute a shell command. The syntax of the CALL function is shown in the following example:
call argument
The CALL function accepts one argument, which can be a string or a block specifying a shell command and its arguments.
The following example shows a string as the CALL argument.
call "cp source.txt dest.txt"
In this example, CALL executes the Unix file copy command, cp, and passes two file arguments, source.txt and dest.txt. When CALL is evaluated, the string is translated into a shell command that calls cp and passes the two arguments. The result is the source.txt file being copied to the dest.txt file.
Use a block argument with CALL when you want to include REBOL values in the call to a shell command, as shown in the following example:
source: %source.txt
dest: %dest.txt
call reduce ["cp" source dest]
In this example, the cp command is called using a block value to pass the arguments source.txt and dest.txt. The block must be reduced for the CALL function to obtain the file names referred to by the source and dest variables.
The CALL function translates the file names in a block to the notation used by the shell. For example, the argument
[dir %/c/windows]
will be translated to:
{dir "c:\windows"}
for the shell call.
### Refinements
The following refinements can be used with the CALL function.
Refinement | Argument Datatypes | Description
---|---|---
wait | (no argument) | Causes CALL to wait for a shell command's return code. The CALL function then returns the shell command's return code to the REBOL program.
input | string, file name, port, URL, or none | Windows and Unix only: redirects string, file, port, and URL data as input to a shell command.
output | string, file name, port, URL, or none | Windows and Unix only: redirects shell command output to a string, file, port, or URL.
shell | (no argument) | Windows only: forces the use of a shell. (In Windows, programs started with CALL are not normally run from a shell, but are launched directly by the operating system.)
error | string, file name, port, URL, or none | Windows and Unix only: redirects shell command errors to a string, file, port, or URL. See Redirecting Output and Errors.
console | (no argument) | Windows and Unix only: redirects all input and output of a shell command to the REBOL console. This enables the console to directly interact with a shell command. See Redirecting to the REBOL Console.
### Other Shell Details
When REBOL uses a shell to execute a shell command, shell features can be used in the arguments passed by the CALL function. Shell features include redirection and expansion symbols. These symbols are the most common:
- `<` standard input (stdin)
- `>` standard output (stdout)
- `|` pipe output to another command
- `*` wildcard to match zero or more characters
- `?` match any single character
In Unix, the REBOL language calls all commands using a shell. The most common Unix shells are sh (Bourne Shell), csh (C Shell), ksh (Korn Shell), and bash.
The Unix SHELL environment variable determines which shell REBOL uses to execute a command. By default, the environment variable is set to the path of the shell currently in use. For example, if REBOL is started from the C shell, the SHELL environment variable is set to the path for the C shell. Because all Unix shells support the -c option, which causes the shell to exit after a command executes, REBOL automatically uses this option when calling a Unix program.
REBOL normally calls Windows programs directly, bypassing the shell. Consequently, Windows shell features, such as redirection and expansion symbols, cannot be used in the arguments passed by the CALL command. However, when executing built-in Windows shell commands or DOS programs, REBOL uses the shell. When this happens, REBOL calls the command shell command.com (Windows) or cmd.exe (Windows NT). When calling a Windows shell command, REBOL automatically uses the /c switch, which causes the shell to exit after executing a command.
## Waiting for Commands
When shell commands are called, they normally run as a separate process in parallel with REBOL. They are asynchronous to REBOL.
However, there are times when you want to wait for a shell command to finish, such as when you are executing multiple shell commands. In addition, every shell command has a return code, which normally indicates the success or failure of the command. Typically, a shell command returns zero when it is successful and a non-zero value when it is unsuccessful.
The /wait refinement causes the CALL function to wait for a command's return code and return it to the REBOL program. You can then use the return code to verify that a command executed successfully, as shown in the following example:
if zero? call/wait "dir" [ print "worked" ]
In the above example, CALL successfully executes the Windows dir command, which is indicated by the zero return value. However, in the next example, CALL is unsuccessful at executing the xcopy command, which is indicated by the return value other than zero.
if not zero? code: call/wait "xcopy" [ print ["failed:" code] ]
When you use the /input, /output, /error, or /console refinements you automatically set the /wait refinement.
## Redirection
In Windows and Unix, input to a shell command can be redirected from a file, URL, string, or port. By default, a shell command's output and errors are ignored by REBOL. However, shell command output and errors can be redirected to a file, URL, port, string, or the REBOL console.
This section describes how to redirect input, output, and errors.
In AmigaOS and BeOS input/output redirection between REBOL and shell commands is not supported. Instead temporary files have to be created which can then be piped into shell commands:
write %/tmp/infile inputdata
call/wait "sort < /tmp/infile > /tmp/outfile"
print read %/tmp/outfile
### Redirecting Input
In Windows and Unix, data from a file, URL, port, or string can redirected as input to a shell command using the /input refinement with CALL. The refinement argument must contain the name of the file, URL, port, or REBOL string whose data will be passed to the shell command.
The following examples show shell command input redirected from files, URLs, ports, and REBOL strings.
From a file:
call/input {grep "Name:"} %data
In this example, the %data file is redirected as input to the Unix grep command.
From a URL:
call/input {grep "REBOL"} http://data.rebol.com/
In this example, the REBOL URL is redirected as input to the Unix grep command.
From a port:
open-port: open %data
call/input "sort" open-port
In this example, data from open-port is redirected as input to the Unix or Windows sort command.
**NOTE: Data from ports opened with the /direct refinement cannot be redirected to a shell command.**
From a string:
str: "this is a string of text" call/input "wc" str
In this example, the string str is redirected as input to the Unix wc command.
### Redirecting Output and Errors
In Windows and Unix, shell command output and errors can be redirected to a file, URL, port, or REBOL string using the /output and /error refinements with CALL. The refinement argument must contain the name of the file, URL, port, or REBOL string to receive the output or error. When redirected to a file, port, or URL, output overwrites any existing data. When redirected to a REBOL port or string, output is inserted at the current index position.
The following examples show shell command output and errors redirected to files, URLs, ports, and REBOL strings.
To a file:
call/output "dir /a /b *.r" %output.txt print read %output.txt feedback.r history.r nntp.r rebol.r user.r call/error "cp" %error.txt print read %error.txt cp: missing file arguments Try `cp --help' for more information.
To a URL:
call/output "ls *.r" ftp://your-ftp.com
To a port:
port: open/direct/binary %output.txt
call/output "dir /a /b *.r" port
close port
print read %output.txt
feedback.r
nntp.r
history.r
rebol.r
user.r
To a REBOL string:
err: copy "" call/error "cp source.txt" err print err cp: missing destination file Try `cp --help' for more information.
### Combining Redirection Refinements
The redirection refinements can be combined in a single call function. When more than one refinement is used in a single call function, the arguments must be presented in the same order as the corresponding refinements.
input-str: "data" output-str: copy "" error-str: copy "" call/input/output/error "sort" input-str output-str error-str
The following example uses the Unix wc command to count the lines in a web page and save the data to the file called line-count.txt:
call/input/output "wc -l" http://data.rebol.com/ %line-count.txt print read %line-count.txt 247
### Redirecting to the REBOL Console
In Windows and Unix the /console refinement makes the REBOL console the interface to a shell and allows the shell to take control of the REBOL console while a shell command is being executed. Shell command output and errors are printed to the console, and the console can be used to send input directly to the command.
To prevent either a command's output or errors from printing on the console, use the /output or the /error refinement with NONE as the argument. To prevent input to a command from being entered in the console, use the /input refinement with NONE as the argument.
The following example shows the output of a command being redirected to the REBOL console:
call/console "format a:" Insert new diskette for drive A: and press ENTER when ready... Checking existing disk format. Formatting 1.44M Format complete. Volume label (11 characters, ENTER for none)? New_Disk 1,457,664 bytes total disk space 1,457,664 bytes available on disk 512 bytes in each allocation unit. 2,847 allocation units available. Volume Serial Number is 1E27-16DA Format another (Y/N)? N
| true | true | true | null |
2024-10-12 00:00:00
|
2024-03-01 00:00:00
| null | null | null | null | null | null |
11,813,370 |
https://github.com/BurntSushi/pdoc
|
GitHub - mitmproxy/pdoc: API Documentation for Python Projects
|
Mitmproxy
|
API Documentation for Python Projects.
`pdoc -o ./html pdoc` generates this website: pdoc.dev/docs.
`pip install pdoc`
pdoc is compatible with Python 3.9 and newer.
```
pdoc your_python_module
# or
pdoc ./my_project.py
```
Run `pdoc pdoc` to see pdoc's own documentation, run `pdoc --help` to view the command line flags, or check our hosted copy of the documentation.
pdoc's main feature is a focus on simplicity: pdoc aims to do one thing and do it well.
- Documentation is plain Markdown.
- First-class support for type annotations and all other modern Python 3 features.
- Builtin web server with live reloading.
- Customizable HTML templates.
- Understands numpydoc and Google-style docstrings.
- Standalone HTML output without additional dependencies.
Under the hood...

- `pdoc` will automatically link identifiers in your docstrings to their corresponding documentation.
- `pdoc` respects your `__all__` variable when present.
- `pdoc` will traverse the abstract syntax tree to extract type annotations and docstrings from constructors as well.
- `pdoc` will automatically try to resolve type annotation string literals as forward references.
- `pdoc` will use inheritance to resolve type annotations and docstrings for class members.
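To make those behaviours concrete, here is a minimal, hypothetical module (the file name `shapes.py` and everything in it are invented for illustration, not taken from pdoc's own docs) that uses an `__all__` list, constructor docstrings, type annotations, and a string-literal forward reference:

```
"""Utilities for working with simple shapes.

The main entry point is `Circle`; see also `area_of_circle`.
"""
import math

__all__ = ["Circle", "area_of_circle"]  # pdoc documents only the names listed here


class Circle:
    """A circle defined by its radius."""

    def __init__(self, radius: float):
        """Create a circle; pdoc extracts this docstring and the `radius` annotation."""
        self.radius = radius


def area_of_circle(circle: "Circle") -> float:
    """Return the area of `circle`; the string annotation resolves as a forward reference."""
    return math.pi * circle.radius ** 2
```

Running `pdoc shapes` (or `pdoc ./shapes.py`) should then serve documentation in which `Circle` and `area_of_circle` are cross-linked wherever they appear in the docstrings.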
If you have substantially more complex documentation needs, we recommend using Sphinx!
As an open source project, pdoc welcomes contributions of all forms.
This project is not associated with "pdoc3", which often falsely assumes our name. Quoting @BurntSushi, the original author of pdoc:
I'm pretty disgusted that someone has taken a project I built, relicensed it, attempted to erase its entry on the Python Wiki, released it under effectively the same name and, worst of all, associated it with Nazi symbols.
Source: pdoc3/pdoc#64
In contrast, the pdoc project strives to uphold a healthy community where everyone is treated with respect. Everyone is welcome to contribute as long as they adhere to basic civility. We expressly distance ourselves from the use of Nazi symbols and ideology.
The pdoc project was originally created by Andrew Gallant and is currently maintained by Maximilian Hils.
| true | true | true |
API Documentation for Python Projects. Contribute to mitmproxy/pdoc development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2013-08-04 00:00:00
|
https://repository-images.githubusercontent.com/11885132/6cc3e400-6f0c-11eb-89ec-62794706b9b0
|
object
|
github.com
|
GitHub
| null | null |
7,743,979 |
http://9to5mac.com/2014/05/14/likely-iphone-6-with-sharper-larger-1704-x-960-resolution-screen-in-testing/
|
iPhone 6 with larger, sharper 1704 x 960 resolution screen in testing - 9to5Mac
|
Mark Gurman
|
Apple is preparing to release a new iPhone with a larger screen later this year, and while multiple reports have indicated that the screen will be larger, the exact dimensions of the screen and its resolution have so far been guesswork.
Some industry watchers have speculated that Apple could stretch the iPhone software’s interface and retain the iPhone 5s’s screen resolution of 1136 x 640. This approach would allow all iOS software and App Store apps to function normally on the iPhone 6 without work from developers. The downside of this approach would be that the iPhone 6’s display would fall below Steve Jobs’ somewhat arbitrary 300 pixels per inch definition of ‘Retina’ for a phone.
Just like with the transition to the iPhone 4’s Retina display in 2010 and the transition to the iPhone 5’s taller screen in 2012, Apple is preparing major resolution changes for the iPhone 6 that will require software changes by both Apple and developers, according to people briefed on the specifications of the new device…
**History of screen changes:**
Before discussing the resolution and scaling changes for the next-generation iPhone, it is important to understand the history of the iPhone’s screen. Back in 2007, Apple introduced the original iPhone with a display with a resolution of 320 x 480. That is 320 pixels horizontally and 480 pixels vertically with a diagonal screen size of 3.5-inches. Apple retained this screen with the succeeding iPhone 3G and iPhone 3GS.
In 2010, Steve Jobs introduced the iPhone 4 with Retina display.
The iPhone 4’s screen was much sharper than its predecessors’ displays, but the actual screen size was exactly the same as the previous iPhones. To create this effect, Apple quadrupled the number of pixels in the display panel and doubled the pixel density of the graphics across iOS in each direction in order to create a sharper screen with the same physical button sizes across the system. The new resolution was 640 x 960, which is double the prior iPhone’s resolution on both axes.
The move from the iPhone 3GS to the iPhone 4 doubled the iPhone screen’s pixel density from 163 PPI to 326 PPI, and Apple claims that a display density over 300 PPI is considered “Retina” quality. With the move to the Retina display, iOS automatically rendered text and core system elements at the new “2X” resolution, but both Apple and third-party App Store developers were required to redesign all of their graphics in order for the images to appear sharp. Otherwise, the smaller images would render at two times their actual size, which would cause a pixelated effect.
In late-2012 with the iPhone 5, Apple enlarged the iPhone’s panel to 4-inches diagonally.
Apple retained the iPhone 4’s horizontal resolution of 640, but it increased the height of the iPhone 5 to 1136 pixels on the vertical axis, a story we broke almost exactly two years ago this month. The same 2X scaling mode from the iPhone 4’s Retina display technology was retained as was the density of 326 pixels-per-inch.
From a developer’s perspective, the current iPhone 5/5s/5c display has a resolution of 568 x 320, up from 480 x 320 in the original iPhone. However, there are actually twice as many pixels in each direction to create a sharper image. In other words, an iPhone 5s with a non-Retina (or “1X”) display would have an actual resolution of 568 x 320 (which is the 1136 x 640 resolution divided by 2). We’ll call this the “base resolution” of the iPhone 5/5s/5c.
**3X mode:**
Fast forward to 2014, and Apple is preparing to make another significant screen adjustment to the iPhone. Instead of retaining the current resolution, sources familiar with the testing of at least one next-generation iPhone model say that Apple plans to scale the next iPhone display with a **pixel-tripling (3X) mode**.
This means that Apple will likely be tripling the aforementioned “base resolution” (568 x 320) of the iPhone screen in both directions, and that the iPhone screen resolution will be scaled with an increase of 150% from the current 2X resolution of 1136 x 640. Of course, Apple tests several different iPhones and display technologies, so it is possible that Apple chooses to take another route for display specifications for the 2014 iPhone upgrade.
**1704 likely the new 1136:**
568 tripled is **1704** and 320 tripled is **960**, and sources indicate that Apple is testing a **1704 x 960 resolution display** for the iPhone 6. Tripling the iPhone 5’s base resolution would mean that the iPhone 6’s screen will retain the same 16:9 aspect ratio as the iPhone 5, iPhone 5s, and iPhone 5c.
Previously leaked iPhone 6 schematics from Foxconn indicate that the display will remain 16:9, and this further lends credence to the aforementioned resolution being in testing. This image shows a 16:9 iPhone screenshot overlaid onto the schematics:
Based on the new resolution and evidence from leaked iPhone 6 parts, it seems like the new iPhone’s display will be both taller and slightly wider. This is in comparison to the iPhone 4’s screen size not changing during the transition to Retina and 2X mode and the iPhone 5’s transition to the taller, but not wider, 4-inch screen.
**Denser screens:**
While the new iPhone’s resolution is certainly higher, the screen’s overall sharpness is based on the screen’s pixel density. The two most commonly discussed diagonal screen sizes for the next iPhone are 4.7-inches and 5.5-inches. Here’s what the pixel densities would be for both of those screen sizes assuming each uses the new 3X scaling mode and 1704 x 960 resolution:
A 4.7-inch diagonal would feature a display density of **416 PPI**.
A 5.5-inch diagonal would feature a display density of **356 PPI**.
So, regardless of whether Apple goes with a 4.7-inch or 5.5-inch panel—or both—the new iPhone(s) will have significantly more dense screens in comparison to current and past iPhones, which will result in crisper text, images, and video for users of the next-generation Apple smartphones. Also, by definition, both screens will have pixel densities that fit comfortably within Apple’s threshold for a “Retina” display.
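As a rough sanity check on those figures (a back-of-the-envelope sketch, not anything sourced from Apple or the report): pixel density is simply the diagonal pixel count divided by the diagonal size in inches.

```
import math

def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixels per inch of a panel with the given resolution and diagonal size."""
    diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)
    return diagonal_px / diagonal_inches

# 1704 x 960 is triple the 568 x 320 "base resolution" discussed above.
print(round(ppi(1704, 960, 4.7)))  # -> 416
print(round(ppi(1704, 960, 5.5)))  # -> 356
```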
**Larger iOS interface:**
The larger and denser next-generation iPhone display means that iOS’s user-interface will become slightly larger and sharper unless Apple re-architects the layout of iOS to become more optimized for the new screen. The mockup images created by *9to5*‘s Michael Steeber above and below demonstrate raw pixel scale between the iPhone 5/5s/5c’s 640 x 1136 resolution display and the iPhone 6’s probable 960 x 1704 resolution screen.
The sources say that core user interface elements, from iOS functions like the Home screen, Notification Center, and Settings panels, will simply appear like larger versions of those functions on the current iPhone display. However, sources also say it is likely that developers and Apple itself will be able to optimize some applications to better utilize the larger screen area. It is possible Apple could revamp the Home screen and other functions between now and this fall’s launch.
For example, it would make sense for Apple to allow for applications like Safari and Maps to better take advantage of the new screen space. Game developers may choose to reposition their controls for an improved gaming experience. This screen size shift varies from the transition from the iPhone 4/4S to the iPhone 5/5s/5c, which simply entailed making the screen taller. This allowed Apple to show more new messages in Mail and add another row of Home screen icons.
**iOS Simulator:**
To test how iOS 7 would react to the 3X resolution mode for the next iPhone model, prominent developer Steven Troughton-Smith modified the publicly available iOS Simulator application used for App Store software development. The screenshots displayed in the above mockups are sourced from that modified simulator. As can be seen in the images above, core system functions like Spotlight and the Home screen scale fairly well automatically, but applications like Calendar will need some new graphics under the hood. Apple is said to be (unsurprisingly) working on optimizing all graphics across iOS to fit the new 3X mode, so customers will experience sharp graphics across the new device. If you click the above images, you can download them in the full 1704 x 960 resolution.
**Developer transition:**
Back in 2010 when Apple began the transition from the standard iPhone display to the Retina display, iOS’s user-interface aesthetic was driven by Steve Jobs’ and Scott Forstall’s taste in “skeuomorphic” design. This meant a casino and green felt-themed Game Center app, a yellow legal pad-styled Notes app, and a Calendar app bound by faux leather. Now, in 2014, Apple’s iOS design is led by Jony Ive, and features more prominent content, clear text, vector graphics, and animations.
According to developers, the new iOS design aesthetic’s reduced reliance on “raster” graphics means that apps can more easily be updated (or even be automatically updated) for the next iPhone’s denser, larger display panel. To prove this, Troughton-Smith tested two of his iPhone applications on the modified iPhone simulator to see how they would run. Speed (shown right) seems to run almost perfectly without any changes. Grace (shown left), another app of Troughton-Smith’s, also worked on the new display without any updates.
But, of course, not all apps will automatically run with a sharp look on the iPhone 6. According to sources familiar with the new iPhone displays in testing, if an unoptimized iPhone 5 app is run on the iPhone 6, the app will fill the entire screen but the non-3X images within the app will be blurrier. Troughton-Smith’s applications scale well because they were built with vector graphics. This transition from 2X to 3X will be reminiscent to the transition from 1X to 2X when the first iPhones with Retina displays launched in 2010.
**Multi-resolution tools: **
Apple has been working on a new “multi-resolution” mode and developer toolset for future iOS devices that allows developers to more easily scale their applications to work on multiple new iOS device resolutions. It is likely that developers will be provided with these tools later this year so that they can begin work on optimizing apps (if even necessary) for the new iPhone’s display. Apple typically does not preview new hardware features months in advance, so Apple is unlikely to reveal the larger iPhone display size at WWDC early next month.
**Stronger chip:**
Sources also say that Apple has developed a new A8 system-on-a-chip for the next iPhone that focuses on marginal speed improvements rather than core architectural changes, but adds significant performance and efficiency enhancements in order to improve the iPhone’s battery life. With a larger, higher-resolution display combined with the next iPhone’s far thinner body, the A8 chip will be essential to maintaining the seamless, fluid iPhone experience that Apple prides itself on. Besides a new processor, it is likely that the new iPhone will include improved LTE components for voice-over-LTE support and various other new hardware elements.
**Future Apple devices:**
Apple’s new hardware- and software-based display technologies likely opens up the door for a flurry of future, higher-resolution iOS devices.
Noted analyst Ming-Chi Kuo previously indicated that Apple is working on a sharper full-size iPad, so perhaps in the same way that Apple transitioned the iPhone to 2X before the iPad, the new 3X iPhone resolution will be a precursor to improved iPad displays.
Apple is also on working on some secret new iOS device form factors, high-resolution external monitors, and wearable displays, so perhaps these new technologies play into those future hardware products as well.
**iOS 8 features and WWDC:**
Before introducing fresh iPhone hardware later this year, Apple will use the WWDC stage in June to discuss the next iPhone’s operating system, iOS 8. In addition to building in support for the new hardware, the new operating system features a slew of enhancements, applications, and refinements.
New applications in iOS 8 likely include a Healthbook app for tracking health and fitness data, an enhanced Maps app with public transit directions support, and a standalone iTunes Radio app. Other enhancements, including a new split-screen multitasking view for the iPad, will round-out the feature-set.
Sources say that Apple is seriously considering pushing back some of iOS 8’s functionality to iOS 8.1 or even later versions of iOS, including the transit mode in the Maps app.
Regardless of what Apple announces, we’ll be covering WWDC extensively and looking out for more clues to what features the next iPhone will include. The latest reports indicate that Apple is preparing to launch a 4.7-inch iPhone model in August and a larger, “phablet” model with a 5.5-inch screen in September.
*FTC: We use income earning auto affiliate links.* More.
Nice “iPhone 6” mock up at the top there, with the bezel wider on the left than the right…
“the iPhone screen resolution will be scaled with an increase of 150% from the current 2X resolution of 1136 x 640.”
You mean “an increase of 50% from the current 2X resolution of 1136 x 640.” Don’t you, Mark?
You mean the iPhone 6 still won’t produce video in 1080p??? Weak.
They give you all that facts and figures and that’s all you have to say at the end? Lol, nicely pointed out though.
I have found another mock up, but in 3D WebGL, so you can make it turn with the mouse (also specs). If you want a better idea of how the iPhone 6 will look, here it is:
http://versus.com/es/apple-iphone-5s-vs-apple-iphone-6
They really need to do something with the missed notifications column. Completely useless. Widgets?
It seems likely that might make an appearance in my opinion. iOS 7’s private widget API is very well designed and has some unfinished bits such as sandboxing.
The “Missed” column is removed in iOS 8. At least for now it is.
Things are heating up, as we get closer to the official unveiling around September.
Mark is on a roll. I wonder what else he is holding back from us…
bezel please go.
I agree 100%…The bezel on the sides needs to go!! Apple has an opportunity here to have the first phone with no side bezel. The rendering they are showing here looks simply like a larger version of the iPhone 5s from the front……unimaginative, boring!! We want the iPhone 6 to really break new ground in phone design, not just look like a larger version of the 5s. I hope these renderings are not correct, and that the REAL iPhone 6 dazzles us with fresh new design!
It’s not exactly a choice to have bezels. Apple has a great engineering team but I don’t know whether or not that’s possible.
Very nice article! Rarely do I read an entire article or take the time to comment but recently I have become annoyed with bgr.com and their headline tactics to always include “HUGE LEAK, THE BIGGEST LEAK YET!”
It’s good to see a well thought out in depth article over here.
Happy to have you over here.
1920×1080 and DPI, let the comments begin! WAR!
1917×1080 would require no changes to apps at all. So just go to that with ‘normal’ screens and lop off 2 lines on bottom and 1 on top, job done.
So does this mean that running with the iPhone 5/5s/5c's resolution of 640 x 1136 was a mistake? I mean, it's not as though the technology wasn't ready… Android vendors have been peddling screens this size for 3+ years
Come to think of it, the iPhone 4/4s 640 / 960 resolution was only around for two years also, perhaps that’s the new trend. A biennial screen resolution bump?
7.6″ screen on the iPhone 7S in 2016!! = p
How on Earth do you come to that conclusion?
An aspect ratio of 71:40 (yes, these are the actual numbers) seems like a very weak copy of the 16:9 aspect ratio, which btw was avoided by Apple for many years in favor of the more functional 16:10 aspect ratio.
Tripling the base size seems like another desperate attempt at copying Android devices.
Don't get me wrong, I've been an Apple user for a long time but I'm getting more disappointed every year :(
Several reactions:
1) I’m still holding out hope for 1080p, at least for the 5.5″ model. At 5.5″, it becomes a very viable platform for watching video, and having that be as crisp as possible would be lovely (and would probably sell a bunch more of the max-storage-size version).
2) I hope that they don’t simply move to the 3x size while keeping the base resolution at 568 x 320. Having a larger, higher-resolution screen that can’t display any more information than the current 4″ model would be pointless in the extreme.
3) If you’re going to make legacy apps be fuzzier/more pixellated, scaling older apps from 2x the base resolution to 3x the base resolution isn’t any better than (less fuzzy than) scaling them to any other 16×9 resolution, like 1080P. And at the PPI values we’re discussing, that difference would be pretty subtle anyway.
4) 1080P screens are quickly becoming an industry standard, at whatever size. By moving to that resolution, Apple would greatly increase their supply chain flexibility.
5) Continuing to give developers the crutch of accommodating fixed resolutions based on the quarter-VGA choice for the original iPhone is ultimately counterproductive. Apple needs to give themselves more flexibility in iOS resolutions, and needs to be pushing their developers to write code to accommodate that. As both Android and MacOS support lots of different resolutions, this should be something that is well within most of their developers’ skill sets.
1) I don't think there would be a naked-eye-visible difference between 960×1704 and 1080p.
2) The “base resolution” thing is just a software units issue. It doesn’t limit anything because the graphic coordinates are floating-point values in iOS.
3) I'm not sure I completely understand what you say there, but scaling up by half an integer factor (like 1.5 in this case) is second best to an integer factor (as was done for the 3GS->4 transition): You get realignment every two rows/columns, and the anti-aliasing on the other rows/columns is as straightforward as it gets. (You're right that at these resolutions the difference is subtle. The trickiest issue is dealing with rendered curves that were finely tuned to pixel-match at some resolution: They might no longer quite match when scaling with non-integers, though the half-integer scaling suffers less from that effect.)
4) With the quantities Apple orders this is not actually an issue, even if they spread supply over 3 or 4 providers.
5) Maybe, but the nearly-fixed-resolution model encourages pixel-perfect fixed-layout software design, which does appear to result in nicer software offerings than for platforms that assume dynamic layout is the norm.
It’s true that there isn’t a huge difference between 1080 vertical pixels and 960 vertical pixels and that it will be very difficult for the naked eye to tell the difference. But it will demand resampling 1080P video from 1920×1080 to 1704×960. That resampling will not be a perfect process and is bound to have some artifacts. The resampling issue is a much bigger problem than the slightly lower resolution.
1) I’m still holding out hope for 1080p, at least for the 5.5″ model. At 5.5″, it becomes a very viable platform for watching video, and having that be as crisp as possible would be lovely (and would probably sell a bunch more of the max-storage-size version).
There is no point in having 1080p at this size, as your eye would be completely unable to see the difference. It would simply slow the thing down, increase bandwidth requirements, and so on. As Steve Jobs said, you cannot see a difference in resolution above 300 dpi. In other words, 600 dpi and 300 dpi look exactly the same, but 600 dpi requires four times the data. A complete waste.
If you want a higher quality display, then bit depth (tonal range) and color space are the areas to seek improvements.
“As Steve Jobs said, you cannot see a difference in resolution above 300 dpi.”
Actually, as with many things that are part of the Jobsian Reality Distortion Field, that’s not exactly accurate. See this excellent explanation from a couple of years back: http://www.cultofmac.com/173702/why-retina-isnt-enough-feature/ The short version: That article compellingly argues that [a] Apple’s “Retina” calculation figures on 20/20 eyesight, which most people with normal vision don’t degrade to until they’re in their 60s and [b] that a true retina display would be more like 950PPI when held at the distance of a phone.
So, the 400 PPI of a 1080P 5.5″ screen would still be very definitely a usable improvement. Also, having to interpolate 1080P video to a slightly-lower resolution would add a lot of fuzziness that wouldn’t otherwise be there.
Hi,
Great point about interpolation. Yes, it would have been better if they had started with multiples of 1080p from the beginning.
But in terms of absolute resolution, I stand by my original statement. I am a professional cinematographer, I shoot and look at professional screens all day long. 720p looks great at least up to a 10″ monitor, and in fact most 17″ monitors do not exceed this resolution. 1080p comes in when you go bigger.
Even if you could see a slightly sharper picture, it would not affect your appreciation of the image. But color and tonal depth make a huge difference. Video standards in the future are heading in exactly that direction.
Apple's 'retina' assertions were proved wrong by scientists (as opposed to Apple marketers) several times over in several published articles. This is simply not true. You would very easily be able to distinguish, let's say, a 1080P resolution vs. a 720P resolution, especially at 5.5″.
Have you looked at a moving video image at both resolutions, at a normal viewing distance? You would not be able to distinguish the difference. I doubt you could even see a difference in a high resolution still photograph. And if you could, it would not make any difference in your experience of the photograph.
What you will experience is contrast, color, bit depth. I am talking from my own experience, not from some article–that you haven’t specifically referenced. I am on a panel that evaluates monitors for future television standards. A 2K monitor with superior bit depth and contrast looks better than a 4K monitor without those qualities. So it’s not so simple an issue.
I do see pixels on my iPhone 5 if I take it a little closer to my face. Not a big deal, but there’s room for improvement. I guess next iPhone will go beyond 400dpi
At 4 inches away from my eyes I can see the Atoms fly by, and at 2 1/2 inches from my eyes I can count the electrons orbiting the atoms. It’s very annoying. I wish they would fix that.
Apple is annoying sometimes, really…. They tend to make things sooooo complicated and just odd. Why not just use a normal, industry-standard screen resolution like 1080P? Sure, I understand that they put themselves in this corner by using non-standard resolutions to start with, but why not take the opportunity afforded by the screen size change to start using real standards?
To make it even worse, this oddity is extremely “old school” in computing standards. Even Android and WP8 are able to scale in real time to any size screen. That the iPhone was never made to do that, even though OSx and Windows both already had this and had it for a very long time, just shows that they were trying too hard to match very old methods of programming and simplicity just to get software written for the iPhone. It is more complicated to write software to work in non-raster environments unfortunately. Though modern tools continue to make it easier every day.
When has apple ever stuck to “standards”?
Actually, Apple was one of the biggest forces behind the popularization of many of our modern computing standards like USB and the mouse… You've got it all wrong.
http://www.networkworld.com/community/node/58519
Apple also implemented the BlueTooth stack into OS X 10.2 which came out in 2002, while Windows didn’t until XP service pack 2 released in 2004.
Apple has a long history of championing standards in their computers. They have not held to industry standards with their mobile devices in a few places, notably screen resolution and data port/charging port (which is good because microUSB is a shit connector).
Because they know better than some spec whores on the Internet what power and processing demands equal in their products.
Following the “standards” isn’t always the best option. There is nothing about the exact pixel count of 1920×1080 that should make you sleep better at night. It just happens to be the resolution of SOME videos, others being 720p or some of the upcoming higher res videos or even 21:9 videos and so on.
But even if all videos were exactly 1080p, it would be bad practice to design the iPhones screen around just those 1080p videos since watching videos probably occupies somewhere south of 5% of the time you use your iPhone. It would be an android-like disaster if Apple didn’t have their extremely consistent pixel density across all iOS devices. Devs know exactly how to design with precision and consistency, their artwork has the exact same size on all devices (relative to your eye, as the full sized iPad has a different density because its held at a different distance from your eye).
1080p on 4.7″ would be a meaningless difference of 43.9% in density compared to iPads and earlier iPhones, and apps that arent optimised would look blurry and awful. If the 5.5″ version (I still hope the rumors about it are jokes) would have the same “standard 1080p resolution” that would be even worse, both for consumers and for developers.
Screen dimensions at this resolution:
4.7 inch diag
104 mm by 58.60 mm
5.5 inch diag
121.70 mm by 68.60 mm
I know anything Mark Gurman reports can be fairly trustworthy. However, I am just a tad disappointed that it sounds like iOS will just be scaled larger and sharper @3x. I was kind of hoping that the interface would have grown more expansive with the interface remaining the same size. I guess that could still be the case. Either way, I knew the resolution would either be a 3X increase or whatever resolution would have kept the same 326PPI so the interface would remain the same size but with more room on screen to add stuff.
App file sizes keep getting larger, probably in part because the images have to be larger and larger to keep up with the increase in screen size, but they haven’t scaled up the storage space. How is 8/16 GB sufficient as a base capacity anymore?!?!
Wonder if the company Pixelworks is behind the technology, any idea? They are releasing a mobile chip (it has been under development for a long time), and the rumor among a few in the stock market is that Apple recently paid them $10M in milestone payments. The payment was disclosed a couple of months ago in the end-of-year report (disclosure forced by SEC rules for a customer representing more than 10% of revenue).
Also revealed recently was that the former SVP of Apple mobile has been working with Pixelworks since 2012 (maybe around the time development started?) and has now joined the board of directors. Just trying to put 2 and 2 together.
Great informed article, excellent job!
I am excited for the 5.5 inch screen, I had a Galaxy Note II for a while to see how the other half lives and while I did not care for Android, having a screen that size is fantastic.
Question..how will websites be handled at that size? Will it launch the mobile version or go right to the desktop version?
This makes no sense. @3x resolution of 568×320 would KIND OF make sense on the current iPhone 5/5c/5s displays. UI elements would keep their current size (as they should because they’re based on thumb size). Even on an iPhone 5 display it would suck quite a bit, because @2x graphics would appear blurry as 2 pixels can’t be pixel doubled to 3 pixels (meaning @4x would be the next logical step).
But stretching what would be @3x on an iPhone 5 to a larger 4.7″ display would be a total disaster. UI elements will become bigger-than thumb sized, labels can’t display more characters despite the larger screen. And all those websites that are optimized for mobile by being scalable should receive more than 320 non-retina dots in width to automatically increase the column width of text.
The worst part of it all would be that an @3x iPhone 5 resolution on a 4.7″ device would be something of a @2.5x density compared to all retina iPhones thus far.
The only logical possible solution is upping the pixel count from 1136×640 to 1334×752, this way keeping the 328ppi of all retina iPhones and the exact size of all UI elements, while introducing new dot dimensions to scalable things like websites (667×376 as opposed to 568×320). This is the exact same way the iPad handles the extra screen real estate. (Note that the full size iPads have the same pixel density to the human eye as iPhones and iPad minis because they're meant to be held further away from your eyes.)
Let alone that those excessively high resolutions are the worst thing to happen to performance. Remember “the new iPad” (3rd gen)? Terribly slow and a total letdown after the iPad 2. I’ve played some of the most graphically intense games on 1080p android phones like the Nexus 5. Those games both look worse AND lag more compared to my 2 year old iPhone 5. Let alone expect infinity blade 3 graphics as seen on iPhone 5s on such resolutions.
I would think that the default mode for non-updated apps would be to scale them up to that new resolution, resulting in larger text and UI elements.
Apps that have been updated for the new resolution could adjust their UI to have normal sized text and UI elements and take advantage of the large amount of screen real estate.
Apple is either going to stretch existing apps or show them in a similar manner to how iPhone 4 apps were shown on an iPhone 5. Both options are awkward, but I believe Apple will choose the latter. It's bad if people get used to oversized UI and use normal sized UI simultaneously in other apps on the same device. Especially varying keyboards would be a disaster (ever used the iPhone keyboard on an iPad?). Furthermore, stretching would be ugly since apps can't be pixel doubled (it's a nasty 17.5% scaling). Leaving apps as they are, centered in the bigger display, would be the best design compromise, I believe.
thank you. The ‘We’ll call this the “base resolution” of’ is where it all falls apart. having a X2 on the iphone 5/s and a X3 on the new ‘must have’ units ruins it for devs and/or users. If Apple completely caves to ‘market growth is on bigger phones’ it would certainly show in a scheme like is described. This is not about making the best product, it’s about following the market and playing me too. That’s really really sad for anyone who’s been delighted with great products from apple in the past. They are not invested in making the best, they’re invested in quant metrics of an existing market…. Godspeed users….
300 DPI is not an arbitrary number. It’s the standard resolution for digital printing.
And the point at which pixels are invisible on a phone held at an average use distance
Nice design and performance
“the iPhone screen resolution will be scaled with an increase of 150% from the current 2X resolution of 1136 x 640.”
You mean “an increase of 50% from the current 2X resolution of 1136 x 640.” Don’t you, Mark?
Haha, i think you’re right… it’s a syntax issue. Correct me if I’m wrong but I think the following is true:
“SCALED UP TO 150% OF ORIGINAL” == “SCALED UP BY AN INCREASE OF 50% OVER ORIGINAL”
The difference in syntax/grammar is subtle but very meaningful in results. An “increase of 150% from the current” == 2840 X 1600.
It's nitpicking at its finest though, sorry Mark. :-)
Reminds me of the first iPod. The comments on here are so evidently PR posts from some foreign country. Reads like Yahoo movie reviews, “Best film this summer!” “Really liked it!”
Bezels way too big
Agreed!!!!! Lose the bezels!! This design is boring and dated!!!
I know…. I really hope these “leaks” aren’t really it. I want to stay with iPhone, but that HTC One M8 Mini 2 is catching my eye lately….
http://www.anandtech.com/show/8005/htc-one-mini-2-announcement
How about we talk about what MOBILE PHONE users really want. Make the thing water and shock-proof. If you really care about a larger screen, then get an iPad!
What people care about being water or “shock” proof? Just take better care of your devices.
CNET did a study and found that 1 in 5 phones that are accidentally broken are from drops in the toilet and a second study showed that 40% of people use their phones on the can; so I would say a lot of people care.
http://www.cnet.com/news/study-19-percent-of-people-drop-phones-down-toilet/
Again, just take better care of your devices.
That resolution gives 6″ screen 326 dpi which is Apple standard retina display. I don’t think it’s meant for iPhone 6 but a future product with 6″ screen…maybe iPad Nano? So there, Apple got it all: 6″, 8″ and 10″ iPad categories.
I really hope that my web app (sublevel.net) will work at 3x. Multiplying things by 3 is so weird. I think Apple won’t release something like this.
Why wouldn’t they double the retina resolution, i.e. 2272 x 1280, not triple the base resolution? If most other phones can achieve 1920 x 1080, this would be possible and more sensible. Retina apps at 1136 x 640 would work as is (just doubled). Developers would be happier too!
Nice article. But I don't think Apple will release anything inferior to 1080p. When the iPhone 5 came out very few smartphones had a higher-PPI screen. The iPhone 5s just followed the S trend. Now there are even 2K screens coming out, and I presume that the 2015 iPhone 6s will have the same screen as the 2014 iPhone 6. By that time customers would notice the big screen gap iPhones would have.
Maybe all these “leaks” are from Samsung…. LOL
The thing that I don't get is why they took such great care not to change the DPI and the number of pixels wide when increasing to 4″, but are now increasing the width without keeping the DPI the same. People are going to have to remake graphics. The ratio of width to pixels is going to change!
My free app for getting organized (one of the most popular apps out in the past month or two), PaperBox (gopaperbox.com), should work reasonably well since it uses minimal graphics, but can you imagine making a game and having to update all of the graphics? Yikes!
1. look at new BBK VIVO Xplay 3S : http://liaow.com/vivo-xplay-3s.php
2.. look at new iPhone screen
Choice is Yours :)
So it STILL doesn’t display in 1080p??? C’mon man!!!!
If one might argue that the PCs are dying because of the tablets, the assumption of these Phablets rising in popularity is not too far off. Samsung has it with their Note 3, there is this Sony Xperia Z2, the Nokia Lumia 1060, I think this trend of bigger cellular devices in form of ‘phablets’ are a quite good strategic move and I think every mobile devices maker should join this competition if wanting to keep their market shares intact. So, Apple launching the iPhone6 might give its most bitter rival Samsung some pressure!
I understand the need for developers to have an “easy” upgrade path when it comes to screen resolution, and for that purpose, 1704 x 960 is very reasonable. I also understand that users will probably not be able to see a difference between 1704 x 960 and possible higher resolutions… My concern is that If Apple wants to be THE premium mobile phone, it may need to think beyond this upgrade path and just define a new higher screen resolution, i.e. 1080p (1920×1080), just to say they have it. Even that resolution is not the latest and greatest – just look to the new LG G3 for the biggest/best screen out there now.
With all that being said – the Apple “ecosystem” of hardware, software, and services is probably more important to have than a huge high resolution screen. As for me, a bigger iPhone screen, regardless of its resolution, will be a huge improvement and I will definitely be upgrading from my iPhone 5 – because the experience is not just about the screen size. It would just be nice for Apple to be able to say that it has a screen closer to the LG G3’s resolution/size than to an iPhone 4 screen size. Apple needs to get more competitive with their hardware and software, because for a lot of customers, big/high res screen = premium phone – not to mention that android 4.4 is pretty nice.
Based on this article, I’m going to hold on to my four-year-old iPhone 4 for a few more months.
Architect is not a verb, it’s a noun. Please use design instead. Change “…unless Apple re-architects the layout…” to “…unless Apple redesigns the layout…”
The entire point of a ‘retina’ display is that you cannot see the pixels anymore at the viewing distance a phone is used at. So images do not become sharper and text does not become more clear going to a higher pixel density after you’ve already surpassed the eyes ability to perceive pixels.
This is the trap that Android phones have fallen into. They’ve gone to screen densities to ‘one up’ Apple in a specs race, but in reality you cannot see the additional pixel density and having those extra pixels wastes cpu, memory and battery life. Which is another way of saying once you surpass the eyes ability to perceive pixels, there’s actually a downside to going more dense, it’s worse than having 0 benefit.
So my hope is that Apple keeps the current phones pixel density and just adds pixels to accommodate the new size. All this talk of them needing 1080p is ludicrous. Either the human eye can see the pixels or it cannot. If it can still see pixels then for sure, up the pixel density. But once no one can see the pixels anymore it’s a waste. It’s like arguing over which is prettier, infra red or ultra violet, since neither can be perceive by humans it’s a poor use of time.
Let Android users brag about 500 pixels per inch; their phone will be half an inch thick, weigh 7 lbs, and still have crap battery life.
Thank you for the article. I look forward to the next posts, Mark
What is Apple doing for improved battery life on the ip 6. I would be willing to sacrifice a thicker phone for more battery life. My ip4s does not last a full day.
Curious to know how big the new iPhone 6 really is? Check out this article with side-by-side screenshots of the iPhone 5 compared to the new iPhone 6, as well as a simulator tool that allows you to see how different websites/webpages and images will appear within the new iPhone:
| true | true | true |
Apple is preparing to release a new iPhone with a larger screen later this year, and while multiple reports have indicated...
|
2024-10-12 00:00:00
|
2014-05-14 00:00:00
|
article
|
9to5mac.com
|
9To5Mac
| null | null |
|
3,892,953 |
http://www.theverge.com/2012/4/26/2975924/google-updates-spam-filtering-algorithm
|
Google updates its search filtering algorithm, targets SEO violators
|
Evan Rodgers
|
Google has announced an update to its ranking engine in an effort to reduce spam in search results. The update targets specific "black hat" search engine optimization (SEO) techniques, or methods used to increase the visibility of websites without any corresponding increase in content quality or relationship with the search query. Google states that the update has affected three percent of search queries in English, just over three percent of results in German, Chinese, and Arabic languages, and five percent of queries in Polish. This update has a smaller impact than Google's February update, referred to as Panda, which affected 12 percent of queries.
"Keyword stuffing" is specifically mentioned as one of the SEO techniques that the company frowns upon. This involves packing repetitive search terms into low visibility areas of a website in hopes of being picked up by a search engine. However, *Search Engine Land* points to techniques like "linking schemes" and "cloaking" that violate Google's Webmaster Guidelines. The first is the use of organized rings of link spammers that spread unrelated links all over the web, and the second is a sophisticated method of showing search engines a separate, curated site rather than the site available to real users. Sites that employ these tactics will have their visibility penalized, while sites that abide by the search engine's guidelines will gain varying levels of visibility.
*Search Engine Land* also performed a casual side-by-side comparison of Google's new filtering algorithm against results from Microsoft's Bing. Notably, Bing provided more relevant search results in some instances, but Google was found to provide marginally better results for pages that strictly adhere to approved SEO techniques. On the whole, *Search Engine Land* wasn't able to find any significant difference in Google's rankings, which aligns with the stated three percent change in search results.
| true | true | true |
Google updates its ranking algorithm in an effort to reduce spam and provide more relevant search results.
|
2024-10-12 00:00:00
|
2012-04-26 00:00:00
|
article
|
theverge.com
|
The Verge
| null | null |
|
25,151,520 |
https://www.youtube.com/watch?v=60z_hpEAtD8
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,065,938 |
http://www.breezejs.com/home
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,324,576 |
http://www.thestranger.com/seattle/tell-me-more/Content?oid=14330742
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
21,511,019 |
https://www.forbes.com/sites/trevornace/2019/02/28/nasa-says-earth-is-greener-today-than-20-years-ago-thanks-to-china-india/#58849f8d6e13
|
NASA Says Earth Is Greener Today Than 20 Years Ago Thanks To China, India
|
Trevor Nace
|
NASA has some good news: the world is a greener place today than it was 20 years ago. What prompted the change? Well, it appears China and India can take the majority of the credit.
In contrast to the perception of China and India's willingness to overexploit land, water and resources for economic gain, the countries are responsible for the largest greening of the planet in the past two decades. The two most populous countries have implemented ambitious tree planting programs and scaled up their implementation and technology around agriculture.
India continues to break world records in tree planting, with 800,000 Indians planting 50 million trees in just 24 hours.
The recent finding by NASA, published in the journal *Nature Sustainability*, compared satellite data from the mid-1990s to today using high-resolution imagery. Initially, the researchers were unsure what caused the significant uptick in greening around the planet. It was unclear whether a warming planet, increased carbon dioxide (CO2) or a wetter climate could have caused more plants to grow.
After further investigation of the satellite imagery, the researchers found that greening was disproportionately located in China and India. If the greening was primarily a response from climate change and a warming planet, the increased vegetation shouldn't be limited to country borders. In addition, higher latitude regions should become greener faster than lower latitudes as permafrost melts and areas like northern Russia become more habitable.
The map above shows the relative greening (increase in vegetation) and browning (decrease in vegetation) around the globe. As you can see both China and India have significant greening.
The United States sits at number 7 in the total change in vegetation percent by decade. Of course, the chart below can hide where each country started. For example, a country that largely kept their forests and vegetation intact would have little room to increase percent vegetation whereas a country that heavily relied on deforestation would have more room to grow.
NASA used Moderate Resolution Imaging Spectroradiometer (MODIS) to get a detailed picture of Earth's global vegetation through time. The technique provided up to 500-meter resolution for the past two decades.
Both China and India went through phases of large scale deforestation in the 1970s and 80s, clearing old growth forests for urban development, farming and agriculture. However, it is clear that when presented with a problem, humans are incredibly adept at finding a solution. When the focus shifted in the 90s to reducing air and soil pollution and combating climate change the two countries made tremendous shifts in their overall land use.
It is encouraging to see swift and rapid change in governance and land use when presented with a dilemma. It is something that will continue to be a necessary skill in the decades to come.
| true | true | true |
NASA has some good news, the world is a greener place today than it was 20 years ago. What prompted the change? Well, it appears China and India can take the majority of the credit.
|
2024-10-12 00:00:00
|
2019-02-28 00:00:00
|
article
|
forbes.com
|
Forbes
| null | null |
|
16,591,540 |
https://www.humblebundle.com/books/diy-electronics-books
|
Humble Book Bundle: DIY Electronics by Wiley
| null |
This bundle was live from Mar 14, 2018 to Mar 28, 2018 with 46,922 bundles sold, leading to $0 raised for charity.
| true | true | true |
Pay what you want for awesome ebooks and support charity!
|
2024-10-12 00:00:00
|
2024-10-12 00:00:00
|
website
|
humblebundle.com
|
Humble Bundle
| null | null |
|
35,379,216 |
https://techpolicy.press/facebooks-measures-against-rt-and-sputnik-fail-to-address-root-problem-of-information-warfare/
|
Facebook’s Measures Against RT and Sputnik Fail to Address Root Problem of Information Warfare | TechPolicy.Press
|
Fadi Quran
|
# Facebook’s Measures Against RT and Sputnik Fail to Address Root Problem of Information Warfare
Fadi Quran / Mar 21, 2022

## CrowdTangle Data Shows Restrictions on Russian State Media Leave Arabic and Spanish-speaking Users Behind
*Fadi Quran is a campaign director at Avaaz, a global movement to bring people-powered politics to decision-making everywhere.*
In response to the war in Ukraine, the European Commission introduced widespread bans on Russian state-controlled outlets known to spread propaganda and misinformation. Then, Facebook announced that it would restrict access to Russia Today (RT) and Sputnik from users in European countries.
But new research from our team at Avaaz shows that after restrictions in Europe were enforced, interactions on *select* RT and Sputnik Facebook pages actually increased, especially those reaching Arabic and Spanish-speaking users. This demonstrates that territorial restrictions fail to outrun the sheer amplification of content by the social media company’s algorithms, and fail to protect all users globally. Algorithmic reform is the most urgent and effective long-term solution to solving the real threat of information warfare and disinformation.
Avaaz measured the interactions on RT and Sputnik’s Facebook pages globally in the 11-day period *before* restrictions were enforced for select European users (February 17, 2022 to February 27, 2022) and the 11-day period *after* those restrictions were enforced (February 28, 2022 to March 10, 2022). RT’s pages saw a 24% decrease overall (8.7 million interactions to 6.6 million), while Sputnik’s saw a 57% decrease (2.2 million interactions to 966,000).
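The percentages line up with the raw interaction counts Avaaz reports. As a quick back-of-the-envelope check (the snippet below is purely illustrative and is not part of Avaaz's methodology):

```
def percent_change(before, after)
  ((after - before) / before.to_f * 100).round
end

percent_change(8_700_000, 6_600_000)  # => -24  (RT)
percent_change(2_200_000, 966_000)    # => -56  (Sputnik, roughly the reported 57%)
```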
Despite this reduction on these main pages, the following pages saw increases in their interactions within the time period analyzed:
- RT Online (161.27% increase. *RT Online is an Arabic-language page)
- RT Play en Espanol (22.47% increase)
- Sputnik Japan (12.92% increase)
- RT (8.41% increase)
- RT Documentales (5.50% increase. *RT Documentales is a Spanish-language page)
- RT en Espanol (1.00% increase)
We also observed that Facebook introduced other measures to increase friction for accessing RT and Sputnik content. Users see a warning when they encounter RT and Sputnik content on Facebook that notes it is “Russia state-controlled media” and “This link is from a publisher Facebook believes may be partially or wholly under the editorial control of the Russian government.” However, due to a lack of transparency from the platform, it has not been possible to assess the extent to which these interventions have impacted interactions overall.
Not only did interactions on RT’s Arabic and Spanish-language pages increase, RT content promoting pro-Russian government claims about Ukraine spread on the platform in both languages. Facebook has a long track record of failing to protect Arabic and Spanish-speaking users from being exposed to disinformation and information warfare.
Below are screenshots of data obtained through CrowdTangle, a public insights tool owned and operated by Facebook:
**Interactions on RT’s Facebook Pages**
**PRE-Restrictions (Feb. 17 - Feb. 27, 2022) / POST-Restrictions (Feb. 28 - Mar. 10, 2022)**
**Interactions on Sputnik’s Facebook Pages**
**PRE-Restrictions (Feb. 17 - Feb. 27, 2022) / POST-Restrictions (Feb. 28 - Mar. 10, 2022)**
**Other Russian state-controlled outlets remain unrestricted on Facebook**
Another clear demonstration of the ineffectiveness of restricting some outlets in the fight against the information war is that other Russian-controlled outlets are still free to game the social media algorithm. For example, to date, not included in Facebook’s restrictions is Russian state-controlled outlet Ruptly.
Ruptly, which to date has over 1.3 million followers on Facebook, describes itself as an “international video news agency headquartered in Berlin. Part of RT media family." The outlet has been identified by Facebook as a state-controlled outlet, and during Avaaz’s analysis, posted content debunked by fact-checking organization Correctiv. Here’s one example:
On this **RUPTLY** video, the post caption includes a comment from Russia’s foreign minister, Sergeĭ Viktorovich Lavrov, saying, “Russia does not intend to infringe on the interests of citizens of Ukraine.” The video of Lavrov above garnered over 4,300 views via this post. European fact-checkers have noted that the Russian Ministry of Defense is using state-controlled media to “spread the word that Russia’s attacks on Ukraine are not aimed at cities and that Ukrainian civilians are not in danger.” Correctiv notes that reports refute these claims.
**Reform the Business Model, Mandate Transparency**
These piecemeal, reactive, and ultimately opaque and ineffective restrictions on RT and Sputnik risk setting a dangerous precedent of restricting freedom of expression instead of taking more fundamental and transparent action to contain harms.
The war in Ukraine has global implications. As this research shows, extensions of whack-a-mole-style restrictions targeting these specific outlets are problematic. Detoxing social media platforms’ algorithms offers a more effective policy solution that would protect all users globally from repeat disinformation offenders while defending free expression. Such reform must end the amplification and monetization of disinformation while providing full transparency for researchers and analysts investigating the platforms.
While Ukrainian citizens flee shelling and Russian protesters are thrown in jail, Facebook is failing to address its own business culpability in stopping information warfare at its root. To disarm online disinformation and information warfare effectively, Facebook and other platforms need to fully detox their algorithms and redesign their business models to protect people everywhere. Whistleblowers confirm that platforms can do it. If they won’t, regulators need to further step up their efforts to make them.
| true | true | true |
CrowdTangle Data Shows Restrictions on Russian State Media Leave Arabic and Spanish-speaking Users Behind
|
2024-10-12 00:00:00
|
2023-10-24 00:00:00
|
article
|
techpolicy.press
|
Tech Policy Press
| null | null |
|
5,661,777 |
http://slid.es/
|
Create and Share Presentations for Free | Slides
| null |
## Slides AI
Leverage the power of machine learning to improve your writing and generate content.
Powered by
## Looking for inspiration?
Try our slide generator.
## Meet your new favorite editor
Slides is a suite of modern presentation tools, available right from your browser. Unlike traditional presentation software, there's no need to download anything. Working with collaborators to make an awe-inspiring presentation has never been easier.
## Present like never before
In a meeting, conference call or on stage? With Live Present Mode, you control what your viewers see. You can even use your phone as a remote control with direct access to your speaker notes.
## Work better, together
Slides for Teams makes your whole team work better. It's a secure, shared place for everything your team needs to do their best work, and includes:
- A customizable editor with your company's assets
- Media library with reusable images and videos
- Team-wide collaboration and feedback
- A theme editor that ensures everyone stays on brand
## The best tool for developers
Slides is the only presentation tool with a fully open source format. Your presentations are HTML, CSS and JavaScript. Unlock advanced features, such as:
- Access to your presentation's full source code
- Rich customization options using CSS
- Export a copy and present offline or store on your own web host
## Join over 2 million creators
Slides is used daily by professionals all over the world to speak at conferences, share pitches, school work, portfolios and so much more. Join today and try it out for free.
Start presenting
| true | true | true |
Slides is a place for creating, presenting and sharing modern presentations. Sign up for free.
|
2024-10-12 00:00:00
|
2024-01-01 00:00:00
|
website
|
slides.com
|
Slides
| null | null |
|
6,761,361 |
http://sam-koblenski.blogspot.com/2013/11/tech-book-face-off-ruby-programming.html
|
Lucid Mesh
|
Sam Koblenski
|
*The Ruby Programming Language* vs. *Eloquent Ruby*
#### The Ruby Programming Language
This book was written by the same David Flanagan who wrote the excellent *JavaScript: The Definitive Guide* that I reviewed months ago, and it was co-authored by the inventor of Ruby, Yukihiro "Matz" Matsumoto. It was written in a similar no-frills, straight-to-the-point style that I once again enjoyed and appreciated immensely. When learning the ropes in a new programming language, I want to see a well organized presentation of the features with clear, concise examples and crisp, direct explanations of those features. No games or gimmicks, please; just the facts. This book delivers on exactly those expectations.
Overall, I would say the book was very easy to read and understand. That could just as easily be said about the Ruby language itself, of course. The flexible syntax and the ability to write programs that read more like natural language than most other programming languages is really appealing. While playing around with solving some basic algorithmic problems in Ruby, I was impressed with how easy it was to express the solutions in the language in nearly the same way that I was thinking about solving them in my head. And Flanagan and Matz did a good job of showing all of the language features in a way that I could pick up quickly and use right away. They methodically explained all of the features and concepts in a nice, logical order.
That is not to say that there weren't a few rough spots. Things got a bit hairy with enumerators and external iterators, and then again with class variables and class instance variables. This was only partially because these concepts didn't click for me right away. After backing up and reviewing those sections, I understood the syntax and what was going on well enough. The main issue with enumerators is that I don't really see when you would want to use them, but I suppose being aware that they exist is the important thing at this point. Beyond that it's a case of "you'll know it when you need it."
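The external iterator bit, at least, is easy to demo. Here's a minimal sketch (my own toy example, not one from the book) of what an Enumerator buys you when you skip the block:

```
letters = %w[a b c].each   # calling each with no block hands back an Enumerator
letters.next               # => "a"
letters.next               # => "b"
letters.next               # => "c"
```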
With class variables and class instance variables the problem was more of knowing when to use one instead of the other, and Flanagan and Matz didn't get into that much at all. It would have been out of character with the rest of the book, but that's alright because the subject of when and how to use Ruby's language features was pretty much the next book's reason for being.
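To make the difference concrete, here's a minimal sketch (again my own example, not the book's) of how the two behave:

```
class Animal
  @@count = 0   # class variable: one copy shared across the whole hierarchy
  @count  = 0   # class instance variable: belongs to the Animal class object only

  def self.counts
    [@@count, @count]
  end
end

class Dog < Animal
  @@count = 5   # overwrites the single shared class variable
  @count  = 10  # creates Dog's own class instance variable; Animal's is untouched
end

Animal.counts  # => [5, 0]
```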
As for *The Ruby Programming Language*, it was a great introduction and overview of Ruby for an experienced programmer. I highly recommend it to any programmer ready to learn a new and fun language.
#### Eloquent Ruby
Russ Olsen goes in a completely different direction with *Eloquent Ruby*. I'm not sure if you would be able to learn Ruby if this book was your only resource. At the very least you would need to refer to the documentation quite a bit when you were starting out. But this book will teach you how to use Ruby to solve real problems and why you would write programs certain ways to be more effective. In short Olsen teaches you how to write idiomatic Ruby.
I thoroughly enjoyed reading this book. I don't think I've had this much fun reading a programming language book since, well, ever. It was like reading *The Pragmatic Programmer*, but for a programming language instead of general programming practices and processes. Olsen was conversational and engaging, and he really helped me understand how to write good Ruby code. He covered everything from the use of different control structures to writing specs, the use of class instance variables, the various uses of method_missing, creating self-modifying classes, and implementing DSLs (domain-specific languages).
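Of those, method_missing is probably the most eye-opening if you're coming from a static language. A tiny taste (my own toy example, not Olsen's):

```
class NullLogger
  # Swallow any message -- logger.info, logger.warn, whatever -- without blowing up
  def method_missing(name, *args)
    nil
  end

  def respond_to_missing?(name, include_private = false)
    true
  end
end

NullLogger.new.info("this goes nowhere")  # => nil, no NoMethodError raised
```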
The book is packed with all kinds of useful information in a nice, readable format. Olsen starts out with more basic, general concepts like code formatting and comment style and moves progressively into more complex topics, culminating in a series of excellent chapters on metaprogramming and DSLs. Each chapter addresses one self-contained topic about Ruby programming and includes sections on how to avoid common errors and pitfalls, and what real implementations of the topic under discussion look like "in the wild." This format ends up working really well, and I found the pace extremely easy to follow. It was very understandable, and I learned a ton.
I do have a few quibbles about parts of the book, though. Sometimes his reasons for things like commenting lightly or writing short methods seemed a little weak. It's not that I disagreed with his recommendations. In fact, I mostly agree with him, especially about restricting comments and letting the code speak for itself. But I thought he drew out his arguments a bit too long and included points that ended up sounding like, "You should do it this way because that's the way the Ruby community does it." I found myself disagreeing with some these minor points even though I agreed with the overall premise, and that was distracting. His arguments would have been stronger if he had tightened them up a bit and left out the weaker claims.
Another minor annoyance was the footnotes. I really don't understand why he felt the need to include these when they were nearly entirely useless to the reader. Every time I jumped to a footnote hoping for a little extra insight, and instead was confronted with flippant comments like "Try as we might" or "Yet," (seriously!) I was reminded that I really shouldn't be wasting my time with these footnotes. I still read them all because I couldn't help myself, but I guarantee that none of them were essential. Sure, there were a few clarifying points, but if they were so important to include, they should have been integrated into the text instead of cordoned off as footnotes. However, I didn't see any of them as critical, and you're really not missing anything by skipping them. Trust me, I read them so you don't have to.
Thankfully, those two minor drawbacks can be easily ignored, and the book doesn't really suffer for them. There is so much good material in there that I highly recommend *Eloquent Ruby* to every Rubyist. It will help take your programming to the next level, and give you plenty of ideas for how to write better Ruby code.
#### Ruby: The Language
So what about the Ruby language itself? I have to say that it is great fun to program in it. For someone who has spent more than a decade writing mostly in C++, writing in Ruby gave me the distinct feeling of being set free. It also pleasantly reminded me of my early programming experiences with Logo with many moments of youthful excitement and awe - as in, "wow, you mean you can really do that? Awesome!"
I'd like to do a rundown of some of the basic features of Ruby compared to C++, similar to what I did in my JavaScript book review, to show you how Ruby compresses code. Let's start out again with a few simple variable declarations in C++:
```
int samples[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
int sum = 0;
double mean = 0.0;
```

And the same declarations in Ruby:

```
samples = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
sum = 0
mean = 0.0
```

Nice! Ruby doesn't even need the 'var' keyword in its variable declarations, like JavaScript does. On a variable assignment, it searches for the variable symbol, and if it doesn't find it, Ruby conjures it into existence right there. So easy.
Now let's go a bit further and do the calculations hinted at by those variable names. First in C++:
```
int sz = sizeof(samples)/sizeof(samples[0]);
for(int i = 0; i < sz; i++) {
  sum += samples[i];
}
mean = sum/static_cast<float>(sz);
```

And then in Ruby:

```
sum = samples.inject(:+)
mean = sum/samples.size.to_f
```

Seriously, that is not even funny. Look at how clean that is! You can actually inject an operator into a collection and it just works.
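As an aside, on newer Ruby versions (2.4+) you can also just write `samples.sum`, which amounts to the same thing as the `inject` call and reads even better.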
Alright, let's say you want to create a quick object to hold some data on your pets. In C++ you'd have to at least create a struct and instantiate it. That would look something like this:
```
struct Pet {
  string name;
  string type;
  string breed;
  int age;
  double weight;
};

Pet myPet = {"Twitch", "cat", "Siamese", 10, 8.4};
```

And then there's Ruby:

```
myPet = {
  name: "Twitch",
  type: "cat",
  breed: "Siamese",
  age: 10,
  weight: 8.4
}
```

This isn't an object literal like it is in JavaScript. It's actually a hash, but since objects are implemented as hashes in JavaScript, this is basically the same thing in Ruby. Now let's say we wanted to make the pet more permanent as a class. In C++ you might do this:
```
class Pet {
  private:
    string _name;
    string _type;
    string _breed;
    int _age;
    double _weight;
  public:
    Pet(string name, string type, string breed,
        int age, double weight) {
      _name = name;
      _type = type;
      _breed = breed;
      _age = age;
      _weight = weight;
    }
};

Pet* myPet = new Pet("Twitch", "cat", "Siamese", 10, 8.4);
```

You might want to include setter and getter methods for any of the members (member variables) that you'd want to change later, but I'll omit those for brevity's sake. I'm also not including any error checking code that would be required, like making sure the age and weight are within reasonable bounds. Now here's the equivalent Ruby:

```
class Pet
  def initialize(name, type, breed, age, weight)
    @name = name
    @type = type
    @breed = breed
    @age = age
    @weight = weight
  end
end

myPet = Pet.new("Twitch", "cat", "Siamese", 10, 8.4)
```

The '@' symbol marks a variable as an instance variable, and they come into being on the spot, just like other variables do. The admittedly small amount of Ruby I have shown here is fairly comparable to JavaScript, but as you get further into Ruby, you find that it goes well beyond JavaScript in its ability to express programs cleanly and compactly, and C++ is left in the dust. Yes, C++ definitely has its uses, and dynamic languages can't hope to compete with it on speed. But Ruby has an enjoyment factor in its expressiveness that clearly outshines C++.
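As a quick aside, Ruby even collapses those omitted setters and getters down to a single declaration, so adding them costs next to nothing:

```
class Pet
  # generates name, name=, age, age=, weight, and weight= methods
  attr_accessor :name, :age, :weight
end
```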
#### So About Those Books
It's difficult to say whether *The Ruby Programming Language* or *Eloquent Ruby* is a better book. They are both excellent and serve distinctly different purposes. If you want to get started in Ruby quickly, and you already have a programming background, then you can read the first half of *The Ruby Programming Language* and be good to go. If you're already familiar with Ruby, but you're unsure of the best way to use certain language features or want to learn more idiomatic ways of solving problems in Ruby, then *Eloquent Ruby* is the book for you.

*The Ruby Programming Language* makes an excellent reference when you need to look something up, and *Eloquent Ruby* is a great read for when you want to improve your programming style. They each have their strengths and are targeted for different audiences, or the same audience at different points on the Ruby learning curve. They make a great combination, but one thing neither of them is good for is the novice programmer. If you're starting out learning to program, I think Ruby is a great first language to learn because it is so clean, flexible, and accessible, but these two books are not going to introduce Ruby in a way that a beginner can use. For the aspiring programmer, you'll have to find resources that introduce things at a slower and more careful pace.
For the rest of you programmers out there with a desire to learn or improve programming in Ruby, these two books are definitely worth checking out. They've certainly earned a spot in my library.
| true | true | true | null |
2024-10-12 00:00:00
|
2013-11-18 00:00:00
| null | null | null | null | null | null |
39,889,601 |
https://libera.cat/
|
Libera Chat
| null |
# Retiring CertFP Expiration Verification
Here is some good news for folks who use CertFP to log in to `NickServ`: we have rolled out a change that means SaslServ will no longer reject expired certificates when used for identifying to accounts.
Why are we doing this? We don’t have a rotation policy on passwords, which are generally less secure, so it makes no sense for certificates to have one. Meanwhile, certificate expiries are quite disruptive, particularly for folks who use our tor hidden service which does not allow other forms of authentication. Respecting the expiry of the certificate provides no benefit but does cause annoyance for both users and staff.
We do still recommend that you practice good certificate hygiene, such as cycling your certificates, using unique certificates for each network, and keeping your `CERT LIST` clean.
| true | true | true |
A next-generation IRC network for FOSS projects collaboration!
|
2024-10-12 00:00:00
|
2024-07-09 00:00:00
| null | null |
libera.chat
|
Libera Chat
| null | null |
3,565,149 |
http://www.stackmob.com/2012/02/crawford-post-part-ii/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |